From: "Alice Ferrazzi" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: /
Date: Wed, 10 Feb 2021 09:51:35 +0000 (UTC)
Message-ID: <1612950675.6c84e9a9d87af7d00d731b3a4f131091c7393002.alicef@gentoo>
commit: 6c84e9a9d87af7d00d731b3a4f131091c7393002
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 10 09:51:09 2021 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 10 09:51:15 2021 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6c84e9a9
Linux patch 5.10.15
Signed-off-by: Alice Ferrazzi <alicef <AT> gentoo.org>
0000_README | 4 +
1014_linux-5.10.15.patch | 4352 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4356 insertions(+)
diff --git a/0000_README b/0000_README
index 7375e82..7d03d9d 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-5.10.14.patch
From: http://www.kernel.org
Desc: Linux 5.10.14
+Patch: 1014_linux-5.10.15.patch
+From: http://www.kernel.org
+Desc: Linux 5.10.15
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-5.10.15.patch b/1014_linux-5.10.15.patch
new file mode 100644
index 0000000..28991a6
--- /dev/null
+++ b/1014_linux-5.10.15.patch
@@ -0,0 +1,4352 @@
+diff --git a/Documentation/filesystems/overlayfs.rst b/Documentation/filesystems/overlayfs.rst
+index 580ab9a0fe319..137afeb3f581c 100644
+--- a/Documentation/filesystems/overlayfs.rst
++++ b/Documentation/filesystems/overlayfs.rst
+@@ -575,6 +575,14 @@ without significant effort.
+ The advantage of mounting with the "volatile" option is that all forms of
+ sync calls to the upper filesystem are omitted.
+
++In order to avoid giving a false sense of safety, the syncfs (and fsync)
++semantics of volatile mounts are slightly different from those of the rest of
++VFS. If any writeback error occurs on the upperdir's filesystem after a
++volatile mount takes place, all sync functions will return an error. Once this
++condition is reached, the filesystem will not recover, and every subsequent sync
++call will return an error, even if the upperdir has not experienced a new error
++since the last sync call.
++
+ When overlay is mounted with "volatile" option, the directory
+ "$workdir/work/incompat/volatile" is created. During next mount, overlay
+ checks for this directory and refuses to mount if present. This is a strong
+diff --git a/Makefile b/Makefile
+index bb3770be9779d..b62d2d4ea7b02 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+
+@@ -812,10 +812,12 @@ KBUILD_CFLAGS += -ftrivial-auto-var-init=zero
+ KBUILD_CFLAGS += -enable-trivial-auto-var-init-zero-knowing-it-will-be-removed-from-clang
+ endif
+
++DEBUG_CFLAGS :=
++
+ # Workaround for GCC versions < 5.0
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=61801
+ ifdef CONFIG_CC_IS_GCC
+-DEBUG_CFLAGS := $(call cc-ifversion, -lt, 0500, $(call cc-option, -fno-var-tracking-assignments))
++DEBUG_CFLAGS += $(call cc-ifversion, -lt, 0500, $(call cc-option, -fno-var-tracking-assignments))
+ endif
+
+ ifdef CONFIG_DEBUG_INFO
+@@ -948,12 +950,6 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=designated-init)
+ # change __FILE__ to the relative path from the srctree
+ KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
+
+-# ensure -fcf-protection is disabled when using retpoline as it is
+-# incompatible with -mindirect-branch=thunk-extern
+-ifdef CONFIG_RETPOLINE
+-KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
+-endif
+-
+ # include additional Makefiles when needed
+ include-y := scripts/Makefile.extrawarn
+ include-$(CONFIG_KASAN) += scripts/Makefile.kasan
+diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi
+index c8745bc800f71..7b8c18e6605e4 100644
+--- a/arch/arm/boot/dts/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/omap3-gta04.dtsi
+@@ -114,7 +114,7 @@
+ gpio-sck = <&gpio1 12 GPIO_ACTIVE_HIGH>;
+ gpio-miso = <&gpio1 18 GPIO_ACTIVE_HIGH>;
+ gpio-mosi = <&gpio1 20 GPIO_ACTIVE_HIGH>;
+- cs-gpios = <&gpio1 19 GPIO_ACTIVE_HIGH>;
++ cs-gpios = <&gpio1 19 GPIO_ACTIVE_LOW>;
+ num-chipselects = <1>;
+
+ /* lcd panel */
+@@ -124,7 +124,6 @@
+ spi-max-frequency = <100000>;
+ spi-cpol;
+ spi-cpha;
+- spi-cs-high;
+
+ backlight= <&backlight>;
+ label = "lcd";
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi
+index 62ab23824a3e7..e4d287d994214 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-drc02.dtsi
+@@ -35,7 +35,7 @@
+ */
+ rs485-rx-en {
+ gpio-hog;
+- gpios = <8 GPIO_ACTIVE_HIGH>;
++ gpios = <8 0>;
+ output-low;
+ line-name = "rs485-rx-en";
+ };
+@@ -63,7 +63,7 @@
+ */
+ usb-hub {
+ gpio-hog;
+- gpios = <2 GPIO_ACTIVE_HIGH>;
++ gpios = <2 0>;
+ output-high;
+ line-name = "usb-hub-reset";
+ };
+@@ -87,6 +87,12 @@
+ };
+ };
+
++&i2c4 {
++ touchscreen@49 {
++ status = "disabled";
++ };
++};
++
+ &i2c5 { /* TP7/TP8 */
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c5_pins_a>;
+@@ -104,7 +110,7 @@
+ * are used for on-board microSD slot instead.
+ */
+ /delete-property/broken-cd;
+- cd-gpios = <&gpioi 10 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
++ cd-gpios = <&gpioi 10 GPIO_ACTIVE_HIGH>;
+ disable-wp;
+ };
+
+diff --git a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+index f796a6150313e..2d027dafb7bce 100644
+--- a/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/stm32mp15xx-dhcom-som.dtsi
+@@ -353,7 +353,8 @@
+ pinctrl-0 = <&sdmmc1_b4_pins_a &sdmmc1_dir_pins_a>;
+ pinctrl-1 = <&sdmmc1_b4_od_pins_a &sdmmc1_dir_pins_a>;
+ pinctrl-2 = <&sdmmc1_b4_sleep_pins_a &sdmmc1_dir_sleep_pins_a>;
+- broken-cd;
++ cd-gpios = <&gpiog 1 (GPIO_ACTIVE_LOW | GPIO_PULL_UP)>;
++ disable-wp;
+ st,sig-dir;
+ st,neg-edge;
+ st,use-ckin;
+diff --git a/arch/arm/boot/dts/sun7i-a20-bananapro.dts b/arch/arm/boot/dts/sun7i-a20-bananapro.dts
+index 01ccff756996d..5740f9442705c 100644
+--- a/arch/arm/boot/dts/sun7i-a20-bananapro.dts
++++ b/arch/arm/boot/dts/sun7i-a20-bananapro.dts
+@@ -110,7 +110,7 @@
+ pinctrl-names = "default";
+ pinctrl-0 = <&gmac_rgmii_pins>;
+ phy-handle = <&phy1>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-id";
+ phy-supply = <&reg_gmac_3v3>;
+ status = "okay";
+ };
+diff --git a/arch/arm/include/debug/tegra.S b/arch/arm/include/debug/tegra.S
+index 98daa7f483148..7454480d084b2 100644
+--- a/arch/arm/include/debug/tegra.S
++++ b/arch/arm/include/debug/tegra.S
+@@ -149,7 +149,34 @@
+
+ .align
+ 99: .word .
++#if defined(ZIMAGE)
++ .word . + 4
++/*
++ * Storage for the state maintained by the macro.
++ *
++ * In the kernel proper, this data is located in arch/arm/mach-tegra/tegra.c.
++ * That's because this header is included from multiple files, and we only
++ * want a single copy of the data. In particular, the UART probing code above
++ * assumes it's running using physical addresses. This is true when this file
++ * is included from head.o, but not when included from debug.o. So we need
++ * to share the probe results between the two copies, rather than having
++ * to re-run the probing again later.
++ *
++ * In the decompressor, we put the storage right here, since common.c
++ * isn't included in the decompressor build. This storage data gets put in
++ * .text even though it's really data, since .data is discarded from the
++ * decompressor. Luckily, .text is writeable in the decompressor, unless
++ * CONFIG_ZBOOT_ROM. That dependency is handled in arch/arm/Kconfig.debug.
++ */
++ /* Debug UART initialization required */
++ .word 1
++ /* Debug UART physical address */
++ .word 0
++ /* Debug UART virtual address */
++ .word 0
++#else
+ .word tegra_uart_config
++#endif
+ .ltorg
+
+ /* Load previously selected UART address */
+@@ -189,30 +216,3 @@
+
+ .macro waituarttxrdy,rd,rx
+ .endm
+-
+-/*
+- * Storage for the state maintained by the macros above.
+- *
+- * In the kernel proper, this data is located in arch/arm/mach-tegra/tegra.c.
+- * That's because this header is included from multiple files, and we only
+- * want a single copy of the data. In particular, the UART probing code above
+- * assumes it's running using physical addresses. This is true when this file
+- * is included from head.o, but not when included from debug.o. So we need
+- * to share the probe results between the two copies, rather than having
+- * to re-run the probing again later.
+- *
+- * In the decompressor, we put the symbol/storage right here, since common.c
+- * isn't included in the decompressor build. This symbol gets put in .text
+- * even though it's really data, since .data is discarded from the
+- * decompressor. Luckily, .text is writeable in the decompressor, unless
+- * CONFIG_ZBOOT_ROM. That dependency is handled in arch/arm/Kconfig.debug.
+- */
+-#if defined(ZIMAGE)
+-tegra_uart_config:
+- /* Debug UART initialization required */
+- .word 1
+- /* Debug UART physical address */
+- .word 0
+- /* Debug UART virtual address */
+- .word 0
+-#endif
+diff --git a/arch/arm/mach-footbridge/dc21285.c b/arch/arm/mach-footbridge/dc21285.c
+index 416462e3f5d63..f9713dc561cf7 100644
+--- a/arch/arm/mach-footbridge/dc21285.c
++++ b/arch/arm/mach-footbridge/dc21285.c
+@@ -65,15 +65,15 @@ dc21285_read_config(struct pci_bus *bus, unsigned int devfn, int where,
+ if (addr)
+ switch (size) {
+ case 1:
+- asm("ldrb %0, [%1, %2]"
++ asm volatile("ldrb %0, [%1, %2]"
+ : "=r" (v) : "r" (addr), "r" (where) : "cc");
+ break;
+ case 2:
+- asm("ldrh %0, [%1, %2]"
++ asm volatile("ldrh %0, [%1, %2]"
+ : "=r" (v) : "r" (addr), "r" (where) : "cc");
+ break;
+ case 4:
+- asm("ldr %0, [%1, %2]"
++ asm volatile("ldr %0, [%1, %2]"
+ : "=r" (v) : "r" (addr), "r" (where) : "cc");
+ break;
+ }
+@@ -99,17 +99,17 @@ dc21285_write_config(struct pci_bus *bus, unsigned int devfn, int where,
+ if (addr)
+ switch (size) {
+ case 1:
+- asm("strb %0, [%1, %2]"
++ asm volatile("strb %0, [%1, %2]"
+ : : "r" (value), "r" (addr), "r" (where)
+ : "cc");
+ break;
+ case 2:
+- asm("strh %0, [%1, %2]"
++ asm volatile("strh %0, [%1, %2]"
+ : : "r" (value), "r" (addr), "r" (where)
+ : "cc");
+ break;
+ case 4:
+- asm("str %0, [%1, %2]"
++ asm volatile("str %0, [%1, %2]"
+ : : "r" (value), "r" (addr), "r" (where)
+ : "cc");
+ break;
+diff --git a/arch/arm/mach-omap1/board-osk.c b/arch/arm/mach-omap1/board-osk.c
+index a720259099edf..0a4c9b0b13b0c 100644
+--- a/arch/arm/mach-omap1/board-osk.c
++++ b/arch/arm/mach-omap1/board-osk.c
+@@ -203,6 +203,8 @@ static int osk_tps_setup(struct i2c_client *client, void *context)
+ */
+ gpio_request(OSK_TPS_GPIO_USB_PWR_EN, "n_vbus_en");
+ gpio_direction_output(OSK_TPS_GPIO_USB_PWR_EN, 1);
++ /* Free the GPIO again as the driver will request it */
++ gpio_free(OSK_TPS_GPIO_USB_PWR_EN);
+
+ /* Set GPIO 2 high so LED D3 is off by default */
+ tps65010_set_gpio_out_value(GPIO2, HIGH);
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index 8514fe6a275a3..a6127002573bd 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -2384,7 +2384,7 @@
+ interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
+ dr_mode = "host";
+ snps,dis_u2_susphy_quirk;
+- snps,quirk-frame-length-adjustment;
++ snps,quirk-frame-length-adjustment = <0x20>;
+ snps,parkmode-disable-ss-quirk;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts
+index cf5a98f0e47c8..a712273c905af 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-odroid-c4.dts
+@@ -52,7 +52,7 @@
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+
+- gpio = <&gpio_ao GPIOAO_3 GPIO_ACTIVE_HIGH>;
++ gpio = <&gpio_ao GPIOAO_3 GPIO_OPEN_DRAIN>;
+ enable-active-high;
+ regulator-always-on;
+ };
+diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+index 1fa39bacff4b3..0b4545012d43e 100644
+--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a.dtsi
+@@ -385,7 +385,7 @@
+
+ dcfg: dcfg@1ee0000 {
+ compatible = "fsl,ls1046a-dcfg", "syscon";
+- reg = <0x0 0x1ee0000 0x0 0x10000>;
++ reg = <0x0 0x1ee0000 0x0 0x1000>;
+ big-endian;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+index 76a8c996d497f..d70aae77a6e84 100644
+--- a/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
++++ b/arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dts
+@@ -263,6 +263,8 @@
+ &i2c3 {
+ status = "okay";
+ clock-frequency = <400000>;
++ /* Overwrite pinctrl-0 from sdm845.dtsi */
++ pinctrl-0 = <&qup_i2c3_default &i2c3_hid_active>;
+
+ tsel: hid@15 {
+ compatible = "hid-over-i2c";
+@@ -270,9 +272,6 @@
+ hid-descr-addr = <0x1>;
+
+ interrupts-extended = <&tlmm 37 IRQ_TYPE_LEVEL_HIGH>;
+-
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c3_hid_active>;
+ };
+
+ tsc2: hid@2c {
+@@ -281,11 +280,6 @@
+ hid-descr-addr = <0x20>;
+
+ interrupts-extended = <&tlmm 37 IRQ_TYPE_LEVEL_HIGH>;
+-
+- pinctrl-names = "default";
+- pinctrl-0 = <&i2c3_hid_active>;
+-
+- status = "disabled";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/px30.dtsi b/arch/arm64/boot/dts/rockchip/px30.dtsi
+index 2695ea8cda142..64193292d26c3 100644
+--- a/arch/arm64/boot/dts/rockchip/px30.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30.dtsi
+@@ -1097,7 +1097,7 @@
+ vopl_mmu: iommu@ff470f00 {
+ compatible = "rockchip,iommu";
+ reg = <0x0 0xff470f00 0x0 0x100>;
+- interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "vopl_mmu";
+ clocks = <&cru ACLK_VOPL>, <&cru HCLK_VOPL>;
+ clock-names = "aclk", "iface";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+index 06d48338c8362..219b7507a10fb 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+@@ -790,7 +790,6 @@
+ &pcie0 {
+ bus-scan-delay-ms = <1000>;
+ ep-gpios = <&gpio2 RK_PD4 GPIO_ACTIVE_HIGH>;
+- max-link-speed = <2>;
+ num-lanes = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pcie_clkreqn_cpm>;
+diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
+index 234a21d26f674..3474286e59db7 100644
+--- a/arch/riscv/Kconfig
++++ b/arch/riscv/Kconfig
+@@ -252,8 +252,10 @@ choice
+ default MAXPHYSMEM_128GB if 64BIT && CMODEL_MEDANY
+
+ config MAXPHYSMEM_1GB
++ depends on 32BIT
+ bool "1GiB"
+ config MAXPHYSMEM_2GB
++ depends on 64BIT && CMODEL_MEDLOW
+ bool "2GiB"
+ config MAXPHYSMEM_128GB
+ depends on 64BIT && CMODEL_MEDANY
+diff --git a/arch/um/drivers/virtio_uml.c b/arch/um/drivers/virtio_uml.c
+index a6c4bb6c2c012..c17b8e5ec1869 100644
+--- a/arch/um/drivers/virtio_uml.c
++++ b/arch/um/drivers/virtio_uml.c
+@@ -1083,6 +1083,7 @@ static void virtio_uml_release_dev(struct device *d)
+ }
+
+ os_close_file(vu_dev->sock);
++ kfree(vu_dev);
+ }
+
+ /* Platform device */
+@@ -1096,7 +1097,7 @@ static int virtio_uml_probe(struct platform_device *pdev)
+ if (!pdata)
+ return -EINVAL;
+
+- vu_dev = devm_kzalloc(&pdev->dev, sizeof(*vu_dev), GFP_KERNEL);
++ vu_dev = kzalloc(sizeof(*vu_dev), GFP_KERNEL);
+ if (!vu_dev)
+ return -ENOMEM;
+
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 1bf21746f4cea..6a7efa78eba22 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -127,6 +127,9 @@ else
+
+ KBUILD_CFLAGS += -mno-red-zone
+ KBUILD_CFLAGS += -mcmodel=kernel
++
++ # Intel CET isn't enabled in the kernel
++ KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none)
+ endif
+
+ ifdef CONFIG_X86_X32
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 57af25cb44f63..51abd44ab8c2d 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -197,16 +197,6 @@ static inline bool apic_needs_pit(void) { return true; }
+ #endif /* !CONFIG_X86_LOCAL_APIC */
+
+ #ifdef CONFIG_X86_X2APIC
+-/*
+- * Make previous memory operations globally visible before
+- * sending the IPI through x2apic wrmsr. We need a serializing instruction or
+- * mfence for this.
+- */
+-static inline void x2apic_wrmsr_fence(void)
+-{
+- asm volatile("mfence" : : : "memory");
+-}
+-
+ static inline void native_apic_msr_write(u32 reg, u32 v)
+ {
+ if (reg == APIC_DFR || reg == APIC_ID || reg == APIC_LDR ||
+diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
+index 7f828fe497978..4819d5e5a3353 100644
+--- a/arch/x86/include/asm/barrier.h
++++ b/arch/x86/include/asm/barrier.h
+@@ -84,4 +84,22 @@ do { \
+
+ #include <asm-generic/barrier.h>
+
++/*
++ * Make previous memory operations globally visible before
++ * a WRMSR.
++ *
++ * MFENCE makes writes visible, but only affects load/store
++ * instructions. WRMSR is unfortunately not a load/store
++ * instruction and is unaffected by MFENCE. The LFENCE ensures
++ * that the WRMSR is not reordered.
++ *
++ * Most WRMSRs are full serializing instructions themselves and
++ * do not require this barrier. This is only required for the
++ * IA32_TSC_DEADLINE and X2APIC MSRs.
++ */
++static inline void weak_wrmsr_fence(void)
++{
++ asm volatile("mfence; lfence" : : : "memory");
++}
++
+ #endif /* _ASM_X86_BARRIER_H */
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 113f6ca7b8284..f4c0514fc5108 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -41,6 +41,7 @@
+ #include <asm/perf_event.h>
+ #include <asm/x86_init.h>
+ #include <linux/atomic.h>
++#include <asm/barrier.h>
+ #include <asm/mpspec.h>
+ #include <asm/i8259.h>
+ #include <asm/proto.h>
+@@ -472,6 +473,9 @@ static int lapic_next_deadline(unsigned long delta,
+ {
+ u64 tsc;
+
++ /* This MSR is special and needs a special fence: */
++ weak_wrmsr_fence();
++
+ tsc = rdtsc();
+ wrmsrl(MSR_IA32_TSC_DEADLINE, tsc + (((u64) delta) * TSC_DIVISOR));
+ return 0;
+diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
+index b0889c48a2ac5..7eec3c154fa24 100644
+--- a/arch/x86/kernel/apic/x2apic_cluster.c
++++ b/arch/x86/kernel/apic/x2apic_cluster.c
+@@ -29,7 +29,8 @@ static void x2apic_send_IPI(int cpu, int vector)
+ {
+ u32 dest = per_cpu(x86_cpu_to_logical_apicid, cpu);
+
+- x2apic_wrmsr_fence();
++ /* x2apic MSRs are special and need a special fence: */
++ weak_wrmsr_fence();
+ __x2apic_send_IPI_dest(dest, vector, APIC_DEST_LOGICAL);
+ }
+
+@@ -41,7 +42,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
+ unsigned long flags;
+ u32 dest;
+
+- x2apic_wrmsr_fence();
++ /* x2apic MSRs are special and need a special fence: */
++ weak_wrmsr_fence();
+ local_irq_save(flags);
+
+ tmpmsk = this_cpu_cpumask_var_ptr(ipi_mask);
+diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
+index e14eae6d6ea71..032a00e5d9fa6 100644
+--- a/arch/x86/kernel/apic/x2apic_phys.c
++++ b/arch/x86/kernel/apic/x2apic_phys.c
+@@ -43,7 +43,8 @@ static void x2apic_send_IPI(int cpu, int vector)
+ {
+ u32 dest = per_cpu(x86_cpu_to_apicid, cpu);
+
+- x2apic_wrmsr_fence();
++ /* x2apic MSRs are special and need a special fence: */
++ weak_wrmsr_fence();
+ __x2apic_send_IPI_dest(dest, vector, APIC_DEST_PHYSICAL);
+ }
+
+@@ -54,7 +55,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
+ unsigned long this_cpu;
+ unsigned long flags;
+
+- x2apic_wrmsr_fence();
++ /* x2apic MSRs are special and need a special fence: */
++ weak_wrmsr_fence();
+
+ local_irq_save(flags);
+
+@@ -125,7 +127,8 @@ void __x2apic_send_IPI_shorthand(int vector, u32 which)
+ {
+ unsigned long cfg = __prepare_ICR(which, vector, 0);
+
+- x2apic_wrmsr_fence();
++ /* x2apic MSRs are special and need a special fence: */
++ weak_wrmsr_fence();
+ native_x2apic_icr_write(cfg, 0);
+ }
+
+diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c
+index 03aa33b581658..668a4a6533d92 100644
+--- a/arch/x86/kernel/hw_breakpoint.c
++++ b/arch/x86/kernel/hw_breakpoint.c
+@@ -269,6 +269,20 @@ static inline bool within_cpu_entry(unsigned long addr, unsigned long end)
+ CPU_ENTRY_AREA_TOTAL_SIZE))
+ return true;
+
++ /*
++ * When FSGSBASE is enabled, paranoid_entry() fetches the per-CPU
++ * GSBASE value via __per_cpu_offset or pcpu_unit_offsets.
++ */
++#ifdef CONFIG_SMP
++ if (within_area(addr, end, (unsigned long)__per_cpu_offset,
++ sizeof(unsigned long) * nr_cpu_ids))
++ return true;
++#else
++ if (within_area(addr, end, (unsigned long)&pcpu_unit_offsets,
++ sizeof(pcpu_unit_offsets)))
++ return true;
++#endif
++
+ for_each_possible_cpu(cpu) {
+ /* The original rw GDT is being used after load_direct_gdt() */
+ if (within_area(addr, end, (unsigned long)get_cpu_gdt_rw(cpu),
+@@ -293,6 +307,14 @@ static inline bool within_cpu_entry(unsigned long addr, unsigned long end)
+ (unsigned long)&per_cpu(cpu_tlbstate, cpu),
+ sizeof(struct tlb_state)))
+ return true;
++
++ /*
++ * When running as a guest (X86_FEATURE_HYPERVISOR), local_db_save()
++ * will read the per-cpu cpu_dr7 before clearing the dr7 register.
++ */
++ if (within_area(addr, end, (unsigned long)&per_cpu(cpu_dr7, cpu),
++ sizeof(cpu_dr7)))
++ return true;
+ }
+
+ return false;
+@@ -491,15 +513,12 @@ static int hw_breakpoint_handler(struct die_args *args)
+ struct perf_event *bp;
+ unsigned long *dr6_p;
+ unsigned long dr6;
++ bool bpx;
+
+ /* The DR6 value is pointed by args->err */
+ dr6_p = (unsigned long *)ERR_PTR(args->err);
+ dr6 = *dr6_p;
+
+- /* If it's a single step, TRAP bits are random */
+- if (dr6 & DR_STEP)
+- return NOTIFY_DONE;
+-
+ /* Do an early return if no trap bits are set in DR6 */
+ if ((dr6 & DR_TRAP_BITS) == 0)
+ return NOTIFY_DONE;
+@@ -509,28 +528,29 @@ static int hw_breakpoint_handler(struct die_args *args)
+ if (likely(!(dr6 & (DR_TRAP0 << i))))
+ continue;
+
++ bp = this_cpu_read(bp_per_reg[i]);
++ if (!bp)
++ continue;
++
++ bpx = bp->hw.info.type == X86_BREAKPOINT_EXECUTE;
++
+ /*
+- * The counter may be concurrently released but that can only
+- * occur from a call_rcu() path. We can then safely fetch
+- * the breakpoint, use its callback, touch its counter
+- * while we are in an rcu_read_lock() path.
++ * TF and data breakpoints are traps and can be merged, however
++ * instruction breakpoints are faults and will be raised
++ * separately.
++ *
++ * However DR6 can indicate both TF and instruction
++ * breakpoints. In that case take TF as that has precedence and
++ * delay the instruction breakpoint for the next exception.
+ */
+- rcu_read_lock();
++ if (bpx && (dr6 & DR_STEP))
++ continue;
+
+- bp = this_cpu_read(bp_per_reg[i]);
+ /*
+ * Reset the 'i'th TRAP bit in dr6 to denote completion of
+ * exception handling
+ */
+ (*dr6_p) &= ~(DR_TRAP0 << i);
+- /*
+- * bp can be NULL due to lazy debug register switching
+- * or due to concurrent perf counter removing.
+- */
+- if (!bp) {
+- rcu_read_unlock();
+- break;
+- }
+
+ perf_bp_event(bp, args->regs);
+
+@@ -538,11 +558,10 @@ static int hw_breakpoint_handler(struct die_args *args)
+ * Set up resume flag to avoid breakpoint recursion when
+ * returning back to origin.
+ */
+- if (bp->hw.info.type == X86_BREAKPOINT_EXECUTE)
++ if (bpx)
+ args->regs->flags |= X86_EFLAGS_RF;
+-
+- rcu_read_unlock();
+ }
++
+ /*
+ * Further processing in do_debug() is needed for a) user-space
+ * breakpoints (to generate signals) and b) when the system has
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 83637a2ff6052..62157b1000f08 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -320,7 +320,7 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
+ if (cpuid->nent < vcpu->arch.cpuid_nent)
+ goto out;
+ r = -EFAULT;
+- if (copy_to_user(entries, &vcpu->arch.cpuid_entries,
++ if (copy_to_user(entries, vcpu->arch.cpuid_entries,
+ vcpu->arch.cpuid_nent * sizeof(struct kvm_cpuid_entry2)))
+ goto out;
+ return 0;
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index 56cae1ff9e3fe..66a08322988f2 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -2879,6 +2879,8 @@ static int em_sysenter(struct x86_emulate_ctxt *ctxt)
+ ops->get_msr(ctxt, MSR_IA32_SYSENTER_ESP, &msr_data);
+ *reg_write(ctxt, VCPU_REGS_RSP) = (efer & EFER_LMA) ? msr_data :
+ (u32)msr_data;
++ if (efer & EFER_LMA)
++ ctxt->mode = X86EMUL_MODE_PROT64;
+
+ return X86EMUL_CONTINUE;
+ }
+diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
+index b9265a585ea3c..c842d17240ccb 100644
+--- a/arch/x86/kvm/mmu/tdp_mmu.c
++++ b/arch/x86/kvm/mmu/tdp_mmu.c
+@@ -1037,8 +1037,8 @@ bool kvm_tdp_mmu_slot_set_dirty(struct kvm *kvm, struct kvm_memory_slot *slot)
+ }
+
+ /*
+- * Clear non-leaf entries (and free associated page tables) which could
+- * be replaced by large mappings, for GFNs within the slot.
++ * Clear leaf entries which could be replaced by large mappings, for
++ * GFNs within the slot.
+ */
+ static void zap_collapsible_spte_range(struct kvm *kvm,
+ struct kvm_mmu_page *root,
+@@ -1050,7 +1050,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm,
+
+ tdp_root_for_each_pte(iter, root, start, end) {
+ if (!is_shadow_present_pte(iter.old_spte) ||
+- is_last_spte(iter.old_spte, iter.level))
++ !is_last_spte(iter.old_spte, iter.level))
+ continue;
+
+ pfn = spte_to_pfn(iter.old_spte);
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 5c9630c3f6ba1..e3e04988fdabe 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -320,6 +320,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
+ unsigned long first, last;
+ int ret;
+
++ lockdep_assert_held(&kvm->lock);
++
+ if (ulen == 0 || uaddr + ulen < uaddr)
+ return ERR_PTR(-EINVAL);
+
+@@ -1001,12 +1003,20 @@ int svm_register_enc_region(struct kvm *kvm,
+ if (!region)
+ return -ENOMEM;
+
++ mutex_lock(&kvm->lock);
+ region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
+ if (IS_ERR(region->pages)) {
+ ret = PTR_ERR(region->pages);
++ mutex_unlock(&kvm->lock);
+ goto e_free;
+ }
+
++ region->uaddr = range->addr;
++ region->size = range->size;
++
++ list_add_tail(&region->list, &sev->regions_list);
++ mutex_unlock(&kvm->lock);
++
+ /*
+ * The guest may change the memory encryption attribute from C=0 -> C=1
+ * or vice versa for this memory range. Let's make sure caches are
+@@ -1015,13 +1025,6 @@ int svm_register_enc_region(struct kvm *kvm,
+ */
+ sev_clflush_pages(region->pages, region->npages);
+
+- region->uaddr = range->addr;
+- region->size = range->size;
+-
+- mutex_lock(&kvm->lock);
+- list_add_tail(&region->list, &sev->regions_list);
+- mutex_unlock(&kvm->lock);
+-
+ return ret;
+
+ e_free:
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 94b0cb8330451..f4ae3871e412a 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -438,6 +438,11 @@ static int has_svm(void)
+ return 0;
+ }
+
++ if (sev_active()) {
++ pr_info("KVM is unsupported when running as an SEV guest\n");
++ return 0;
++ }
++
+ return 1;
+ }
+
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index c01aac2bac37c..82af43e14b09c 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6874,11 +6874,20 @@ static int vmx_create_vcpu(struct kvm_vcpu *vcpu)
+ switch (index) {
+ case MSR_IA32_TSX_CTRL:
+ /*
+- * No need to pass TSX_CTRL_CPUID_CLEAR through, so
+- * let's avoid changing CPUID bits under the host
+- * kernel's feet.
++ * TSX_CTRL_CPUID_CLEAR is handled in the CPUID
++ * interception. Keep the host value unchanged to avoid
++ * changing CPUID bits under the host kernel's feet.
++ *
++ * hle=0, rtm=0, tsx_ctrl=1 can be found with some
++ * combinations of new kernel and old userspace. If
++ * those guests run on a tsx=off host, do allow guests
++ * to use TSX_CTRL, but do not change the value on the
++ * host so that TSX remains always disabled.
+ */
+- vmx->guest_uret_msrs[j].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
++ if (boot_cpu_has(X86_FEATURE_RTM))
++ vmx->guest_uret_msrs[j].mask = ~(u64)TSX_CTRL_CPUID_CLEAR;
++ else
++ vmx->guest_uret_msrs[j].mask = 0;
+ break;
+ default:
+ vmx->guest_uret_msrs[j].mask = -1ull;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0a302685e4d62..18a315bbcb79e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1376,16 +1376,24 @@ static u64 kvm_get_arch_capabilities(void)
+ if (!boot_cpu_has_bug(X86_BUG_MDS))
+ data |= ARCH_CAP_MDS_NO;
+
+- /*
+- * On TAA affected systems:
+- * - nothing to do if TSX is disabled on the host.
+- * - we emulate TSX_CTRL if present on the host.
+- * This lets the guest use VERW to clear CPU buffers.
+- */
+- if (!boot_cpu_has(X86_FEATURE_RTM))
+- data &= ~(ARCH_CAP_TAA_NO | ARCH_CAP_TSX_CTRL_MSR);
+- else if (!boot_cpu_has_bug(X86_BUG_TAA))
++ if (!boot_cpu_has(X86_FEATURE_RTM)) {
++ /*
++ * If RTM=0 because the kernel has disabled TSX, the host might
++ * have TAA_NO or TSX_CTRL. Clear TAA_NO (the guest sees RTM=0
++ * and therefore knows that there cannot be TAA) but keep
++ * TSX_CTRL: some buggy userspaces leave it set on tsx=on hosts,
++ * and we want to allow migrating those guests to tsx=off hosts.
++ */
++ data &= ~ARCH_CAP_TAA_NO;
++ } else if (!boot_cpu_has_bug(X86_BUG_TAA)) {
+ data |= ARCH_CAP_TAA_NO;
++ } else {
++ /*
++ * Nothing to do here; we emulate TSX_CTRL if present on the
++ * host so the guest can choose between disabling TSX or
++ * using VERW to clear CPU buffers.
++ */
++ }
+
+ return data;
+ }
+@@ -9907,6 +9915,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ fx_init(vcpu);
+
+ vcpu->arch.maxphyaddr = cpuid_query_maxphyaddr(vcpu);
++ vcpu->arch.cr3_lm_rsvd_bits = rsvd_bits(cpuid_maxphyaddr(vcpu), 63);
+
+ vcpu->arch.pat = MSR_IA32_CR_PAT_DEFAULT;
+
+diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
+index bc0833713be95..f80d10d39cf6d 100644
+--- a/arch/x86/mm/mem_encrypt.c
++++ b/arch/x86/mm/mem_encrypt.c
+@@ -351,6 +351,7 @@ bool sev_active(void)
+ {
+ return sev_status & MSR_AMD64_SEV_ENABLED;
+ }
++EXPORT_SYMBOL_GPL(sev_active);
+
+ /* Needs to be called from non-instrumentable code */
+ bool noinstr sev_es_active(void)
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 4ad3c4b276dcf..7e17d4edccb12 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -602,7 +602,11 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ ret = gdev->id;
+ goto err_free_gdev;
+ }
+- dev_set_name(&gdev->dev, GPIOCHIP_NAME "%d", gdev->id);
++
++ ret = dev_set_name(&gdev->dev, GPIOCHIP_NAME "%d", gdev->id);
++ if (ret)
++ goto err_free_ida;
++
+ device_initialize(&gdev->dev);
+ dev_set_drvdata(&gdev->dev, gdev);
+ if (gc->parent && gc->parent->driver)
+@@ -616,7 +620,7 @@ int gpiochip_add_data_with_key(struct gpio_chip *gc, void *data,
+ gdev->descs = kcalloc(gc->ngpio, sizeof(gdev->descs[0]), GFP_KERNEL);
+ if (!gdev->descs) {
+ ret = -ENOMEM;
+- goto err_free_ida;
++ goto err_free_dev_name;
+ }
+
+ if (gc->ngpio == 0) {
+@@ -767,6 +771,8 @@ err_free_label:
+ kfree_const(gdev->label);
+ err_free_descs:
+ kfree(gdev->descs);
++err_free_dev_name:
++ kfree(dev_name(&gdev->dev));
+ err_free_ida:
+ ida_free(&gpio_ida, gdev->id);
+ err_free_gdev:
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0f7749e9424d4..580880212e551 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2278,8 +2278,6 @@ void amdgpu_dm_update_connector_after_detect(
+
+ drm_connector_update_edid_property(connector,
+ aconnector->edid);
+- drm_add_edid_modes(connector, aconnector->edid);
+-
+ if (aconnector->dc_link->aux_mode)
+ drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+ aconnector->edid);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index e875425336406..7749b0ceabba9 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3629,14 +3629,26 @@ static int drm_dp_send_up_ack_reply(struct drm_dp_mst_topology_mgr *mgr,
+ return 0;
+ }
+
+-static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8 dp_link_count)
++/**
++ * drm_dp_get_vc_payload_bw - get the VC payload BW for an MST link
++ * @link_rate: link rate in 10kbits/s units
++ * @link_lane_count: lane count
++ *
++ * Calculate the total bandwidth of a MultiStream Transport link. The returned
++ * value is in units of PBNs/(timeslots/1 MTP). This value can be used to
++ * convert the number of PBNs required for a given stream to the number of
++ * timeslots this stream requires in each MTP.
++ */
++int drm_dp_get_vc_payload_bw(int link_rate, int link_lane_count)
+ {
+- if (dp_link_bw == 0 || dp_link_count == 0)
+- DRM_DEBUG_KMS("invalid link bandwidth in DPCD: %x (link count: %d)\n",
+- dp_link_bw, dp_link_count);
++ if (link_rate == 0 || link_lane_count == 0)
++ DRM_DEBUG_KMS("invalid link rate/lane count: (%d / %d)\n",
++ link_rate, link_lane_count);
+
+- return dp_link_bw * dp_link_count / 2;
++ /* See DP v2.0 2.6.4.2, VCPayload_Bandwidth_for_OneTimeSlotPer_MTP_Allocation */
++ return link_rate * link_lane_count / 54000;
+ }
++EXPORT_SYMBOL(drm_dp_get_vc_payload_bw);
+
+ /**
+ * drm_dp_read_mst_cap() - check whether or not a sink supports MST
+@@ -3692,7 +3704,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
+ goto out_unlock;
+ }
+
+- mgr->pbn_div = drm_dp_get_vc_payload_bw(mgr->dpcd[1],
++ mgr->pbn_div = drm_dp_get_vc_payload_bw(drm_dp_bw_code_to_link_rate(mgr->dpcd[1]),
+ mgr->dpcd[2] & DP_MAX_LANE_COUNT_MASK);
+ if (mgr->pbn_div == 0) {
+ ret = -EINVAL;
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 3f2bbd9370a86..40dfb4d0ffbec 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3274,6 +3274,23 @@ static void intel_ddi_disable_fec_state(struct intel_encoder *encoder,
+ intel_de_posting_read(dev_priv, intel_dp->regs.dp_tp_ctl);
+ }
+
++static void intel_ddi_power_up_lanes(struct intel_encoder *encoder,
++ const struct intel_crtc_state *crtc_state)
++{
++ struct drm_i915_private *i915 = to_i915(encoder->base.dev);
++ struct intel_digital_port *dig_port = enc_to_dig_port(encoder);
++ enum phy phy = intel_port_to_phy(i915, encoder->port);
++
++ if (intel_phy_is_combo(i915, phy)) {
++ bool lane_reversal =
++ dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
++
++ intel_combo_phy_power_up_lanes(i915, phy, false,
++ crtc_state->lane_count,
++ lane_reversal);
++ }
++}
++
+ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state,
+@@ -3367,14 +3384,7 @@ static void tgl_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ * 7.f Combo PHY: Configure PORT_CL_DW10 Static Power Down to power up
+ * the used lanes of the DDI.
+ */
+- if (intel_phy_is_combo(dev_priv, phy)) {
+- bool lane_reversal =
+- dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
+-
+- intel_combo_phy_power_up_lanes(dev_priv, phy, false,
+- crtc_state->lane_count,
+- lane_reversal);
+- }
++ intel_ddi_power_up_lanes(encoder, crtc_state);
+
+ /*
+ * 7.g Configure and enable DDI_BUF_CTL
+@@ -3458,14 +3468,7 @@ static void hsw_ddi_pre_enable_dp(struct intel_atomic_state *state,
+ else
+ intel_prepare_dp_ddi_buffers(encoder, crtc_state);
+
+- if (intel_phy_is_combo(dev_priv, phy)) {
+- bool lane_reversal =
+- dig_port->saved_port_bits & DDI_BUF_PORT_REVERSAL;
+-
+- intel_combo_phy_power_up_lanes(dev_priv, phy, false,
+- crtc_state->lane_count,
+- lane_reversal);
+- }
++ intel_ddi_power_up_lanes(encoder, crtc_state);
+
+ intel_ddi_init_dp_buf_reg(encoder);
+ if (!is_mst)
+@@ -3933,6 +3936,8 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state,
+ intel_de_write(dev_priv, reg, val);
+ }
+
++ intel_ddi_power_up_lanes(encoder, crtc_state);
++
+ /* In HDMI/DVI mode, the port width, and swing/emphasis values
+ * are ignored so nothing special needs to be done besides
+ * enabling the port.
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index aabf09f89cada..45c2556d63955 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -2294,7 +2294,7 @@ intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb,
+ */
+ ret = i915_vma_pin_fence(vma);
+ if (ret != 0 && INTEL_GEN(dev_priv) < 4) {
+- i915_gem_object_unpin_from_display_plane(vma);
++ i915_vma_unpin(vma);
+ vma = ERR_PTR(ret);
+ goto err;
+ }
+@@ -2312,12 +2312,9 @@ err:
+
+ void intel_unpin_fb_vma(struct i915_vma *vma, unsigned long flags)
+ {
+- i915_gem_object_lock(vma->obj, NULL);
+ if (flags & PLANE_HAS_FENCE)
+ i915_vma_unpin_fence(vma);
+- i915_gem_object_unpin_from_display_plane(vma);
+- i915_gem_object_unlock(vma->obj);
+-
++ i915_vma_unpin(vma);
+ i915_vma_put(vma);
+ }
+
+@@ -4883,6 +4880,8 @@ u32 glk_plane_color_ctl(const struct intel_crtc_state *crtc_state,
+ plane_color_ctl |= PLANE_COLOR_YUV_RANGE_CORRECTION_DISABLE;
+ } else if (fb->format->is_yuv) {
+ plane_color_ctl |= PLANE_COLOR_INPUT_CSC_ENABLE;
++ if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
++ plane_color_ctl |= PLANE_COLOR_YUV_RANGE_CORRECTION_DISABLE;
+ }
+
+ return plane_color_ctl;
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_mst.c b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+index 5d745d9b99b2a..ecaa538b2d357 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_mst.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_mst.c
+@@ -68,7 +68,9 @@ static int intel_dp_mst_compute_link_config(struct intel_encoder *encoder,
+
+ slots = drm_dp_atomic_find_vcpi_slots(state, &intel_dp->mst_mgr,
+ connector->port,
+- crtc_state->pbn, 0);
++ crtc_state->pbn,
++ drm_dp_get_vc_payload_bw(crtc_state->port_clock,
++ crtc_state->lane_count));
+ if (slots == -EDEADLK)
+ return slots;
+ if (slots >= 0)
+diff --git a/drivers/gpu/drm/i915/display/intel_overlay.c b/drivers/gpu/drm/i915/display/intel_overlay.c
+index 52b4f6193b4ce..0095c8cac9b40 100644
+--- a/drivers/gpu/drm/i915/display/intel_overlay.c
++++ b/drivers/gpu/drm/i915/display/intel_overlay.c
+@@ -359,7 +359,7 @@ static void intel_overlay_release_old_vma(struct intel_overlay *overlay)
+ intel_frontbuffer_flip_complete(overlay->i915,
+ INTEL_FRONTBUFFER_OVERLAY(overlay->crtc->pipe));
+
+- i915_gem_object_unpin_from_display_plane(vma);
++ i915_vma_unpin(vma);
+ i915_vma_put(vma);
+ }
+
+@@ -860,7 +860,7 @@ static int intel_overlay_do_put_image(struct intel_overlay *overlay,
+ return 0;
+
+ out_unpin:
+- i915_gem_object_unpin_from_display_plane(vma);
++ i915_vma_unpin(vma);
+ out_pin_section:
+ atomic_dec(&dev_priv->gpu_error.pending_fb_pin);
+
+diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c
+index 63040cb0d4e10..12f7128b777f6 100644
+--- a/drivers/gpu/drm/i915/display/intel_sprite.c
++++ b/drivers/gpu/drm/i915/display/intel_sprite.c
+@@ -469,13 +469,19 @@ skl_program_scaler(struct intel_plane *plane,
+
+ /* Preoffset values for YUV to RGB Conversion */
+ #define PREOFF_YUV_TO_RGB_HI 0x1800
+-#define PREOFF_YUV_TO_RGB_ME 0x1F00
++#define PREOFF_YUV_TO_RGB_ME 0x0000
+ #define PREOFF_YUV_TO_RGB_LO 0x1800
+
+ #define ROFF(x) (((x) & 0xffff) << 16)
+ #define GOFF(x) (((x) & 0xffff) << 0)
+ #define BOFF(x) (((x) & 0xffff) << 16)
+
++/*
++ * Programs the input color space conversion stage for ICL HDR planes.
++ * Note that it is assumed that this stage always happens after YUV
++ * range correction. Thus, the input to this stage is assumed to be
++ * in full-range YCbCr.
++ */
+ static void
+ icl_program_input_csc(struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+@@ -523,52 +529,7 @@ icl_program_input_csc(struct intel_plane *plane,
+ 0x0, 0x7800, 0x7F10,
+ },
+ };
+-
+- /* Matrix for Limited Range to Full Range Conversion */
+- static const u16 input_csc_matrix_lr[][9] = {
+- /*
+- * BT.601 Limted range YCbCr -> full range RGB
+- * The matrix required is :
+- * [1.164384, 0.000, 1.596027,
+- * 1.164384, -0.39175, -0.812813,
+- * 1.164384, 2.017232, 0.0000]
+- */
+- [DRM_COLOR_YCBCR_BT601] = {
+- 0x7CC8, 0x7950, 0x0,
+- 0x8D00, 0x7950, 0x9C88,
+- 0x0, 0x7950, 0x6810,
+- },
+- /*
+- * BT.709 Limited range YCbCr -> full range RGB
+- * The matrix required is :
+- * [1.164384, 0.000, 1.792741,
+- * 1.164384, -0.213249, -0.532909,
+- * 1.164384, 2.112402, 0.0000]
+- */
+- [DRM_COLOR_YCBCR_BT709] = {
+- 0x7E58, 0x7950, 0x0,
+- 0x8888, 0x7950, 0xADA8,
+- 0x0, 0x7950, 0x6870,
+- },
+- /*
+- * BT.2020 Limited range YCbCr -> full range RGB
+- * The matrix required is :
+- * [1.164, 0.000, 1.678,
+- * 1.164, -0.1873, -0.6504,
+- * 1.164, 2.1417, 0.0000]
+- */
+- [DRM_COLOR_YCBCR_BT2020] = {
+- 0x7D70, 0x7950, 0x0,
+- 0x8A68, 0x7950, 0xAC00,
+- 0x0, 0x7950, 0x6890,
+- },
+- };
+- const u16 *csc;
+-
+- if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
+- csc = input_csc_matrix[plane_state->hw.color_encoding];
+- else
+- csc = input_csc_matrix_lr[plane_state->hw.color_encoding];
++ const u16 *csc = input_csc_matrix[plane_state->hw.color_encoding];
+
+ intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0),
+ ROFF(csc[0]) | GOFF(csc[1]));
+@@ -585,14 +546,8 @@ icl_program_input_csc(struct intel_plane *plane,
+
+ intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
+ PREOFF_YUV_TO_RGB_HI);
+- if (plane_state->hw.color_range == DRM_COLOR_YCBCR_FULL_RANGE)
+- intel_de_write_fw(dev_priv,
+- PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+- 0);
+- else
+- intel_de_write_fw(dev_priv,
+- PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+- PREOFF_YUV_TO_RGB_ME);
++ intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
++ PREOFF_YUV_TO_RGB_ME);
+ intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
+ PREOFF_YUV_TO_RGB_LO);
+ intel_de_write_fw(dev_priv,
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+index fcce6909f2017..3d435bfff7649 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+@@ -387,48 +387,6 @@ err:
+ return vma;
+ }
+
+-static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
+-{
+- struct drm_i915_private *i915 = to_i915(obj->base.dev);
+- struct i915_vma *vma;
+-
+- if (list_empty(&obj->vma.list))
+- return;
+-
+- mutex_lock(&i915->ggtt.vm.mutex);
+- spin_lock(&obj->vma.lock);
+- for_each_ggtt_vma(vma, obj) {
+- if (!drm_mm_node_allocated(&vma->node))
+- continue;
+-
+- GEM_BUG_ON(vma->vm != &i915->ggtt.vm);
+- list_move_tail(&vma->vm_link, &vma->vm->bound_list);
+- }
+- spin_unlock(&obj->vma.lock);
+- mutex_unlock(&i915->ggtt.vm.mutex);
+-
+- if (i915_gem_object_is_shrinkable(obj)) {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&i915->mm.obj_lock, flags);
+-
+- if (obj->mm.madv == I915_MADV_WILLNEED &&
+- !atomic_read(&obj->mm.shrink_pin))
+- list_move_tail(&obj->mm.link, &i915->mm.shrink_list);
+-
+- spin_unlock_irqrestore(&i915->mm.obj_lock, flags);
+- }
+-}
+-
+-void
+-i915_gem_object_unpin_from_display_plane(struct i915_vma *vma)
+-{
+- /* Bump the LRU to try and avoid premature eviction whilst flipping */
+- i915_gem_object_bump_inactive_ggtt(vma->obj);
+-
+- i915_vma_unpin(vma);
+-}
+-
+ /**
+ * Moves a single object to the CPU read, and possibly write domain.
+ * @obj: object to act on
+@@ -569,9 +527,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
+ else
+ err = i915_gem_object_set_to_cpu_domain(obj, write_domain);
+
+- /* And bump the LRU for this access */
+- i915_gem_object_bump_inactive_ggtt(obj);
+-
+ i915_gem_object_unlock(obj);
+
+ if (write_domain)
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
+index d46db8d8f38e4..bc48717971204 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
++++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
+@@ -471,7 +471,6 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
+ u32 alignment,
+ const struct i915_ggtt_view *view,
+ unsigned int flags);
+-void i915_gem_object_unpin_from_display_plane(struct i915_vma *vma);
+
+ void i915_gem_object_make_unshrinkable(struct drm_i915_gem_object *obj);
+ void i915_gem_object_make_shrinkable(struct drm_i915_gem_object *obj);
+diff --git a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+index 0625cbb3b4312..0040b4765a54d 100644
+--- a/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
++++ b/drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
+@@ -451,10 +451,12 @@ void i915_request_cancel_breadcrumb(struct i915_request *rq)
+ struct intel_context *ce = rq->context;
+ bool release;
+
+- if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags))
++ spin_lock(&ce->signal_lock);
++ if (!test_and_clear_bit(I915_FENCE_FLAG_SIGNAL, &rq->fence.flags)) {
++ spin_unlock(&ce->signal_lock);
+ return;
++ }
+
+- spin_lock(&ce->signal_lock);
+ list_del_rcu(&rq->signal_link);
+ release = remove_signaling_context(rq->engine->breadcrumbs, ce);
+ spin_unlock(&ce->signal_lock);
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 8c73377ac82ca..3d004ca76b6ed 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -215,9 +215,17 @@ static const struct xpad_device {
+ { 0x0e6f, 0x0213, "Afterglow Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x0e6f, 0x021f, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x0e6f, 0x0246, "Rock Candy Gamepad for Xbox One 2015", 0, XTYPE_XBOXONE },
+- { 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02a0, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02a1, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02a2, "PDP Wired Controller for Xbox One - Crimson Red", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x02a4, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x02a6, "PDP Wired Controller for Xbox One - Camo Series", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02a7, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02a8, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02ad, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02b3, "Afterglow Prismatic Wired Controller", 0, XTYPE_XBOXONE },
++ { 0x0e6f, 0x02b8, "Afterglow Prismatic Wired Controller", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0301, "Logic3 Controller", 0, XTYPE_XBOX360 },
+ { 0x0e6f, 0x0346, "Rock Candy Gamepad for Xbox One 2016", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0401, "Logic3 Controller", 0, XTYPE_XBOX360 },
+@@ -296,6 +304,9 @@ static const struct xpad_device {
+ { 0x1bad, 0xfa01, "MadCatz GamePad", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 },
++ { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE },
++ { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 },
++ { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
+ { 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x24c6, 0x5300, "PowerA MINI PROEX Controller", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x5303, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 },
+@@ -429,8 +440,12 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x162e), /* Joytech X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */
+ XPAD_XBOX360_VENDOR(0x1bad), /* Harminix Rock Band Guitar and Drums */
++ XPAD_XBOX360_VENDOR(0x20d6), /* PowerA Controllers */
++ XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA Controllers */
+ XPAD_XBOX360_VENDOR(0x24c6), /* PowerA Controllers */
+ XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */
++ XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke X-Box One pad */
++ XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */
+ { }
+ };
+
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 3a2dcf0805f12..c74b020796a94 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -219,6 +219,8 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = {
+ DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "C15B"),
+ },
++ },
++ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ByteSpeed LLC"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "ByteSpeed Laptop C15B"),
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 6612f9e2d7e83..45113767db964 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -157,6 +157,7 @@ static const struct goodix_chip_id goodix_chip_ids[] = {
+ { .id = "5663", .data = &gt1x_chip_data },
+ { .id = "5688", .data = &gt1x_chip_data },
+ { .id = "917S", .data = &gt1x_chip_data },
++ { .id = "9286", .data = &gt1x_chip_data },
+
+ { .id = "911", .data = &gt911_chip_data },
+ { .id = "9271", .data = &gt911_chip_data },
+@@ -1445,6 +1446,7 @@ static const struct of_device_id goodix_of_match[] = {
+ { .compatible = "goodix,gt927" },
+ { .compatible = "goodix,gt9271" },
+ { .compatible = "goodix,gt928" },
++ { .compatible = "goodix,gt9286" },
+ { .compatible = "goodix,gt967" },
+ { }
+ };
+diff --git a/drivers/input/touchscreen/ili210x.c b/drivers/input/touchscreen/ili210x.c
+index 199cf3daec106..d8fccf048bf44 100644
+--- a/drivers/input/touchscreen/ili210x.c
++++ b/drivers/input/touchscreen/ili210x.c
+@@ -29,11 +29,13 @@ struct ili2xxx_chip {
+ void *buf, size_t len);
+ int (*get_touch_data)(struct i2c_client *client, u8 *data);
+ bool (*parse_touch_data)(const u8 *data, unsigned int finger,
+- unsigned int *x, unsigned int *y);
++ unsigned int *x, unsigned int *y,
++ unsigned int *z);
+ bool (*continue_polling)(const u8 *data, bool touch);
+ unsigned int max_touches;
+ unsigned int resolution;
+ bool has_calibrate_reg;
++ bool has_pressure_reg;
+ };
+
+ struct ili210x {
+@@ -82,7 +84,8 @@ static int ili210x_read_touch_data(struct i2c_client *client, u8 *data)
+
+ static bool ili210x_touchdata_to_coords(const u8 *touchdata,
+ unsigned int finger,
+- unsigned int *x, unsigned int *y)
++ unsigned int *x, unsigned int *y,
++ unsigned int *z)
+ {
+ if (touchdata[0] & BIT(finger))
+ return false;
+@@ -137,7 +140,8 @@ static int ili211x_read_touch_data(struct i2c_client *client, u8 *data)
+
+ static bool ili211x_touchdata_to_coords(const u8 *touchdata,
+ unsigned int finger,
+- unsigned int *x, unsigned int *y)
++ unsigned int *x, unsigned int *y,
++ unsigned int *z)
+ {
+ u32 data;
+
+@@ -169,7 +173,8 @@ static const struct ili2xxx_chip ili211x_chip = {
+
+ static bool ili212x_touchdata_to_coords(const u8 *touchdata,
+ unsigned int finger,
+- unsigned int *x, unsigned int *y)
++ unsigned int *x, unsigned int *y,
++ unsigned int *z)
+ {
+ u16 val;
+
+@@ -235,7 +240,8 @@ static int ili251x_read_touch_data(struct i2c_client *client, u8 *data)
+
+ static bool ili251x_touchdata_to_coords(const u8 *touchdata,
+ unsigned int finger,
+- unsigned int *x, unsigned int *y)
++ unsigned int *x, unsigned int *y,
++ unsigned int *z)
+ {
+ u16 val;
+
+@@ -245,6 +251,7 @@ static bool ili251x_touchdata_to_coords(const u8 *touchdata,
+
+ *x = val & 0x3fff;
+ *y = get_unaligned_be16(touchdata + 1 + (finger * 5) + 2);
++ *z = touchdata[1 + (finger * 5) + 4];
+
+ return true;
+ }
+@@ -261,6 +268,7 @@ static const struct ili2xxx_chip ili251x_chip = {
+ .continue_polling = ili251x_check_continue_polling,
+ .max_touches = 10,
+ .has_calibrate_reg = true,
++ .has_pressure_reg = true,
+ };
+
+ static bool ili210x_report_events(struct ili210x *priv, u8 *touchdata)
+@@ -268,14 +276,16 @@ static bool ili210x_report_events(struct ili210x *priv, u8 *touchdata)
+ struct input_dev *input = priv->input;
+ int i;
+ bool contact = false, touch;
+- unsigned int x = 0, y = 0;
++ unsigned int x = 0, y = 0, z = 0;
+
+ for (i = 0; i < priv->chip->max_touches; i++) {
+- touch = priv->chip->parse_touch_data(touchdata, i, &x, &y);
++ touch = priv->chip->parse_touch_data(touchdata, i, &x, &y, &z);
+
+ input_mt_slot(input, i);
+ if (input_mt_report_slot_state(input, MT_TOOL_FINGER, touch)) {
+ touchscreen_report_pos(input, &priv->prop, x, y, true);
++ if (priv->chip->has_pressure_reg)
++ input_report_abs(input, ABS_MT_PRESSURE, z);
+ contact = true;
+ }
+ }
+@@ -437,6 +447,8 @@ static int ili210x_i2c_probe(struct i2c_client *client,
+ max_xy = (chip->resolution ?: SZ_64K) - 1;
+ input_set_abs_params(input, ABS_MT_POSITION_X, 0, max_xy, 0, 0);
+ input_set_abs_params(input, ABS_MT_POSITION_Y, 0, max_xy, 0, 0);
++ if (priv->chip->has_pressure_reg)
++ input_set_abs_params(input, ABS_MT_PRESSURE, 0, 0xa, 0, 0);
+ touchscreen_parse_properties(input, true, &priv->prop);
+
+ error = input_mt_init_slots(input, priv->chip->max_touches,
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 3be74cf3635fe..7a0a228d64bbe 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -639,8 +639,10 @@ static void md_submit_flush_data(struct work_struct *ws)
+ * could wait for this and below md_handle_request could wait for those
+ * bios because of suspend check
+ */
++ spin_lock_irq(&mddev->lock);
+ mddev->last_flush = mddev->start_flush;
+ mddev->flush_bio = NULL;
++ spin_unlock_irq(&mddev->lock);
+ wake_up(&mddev->sb_wait);
+
+ if (bio->bi_iter.bi_size == 0) {
+diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c
+index 44bea5e4aeda1..b23773583179d 100644
+--- a/drivers/mmc/core/sdio_cis.c
++++ b/drivers/mmc/core/sdio_cis.c
+@@ -20,6 +20,8 @@
+ #include "sdio_cis.h"
+ #include "sdio_ops.h"
+
++#define SDIO_READ_CIS_TIMEOUT_MS (10 * 1000) /* 10s */
++
+ static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func,
+ const unsigned char *buf, unsigned size)
+ {
+@@ -274,6 +276,8 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func)
+
+ do {
+ unsigned char tpl_code, tpl_link;
++ unsigned long timeout = jiffies +
++ msecs_to_jiffies(SDIO_READ_CIS_TIMEOUT_MS);
+
+ ret = mmc_io_rw_direct(card, 0, 0, ptr++, 0, &tpl_code);
+ if (ret)
+@@ -326,6 +330,8 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func)
+ prev = &this->next;
+
+ if (ret == -ENOENT) {
++ if (time_after(jiffies, timeout))
++ break;
+ /* warn about unknown tuples */
+ pr_warn_ratelimited("%s: queuing unknown"
+ " CIS tuple 0x%02x (%u bytes)\n",
+diff --git a/drivers/mmc/host/sdhci-pltfm.h b/drivers/mmc/host/sdhci-pltfm.h
+index 6301b81cf5731..9bd717ff784be 100644
+--- a/drivers/mmc/host/sdhci-pltfm.h
++++ b/drivers/mmc/host/sdhci-pltfm.h
+@@ -111,8 +111,13 @@ static inline void *sdhci_pltfm_priv(struct sdhci_pltfm_host *host)
+ return host->private;
+ }
+
++extern const struct dev_pm_ops sdhci_pltfm_pmops;
++#ifdef CONFIG_PM_SLEEP
+ int sdhci_pltfm_suspend(struct device *dev);
+ int sdhci_pltfm_resume(struct device *dev);
+-extern const struct dev_pm_ops sdhci_pltfm_pmops;
++#else
++static inline int sdhci_pltfm_suspend(struct device *dev) { return 0; }
++static inline int sdhci_pltfm_resume(struct device *dev) { return 0; }
++#endif
+
+ #endif /* _DRIVERS_MMC_SDHCI_PLTFM_H */
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 34cca0a4b31c7..87160e723dfcf 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -1669,7 +1669,11 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
+ if (!entry.portvec)
+ entry.state = 0;
+ } else {
+- entry.portvec |= BIT(port);
++ if (state == MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC)
++ entry.portvec = BIT(port);
++ else
++ entry.portvec |= BIT(port);
++
+ entry.state = state;
+ }
+
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 627ce1a20473a..2f281d0f98070 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -5339,11 +5339,6 @@ static int ibmvnic_remove(struct vio_dev *dev)
+ unsigned long flags;
+
+ spin_lock_irqsave(&adapter->state_lock, flags);
+- if (test_bit(0, &adapter->resetting)) {
+- spin_unlock_irqrestore(&adapter->state_lock, flags);
+- return -EBUSY;
+- }
+-
+ adapter->state = VNIC_REMOVING;
+ spin_unlock_irqrestore(&adapter->state_lock, flags);
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+index 2872c4dc77f07..3b269c70dcfe1 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+@@ -55,12 +55,7 @@ static void i40e_vc_notify_vf_link_state(struct i40e_vf *vf)
+
+ pfe.event = VIRTCHNL_EVENT_LINK_CHANGE;
+ pfe.severity = PF_EVENT_SEVERITY_INFO;
+-
+- /* Always report link is down if the VF queues aren't enabled */
+- if (!vf->queues_enabled) {
+- pfe.event_data.link_event.link_status = false;
+- pfe.event_data.link_event.link_speed = 0;
+- } else if (vf->link_forced) {
++ if (vf->link_forced) {
+ pfe.event_data.link_event.link_status = vf->link_up;
+ pfe.event_data.link_event.link_speed =
+ (vf->link_up ? VIRTCHNL_LINK_SPEED_40GB : 0);
+@@ -70,7 +65,6 @@ static void i40e_vc_notify_vf_link_state(struct i40e_vf *vf)
+ pfe.event_data.link_event.link_speed =
+ i40e_virtchnl_link_speed(ls->link_speed);
+ }
+-
+ i40e_aq_send_msg_to_vf(hw, abs_vf_id, VIRTCHNL_OP_EVENT,
+ 0, (u8 *)&pfe, sizeof(pfe), NULL);
+ }
+@@ -2443,8 +2437,6 @@ static int i40e_vc_enable_queues_msg(struct i40e_vf *vf, u8 *msg)
+ }
+ }
+
+- vf->queues_enabled = true;
+-
+ error_param:
+ /* send the response to the VF */
+ return i40e_vc_send_resp_to_vf(vf, VIRTCHNL_OP_ENABLE_QUEUES,
+@@ -2466,9 +2458,6 @@ static int i40e_vc_disable_queues_msg(struct i40e_vf *vf, u8 *msg)
+ struct i40e_pf *pf = vf->pf;
+ i40e_status aq_ret = 0;
+
+- /* Immediately mark queues as disabled */
+- vf->queues_enabled = false;
+-
+ if (!test_bit(I40E_VF_STATE_ACTIVE, &vf->vf_states)) {
+ aq_ret = I40E_ERR_PARAM;
+ goto error_param;
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+index 5491215d81deb..091e32c1bb46f 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+@@ -98,7 +98,6 @@ struct i40e_vf {
+ unsigned int tx_rate; /* Tx bandwidth limit in Mbps */
+ bool link_forced;
+ bool link_up; /* only valid if VF link is forced */
+- bool queues_enabled; /* true if the VF queues are enabled */
+ bool spoofchk;
+ u16 num_vlan;
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+index 831f2f09de5fb..ec8cd69d49928 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
+@@ -1714,7 +1714,8 @@ static int igc_ethtool_get_link_ksettings(struct net_device *netdev,
+ Asym_Pause);
+ }
+
+- status = rd32(IGC_STATUS);
++ status = pm_runtime_suspended(&adapter->pdev->dev) ?
++ 0 : rd32(IGC_STATUS);
+
+ if (status & IGC_STATUS_LU) {
+ if (status & IGC_STATUS_SPEED_1000) {
+diff --git a/drivers/net/ethernet/intel/igc/igc_i225.c b/drivers/net/ethernet/intel/igc/igc_i225.c
+index 8b67d9b49a83a..7ec04e48860c6 100644
+--- a/drivers/net/ethernet/intel/igc/igc_i225.c
++++ b/drivers/net/ethernet/intel/igc/igc_i225.c
+@@ -219,9 +219,9 @@ static s32 igc_write_nvm_srwr(struct igc_hw *hw, u16 offset, u16 words,
+ u16 *data)
+ {
+ struct igc_nvm_info *nvm = &hw->nvm;
++ s32 ret_val = -IGC_ERR_NVM;
+ u32 attempts = 100000;
+ u32 i, k, eewr = 0;
+- s32 ret_val = 0;
+
+ /* A check for invalid values: offset too large, too many words,
+ * too many words for the offset, and not enough words.
+@@ -229,7 +229,6 @@ static s32 igc_write_nvm_srwr(struct igc_hw *hw, u16 offset, u16 words,
+ if (offset >= nvm->word_size || (words > (nvm->word_size - offset)) ||
+ words == 0) {
+ hw_dbg("nvm parameter(s) out of bounds\n");
+- ret_val = -IGC_ERR_NVM;
+ goto out;
+ }
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_mac.c b/drivers/net/ethernet/intel/igc/igc_mac.c
+index 09cd0ec7ee87d..67b8ffd21d8af 100644
+--- a/drivers/net/ethernet/intel/igc/igc_mac.c
++++ b/drivers/net/ethernet/intel/igc/igc_mac.c
+@@ -638,7 +638,7 @@ s32 igc_config_fc_after_link_up(struct igc_hw *hw)
+ }
+
+ out:
+- return 0;
++ return ret_val;
+ }
+
+ /**
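
Both igc hunks apply the same error-path idiom: igc_write_nvm_srwr() now starts with ret_val preset to -IGC_ERR_NVM rather than assigning it in each failure branch, and igc_config_fc_after_link_up() returns ret_val instead of a hard-coded 0 that swallowed failures. A compact standalone illustration of the pattern (function and constant names invented for the sketch):

#include <assert.h>

#define ERR_NVM (-2051)	/* stand-in for -IGC_ERR_NVM */

static int write_words(unsigned int offset, unsigned int words,
		       unsigned int word_size)
{
	int ret = ERR_NVM;	/* assume failure up front */

	if (offset >= word_size || words == 0 || words > word_size - offset)
		goto out;	/* every early exit reports the error */

	ret = 0;		/* parameters validated; do the work */
out:
	return ret;		/* never a hard-coded 0 */
}

int main(void)
{
	assert(write_words(10, 0, 64) == ERR_NVM);	/* bad: zero words */
	assert(write_words(70, 1, 64) == ERR_NVM);	/* bad: offset OOB */
	assert(write_words(10, 4, 64) == 0);		/* ok */
	return 0;
}
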
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index a30eb90ba3d28..dd590086fe6a5 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -29,16 +29,16 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe)
+ /* Clear entry invalidation bit */
+ pe->tcam[MVPP2_PRS_TCAM_INV_WORD] &= ~MVPP2_PRS_TCAM_INV_MASK;
+
+- /* Write tcam index - indirect access */
+- mvpp2_write(priv, MVPP2_PRS_TCAM_IDX_REG, pe->index);
+- for (i = 0; i < MVPP2_PRS_TCAM_WORDS; i++)
+- mvpp2_write(priv, MVPP2_PRS_TCAM_DATA_REG(i), pe->tcam[i]);
+-
+ /* Write sram index - indirect access */
+ mvpp2_write(priv, MVPP2_PRS_SRAM_IDX_REG, pe->index);
+ for (i = 0; i < MVPP2_PRS_SRAM_WORDS; i++)
+ mvpp2_write(priv, MVPP2_PRS_SRAM_DATA_REG(i), pe->sram[i]);
+
++ /* Write tcam index - indirect access */
++ mvpp2_write(priv, MVPP2_PRS_TCAM_IDX_REG, pe->index);
++ for (i = 0; i < MVPP2_PRS_TCAM_WORDS; i++)
++ mvpp2_write(priv, MVPP2_PRS_TCAM_DATA_REG(i), pe->tcam[i]);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index c9b5d7f29911e..42848db8f8dd6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3593,12 +3593,10 @@ static int mlx5e_setup_tc_mqprio(struct mlx5e_priv *priv,
+
+ err = mlx5e_safe_switch_channels(priv, &new_channels,
+ mlx5e_num_channels_changed_ctx, NULL);
+- if (err)
+- goto out;
+
+- priv->max_opened_tc = max_t(u8, priv->max_opened_tc,
+- new_channels.params.num_tc);
+ out:
++ priv->max_opened_tc = max_t(u8, priv->max_opened_tc,
++ priv->channels.params.num_tc);
+ mutex_unlock(&priv->state_lock);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 6628a0197b4e0..6d2ba8b84187c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1262,8 +1262,10 @@ static void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+
+ if (mlx5e_cqe_regb_chain(cqe))
+- if (!mlx5e_tc_update_skb(cqe, skb))
++ if (!mlx5e_tc_update_skb(cqe, skb)) {
++ dev_kfree_skb_any(skb);
+ goto free_wqe;
++ }
+
+ napi_gro_receive(rq->cq.napi, skb);
+
+@@ -1316,8 +1318,10 @@ static void mlx5e_handle_rx_cqe_rep(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+ if (rep->vlan && skb_vlan_tag_present(skb))
+ skb_vlan_pop(skb);
+
+- if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))
++ if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++ dev_kfree_skb_any(skb);
+ goto free_wqe;
++ }
+
+ napi_gro_receive(rq->cq.napi, skb);
+
+@@ -1371,8 +1375,10 @@ static void mlx5e_handle_rx_cqe_mpwrq_rep(struct mlx5e_rq *rq, struct mlx5_cqe64
+
+ mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+
+- if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv))
++ if (!mlx5e_rep_tc_update_skb(cqe, skb, &tc_priv)) {
++ dev_kfree_skb_any(skb);
+ goto mpwrq_cqe_out;
++ }
+
+ napi_gro_receive(rq->cq.napi, skb);
+
+@@ -1528,8 +1534,10 @@ static void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cq
+ mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+
+ if (mlx5e_cqe_regb_chain(cqe))
+- if (!mlx5e_tc_update_skb(cqe, skb))
++ if (!mlx5e_tc_update_skb(cqe, skb)) {
++ dev_kfree_skb_any(skb);
+ goto mpwrq_cqe_out;
++ }
+
+ napi_gro_receive(rq->cq.napi, skb);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 634c2bfd25be1..79fc5755735fa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1764,6 +1764,7 @@ search_again_locked:
+ if (!fte_tmp)
+ continue;
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte_tmp);
++ /* No error check needed here, because insert_fte() is not called */
+ up_write_ref_node(&fte_tmp->node, false);
+ tree_put_node(&fte_tmp->node, false);
+ kmem_cache_free(steering->ftes_cache, fte);
+@@ -1816,6 +1817,8 @@ skip_search:
+ up_write_ref_node(&g->node, false);
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
+ up_write_ref_node(&fte->node, false);
++ if (IS_ERR(rule))
++ tree_put_node(&fte->node, false);
+ return rule;
+ }
+ rule = ERR_PTR(-ENOENT);
+@@ -1914,6 +1917,8 @@ search_again_locked:
+ up_write_ref_node(&g->node, false);
+ rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
+ up_write_ref_node(&fte->node, false);
++ if (IS_ERR(rule))
++ tree_put_node(&fte->node, false);
+ tree_put_node(&g->node, false);
+ return rule;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+index a3e0c71831928..a44a2bad5bbb5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+@@ -76,7 +76,7 @@ enum {
+
+ static u32 get_function(u16 func_id, bool ec_function)
+ {
+- return func_id & (ec_function << 16);
++ return (u32)func_id | (ec_function << 16);
+ }
+
+ static struct rb_root *page_root_per_function(struct mlx5_core_dev *dev, u32 function)
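
The pagealloc fix is a single operator: the old expression masked func_id with the shifted flag, so get_function() evaluated to 0 for every function/flag combination, collapsing per-function page accounting into one bucket. The corrected version packs the 16-bit function id into the low bits and the ec_function flag into bit 16. The packing can be checked standalone (a sketch mirroring the fixed helper):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint32_t get_function(uint16_t func_id, bool ec_function)
{
	return (uint32_t)func_id | ((uint32_t)ec_function << 16);
}

int main(void)
{
	assert(get_function(5, false) == 0x00005);
	assert(get_function(5, true)  == 0x10005);

	/* the old "func_id & (ec_function << 16)" variant: always 0,
	 * since the low 16 bits never overlap bit 16 */
	assert((5 & (1 << 16)) == 0);
	return 0;
}
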
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 762cabf16157b..75f774347f6d1 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4082,17 +4082,72 @@ err_out:
+ return -EIO;
+ }
+
+-static bool rtl_test_hw_pad_bug(struct rtl8169_private *tp)
++static bool rtl_skb_is_udp(struct sk_buff *skb)
++{
++ int no = skb_network_offset(skb);
++ struct ipv6hdr *i6h, _i6h;
++ struct iphdr *ih, _ih;
++
++ switch (vlan_get_protocol(skb)) {
++ case htons(ETH_P_IP):
++ ih = skb_header_pointer(skb, no, sizeof(_ih), &_ih);
++ return ih && ih->protocol == IPPROTO_UDP;
++ case htons(ETH_P_IPV6):
++ i6h = skb_header_pointer(skb, no, sizeof(_i6h), &_i6h);
++ return i6h && i6h->nexthdr == IPPROTO_UDP;
++ default:
++ return false;
++ }
++}
++
++#define RTL_MIN_PATCH_LEN 47
++
++/* see rtl8125_get_patch_pad_len() in r8125 vendor driver */
++static unsigned int rtl8125_quirk_udp_padto(struct rtl8169_private *tp,
++ struct sk_buff *skb)
+ {
++ unsigned int padto = 0, len = skb->len;
++
++ if (rtl_is_8125(tp) && len < 128 + RTL_MIN_PATCH_LEN &&
++ rtl_skb_is_udp(skb) && skb_transport_header_was_set(skb)) {
++ unsigned int trans_data_len = skb_tail_pointer(skb) -
++ skb_transport_header(skb);
++
++ if (trans_data_len >= offsetof(struct udphdr, len) &&
++ trans_data_len < RTL_MIN_PATCH_LEN) {
++ u16 dest = ntohs(udp_hdr(skb)->dest);
++
++ /* dest is a standard PTP port */
++ if (dest == 319 || dest == 320)
++ padto = len + RTL_MIN_PATCH_LEN - trans_data_len;
++ }
++
++ if (trans_data_len < sizeof(struct udphdr))
++ padto = max_t(unsigned int, padto,
++ len + sizeof(struct udphdr) - trans_data_len);
++ }
++
++ return padto;
++}
++
++static unsigned int rtl_quirk_packet_padto(struct rtl8169_private *tp,
++ struct sk_buff *skb)
++{
++ unsigned int padto;
++
++ padto = rtl8125_quirk_udp_padto(tp, skb);
++
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_34:
+ case RTL_GIGA_MAC_VER_60:
+ case RTL_GIGA_MAC_VER_61:
+ case RTL_GIGA_MAC_VER_63:
+- return true;
++ padto = max_t(unsigned int, padto, ETH_ZLEN);
+ default:
+- return false;
++ break;
+ }
++
++ return padto;
+ }
+
+ static void rtl8169_tso_csum_v1(struct sk_buff *skb, u32 *opts)
+@@ -4164,9 +4219,10 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
+
+ opts[1] |= transport_offset << TCPHO_SHIFT;
+ } else {
+- if (unlikely(skb->len < ETH_ZLEN && rtl_test_hw_pad_bug(tp)))
+- /* eth_skb_pad would free the skb on error */
+- return !__skb_put_padto(skb, ETH_ZLEN, false);
++ unsigned int padto = rtl_quirk_packet_padto(tp, skb);
++
++ /* skb_padto would free the skb on error */
++ return !__skb_put_padto(skb, padto, false);
+ }
+
+ return true;
+@@ -4349,6 +4405,9 @@ static netdev_features_t rtl8169_features_check(struct sk_buff *skb,
+ if (skb->len < ETH_ZLEN)
+ features &= ~NETIF_F_CSUM_MASK;
+
++ if (rtl_quirk_packet_padto(tp, skb))
++ features &= ~NETIF_F_CSUM_MASK;
++
+ if (transport_offset > TCPHO_MAX &&
+ rtl_chip_supports_csum_v2(tp))
+ features &= ~NETIF_F_CSUM_MASK;
+@@ -4694,10 +4753,10 @@ static int rtl8169_close(struct net_device *dev)
+
+ cancel_work_sync(&tp->wk.work);
+
+- phy_disconnect(tp->phydev);
+-
+ free_irq(pci_irq_vector(pdev, 0), tp);
+
++ phy_disconnect(tp->phydev);
++
+ dma_free_coherent(&pdev->dev, R8169_RX_RING_BYTES, tp->RxDescArray,
+ tp->RxPhyAddr);
+ dma_free_coherent(&pdev->dev, R8169_TX_RING_BYTES, tp->TxDescArray,
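
The r8169 changes replace the old boolean pad-to-ETH_ZLEN quirk with per-packet pad targets: on RTL8125, short UDP frames addressed to the PTP ports 319/320 must carry at least RTL_MIN_PATCH_LEN bytes after the transport header, and any frame shorter than a full UDP header is padded up to one. The PTP branch's arithmetic in isolation, with the UDP/port/length gating omitted (helper name invented, constant copied from the hunk):

#include <assert.h>

#define RTL_MIN_PATCH_LEN 47	/* copied from the hunk above */

/* trans_data_len: bytes present from the UDP header to the end of the
 * frame. Returns the length to pad the frame to, or 0 for no padding.
 * The real helper additionally checks that the frame is small, UDP,
 * and destined to PTP port 319/320 before applying this.
 */
static unsigned int pad_target(unsigned int len, unsigned int trans_data_len)
{
	if (trans_data_len < RTL_MIN_PATCH_LEN)
		return len + RTL_MIN_PATCH_LEN - trans_data_len;
	return 0;
}

int main(void)
{
	/* a 60-byte frame with 20 bytes after the UDP header start is
	 * padded to 60 + 47 - 20 = 87 bytes */
	assert(pad_target(60, 20) == 87);
	assert(pad_target(200, 47) == 0);	/* long enough already */
	return 0;
}
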
+diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
+index 6bfac1efe037c..4a68da7115d19 100644
+--- a/drivers/net/ipa/gsi.c
++++ b/drivers/net/ipa/gsi.c
+@@ -1256,7 +1256,7 @@ static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
+ /* Hardware requires a 2^n ring size, with alignment equal to size */
+ ring->virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+ if (ring->virt && addr % size) {
+- dma_free_coherent(dev, size, ring->virt, ring->addr);
++ dma_free_coherent(dev, size, ring->virt, addr);
+ dev_err(dev, "unable to alloc 0x%zx-aligned ring buffer\n",
+ size);
+ return -EINVAL; /* Not a good error value, but distinct */
+diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
+index b59032e0859b7..9d208570d059a 100644
+--- a/drivers/nvdimm/dimm_devs.c
++++ b/drivers/nvdimm/dimm_devs.c
+@@ -335,16 +335,16 @@ static ssize_t state_show(struct device *dev, struct device_attribute *attr,
+ }
+ static DEVICE_ATTR_RO(state);
+
+-static ssize_t available_slots_show(struct device *dev,
+- struct device_attribute *attr, char *buf)
++static ssize_t __available_slots_show(struct nvdimm_drvdata *ndd, char *buf)
+ {
+- struct nvdimm_drvdata *ndd = dev_get_drvdata(dev);
++ struct device *dev;
+ ssize_t rc;
+ u32 nfree;
+
+ if (!ndd)
+ return -ENXIO;
+
++ dev = ndd->dev;
+ nvdimm_bus_lock(dev);
+ nfree = nd_label_nfree(ndd);
+ if (nfree - 1 > nfree) {
+@@ -356,6 +356,18 @@ static ssize_t available_slots_show(struct device *dev,
+ nvdimm_bus_unlock(dev);
+ return rc;
+ }
++
++static ssize_t available_slots_show(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ ssize_t rc;
++
++ nd_device_lock(dev);
++ rc = __available_slots_show(dev_get_drvdata(dev), buf);
++ nd_device_unlock(dev);
++
++ return rc;
++}
+ static DEVICE_ATTR_RO(available_slots);
+
+ __weak ssize_t security_show(struct device *dev,
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 6da67f4d641a2..2403b71b601e9 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1635,11 +1635,11 @@ static umode_t namespace_visible(struct kobject *kobj,
+ return a->mode;
+ }
+
+- if (a == &dev_attr_nstype.attr || a == &dev_attr_size.attr
+- || a == &dev_attr_holder.attr
+- || a == &dev_attr_holder_class.attr
+- || a == &dev_attr_force_raw.attr
+- || a == &dev_attr_mode.attr)
++ /* base is_namespace_io() attributes */
++ if (a == &dev_attr_nstype.attr || a == &dev_attr_size.attr ||
++ a == &dev_attr_holder.attr || a == &dev_attr_holder_class.attr ||
++ a == &dev_attr_force_raw.attr || a == &dev_attr_mode.attr ||
++ a == &dev_attr_resource.attr)
+ return a->mode;
+
+ return 0;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index a3486c1c27f0c..a32494cde61f7 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -3262,6 +3262,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE(0x15b7, 0x2001), /* Sandisk Skyhawk */
+ .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
++ { PCI_DEVICE(0x2646, 0x2263), /* KINGSTON A2000 NVMe SSD */
++ .driver_data = NVME_QUIRK_NO_DEEPEST_PS, },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2001),
+ .driver_data = NVME_QUIRK_SINGLE_VECTOR },
+ { PCI_DEVICE(PCI_VENDOR_ID_APPLE, 0x2003) },
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index dc1f0f6471896..aacf06f0b4312 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -305,7 +305,7 @@ static void nvmet_tcp_map_pdu_iovec(struct nvmet_tcp_cmd *cmd)
+ length = cmd->pdu_len;
+ cmd->nr_mapped = DIV_ROUND_UP(length, PAGE_SIZE);
+ offset = cmd->rbytes_done;
+- cmd->sg_idx = DIV_ROUND_UP(offset, PAGE_SIZE);
++ cmd->sg_idx = offset / PAGE_SIZE;
+ sg_offset = offset % PAGE_SIZE;
+ sg = &cmd->req.sg[cmd->sg_idx];
+
+@@ -318,6 +318,7 @@ static void nvmet_tcp_map_pdu_iovec(struct nvmet_tcp_cmd *cmd)
+ length -= iov_len;
+ sg = sg_next(sg);
+ iov++;
++ sg_offset = 0;
+ }
+
+ iov_iter_kvec(&cmd->recv_msg.msg_iter, READ, cmd->iov,
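
The nvmet-tcp fix corrects how a resume offset is translated into a scatterlist position: the element index must round down (offset / PAGE_SIZE) while the remainder becomes the offset within that first element, and the added "sg_offset = 0" keeps the remainder from being applied to the following elements as well. The index arithmetic, checked standalone:

#include <assert.h>

#define PAGE_SIZE 4096u

int main(void)
{
	unsigned int offset = 5000;	/* bytes already consumed */

	/* fixed: index rounds down, remainder is the in-page offset */
	assert(offset / PAGE_SIZE == 1);
	assert(offset % PAGE_SIZE == 904);

	/* the old DIV_ROUND_UP() overshoots whenever the offset is not
	 * page-aligned, skipping the partially consumed page */
	assert((offset + PAGE_SIZE - 1) / PAGE_SIZE == 2);
	return 0;
}
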
+diff --git a/drivers/thunderbolt/acpi.c b/drivers/thunderbolt/acpi.c
+index a5f988a9f9482..b5442f979b4d0 100644
+--- a/drivers/thunderbolt/acpi.c
++++ b/drivers/thunderbolt/acpi.c
+@@ -56,7 +56,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
+ * managed with the xHCI and the SuperSpeed hub so we create the
+ * link from xHCI instead.
+ */
+- while (!dev_is_pci(dev))
++ while (dev && !dev_is_pci(dev))
+ dev = dev->parent;
+
+ if (!dev)
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 134dc2005ce97..c9f6e97582885 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -1329,14 +1329,17 @@ static int usblp_set_protocol(struct usblp *usblp, int protocol)
+ if (protocol < USBLP_FIRST_PROTOCOL || protocol > USBLP_LAST_PROTOCOL)
+ return -EINVAL;
+
+- alts = usblp->protocol[protocol].alt_setting;
+- if (alts < 0)
+- return -EINVAL;
+- r = usb_set_interface(usblp->dev, usblp->ifnum, alts);
+- if (r < 0) {
+- printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
+- alts, usblp->ifnum);
+- return r;
++ /* Don't unnecessarily set the interface if there's a single alt. */
++ if (usblp->intf->num_altsetting > 1) {
++ alts = usblp->protocol[protocol].alt_setting;
++ if (alts < 0)
++ return -EINVAL;
++ r = usb_set_interface(usblp->dev, usblp->ifnum, alts);
++ if (r < 0) {
++ printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
++ alts, usblp->ifnum);
++ return r;
++ }
+ }
+
+ usblp->bidir = (usblp->protocol[protocol].epread != NULL);
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index 0a0d11151cfb8..ad4c94366dadf 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -1543,7 +1543,6 @@ static void dwc2_hsotg_complete_oursetup(struct usb_ep *ep,
+ static struct dwc2_hsotg_ep *ep_from_windex(struct dwc2_hsotg *hsotg,
+ u32 windex)
+ {
+- struct dwc2_hsotg_ep *ep;
+ int dir = (windex & USB_DIR_IN) ? 1 : 0;
+ int idx = windex & 0x7F;
+
+@@ -1553,12 +1552,7 @@ static struct dwc2_hsotg_ep *ep_from_windex(struct dwc2_hsotg *hsotg,
+ if (idx > hsotg->num_of_eps)
+ return NULL;
+
+- ep = index_to_ep(hsotg, idx, dir);
+-
+- if (idx && ep->dir_in != dir)
+- return NULL;
+-
+- return ep;
++ return index_to_ep(hsotg, idx, dir);
+ }
+
+ /**
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 841daec70b6ef..3101f0dcf6ae8 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1758,7 +1758,7 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ if (PMSG_IS_AUTO(msg))
+ break;
+
+- ret = dwc3_core_init(dwc);
++ ret = dwc3_core_init_for_resume(dwc);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/usb/gadget/legacy/ether.c b/drivers/usb/gadget/legacy/ether.c
+index 30313b233680d..99c7fc0d1d597 100644
+--- a/drivers/usb/gadget/legacy/ether.c
++++ b/drivers/usb/gadget/legacy/ether.c
+@@ -403,8 +403,10 @@ static int eth_bind(struct usb_composite_dev *cdev)
+ struct usb_descriptor_header *usb_desc;
+
+ usb_desc = usb_otg_descriptor_alloc(gadget);
+- if (!usb_desc)
++ if (!usb_desc) {
++ status = -ENOMEM;
+ goto fail1;
++ }
+ usb_otg_descriptor_init(gadget, usb_desc);
+ otg_desc[0] = usb_desc;
+ otg_desc[1] = NULL;
+diff --git a/drivers/usb/gadget/udc/aspeed-vhub/hub.c b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+index 6497185ec4e7a..bfd8e77788e29 100644
+--- a/drivers/usb/gadget/udc/aspeed-vhub/hub.c
++++ b/drivers/usb/gadget/udc/aspeed-vhub/hub.c
+@@ -999,8 +999,10 @@ static int ast_vhub_of_parse_str_desc(struct ast_vhub *vhub,
+ str_array[offset].s = NULL;
+
+ ret = ast_vhub_str_alloc_add(vhub, &lang_str);
+- if (ret)
++ if (ret) {
++ of_node_put(child);
+ break;
++ }
+ }
+
+ return ret;
+diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
+index 45c54d56ecbd5..b45e5bf089979 100644
+--- a/drivers/usb/host/xhci-mtk-sch.c
++++ b/drivers/usb/host/xhci-mtk-sch.c
+@@ -200,6 +200,8 @@ static struct mu3h_sch_ep_info *create_sch_ep(struct usb_device *udev,
+
+ sch_ep->sch_tt = tt;
+ sch_ep->ep = ep;
++ INIT_LIST_HEAD(&sch_ep->endpoint);
++ INIT_LIST_HEAD(&sch_ep->tt_endpoint);
+
+ return sch_ep;
+ }
+@@ -373,6 +375,7 @@ static void update_bus_bw(struct mu3h_sch_bw_info *sch_bw,
+ sch_ep->bw_budget_table[j];
+ }
+ }
++ sch_ep->allocated = used;
+ }
+
+ static int check_sch_tt(struct usb_device *udev,
+@@ -541,6 +544,22 @@ static int check_sch_bw(struct usb_device *udev,
+ return 0;
+ }
+
++static void destroy_sch_ep(struct usb_device *udev,
++ struct mu3h_sch_bw_info *sch_bw, struct mu3h_sch_ep_info *sch_ep)
++{
++ /* only release ep bw check passed by check_sch_bw() */
++ if (sch_ep->allocated)
++ update_bus_bw(sch_bw, sch_ep, 0);
++
++ list_del(&sch_ep->endpoint);
++
++ if (sch_ep->sch_tt) {
++ list_del(&sch_ep->tt_endpoint);
++ drop_tt(udev);
++ }
++ kfree(sch_ep);
++}
++
+ static bool need_bw_sch(struct usb_host_endpoint *ep,
+ enum usb_device_speed speed, int has_tt)
+ {
+@@ -583,6 +602,8 @@ int xhci_mtk_sch_init(struct xhci_hcd_mtk *mtk)
+
+ mtk->sch_array = sch_array;
+
++ INIT_LIST_HEAD(&mtk->bw_ep_chk_list);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(xhci_mtk_sch_init);
+@@ -601,19 +622,14 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ struct xhci_ep_ctx *ep_ctx;
+ struct xhci_slot_ctx *slot_ctx;
+ struct xhci_virt_device *virt_dev;
+- struct mu3h_sch_bw_info *sch_bw;
+ struct mu3h_sch_ep_info *sch_ep;
+- struct mu3h_sch_bw_info *sch_array;
+ unsigned int ep_index;
+- int bw_index;
+- int ret = 0;
+
+ xhci = hcd_to_xhci(hcd);
+ virt_dev = xhci->devs[udev->slot_id];
+ ep_index = xhci_get_endpoint_index(&ep->desc);
+ slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->in_ctx);
+ ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+- sch_array = mtk->sch_array;
+
+ xhci_dbg(xhci, "%s() type:%d, speed:%d, mpkt:%d, dir:%d, ep:%p\n",
+ __func__, usb_endpoint_type(&ep->desc), udev->speed,
+@@ -632,35 +648,13 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ return 0;
+ }
+
+- bw_index = get_bw_index(xhci, udev, ep);
+- sch_bw = &sch_array[bw_index];
+-
+ sch_ep = create_sch_ep(udev, ep, ep_ctx);
+ if (IS_ERR_OR_NULL(sch_ep))
+ return -ENOMEM;
+
+ setup_sch_info(udev, ep_ctx, sch_ep);
+
+- ret = check_sch_bw(udev, sch_bw, sch_ep);
+- if (ret) {
+- xhci_err(xhci, "Not enough bandwidth!\n");
+- if (is_fs_or_ls(udev->speed))
+- drop_tt(udev);
+-
+- kfree(sch_ep);
+- return -ENOSPC;
+- }
+-
+- list_add_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+-
+- ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+- | EP_BCSCOUNT(sch_ep->cs_count) | EP_BBM(sch_ep->burst_mode));
+- ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
+- | EP_BREPEAT(sch_ep->repeat));
+-
+- xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
+- sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
+- sch_ep->offset, sch_ep->repeat);
++ list_add_tail(&sch_ep->endpoint, &mtk->bw_ep_chk_list);
+
+ return 0;
+ }
+@@ -675,7 +669,7 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ struct xhci_virt_device *virt_dev;
+ struct mu3h_sch_bw_info *sch_array;
+ struct mu3h_sch_bw_info *sch_bw;
+- struct mu3h_sch_ep_info *sch_ep;
++ struct mu3h_sch_ep_info *sch_ep, *tmp;
+ int bw_index;
+
+ xhci = hcd_to_xhci(hcd);
+@@ -694,17 +688,79 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ bw_index = get_bw_index(xhci, udev, ep);
+ sch_bw = &sch_array[bw_index];
+
+- list_for_each_entry(sch_ep, &sch_bw->bw_ep_list, endpoint) {
++ list_for_each_entry_safe(sch_ep, tmp, &sch_bw->bw_ep_list, endpoint) {
+ if (sch_ep->ep == ep) {
+- update_bus_bw(sch_bw, sch_ep, 0);
+- list_del(&sch_ep->endpoint);
+- if (is_fs_or_ls(udev->speed)) {
+- list_del(&sch_ep->tt_endpoint);
+- drop_tt(udev);
+- }
+- kfree(sch_ep);
++ destroy_sch_ep(udev, sch_bw, sch_ep);
+ break;
+ }
+ }
+ }
+ EXPORT_SYMBOL_GPL(xhci_mtk_drop_ep_quirk);
++
++int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++{
++ struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
++ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++ struct xhci_virt_device *virt_dev = xhci->devs[udev->slot_id];
++ struct mu3h_sch_bw_info *sch_bw;
++ struct mu3h_sch_ep_info *sch_ep, *tmp;
++ int bw_index, ret;
++
++ xhci_dbg(xhci, "%s() udev %s\n", __func__, dev_name(&udev->dev));
++
++ list_for_each_entry(sch_ep, &mtk->bw_ep_chk_list, endpoint) {
++ bw_index = get_bw_index(xhci, udev, sch_ep->ep);
++ sch_bw = &mtk->sch_array[bw_index];
++
++ ret = check_sch_bw(udev, sch_bw, sch_ep);
++ if (ret) {
++ xhci_err(xhci, "Not enough bandwidth!\n");
++ return -ENOSPC;
++ }
++ }
++
++ list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_chk_list, endpoint) {
++ struct xhci_ep_ctx *ep_ctx;
++ struct usb_host_endpoint *ep = sch_ep->ep;
++ unsigned int ep_index = xhci_get_endpoint_index(&ep->desc);
++
++ bw_index = get_bw_index(xhci, udev, ep);
++ sch_bw = &mtk->sch_array[bw_index];
++
++ list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
++
++ ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
++ ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
++ | EP_BCSCOUNT(sch_ep->cs_count)
++ | EP_BBM(sch_ep->burst_mode));
++ ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
++ | EP_BREPEAT(sch_ep->repeat));
++
++ xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
++ sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
++ sch_ep->offset, sch_ep->repeat);
++ }
++
++ return xhci_check_bandwidth(hcd, udev);
++}
++EXPORT_SYMBOL_GPL(xhci_mtk_check_bandwidth);
++
++void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++{
++ struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
++ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++ struct mu3h_sch_bw_info *sch_bw;
++ struct mu3h_sch_ep_info *sch_ep, *tmp;
++ int bw_index;
++
++ xhci_dbg(xhci, "%s() udev %s\n", __func__, dev_name(&udev->dev));
++
++ list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_chk_list, endpoint) {
++ bw_index = get_bw_index(xhci, udev, sch_ep->ep);
++ sch_bw = &mtk->sch_array[bw_index];
++ destroy_sch_ep(udev, sch_bw, sch_ep);
++ }
++
++ xhci_reset_bandwidth(hcd, udev);
++}
++EXPORT_SYMBOL_GPL(xhci_mtk_reset_bandwidth);
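
The xhci-mtk rework converts bandwidth allocation into a two-phase scheme: add_ep only stages endpoints on bw_ep_chk_list, xhci_mtk_check_bandwidth() validates the whole set and then moves it onto the committed per-bus lists, and xhci_mtk_reset_bandwidth() (invoked by the core when the check fails) tears the staging list down. The control flow reduced to a toy standalone model (names and the flat arrays are invented; this is not the driver's data model):

#include <assert.h>
#include <stdbool.h>

#define MAX_EPS 4

struct sched {
	int budget;			/* remaining bus bandwidth */
	int pending[MAX_EPS], npending;	/* bw_ep_chk_list analogue */
	int committed[MAX_EPS], ncommitted;
};

static void stage(struct sched *s, int cost)	/* add_ep: stage only */
{
	s->pending[s->npending++] = cost;
}

static bool check_and_commit(struct sched *s)	/* check_bandwidth */
{
	int need = 0;

	for (int i = 0; i < s->npending; i++)
		need += s->pending[i];
	if (need > s->budget) {
		s->npending = 0;		/* reset_bandwidth path */
		return false;
	}
	for (int i = 0; i < s->npending; i++)
		s->committed[s->ncommitted++] = s->pending[i];
	s->budget -= need;
	s->npending = 0;
	return true;
}

int main(void)
{
	struct sched s = { .budget = 10 };

	stage(&s, 4);
	stage(&s, 5);
	assert(check_and_commit(&s) && s.budget == 1);

	stage(&s, 4);				/* would overcommit */
	assert(!check_and_commit(&s) && s.budget == 1);
	return 0;
}
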
+diff --git a/drivers/usb/host/xhci-mtk.c b/drivers/usb/host/xhci-mtk.c
+index 8f321f39ab960..fe010cc61f19b 100644
+--- a/drivers/usb/host/xhci-mtk.c
++++ b/drivers/usb/host/xhci-mtk.c
+@@ -347,6 +347,8 @@ static void usb_wakeup_set(struct xhci_hcd_mtk *mtk, bool enable)
+ static int xhci_mtk_setup(struct usb_hcd *hcd);
+ static const struct xhci_driver_overrides xhci_mtk_overrides __initconst = {
+ .reset = xhci_mtk_setup,
++ .check_bandwidth = xhci_mtk_check_bandwidth,
++ .reset_bandwidth = xhci_mtk_reset_bandwidth,
+ };
+
+ static struct hc_driver __read_mostly xhci_mtk_hc_driver;
+diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
+index a93cfe8179049..cbb09dfea62e0 100644
+--- a/drivers/usb/host/xhci-mtk.h
++++ b/drivers/usb/host/xhci-mtk.h
+@@ -59,6 +59,7 @@ struct mu3h_sch_bw_info {
+ * @ep_type: endpoint type
+ * @maxpkt: max packet size of endpoint
+ * @ep: address of usb_host_endpoint struct
++ * @allocated: the bandwidth is already allocated from bus_bw
+ * @offset: which uframe of the interval that transfer should be
+ * scheduled first time within the interval
+ * @repeat: the time gap between two uframes that transfers are
+@@ -86,6 +87,7 @@ struct mu3h_sch_ep_info {
+ u32 ep_type;
+ u32 maxpkt;
+ void *ep;
++ bool allocated;
+ /*
+ * mtk xHCI scheduling information put into reserved DWs
+ * in ep context
+@@ -131,6 +133,7 @@ struct xhci_hcd_mtk {
+ struct device *dev;
+ struct usb_hcd *hcd;
+ struct mu3h_sch_bw_info *sch_array;
++ struct list_head bw_ep_chk_list;
+ struct mu3c_ippc_regs __iomem *ippc_regs;
+ bool has_ippc;
+ int num_u2_ports;
+@@ -166,6 +169,8 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ struct usb_host_endpoint *ep);
+ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
+ struct usb_host_endpoint *ep);
++int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
++void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+
+ #else
+ static inline int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd,
+@@ -179,6 +184,16 @@ static inline void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd,
+ {
+ }
+
++static inline int xhci_mtk_check_bandwidth(struct usb_hcd *hcd,
++ struct usb_device *udev)
++{
++ return 0;
++}
++
++static inline void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd,
++ struct usb_device *udev)
++{
++}
+ #endif
+
+ #endif /* _XHCI_MTK_H_ */
+diff --git a/drivers/usb/host/xhci-mvebu.c b/drivers/usb/host/xhci-mvebu.c
+index 60651a50770f9..8ca1a235d1645 100644
+--- a/drivers/usb/host/xhci-mvebu.c
++++ b/drivers/usb/host/xhci-mvebu.c
+@@ -8,6 +8,7 @@
+ #include <linux/mbus.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
++#include <linux/phy/phy.h>
+
+ #include <linux/usb.h>
+ #include <linux/usb/hcd.h>
+@@ -74,6 +75,47 @@ int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+ return 0;
+ }
+
++int xhci_mvebu_a3700_plat_setup(struct usb_hcd *hcd)
++{
++ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
++ struct device *dev = hcd->self.controller;
++ struct phy *phy;
++ int ret;
++
++ /* Old bindings miss the PHY handle */
++ phy = of_phy_get(dev->of_node, "usb3-phy");
++ if (IS_ERR(phy) && PTR_ERR(phy) == -EPROBE_DEFER)
++ return -EPROBE_DEFER;
++ else if (IS_ERR(phy))
++ goto phy_out;
++
++ ret = phy_init(phy);
++ if (ret)
++ goto phy_put;
++
++ ret = phy_set_mode(phy, PHY_MODE_USB_HOST_SS);
++ if (ret)
++ goto phy_exit;
++
++ ret = phy_power_on(phy);
++ if (ret == -EOPNOTSUPP) {
++ /* Skip initialization of XHCI PHY when it is unsupported by firmware */
++ dev_warn(dev, "PHY unsupported by firmware\n");
++ xhci->quirks |= XHCI_SKIP_PHY_INIT;
++ }
++ if (ret)
++ goto phy_exit;
++
++ phy_power_off(phy);
++phy_exit:
++ phy_exit(phy);
++phy_put:
++ of_phy_put(phy);
++phy_out:
++
++ return 0;
++}
++
+ int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd)
+ {
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+diff --git a/drivers/usb/host/xhci-mvebu.h b/drivers/usb/host/xhci-mvebu.h
+index 3be021793cc8b..01bf3fcb3eca5 100644
+--- a/drivers/usb/host/xhci-mvebu.h
++++ b/drivers/usb/host/xhci-mvebu.h
+@@ -12,6 +12,7 @@ struct usb_hcd;
+
+ #if IS_ENABLED(CONFIG_USB_XHCI_MVEBU)
+ int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd);
++int xhci_mvebu_a3700_plat_setup(struct usb_hcd *hcd);
+ int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd);
+ #else
+ static inline int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+@@ -19,6 +20,11 @@ static inline int xhci_mvebu_mbus_init_quirk(struct usb_hcd *hcd)
+ return 0;
+ }
+
++static inline int xhci_mvebu_a3700_plat_setup(struct usb_hcd *hcd)
++{
++ return 0;
++}
++
+ static inline int xhci_mvebu_a3700_init_quirk(struct usb_hcd *hcd)
+ {
+ return 0;
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 4d34f6005381e..c1edcc9b13cec 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -44,6 +44,16 @@ static void xhci_priv_plat_start(struct usb_hcd *hcd)
+ priv->plat_start(hcd);
+ }
+
++static int xhci_priv_plat_setup(struct usb_hcd *hcd)
++{
++ struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);
++
++ if (!priv->plat_setup)
++ return 0;
++
++ return priv->plat_setup(hcd);
++}
++
+ static int xhci_priv_init_quirk(struct usb_hcd *hcd)
+ {
+ struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);
+@@ -111,6 +121,7 @@ static const struct xhci_plat_priv xhci_plat_marvell_armada = {
+ };
+
+ static const struct xhci_plat_priv xhci_plat_marvell_armada3700 = {
++ .plat_setup = xhci_mvebu_a3700_plat_setup,
+ .init_quirk = xhci_mvebu_a3700_init_quirk,
+ };
+
+@@ -330,7 +341,14 @@ static int xhci_plat_probe(struct platform_device *pdev)
+
+ hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node);
+ xhci->shared_hcd->tpl_support = hcd->tpl_support;
+- if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT))
++
++ if (priv) {
++ ret = xhci_priv_plat_setup(hcd);
++ if (ret)
++ goto disable_usb_phy;
++ }
++
++ if ((xhci->quirks & XHCI_SKIP_PHY_INIT) || (priv && (priv->quirks & XHCI_SKIP_PHY_INIT)))
+ hcd->skip_phy_initialization = 1;
+
+ if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
+diff --git a/drivers/usb/host/xhci-plat.h b/drivers/usb/host/xhci-plat.h
+index 1fb149d1fbcea..561d0b7bce098 100644
+--- a/drivers/usb/host/xhci-plat.h
++++ b/drivers/usb/host/xhci-plat.h
+@@ -13,6 +13,7 @@
+ struct xhci_plat_priv {
+ const char *firmware_name;
+ unsigned long long quirks;
++ int (*plat_setup)(struct usb_hcd *);
+ void (*plat_start)(struct usb_hcd *);
+ int (*init_quirk)(struct usb_hcd *);
+ int (*suspend_quirk)(struct usb_hcd *);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index db8612ec82d3e..061d5c51405fb 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -699,11 +699,16 @@ static void xhci_unmap_td_bounce_buffer(struct xhci_hcd *xhci,
+ dma_unmap_single(dev, seg->bounce_dma, ring->bounce_buf_len,
+ DMA_FROM_DEVICE);
+ /* for in transfers we need to copy the data from bounce to sg */
+- len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
+- seg->bounce_len, seg->bounce_offs);
+- if (len != seg->bounce_len)
+- xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
+- len, seg->bounce_len);
++ if (urb->num_sgs) {
++ len = sg_pcopy_from_buffer(urb->sg, urb->num_sgs, seg->bounce_buf,
++ seg->bounce_len, seg->bounce_offs);
++ if (len != seg->bounce_len)
++ xhci_warn(xhci, "WARN Wrong bounce buffer read length: %zu != %d\n",
++ len, seg->bounce_len);
++ } else {
++ memcpy(urb->transfer_buffer + seg->bounce_offs, seg->bounce_buf,
++ seg->bounce_len);
++ }
+ seg->bounce_len = 0;
+ seg->bounce_offs = 0;
+ }
+@@ -3275,12 +3280,16 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
+
+ /* create a max max_pkt sized bounce buffer pointed to by last trb */
+ if (usb_urb_dir_out(urb)) {
+- len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
+- seg->bounce_buf, new_buff_len, enqd_len);
+- if (len != new_buff_len)
+- xhci_warn(xhci,
+- "WARN Wrong bounce buffer write length: %zu != %d\n",
+- len, new_buff_len);
++ if (urb->num_sgs) {
++ len = sg_pcopy_to_buffer(urb->sg, urb->num_sgs,
++ seg->bounce_buf, new_buff_len, enqd_len);
++ if (len != new_buff_len)
++ xhci_warn(xhci, "WARN Wrong bounce buffer write length: %zu != %d\n",
++ len, new_buff_len);
++ } else {
++ memcpy(seg->bounce_buf, urb->transfer_buffer + enqd_len, new_buff_len);
++ }
++
+ seg->bounce_dma = dma_map_single(dev, seg->bounce_buf,
+ max_pkt, DMA_TO_DEVICE);
+ } else {
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 73f1373d517a2..d17bbb162810a 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -2861,7 +2861,7 @@ static void xhci_check_bw_drop_ep_streams(struct xhci_hcd *xhci,
+ * else should be touching the xhci->devs[slot_id] structure, so we
+ * don't need to take the xhci->lock for manipulating that.
+ */
+-static int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ {
+ int i;
+ int ret = 0;
+@@ -2959,7 +2959,7 @@ command_cleanup:
+ return ret;
+ }
+
+-static void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
++void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+ {
+ struct xhci_hcd *xhci;
+ struct xhci_virt_device *virt_dev;
+@@ -5385,6 +5385,10 @@ void xhci_init_driver(struct hc_driver *drv,
+ drv->reset = over->reset;
+ if (over->start)
+ drv->start = over->start;
++ if (over->check_bandwidth)
++ drv->check_bandwidth = over->check_bandwidth;
++ if (over->reset_bandwidth)
++ drv->reset_bandwidth = over->reset_bandwidth;
+ }
+ }
+ EXPORT_SYMBOL_GPL(xhci_init_driver);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index d90c0d5df3b37..045740ad9c1ec 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1916,6 +1916,8 @@ struct xhci_driver_overrides {
+ size_t extra_priv_size;
+ int (*reset)(struct usb_hcd *hcd);
+ int (*start)(struct usb_hcd *hcd);
++ int (*check_bandwidth)(struct usb_hcd *, struct usb_device *);
++ void (*reset_bandwidth)(struct usb_hcd *, struct usb_device *);
+ };
+
+ #define XHCI_CFC_DELAY 10
+@@ -2070,6 +2072,8 @@ int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks);
+ void xhci_shutdown(struct usb_hcd *hcd);
+ void xhci_init_driver(struct hc_driver *drv,
+ const struct xhci_driver_overrides *over);
++int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
++void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id);
+ int xhci_ext_cap_init(struct xhci_hcd *xhci);
+
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index ac9a81ae82164..e6fa137018082 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -126,6 +126,7 @@ struct usbhs_pkt *usbhs_pkt_pop(struct usbhs_pipe *pipe, struct usbhs_pkt *pkt)
+ }
+
+ usbhs_pipe_clear_without_sequence(pipe, 0, 0);
++ usbhs_pipe_running(pipe, 0);
+
+ __usbhsf_pkt_del(pkt);
+ }
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index d0c05aa8a0d6e..bf11f86896837 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -64,6 +64,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
+ { USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */
+ { USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */
++ { USB_DEVICE(0x0988, 0x0578) }, /* Teraoka AD2000 */
+ { USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */
+ { USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */
+ { USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */
+@@ -204,6 +205,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x1901, 0x0194) }, /* GE Healthcare Remote Alarm Box */
+ { USB_DEVICE(0x1901, 0x0195) }, /* GE B850/B650/B450 CP2104 DP UART interface */
+ { USB_DEVICE(0x1901, 0x0196) }, /* GE B850 CP2105 DP UART interface */
++ { USB_DEVICE(0x199B, 0xBA30) }, /* LORD WSDA-200-USB */
+ { USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 3fe959104311b..2049e66f34a3f 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -425,6 +425,8 @@ static void option_instat_callback(struct urb *urb);
+ #define CINTERION_PRODUCT_AHXX_2RMNET 0x0084
+ #define CINTERION_PRODUCT_AHXX_AUDIO 0x0085
+ #define CINTERION_PRODUCT_CLS8 0x00b0
++#define CINTERION_PRODUCT_MV31_MBIM 0x00b3
++#define CINTERION_PRODUCT_MV31_RMNET 0x00b7
+
+ /* Olivetti products */
+ #define OLIVETTI_VENDOR_ID 0x0b3c
+@@ -1914,6 +1916,10 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) },
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */
+ { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) },
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_MBIM, 0xff),
++ .driver_info = RSVD(3)},
++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff),
++ .driver_info = RSVD(0)},
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100),
+ .driver_info = RSVD(4) },
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120),
+diff --git a/drivers/vdpa/mlx5/core/mlx5_vdpa.h b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+index 5c92a576edae8..08f742fd24099 100644
+--- a/drivers/vdpa/mlx5/core/mlx5_vdpa.h
++++ b/drivers/vdpa/mlx5/core/mlx5_vdpa.h
+@@ -15,6 +15,7 @@ struct mlx5_vdpa_direct_mr {
+ struct sg_table sg_head;
+ int log_size;
+ int nsg;
++ int nent;
+ struct list_head list;
+ u64 offset;
+ };
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 4b6195666c589..d300f799efcd1 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -25,17 +25,6 @@ static int get_octo_len(u64 len, int page_shift)
+ return (npages + 1) / 2;
+ }
+
+-static void fill_sg(struct mlx5_vdpa_direct_mr *mr, void *in)
+-{
+- struct scatterlist *sg;
+- __be64 *pas;
+- int i;
+-
+- pas = MLX5_ADDR_OF(create_mkey_in, in, klm_pas_mtt);
+- for_each_sg(mr->sg_head.sgl, sg, mr->nsg, i)
+- (*pas) = cpu_to_be64(sg_dma_address(sg));
+-}
+-
+ static void mlx5_set_access_mode(void *mkc, int mode)
+ {
+ MLX5_SET(mkc, mkc, access_mode_1_0, mode & 0x3);
+@@ -45,10 +34,18 @@ static void mlx5_set_access_mode(void *mkc, int mode)
+ static void populate_mtts(struct mlx5_vdpa_direct_mr *mr, __be64 *mtt)
+ {
+ struct scatterlist *sg;
++ int nsg = mr->nsg;
++ u64 dma_addr;
++ u64 dma_len;
++ int j = 0;
+ int i;
+
+- for_each_sg(mr->sg_head.sgl, sg, mr->nsg, i)
+- mtt[i] = cpu_to_be64(sg_dma_address(sg));
++ for_each_sg(mr->sg_head.sgl, sg, mr->nent, i) {
++ for (dma_addr = sg_dma_address(sg), dma_len = sg_dma_len(sg);
++ nsg && dma_len;
++ nsg--, dma_addr += BIT(mr->log_size), dma_len -= BIT(mr->log_size))
++ mtt[j++] = cpu_to_be64(dma_addr);
++ }
+ }
+
+ static int create_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr *mr)
+@@ -64,7 +61,6 @@ static int create_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct
+ return -ENOMEM;
+
+ MLX5_SET(create_mkey_in, in, uid, mvdev->res.uid);
+- fill_sg(mr, in);
+ mkc = MLX5_ADDR_OF(create_mkey_in, in, memory_key_mkey_entry);
+ MLX5_SET(mkc, mkc, lw, !!(mr->perm & VHOST_MAP_WO));
+ MLX5_SET(mkc, mkc, lr, !!(mr->perm & VHOST_MAP_RO));
+@@ -276,8 +272,8 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ done:
+ mr->log_size = log_entity_size;
+ mr->nsg = nsg;
+- err = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
+- if (!err)
++ mr->nent = dma_map_sg_attrs(dma, mr->sg_head.sgl, mr->nsg, DMA_BIDIRECTIONAL, 0);
++ if (!mr->nent)
+ goto err_map;
+
+ err = create_direct_mr(mvdev, mr);
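
The mr.c fix accounts for DMA segment coalescing: dma_map_sg_attrs() can return fewer mapped entries (now saved as mr->nent) than were submitted (mr->nsg), each spanning several 2^log_size pages, so populate_mtts() walks the mapped entries and emits one MTT per page within each. A toy expansion of (address, length) runs into fixed-size pages, runnable as-is (names invented):

#include <assert.h>
#include <stdint.h>

/* Expand each (addr, len) run into 2^log_size-byte page addresses, the
 * way the rewritten populate_mtts() walks coalesced DMA segments.
 */
static int expand_pages(const uint64_t *addr, const uint64_t *len, int nseg,
			int log_size, uint64_t *out, int max_out)
{
	uint64_t page = 1ULL << log_size;
	int n = 0;

	for (int i = 0; i < nseg; i++) {
		uint64_t a = addr[i], l = len[i];

		while (l && n < max_out) {
			out[n++] = a;
			a += page;
			l -= l > page ? page : l;
		}
	}
	return n;
}

int main(void)
{
	/* one coalesced 12KiB segment -> three 4KiB MTT entries */
	uint64_t addr[] = { 0x10000 }, len[] = { 0x3000 };
	uint64_t out[8];

	assert(expand_pages(addr, len, 1, 12, out, 8) == 3);
	assert(out[0] == 0x10000 && out[1] == 0x11000 && out[2] == 0x12000);
	return 0;
}
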
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 81b932f72e103..c6529f7c3034a 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -77,6 +77,7 @@ struct mlx5_vq_restore_info {
+ u64 device_addr;
+ u64 driver_addr;
+ u16 avail_index;
++ u16 used_index;
+ bool ready;
+ struct vdpa_callback cb;
+ bool restore;
+@@ -111,6 +112,7 @@ struct mlx5_vdpa_virtqueue {
+ u32 virtq_id;
+ struct mlx5_vdpa_net *ndev;
+ u16 avail_idx;
++ u16 used_idx;
+ int fw_state;
+
+ /* keep last in the struct */
+@@ -789,6 +791,7 @@ static int create_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtque
+
+ obj_context = MLX5_ADDR_OF(create_virtio_net_q_in, in, obj_context);
+ MLX5_SET(virtio_net_q_object, obj_context, hw_available_index, mvq->avail_idx);
++ MLX5_SET(virtio_net_q_object, obj_context, hw_used_index, mvq->used_idx);
+ MLX5_SET(virtio_net_q_object, obj_context, queue_feature_bit_mask_12_3,
+ get_features_12_3(ndev->mvdev.actual_features));
+ vq_ctx = MLX5_ADDR_OF(virtio_net_q_object, obj_context, virtio_q_context);
+@@ -1007,6 +1010,7 @@ static int connect_qps(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *m
+ struct mlx5_virtq_attr {
+ u8 state;
+ u16 available_index;
++ u16 used_index;
+ };
+
+ static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueue *mvq,
+@@ -1037,6 +1041,7 @@ static int query_virtqueue(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqueu
+ memset(attr, 0, sizeof(*attr));
+ attr->state = MLX5_GET(virtio_net_q_object, obj_context, state);
+ attr->available_index = MLX5_GET(virtio_net_q_object, obj_context, hw_available_index);
++ attr->used_index = MLX5_GET(virtio_net_q_object, obj_context, hw_used_index);
+ kfree(out);
+ return 0;
+
+@@ -1520,6 +1525,16 @@ static void teardown_virtqueues(struct mlx5_vdpa_net *ndev)
+ }
+ }
+
++static void clear_virtqueues(struct mlx5_vdpa_net *ndev)
++{
++ int i;
++
++ for (i = ndev->mvdev.max_vqs - 1; i >= 0; i--) {
++ ndev->vqs[i].avail_idx = 0;
++ ndev->vqs[i].used_idx = 0;
++ }
++}
++
+ /* TODO: cross-endian support */
+ static inline bool mlx5_vdpa_is_little_endian(struct mlx5_vdpa_dev *mvdev)
+ {
+@@ -1595,6 +1610,7 @@ static int save_channel_info(struct mlx5_vdpa_net *ndev, struct mlx5_vdpa_virtqu
+ return err;
+
+ ri->avail_index = attr.available_index;
++ ri->used_index = attr.used_index;
+ ri->ready = mvq->ready;
+ ri->num_ent = mvq->num_ent;
+ ri->desc_addr = mvq->desc_addr;
+@@ -1639,6 +1655,7 @@ static void restore_channels_info(struct mlx5_vdpa_net *ndev)
+ continue;
+
+ mvq->avail_idx = ri->avail_index;
++ mvq->used_idx = ri->used_index;
+ mvq->ready = ri->ready;
+ mvq->num_ent = ri->num_ent;
+ mvq->desc_addr = ri->desc_addr;
+@@ -1753,6 +1770,7 @@ static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status)
+ if (!status) {
+ mlx5_vdpa_info(mvdev, "performing device reset\n");
+ teardown_driver(ndev);
++ clear_virtqueues(ndev);
+ mlx5_vdpa_destroy_mr(&ndev->mvdev);
+ ndev->mvdev.status = 0;
+ ndev->mvdev.mlx_features = 0;
+diff --git a/fs/afs/main.c b/fs/afs/main.c
+index accdd8970e7c0..b2975256dadbd 100644
+--- a/fs/afs/main.c
++++ b/fs/afs/main.c
+@@ -193,7 +193,7 @@ static int __init afs_init(void)
+ goto error_cache;
+ #endif
+
+- ret = register_pernet_subsys(&afs_net_ops);
++ ret = register_pernet_device(&afs_net_ops);
+ if (ret < 0)
+ goto error_net;
+
+@@ -213,7 +213,7 @@ static int __init afs_init(void)
+ error_proc:
+ afs_fs_exit();
+ error_fs:
+- unregister_pernet_subsys(&afs_net_ops);
++ unregister_pernet_device(&afs_net_ops);
+ error_net:
+ #ifdef CONFIG_AFS_FSCACHE
+ fscache_unregister_netfs(&afs_cache_netfs);
+@@ -244,7 +244,7 @@ static void __exit afs_exit(void)
+
+ proc_remove(afs_proc_symlink);
+ afs_fs_exit();
+- unregister_pernet_subsys(&afs_net_ops);
++ unregister_pernet_device(&afs_net_ops);
+ #ifdef CONFIG_AFS_FSCACHE
+ fscache_unregister_netfs(&afs_cache_netfs);
+ #endif
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index 398c1eef71906..0d7238cb45b56 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -736,6 +736,7 @@ static int
+ cifs_d_revalidate(struct dentry *direntry, unsigned int flags)
+ {
+ struct inode *inode;
++ int rc;
+
+ if (flags & LOOKUP_RCU)
+ return -ECHILD;
+@@ -745,8 +746,25 @@ cifs_d_revalidate(struct dentry *direntry, unsigned int flags)
+ if ((flags & LOOKUP_REVAL) && !CIFS_CACHE_READ(CIFS_I(inode)))
+ CIFS_I(inode)->time = 0; /* force reval */
+
+- if (cifs_revalidate_dentry(direntry))
+- return 0;
++ rc = cifs_revalidate_dentry(direntry);
++ if (rc) {
++ cifs_dbg(FYI, "cifs_revalidate_dentry failed with rc=%d\n", rc);
++ switch (rc) {
++ case -ENOENT:
++ case -ESTALE:
++ /*
++ * Those errors mean the dentry is invalid
++ * (file was deleted or recreated)
++ */
++ return 0;
++ default:
++ /*
++ * Otherwise some unexpected error happened
++ * report it as-is to VFS layer
++ */
++ return rc;
++ }
++ }
+ else {
+ /*
+ * If the inode wasn't known to be a dfs entry when
+diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h
+index 204a622b89ed3..56ec9fba3925b 100644
+--- a/fs/cifs/smb2pdu.h
++++ b/fs/cifs/smb2pdu.h
+@@ -286,7 +286,7 @@ struct smb2_negotiate_req {
+ __le32 NegotiateContextOffset; /* SMB3.1.1 only. MBZ earlier */
+ __le16 NegotiateContextCount; /* SMB3.1.1 only. MBZ earlier */
+ __le16 Reserved2;
+- __le16 Dialects[1]; /* One dialect (vers=) at a time for now */
++ __le16 Dialects[4]; /* BB expand this if autonegotiate > 4 dialects */
+ } __packed;
+
+ /* Dialects */
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index b1c2f416b9bd9..9391cd17a2b55 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -655,10 +655,22 @@ wait_for_compound_request(struct TCP_Server_Info *server, int num,
+ spin_lock(&server->req_lock);
+ if (*credits < num) {
+ /*
+- * Return immediately if not too many requests in flight since
+- * we will likely be stuck on waiting for credits.
++ * If the server is tight on resources or just gives us less
++ * credits for other reasons (e.g. requests are coming out of
++ * order and the server delays granting more credits until it
++ * processes a missing mid) and we exhausted most available
++ * credits there may be situations when we try to send
++ * a compound request but we don't have enough credits. At this
++ * point the client needs to decide if it should wait for
++ * additional credits or fail the request. If at least one
++ * request is in flight there is a high probability that the
++ * server will return enough credits to satisfy this compound
++ * request.
++ *
++ * Return immediately if no requests are in flight, since we would
++ * be stuck waiting for credits.
+ */
+- if (server->in_flight < num - *credits) {
++ if (server->in_flight == 0) {
+ spin_unlock(&server->req_lock);
+ return -ENOTSUPP;
+ }
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index b5c109703daaf..21c20fd5f9ee7 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -735,9 +735,10 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
+
+ mutex_unlock(&hugetlb_fault_mutex_table[hash]);
+
++ set_page_huge_active(page);
+ /*
+ * unlock_page because locked by add_to_page_cache()
+- * page_put due to reference from alloc_huge_page()
++ * put_page() due to reference from alloc_huge_page()
+ */
+ unlock_page(page);
+ put_page(page);
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 907ecaffc3386..3b6307f6bd93d 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -8782,12 +8782,6 @@ static void io_uring_cancel_task_requests(struct io_ring_ctx *ctx,
+
+ if ((ctx->flags & IORING_SETUP_SQPOLL) && ctx->sq_data) {
+ atomic_dec(&task->io_uring->in_idle);
+- /*
+- * If the files that are going away are the ones in the thread
+- * identity, clear them out.
+- */
+- if (task->io_uring->identity->files == files)
+- task->io_uring->identity->files = NULL;
+ io_sq_thread_unpark(ctx->sq_data);
+ }
+ }
+diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c
+index 28a075b5f5b2e..d1efa3a5a5032 100644
+--- a/fs/overlayfs/dir.c
++++ b/fs/overlayfs/dir.c
+@@ -992,8 +992,8 @@ static char *ovl_get_redirect(struct dentry *dentry, bool abs_redirect)
+
+ buflen -= thislen;
+ memcpy(&buf[buflen], name, thislen);
+- tmp = dget_dlock(d->d_parent);
+ spin_unlock(&d->d_lock);
++ tmp = dget_parent(d);
+
+ dput(d);
+ d = tmp;
+diff --git a/fs/overlayfs/file.c b/fs/overlayfs/file.c
+index a1f72ac053e5f..5c5c3972ebd0a 100644
+--- a/fs/overlayfs/file.c
++++ b/fs/overlayfs/file.c
+@@ -445,8 +445,9 @@ static int ovl_fsync(struct file *file, loff_t start, loff_t end, int datasync)
+ const struct cred *old_cred;
+ int ret;
+
+- if (!ovl_should_sync(OVL_FS(file_inode(file)->i_sb)))
+- return 0;
++ ret = ovl_sync_status(OVL_FS(file_inode(file)->i_sb));
++ if (ret <= 0)
++ return ret;
+
+ ret = ovl_real_fdget_meta(file, &real, !datasync);
+ if (ret)
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index f8880aa2ba0ec..9f7af98ae2005 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -322,6 +322,7 @@ int ovl_check_metacopy_xattr(struct ovl_fs *ofs, struct dentry *dentry);
+ bool ovl_is_metacopy_dentry(struct dentry *dentry);
+ char *ovl_get_redirect_xattr(struct ovl_fs *ofs, struct dentry *dentry,
+ int padding);
++int ovl_sync_status(struct ovl_fs *ofs);
+
+ static inline bool ovl_is_impuredir(struct super_block *sb,
+ struct dentry *dentry)
+diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h
+index 1b5a2094df8eb..b208eba5d0b64 100644
+--- a/fs/overlayfs/ovl_entry.h
++++ b/fs/overlayfs/ovl_entry.h
+@@ -79,6 +79,8 @@ struct ovl_fs {
+ atomic_long_t last_ino;
+ /* Whiteout dentry cache */
+ struct dentry *whiteout;
++ /* r/o snapshot of upperdir sb's errseq, only taken on volatile mounts */
++ errseq_t errseq;
+ };
+
+ static inline struct vfsmount *ovl_upper_mnt(struct ovl_fs *ofs)
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index 01620ebae1bd4..f404a78e6b607 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -865,7 +865,7 @@ struct file *ovl_dir_real_file(const struct file *file, bool want_upper)
+
+ struct ovl_dir_file *od = file->private_data;
+ struct dentry *dentry = file->f_path.dentry;
+- struct file *realfile = od->realfile;
++ struct file *old, *realfile = od->realfile;
+
+ if (!OVL_TYPE_UPPER(ovl_path_type(dentry)))
+ return want_upper ? NULL : realfile;
+@@ -874,29 +874,20 @@ struct file *ovl_dir_real_file(const struct file *file, bool want_upper)
+ * Need to check if we started out being a lower dir, but got copied up
+ */
+ if (!od->is_upper) {
+- struct inode *inode = file_inode(file);
+-
+ realfile = READ_ONCE(od->upperfile);
+ if (!realfile) {
+ struct path upperpath;
+
+ ovl_path_upper(dentry, &upperpath);
+ realfile = ovl_dir_open_realfile(file, &upperpath);
++ if (IS_ERR(realfile))
++ return realfile;
+
+- inode_lock(inode);
+- if (!od->upperfile) {
+- if (IS_ERR(realfile)) {
+- inode_unlock(inode);
+- return realfile;
+- }
+- smp_store_release(&od->upperfile, realfile);
+- } else {
+- /* somebody has beaten us to it */
+- if (!IS_ERR(realfile))
+- fput(realfile);
+- realfile = od->upperfile;
++ old = cmpxchg_release(&od->upperfile, NULL, realfile);
++ if (old) {
++ fput(realfile);
++ realfile = old;
+ }
+- inode_unlock(inode);
+ }
+ }
+
+@@ -909,8 +900,9 @@ static int ovl_dir_fsync(struct file *file, loff_t start, loff_t end,
+ struct file *realfile;
+ int err;
+
+- if (!ovl_should_sync(OVL_FS(file->f_path.dentry->d_sb)))
+- return 0;
++ err = ovl_sync_status(OVL_FS(file->f_path.dentry->d_sb));
++ if (err <= 0)
++ return err;
+
+ realfile = ovl_dir_real_file(file, true);
+ err = PTR_ERR_OR_ZERO(realfile);
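
ovl_dir_real_file() above trades the inode lock for a lock-free publish-once: open the upper file, try to install it with cmpxchg_release(), and if another opener won the race, fput() the local copy and adopt the published one. The same pattern in portable C11 atomics, as a userspace sketch rather than the kernel code:

#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

static _Atomic(int *) slot;	/* plays the role of od->upperfile */

static int *get_or_publish(int *mine)
{
	int *expected = NULL;

	if (atomic_compare_exchange_strong_explicit(&slot, &expected, mine,
						    memory_order_acq_rel,
						    memory_order_acquire))
		return mine;		/* we published our object */

	free(mine);			/* somebody beat us to it: fput() */
	return expected;		/* CAS wrote the winner back here */
}

int main(void)
{
	int *a = malloc(sizeof(*a));
	int *b = malloc(sizeof(*b));

	assert(get_or_publish(a) == a);
	assert(get_or_publish(b) == a);	/* b lost the race and was freed */
	free(a);
	return 0;
}
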
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 290983bcfbb35..d23177a53c95f 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -261,11 +261,20 @@ static int ovl_sync_fs(struct super_block *sb, int wait)
+ struct super_block *upper_sb;
+ int ret;
+
+- if (!ovl_upper_mnt(ofs))
+- return 0;
++ ret = ovl_sync_status(ofs);
++ /*
++ * We have to always set the err, because the return value isn't
++ * checked in syncfs; instead we indirectly return an error via
++ * the sb's writeback errseq, which VFS inspects after this call.
++ */
++ if (ret < 0) {
++ errseq_set(&sb->s_wb_err, -EIO);
++ return -EIO;
++ }
++
++ if (!ret)
++ return ret;
+
+- if (!ovl_should_sync(ofs))
+- return 0;
+ /*
+ * Not called for sync(2) call or an emergency sync (SB_I_SKIP_SYNC).
+ * All the super blocks will be iterated, including upper_sb.
+@@ -1927,6 +1936,8 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ sb->s_op = &ovl_super_operations;
+
+ if (ofs->config.upperdir) {
++ struct super_block *upper_sb;
++
+ if (!ofs->config.workdir) {
+ pr_err("missing 'workdir'\n");
+ goto out_err;
+@@ -1936,6 +1947,16 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ if (err)
+ goto out_err;
+
++ upper_sb = ovl_upper_mnt(ofs)->mnt_sb;
++ if (!ovl_should_sync(ofs)) {
++ ofs->errseq = errseq_sample(&upper_sb->s_wb_err);
++ if (errseq_check(&upper_sb->s_wb_err, ofs->errseq)) {
++ err = -EIO;
++ pr_err("Cannot mount volatile when upperdir has an unseen error. Sync upperdir fs to clear state.\n");
++ goto out_err;
++ }
++ }
++
+ err = ovl_get_workdir(sb, ofs, &upperpath);
+ if (err)
+ goto out_err;
+@@ -1943,9 +1964,8 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ if (!ofs->workdir)
+ sb->s_flags |= SB_RDONLY;
+
+- sb->s_stack_depth = ovl_upper_mnt(ofs)->mnt_sb->s_stack_depth;
+- sb->s_time_gran = ovl_upper_mnt(ofs)->mnt_sb->s_time_gran;
+-
++ sb->s_stack_depth = upper_sb->s_stack_depth;
++ sb->s_time_gran = upper_sb->s_time_gran;
+ }
+ oe = ovl_get_lowerstack(sb, splitlower, numlower, ofs, layers);
+ err = PTR_ERR(oe);
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 23f475627d07f..6e7b8c882045c 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -950,3 +950,30 @@ err_free:
+ kfree(buf);
+ return ERR_PTR(res);
+ }
++
++/*
++ * ovl_sync_status() - Check fs sync status for volatile mounts
++ *
++ * Returns 1 if this is not a volatile mount and a real sync is required.
++ *
++ * Returns 0 if syncing can be skipped because mount is volatile, and no errors
++ * have occurred on the upperdir since the mount.
++ *
++ * Returns -errno if it is a volatile mount and an error has occurred on
++ * the upperdir since the mount. If the error code changes, the latest
++ * error code is returned.
++ */
++
++int ovl_sync_status(struct ovl_fs *ofs)
++{
++ struct vfsmount *mnt;
++
++ if (ovl_should_sync(ofs))
++ return 1;
++
++ mnt = ovl_upper_mnt(ofs);
++ if (!mnt)
++ return 0;
++
++ return errseq_check(&mnt->mnt_sb->s_wb_err, ofs->errseq);
++}
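
The volatile-mount error tracking above rests on the kernel's errseq_t: the mount samples the upperdir's writeback error cursor once (errseq_sample()), and every later sync asks whether anything new has been recorded since that sample (errseq_check()). A toy model of the sample-then-check idea, deliberately simplified (the real errseq_t also packs a "seen" flag into the counter); all names are illustrative:

  #include <stdio.h>

  struct toy_errseq { int err; unsigned int seq; };

  static void toy_set(struct toy_errseq *e, int err)
  {
      e->err = err;
      e->seq++;                        /* each new error bumps the cursor */
  }

  static unsigned int toy_sample(const struct toy_errseq *e)
  {
      return e->seq;                   /* opaque cursor, like errseq_sample() */
  }

  static int toy_check(const struct toy_errseq *e, unsigned int since)
  {
      return e->seq == since ? 0 : e->err;   /* like errseq_check() */
  }

  int main(void)
  {
      struct toy_errseq wb = { 0, 0 };
      unsigned int cursor = toy_sample(&wb);  /* taken at volatile mount time */

      printf("%d\n", toy_check(&wb, cursor)); /* 0: no error since mount */
      toy_set(&wb, -5);                       /* a writeback error (-EIO) */
      printf("%d\n", toy_check(&wb, cursor)); /* -5: sticky from now on */
      return 0;
  }
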
+diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
+index f5e92fe9151c3..bd1c39907b924 100644
+--- a/include/drm/drm_dp_mst_helper.h
++++ b/include/drm/drm_dp_mst_helper.h
+@@ -783,6 +783,7 @@ drm_dp_mst_detect_port(struct drm_connector *connector,
+
+ struct edid *drm_dp_mst_get_edid(struct drm_connector *connector, struct drm_dp_mst_topology_mgr *mgr, struct drm_dp_mst_port *port);
+
++int drm_dp_get_vc_payload_bw(int link_rate, int link_lane_count);
+
+ int drm_dp_calc_pbn_mode(int clock, int bpp, bool dsc);
+
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index ebca2ef022127..b5807f23caf80 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -770,6 +770,8 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
+ }
+ #endif
+
++void set_page_huge_active(struct page *page);
++
+ #else /* CONFIG_HUGETLB_PAGE */
+ struct hstate {};
+
+diff --git a/include/linux/iommu.h b/include/linux/iommu.h
+index b95a6f8db6ff9..9bbcfe3b0bb12 100644
+--- a/include/linux/iommu.h
++++ b/include/linux/iommu.h
+@@ -614,7 +614,10 @@ static inline void dev_iommu_fwspec_set(struct device *dev,
+
+ static inline void *dev_iommu_priv_get(struct device *dev)
+ {
+- return dev->iommu->priv;
++ if (dev->iommu)
++ return dev->iommu->priv;
++ else
++ return NULL;
+ }
+
+ static inline void dev_iommu_priv_set(struct device *dev, void *priv)
+diff --git a/include/linux/irq.h b/include/linux/irq.h
+index c54365309e975..a36d35c259963 100644
+--- a/include/linux/irq.h
++++ b/include/linux/irq.h
+@@ -922,7 +922,7 @@ int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
+ __irq_alloc_descs(irq, from, cnt, node, THIS_MODULE, NULL)
+
+ #define irq_alloc_desc(node) \
+- irq_alloc_descs(-1, 0, 1, node)
++ irq_alloc_descs(-1, 1, 1, node)
+
+ #define irq_alloc_desc_at(at, node) \
+ irq_alloc_descs(at, at, 1, node)
+@@ -937,7 +937,7 @@ int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
+ __devm_irq_alloc_descs(dev, irq, from, cnt, node, THIS_MODULE, NULL)
+
+ #define devm_irq_alloc_desc(dev, node) \
+- devm_irq_alloc_descs(dev, -1, 0, 1, node)
++ devm_irq_alloc_descs(dev, -1, 1, 1, node)
+
+ #define devm_irq_alloc_desc_at(dev, at, node) \
+ devm_irq_alloc_descs(dev, at, at, 1, node)
+diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
+index 629abaf25681d..21f21f7f878ce 100644
+--- a/include/linux/kprobes.h
++++ b/include/linux/kprobes.h
+@@ -251,7 +251,7 @@ extern void kprobes_inc_nmissed_count(struct kprobe *p);
+ extern bool arch_within_kprobe_blacklist(unsigned long addr);
+ extern int arch_populate_kprobe_blacklist(void);
+ extern bool arch_kprobe_on_func_entry(unsigned long offset);
+-extern bool kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset);
++extern int kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset);
+
+ extern bool within_kprobe_blacklist(unsigned long addr);
+ extern int kprobe_add_ksym_blacklist(unsigned long entry);
+diff --git a/include/linux/msi.h b/include/linux/msi.h
+index 6b584cc4757cd..2a3e997751cea 100644
+--- a/include/linux/msi.h
++++ b/include/linux/msi.h
+@@ -139,6 +139,12 @@ struct msi_desc {
+ list_for_each_entry((desc), dev_to_msi_list((dev)), list)
+ #define for_each_msi_entry_safe(desc, tmp, dev) \
+ list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list)
++#define for_each_msi_vector(desc, __irq, dev) \
++ for_each_msi_entry((desc), (dev)) \
++ if ((desc)->irq) \
++ for (__irq = (desc)->irq; \
++ __irq < ((desc)->irq + (desc)->nvec_used); \
++ __irq++)
+
+ #ifdef CONFIG_IRQ_MSI_IOMMU
+ static inline const void *msi_desc_get_iommu_cookie(struct msi_desc *desc)
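
The new for_each_msi_vector() helper turns the per-descriptor walk into a per-vector walk: the outer loop visits each MSI descriptor, and the inner loop visits every IRQ number in the [irq, irq + nvec_used) range the descriptor owns, which is what lets the msi.c cleanup path below deactivate exactly the vectors that were activated. A compilable sketch of how such a nested range-iteration macro expands, with made-up descriptor values:

  #include <stdio.h>

  struct desc { int irq; int nvec_used; };

  #define for_each_desc(d, arr, n) \
      for ((d) = (arr); (d) < (arr) + (n); (d)++)

  /* Analogue of for_each_msi_vector(): skip unallocated descriptors
   * (irq == 0) and walk each descriptor's vector range. */
  #define for_each_vector(d, v, arr, n)                        \
      for_each_desc((d), (arr), (n))                           \
          if ((d)->irq)                                        \
              for ((v) = (d)->irq;                             \
                   (v) < (d)->irq + (d)->nvec_used; (v)++)

  int main(void)
  {
      struct desc descs[] = { { 10, 2 }, { 0, 0 }, { 30, 1 } };
      struct desc *d;
      int v;

      for_each_vector(d, v, descs, 3)
          printf("vector %d\n", v);      /* prints 10, 11, 30 */
      return 0;
  }
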
+diff --git a/include/linux/tracepoint.h b/include/linux/tracepoint.h
+index 0f21617f1a668..966ed89803274 100644
+--- a/include/linux/tracepoint.h
++++ b/include/linux/tracepoint.h
+@@ -307,11 +307,13 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
+ \
+ it_func_ptr = \
+ rcu_dereference_raw((&__tracepoint_##_name)->funcs); \
+- do { \
+- it_func = (it_func_ptr)->func; \
+- __data = (it_func_ptr)->data; \
+- ((void(*)(void *, proto))(it_func))(__data, args); \
+- } while ((++it_func_ptr)->func); \
++ if (it_func_ptr) { \
++ do { \
++ it_func = (it_func_ptr)->func; \
++ __data = (it_func_ptr)->data; \
++ ((void(*)(void *, proto))(it_func))(__data, args); \
++ } while ((++it_func_ptr)->func); \
++ } \
+ return 0; \
+ } \
+ DEFINE_STATIC_CALL(tp_func_##_name, __traceiter_##_name);
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index 938eaf9517e26..76dad53a410ac 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -24,7 +24,8 @@ struct notifier_block; /* in notifier.h */
+ #define VM_UNINITIALIZED 0x00000020 /* vm_struct is not fully initialized */
+ #define VM_NO_GUARD 0x00000040 /* don't add guard page */
+ #define VM_KASAN 0x00000080 /* has allocated kasan shadow memory */
+-#define VM_MAP_PUT_PAGES 0x00000100 /* put pages and free array in vfree */
++#define VM_FLUSH_RESET_PERMS 0x00000100 /* reset direct map and flush TLB on unmap, can't be freed in atomic context */
++#define VM_MAP_PUT_PAGES 0x00000200 /* put pages and free array in vfree */
+
+ /*
+ * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC.
+@@ -37,12 +38,6 @@ struct notifier_block; /* in notifier.h */
+ * determine which allocations need the module shadow freed.
+ */
+
+-/*
+- * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with
+- * vfree_atomic().
+- */
+-#define VM_FLUSH_RESET_PERMS 0x00000100 /* Reset direct map and flush TLB on unmap */
+-
+ /* bits [20..32] reserved for arch specific ioremap internals */
+
+ /*
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index d8fd8676fc724..3648164faa060 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -1155,7 +1155,7 @@ static inline struct Qdisc *qdisc_replace(struct Qdisc *sch, struct Qdisc *new,
+ old = *pold;
+ *pold = new;
+ if (old != NULL)
+- qdisc_tree_flush_backlog(old);
++ qdisc_purge_queue(old);
+ sch_tree_unlock(sch);
+
+ return old;
+diff --git a/include/net/udp.h b/include/net/udp.h
+index 295d52a735982..949ae14a54250 100644
+--- a/include/net/udp.h
++++ b/include/net/udp.h
+@@ -178,7 +178,7 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ int udp_gro_complete(struct sk_buff *skb, int nhoff, udp_lookup_t lookup);
+
+ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+- netdev_features_t features);
++ netdev_features_t features, bool is_ipv6);
+
+ static inline struct udphdr *udp_gro_udphdr(struct sk_buff *skb)
+ {
+diff --git a/init/init_task.c b/init/init_task.c
+index 15f6eb93a04fa..16d14c2ebb552 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -198,7 +198,8 @@ struct task_struct init_task
+ .lockdep_recursion = 0,
+ #endif
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+- .ret_stack = NULL,
++ .ret_stack = NULL,
++ .tracing_graph_pause = ATOMIC_INIT(0),
+ #endif
+ #if defined(CONFIG_TRACING) && defined(CONFIG_PREEMPTION)
+ .trace_recursion = 0,
+diff --git a/kernel/bpf/bpf_inode_storage.c b/kernel/bpf/bpf_inode_storage.c
+index dbc1dbdd2cbf0..c2a501cd90eba 100644
+--- a/kernel/bpf/bpf_inode_storage.c
++++ b/kernel/bpf/bpf_inode_storage.c
+@@ -125,8 +125,12 @@ static int bpf_fd_inode_storage_update_elem(struct bpf_map *map, void *key,
+
+ fd = *(int *)key;
+ f = fget_raw(fd);
+- if (!f || !inode_storage_ptr(f->f_inode))
++ if (!f)
++ return -EBADF;
++ if (!inode_storage_ptr(f->f_inode)) {
++ fput(f);
+ return -EBADF;
++ }
+
+ sdata = bpf_local_storage_update(f->f_inode,
+ (struct bpf_local_storage_map *)map,
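
The bug fixed here is a plain reference leak: fget_raw() takes a reference on the struct file when it succeeds, so the early-return path that rejects inodes without storage must drop that reference with fput() before bailing out. A toy refcount illustration of the rule "every successful get needs a put on every exit path"; names are illustrative:

  #include <stdio.h>

  struct obj { int refs; };

  static struct obj *get(struct obj *o) { o->refs++; return o; }
  static void put(struct obj *o)        { o->refs--; }

  static int use_checked(struct obj *o, int usable)
  {
      get(o);                 /* plays fget_raw() succeeding */
      if (!usable) {
          put(o);             /* the fput() the patch adds */
          return -1;
      }
      /* ... use the object ... */
      put(o);
      return 0;
  }

  int main(void)
  {
      struct obj o = { 0 };

      use_checked(&o, 0);
      printf("refs after failed call: %d\n", o.refs);  /* 0: no leak */
      return 0;
  }
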
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index 96555a8a2c545..6aa9e10c6335a 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -1442,6 +1442,11 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ goto out;
+ }
+
++ if (ctx.optlen < 0) {
++ ret = -EFAULT;
++ goto out;
++ }
++
+ if (copy_from_user(ctx.optval, optval,
+ min(ctx.optlen, max_optlen)) != 0) {
+ ret = -EFAULT;
+@@ -1459,7 +1464,7 @@ int __cgroup_bpf_run_filter_getsockopt(struct sock *sk, int level,
+ goto out;
+ }
+
+- if (ctx.optlen > max_optlen) {
++ if (ctx.optlen > max_optlen || ctx.optlen < 0) {
+ ret = -EFAULT;
+ goto out;
+ }
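
Both hunks above guard against a BPF program setting ctx.optlen to a negative value. optlen is a signed int, and a negative length that reaches a size_t parameter (as in copy_from_user() or min()) is implicitly converted to an enormous unsigned value. A two-line demonstration of that conversion on an LP64 machine:

  #include <stdio.h>

  int main(void)
  {
      int optlen = -1;                 /* set by an untrusted BPF program */
      size_t n = (size_t)optlen;       /* what a size parameter would see */

      printf("%zu\n", n);              /* 18446744073709551615 on LP64 */
      return 0;
  }
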
+diff --git a/kernel/bpf/preload/Makefile b/kernel/bpf/preload/Makefile
+index 23ee310b6eb49..1951332dd15f5 100644
+--- a/kernel/bpf/preload/Makefile
++++ b/kernel/bpf/preload/Makefile
+@@ -4,8 +4,11 @@ LIBBPF_SRCS = $(srctree)/tools/lib/bpf/
+ LIBBPF_A = $(obj)/libbpf.a
+ LIBBPF_OUT = $(abspath $(obj))
+
++# Although not in use by libbpf's Makefile, set $(O) so that the "dummy" test
++# in tools/scripts/Makefile.include always succeeds when building the kernel
++# with $(O) pointing to a relative path, as in "make O=build bindeb-pkg".
+ $(LIBBPF_A):
+- $(Q)$(MAKE) -C $(LIBBPF_SRCS) OUTPUT=$(LIBBPF_OUT)/ $(LIBBPF_OUT)/libbpf.a
++ $(Q)$(MAKE) -C $(LIBBPF_SRCS) O=$(LIBBPF_OUT)/ OUTPUT=$(LIBBPF_OUT)/ $(LIBBPF_OUT)/libbpf.a
+
+ userccflags += -I $(srctree)/tools/include/ -I $(srctree)/tools/include/uapi \
+ -I $(srctree)/tools/lib/ -Wno-unused-result
+diff --git a/kernel/irq/msi.c b/kernel/irq/msi.c
+index 2c0c4d6d0f83a..d924676c8781b 100644
+--- a/kernel/irq/msi.c
++++ b/kernel/irq/msi.c
+@@ -436,22 +436,22 @@ int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
+
+ can_reserve = msi_check_reservation_mode(domain, info, dev);
+
+- for_each_msi_entry(desc, dev) {
+- virq = desc->irq;
+- if (desc->nvec_used == 1)
+- dev_dbg(dev, "irq %d for MSI\n", virq);
+- else
++ /*
++ * This flag is set by the PCI layer as we need to activate
++ * the MSI entries before the PCI layer enables MSI in the
++ * card. Otherwise the card latches a random msi message.
++ */
++ if (!(info->flags & MSI_FLAG_ACTIVATE_EARLY))
++ goto skip_activate;
++
++ for_each_msi_vector(desc, i, dev) {
++ if (desc->irq == i) {
++ virq = desc->irq;
+ dev_dbg(dev, "irq [%d-%d] for MSI\n",
+ virq, virq + desc->nvec_used - 1);
+- /*
+- * This flag is set by the PCI layer as we need to activate
+- * the MSI entries before the PCI layer enables MSI in the
+- * card. Otherwise the card latches a random msi message.
+- */
+- if (!(info->flags & MSI_FLAG_ACTIVATE_EARLY))
+- continue;
++ }
+
+- irq_data = irq_domain_get_irq_data(domain, desc->irq);
++ irq_data = irq_domain_get_irq_data(domain, i);
+ if (!can_reserve) {
+ irqd_clr_can_reserve(irq_data);
+ if (domain->flags & IRQ_DOMAIN_MSI_NOMASK_QUIRK)
+@@ -462,28 +462,24 @@ int __msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev,
+ goto cleanup;
+ }
+
++skip_activate:
+ /*
+ * If these interrupts use reservation mode, clear the activated bit
+ * so request_irq() will assign the final vector.
+ */
+ if (can_reserve) {
+- for_each_msi_entry(desc, dev) {
+- irq_data = irq_domain_get_irq_data(domain, desc->irq);
++ for_each_msi_vector(desc, i, dev) {
++ irq_data = irq_domain_get_irq_data(domain, i);
+ irqd_clr_activated(irq_data);
+ }
+ }
+ return 0;
+
+ cleanup:
+- for_each_msi_entry(desc, dev) {
+- struct irq_data *irqd;
+-
+- if (desc->irq == virq)
+- break;
+-
+- irqd = irq_domain_get_irq_data(domain, desc->irq);
+- if (irqd_is_activated(irqd))
+- irq_domain_deactivate_irq(irqd);
++ for_each_msi_vector(desc, i, dev) {
++ irq_data = irq_domain_get_irq_data(domain, i);
++ if (irqd_is_activated(irq_data))
++ irq_domain_deactivate_irq(irq_data);
+ }
+ msi_domain_free_irqs(domain, dev);
+ return ret;
+diff --git a/kernel/kprobes.c b/kernel/kprobes.c
+index 41fdbb7953c60..911c77ef5bbcd 100644
+--- a/kernel/kprobes.c
++++ b/kernel/kprobes.c
+@@ -2082,28 +2082,48 @@ bool __weak arch_kprobe_on_func_entry(unsigned long offset)
+ return !offset;
+ }
+
+-bool kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset)
++/**
++ * kprobe_on_func_entry() -- check whether the given address is a function entry
++ * @addr: Target address
++ * @sym: Target symbol name
++ * @offset: The offset from the symbol or the address
++ *
++ * This checks whether the given @addr+@offset or @sym+@offset is on a
++ * function entry address or not.
++ * It returns 0 if it is the function entry, or -EINVAL if it is not.
++ * It also returns -ENOENT if the symbol or address lookup fails.
++ * The caller must pass either @addr or @sym (the other must be NULL),
++ * otherwise this returns -EINVAL.
++ */
++int kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset)
+ {
+ kprobe_opcode_t *kp_addr = _kprobe_addr(addr, sym, offset);
+
+ if (IS_ERR(kp_addr))
+- return false;
++ return PTR_ERR(kp_addr);
+
+- if (!kallsyms_lookup_size_offset((unsigned long)kp_addr, NULL, &offset) ||
+- !arch_kprobe_on_func_entry(offset))
+- return false;
++ if (!kallsyms_lookup_size_offset((unsigned long)kp_addr, NULL, &offset))
++ return -ENOENT;
+
+- return true;
++ if (!arch_kprobe_on_func_entry(offset))
++ return -EINVAL;
++
++ return 0;
+ }
+
+ int register_kretprobe(struct kretprobe *rp)
+ {
+- int ret = 0;
++ int ret;
+ struct kretprobe_instance *inst;
+ int i;
+ void *addr;
+
+- if (!kprobe_on_func_entry(rp->kp.addr, rp->kp.symbol_name, rp->kp.offset))
++ ret = kprobe_on_func_entry(rp->kp.addr, rp->kp.symbol_name, rp->kp.offset);
++ if (ret)
++ return ret;
++
++ /* If only rp->kp.addr is specified, check reregistering kprobes */
++ if (rp->kp.addr && check_kprobe_rereg(&rp->kp))
+ return -EINVAL;
+
+ if (kretprobe_blacklist_size) {
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index 5658f13037b3d..a58da91eadb5c 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -395,7 +395,6 @@ static int alloc_retstack_tasklist(struct ftrace_ret_stack **ret_stack_list)
+ }
+
+ if (t->ret_stack == NULL) {
+- atomic_set(&t->tracing_graph_pause, 0);
+ atomic_set(&t->trace_overrun, 0);
+ t->curr_ret_stack = -1;
+ t->curr_ret_depth = -1;
+@@ -490,7 +489,6 @@ static DEFINE_PER_CPU(struct ftrace_ret_stack *, idle_ret_stack);
+ static void
+ graph_init_task(struct task_struct *t, struct ftrace_ret_stack *ret_stack)
+ {
+- atomic_set(&t->tracing_graph_pause, 0);
+ atomic_set(&t->trace_overrun, 0);
+ t->ftrace_timestamp = 0;
+ /* make curr_ret_stack visible before we add the ret_stack */
+diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
+index 10bbb0f381d56..ee4571b624bcb 100644
+--- a/kernel/trace/trace_irqsoff.c
++++ b/kernel/trace/trace_irqsoff.c
+@@ -562,6 +562,8 @@ static int __irqsoff_tracer_init(struct trace_array *tr)
+ /* non overwrite screws up the latency tracers */
+ set_tracer_flag(tr, TRACE_ITER_OVERWRITE, 1);
+ set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, 1);
++ /* without pause, we will produce garbage if another latency occurs */
++ set_tracer_flag(tr, TRACE_ITER_PAUSE_ON_TRACE, 1);
+
+ tr->max_latency = 0;
+ irqsoff_trace = tr;
+@@ -583,11 +585,13 @@ static void __irqsoff_tracer_reset(struct trace_array *tr)
+ {
+ int lat_flag = save_flags & TRACE_ITER_LATENCY_FMT;
+ int overwrite_flag = save_flags & TRACE_ITER_OVERWRITE;
++ int pause_flag = save_flags & TRACE_ITER_PAUSE_ON_TRACE;
+
+ stop_irqsoff_tracer(tr, is_graph(tr));
+
+ set_tracer_flag(tr, TRACE_ITER_LATENCY_FMT, lat_flag);
+ set_tracer_flag(tr, TRACE_ITER_OVERWRITE, overwrite_flag);
++ set_tracer_flag(tr, TRACE_ITER_PAUSE_ON_TRACE, pause_flag);
+ ftrace_reset_array_ops(tr);
+
+ irqsoff_busy = false;
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 5fff39541b8ae..68150b9cbde92 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -221,9 +221,9 @@ bool trace_kprobe_on_func_entry(struct trace_event_call *call)
+ {
+ struct trace_kprobe *tk = trace_kprobe_primary_from_call(call);
+
+- return tk ? kprobe_on_func_entry(tk->rp.kp.addr,
++ return tk ? (kprobe_on_func_entry(tk->rp.kp.addr,
+ tk->rp.kp.addr ? NULL : tk->rp.kp.symbol_name,
+- tk->rp.kp.addr ? 0 : tk->rp.kp.offset) : false;
++ tk->rp.kp.addr ? 0 : tk->rp.kp.offset) == 0) : false;
+ }
+
+ bool trace_kprobe_error_injectable(struct trace_event_call *call)
+@@ -828,9 +828,11 @@ static int trace_kprobe_create(int argc, const char *argv[])
+ }
+ if (is_return)
+ flags |= TPARG_FL_RETURN;
+- if (kprobe_on_func_entry(NULL, symbol, offset))
++ ret = kprobe_on_func_entry(NULL, symbol, offset);
++ if (ret == 0)
+ flags |= TPARG_FL_FENTRY;
+- if (offset && is_return && !(flags & TPARG_FL_FENTRY)) {
++ /* Defer the ENOENT case until register kprobe */
++ if (ret == -EINVAL && is_return) {
+ trace_probe_log_err(0, BAD_RETPROBE);
+ goto parse_error;
+ }
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 13cb7a961b319..0846d4ffa3387 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -1302,7 +1302,7 @@ fast_isolate_freepages(struct compact_control *cc)
+ {
+ unsigned int limit = min(1U, freelist_scan_limit(cc) >> 1);
+ unsigned int nr_scanned = 0;
+- unsigned long low_pfn, min_pfn, high_pfn = 0, highest = 0;
++ unsigned long low_pfn, min_pfn, highest = 0;
+ unsigned long nr_isolated = 0;
+ unsigned long distance;
+ struct page *page = NULL;
+@@ -1347,6 +1347,7 @@ fast_isolate_freepages(struct compact_control *cc)
+ struct page *freepage;
+ unsigned long flags;
+ unsigned int order_scanned = 0;
++ unsigned long high_pfn = 0;
+
+ if (!area->nr_free)
+ continue;
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 0b2067b3c3283..125b69f59caad 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -835,6 +835,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
+ XA_STATE(xas, &mapping->i_pages, offset);
+ int huge = PageHuge(page);
+ int error;
++ bool charged = false;
+
+ VM_BUG_ON_PAGE(!PageLocked(page), page);
+ VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+@@ -848,6 +849,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
+ error = mem_cgroup_charge(page, current->mm, gfp);
+ if (error)
+ goto error;
++ charged = true;
+ }
+
+ gfp &= GFP_RECLAIM_MASK;
+@@ -896,6 +898,8 @@ unlock:
+
+ if (xas_error(&xas)) {
+ error = xas_error(&xas);
++ if (charged)
++ mem_cgroup_uncharge(page);
+ goto error;
+ }
+
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 85eda66eb625d..4a78514830d5a 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2188,7 +2188,7 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ {
+ spinlock_t *ptl;
+ struct mmu_notifier_range range;
+- bool was_locked = false;
++ bool do_unlock_page = false;
+ pmd_t _pmd;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
+@@ -2204,7 +2204,6 @@ void __split_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
+ VM_BUG_ON(freeze && !page);
+ if (page) {
+ VM_WARN_ON_ONCE(!PageLocked(page));
+- was_locked = true;
+ if (page != pmd_page(*pmd))
+ goto out;
+ }
+@@ -2213,19 +2212,29 @@ repeat:
+ if (pmd_trans_huge(*pmd)) {
+ if (!page) {
+ page = pmd_page(*pmd);
+- if (unlikely(!trylock_page(page))) {
+- get_page(page);
+- _pmd = *pmd;
+- spin_unlock(ptl);
+- lock_page(page);
+- spin_lock(ptl);
+- if (unlikely(!pmd_same(*pmd, _pmd))) {
+- unlock_page(page);
++ /*
++ * An anonymous page must be locked, to ensure that a
++ * concurrent reuse_swap_page() sees stable mapcount;
++ * but reuse_swap_page() is not used on shmem or file,
++ * and page lock must not be taken when zap_pmd_range()
++ * calls __split_huge_pmd() while i_mmap_lock is held.
++ */
++ if (PageAnon(page)) {
++ if (unlikely(!trylock_page(page))) {
++ get_page(page);
++ _pmd = *pmd;
++ spin_unlock(ptl);
++ lock_page(page);
++ spin_lock(ptl);
++ if (unlikely(!pmd_same(*pmd, _pmd))) {
++ unlock_page(page);
++ put_page(page);
++ page = NULL;
++ goto repeat;
++ }
+ put_page(page);
+- page = NULL;
+- goto repeat;
+ }
+- put_page(page);
++ do_unlock_page = true;
+ }
+ }
+ if (PageMlocked(page))
+@@ -2235,7 +2244,7 @@ repeat:
+ __split_huge_pmd_locked(vma, pmd, range.start, freeze);
+ out:
+ spin_unlock(ptl);
+- if (!was_locked && page)
++ if (do_unlock_page)
+ unlock_page(page);
+ /*
+ * No need to double call mmu_notifier->invalidate_range() callback.
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 9a3f06cdcc2a8..26909396898b6 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -79,6 +79,21 @@ DEFINE_SPINLOCK(hugetlb_lock);
+ static int num_fault_mutexes;
+ struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
+
++static inline bool PageHugeFreed(struct page *head)
++{
++ return page_private(head + 4) == -1UL;
++}
++
++static inline void SetPageHugeFreed(struct page *head)
++{
++ set_page_private(head + 4, -1UL);
++}
++
++static inline void ClearPageHugeFreed(struct page *head)
++{
++ set_page_private(head + 4, 0);
++}
++
+ /* Forward declaration */
+ static int hugetlb_acct_memory(struct hstate *h, long delta);
+
+@@ -1028,6 +1043,7 @@ static void enqueue_huge_page(struct hstate *h, struct page *page)
+ list_move(&page->lru, &h->hugepage_freelists[nid]);
+ h->free_huge_pages++;
+ h->free_huge_pages_node[nid]++;
++ SetPageHugeFreed(page);
+ }
+
+ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+@@ -1044,6 +1060,7 @@ static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
+
+ list_move(&page->lru, &h->hugepage_activelist);
+ set_page_refcounted(page);
++ ClearPageHugeFreed(page);
+ h->free_huge_pages--;
+ h->free_huge_pages_node[nid]--;
+ return page;
+@@ -1344,12 +1361,11 @@ struct hstate *size_to_hstate(unsigned long size)
+ */
+ bool page_huge_active(struct page *page)
+ {
+- VM_BUG_ON_PAGE(!PageHuge(page), page);
+- return PageHead(page) && PagePrivate(&page[1]);
++ return PageHeadHuge(page) && PagePrivate(&page[1]);
+ }
+
+ /* never called for tail page */
+-static void set_page_huge_active(struct page *page)
++void set_page_huge_active(struct page *page)
+ {
+ VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+ SetPagePrivate(&page[1]);
+@@ -1505,6 +1521,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
+ spin_lock(&hugetlb_lock);
+ h->nr_huge_pages++;
+ h->nr_huge_pages_node[nid]++;
++ ClearPageHugeFreed(page);
+ spin_unlock(&hugetlb_lock);
+ }
+
+@@ -1755,6 +1772,7 @@ int dissolve_free_huge_page(struct page *page)
+ {
+ int rc = -EBUSY;
+
++retry:
+ /* Not to disrupt normal path by vainly holding hugetlb_lock */
+ if (!PageHuge(page))
+ return 0;
+@@ -1771,6 +1789,26 @@ int dissolve_free_huge_page(struct page *page)
+ int nid = page_to_nid(head);
+ if (h->free_huge_pages - h->resv_huge_pages == 0)
+ goto out;
++
++ /*
++ * We should make sure that the page is already on the free list
++ * when it is dissolved.
++ */
++ if (unlikely(!PageHugeFreed(head))) {
++ spin_unlock(&hugetlb_lock);
++ cond_resched();
++
++ /*
++ * Theoretically, we should return -EBUSY when we
++ * encounter this race. In fact, we have a chance to
++ * successfully dissolve the page if we retry, because
++ * the race window is quite small. Seizing this
++ * opportunity is an optimization that increases the
++ * success rate of dissolving the page.
++ */
++ goto retry;
++ }
++
+ /*
+ * Move PageHWPoison flag from head page to the raw error page,
+ * which makes any subpages rather than the error page reusable.
+@@ -5556,9 +5594,9 @@ bool isolate_huge_page(struct page *page, struct list_head *list)
+ {
+ bool ret = true;
+
+- VM_BUG_ON_PAGE(!PageHead(page), page);
+ spin_lock(&hugetlb_lock);
+- if (!page_huge_active(page) || !get_page_unless_zero(page)) {
++ if (!PageHeadHuge(page) || !page_huge_active(page) ||
++ !get_page_unless_zero(page)) {
+ ret = false;
+ goto unlock;
+ }
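
The PageHugeFreed tracking above works around hugetlb having no spare page flag: the "this huge page is sitting on the free list" bit is stashed in the page_private field of one of the compound page's tail pages (head + 4), which is otherwise unused there. A toy analogue with a plain struct array standing in for the head page plus its tail pages:

  #include <stdbool.h>
  #include <stdio.h>

  struct page { unsigned long private; };

  /* Same trick as the patch: keep the flag in the 5th subpage. */
  static bool PageHugeFreed(struct page *head)      { return head[4].private == -1UL; }
  static void SetPageHugeFreed(struct page *head)   { head[4].private = -1UL; }
  static void ClearPageHugeFreed(struct page *head) { head[4].private = 0; }

  int main(void)
  {
      struct page huge[8] = { { 0 } };  /* head page plus 7 tail pages */

      SetPageHugeFreed(huge);           /* as in enqueue_huge_page() */
      printf("%d\n", PageHugeFreed(huge));   /* 1 */
      ClearPageHugeFreed(huge);         /* as in dequeue_huge_page_node_exact() */
      printf("%d\n", PageHugeFreed(huge));   /* 0 */
      return 0;
  }
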
+diff --git a/mm/memblock.c b/mm/memblock.c
+index b68ee86788af9..10bd7d1ef0f49 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -275,14 +275,6 @@ __memblock_find_range_top_down(phys_addr_t start, phys_addr_t end,
+ *
+ * Find @size free area aligned to @align in the specified range and node.
+ *
+- * When allocation direction is bottom-up, the @start should be greater
+- * than the end of the kernel image. Otherwise, it will be trimmed. The
+- * reason is that we want the bottom-up allocation just near the kernel
+- * image so it is highly likely that the allocated memory and the kernel
+- * will reside in the same node.
+- *
+- * If bottom-up allocation failed, will try to allocate memory top-down.
+- *
+ * Return:
+ * Found address on success, 0 on failure.
+ */
+@@ -291,8 +283,6 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
+ phys_addr_t end, int nid,
+ enum memblock_flags flags)
+ {
+- phys_addr_t kernel_end, ret;
+-
+ /* pump up @end */
+ if (end == MEMBLOCK_ALLOC_ACCESSIBLE ||
+ end == MEMBLOCK_ALLOC_KASAN)
+@@ -301,40 +291,13 @@ static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
+ /* avoid allocating the first page */
+ start = max_t(phys_addr_t, start, PAGE_SIZE);
+ end = max(start, end);
+- kernel_end = __pa_symbol(_end);
+-
+- /*
+- * try bottom-up allocation only when bottom-up mode
+- * is set and @end is above the kernel image.
+- */
+- if (memblock_bottom_up() && end > kernel_end) {
+- phys_addr_t bottom_up_start;
+-
+- /* make sure we will allocate above the kernel */
+- bottom_up_start = max(start, kernel_end);
+
+- /* ok, try bottom-up allocation first */
+- ret = __memblock_find_range_bottom_up(bottom_up_start, end,
+- size, align, nid, flags);
+- if (ret)
+- return ret;
+-
+- /*
+- * we always limit bottom-up allocation above the kernel,
+- * but top-down allocation doesn't have the limit, so
+- * retrying top-down allocation may succeed when bottom-up
+- * allocation failed.
+- *
+- * bottom-up allocation is expected to be fail very rarely,
+- * so we use WARN_ONCE() here to see the stack trace if
+- * fail happens.
+- */
+- WARN_ONCE(IS_ENABLED(CONFIG_MEMORY_HOTREMOVE),
+- "memblock: bottom-up allocation failed, memory hotremove may be affected\n");
+- }
+-
+- return __memblock_find_range_top_down(start, end, size, align, nid,
+- flags);
++ if (memblock_bottom_up())
++ return __memblock_find_range_bottom_up(start, end, size, align,
++ nid, flags);
++ else
++ return __memblock_find_range_top_down(start, end, size, align,
++ nid, flags);
+ }
+
+ /**
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 9500d28a43b0e..2fe4bbb6b80cf 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1245,13 +1245,14 @@ static int __neigh_update(struct neighbour *neigh, const u8 *lladdr,
+ old = neigh->nud_state;
+ err = -EPERM;
+
+- if (!(flags & NEIGH_UPDATE_F_ADMIN) &&
+- (old & (NUD_NOARP | NUD_PERMANENT)))
+- goto out;
+ if (neigh->dead) {
+ NL_SET_ERR_MSG(extack, "Neighbor entry is now dead");
++ new = old;
+ goto out;
+ }
++ if (!(flags & NEIGH_UPDATE_F_ADMIN) &&
++ (old & (NUD_NOARP | NUD_PERMANENT)))
++ goto out;
+
+ ext_learn_change = neigh_update_ext_learned(neigh, flags, ¬ify);
+
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 64594aa755f05..76a420c76f16e 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -317,7 +317,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
+ }
+
+ dev->needed_headroom = t_hlen + hlen;
+- mtu -= (dev->hard_header_len + t_hlen);
++ mtu -= t_hlen;
+
+ if (mtu < IPV4_MIN_MTU)
+ mtu = IPV4_MIN_MTU;
+@@ -347,7 +347,7 @@ static struct ip_tunnel *ip_tunnel_create(struct net *net,
+ nt = netdev_priv(dev);
+ t_hlen = nt->hlen + sizeof(struct iphdr);
+ dev->min_mtu = ETH_MIN_MTU;
+- dev->max_mtu = IP_MAX_MTU - dev->hard_header_len - t_hlen;
++ dev->max_mtu = IP_MAX_MTU - t_hlen;
+ ip_tunnel_add(itn, nt);
+ return nt;
+
+@@ -488,11 +488,10 @@ static int tnl_update_pmtu(struct net_device *dev, struct sk_buff *skb,
+ int mtu;
+
+ tunnel_hlen = md ? tunnel_hlen : tunnel->hlen;
+- pkt_size = skb->len - tunnel_hlen - dev->hard_header_len;
++ pkt_size = skb->len - tunnel_hlen;
+
+ if (df)
+- mtu = dst_mtu(&rt->dst) - dev->hard_header_len
+- - sizeof(struct iphdr) - tunnel_hlen;
++ mtu = dst_mtu(&rt->dst) - (sizeof(struct iphdr) + tunnel_hlen);
+ else
+ mtu = skb_valid_dst(skb) ? dst_mtu(skb_dst(skb)) : dev->mtu;
+
+@@ -972,7 +971,7 @@ int __ip_tunnel_change_mtu(struct net_device *dev, int new_mtu, bool strict)
+ {
+ struct ip_tunnel *tunnel = netdev_priv(dev);
+ int t_hlen = tunnel->hlen + sizeof(struct iphdr);
+- int max_mtu = IP_MAX_MTU - dev->hard_header_len - t_hlen;
++ int max_mtu = IP_MAX_MTU - t_hlen;
+
+ if (new_mtu < ETH_MIN_MTU)
+ return -EINVAL;
+@@ -1149,10 +1148,9 @@ int ip_tunnel_newlink(struct net_device *dev, struct nlattr *tb[],
+
+ mtu = ip_tunnel_bind_dev(dev);
+ if (tb[IFLA_MTU]) {
+- unsigned int max = IP_MAX_MTU - dev->hard_header_len - nt->hlen;
++ unsigned int max = IP_MAX_MTU - (nt->hlen + sizeof(struct iphdr));
+
+- mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU,
+- (unsigned int)(max - sizeof(struct iphdr)));
++ mtu = clamp(dev->mtu, (unsigned int)ETH_MIN_MTU, max);
+ }
+
+ err = dev_set_mtu(dev, mtu);
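
The MTU hunks above all correct the same arithmetic: for an IP tunnel device, only the tunnel header (the outer struct iphdr plus the encapsulation header, t_hlen) consumes path MTU, and dev->hard_header_len was being subtracted a second time, needlessly shrinking max_mtu and the computed PMTU. A quick illustration of the corrected bound; the 8-byte GRE-like encapsulation header and the Ethernet hard_header_len are assumptions for the example:

  #include <stdio.h>

  #define IP_MAX_MTU 65535

  int main(void)
  {
      int iphdr_len = 20;                 /* sizeof(struct iphdr) */
      int encap_hlen = 8;                 /* assumed GRE-like header */
      int t_hlen = encap_hlen + iphdr_len;
      int hard_header_len = 14;           /* e.g. an Ethernet header */

      printf("old max_mtu: %d\n", IP_MAX_MTU - hard_header_len - t_hlen);
      printf("new max_mtu: %d\n", IP_MAX_MTU - t_hlen);
      return 0;
  }
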
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index c62805cd31319..cfdaac4a57e41 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -184,8 +184,67 @@ out_unlock:
+ }
+ EXPORT_SYMBOL(skb_udp_tunnel_segment);
+
++static void __udpv4_gso_segment_csum(struct sk_buff *seg,
++ __be32 *oldip, __be32 *newip,
++ __be16 *oldport, __be16 *newport)
++{
++ struct udphdr *uh;
++ struct iphdr *iph;
++
++ if (*oldip == *newip && *oldport == *newport)
++ return;
++
++ uh = udp_hdr(seg);
++ iph = ip_hdr(seg);
++
++ if (uh->check) {
++ inet_proto_csum_replace4(&uh->check, seg, *oldip, *newip,
++ true);
++ inet_proto_csum_replace2(&uh->check, seg, *oldport, *newport,
++ false);
++ if (!uh->check)
++ uh->check = CSUM_MANGLED_0;
++ }
++ *oldport = *newport;
++
++ csum_replace4(&iph->check, *oldip, *newip);
++ *oldip = *newip;
++}
++
++static struct sk_buff *__udpv4_gso_segment_list_csum(struct sk_buff *segs)
++{
++ struct sk_buff *seg;
++ struct udphdr *uh, *uh2;
++ struct iphdr *iph, *iph2;
++
++ seg = segs;
++ uh = udp_hdr(seg);
++ iph = ip_hdr(seg);
++
++ if ((udp_hdr(seg)->dest == udp_hdr(seg->next)->dest) &&
++ (udp_hdr(seg)->source == udp_hdr(seg->next)->source) &&
++ (ip_hdr(seg)->daddr == ip_hdr(seg->next)->daddr) &&
++ (ip_hdr(seg)->saddr == ip_hdr(seg->next)->saddr))
++ return segs;
++
++ while ((seg = seg->next)) {
++ uh2 = udp_hdr(seg);
++ iph2 = ip_hdr(seg);
++
++ __udpv4_gso_segment_csum(seg,
++ &iph2->saddr, &iph->saddr,
++ &uh2->source, &uh->source);
++ __udpv4_gso_segment_csum(seg,
++ &iph2->daddr, &iph->daddr,
++ &uh2->dest, &uh->dest);
++ }
++
++ return segs;
++}
++
+ static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
+- netdev_features_t features)
++ netdev_features_t features,
++ bool is_ipv6)
+ {
+ unsigned int mss = skb_shinfo(skb)->gso_size;
+
+@@ -195,11 +254,11 @@ static struct sk_buff *__udp_gso_segment_list(struct sk_buff *skb,
+
+ udp_hdr(skb)->len = htons(sizeof(struct udphdr) + mss);
+
+- return skb;
++ return is_ipv6 ? skb : __udpv4_gso_segment_list_csum(skb);
+ }
+
+ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+- netdev_features_t features)
++ netdev_features_t features, bool is_ipv6)
+ {
+ struct sock *sk = gso_skb->sk;
+ unsigned int sum_truesize = 0;
+@@ -211,7 +270,7 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+ __be16 newlen;
+
+ if (skb_shinfo(gso_skb)->gso_type & SKB_GSO_FRAGLIST)
+- return __udp_gso_segment_list(gso_skb, features);
++ return __udp_gso_segment_list(gso_skb, features, is_ipv6);
+
+ mss = skb_shinfo(gso_skb)->gso_size;
+ if (gso_skb->len <= sizeof(*uh) + mss)
+@@ -325,7 +384,7 @@ static struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
+ goto out;
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+- return __udp_gso_segment(skb, features);
++ return __udp_gso_segment(skb, features, false);
+
+ mss = skb_shinfo(skb)->gso_size;
+ if (unlikely(skb->len <= mss))
+diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
+index f9e888d1b9af8..ebee748f25b9e 100644
+--- a/net/ipv6/udp_offload.c
++++ b/net/ipv6/udp_offload.c
+@@ -46,7 +46,7 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
+ goto out;
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4)
+- return __udp_gso_segment(skb, features);
++ return __udp_gso_segment(skb, features, true);
+
+ /* Do software UFO. Complete and fill in the UDP checksum as HW cannot
+ * do checksum of UDP packets sent as multiple IP fragments.
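
When a GSO fraglist is forwarded through NAT, the head segment's addresses and ports have been rewritten but the trailing segments still carry the originals; the new __udpv4_gso_segment_csum() patches each trailing segment and fixes its checksums incrementally with inet_proto_csum_replace*()/csum_replace4() rather than recomputing them. The incremental update is the RFC 1624 ones'-complement identity sum' = ~(~sum + ~old + new). A self-contained sketch showing that the patched checksum matches a full recompute; the helper names mimic, but are not, the kernel's:

  #include <stdint.h>
  #include <stdio.h>

  static uint16_t csum_fold32(uint32_t s)
  {
      while (s >> 16)                    /* fold carries back in */
          s = (s & 0xffff) + (s >> 16);
      return (uint16_t)s;
  }

  /* RFC 1624 eqn. 3: patch a checksum for one changed 16-bit word. */
  static uint16_t csum_patch16(uint16_t check, uint16_t old, uint16_t new)
  {
      uint32_t s = (uint16_t)~check;
      s += (uint16_t)~old;
      s += new;
      return (uint16_t)~csum_fold32(s);
  }

  int main(void)
  {
      uint16_t check   = (uint16_t)~csum_fold32(0x1111 + 0x2222);
      uint16_t patched = csum_patch16(check, 0x2222, 0x3333);
      uint16_t full    = (uint16_t)~csum_fold32(0x1111 + 0x3333);

      printf("%04x %04x\n", patched, full);  /* identical */
      return 0;
  }
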
+diff --git a/net/lapb/lapb_out.c b/net/lapb/lapb_out.c
+index 7a4d0715d1c32..a966d29c772d9 100644
+--- a/net/lapb/lapb_out.c
++++ b/net/lapb/lapb_out.c
+@@ -82,7 +82,8 @@ void lapb_kick(struct lapb_cb *lapb)
+ skb = skb_dequeue(&lapb->write_queue);
+
+ do {
+- if ((skbn = skb_clone(skb, GFP_ATOMIC)) == NULL) {
++ skbn = skb_copy(skb, GFP_ATOMIC);
++ if (!skbn) {
+ skb_queue_head(&lapb->write_queue, skb);
+ break;
+ }
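
The skb_clone()-to-skb_copy() switch matters because a clone shares the underlying data buffer with the original: LAPB writes sequence numbers into the header of the skb it transmits, and doing that through a clone would also corrupt the copy kept on the retransmit queue, whereas skb_copy() gives the transmit path its own buffer. A toy shallow-copy versus deep-copy analogue (not the skb API, just the aliasing behaviour):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct buf { char *data; };

  static struct buf clone_buf(struct buf *b)   /* shares data, like skb_clone() */
  {
      return (struct buf){ b->data };
  }

  static struct buf copy_buf(struct buf *b)    /* owns data, like skb_copy() */
  {
      char *d = malloc(strlen(b->data) + 1);
      strcpy(d, b->data);
      return (struct buf){ d };
  }

  int main(void)
  {
      char payload[] = "N(S)=0";
      struct buf orig = { payload };

      struct buf c1 = clone_buf(&orig);
      c1.data[5] = '7';                        /* also rewrites orig! */
      printf("after clone edit: %s\n", orig.data);

      struct buf c2 = copy_buf(&orig);
      c2.data[5] = '9';                        /* orig stays intact */
      printf("after copy edit:  %s\n", orig.data);
      free(c2.data);
      return 0;
  }
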
+diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
+index c9a8a2433e8ac..48322e45e7ddb 100644
+--- a/net/mac80211/driver-ops.c
++++ b/net/mac80211/driver-ops.c
+@@ -125,8 +125,11 @@ int drv_sta_state(struct ieee80211_local *local,
+ } else if (old_state == IEEE80211_STA_AUTH &&
+ new_state == IEEE80211_STA_ASSOC) {
+ ret = drv_sta_add(local, sdata, &sta->sta);
+- if (ret == 0)
++ if (ret == 0) {
+ sta->uploaded = true;
++ if (rcu_access_pointer(sta->sta.rates))
++ drv_sta_rate_tbl_update(local, sdata, &sta->sta);
++ }
+ } else if (old_state == IEEE80211_STA_ASSOC &&
+ new_state == IEEE80211_STA_AUTH) {
+ drv_sta_remove(local, sdata, &sta->sta);
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index 45927202c71c6..63652c39c8e07 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -960,7 +960,8 @@ int rate_control_set_rates(struct ieee80211_hw *hw,
+ if (old)
+ kfree_rcu(old, rcu_head);
+
+- drv_sta_rate_tbl_update(hw_to_local(hw), sta->sdata, pubsta);
++ if (sta->uploaded)
++ drv_sta_rate_tbl_update(hw_to_local(hw), sta->sdata, pubsta);
+
+ ieee80211_sta_set_expected_throughput(pubsta, sta_get_expected_throughput(sta));
+
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index 0a2f4817ec6cf..41671af6b33f9 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -990,7 +990,7 @@ static int __init af_rxrpc_init(void)
+ goto error_security;
+ }
+
+- ret = register_pernet_subsys(&rxrpc_net_ops);
++ ret = register_pernet_device(&rxrpc_net_ops);
+ if (ret)
+ goto error_pernet;
+
+@@ -1035,7 +1035,7 @@ error_key_type:
+ error_sock:
+ proto_unregister(&rxrpc_proto);
+ error_proto:
+- unregister_pernet_subsys(&rxrpc_net_ops);
++ unregister_pernet_device(&rxrpc_net_ops);
+ error_pernet:
+ rxrpc_exit_security();
+ error_security:
+@@ -1057,7 +1057,7 @@ static void __exit af_rxrpc_exit(void)
+ unregister_key_type(&key_type_rxrpc);
+ sock_unregister(PF_RXRPC);
+ proto_unregister(&rxrpc_proto);
+- unregister_pernet_subsys(&rxrpc_net_ops);
++ unregister_pernet_device(&rxrpc_net_ops);
+ ASSERTCMP(atomic_read(&rxrpc_n_tx_skbs), ==, 0);
+ ASSERTCMP(atomic_read(&rxrpc_n_rx_skbs), ==, 0);
+
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 4404c491eb388..fa7b7ae2c2c5f 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1113,14 +1113,15 @@ static int svc_tcp_sendmsg(struct socket *sock, struct msghdr *msg,
+ unsigned int offset, len, remaining;
+ struct bio_vec *bvec;
+
+- bvec = xdr->bvec;
+- offset = xdr->page_base;
++ bvec = xdr->bvec + (xdr->page_base >> PAGE_SHIFT);
++ offset = offset_in_page(xdr->page_base);
+ remaining = xdr->page_len;
+ flags = MSG_MORE | MSG_SENDPAGE_NOTLAST;
+ while (remaining > 0) {
+ if (remaining <= PAGE_SIZE && tail->iov_len == 0)
+ flags = 0;
+- len = min(remaining, bvec->bv_len);
++
++ len = min(remaining, bvec->bv_len - offset);
+ ret = kernel_sendpage(sock, bvec->bv_page,
+ bvec->bv_offset + offset,
+ len, flags);
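
The svc_tcp_sendmsg() fix addresses xdr->page_base values larger than a page: the base must be decomposed into a bio_vec index (page_base >> PAGE_SHIFT) and an in-page remainder (offset_in_page()), instead of being applied as a raw byte offset within the first bvec, and the per-iteration length must likewise subtract the in-page offset. A small sketch of the decomposition, assuming 4 KiB pages:

  #include <stdio.h>

  #define PAGE_SHIFT 12
  #define PAGE_SIZE  (1UL << PAGE_SHIFT)
  #define offset_in_page(p) ((unsigned long)(p) & (PAGE_SIZE - 1))

  int main(void)
  {
      unsigned long page_base = 9000;   /* more than one full page */

      printf("bvec index:     %lu\n", page_base >> PAGE_SHIFT);   /* 2 */
      printf("in-page offset: %lu\n", offset_in_page(page_base)); /* 808 */
      return 0;
  }
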
+diff --git a/scripts/Makefile b/scripts/Makefile
+index b5418ec587fbd..9de3c03b94aa7 100644
+--- a/scripts/Makefile
++++ b/scripts/Makefile
+@@ -3,6 +3,9 @@
+ # scripts contains sources for various helper programs used throughout
+ # the kernel for the build process.
+
++CRYPTO_LIBS = $(shell pkg-config --libs libcrypto 2> /dev/null || echo -lcrypto)
++CRYPTO_CFLAGS = $(shell pkg-config --cflags libcrypto 2> /dev/null)
++
+ hostprogs-always-$(CONFIG_BUILD_BIN2C) += bin2c
+ hostprogs-always-$(CONFIG_KALLSYMS) += kallsyms
+ hostprogs-always-$(BUILD_C_RECORDMCOUNT) += recordmcount
+@@ -14,8 +17,9 @@ hostprogs-always-$(CONFIG_SYSTEM_EXTRA_CERTIFICATE) += insert-sys-cert
+
+ HOSTCFLAGS_sorttable.o = -I$(srctree)/tools/include
+ HOSTCFLAGS_asn1_compiler.o = -I$(srctree)/include
+-HOSTLDLIBS_sign-file = -lcrypto
+-HOSTLDLIBS_extract-cert = -lcrypto
++HOSTLDLIBS_sign-file = $(CRYPTO_LIBS)
++HOSTCFLAGS_extract-cert.o = $(CRYPTO_CFLAGS)
++HOSTLDLIBS_extract-cert = $(CRYPTO_LIBS)
+
+ ifdef CONFIG_UNWINDER_ORC
+ ifeq ($(ARCH),x86_64)