From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1696598286.0fa2adcf7c71594b742f6a33f82f22fca6bcade9.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:6.1 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1055_linux-6.1.56.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 0fa2adcf7c71594b742f6a33f82f22fca6bcade9 X-VCS-Branch: 6.1 Date: Fri, 6 Oct 2023 13:18:17 +0000 (UTC) Precedence: 
bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 7bd701c7-b960-4163-9dd3-bdfd841d5389 X-Archives-Hash: df95e9e9ea4d53e1de9f12f9e6c29488 commit: 0fa2adcf7c71594b742f6a33f82f22fca6bcade9 Author: Mike Pagano gentoo org> AuthorDate: Fri Oct 6 13:18:06 2023 +0000 Commit: Mike Pagano gentoo org> CommitDate: Fri Oct 6 13:18:06 2023 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0fa2adcf Linux patch 6.1.56 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1055_linux-6.1.56.patch | 11323 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 11327 insertions(+) diff --git a/0000_README b/0000_README index 3723582e..b5768a62 100644 --- a/0000_README +++ b/0000_README @@ -263,6 +263,10 @@ Patch: 1054_linux-6.1.55.patch From: https://www.kernel.org Desc: Linux 6.1.55 +Patch: 1055_linux-6.1.56.patch +From: https://www.kernel.org +Desc: Linux 6.1.56 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1055_linux-6.1.56.patch b/1055_linux-6.1.56.patch new file mode 100644 index 00000000..c67d69e5 --- /dev/null +++ b/1055_linux-6.1.56.patch @@ -0,0 +1,11323 @@ +diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst +index 2524061836acc..40164f2881e17 100644 +--- a/Documentation/admin-guide/cgroup-v1/memory.rst ++++ b/Documentation/admin-guide/cgroup-v1/memory.rst +@@ -91,8 +91,13 @@ Brief summary of control files. + memory.oom_control set/show oom controls. + memory.numa_stat show the number of memory usage per numa + node +- memory.kmem.limit_in_bytes This knob is deprecated and writing to +- it will return -ENOTSUPP. ++ memory.kmem.limit_in_bytes Deprecated knob to set and read the kernel ++ memory hard limit. 
Kernel hard limit is not ++ supported since 5.16. Writing any value to ++ do file will not have any effect same as if ++ nokmem kernel parameter was specified. ++ Kernel memory is still charged and reported ++ by memory.kmem.usage_in_bytes. + memory.kmem.usage_in_bytes show current kernel memory allocation + memory.kmem.failcnt show the number of kernel memory usage + hits limits +diff --git a/Makefile b/Makefile +index 3d839824a7224..9ceda3dad5eb7 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 6 + PATCHLEVEL = 1 +-SUBLEVEL = 55 ++SUBLEVEL = 56 + EXTRAVERSION = + NAME = Curry Ramen + +diff --git a/arch/arm/boot/dts/am335x-guardian.dts b/arch/arm/boot/dts/am335x-guardian.dts +index f6356266564c8..b357364e93f99 100644 +--- a/arch/arm/boot/dts/am335x-guardian.dts ++++ b/arch/arm/boot/dts/am335x-guardian.dts +@@ -103,8 +103,9 @@ + + }; + +- guardian_beeper: dmtimer-pwm@7 { ++ guardian_beeper: pwm-7 { + compatible = "ti,omap-dmtimer-pwm"; ++ #pwm-cells = <3>; + ti,timers = <&timer7>; + pinctrl-names = "default"; + pinctrl-0 = <&guardian_beeper_pins>; +diff --git a/arch/arm/boot/dts/am3517-evm.dts b/arch/arm/boot/dts/am3517-evm.dts +index 35b653014f2b0..7bab0a9dadb30 100644 +--- a/arch/arm/boot/dts/am3517-evm.dts ++++ b/arch/arm/boot/dts/am3517-evm.dts +@@ -150,7 +150,7 @@ + enable-gpios = <&gpio6 22 GPIO_ACTIVE_HIGH>; /* gpio_182 */ + }; + +- pwm11: dmtimer-pwm@11 { ++ pwm11: pwm-11 { + compatible = "ti,omap-dmtimer-pwm"; + pinctrl-names = "default"; + pinctrl-0 = <&pwm_pins>; +diff --git a/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts b/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts +index f1412ba83defb..0454423fe166c 100644 +--- a/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts ++++ b/arch/arm/boot/dts/bcm4708-linksys-ea6500-v2.dts +@@ -19,7 +19,8 @@ + + memory@0 { + device_type = "memory"; +- reg = <0x00000000 0x08000000>; ++ reg = <0x00000000 0x08000000>, ++ <0x88000000 0x08000000>; + }; + + gpio-keys { 
+diff --git a/arch/arm/boot/dts/exynos4210-i9100.dts b/arch/arm/boot/dts/exynos4210-i9100.dts +index bba85011ecc93..53e023fc1cacf 100644 +--- a/arch/arm/boot/dts/exynos4210-i9100.dts ++++ b/arch/arm/boot/dts/exynos4210-i9100.dts +@@ -201,8 +201,8 @@ + power-on-delay = <10>; + reset-delay = <10>; + +- panel-width-mm = <90>; +- panel-height-mm = <154>; ++ panel-width-mm = <56>; ++ panel-height-mm = <93>; + + display-timings { + timing { +diff --git a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi +index d3da8b1b473b8..e0cbac500e172 100644 +--- a/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi ++++ b/arch/arm/boot/dts/logicpd-torpedo-baseboard.dtsi +@@ -59,7 +59,7 @@ + }; + }; + +- pwm10: dmtimer-pwm { ++ pwm10: pwm-10 { + compatible = "ti,omap-dmtimer-pwm"; + pinctrl-names = "default"; + pinctrl-0 = <&pwm_pins>; +diff --git a/arch/arm/boot/dts/motorola-mapphone-common.dtsi b/arch/arm/boot/dts/motorola-mapphone-common.dtsi +index c7a1f3ffc48ca..d69f0f4b4990d 100644 +--- a/arch/arm/boot/dts/motorola-mapphone-common.dtsi ++++ b/arch/arm/boot/dts/motorola-mapphone-common.dtsi +@@ -133,7 +133,7 @@ + dais = <&mcbsp2_port>, <&mcbsp3_port>; + }; + +- pwm8: dmtimer-pwm-8 { ++ pwm8: pwm-8 { + pinctrl-names = "default"; + pinctrl-0 = <&vibrator_direction_pin>; + +@@ -143,7 +143,7 @@ + ti,clock-source = <0x01>; + }; + +- pwm9: dmtimer-pwm-9 { ++ pwm9: pwm-9 { + pinctrl-names = "default"; + pinctrl-0 = <&vibrator_enable_pin>; + +@@ -352,13 +352,13 @@ + &omap4_pmx_core { + + /* hdmi_hpd.gpio_63 */ +- hdmi_hpd_gpio: pinmux_hdmi_hpd_pins { ++ hdmi_hpd_gpio: hdmi-hpd-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x098, PIN_INPUT | MUX_MODE3) + >; + }; + +- hdq_pins: pinmux_hdq_pins { ++ hdq_pins: hdq-pins { + pinctrl-single,pins = < + /* 0x4a100120 hdq_sio.hdq_sio aa27 */ + OMAP4_IOPAD(0x120, PIN_INPUT | MUX_MODE0) +@@ -366,7 +366,7 @@ + }; + + /* hdmi_cec.hdmi_cec, hdmi_scl.hdmi_scl, hdmi_sda.hdmi_sda */ +- dss_hdmi_pins: 
pinmux_dss_hdmi_pins { ++ dss_hdmi_pins: dss-hdmi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) + OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0) +@@ -380,7 +380,7 @@ + * devices. Off mode value should be tested if we have off mode working + * later on. + */ +- mmc3_pins: pinmux_mmc3_pins { ++ mmc3_pins: mmc3-pins { + pinctrl-single,pins = < + /* 0x4a10008e gpmc_wait2.gpio_100 d23 */ + OMAP4_IOPAD(0x08e, PIN_INPUT | MUX_MODE3) +@@ -406,40 +406,40 @@ + }; + + /* gpmc_ncs0.gpio_50 */ +- poweroff_gpio: pinmux_poweroff_pins { ++ poweroff_gpio: poweroff-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x074, PIN_OUTPUT_PULLUP | MUX_MODE3) + >; + }; + + /* kpd_row0.gpio_178 */ +- tmp105_irq: pinmux_tmp105_irq { ++ tmp105_irq: tmp105-irq-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x18e, PIN_INPUT_PULLUP | MUX_MODE3) + >; + }; + +- usb_gpio_mux_sel1: pinmux_usb_gpio_mux_sel1_pins { ++ usb_gpio_mux_sel1: usb-gpio-mux-sel1-pins { + /* gpio_60 */ + pinctrl-single,pins = < + OMAP4_IOPAD(0x088, PIN_OUTPUT | MUX_MODE3) + >; + }; + +- touchscreen_pins: pinmux_touchscreen_pins { ++ touchscreen_pins: touchscreen-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x180, PIN_OUTPUT | MUX_MODE3) + OMAP4_IOPAD(0x1a0, PIN_INPUT_PULLUP | MUX_MODE3) + >; + }; + +- als_proximity_pins: pinmux_als_proximity_pins { ++ als_proximity_pins: als-proximity-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x18c, PIN_INPUT_PULLUP | MUX_MODE3) + >; + }; + +- usb_mdm6600_pins: pinmux_usb_mdm6600_pins { ++ usb_mdm6600_pins: usb-mdm6600-pins { + pinctrl-single,pins = < + /* enable 0x4a1000d8 usbb1_ulpitll_dat7.gpio_95 ag16 */ + OMAP4_IOPAD(0x0d8, PIN_INPUT | MUX_MODE3) +@@ -476,7 +476,7 @@ + >; + }; + +- usb_ulpi_pins: pinmux_usb_ulpi_pins { ++ usb_ulpi_pins: usb-ulpi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x196, MUX_MODE7) + OMAP4_IOPAD(0x198, MUX_MODE7) +@@ -496,7 +496,7 @@ + }; + + /* usb0_otg_dp and usb0_otg_dm */ +- usb_utmi_pins: pinmux_usb_utmi_pins { ++ 
usb_utmi_pins: usb-utmi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x196, PIN_INPUT | MUX_MODE0) + OMAP4_IOPAD(0x198, PIN_INPUT | MUX_MODE0) +@@ -521,7 +521,7 @@ + * when not used. If needed, we can add rts pin remux later based + * on power measurements. + */ +- uart1_pins: pinmux_uart1_pins { ++ uart1_pins: uart1-pins { + pinctrl-single,pins = < + /* 0x4a10013c mcspi1_cs2.uart1_cts ag23 */ + OMAP4_IOPAD(0x13c, PIN_INPUT_PULLUP | MUX_MODE1) +@@ -538,7 +538,7 @@ + }; + + /* uart3_tx_irtx and uart3_rx_irrx */ +- uart3_pins: pinmux_uart3_pins { ++ uart3_pins: uart3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x196, MUX_MODE7) + OMAP4_IOPAD(0x198, MUX_MODE7) +@@ -557,7 +557,7 @@ + >; + }; + +- uart4_pins: pinmux_uart4_pins { ++ uart4_pins: uart4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x15c, PIN_INPUT | MUX_MODE0) /* uart4_rx */ + OMAP4_IOPAD(0x15e, PIN_OUTPUT | MUX_MODE0) /* uart4_tx */ +@@ -566,7 +566,7 @@ + >; + }; + +- mcbsp2_pins: pinmux_mcbsp2_pins { ++ mcbsp2_pins: mcbsp2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0f6, PIN_INPUT | MUX_MODE0) /* abe_mcbsp2_clkx */ + OMAP4_IOPAD(0x0f8, PIN_INPUT | MUX_MODE0) /* abe_mcbsp2_dr */ +@@ -575,7 +575,7 @@ + >; + }; + +- mcbsp3_pins: pinmux_mcbsp3_pins { ++ mcbsp3_pins: mcbsp3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x106, PIN_INPUT | MUX_MODE1) /* abe_mcbsp3_dr */ + OMAP4_IOPAD(0x108, PIN_OUTPUT | MUX_MODE1) /* abe_mcbsp3_dx */ +@@ -584,13 +584,13 @@ + >; + }; + +- vibrator_direction_pin: pinmux_vibrator_direction_pin { ++ vibrator_direction_pin: vibrator-direction-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x1ce, PIN_OUTPUT | MUX_MODE1) /* dmtimer8_pwm_evt (gpio_27) */ + >; + }; + +- vibrator_enable_pin: pinmux_vibrator_enable_pin { ++ vibrator_enable_pin: vibrator-enable-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0X1d0, PIN_OUTPUT | MUX_MODE1) /* dmtimer9_pwm_evt (gpio_28) */ + >; +@@ -598,7 +598,7 @@ + }; + + &omap4_pmx_wkup { +- usb_gpio_mux_sel2: pinmux_usb_gpio_mux_sel2_pins { ++ 
usb_gpio_mux_sel2: usb-gpio-mux-sel2-pins { + /* gpio_wk0 */ + pinctrl-single,pins = < + OMAP4_IOPAD(0x040, PIN_OUTPUT_PULLDOWN | MUX_MODE3) +@@ -614,12 +614,12 @@ + /* Configure pwm clock source for timers 8 & 9 */ + &timer8 { + assigned-clocks = <&abe_clkctrl OMAP4_TIMER8_CLKCTRL 24>; +- assigned-clock-parents = <&sys_clkin_ck>; ++ assigned-clock-parents = <&sys_32k_ck>; + }; + + &timer9 { + assigned-clocks = <&l4_per_clkctrl OMAP4_TIMER9_CLKCTRL 24>; +- assigned-clock-parents = <&sys_clkin_ck>; ++ assigned-clock-parents = <&sys_32k_ck>; + }; + + /* +diff --git a/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi b/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi +index ce6c235f68ec6..3046ec572632d 100644 +--- a/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi ++++ b/arch/arm/boot/dts/omap-gpmc-smsc911x.dtsi +@@ -8,9 +8,9 @@ + + / { + vddvario: regulator-vddvario { +- compatible = "regulator-fixed"; +- regulator-name = "vddvario"; +- regulator-always-on; ++ compatible = "regulator-fixed"; ++ regulator-name = "vddvario"; ++ regulator-always-on; + }; + + vdd33a: regulator-vdd33a { +diff --git a/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi b/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi +index e7534fe9c53cf..bc8961f3690f0 100644 +--- a/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi ++++ b/arch/arm/boot/dts/omap-gpmc-smsc9221.dtsi +@@ -12,9 +12,9 @@ + + / { + vddvario: regulator-vddvario { +- compatible = "regulator-fixed"; +- regulator-name = "vddvario"; +- regulator-always-on; ++ compatible = "regulator-fixed"; ++ regulator-name = "vddvario"; ++ regulator-always-on; + }; + + vdd33a: regulator-vdd33a { +diff --git a/arch/arm/boot/dts/omap3-cm-t3517.dts b/arch/arm/boot/dts/omap3-cm-t3517.dts +index 3b8349094baa6..f25c0a84a190c 100644 +--- a/arch/arm/boot/dts/omap3-cm-t3517.dts ++++ b/arch/arm/boot/dts/omap3-cm-t3517.dts +@@ -11,12 +11,12 @@ + model = "CompuLab CM-T3517"; + compatible = "compulab,omap3-cm-t3517", "ti,am3517", "ti,omap3"; + +- vmmc: regulator-vmmc { +- compatible = "regulator-fixed"; +- 
regulator-name = "vmmc"; +- regulator-min-microvolt = <3300000>; +- regulator-max-microvolt = <3300000>; +- }; ++ vmmc: regulator-vmmc { ++ compatible = "regulator-fixed"; ++ regulator-name = "vmmc"; ++ regulator-min-microvolt = <3300000>; ++ regulator-max-microvolt = <3300000>; ++ }; + + wl12xx_vmmc2: wl12xx_vmmc2 { + compatible = "regulator-fixed"; +diff --git a/arch/arm/boot/dts/omap3-cpu-thermal.dtsi b/arch/arm/boot/dts/omap3-cpu-thermal.dtsi +index 0da759f8e2c2d..7dd2340bc5e45 100644 +--- a/arch/arm/boot/dts/omap3-cpu-thermal.dtsi ++++ b/arch/arm/boot/dts/omap3-cpu-thermal.dtsi +@@ -12,8 +12,7 @@ cpu_thermal: cpu-thermal { + polling-delay = <1000>; /* milliseconds */ + coefficients = <0 20000>; + +- /* sensor ID */ +- thermal-sensors = <&bandgap 0>; ++ thermal-sensors = <&bandgap>; + + cpu_trips: trips { + cpu_alert0: cpu_alert { +diff --git a/arch/arm/boot/dts/omap3-gta04.dtsi b/arch/arm/boot/dts/omap3-gta04.dtsi +index 2dbee248a126f..e0be0fb23f80f 100644 +--- a/arch/arm/boot/dts/omap3-gta04.dtsi ++++ b/arch/arm/boot/dts/omap3-gta04.dtsi +@@ -147,7 +147,7 @@ + pinctrl-0 = <&backlight_pins>; + }; + +- pwm11: dmtimer-pwm { ++ pwm11: pwm-11 { + compatible = "ti,omap-dmtimer-pwm"; + ti,timers = <&timer11>; + #pwm-cells = <3>; +@@ -332,7 +332,7 @@ + OMAP3_CORE1_IOPAD(0x2108, PIN_OUTPUT | MUX_MODE0) /* dss_data22.dss_data22 */ + OMAP3_CORE1_IOPAD(0x210a, PIN_OUTPUT | MUX_MODE0) /* dss_data23.dss_data23 */ + >; +- }; ++ }; + + gps_pins: pinmux_gps_pins { + pinctrl-single,pins = < +@@ -869,8 +869,8 @@ + }; + + &hdqw1w { +- pinctrl-names = "default"; +- pinctrl-0 = <&hdq_pins>; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&hdq_pins>; + }; + + /* image signal processor within OMAP3 SoC */ +diff --git a/arch/arm/boot/dts/omap3-ldp.dts b/arch/arm/boot/dts/omap3-ldp.dts +index 36fc8805e0c15..85f33bbb566f9 100644 +--- a/arch/arm/boot/dts/omap3-ldp.dts ++++ b/arch/arm/boot/dts/omap3-ldp.dts +@@ -301,5 +301,5 @@ + + &vaux1 { + /* Needed for ads7846 */ +- regulator-name = 
"vcc"; ++ regulator-name = "vcc"; + }; +diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts +index dd79715564498..89ab08d83261a 100644 +--- a/arch/arm/boot/dts/omap3-n900.dts ++++ b/arch/arm/boot/dts/omap3-n900.dts +@@ -156,7 +156,7 @@ + io-channel-names = "temp", "bsi", "vbat"; + }; + +- pwm9: dmtimer-pwm { ++ pwm9: pwm-9 { + compatible = "ti,omap-dmtimer-pwm"; + #pwm-cells = <3>; + ti,timers = <&timer9>; +@@ -236,27 +236,27 @@ + pinctrl-single,pins = < + + /* address lines */ +- OMAP3_CORE1_IOPAD(0x207a, PIN_OUTPUT | MUX_MODE0) /* gpmc_a1.gpmc_a1 */ +- OMAP3_CORE1_IOPAD(0x207c, PIN_OUTPUT | MUX_MODE0) /* gpmc_a2.gpmc_a2 */ +- OMAP3_CORE1_IOPAD(0x207e, PIN_OUTPUT | MUX_MODE0) /* gpmc_a3.gpmc_a3 */ ++ OMAP3_CORE1_IOPAD(0x207a, PIN_OUTPUT | MUX_MODE0) /* gpmc_a1.gpmc_a1 */ ++ OMAP3_CORE1_IOPAD(0x207c, PIN_OUTPUT | MUX_MODE0) /* gpmc_a2.gpmc_a2 */ ++ OMAP3_CORE1_IOPAD(0x207e, PIN_OUTPUT | MUX_MODE0) /* gpmc_a3.gpmc_a3 */ + + /* data lines, gpmc_d0..d7 not muxable according to TRM */ +- OMAP3_CORE1_IOPAD(0x209e, PIN_INPUT | MUX_MODE0) /* gpmc_d8.gpmc_d8 */ +- OMAP3_CORE1_IOPAD(0x20a0, PIN_INPUT | MUX_MODE0) /* gpmc_d9.gpmc_d9 */ +- OMAP3_CORE1_IOPAD(0x20a2, PIN_INPUT | MUX_MODE0) /* gpmc_d10.gpmc_d10 */ +- OMAP3_CORE1_IOPAD(0x20a4, PIN_INPUT | MUX_MODE0) /* gpmc_d11.gpmc_d11 */ +- OMAP3_CORE1_IOPAD(0x20a6, PIN_INPUT | MUX_MODE0) /* gpmc_d12.gpmc_d12 */ +- OMAP3_CORE1_IOPAD(0x20a8, PIN_INPUT | MUX_MODE0) /* gpmc_d13.gpmc_d13 */ +- OMAP3_CORE1_IOPAD(0x20aa, PIN_INPUT | MUX_MODE0) /* gpmc_d14.gpmc_d14 */ +- OMAP3_CORE1_IOPAD(0x20ac, PIN_INPUT | MUX_MODE0) /* gpmc_d15.gpmc_d15 */ ++ OMAP3_CORE1_IOPAD(0x209e, PIN_INPUT | MUX_MODE0) /* gpmc_d8.gpmc_d8 */ ++ OMAP3_CORE1_IOPAD(0x20a0, PIN_INPUT | MUX_MODE0) /* gpmc_d9.gpmc_d9 */ ++ OMAP3_CORE1_IOPAD(0x20a2, PIN_INPUT | MUX_MODE0) /* gpmc_d10.gpmc_d10 */ ++ OMAP3_CORE1_IOPAD(0x20a4, PIN_INPUT | MUX_MODE0) /* gpmc_d11.gpmc_d11 */ ++ OMAP3_CORE1_IOPAD(0x20a6, PIN_INPUT | MUX_MODE0) /* 
gpmc_d12.gpmc_d12 */ ++ OMAP3_CORE1_IOPAD(0x20a8, PIN_INPUT | MUX_MODE0) /* gpmc_d13.gpmc_d13 */ ++ OMAP3_CORE1_IOPAD(0x20aa, PIN_INPUT | MUX_MODE0) /* gpmc_d14.gpmc_d14 */ ++ OMAP3_CORE1_IOPAD(0x20ac, PIN_INPUT | MUX_MODE0) /* gpmc_d15.gpmc_d15 */ + + /* + * gpmc_ncs0, gpmc_nadv_ale, gpmc_noe, gpmc_nwe, gpmc_wait0 not muxable + * according to TRM. OneNAND seems to require PIN_INPUT on clock. + */ +- OMAP3_CORE1_IOPAD(0x20b0, PIN_OUTPUT | MUX_MODE0) /* gpmc_ncs1.gpmc_ncs1 */ +- OMAP3_CORE1_IOPAD(0x20be, PIN_INPUT | MUX_MODE0) /* gpmc_clk.gpmc_clk */ +- >; ++ OMAP3_CORE1_IOPAD(0x20b0, PIN_OUTPUT | MUX_MODE0) /* gpmc_ncs1.gpmc_ncs1 */ ++ OMAP3_CORE1_IOPAD(0x20be, PIN_INPUT | MUX_MODE0) /* gpmc_clk.gpmc_clk */ ++ >; + }; + + i2c1_pins: pinmux_i2c1_pins { +@@ -738,12 +738,12 @@ + + si4713: si4713@63 { + compatible = "silabs,si4713"; +- reg = <0x63>; ++ reg = <0x63>; + +- interrupts-extended = <&gpio2 21 IRQ_TYPE_EDGE_FALLING>; /* 53 */ +- reset-gpios = <&gpio6 3 GPIO_ACTIVE_HIGH>; /* 163 */ +- vio-supply = <&vio>; +- vdd-supply = <&vaux1>; ++ interrupts-extended = <&gpio2 21 IRQ_TYPE_EDGE_FALLING>; /* 53 */ ++ reset-gpios = <&gpio6 3 GPIO_ACTIVE_HIGH>; /* 163 */ ++ vio-supply = <&vio>; ++ vdd-supply = <&vaux1>; + }; + + bq24150a: bq24150a@6b { +diff --git a/arch/arm/boot/dts/omap3-zoom3.dts b/arch/arm/boot/dts/omap3-zoom3.dts +index 0482676d18306..ce58b1f208e81 100644 +--- a/arch/arm/boot/dts/omap3-zoom3.dts ++++ b/arch/arm/boot/dts/omap3-zoom3.dts +@@ -23,9 +23,9 @@ + }; + + vddvario: regulator-vddvario { +- compatible = "regulator-fixed"; +- regulator-name = "vddvario"; +- regulator-always-on; ++ compatible = "regulator-fixed"; ++ regulator-name = "vddvario"; ++ regulator-always-on; + }; + + vdd33a: regulator-vdd33a { +@@ -84,28 +84,28 @@ + + uart1_pins: pinmux_uart1_pins { + pinctrl-single,pins = < +- OMAP3_CORE1_IOPAD(0x2180, PIN_INPUT | MUX_MODE0) /* uart1_cts.uart1_cts */ +- OMAP3_CORE1_IOPAD(0x217e, PIN_OUTPUT | MUX_MODE0) /* uart1_rts.uart1_rts */ +- 
OMAP3_CORE1_IOPAD(0x2182, WAKEUP_EN | PIN_INPUT | MUX_MODE0) /* uart1_rx.uart1_rx */ +- OMAP3_CORE1_IOPAD(0x217c, PIN_OUTPUT | MUX_MODE0) /* uart1_tx.uart1_tx */ ++ OMAP3_CORE1_IOPAD(0x2180, PIN_INPUT | MUX_MODE0) /* uart1_cts.uart1_cts */ ++ OMAP3_CORE1_IOPAD(0x217e, PIN_OUTPUT | MUX_MODE0) /* uart1_rts.uart1_rts */ ++ OMAP3_CORE1_IOPAD(0x2182, WAKEUP_EN | PIN_INPUT | MUX_MODE0) /* uart1_rx.uart1_rx */ ++ OMAP3_CORE1_IOPAD(0x217c, PIN_OUTPUT | MUX_MODE0) /* uart1_tx.uart1_tx */ + >; + }; + + uart2_pins: pinmux_uart2_pins { + pinctrl-single,pins = < +- OMAP3_CORE1_IOPAD(0x2174, PIN_INPUT_PULLUP | MUX_MODE0) /* uart2_cts.uart2_cts */ +- OMAP3_CORE1_IOPAD(0x2176, PIN_OUTPUT | MUX_MODE0) /* uart2_rts.uart2_rts */ +- OMAP3_CORE1_IOPAD(0x217a, PIN_INPUT | MUX_MODE0) /* uart2_rx.uart2_rx */ +- OMAP3_CORE1_IOPAD(0x2178, PIN_OUTPUT | MUX_MODE0) /* uart2_tx.uart2_tx */ ++ OMAP3_CORE1_IOPAD(0x2174, PIN_INPUT_PULLUP | MUX_MODE0) /* uart2_cts.uart2_cts */ ++ OMAP3_CORE1_IOPAD(0x2176, PIN_OUTPUT | MUX_MODE0) /* uart2_rts.uart2_rts */ ++ OMAP3_CORE1_IOPAD(0x217a, PIN_INPUT | MUX_MODE0) /* uart2_rx.uart2_rx */ ++ OMAP3_CORE1_IOPAD(0x2178, PIN_OUTPUT | MUX_MODE0) /* uart2_tx.uart2_tx */ + >; + }; + + uart3_pins: pinmux_uart3_pins { + pinctrl-single,pins = < +- OMAP3_CORE1_IOPAD(0x219a, PIN_INPUT_PULLDOWN | MUX_MODE0) /* uart3_cts_rctx.uart3_cts_rctx */ +- OMAP3_CORE1_IOPAD(0x219c, PIN_OUTPUT | MUX_MODE0) /* uart3_rts_sd.uart3_rts_sd */ +- OMAP3_CORE1_IOPAD(0x219e, PIN_INPUT | MUX_MODE0) /* uart3_rx_irrx.uart3_rx_irrx */ +- OMAP3_CORE1_IOPAD(0x21a0, PIN_OUTPUT | MUX_MODE0) /* uart3_tx_irtx.uart3_tx_irtx */ ++ OMAP3_CORE1_IOPAD(0x219a, PIN_INPUT_PULLDOWN | MUX_MODE0) /* uart3_cts_rctx.uart3_cts_rctx */ ++ OMAP3_CORE1_IOPAD(0x219c, PIN_OUTPUT | MUX_MODE0) /* uart3_rts_sd.uart3_rts_sd */ ++ OMAP3_CORE1_IOPAD(0x219e, PIN_INPUT | MUX_MODE0) /* uart3_rx_irrx.uart3_rx_irrx */ ++ OMAP3_CORE1_IOPAD(0x21a0, PIN_OUTPUT | MUX_MODE0) /* uart3_tx_irtx.uart3_tx_irtx */ + >; + }; + +@@ -205,22 
+205,22 @@ + }; + + &uart1 { +- pinctrl-names = "default"; +- pinctrl-0 = <&uart1_pins>; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&uart1_pins>; + }; + + &uart2 { +- pinctrl-names = "default"; +- pinctrl-0 = <&uart2_pins>; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&uart2_pins>; + }; + + &uart3 { +- pinctrl-names = "default"; +- pinctrl-0 = <&uart3_pins>; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&uart3_pins>; + }; + + &uart4 { +- status = "disabled"; ++ status = "disabled"; + }; + + &usb_otg_hs { +diff --git a/arch/arm/boot/dts/omap4-cpu-thermal.dtsi b/arch/arm/boot/dts/omap4-cpu-thermal.dtsi +index 4d7eeb133dadd..d484ec1e4fd86 100644 +--- a/arch/arm/boot/dts/omap4-cpu-thermal.dtsi ++++ b/arch/arm/boot/dts/omap4-cpu-thermal.dtsi +@@ -12,21 +12,24 @@ cpu_thermal: cpu_thermal { + polling-delay-passive = <250>; /* milliseconds */ + polling-delay = <1000>; /* milliseconds */ + +- /* sensor ID */ +- thermal-sensors = <&bandgap 0>; ++ /* ++ * See 44xx files for single sensor addressing, omap5 and dra7 need ++ * also sensor ID for addressing. 
++ */ ++ thermal-sensors = <&bandgap 0>; + + cpu_trips: trips { +- cpu_alert0: cpu_alert { +- temperature = <100000>; /* millicelsius */ +- hysteresis = <2000>; /* millicelsius */ +- type = "passive"; +- }; +- cpu_crit: cpu_crit { +- temperature = <125000>; /* millicelsius */ +- hysteresis = <2000>; /* millicelsius */ +- type = "critical"; +- }; +- }; ++ cpu_alert0: cpu_alert { ++ temperature = <100000>; /* millicelsius */ ++ hysteresis = <2000>; /* millicelsius */ ++ type = "passive"; ++ }; ++ cpu_crit: cpu_crit { ++ temperature = <125000>; /* millicelsius */ ++ hysteresis = <2000>; /* millicelsius */ ++ type = "critical"; ++ }; ++ }; + + cpu_cooling_maps: cooling-maps { + map0 { +diff --git a/arch/arm/boot/dts/omap4-duovero-parlor.dts b/arch/arm/boot/dts/omap4-duovero-parlor.dts +index b294c22177cbf..6d1beb453234e 100644 +--- a/arch/arm/boot/dts/omap4-duovero-parlor.dts ++++ b/arch/arm/boot/dts/omap4-duovero-parlor.dts +@@ -62,33 +62,33 @@ + &smsc_pins + >; + +- led_pins: pinmux_led_pins { ++ led_pins: led-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x116, PIN_OUTPUT | MUX_MODE3) /* abe_dmic_din3.gpio_122 */ + >; + }; + +- button_pins: pinmux_button_pins { ++ button_pins: button-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x114, PIN_INPUT_PULLUP | MUX_MODE3) /* abe_dmic_din2.gpio_121 */ + >; + }; + +- i2c2_pins: pinmux_i2c2_pins { ++ i2c2_pins: i2c2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x126, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_scl */ + OMAP4_IOPAD(0x128, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_sda */ + >; + }; + +- i2c3_pins: pinmux_i2c3_pins { ++ i2c3_pins: i2c3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12a, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_scl */ + OMAP4_IOPAD(0x12c, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_sda */ + >; + }; + +- smsc_pins: pinmux_smsc_pins { ++ smsc_pins: smsc-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x068, PIN_INPUT | MUX_MODE3) /* gpmc_a20.gpio_44: IRQ */ + OMAP4_IOPAD(0x06a, PIN_INPUT_PULLUP | MUX_MODE3) /* 
gpmc_a21.gpio_45: nReset */ +@@ -96,7 +96,7 @@ + >; + }; + +- dss_hdmi_pins: pinmux_dss_hdmi_pins { ++ dss_hdmi_pins: dss-hdmi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x098, PIN_INPUT | MUX_MODE3) /* hdmi_hpd.gpio_63 */ + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) /* hdmi_cec.hdmi_cec */ +diff --git a/arch/arm/boot/dts/omap4-duovero.dtsi b/arch/arm/boot/dts/omap4-duovero.dtsi +index 805dfd40030dc..b8af455b411a9 100644 +--- a/arch/arm/boot/dts/omap4-duovero.dtsi ++++ b/arch/arm/boot/dts/omap4-duovero.dtsi +@@ -73,14 +73,14 @@ + &hsusbb1_pins + >; + +- twl6040_pins: pinmux_twl6040_pins { ++ twl6040_pins: twl6040-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x166, PIN_OUTPUT | MUX_MODE3) /* usbb2_ulpitll_nxt.gpio_160 */ + OMAP4_IOPAD(0x1a0, PIN_INPUT | MUX_MODE0) /* sys_nirq2.sys_nirq2 */ + >; + }; + +- mcbsp1_pins: pinmux_mcbsp1_pins { ++ mcbsp1_pins: mcbsp1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0fe, PIN_INPUT | MUX_MODE0) /* abe_mcbsp1_clkx.abe_mcbsp1_clkx */ + OMAP4_IOPAD(0x100, PIN_INPUT_PULLDOWN | MUX_MODE0) /* abe_mcbsp1_dr.abe_mcbsp1_dr */ +@@ -89,7 +89,7 @@ + >; + }; + +- hsusbb1_pins: pinmux_hsusbb1_pins { ++ hsusbb1_pins: hsusbb1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0c2, PIN_INPUT_PULLDOWN | MUX_MODE4) /* usbb1_ulpitll_clk.usbb1_ulpiphy_clk */ + OMAP4_IOPAD(0x0c4, PIN_OUTPUT | MUX_MODE4) /* usbb1_ulpitll_stp.usbb1_ulpiphy_stp */ +@@ -106,34 +106,34 @@ + >; + }; + +- hsusb1phy_pins: pinmux_hsusb1phy_pins { ++ hsusb1phy_pins: hsusb1phy-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x08c, PIN_OUTPUT | MUX_MODE3) /* gpmc_wait1.gpio_62 */ + >; + }; + +- w2cbw0015_pins: pinmux_w2cbw0015_pins { ++ w2cbw0015_pins: w2cbw0015-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x066, PIN_OUTPUT | MUX_MODE3) /* gpmc_a19.gpio_43 */ + OMAP4_IOPAD(0x07a, PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */ + >; + }; + +- i2c1_pins: pinmux_i2c1_pins { ++ i2c1_pins: i2c1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x122, PIN_INPUT_PULLUP | MUX_MODE0) 
/* i2c1_scl */ + OMAP4_IOPAD(0x124, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_sda */ + >; + }; + +- i2c4_pins: pinmux_i2c4_pins { ++ i2c4_pins: i2c4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12e, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_scl */ + OMAP4_IOPAD(0x130, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */ + >; + }; + +- mmc1_pins: pinmux_mmc1_pins { ++ mmc1_pins: mmc1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0e2, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_clk */ + OMAP4_IOPAD(0x0e4, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmcc1_cmd */ +@@ -144,7 +144,7 @@ + >; + }; + +- mmc5_pins: pinmux_mmc5_pins { ++ mmc5_pins: mmc5-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x148, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_clk */ + OMAP4_IOPAD(0x14a, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmcc5_cmd */ +diff --git a/arch/arm/boot/dts/omap4-kc1.dts b/arch/arm/boot/dts/omap4-kc1.dts +index e59d17b25a1d9..c6b79ba8bbc91 100644 +--- a/arch/arm/boot/dts/omap4-kc1.dts ++++ b/arch/arm/boot/dts/omap4-kc1.dts +@@ -35,42 +35,42 @@ + &omap4_pmx_core { + pinctrl-names = "default"; + +- uart3_pins: pinmux_uart3_pins { ++ uart3_pins: uart3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x144, PIN_INPUT | MUX_MODE0) /* uart3_rx_irrx */ + OMAP4_IOPAD(0x146, PIN_OUTPUT | MUX_MODE0) /* uart3_tx_irtx */ + >; + }; + +- i2c1_pins: pinmux_i2c1_pins { ++ i2c1_pins: i2c1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x122, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_scl */ + OMAP4_IOPAD(0x124, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_sda */ + >; + }; + +- i2c2_pins: pinmux_i2c2_pins { ++ i2c2_pins: i2c2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x126, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_scl */ + OMAP4_IOPAD(0x128, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_sda */ + >; + }; + +- i2c3_pins: pinmux_i2c3_pins { ++ i2c3_pins: i2c3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12a, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_scl */ + OMAP4_IOPAD(0x12c, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_sda */ + >; + }; + +- 
i2c4_pins: pinmux_i2c4_pins { ++ i2c4_pins: i2c4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12e, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_scl */ + OMAP4_IOPAD(0x130, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */ + >; + }; + +- mmc2_pins: pinmux_mmc2_pins { ++ mmc2_pins: mmc2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x040, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat0 */ + OMAP4_IOPAD(0x042, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat1 */ +@@ -85,7 +85,7 @@ + >; + }; + +- usb_otg_hs_pins: pinmux_usb_otg_hs_pins { ++ usb_otg_hs_pins: usb-otg-hs-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x194, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* usba0_otg_ce */ + OMAP4_IOPAD(0x196, PIN_INPUT | MUX_MODE0) /* usba0_otg_dp */ +diff --git a/arch/arm/boot/dts/omap4-mcpdm.dtsi b/arch/arm/boot/dts/omap4-mcpdm.dtsi +index 915a9b31a33b4..03ade47431fbe 100644 +--- a/arch/arm/boot/dts/omap4-mcpdm.dtsi ++++ b/arch/arm/boot/dts/omap4-mcpdm.dtsi +@@ -7,7 +7,7 @@ + */ + + &omap4_pmx_core { +- mcpdm_pins: pinmux_mcpdm_pins { ++ mcpdm_pins: mcpdm-pins { + pinctrl-single,pins = < + /* 0x4a100106 abe_pdm_ul_data.abe_pdm_ul_data ag25 */ + OMAP4_IOPAD(0x106, PIN_INPUT_PULLDOWN | MUX_MODE0) +diff --git a/arch/arm/boot/dts/omap4-panda-common.dtsi b/arch/arm/boot/dts/omap4-panda-common.dtsi +index 518652a599bd7..53b99004b19cf 100644 +--- a/arch/arm/boot/dts/omap4-panda-common.dtsi ++++ b/arch/arm/boot/dts/omap4-panda-common.dtsi +@@ -237,14 +237,14 @@ + &hsusbb1_pins + >; + +- twl6040_pins: pinmux_twl6040_pins { ++ twl6040_pins: twl6040-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x120, PIN_OUTPUT | MUX_MODE3) /* hdq_sio.gpio_127 */ + OMAP4_IOPAD(0x1a0, PIN_INPUT | MUX_MODE0) /* sys_nirq2.sys_nirq2 */ + >; + }; + +- mcbsp1_pins: pinmux_mcbsp1_pins { ++ mcbsp1_pins: mcbsp1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0fe, PIN_INPUT | MUX_MODE0) /* abe_mcbsp1_clkx.abe_mcbsp1_clkx */ + OMAP4_IOPAD(0x100, PIN_INPUT_PULLDOWN | MUX_MODE0) /* abe_mcbsp1_dr.abe_mcbsp1_dr */ +@@ -253,7 +253,7 @@ + 
>; + }; + +- dss_dpi_pins: pinmux_dss_dpi_pins { ++ dss_dpi_pins: dss-dpi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x162, PIN_OUTPUT | MUX_MODE5) /* dispc2_data23 */ + OMAP4_IOPAD(0x164, PIN_OUTPUT | MUX_MODE5) /* dispc2_data22 */ +@@ -288,13 +288,13 @@ + >; + }; + +- tfp410_pins: pinmux_tfp410_pins { ++ tfp410_pins: tfp410-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x184, PIN_OUTPUT | MUX_MODE3) /* gpio_0 */ + >; + }; + +- dss_hdmi_pins: pinmux_dss_hdmi_pins { ++ dss_hdmi_pins: dss-hdmi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) /* hdmi_cec.hdmi_cec */ + OMAP4_IOPAD(0x09c, PIN_INPUT_PULLUP | MUX_MODE0) /* hdmi_scl.hdmi_scl */ +@@ -302,7 +302,7 @@ + >; + }; + +- tpd12s015_pins: pinmux_tpd12s015_pins { ++ tpd12s015_pins: tpd12s015-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x062, PIN_OUTPUT | MUX_MODE3) /* gpmc_a17.gpio_41 */ + OMAP4_IOPAD(0x088, PIN_OUTPUT | MUX_MODE3) /* gpmc_nbe1.gpio_60 */ +@@ -310,7 +310,7 @@ + >; + }; + +- hsusbb1_pins: pinmux_hsusbb1_pins { ++ hsusbb1_pins: hsusbb1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0c2, PIN_INPUT_PULLDOWN | MUX_MODE4) /* usbb1_ulpitll_clk.usbb1_ulpiphy_clk */ + OMAP4_IOPAD(0x0c4, PIN_OUTPUT | MUX_MODE4) /* usbb1_ulpitll_stp.usbb1_ulpiphy_stp */ +@@ -327,28 +327,28 @@ + >; + }; + +- i2c1_pins: pinmux_i2c1_pins { ++ i2c1_pins: i2c1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x122, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_scl */ + OMAP4_IOPAD(0x124, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_sda */ + >; + }; + +- i2c2_pins: pinmux_i2c2_pins { ++ i2c2_pins: i2c2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x126, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_scl */ + OMAP4_IOPAD(0x128, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_sda */ + >; + }; + +- i2c3_pins: pinmux_i2c3_pins { ++ i2c3_pins: i2c3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12a, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_scl */ + OMAP4_IOPAD(0x12c, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_sda */ + >; + }; + +- 
i2c4_pins: pinmux_i2c4_pins { ++ i2c4_pins: i2c4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12e, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_scl */ + OMAP4_IOPAD(0x130, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */ +@@ -359,7 +359,7 @@ + * wl12xx GPIO outputs for WLAN_EN, BT_EN, FM_EN, BT_WAKEUP + * REVISIT: Are the pull-ups needed for GPIO 48 and 49? + */ +- wl12xx_gpio: pinmux_wl12xx_gpio { ++ wl12xx_gpio: wl12xx-gpio-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x066, PIN_OUTPUT | MUX_MODE3) /* gpmc_a19.gpio_43 */ + OMAP4_IOPAD(0x06c, PIN_OUTPUT | MUX_MODE3) /* gpmc_a22.gpio_46 */ +@@ -369,7 +369,7 @@ + }; + + /* wl12xx GPIO inputs and SDIO pins */ +- wl12xx_pins: pinmux_wl12xx_pins { ++ wl12xx_pins: wl12xx-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x078, PIN_INPUT | MUX_MODE3) /* gpmc_ncs2.gpio_52 */ + OMAP4_IOPAD(0x07a, PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */ +@@ -382,7 +382,7 @@ + >; + }; + +- button_pins: pinmux_button_pins { ++ button_pins: button-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x114, PIN_INPUT_PULLUP | MUX_MODE3) /* gpio_121 */ + >; +@@ -390,7 +390,7 @@ + }; + + &omap4_pmx_wkup { +- led_wkgpio_pins: pinmux_leds_wkpins { ++ led_wkgpio_pins: leds-wkpins-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x05a, PIN_OUTPUT | MUX_MODE3) /* gpio_wk7 */ + OMAP4_IOPAD(0x05c, PIN_OUTPUT | MUX_MODE3) /* gpio_wk8 */ +diff --git a/arch/arm/boot/dts/omap4-panda-es.dts b/arch/arm/boot/dts/omap4-panda-es.dts +index 7c6886cd738f0..6c08dff58beae 100644 +--- a/arch/arm/boot/dts/omap4-panda-es.dts ++++ b/arch/arm/boot/dts/omap4-panda-es.dts +@@ -38,26 +38,26 @@ + }; + + &omap4_pmx_core { +- led_gpio_pins: gpio_led_pmx { ++ led_gpio_pins: gpio-led-pmx-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0f6, PIN_OUTPUT | MUX_MODE3) /* gpio_110 */ + >; + }; + +- button_pins: pinmux_button_pins { ++ button_pins: button-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0fc, PIN_INPUT_PULLUP | MUX_MODE3) /* gpio_113 */ + >; + }; + +- bt_pins: pinmux_bt_pins { 
++ bt_pins: bt-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x06c, PIN_OUTPUT | MUX_MODE3) /* gpmc_a22.gpio_46 - BTEN */ + OMAP4_IOPAD(0x072, PIN_OUTPUT_PULLUP | MUX_MODE3) /* gpmc_a25.gpio_49 - BTWAKEUP */ + >; + }; + +- uart2_pins: pinmux_uart2_pins { ++ uart2_pins: uart2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x118, PIN_INPUT_PULLUP | MUX_MODE0) /* uart2_cts.uart2_cts - HCI */ + OMAP4_IOPAD(0x11a, PIN_OUTPUT | MUX_MODE0) /* uart2_rts.uart2_rts */ +diff --git a/arch/arm/boot/dts/omap4-sdp.dts b/arch/arm/boot/dts/omap4-sdp.dts +index 9e976140f34a6..b2cb93edbc3a6 100644 +--- a/arch/arm/boot/dts/omap4-sdp.dts ++++ b/arch/arm/boot/dts/omap4-sdp.dts +@@ -214,7 +214,7 @@ + &tpd12s015_pins + >; + +- uart2_pins: pinmux_uart2_pins { ++ uart2_pins: uart2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x118, PIN_INPUT_PULLUP | MUX_MODE0) /* uart2_cts.uart2_cts */ + OMAP4_IOPAD(0x11a, PIN_OUTPUT | MUX_MODE0) /* uart2_rts.uart2_rts */ +@@ -223,7 +223,7 @@ + >; + }; + +- uart3_pins: pinmux_uart3_pins { ++ uart3_pins: uart3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x140, PIN_INPUT_PULLUP | MUX_MODE0) /* uart3_cts_rctx.uart3_cts_rctx */ + OMAP4_IOPAD(0x142, PIN_OUTPUT | MUX_MODE0) /* uart3_rts_sd.uart3_rts_sd */ +@@ -232,21 +232,21 @@ + >; + }; + +- uart4_pins: pinmux_uart4_pins { ++ uart4_pins: uart4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x15c, PIN_INPUT | MUX_MODE0) /* uart4_rx.uart4_rx */ + OMAP4_IOPAD(0x15e, PIN_OUTPUT | MUX_MODE0) /* uart4_tx.uart4_tx */ + >; + }; + +- twl6040_pins: pinmux_twl6040_pins { ++ twl6040_pins: twl6040-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x120, PIN_OUTPUT | MUX_MODE3) /* hdq_sio.gpio_127 */ + OMAP4_IOPAD(0x1a0, PIN_INPUT | MUX_MODE0) /* sys_nirq2.sys_nirq2 */ + >; + }; + +- dmic_pins: pinmux_dmic_pins { ++ dmic_pins: dmic-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x110, PIN_OUTPUT | MUX_MODE0) /* abe_dmic_clk1.abe_dmic_clk1 */ + OMAP4_IOPAD(0x112, PIN_INPUT | MUX_MODE0) /* abe_dmic_din1.abe_dmic_din1 */ +@@ 
-255,7 +255,7 @@ + >; + }; + +- mcbsp1_pins: pinmux_mcbsp1_pins { ++ mcbsp1_pins: mcbsp1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0fe, PIN_INPUT | MUX_MODE0) /* abe_mcbsp1_clkx.abe_mcbsp1_clkx */ + OMAP4_IOPAD(0x100, PIN_INPUT_PULLDOWN | MUX_MODE0) /* abe_mcbsp1_dr.abe_mcbsp1_dr */ +@@ -264,7 +264,7 @@ + >; + }; + +- mcbsp2_pins: pinmux_mcbsp2_pins { ++ mcbsp2_pins: mcbsp2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0f6, PIN_INPUT | MUX_MODE0) /* abe_mcbsp2_clkx.abe_mcbsp2_clkx */ + OMAP4_IOPAD(0x0f8, PIN_INPUT_PULLDOWN | MUX_MODE0) /* abe_mcbsp2_dr.abe_mcbsp2_dr */ +@@ -273,7 +273,7 @@ + >; + }; + +- mcspi1_pins: pinmux_mcspi1_pins { ++ mcspi1_pins: mcspi1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x132, PIN_INPUT | MUX_MODE0) /* mcspi1_clk.mcspi1_clk */ + OMAP4_IOPAD(0x134, PIN_INPUT | MUX_MODE0) /* mcspi1_somi.mcspi1_somi */ +@@ -282,7 +282,7 @@ + >; + }; + +- dss_hdmi_pins: pinmux_dss_hdmi_pins { ++ dss_hdmi_pins: dss-hdmi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) /* hdmi_cec.hdmi_cec */ + OMAP4_IOPAD(0x09c, PIN_INPUT_PULLUP | MUX_MODE0) /* hdmi_scl.hdmi_scl */ +@@ -290,7 +290,7 @@ + >; + }; + +- tpd12s015_pins: pinmux_tpd12s015_pins { ++ tpd12s015_pins: tpd12s015-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x062, PIN_OUTPUT | MUX_MODE3) /* gpmc_a17.gpio_41 */ + OMAP4_IOPAD(0x088, PIN_OUTPUT | MUX_MODE3) /* gpmc_nbe1.gpio_60 */ +@@ -298,28 +298,28 @@ + >; + }; + +- i2c1_pins: pinmux_i2c1_pins { ++ i2c1_pins: i2c1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x122, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_scl */ + OMAP4_IOPAD(0x124, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_sda */ + >; + }; + +- i2c2_pins: pinmux_i2c2_pins { ++ i2c2_pins: i2c2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x126, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_scl */ + OMAP4_IOPAD(0x128, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_sda */ + >; + }; + +- i2c3_pins: pinmux_i2c3_pins { ++ i2c3_pins: i2c3-pins { + pinctrl-single,pins = < + 
OMAP4_IOPAD(0x12a, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_scl */ + OMAP4_IOPAD(0x12c, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_sda */ + >; + }; + +- i2c4_pins: pinmux_i2c4_pins { ++ i2c4_pins: i2c4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12e, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_scl */ + OMAP4_IOPAD(0x130, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */ +@@ -327,14 +327,14 @@ + }; + + /* wl12xx GPIO output for WLAN_EN */ +- wl12xx_gpio: pinmux_wl12xx_gpio { ++ wl12xx_gpio: wl12xx-gpio-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x07c, PIN_OUTPUT | MUX_MODE3) /* gpmc_nwp.gpio_54 */ + >; + }; + + /* wl12xx GPIO inputs and SDIO pins */ +- wl12xx_pins: pinmux_wl12xx_pins { ++ wl12xx_pins: wl12xx-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x07a, PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */ + OMAP4_IOPAD(0x148, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */ +@@ -347,13 +347,13 @@ + }; + + /* gpio_48 for ENET_ENABLE */ +- enet_enable_gpio: pinmux_enet_enable_gpio { ++ enet_enable_gpio: enet-enable-gpio-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x070, PIN_OUTPUT_PULLDOWN | MUX_MODE3) /* gpmc_a24.gpio_48 */ + >; + }; + +- ks8851_pins: pinmux_ks8851_pins { ++ ks8851_pins: ks8851-pins { + pinctrl-single,pins = < + /* ENET_INT */ + OMAP4_IOPAD(0x054, PIN_INPUT_PULLUP | MUX_MODE3) /* gpmc_ad10.gpio_34 */ +diff --git a/arch/arm/boot/dts/omap4-var-om44customboard.dtsi b/arch/arm/boot/dts/omap4-var-om44customboard.dtsi +index 458cb53dd3d18..cadc7e02592bf 100644 +--- a/arch/arm/boot/dts/omap4-var-om44customboard.dtsi ++++ b/arch/arm/boot/dts/omap4-var-om44customboard.dtsi +@@ -60,7 +60,7 @@ + }; + + &omap4_pmx_core { +- uart1_pins: pinmux_uart1_pins { ++ uart1_pins: uart1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x13c, PIN_INPUT_PULLUP | MUX_MODE1) /* mcspi1_cs2.uart1_cts */ + OMAP4_IOPAD(0x13e, PIN_OUTPUT | MUX_MODE1) /* mcspi1_cs3.uart1_rts */ +@@ -69,7 +69,7 @@ + >; + }; + +- mcspi1_pins: pinmux_mcspi1_pins { ++ mcspi1_pins: mcspi1-pins { + 
pinctrl-single,pins = < + OMAP4_IOPAD(0x132, PIN_INPUT | MUX_MODE0) /* mcspi1_clk.mcspi1_clk */ + OMAP4_IOPAD(0x134, PIN_INPUT | MUX_MODE0) /* mcspi1_somi.mcspi1_somi */ +@@ -78,13 +78,13 @@ + >; + }; + +- mcasp_pins: pinmux_mcsasp_pins { ++ mcasp_pins: mcsasp-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0f8, PIN_OUTPUT | MUX_MODE2) /* mcbsp2_dr.abe_mcasp_axr */ + >; + }; + +- dss_dpi_pins: pinmux_dss_dpi_pins { ++ dss_dpi_pins: dss-dpi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x162, PIN_OUTPUT | MUX_MODE5) /* dispc2_data23 */ + OMAP4_IOPAD(0x164, PIN_OUTPUT | MUX_MODE5) /* dispc2_data22 */ +@@ -117,7 +117,7 @@ + >; + }; + +- dss_hdmi_pins: pinmux_dss_hdmi_pins { ++ dss_hdmi_pins: dss-hdmi-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0) /* hdmi_cec.hdmi_cec */ + OMAP4_IOPAD(0x09c, PIN_INPUT_PULLUP | MUX_MODE0) /* hdmi_scl.hdmi_scl */ +@@ -125,14 +125,14 @@ + >; + }; + +- i2c4_pins: pinmux_i2c4_pins { ++ i2c4_pins: i2c4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12e, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_scl */ + OMAP4_IOPAD(0x130, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */ + >; + }; + +- mmc5_pins: pinmux_mmc5_pins { ++ mmc5_pins: mmc5-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0f6, PIN_INPUT | MUX_MODE3) /* abe_mcbsp2_clkx.gpio_110 */ + OMAP4_IOPAD(0x148, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */ +@@ -144,32 +144,32 @@ + >; + }; + +- gpio_led_pins: pinmux_gpio_led_pins { ++ gpio_led_pins: gpio-led-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x17e, PIN_OUTPUT | MUX_MODE3) /* kpd_col4.gpio_172 */ + OMAP4_IOPAD(0x180, PIN_OUTPUT | MUX_MODE3) /* kpd_col5.gpio_173 */ + >; + }; + +- gpio_key_pins: pinmux_gpio_key_pins { ++ gpio_key_pins: gpio-key-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x1a2, PIN_INPUT | MUX_MODE3) /* sys_boot0.gpio_184 */ + >; + }; + +- ks8851_irq_pins: pinmux_ks8851_irq_pins { ++ ks8851_irq_pins: ks8851-irq-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x17c, 
PIN_INPUT_PULLUP | MUX_MODE3) /* kpd_col3.gpio_171 */ + >; + }; + +- hdmi_hpd_pins: pinmux_hdmi_hpd_pins { ++ hdmi_hpd_pins: hdmi-hpd-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x098, PIN_INPUT_PULLDOWN | MUX_MODE3) /* hdmi_hpd.gpio_63 */ + >; + }; + +- backlight_pins: pinmux_backlight_pins { ++ backlight_pins: backlight-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x116, PIN_OUTPUT | MUX_MODE3) /* abe_dmic_din3.gpio_122 */ + >; +diff --git a/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi b/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi +index d0032213101e6..de779d2d7c3e9 100644 +--- a/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi ++++ b/arch/arm/boot/dts/omap4-var-som-om44-wlan.dtsi +@@ -19,7 +19,7 @@ + }; + + &omap4_pmx_core { +- uart2_pins: pinmux_uart2_pins { ++ uart2_pins: uart2-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x118, PIN_INPUT_PULLUP | MUX_MODE0) /* uart2_cts.uart2_cts */ + OMAP4_IOPAD(0x11a, PIN_OUTPUT | MUX_MODE0) /* uart2_rts.uart2_rts */ +@@ -28,7 +28,7 @@ + >; + }; + +- wl12xx_ctrl_pins: pinmux_wl12xx_ctrl_pins { ++ wl12xx_ctrl_pins: wl12xx-ctrl-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x062, PIN_INPUT_PULLUP | MUX_MODE3) /* gpmc_a17.gpio_41 (WLAN_IRQ) */ + OMAP4_IOPAD(0x064, PIN_OUTPUT | MUX_MODE3) /* gpmc_a18.gpio_42 (BT_EN) */ +@@ -36,7 +36,7 @@ + >; + }; + +- mmc4_pins: pinmux_mmc4_pins { ++ mmc4_pins: mmc4-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x154, PIN_INPUT_PULLUP | MUX_MODE1) /* mcspi4_clk.sdmmc4_clk */ + OMAP4_IOPAD(0x156, PIN_INPUT_PULLUP | MUX_MODE1) /* mcspi4_simo.sdmmc4_cmd */ +diff --git a/arch/arm/boot/dts/omap4-var-som-om44.dtsi b/arch/arm/boot/dts/omap4-var-som-om44.dtsi +index 334cbbaa5b8b0..37d56b3010cff 100644 +--- a/arch/arm/boot/dts/omap4-var-som-om44.dtsi ++++ b/arch/arm/boot/dts/omap4-var-som-om44.dtsi +@@ -65,21 +65,21 @@ + &hsusbb1_pins + >; + +- twl6040_pins: pinmux_twl6040_pins { ++ twl6040_pins: twl6040-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x19c, PIN_OUTPUT | MUX_MODE3) /* 
fref_clk2_out.gpio_182 */ + OMAP4_IOPAD(0x1a0, PIN_INPUT | MUX_MODE0) /* sys_nirq2.sys_nirq2 */ + >; + }; + +- tsc2004_pins: pinmux_tsc2004_pins { ++ tsc2004_pins: tsc2004-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x090, PIN_INPUT | MUX_MODE3) /* gpmc_ncs4.gpio_101 (irq) */ + OMAP4_IOPAD(0x092, PIN_OUTPUT | MUX_MODE3) /* gpmc_ncs5.gpio_102 (rst) */ + >; + }; + +- uart3_pins: pinmux_uart3_pins { ++ uart3_pins: uart3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x140, PIN_INPUT_PULLUP | MUX_MODE0) /* uart3_cts_rctx.uart3_cts_rctx */ + OMAP4_IOPAD(0x142, PIN_OUTPUT | MUX_MODE0) /* uart3_rts_sd.uart3_rts_sd */ +@@ -88,7 +88,7 @@ + >; + }; + +- hsusbb1_pins: pinmux_hsusbb1_pins { ++ hsusbb1_pins: hsusbb1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0c2, PIN_INPUT_PULLDOWN | MUX_MODE4) /* usbb1_ulpitll_clk.usbb1_ulpiphy_clk */ + OMAP4_IOPAD(0x0c4, PIN_OUTPUT | MUX_MODE4) /* usbb1_ulpitll_stp.usbb1_ulpiphy_stp */ +@@ -105,27 +105,27 @@ + >; + }; + +- hsusbb1_phy_rst_pins: pinmux_hsusbb1_phy_rst_pins { ++ hsusbb1_phy_rst_pins: hsusbb1-phy-rst-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x18c, PIN_OUTPUT | MUX_MODE3) /* kpd_row2.gpio_177 */ + >; + }; + +- i2c1_pins: pinmux_i2c1_pins { ++ i2c1_pins: i2c1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x122, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_scl */ + OMAP4_IOPAD(0x124, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_sda */ + >; + }; + +- i2c3_pins: pinmux_i2c3_pins { ++ i2c3_pins: i2c3-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x12a, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_scl */ + OMAP4_IOPAD(0x12c, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_sda */ + >; + }; + +- mmc1_pins: pinmux_mmc1_pins { ++ mmc1_pins: mmc1-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x0e2, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_clk.sdmmc1_clk */ + OMAP4_IOPAD(0x0e4, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_cmd.sdmmc1_cmd */ +@@ -144,19 +144,19 @@ + &lan7500_rst_pins + >; + +- hsusbb1_phy_clk_pins: pinmux_hsusbb1_phy_clk_pins { ++ 
hsusbb1_phy_clk_pins: hsusbb1-phy-clk-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x058, PIN_OUTPUT | MUX_MODE0) /* fref_clk3_out */ + >; + }; + +- hsusbb1_hub_rst_pins: pinmux_hsusbb1_hub_rst_pins { ++ hsusbb1_hub_rst_pins: hsusbb1-hub-rst-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x042, PIN_OUTPUT | MUX_MODE3) /* gpio_wk1 */ + >; + }; + +- lan7500_rst_pins: pinmux_lan7500_rst_pins { ++ lan7500_rst_pins: lan7500-rst-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x040, PIN_OUTPUT | MUX_MODE3) /* gpio_wk0 */ + >; +diff --git a/arch/arm/boot/dts/omap443x.dtsi b/arch/arm/boot/dts/omap443x.dtsi +index 238aceb799f89..2104170fe2cd7 100644 +--- a/arch/arm/boot/dts/omap443x.dtsi ++++ b/arch/arm/boot/dts/omap443x.dtsi +@@ -69,6 +69,7 @@ + }; + + &cpu_thermal { ++ thermal-sensors = <&bandgap>; + coefficients = <0 20000>; + }; + +diff --git a/arch/arm/boot/dts/omap4460.dtsi b/arch/arm/boot/dts/omap4460.dtsi +index 1b27a862ae810..a6764750d4476 100644 +--- a/arch/arm/boot/dts/omap4460.dtsi ++++ b/arch/arm/boot/dts/omap4460.dtsi +@@ -79,6 +79,7 @@ + }; + + &cpu_thermal { ++ thermal-sensors = <&bandgap>; + coefficients = <348 (-9301)>; + }; + +diff --git a/arch/arm/boot/dts/omap5-cm-t54.dts b/arch/arm/boot/dts/omap5-cm-t54.dts +index e62ea8b6d53fd..af288d63a26a4 100644 +--- a/arch/arm/boot/dts/omap5-cm-t54.dts ++++ b/arch/arm/boot/dts/omap5-cm-t54.dts +@@ -84,36 +84,36 @@ + }; + + lcd0: display { +- compatible = "startek,startek-kd050c", "panel-dpi"; +- label = "lcd"; +- +- pinctrl-names = "default"; +- pinctrl-0 = <&lcd_pins>; +- +- enable-gpios = <&gpio8 3 GPIO_ACTIVE_HIGH>; +- +- panel-timing { +- clock-frequency = <33000000>; +- hactive = <800>; +- vactive = <480>; +- hfront-porch = <40>; +- hback-porch = <40>; +- hsync-len = <43>; +- vback-porch = <29>; +- vfront-porch = <13>; +- vsync-len = <3>; +- hsync-active = <0>; +- vsync-active = <0>; +- de-active = <1>; +- pixelclk-active = <1>; +- }; +- +- port { +- lcd_in: endpoint { +- remote-endpoint = <&dpi_lcd_out>; +- 
}; +- }; +- }; ++ compatible = "startek,startek-kd050c", "panel-dpi"; ++ label = "lcd"; ++ ++ pinctrl-names = "default"; ++ pinctrl-0 = <&lcd_pins>; ++ ++ enable-gpios = <&gpio8 3 GPIO_ACTIVE_HIGH>; ++ ++ panel-timing { ++ clock-frequency = <33000000>; ++ hactive = <800>; ++ vactive = <480>; ++ hfront-porch = <40>; ++ hback-porch = <40>; ++ hsync-len = <43>; ++ vback-porch = <29>; ++ vfront-porch = <13>; ++ vsync-len = <3>; ++ hsync-active = <0>; ++ vsync-active = <0>; ++ de-active = <1>; ++ pixelclk-active = <1>; ++ }; ++ ++ port { ++ lcd_in: endpoint { ++ remote-endpoint = <&dpi_lcd_out>; ++ }; ++ }; ++ }; + + hdmi0: connector0 { + compatible = "hdmi-connector"; +@@ -644,8 +644,8 @@ + }; + + &usb3 { +- extcon = <&extcon_usb3>; +- vbus-supply = <&smps10_out1_reg>; ++ extcon = <&extcon_usb3>; ++ vbus-supply = <&smps10_out1_reg>; + }; + + &cpu0 { +diff --git a/arch/arm/boot/dts/qcom-msm8974pro-sony-xperia-shinano-castor.dts b/arch/arm/boot/dts/qcom-msm8974pro-sony-xperia-shinano-castor.dts +index 3f45f5c5d37b5..cc49bb777df8a 100644 +--- a/arch/arm/boot/dts/qcom-msm8974pro-sony-xperia-shinano-castor.dts ++++ b/arch/arm/boot/dts/qcom-msm8974pro-sony-xperia-shinano-castor.dts +@@ -124,15 +124,15 @@ + + syna,startup-delay-ms = <10>; + +- rmi-f01@1 { ++ rmi4-f01@1 { + reg = <0x1>; +- syna,nosleep = <1>; ++ syna,nosleep-mode = <1>; + }; + +- rmi-f11@11 { ++ rmi4-f11@11 { + reg = <0x11>; +- syna,f11-flip-x = <1>; + syna,sensor-type = <1>; ++ touchscreen-inverted-x; + }; + }; + }; +diff --git a/arch/arm/boot/dts/twl6030_omap4.dtsi b/arch/arm/boot/dts/twl6030_omap4.dtsi +index 5730e46b00677..64e38c7c8be70 100644 +--- a/arch/arm/boot/dts/twl6030_omap4.dtsi ++++ b/arch/arm/boot/dts/twl6030_omap4.dtsi +@@ -19,7 +19,7 @@ + }; + + &omap4_pmx_wkup { +- twl6030_wkup_pins: pinmux_twl6030_wkup_pins { ++ twl6030_wkup_pins: twl6030-wkup-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x054, PIN_OUTPUT | MUX_MODE2) /* fref_clk0_out.sys_drm_msecure */ + >; +@@ -27,7 +27,7 @@ + }; + + 
&omap4_pmx_core { +- twl6030_pins: pinmux_twl6030_pins { ++ twl6030_pins: twl6030-pins { + pinctrl-single,pins = < + OMAP4_IOPAD(0x19e, WAKEUP_EN | PIN_INPUT_PULLUP | MUX_MODE0) /* sys_nirq1.sys_nirq1 */ + >; +diff --git a/arch/arm64/boot/dts/freescale/Makefile b/arch/arm64/boot/dts/freescale/Makefile +index 3ea9edc87909a..ac6f780dc1914 100644 +--- a/arch/arm64/boot/dts/freescale/Makefile ++++ b/arch/arm64/boot/dts/freescale/Makefile +@@ -62,6 +62,7 @@ dtb-$(CONFIG_ARCH_MXC) += imx8mm-kontron-bl-osm-s.dtb + dtb-$(CONFIG_ARCH_MXC) += imx8mm-mx8menlo.dtb + dtb-$(CONFIG_ARCH_MXC) += imx8mm-nitrogen-r2.dtb + dtb-$(CONFIG_ARCH_MXC) += imx8mm-phyboard-polis-rdk.dtb ++dtb-$(CONFIG_ARCH_MXC) += imx8mm-prt8mm.dtb + dtb-$(CONFIG_ARCH_MXC) += imx8mm-tqma8mqml-mba8mx.dtb + dtb-$(CONFIG_ARCH_MXC) += imx8mm-var-som-symphony.dtb + dtb-$(CONFIG_ARCH_MXC) += imx8mm-venice-gw71xx-0x.dtb +diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts +index c289bf0903b45..c9efcb894a52f 100644 +--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts ++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts +@@ -100,6 +100,14 @@ + }; + }; + ++ reserved-memory { ++ /* Cont splash region set up by the bootloader */ ++ cont_splash_mem: framebuffer@9d400000 { ++ reg = <0x0 0x9d400000 0x0 0x2400000>; ++ no-map; ++ }; ++ }; ++ + lt9611_1v8: lt9611-vdd18-regulator { + compatible = "regulator-fixed"; + regulator-name = "LT9611_1V8"; +@@ -512,6 +520,7 @@ + }; + + &mdss { ++ memory-region = <&cont_splash_mem>; + status = "okay"; + }; + +diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig +index 0b6af3348e791..623e9f308f38a 100644 +--- a/arch/arm64/configs/defconfig ++++ b/arch/arm64/configs/defconfig +@@ -1050,7 +1050,6 @@ CONFIG_COMMON_CLK_FSL_SAI=y + CONFIG_COMMON_CLK_S2MPS11=y + CONFIG_COMMON_CLK_PWM=y + CONFIG_COMMON_CLK_VC5=y +-CONFIG_COMMON_CLK_NPCM8XX=y + CONFIG_COMMON_CLK_BD718XX=m + CONFIG_CLK_RASPBERRYPI=m + CONFIG_CLK_IMX8MM=y +diff --git 
a/arch/loongarch/include/asm/elf.h b/arch/loongarch/include/asm/elf.h +index 7af0cebf28d73..b9a4ab54285c1 100644 +--- a/arch/loongarch/include/asm/elf.h ++++ b/arch/loongarch/include/asm/elf.h +@@ -111,6 +111,15 @@ + #define R_LARCH_TLS_GD_HI20 98 + #define R_LARCH_32_PCREL 99 + #define R_LARCH_RELAX 100 ++#define R_LARCH_DELETE 101 ++#define R_LARCH_ALIGN 102 ++#define R_LARCH_PCREL20_S2 103 ++#define R_LARCH_CFA 104 ++#define R_LARCH_ADD6 105 ++#define R_LARCH_SUB6 106 ++#define R_LARCH_ADD_ULEB128 107 ++#define R_LARCH_SUB_ULEB128 108 ++#define R_LARCH_64_PCREL 109 + + #ifndef ELF_ARCH + +diff --git a/arch/loongarch/kernel/mem.c b/arch/loongarch/kernel/mem.c +index 4a4107a6a9651..aed901c57fb43 100644 +--- a/arch/loongarch/kernel/mem.c ++++ b/arch/loongarch/kernel/mem.c +@@ -50,7 +50,6 @@ void __init memblock_init(void) + } + + memblock_set_current_limit(PFN_PHYS(max_low_pfn)); +- memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0); + + /* Reserve the first 2MB */ + memblock_reserve(PHYS_OFFSET, 0x200000); +@@ -58,4 +57,7 @@ void __init memblock_init(void) + /* Reserve the kernel text/data/bss */ + memblock_reserve(__pa_symbol(&_text), + __pa_symbol(&_end) - __pa_symbol(&_text)); ++ ++ memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0); ++ memblock_set_node(0, PHYS_ADDR_MAX, &memblock.reserved, 0); + } +diff --git a/arch/loongarch/kernel/module.c b/arch/loongarch/kernel/module.c +index 097595b2fc14b..4f1e6e55dc026 100644 +--- a/arch/loongarch/kernel/module.c ++++ b/arch/loongarch/kernel/module.c +@@ -376,7 +376,7 @@ typedef int (*reloc_rela_handler)(struct module *mod, u32 *location, Elf_Addr v, + + /* The handlers for known reloc types */ + static reloc_rela_handler reloc_rela_handlers[] = { +- [R_LARCH_NONE ... R_LARCH_RELAX] = apply_r_larch_error, ++ [R_LARCH_NONE ... 
R_LARCH_64_PCREL] = apply_r_larch_error, + + [R_LARCH_NONE] = apply_r_larch_none, + [R_LARCH_32] = apply_r_larch_32, +diff --git a/arch/loongarch/kernel/numa.c b/arch/loongarch/kernel/numa.c +index a13f92593cfda..f7ffce170213e 100644 +--- a/arch/loongarch/kernel/numa.c ++++ b/arch/loongarch/kernel/numa.c +@@ -453,7 +453,7 @@ void __init paging_init(void) + + void __init mem_init(void) + { +- high_memory = (void *) __va(get_num_physpages() << PAGE_SHIFT); ++ high_memory = (void *) __va(max_low_pfn << PAGE_SHIFT); + memblock_free_all(); + setup_zero_pages(); /* This comes from node 0 */ + } +diff --git a/arch/mips/alchemy/devboards/db1000.c b/arch/mips/alchemy/devboards/db1000.c +index 50de86eb8784c..3183df60ad337 100644 +--- a/arch/mips/alchemy/devboards/db1000.c ++++ b/arch/mips/alchemy/devboards/db1000.c +@@ -164,6 +164,7 @@ static struct platform_device db1x00_audio_dev = { + + /******************************************************************************/ + ++#ifdef CONFIG_MMC_AU1X + static irqreturn_t db1100_mmc_cd(int irq, void *ptr) + { + mmc_detect_change(ptr, msecs_to_jiffies(500)); +@@ -369,6 +370,7 @@ static struct platform_device db1100_mmc1_dev = { + .num_resources = ARRAY_SIZE(au1100_mmc1_res), + .resource = au1100_mmc1_res, + }; ++#endif /* CONFIG_MMC_AU1X */ + + /******************************************************************************/ + +@@ -432,8 +434,10 @@ static struct platform_device *db1x00_devs[] = { + + static struct platform_device *db1100_devs[] = { + &au1100_lcd_device, ++#ifdef CONFIG_MMC_AU1X + &db1100_mmc0_dev, + &db1100_mmc1_dev, ++#endif + }; + + int __init db1000_dev_setup(void) +diff --git a/arch/mips/alchemy/devboards/db1200.c b/arch/mips/alchemy/devboards/db1200.c +index 76080c71a2a7b..f521874ebb07b 100644 +--- a/arch/mips/alchemy/devboards/db1200.c ++++ b/arch/mips/alchemy/devboards/db1200.c +@@ -326,6 +326,7 @@ static struct platform_device db1200_ide_dev = { + + 
/**********************************************************************/ + ++#ifdef CONFIG_MMC_AU1X + /* SD carddetects: they're supposed to be edge-triggered, but ack + * doesn't seem to work (CPLD Rev 2). Instead, the screaming one + * is disabled and its counterpart enabled. The 200ms timeout is +@@ -584,6 +585,7 @@ static struct platform_device pb1200_mmc1_dev = { + .num_resources = ARRAY_SIZE(au1200_mmc1_res), + .resource = au1200_mmc1_res, + }; ++#endif /* CONFIG_MMC_AU1X */ + + /**********************************************************************/ + +@@ -751,7 +753,9 @@ static struct platform_device db1200_audiodma_dev = { + static struct platform_device *db1200_devs[] __initdata = { + NULL, /* PSC0, selected by S6.8 */ + &db1200_ide_dev, ++#ifdef CONFIG_MMC_AU1X + &db1200_mmc0_dev, ++#endif + &au1200_lcd_dev, + &db1200_eth_dev, + &db1200_nand_dev, +@@ -762,7 +766,9 @@ static struct platform_device *db1200_devs[] __initdata = { + }; + + static struct platform_device *pb1200_devs[] __initdata = { ++#ifdef CONFIG_MMC_AU1X + &pb1200_mmc1_dev, ++#endif + }; + + /* Some peripheral base addresses differ on the PB1200 */ +diff --git a/arch/mips/alchemy/devboards/db1300.c b/arch/mips/alchemy/devboards/db1300.c +index ff61901329c62..d377e043b49f8 100644 +--- a/arch/mips/alchemy/devboards/db1300.c ++++ b/arch/mips/alchemy/devboards/db1300.c +@@ -450,6 +450,7 @@ static struct platform_device db1300_ide_dev = { + + /**********************************************************************/ + ++#ifdef CONFIG_MMC_AU1X + static irqreturn_t db1300_mmc_cd(int irq, void *ptr) + { + disable_irq_nosync(irq); +@@ -632,6 +633,7 @@ static struct platform_device db1300_sd0_dev = { + .resource = au1300_sd0_res, + .num_resources = ARRAY_SIZE(au1300_sd0_res), + }; ++#endif /* CONFIG_MMC_AU1X */ + + /**********************************************************************/ + +@@ -767,8 +769,10 @@ static struct platform_device *db1300_dev[] __initdata = { + &db1300_5waysw_dev, + 
&db1300_nand_dev, + &db1300_ide_dev, ++#ifdef CONFIG_MMC_AU1X + &db1300_sd0_dev, + &db1300_sd1_dev, ++#endif + &db1300_lcd_dev, + &db1300_ac97_dev, + &db1300_i2s_dev, +diff --git a/arch/parisc/include/asm/ropes.h b/arch/parisc/include/asm/ropes.h +index 8e51c775c80a6..62399c7ea94a1 100644 +--- a/arch/parisc/include/asm/ropes.h ++++ b/arch/parisc/include/asm/ropes.h +@@ -86,6 +86,9 @@ struct sba_device { + struct ioc ioc[MAX_IOC]; + }; + ++/* list of SBA's in system, see drivers/parisc/sba_iommu.c */ ++extern struct sba_device *sba_list; ++ + #define ASTRO_RUNWAY_PORT 0x582 + #define IKE_MERCED_PORT 0x803 + #define REO_MERCED_PORT 0x804 +diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c +index e7ee0c0c91d35..8f12b9f318ae6 100644 +--- a/arch/parisc/kernel/drivers.c ++++ b/arch/parisc/kernel/drivers.c +@@ -924,9 +924,9 @@ static __init void qemu_header(void) + pr_info("#define PARISC_MODEL \"%s\"\n\n", + boot_cpu_data.pdc.sys_model_name); + ++ #define p ((unsigned long *)&boot_cpu_data.pdc.model) + pr_info("#define PARISC_PDC_MODEL 0x%lx, 0x%lx, 0x%lx, " + "0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%lx, 0x%lx\n\n", +- #define p ((unsigned long *)&boot_cpu_data.pdc.model) + p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7], p[8]); + #undef p + +diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c +index b05055f3ba4b8..9ddb2e3970589 100644 +--- a/arch/parisc/kernel/irq.c ++++ b/arch/parisc/kernel/irq.c +@@ -368,7 +368,7 @@ union irq_stack_union { + volatile unsigned int lock[1]; + }; + +-DEFINE_PER_CPU(union irq_stack_union, irq_stack_union) = { ++static DEFINE_PER_CPU(union irq_stack_union, irq_stack_union) = { + .slock = { 1,1,1,1 }, + }; + #endif +diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c +index 8db1a15d7acbe..02436f80e60e2 100644 +--- a/arch/powerpc/kernel/hw_breakpoint.c ++++ b/arch/powerpc/kernel/hw_breakpoint.c +@@ -505,11 +505,13 @@ void thread_change_pc(struct task_struct *tsk, struct pt_regs 
*regs) + struct arch_hw_breakpoint *info; + int i; + ++ preempt_disable(); ++ + for (i = 0; i < nr_wp_slots(); i++) { + if (unlikely(tsk->thread.last_hit_ubp[i])) + goto reset; + } +- return; ++ goto out; + + reset: + regs_set_return_msr(regs, regs->msr & ~MSR_SE); +@@ -518,6 +520,9 @@ reset: + __set_breakpoint(i, info); + tsk->thread.last_hit_ubp[i] = NULL; + } ++ ++out: ++ preempt_enable(); + } + + static bool is_larx_stcx_instr(int type) +@@ -632,6 +637,11 @@ static void handle_p10dd1_spurious_exception(struct arch_hw_breakpoint **info, + } + } + ++/* ++ * Handle a DABR or DAWR exception. ++ * ++ * Called in atomic context. ++ */ + int hw_breakpoint_handler(struct die_args *args) + { + bool err = false; +@@ -758,6 +768,8 @@ NOKPROBE_SYMBOL(hw_breakpoint_handler); + + /* + * Handle single-step exceptions following a DABR hit. ++ * ++ * Called in atomic context. + */ + static int single_step_dabr_instruction(struct die_args *args) + { +@@ -815,6 +827,8 @@ NOKPROBE_SYMBOL(single_step_dabr_instruction); + + /* + * Handle debug exception notifications. ++ * ++ * Called in atomic context. 
+ */ + int hw_breakpoint_exceptions_notify( + struct notifier_block *unused, unsigned long val, void *data) +diff --git a/arch/powerpc/kernel/hw_breakpoint_constraints.c b/arch/powerpc/kernel/hw_breakpoint_constraints.c +index a74623025f3ab..9e51801c49152 100644 +--- a/arch/powerpc/kernel/hw_breakpoint_constraints.c ++++ b/arch/powerpc/kernel/hw_breakpoint_constraints.c +@@ -131,8 +131,13 @@ void wp_get_instr_detail(struct pt_regs *regs, ppc_inst_t *instr, + int *type, int *size, unsigned long *ea) + { + struct instruction_op op; ++ int err; + +- if (__get_user_instr(*instr, (void __user *)regs->nip)) ++ pagefault_disable(); ++ err = __get_user_instr(*instr, (void __user *)regs->nip); ++ pagefault_enable(); ++ ++ if (err) + return; + + analyse_instr(&op, regs, *instr); +diff --git a/arch/powerpc/perf/hv-24x7.c b/arch/powerpc/perf/hv-24x7.c +index 33c23225fd545..7dda59923ed6a 100644 +--- a/arch/powerpc/perf/hv-24x7.c ++++ b/arch/powerpc/perf/hv-24x7.c +@@ -1431,7 +1431,7 @@ static int h_24x7_event_init(struct perf_event *event) + } + + domain = event_get_domain(event); +- if (domain >= HV_PERF_DOMAIN_MAX) { ++ if (domain == 0 || domain >= HV_PERF_DOMAIN_MAX) { + pr_devel("invalid domain %d\n", domain); + return -EINVAL; + } +diff --git a/arch/riscv/include/asm/errata_list.h b/arch/riscv/include/asm/errata_list.h +index 19a771085781a..7d2675bb71611 100644 +--- a/arch/riscv/include/asm/errata_list.h ++++ b/arch/riscv/include/asm/errata_list.h +@@ -100,7 +100,7 @@ asm volatile(ALTERNATIVE( \ + * | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 | + * 0000001 01001 rs1 000 00000 0001011 + * dcache.cva rs1 (clean, virtual address) +- * 0000001 00100 rs1 000 00000 0001011 ++ * 0000001 00101 rs1 000 00000 0001011 + * + * dcache.cipa rs1 (clean then invalidate, physical address) + * | 31 - 25 | 24 - 20 | 19 - 15 | 14 - 12 | 11 - 7 | 6 - 0 | +@@ -113,7 +113,7 @@ asm volatile(ALTERNATIVE( \ + * 0000000 11001 00000 000 00000 0001011 + */ + #define THEAD_inval_A0 ".long 
0x0265000b" +-#define THEAD_clean_A0 ".long 0x0245000b" ++#define THEAD_clean_A0 ".long 0x0255000b" + #define THEAD_flush_A0 ".long 0x0275000b" + #define THEAD_SYNC_S ".long 0x0190000b" + +diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h +index a3760ca796aa2..256eee99afc8f 100644 +--- a/arch/x86/include/asm/kexec.h ++++ b/arch/x86/include/asm/kexec.h +@@ -208,8 +208,6 @@ int arch_kimage_file_post_load_cleanup(struct kimage *image); + #endif + #endif + +-typedef void crash_vmclear_fn(void); +-extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss; + extern void kdump_nmi_shootdown_cpus(void); + + #endif /* __ASSEMBLY__ */ +diff --git a/arch/x86/include/asm/reboot.h b/arch/x86/include/asm/reboot.h +index bc5b4d788c08d..2551baec927d2 100644 +--- a/arch/x86/include/asm/reboot.h ++++ b/arch/x86/include/asm/reboot.h +@@ -25,6 +25,8 @@ void __noreturn machine_real_restart(unsigned int type); + #define MRR_BIOS 0 + #define MRR_APM 1 + ++typedef void crash_vmclear_fn(void); ++extern crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss; + void cpu_emergency_disable_virtualization(void); + + typedef void (*nmi_shootdown_cb)(int, struct pt_regs*); +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index 3a893ab398a01..263df737d5cd5 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -2414,7 +2414,7 @@ static void __init srso_select_mitigation(void) + + switch (srso_cmd) { + case SRSO_CMD_OFF: +- return; ++ goto pred_cmd; + + case SRSO_CMD_MICROCODE: + if (has_microcode) { +@@ -2692,7 +2692,7 @@ static ssize_t srso_show_state(char *buf) + + return sysfs_emit(buf, "%s%s\n", + srso_strings[srso_mitigation], +- (cpu_has_ibpb_brtype_microcode() ? "" : ", no microcode")); ++ boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? 
"" : ", no microcode"); + } + + static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c +index b723368dbc644..454cdf3418624 100644 +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -1282,7 +1282,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = { + VULNBL_AMD(0x15, RETBLEED), + VULNBL_AMD(0x16, RETBLEED), + VULNBL_AMD(0x17, RETBLEED | SMT_RSB | SRSO), +- VULNBL_HYGON(0x18, RETBLEED | SMT_RSB), ++ VULNBL_HYGON(0x18, RETBLEED | SMT_RSB | SRSO), + VULNBL_AMD(0x19, SRSO), + {} + }; +diff --git a/arch/x86/kernel/cpu/sgx/encl.c b/arch/x86/kernel/cpu/sgx/encl.c +index 2c258255a6296..d5f76b996795f 100644 +--- a/arch/x86/kernel/cpu/sgx/encl.c ++++ b/arch/x86/kernel/cpu/sgx/encl.c +@@ -235,6 +235,21 @@ static struct sgx_epc_page *sgx_encl_eldu(struct sgx_encl_page *encl_page, + return epc_page; + } + ++/* ++ * Ensure the SECS page is not swapped out. Must be called with encl->lock ++ * to protect the enclave states including SECS and ensure the SECS page is ++ * not swapped out again while being used. 
++ */ ++static struct sgx_epc_page *sgx_encl_load_secs(struct sgx_encl *encl) ++{ ++ struct sgx_epc_page *epc_page = encl->secs.epc_page; ++ ++ if (!epc_page) ++ epc_page = sgx_encl_eldu(&encl->secs, NULL); ++ ++ return epc_page; ++} ++ + static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl, + struct sgx_encl_page *entry) + { +@@ -248,11 +263,9 @@ static struct sgx_encl_page *__sgx_encl_load_page(struct sgx_encl *encl, + return entry; + } + +- if (!(encl->secs.epc_page)) { +- epc_page = sgx_encl_eldu(&encl->secs, NULL); +- if (IS_ERR(epc_page)) +- return ERR_CAST(epc_page); +- } ++ epc_page = sgx_encl_load_secs(encl); ++ if (IS_ERR(epc_page)) ++ return ERR_CAST(epc_page); + + epc_page = sgx_encl_eldu(entry, encl->secs.epc_page); + if (IS_ERR(epc_page)) +@@ -339,6 +352,13 @@ static vm_fault_t sgx_encl_eaug_page(struct vm_area_struct *vma, + + mutex_lock(&encl->lock); + ++ epc_page = sgx_encl_load_secs(encl); ++ if (IS_ERR(epc_page)) { ++ if (PTR_ERR(epc_page) == -EBUSY) ++ vmret = VM_FAULT_NOPAGE; ++ goto err_out_unlock; ++ } ++ + epc_page = sgx_alloc_epc_page(encl_page, false); + if (IS_ERR(epc_page)) { + if (PTR_ERR(epc_page) == -EBUSY) +diff --git a/arch/x86/kernel/crash.c b/arch/x86/kernel/crash.c +index cdd92ab43cda4..54cd959cb3160 100644 +--- a/arch/x86/kernel/crash.c ++++ b/arch/x86/kernel/crash.c +@@ -48,38 +48,12 @@ struct crash_memmap_data { + unsigned int type; + }; + +-/* +- * This is used to VMCLEAR all VMCSs loaded on the +- * processor. And when loading kvm_intel module, the +- * callback function pointer will be assigned. +- * +- * protected by rcu. 
+- */ +-crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss = NULL; +-EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss); +- +-static inline void cpu_crash_vmclear_loaded_vmcss(void) +-{ +- crash_vmclear_fn *do_vmclear_operation = NULL; +- +- rcu_read_lock(); +- do_vmclear_operation = rcu_dereference(crash_vmclear_loaded_vmcss); +- if (do_vmclear_operation) +- do_vmclear_operation(); +- rcu_read_unlock(); +-} +- + #if defined(CONFIG_SMP) && defined(CONFIG_X86_LOCAL_APIC) + + static void kdump_nmi_callback(int cpu, struct pt_regs *regs) + { + crash_save_cpu(regs, cpu); + +- /* +- * VMCLEAR VMCSs loaded on all cpus if needed. +- */ +- cpu_crash_vmclear_loaded_vmcss(); +- + /* + * Disable Intel PT to stop its logging + */ +@@ -133,11 +107,6 @@ void native_machine_crash_shutdown(struct pt_regs *regs) + + crash_smp_send_stop(); + +- /* +- * VMCLEAR VMCSs loaded on this cpu if needed. +- */ +- cpu_crash_vmclear_loaded_vmcss(); +- + cpu_emergency_disable_virtualization(); + + /* +diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c +index d03c551defccf..299b970e5f829 100644 +--- a/arch/x86/kernel/reboot.c ++++ b/arch/x86/kernel/reboot.c +@@ -787,6 +787,26 @@ void machine_crash_shutdown(struct pt_regs *regs) + } + #endif + ++/* ++ * This is used to VMCLEAR all VMCSs loaded on the ++ * processor. And when loading kvm_intel module, the ++ * callback function pointer will be assigned. ++ * ++ * protected by rcu. ++ */ ++crash_vmclear_fn __rcu *crash_vmclear_loaded_vmcss; ++EXPORT_SYMBOL_GPL(crash_vmclear_loaded_vmcss); ++ ++static inline void cpu_crash_vmclear_loaded_vmcss(void) ++{ ++ crash_vmclear_fn *do_vmclear_operation = NULL; ++ ++ rcu_read_lock(); ++ do_vmclear_operation = rcu_dereference(crash_vmclear_loaded_vmcss); ++ if (do_vmclear_operation) ++ do_vmclear_operation(); ++ rcu_read_unlock(); ++} + + /* This is the CPU performing the emergency shutdown work. 
*/ + int crashing_cpu = -1; +@@ -798,6 +818,8 @@ int crashing_cpu = -1; + */ + void cpu_emergency_disable_virtualization(void) + { ++ cpu_crash_vmclear_loaded_vmcss(); ++ + cpu_emergency_vmxoff(); + cpu_emergency_svm_disable(); + } +diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c +index 892609cde4a20..804a252382da7 100644 +--- a/arch/x86/kernel/setup.c ++++ b/arch/x86/kernel/setup.c +@@ -363,15 +363,11 @@ static void __init add_early_ima_buffer(u64 phys_addr) + #if defined(CONFIG_HAVE_IMA_KEXEC) && !defined(CONFIG_OF_FLATTREE) + int __init ima_free_kexec_buffer(void) + { +- int rc; +- + if (!ima_kexec_buffer_size) + return -ENOENT; + +- rc = memblock_phys_free(ima_kexec_buffer_phys, +- ima_kexec_buffer_size); +- if (rc) +- return rc; ++ memblock_free_late(ima_kexec_buffer_phys, ++ ima_kexec_buffer_size); + + ima_kexec_buffer_phys = 0; + ima_kexec_buffer_size = 0; +diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c +index 7a6df4b62c1bd..2a6fec4e2d196 100644 +--- a/arch/x86/kvm/mmu/mmu.c ++++ b/arch/x86/kvm/mmu/mmu.c +@@ -6079,7 +6079,6 @@ static bool kvm_rmap_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_e + void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) + { + bool flush; +- int i; + + if (WARN_ON_ONCE(gfn_end <= gfn_start)) + return; +@@ -6090,11 +6089,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) + + flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end); + +- if (is_tdp_mmu_enabled(kvm)) { +- for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) +- flush = kvm_tdp_mmu_zap_leafs(kvm, i, gfn_start, +- gfn_end, true, flush); +- } ++ if (is_tdp_mmu_enabled(kvm)) ++ flush = kvm_tdp_mmu_zap_leafs(kvm, gfn_start, gfn_end, flush); + + if (flush) + kvm_flush_remote_tlbs_with_address(kvm, gfn_start, +diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c +index 70945f00ec412..9b9fc4e834d09 100644 +--- a/arch/x86/kvm/mmu/tdp_mmu.c ++++ b/arch/x86/kvm/mmu/tdp_mmu.c +@@ -222,8 
+222,12 @@ static struct kvm_mmu_page *tdp_mmu_next_root(struct kvm *kvm, + #define for_each_valid_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared) \ + __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, _shared, true) + +-#define for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id) \ +- __for_each_tdp_mmu_root_yield_safe(_kvm, _root, _as_id, false, false) ++#define for_each_tdp_mmu_root_yield_safe(_kvm, _root) \ ++ for (_root = tdp_mmu_next_root(_kvm, NULL, false, false); \ ++ _root; \ ++ _root = tdp_mmu_next_root(_kvm, _root, false, false)) \ ++ if (!kvm_lockdep_assert_mmu_lock_held(_kvm, false)) { \ ++ } else + + /* + * Iterate over all TDP MMU roots. Requires that mmu_lock be held for write, +@@ -955,13 +959,12 @@ static bool tdp_mmu_zap_leafs(struct kvm *kvm, struct kvm_mmu_page *root, + * true if a TLB flush is needed before releasing the MMU lock, i.e. if one or + * more SPTEs were zapped since the MMU lock was last acquired. + */ +-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end, +- bool can_yield, bool flush) ++bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush) + { + struct kvm_mmu_page *root; + +- for_each_tdp_mmu_root_yield_safe(kvm, root, as_id) +- flush = tdp_mmu_zap_leafs(kvm, root, start, end, can_yield, flush); ++ for_each_tdp_mmu_root_yield_safe(kvm, root) ++ flush = tdp_mmu_zap_leafs(kvm, root, start, end, true, flush); + + return flush; + } +@@ -969,7 +972,6 @@ bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, gfn_t end, + void kvm_tdp_mmu_zap_all(struct kvm *kvm) + { + struct kvm_mmu_page *root; +- int i; + + /* + * Zap all roots, including invalid roots, as all SPTEs must be dropped +@@ -983,10 +985,8 @@ void kvm_tdp_mmu_zap_all(struct kvm *kvm) + * is being destroyed or the userspace VMM has exited. In both cases, + * KVM_RUN is unreachable, i.e. no vCPUs will ever service the request. 
+ */ +- for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { +- for_each_tdp_mmu_root_yield_safe(kvm, root, i) +- tdp_mmu_zap_root(kvm, root, false); +- } ++ for_each_tdp_mmu_root_yield_safe(kvm, root) ++ tdp_mmu_zap_root(kvm, root, false); + } + + /* +@@ -1221,8 +1221,13 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) + bool kvm_tdp_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range, + bool flush) + { +- return kvm_tdp_mmu_zap_leafs(kvm, range->slot->as_id, range->start, +- range->end, range->may_block, flush); ++ struct kvm_mmu_page *root; ++ ++ __for_each_tdp_mmu_root_yield_safe(kvm, root, range->slot->as_id, false, false) ++ flush = tdp_mmu_zap_leafs(kvm, root, range->start, range->end, ++ range->may_block, flush); ++ ++ return flush; + } + + typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter, +diff --git a/arch/x86/kvm/mmu/tdp_mmu.h b/arch/x86/kvm/mmu/tdp_mmu.h +index c163f7cc23ca5..d0a9fe0770fdd 100644 +--- a/arch/x86/kvm/mmu/tdp_mmu.h ++++ b/arch/x86/kvm/mmu/tdp_mmu.h +@@ -15,8 +15,7 @@ __must_check static inline bool kvm_tdp_mmu_get_root(struct kvm_mmu_page *root) + void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root, + bool shared); + +-bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, int as_id, gfn_t start, +- gfn_t end, bool can_yield, bool flush); ++bool kvm_tdp_mmu_zap_leafs(struct kvm *kvm, gfn_t start, gfn_t end, bool flush); + bool kvm_tdp_mmu_zap_sp(struct kvm *kvm, struct kvm_mmu_page *sp); + void kvm_tdp_mmu_zap_all(struct kvm *kvm); + void kvm_tdp_mmu_invalidate_all_roots(struct kvm *kvm); +diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c +index d08d5e085649f..3060fe4e9731a 100644 +--- a/arch/x86/kvm/svm/sev.c ++++ b/arch/x86/kvm/svm/sev.c +@@ -2941,6 +2941,32 @@ int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in) + count, in); + } + ++static void sev_es_vcpu_after_set_cpuid(struct vcpu_svm *svm) ++{ ++ struct kvm_vcpu *vcpu = &svm->vcpu; ++ 
++ if (boot_cpu_has(X86_FEATURE_V_TSC_AUX)) { ++ bool v_tsc_aux = guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP) || ++ guest_cpuid_has(vcpu, X86_FEATURE_RDPID); ++ ++ set_msr_interception(vcpu, svm->msrpm, MSR_TSC_AUX, v_tsc_aux, v_tsc_aux); ++ } ++} ++ ++void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm) ++{ ++ struct kvm_vcpu *vcpu = &svm->vcpu; ++ struct kvm_cpuid_entry2 *best; ++ ++ /* For sev guests, the memory encryption bit is not reserved in CR3. */ ++ best = kvm_find_cpuid_entry(vcpu, 0x8000001F); ++ if (best) ++ vcpu->arch.reserved_gpa_bits &= ~(1UL << (best->ebx & 0x3f)); ++ ++ if (sev_es_guest(svm->vcpu.kvm)) ++ sev_es_vcpu_after_set_cpuid(svm); ++} ++ + static void sev_es_init_vmcb(struct vcpu_svm *svm) + { + struct kvm_vcpu *vcpu = &svm->vcpu; +@@ -2987,14 +3013,6 @@ static void sev_es_init_vmcb(struct vcpu_svm *svm) + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTBRANCHTOIP, 1, 1); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTFROMIP, 1, 1); + set_msr_interception(vcpu, svm->msrpm, MSR_IA32_LASTINTTOIP, 1, 1); +- +- if (boot_cpu_has(X86_FEATURE_V_TSC_AUX) && +- (guest_cpuid_has(&svm->vcpu, X86_FEATURE_RDTSCP) || +- guest_cpuid_has(&svm->vcpu, X86_FEATURE_RDPID))) { +- set_msr_interception(vcpu, svm->msrpm, MSR_TSC_AUX, 1, 1); +- if (guest_cpuid_has(&svm->vcpu, X86_FEATURE_RDTSCP)) +- svm_clr_intercept(svm, INTERCEPT_RDTSCP); +- } + } + + void sev_init_vmcb(struct vcpu_svm *svm) +diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c +index 7e4d66be18ef5..c871a6d6364ca 100644 +--- a/arch/x86/kvm/svm/svm.c ++++ b/arch/x86/kvm/svm/svm.c +@@ -4173,7 +4173,6 @@ static bool svm_has_emulated_msr(struct kvm *kvm, u32 index) + static void svm_vcpu_after_set_cpuid(struct kvm_vcpu *vcpu) + { + struct vcpu_svm *svm = to_svm(vcpu); +- struct kvm_cpuid_entry2 *best; + + vcpu->arch.xsaves_enabled = guest_cpuid_has(vcpu, X86_FEATURE_XSAVE) && + boot_cpu_has(X86_FEATURE_XSAVE) && +@@ -4198,12 +4197,8 @@ static void svm_vcpu_after_set_cpuid(struct 
kvm_vcpu *vcpu) + + svm_recalc_instruction_intercepts(vcpu, svm); + +- /* For sev guests, the memory encryption bit is not reserved in CR3. */ +- if (sev_guest(vcpu->kvm)) { +- best = kvm_find_cpuid_entry(vcpu, 0x8000001F); +- if (best) +- vcpu->arch.reserved_gpa_bits &= ~(1UL << (best->ebx & 0x3f)); +- } ++ if (sev_guest(vcpu->kvm)) ++ sev_vcpu_after_set_cpuid(svm); + + init_vmcb_after_set_cpuid(vcpu); + } +diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h +index 62f87492763e0..4cb1425900c6d 100644 +--- a/arch/x86/kvm/svm/svm.h ++++ b/arch/x86/kvm/svm/svm.h +@@ -677,6 +677,7 @@ void __init sev_hardware_setup(void); + void sev_hardware_unsetup(void); + int sev_cpu_init(struct svm_cpu_data *sd); + void sev_init_vmcb(struct vcpu_svm *svm); ++void sev_vcpu_after_set_cpuid(struct vcpu_svm *svm); + void sev_free_vcpu(struct kvm_vcpu *vcpu); + int sev_handle_vmgexit(struct kvm_vcpu *vcpu); + int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in); +diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c +index 4e972b9b68e59..31a10d774df6d 100644 +--- a/arch/x86/kvm/vmx/vmx.c ++++ b/arch/x86/kvm/vmx/vmx.c +@@ -40,7 +40,7 @@ + #include + #include + #include +-#include ++#include + #include + #include + #include +@@ -702,7 +702,6 @@ static int vmx_set_guest_uret_msr(struct vcpu_vmx *vmx, + return ret; + } + +-#ifdef CONFIG_KEXEC_CORE + static void crash_vmclear_local_loaded_vmcss(void) + { + int cpu = raw_smp_processor_id(); +@@ -712,7 +711,6 @@ static void crash_vmclear_local_loaded_vmcss(void) + loaded_vmcss_on_cpu_link) + vmcs_clear(v->vmcs); + } +-#endif /* CONFIG_KEXEC_CORE */ + + static void __loaded_vmcs_clear(void *arg) + { +@@ -8522,10 +8520,9 @@ static void __vmx_exit(void) + { + allow_smaller_maxphyaddr = false; + +-#ifdef CONFIG_KEXEC_CORE + RCU_INIT_POINTER(crash_vmclear_loaded_vmcss, NULL); + synchronize_rcu(); +-#endif ++ + vmx_cleanup_l1d_flush(); + } + +@@ -8598,10 +8595,9 @@ static int __init vmx_init(void) + 
pi_init_cpu(cpu); + } + +-#ifdef CONFIG_KEXEC_CORE + rcu_assign_pointer(crash_vmclear_loaded_vmcss, + crash_vmclear_local_loaded_vmcss); +-#endif ++ + vmx_check_vmcs12_offsets(); + + /* +diff --git a/arch/xtensa/boot/Makefile b/arch/xtensa/boot/Makefile +index a65b7a9ebff28..d8b0fadf429a9 100644 +--- a/arch/xtensa/boot/Makefile ++++ b/arch/xtensa/boot/Makefile +@@ -9,8 +9,7 @@ + + + # KBUILD_CFLAGS used when building rest of boot (takes effect recursively) +-KBUILD_CFLAGS += -fno-builtin -Iarch/$(ARCH)/boot/include +-HOSTFLAGS += -Iarch/$(ARCH)/boot/include ++KBUILD_CFLAGS += -fno-builtin + + subdir-y := lib + targets += vmlinux.bin vmlinux.bin.gz +diff --git a/arch/xtensa/boot/lib/zmem.c b/arch/xtensa/boot/lib/zmem.c +index e3ecd743c5153..b89189355122a 100644 +--- a/arch/xtensa/boot/lib/zmem.c ++++ b/arch/xtensa/boot/lib/zmem.c +@@ -4,13 +4,14 @@ + /* bits taken from ppc */ + + extern void *avail_ram, *end_avail; ++void gunzip(void *dst, int dstlen, unsigned char *src, int *lenp); + +-void exit (void) ++static void exit(void) + { + for (;;); + } + +-void *zalloc(unsigned size) ++static void *zalloc(unsigned int size) + { + void *p = avail_ram; + +diff --git a/arch/xtensa/include/asm/core.h b/arch/xtensa/include/asm/core.h +index 7cef85ad9741a..25293269e1edd 100644 +--- a/arch/xtensa/include/asm/core.h ++++ b/arch/xtensa/include/asm/core.h +@@ -6,6 +6,10 @@ + + #include + ++#ifndef XCHAL_HAVE_DIV32 ++#define XCHAL_HAVE_DIV32 0 ++#endif ++ + #ifndef XCHAL_HAVE_EXCLUSIVE + #define XCHAL_HAVE_EXCLUSIVE 0 + #endif +diff --git a/arch/xtensa/lib/umulsidi3.S b/arch/xtensa/lib/umulsidi3.S +index 1360816479427..4d9ba2387de0f 100644 +--- a/arch/xtensa/lib/umulsidi3.S ++++ b/arch/xtensa/lib/umulsidi3.S +@@ -3,7 +3,9 @@ + #include + #include + +-#if !XCHAL_HAVE_MUL16 && !XCHAL_HAVE_MUL32 && !XCHAL_HAVE_MAC16 ++#if XCHAL_HAVE_MUL16 || XCHAL_HAVE_MUL32 || XCHAL_HAVE_MAC16 ++#define XCHAL_NO_MUL 0 ++#else + #define XCHAL_NO_MUL 1 + #endif + +diff --git 
a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c +index 119345eeb04c9..bea539f9039a2 100644 +--- a/arch/xtensa/platforms/iss/network.c ++++ b/arch/xtensa/platforms/iss/network.c +@@ -201,7 +201,7 @@ static int tuntap_write(struct iss_net_private *lp, struct sk_buff **skb) + return simc_write(lp->tp.info.tuntap.fd, (*skb)->data, (*skb)->len); + } + +-unsigned short tuntap_protocol(struct sk_buff *skb) ++static unsigned short tuntap_protocol(struct sk_buff *skb) + { + return eth_type_trans(skb, skb->dev); + } +@@ -441,7 +441,7 @@ static int iss_net_change_mtu(struct net_device *dev, int new_mtu) + return -EINVAL; + } + +-void iss_net_user_timer_expire(struct timer_list *unused) ++static void iss_net_user_timer_expire(struct timer_list *unused) + { + } + +diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c +index 0ba0c3d1613f1..25b9bdf2fc380 100644 +--- a/drivers/ata/libata-core.c ++++ b/drivers/ata/libata-core.c +@@ -4981,17 +4981,19 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg, + struct ata_link *link; + unsigned long flags; + +- /* Previous resume operation might still be in +- * progress. Wait for PM_PENDING to clear. ++ spin_lock_irqsave(ap->lock, flags); ++ ++ /* ++ * A previous PM operation might still be in progress. Wait for ++ * ATA_PFLAG_PM_PENDING to clear. 
+ */ + if (ap->pflags & ATA_PFLAG_PM_PENDING) { ++ spin_unlock_irqrestore(ap->lock, flags); + ata_port_wait_eh(ap); +- WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING); ++ spin_lock_irqsave(ap->lock, flags); + } + +- /* request PM ops to EH */ +- spin_lock_irqsave(ap->lock, flags); +- ++ /* Request PM operation to EH */ + ap->pm_mesg = mesg; + ap->pflags |= ATA_PFLAG_PM_PENDING; + ata_for_each_link(link, ap, HOST_FIRST) { +@@ -5003,10 +5005,8 @@ static void ata_port_request_pm(struct ata_port *ap, pm_message_t mesg, + + spin_unlock_irqrestore(ap->lock, flags); + +- if (!async) { ++ if (!async) + ata_port_wait_eh(ap); +- WARN_ON(ap->pflags & ATA_PFLAG_PM_PENDING); +- } + } + + /* +@@ -5173,7 +5173,7 @@ EXPORT_SYMBOL_GPL(ata_host_resume); + #endif + + const struct device_type ata_port_type = { +- .name = "ata_port", ++ .name = ATA_PORT_TYPE_NAME, + #ifdef CONFIG_PM + .pm = &ata_port_pm_ops, + #endif +@@ -5906,11 +5906,30 @@ static void ata_port_detach(struct ata_port *ap) + if (!ap->ops->error_handler) + goto skip_eh; + +- /* tell EH we're leaving & flush EH */ ++ /* Wait for any ongoing EH */ ++ ata_port_wait_eh(ap); ++ ++ mutex_lock(&ap->scsi_scan_mutex); + spin_lock_irqsave(ap->lock, flags); ++ ++ /* Remove scsi devices */ ++ ata_for_each_link(link, ap, HOST_FIRST) { ++ ata_for_each_dev(dev, link, ALL) { ++ if (dev->sdev) { ++ spin_unlock_irqrestore(ap->lock, flags); ++ scsi_remove_device(dev->sdev); ++ spin_lock_irqsave(ap->lock, flags); ++ dev->sdev = NULL; ++ } ++ } ++ } ++ ++ /* Tell EH to disable all devices */ + ap->pflags |= ATA_PFLAG_UNLOADING; + ata_port_schedule_eh(ap); ++ + spin_unlock_irqrestore(ap->lock, flags); ++ mutex_unlock(&ap->scsi_scan_mutex); + + /* wait till EH commits suicide */ + ata_port_wait_eh(ap); +diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c +index a3ae5fc2a42fc..6d4c80b6daaef 100644 +--- a/drivers/ata/libata-eh.c ++++ b/drivers/ata/libata-eh.c +@@ -2704,18 +2704,11 @@ int ata_eh_reset(struct ata_link *link, int classify, + 
} + } + +- /* +- * Some controllers can't be frozen very well and may set spurious +- * error conditions during reset. Clear accumulated error +- * information and re-thaw the port if frozen. As reset is the +- * final recovery action and we cross check link onlineness against +- * device classification later, no hotplug event is lost by this. +- */ ++ /* clear cached SError */ + spin_lock_irqsave(link->ap->lock, flags); +- memset(&link->eh_info, 0, sizeof(link->eh_info)); ++ link->eh_info.serror = 0; + if (slave) +- memset(&slave->eh_info, 0, sizeof(link->eh_info)); +- ap->pflags &= ~ATA_PFLAG_EH_PENDING; ++ slave->eh_info.serror = 0; + spin_unlock_irqrestore(link->ap->lock, flags); + + if (ap->pflags & ATA_PFLAG_FROZEN) +diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c +index 9c0052d28078a..d28628b964e29 100644 +--- a/drivers/ata/libata-scsi.c ++++ b/drivers/ata/libata-scsi.c +@@ -1113,6 +1113,42 @@ int ata_scsi_dev_config(struct scsi_device *sdev, struct ata_device *dev) + return 0; + } + ++/** ++ * ata_scsi_slave_alloc - Early setup of SCSI device ++ * @sdev: SCSI device to examine ++ * ++ * This is called from scsi_alloc_sdev() when the scsi device ++ * associated with an ATA device is scanned on a port. ++ * ++ * LOCKING: ++ * Defined by SCSI layer. We don't really care. ++ */ ++ ++int ata_scsi_slave_alloc(struct scsi_device *sdev) ++{ ++ struct ata_port *ap = ata_shost_to_port(sdev->host); ++ struct device_link *link; ++ ++ ata_scsi_sdev_config(sdev); ++ ++ /* ++ * Create a link from the ata_port device to the scsi device to ensure ++ * that PM does suspend/resume in the correct order: the scsi device is ++ * consumer (child) and the ata port the supplier (parent). 
++ */ ++ link = device_link_add(&sdev->sdev_gendev, &ap->tdev, ++ DL_FLAG_STATELESS | ++ DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE); ++ if (!link) { ++ ata_port_err(ap, "Failed to create link to scsi device %s\n", ++ dev_name(&sdev->sdev_gendev)); ++ return -ENODEV; ++ } ++ ++ return 0; ++} ++EXPORT_SYMBOL_GPL(ata_scsi_slave_alloc); ++ + /** + * ata_scsi_slave_config - Set SCSI device attributes + * @sdev: SCSI device to examine +@@ -1129,14 +1165,11 @@ int ata_scsi_slave_config(struct scsi_device *sdev) + { + struct ata_port *ap = ata_shost_to_port(sdev->host); + struct ata_device *dev = __ata_scsi_find_dev(ap, sdev); +- int rc = 0; +- +- ata_scsi_sdev_config(sdev); + + if (dev) +- rc = ata_scsi_dev_config(sdev, dev); ++ return ata_scsi_dev_config(sdev, dev); + +- return rc; ++ return 0; + } + EXPORT_SYMBOL_GPL(ata_scsi_slave_config); + +@@ -1163,6 +1196,8 @@ void ata_scsi_slave_destroy(struct scsi_device *sdev) + if (!ap->ops->error_handler) + return; + ++ device_link_remove(&sdev->sdev_gendev, &ap->tdev); ++ + spin_lock_irqsave(ap->lock, flags); + dev = __ata_scsi_find_dev(ap, sdev); + if (dev && dev->sdev) { +@@ -4192,7 +4227,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd) + break; + + case MAINTENANCE_IN: +- if (scsicmd[1] == MI_REPORT_SUPPORTED_OPERATION_CODES) ++ if ((scsicmd[1] & 0x1f) == MI_REPORT_SUPPORTED_OPERATION_CODES) + ata_scsi_rbuf_fill(&args, ata_scsiop_maint_in); + else + ata_scsi_set_invalid_field(dev, cmd, 1, 0xff); +diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c +index e4fb9d1b9b398..3e49a877500e1 100644 +--- a/drivers/ata/libata-transport.c ++++ b/drivers/ata/libata-transport.c +@@ -266,6 +266,10 @@ void ata_tport_delete(struct ata_port *ap) + put_device(dev); + } + ++static const struct device_type ata_port_sas_type = { ++ .name = ATA_PORT_TYPE_NAME, ++}; ++ + /** ata_tport_add - initialize a transport ATA port structure + * + * @parent: parent device +@@ -283,7 +287,10 @@ int 
ata_tport_add(struct device *parent, + struct device *dev = &ap->tdev; + + device_initialize(dev); +- dev->type = &ata_port_type; ++ if (ap->flags & ATA_FLAG_SAS_HOST) ++ dev->type = &ata_port_sas_type; ++ else ++ dev->type = &ata_port_type; + + dev->parent = parent; + ata_host_get(ap->host); +diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h +index 2c5c8273af017..e5ec197aed303 100644 +--- a/drivers/ata/libata.h ++++ b/drivers/ata/libata.h +@@ -30,6 +30,8 @@ enum { + ATA_DNXFER_QUIET = (1 << 31), + }; + ++#define ATA_PORT_TYPE_NAME "ata_port" ++ + extern atomic_t ata_print_id; + extern int atapi_passthru16; + extern int libata_fua; +diff --git a/drivers/ata/sata_mv.c b/drivers/ata/sata_mv.c +index e3cff01201b80..17f9062b0eaa5 100644 +--- a/drivers/ata/sata_mv.c ++++ b/drivers/ata/sata_mv.c +@@ -1255,8 +1255,8 @@ static void mv_dump_mem(struct device *dev, void __iomem *start, unsigned bytes) + + for (b = 0; b < bytes; ) { + for (w = 0, o = 0; b < bytes && w < 4; w++) { +- o += snprintf(linebuf + o, sizeof(linebuf) - o, +- "%08x ", readl(start + b)); ++ o += scnprintf(linebuf + o, sizeof(linebuf) - o, ++ "%08x ", readl(start + b)); + b += sizeof(u32); + } + dev_dbg(dev, "%s: %p: %s\n", +diff --git a/drivers/bus/ti-sysc.c b/drivers/bus/ti-sysc.c +index ddde1427c90c7..59a2fe2448f17 100644 +--- a/drivers/bus/ti-sysc.c ++++ b/drivers/bus/ti-sysc.c +@@ -38,6 +38,7 @@ enum sysc_soc { + SOC_2420, + SOC_2430, + SOC_3430, ++ SOC_AM35, + SOC_3630, + SOC_4430, + SOC_4460, +@@ -1119,6 +1120,11 @@ static int sysc_enable_module(struct device *dev) + if (ddata->cfg.quirks & (SYSC_QUIRK_SWSUP_SIDLE | + SYSC_QUIRK_SWSUP_SIDLE_ACT)) { + best_mode = SYSC_IDLE_NO; ++ ++ /* Clear WAKEUP */ ++ if (regbits->enwkup_shift >= 0 && ++ ddata->cfg.sysc_val & BIT(regbits->enwkup_shift)) ++ reg &= ~BIT(regbits->enwkup_shift); + } else { + best_mode = fls(ddata->cfg.sidlemodes) - 1; + if (best_mode > SYSC_IDLE_MASK) { +@@ -1246,6 +1252,13 @@ set_sidle: + } + } + ++ if (ddata->cfg.quirks & 
SYSC_QUIRK_SWSUP_SIDLE_ACT) { ++ /* Set WAKEUP */ ++ if (regbits->enwkup_shift >= 0 && ++ ddata->cfg.sysc_val & BIT(regbits->enwkup_shift)) ++ reg |= BIT(regbits->enwkup_shift); ++ } ++ + reg &= ~(SYSC_IDLE_MASK << regbits->sidle_shift); + reg |= best_mode << regbits->sidle_shift; + if (regbits->autoidle_shift >= 0 && +@@ -1540,16 +1553,16 @@ struct sysc_revision_quirk { + static const struct sysc_revision_quirk sysc_revision_quirks[] = { + /* These drivers need to be fixed to not use pm_runtime_irq_safe() */ + SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000046, 0xffffffff, +- SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), ++ SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), + SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x00000052, 0xffffffff, +- SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), ++ SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), + /* Uarts on omap4 and later */ + SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x50411e03, 0xffff00ff, +- SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), ++ SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), + SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47422e03, 0xffffffff, +- SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), ++ SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), + SYSC_QUIRK("uart", 0, 0x50, 0x54, 0x58, 0x47424e03, 0xffffffff, +- SYSC_QUIRK_SWSUP_SIDLE | SYSC_QUIRK_LEGACY_IDLE), ++ SYSC_QUIRK_SWSUP_SIDLE_ACT | SYSC_QUIRK_LEGACY_IDLE), + + /* Quirks that need to be set based on the module address */ + SYSC_QUIRK("mcpdm", 0x40132000, 0, 0x10, -ENODEV, 0x50000800, 0xffffffff, +@@ -1878,7 +1891,7 @@ static void sysc_pre_reset_quirk_dss(struct sysc *ddata) + dev_warn(ddata->dev, "%s: timed out %08x !+ %08x\n", + __func__, val, irq_mask); + +- if (sysc_soc->soc == SOC_3430) { ++ if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35) { + /* Clear DSS_SDI_CONTROL */ + sysc_write(ddata, 0x44, 0); + +@@ -2166,8 +2179,7 @@ static int sysc_reset(struct sysc *ddata) + } + + if (ddata->cfg.srst_udelay) 
+- usleep_range(ddata->cfg.srst_udelay, +- ddata->cfg.srst_udelay * 2); ++ fsleep(ddata->cfg.srst_udelay); + + if (ddata->post_reset_quirk) + ddata->post_reset_quirk(ddata); +@@ -3043,6 +3055,7 @@ static void ti_sysc_idle(struct work_struct *work) + static const struct soc_device_attribute sysc_soc_match[] = { + SOC_FLAG("OMAP242*", SOC_2420), + SOC_FLAG("OMAP243*", SOC_2430), ++ SOC_FLAG("AM35*", SOC_AM35), + SOC_FLAG("OMAP3[45]*", SOC_3430), + SOC_FLAG("OMAP3[67]*", SOC_3630), + SOC_FLAG("OMAP443*", SOC_4430), +@@ -3249,7 +3262,7 @@ static int sysc_check_active_timer(struct sysc *ddata) + * can be dropped if we stop supporting old beagleboard revisions + * A to B4 at some point. + */ +- if (sysc_soc->soc == SOC_3430) ++ if (sysc_soc->soc == SOC_3430 || sysc_soc->soc == SOC_AM35) + error = -ENXIO; + else + error = -EBUSY; +diff --git a/drivers/char/agp/parisc-agp.c b/drivers/char/agp/parisc-agp.c +index 514f9f287a781..c6f181702b9a7 100644 +--- a/drivers/char/agp/parisc-agp.c ++++ b/drivers/char/agp/parisc-agp.c +@@ -394,8 +394,6 @@ find_quicksilver(struct device *dev, void *data) + static int __init + parisc_agp_init(void) + { +- extern struct sba_device *sba_list; +- + int err = -1; + struct parisc_device *sba = NULL, *lba = NULL; + struct lba_device *lbadev = NULL; +diff --git a/drivers/clk/sprd/ums512-clk.c b/drivers/clk/sprd/ums512-clk.c +index fc25bdd85e4ea..f43bb10bd5ae2 100644 +--- a/drivers/clk/sprd/ums512-clk.c ++++ b/drivers/clk/sprd/ums512-clk.c +@@ -800,7 +800,7 @@ static SPRD_MUX_CLK_DATA(uart1_clk, "uart1-clk", uart_parents, + 0x250, 0, 3, UMS512_MUX_FLAG); + + static const struct clk_parent_data thm_parents[] = { +- { .fw_name = "ext-32m" }, ++ { .fw_name = "ext-32k" }, + { .hw = &clk_250k.hw }, + }; + static SPRD_MUX_CLK_DATA(thm0_clk, "thm0-clk", thm_parents, +diff --git a/drivers/clk/tegra/clk-bpmp.c b/drivers/clk/tegra/clk-bpmp.c +index d82a71f10c2c1..39241662a412a 100644 +--- a/drivers/clk/tegra/clk-bpmp.c ++++ b/drivers/clk/tegra/clk-bpmp.c 
+@@ -159,7 +159,7 @@ static unsigned long tegra_bpmp_clk_recalc_rate(struct clk_hw *hw, + + err = tegra_bpmp_clk_transfer(clk->bpmp, &msg); + if (err < 0) +- return err; ++ return 0; + + return response.rate; + } +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index c37e823590055..21481fc05800f 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -478,6 +478,19 @@ static u32 ffa_get_num_pages_sg(struct scatterlist *sg) + return num_pages; + } + ++static u8 ffa_memory_attributes_get(u32 func_id) ++{ ++ /* ++ * For the memory lend or donate operation, if the receiver is a PE or ++ * a proxy endpoint, the owner/sender must not specify the attributes ++ */ ++ if (func_id == FFA_FN_NATIVE(MEM_LEND) || ++ func_id == FFA_MEM_LEND) ++ return 0; ++ ++ return FFA_MEM_NORMAL | FFA_MEM_WRITE_BACK | FFA_MEM_INNER_SHAREABLE; ++} ++ + static int + ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize, + struct ffa_mem_ops_args *args) +@@ -494,8 +507,7 @@ ffa_setup_and_transmit(u32 func_id, void *buffer, u32 max_fragsize, + mem_region->tag = args->tag; + mem_region->flags = args->flags; + mem_region->sender_id = drv_info->vm_id; +- mem_region->attributes = FFA_MEM_NORMAL | FFA_MEM_WRITE_BACK | +- FFA_MEM_INNER_SHAREABLE; ++ mem_region->attributes = ffa_memory_attributes_get(func_id); + ep_mem_access = &mem_region->ep_mem_access[0]; + + for (idx = 0; idx < args->nattrs; idx++, ep_mem_access++) { +diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c +index ecf5c4de851b7..431bda9165c3d 100644 +--- a/drivers/firmware/arm_scmi/perf.c ++++ b/drivers/firmware/arm_scmi/perf.c +@@ -139,7 +139,7 @@ struct perf_dom_info { + + struct scmi_perf_info { + u32 version; +- int num_domains; ++ u16 num_domains; + enum scmi_power_scale power_scale; + u64 stats_addr; + u32 stats_size; +@@ -356,11 +356,26 @@ static int scmi_perf_mb_limits_set(const struct scmi_protocol_handle *ph, + 
return ret; + } + ++static inline struct perf_dom_info * ++scmi_perf_domain_lookup(const struct scmi_protocol_handle *ph, u32 domain) ++{ ++ struct scmi_perf_info *pi = ph->get_priv(ph); ++ ++ if (domain >= pi->num_domains) ++ return ERR_PTR(-EINVAL); ++ ++ return pi->dom_info + domain; ++} ++ + static int scmi_perf_limits_set(const struct scmi_protocol_handle *ph, + u32 domain, u32 max_perf, u32 min_perf) + { + struct scmi_perf_info *pi = ph->get_priv(ph); +- struct perf_dom_info *dom = pi->dom_info + domain; ++ struct perf_dom_info *dom; ++ ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + + if (PROTOCOL_REV_MAJOR(pi->version) >= 0x3 && !max_perf && !min_perf) + return -EINVAL; +@@ -408,8 +423,11 @@ static int scmi_perf_mb_limits_get(const struct scmi_protocol_handle *ph, + static int scmi_perf_limits_get(const struct scmi_protocol_handle *ph, + u32 domain, u32 *max_perf, u32 *min_perf) + { +- struct scmi_perf_info *pi = ph->get_priv(ph); +- struct perf_dom_info *dom = pi->dom_info + domain; ++ struct perf_dom_info *dom; ++ ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + + if (dom->fc_info && dom->fc_info[PERF_FC_LIMIT].get_addr) { + struct scmi_fc_info *fci = &dom->fc_info[PERF_FC_LIMIT]; +@@ -449,8 +467,11 @@ static int scmi_perf_mb_level_set(const struct scmi_protocol_handle *ph, + static int scmi_perf_level_set(const struct scmi_protocol_handle *ph, + u32 domain, u32 level, bool poll) + { +- struct scmi_perf_info *pi = ph->get_priv(ph); +- struct perf_dom_info *dom = pi->dom_info + domain; ++ struct perf_dom_info *dom; ++ ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + + if (dom->fc_info && dom->fc_info[PERF_FC_LEVEL].set_addr) { + struct scmi_fc_info *fci = &dom->fc_info[PERF_FC_LEVEL]; +@@ -490,8 +511,11 @@ static int scmi_perf_mb_level_get(const struct scmi_protocol_handle *ph, + static int scmi_perf_level_get(const struct 
scmi_protocol_handle *ph, + u32 domain, u32 *level, bool poll) + { +- struct scmi_perf_info *pi = ph->get_priv(ph); +- struct perf_dom_info *dom = pi->dom_info + domain; ++ struct perf_dom_info *dom; ++ ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + + if (dom->fc_info && dom->fc_info[PERF_FC_LEVEL].get_addr) { + *level = ioread32(dom->fc_info[PERF_FC_LEVEL].get_addr); +@@ -574,13 +598,14 @@ static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph, + unsigned long freq; + struct scmi_opp *opp; + struct perf_dom_info *dom; +- struct scmi_perf_info *pi = ph->get_priv(ph); + + domain = scmi_dev_domain_id(dev); + if (domain < 0) +- return domain; ++ return -EINVAL; + +- dom = pi->dom_info + domain; ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + + for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) { + freq = opp->perf * dom->mult_factor; +@@ -603,14 +628,17 @@ static int + scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph, + struct device *dev) + { ++ int domain; + struct perf_dom_info *dom; +- struct scmi_perf_info *pi = ph->get_priv(ph); +- int domain = scmi_dev_domain_id(dev); + ++ domain = scmi_dev_domain_id(dev); + if (domain < 0) +- return domain; ++ return -EINVAL; ++ ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + +- dom = pi->dom_info + domain; + /* uS to nS */ + return dom->opp[dom->opp_count - 1].trans_latency_us * 1000; + } +@@ -618,8 +646,11 @@ scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph, + static int scmi_dvfs_freq_set(const struct scmi_protocol_handle *ph, u32 domain, + unsigned long freq, bool poll) + { +- struct scmi_perf_info *pi = ph->get_priv(ph); +- struct perf_dom_info *dom = pi->dom_info + domain; ++ struct perf_dom_info *dom; ++ ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + + return 
scmi_perf_level_set(ph, domain, freq / dom->mult_factor, poll); + } +@@ -630,11 +661,14 @@ static int scmi_dvfs_freq_get(const struct scmi_protocol_handle *ph, u32 domain, + int ret; + u32 level; + struct scmi_perf_info *pi = ph->get_priv(ph); +- struct perf_dom_info *dom = pi->dom_info + domain; + + ret = scmi_perf_level_get(ph, domain, &level, poll); +- if (!ret) ++ if (!ret) { ++ struct perf_dom_info *dom = pi->dom_info + domain; ++ ++ /* Note domain is validated implicitly by scmi_perf_level_get */ + *freq = level * dom->mult_factor; ++ } + + return ret; + } +@@ -643,15 +677,14 @@ static int scmi_dvfs_est_power_get(const struct scmi_protocol_handle *ph, + u32 domain, unsigned long *freq, + unsigned long *power) + { +- struct scmi_perf_info *pi = ph->get_priv(ph); + struct perf_dom_info *dom; + unsigned long opp_freq; + int idx, ret = -EINVAL; + struct scmi_opp *opp; + +- dom = pi->dom_info + domain; +- if (!dom) +- return -EIO; ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return PTR_ERR(dom); + + for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) { + opp_freq = opp->perf * dom->mult_factor; +@@ -670,10 +703,16 @@ static int scmi_dvfs_est_power_get(const struct scmi_protocol_handle *ph, + static bool scmi_fast_switch_possible(const struct scmi_protocol_handle *ph, + struct device *dev) + { ++ int domain; + struct perf_dom_info *dom; +- struct scmi_perf_info *pi = ph->get_priv(ph); + +- dom = pi->dom_info + scmi_dev_domain_id(dev); ++ domain = scmi_dev_domain_id(dev); ++ if (domain < 0) ++ return false; ++ ++ dom = scmi_perf_domain_lookup(ph, domain); ++ if (IS_ERR(dom)) ++ return false; + + return dom->fc_info && dom->fc_info[PERF_FC_LEVEL].set_addr; + } +@@ -819,6 +858,8 @@ static int scmi_perf_protocol_init(const struct scmi_protocol_handle *ph) + if (!pinfo) + return -ENOMEM; + ++ pinfo->version = version; ++ + ret = scmi_perf_attributes_get(ph, pinfo); + if (ret) + return ret; +@@ -838,8 +879,6 @@ static int 
scmi_perf_protocol_init(const struct scmi_protocol_handle *ph) + scmi_perf_domain_init_fc(ph, domain, &dom->fc_info); + } + +- pinfo->version = version; +- + return ph->set_priv(ph, pinfo); + } + +diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c +index 81c5f94b1be11..64ed9d3f5d5d8 100644 +--- a/drivers/firmware/cirrus/cs_dsp.c ++++ b/drivers/firmware/cirrus/cs_dsp.c +@@ -1821,15 +1821,15 @@ static int cs_dsp_adsp2_setup_algs(struct cs_dsp *dsp) + return PTR_ERR(adsp2_alg); + + for (i = 0; i < n_algs; i++) { +- cs_dsp_info(dsp, +- "%d: ID %x v%d.%d.%d XM@%x YM@%x ZM@%x\n", +- i, be32_to_cpu(adsp2_alg[i].alg.id), +- (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff0000) >> 16, +- (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff00) >> 8, +- be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff, +- be32_to_cpu(adsp2_alg[i].xm), +- be32_to_cpu(adsp2_alg[i].ym), +- be32_to_cpu(adsp2_alg[i].zm)); ++ cs_dsp_dbg(dsp, ++ "%d: ID %x v%d.%d.%d XM@%x YM@%x ZM@%x\n", ++ i, be32_to_cpu(adsp2_alg[i].alg.id), ++ (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff0000) >> 16, ++ (be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff00) >> 8, ++ be32_to_cpu(adsp2_alg[i].alg.ver) & 0xff, ++ be32_to_cpu(adsp2_alg[i].xm), ++ be32_to_cpu(adsp2_alg[i].ym), ++ be32_to_cpu(adsp2_alg[i].zm)); + + alg_region = cs_dsp_create_region(dsp, WMFW_ADSP2_XM, + adsp2_alg[i].alg.id, +@@ -1954,14 +1954,14 @@ static int cs_dsp_halo_setup_algs(struct cs_dsp *dsp) + return PTR_ERR(halo_alg); + + for (i = 0; i < n_algs; i++) { +- cs_dsp_info(dsp, +- "%d: ID %x v%d.%d.%d XM@%x YM@%x\n", +- i, be32_to_cpu(halo_alg[i].alg.id), +- (be32_to_cpu(halo_alg[i].alg.ver) & 0xff0000) >> 16, +- (be32_to_cpu(halo_alg[i].alg.ver) & 0xff00) >> 8, +- be32_to_cpu(halo_alg[i].alg.ver) & 0xff, +- be32_to_cpu(halo_alg[i].xm_base), +- be32_to_cpu(halo_alg[i].ym_base)); ++ cs_dsp_dbg(dsp, ++ "%d: ID %x v%d.%d.%d XM@%x YM@%x\n", ++ i, be32_to_cpu(halo_alg[i].alg.id), ++ (be32_to_cpu(halo_alg[i].alg.ver) & 0xff0000) >> 16, ++ 
(be32_to_cpu(halo_alg[i].alg.ver) & 0xff00) >> 8, ++ be32_to_cpu(halo_alg[i].alg.ver) & 0xff, ++ be32_to_cpu(halo_alg[i].xm_base), ++ be32_to_cpu(halo_alg[i].ym_base)); + + ret = cs_dsp_halo_create_regions(dsp, halo_alg[i].alg.id, + halo_alg[i].alg.ver, +diff --git a/drivers/firmware/imx/imx-dsp.c b/drivers/firmware/imx/imx-dsp.c +index a6c06d7476c32..1f410809d3ee4 100644 +--- a/drivers/firmware/imx/imx-dsp.c ++++ b/drivers/firmware/imx/imx-dsp.c +@@ -115,6 +115,7 @@ static int imx_dsp_setup_channels(struct imx_dsp_ipc *dsp_ipc) + dsp_chan->idx = i % 2; + dsp_chan->ch = mbox_request_channel_byname(cl, chan_name); + if (IS_ERR(dsp_chan->ch)) { ++ kfree(dsp_chan->name); + ret = PTR_ERR(dsp_chan->ch); + if (ret != -EPROBE_DEFER) + dev_err(dev, "Failed to request mbox chan %s ret %d\n", +diff --git a/drivers/gpio/gpio-pmic-eic-sprd.c b/drivers/gpio/gpio-pmic-eic-sprd.c +index e518490c4b681..ebbbcb54270d1 100644 +--- a/drivers/gpio/gpio-pmic-eic-sprd.c ++++ b/drivers/gpio/gpio-pmic-eic-sprd.c +@@ -337,6 +337,7 @@ static int sprd_pmic_eic_probe(struct platform_device *pdev) + pmic_eic->chip.set_config = sprd_pmic_eic_set_config; + pmic_eic->chip.set = sprd_pmic_eic_set; + pmic_eic->chip.get = sprd_pmic_eic_get; ++ pmic_eic->chip.can_sleep = true; + + pmic_eic->intc.name = dev_name(&pdev->dev); + pmic_eic->intc.irq_mask = sprd_pmic_eic_irq_mask; +diff --git a/drivers/gpio/gpio-tb10x.c b/drivers/gpio/gpio-tb10x.c +index de6afa3f97168..05357473d2a11 100644 +--- a/drivers/gpio/gpio-tb10x.c ++++ b/drivers/gpio/gpio-tb10x.c +@@ -195,7 +195,7 @@ static int tb10x_gpio_probe(struct platform_device *pdev) + handle_edge_irq, IRQ_NOREQUEST, IRQ_NOPROBE, + IRQ_GC_INIT_MASK_CACHE); + if (ret) +- return ret; ++ goto err_remove_domain; + + gc = tb10x_gpio->domain->gc->gc[0]; + gc->reg_base = tb10x_gpio->base; +@@ -209,6 +209,10 @@ static int tb10x_gpio_probe(struct platform_device *pdev) + } + + return 0; ++ ++err_remove_domain: ++ irq_domain_remove(tb10x_gpio->domain); ++ return ret; + 
} + + static int tb10x_gpio_remove(struct platform_device *pdev) +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c +index 9e3313dd956ae..24b4bd6bb2771 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c +@@ -896,12 +896,17 @@ int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file *filp) + struct atom_context *atom_context; + + atom_context = adev->mode_info.atom_context; +- memcpy(vbios_info.name, atom_context->name, sizeof(atom_context->name)); +- memcpy(vbios_info.vbios_pn, atom_context->vbios_pn, sizeof(atom_context->vbios_pn)); +- vbios_info.version = atom_context->version; +- memcpy(vbios_info.vbios_ver_str, atom_context->vbios_ver_str, +- sizeof(atom_context->vbios_ver_str)); +- memcpy(vbios_info.date, atom_context->date, sizeof(atom_context->date)); ++ if (atom_context) { ++ memcpy(vbios_info.name, atom_context->name, ++ sizeof(atom_context->name)); ++ memcpy(vbios_info.vbios_pn, atom_context->vbios_pn, ++ sizeof(atom_context->vbios_pn)); ++ vbios_info.version = atom_context->version; ++ memcpy(vbios_info.vbios_ver_str, atom_context->vbios_ver_str, ++ sizeof(atom_context->vbios_ver_str)); ++ memcpy(vbios_info.date, atom_context->date, ++ sizeof(atom_context->date)); ++ } + + return copy_to_user(out, &vbios_info, + min((size_t)size, sizeof(vbios_info))) ? 
-EFAULT : 0; +diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c b/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c +index 09fdcd20cb919..c52a378396af1 100644 +--- a/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c ++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v4_3.c +@@ -344,6 +344,9 @@ static void nbio_v4_3_init_registers(struct amdgpu_device *adev) + data &= ~RCC_DEV0_EPF2_STRAP2__STRAP_NO_SOFT_RESET_DEV0_F2_MASK; + WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF2_STRAP2, data); + } ++ if (amdgpu_sriov_vf(adev)) ++ adev->rmmio_remap.reg_offset = SOC15_REG_OFFSET(NBIO, 0, ++ regBIF_BX_DEV0_EPF0_VF0_HDP_MEM_COHERENCY_FLUSH_CNTL) << 2; + } + + static u32 nbio_v4_3_get_rom_offset(struct amdgpu_device *adev) +diff --git a/drivers/gpu/drm/amd/amdgpu/soc21.c b/drivers/gpu/drm/amd/amdgpu/soc21.c +index d150a90daa403..56af7b5abac14 100644 +--- a/drivers/gpu/drm/amd/amdgpu/soc21.c ++++ b/drivers/gpu/drm/amd/amdgpu/soc21.c +@@ -755,7 +755,7 @@ static int soc21_common_hw_init(void *handle) + * for the purpose of expose those registers + * to process space + */ +- if (adev->nbio.funcs->remap_hdp_registers) ++ if (adev->nbio.funcs->remap_hdp_registers && !amdgpu_sriov_vf(adev)) + adev->nbio.funcs->remap_hdp_registers(adev); + /* enable the doorbell aperture */ + soc21_enable_doorbell_aperture(adev, true); +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c +index c06ada0844ba1..0b87034d9dd51 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c +@@ -201,7 +201,7 @@ static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q, + + if (q->wptr_bo) { + wptr_addr_off = (uint64_t)q->properties.write_ptr & (PAGE_SIZE - 1); +- queue_input.wptr_mc_addr = ((uint64_t)q->wptr_bo->tbo.resource->start << PAGE_SHIFT) + wptr_addr_off; ++ queue_input.wptr_mc_addr = amdgpu_bo_gpu_offset(q->wptr_bo) + wptr_addr_off; + } + + queue_input.is_kfd_process = 1; +diff --git 
a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h +index 6d6588b9beed7..ec8a576ac5a9e 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h +@@ -1349,9 +1349,8 @@ void kfd_flush_tlb(struct kfd_process_device *pdd, enum TLB_FLUSH_TYPE type); + + static inline bool kfd_flush_tlb_after_unmap(struct kfd_dev *dev) + { +- return KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 2) || +- (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) && +- dev->adev->sdma.instance[0].fw_version >= 18) || ++ return KFD_GC_VERSION(dev) > IP_VERSION(9, 4, 2) || ++ (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) && dev->sdma_fw_version >= 18) || + KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 0); + } + +diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c +index 9378c98d02cfe..508f5fe268484 100644 +--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c ++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c +@@ -973,7 +973,9 @@ void dce110_edp_backlight_control( + return; + } + +- if (link->panel_cntl) { ++ if (link->panel_cntl && !(link->dpcd_sink_ext_caps.bits.oled || ++ link->dpcd_sink_ext_caps.bits.hdr_aux_backlight_control == 1 || ++ link->dpcd_sink_ext_caps.bits.sdr_aux_backlight_control == 1)) { + bool is_backlight_on = link->panel_cntl->funcs->is_panel_backlight_on(link->panel_cntl); + + if ((enable && is_backlight_on) || (!enable && !is_backlight_on)) { +diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi83.c b/drivers/gpu/drm/bridge/ti-sn65dsi83.c +index 55efd3eb66723..3f43b44145a89 100644 +--- a/drivers/gpu/drm/bridge/ti-sn65dsi83.c ++++ b/drivers/gpu/drm/bridge/ti-sn65dsi83.c +@@ -655,7 +655,9 @@ static int sn65dsi83_host_attach(struct sn65dsi83 *ctx) + + dsi->lanes = dsi_lanes; + dsi->format = MIPI_DSI_FMT_RGB888; +- dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST; ++ dsi->mode_flags = MIPI_DSI_MODE_VIDEO | 
MIPI_DSI_MODE_VIDEO_BURST | ++ MIPI_DSI_MODE_VIDEO_NO_HFP | MIPI_DSI_MODE_VIDEO_NO_HBP | ++ MIPI_DSI_MODE_VIDEO_NO_HSA | MIPI_DSI_MODE_NO_EOT_PACKET; + + ret = devm_mipi_dsi_attach(dev, dsi); + if (ret < 0) { +diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c +index b458547e1fc6e..07967adce16aa 100644 +--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c ++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c +@@ -541,7 +541,6 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id, + DRIVER_CAPS(i915)->has_logical_contexts = true; + + ewma__engine_latency_init(&engine->latency); +- seqcount_init(&engine->stats.execlists.lock); + + ATOMIC_INIT_NOTIFIER_HEAD(&engine->context_status_notifier); + +diff --git a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +index fc4a846289855..f903ee1ce06e7 100644 +--- a/drivers/gpu/drm/i915/gt/intel_execlists_submission.c ++++ b/drivers/gpu/drm/i915/gt/intel_execlists_submission.c +@@ -3546,6 +3546,8 @@ int intel_execlists_submission_setup(struct intel_engine_cs *engine) + logical_ring_default_vfuncs(engine); + logical_ring_default_irqs(engine); + ++ seqcount_init(&engine->stats.execlists.lock); ++ + if (engine->flags & I915_ENGINE_HAS_RCS_REG_STATE) + rcs_submission_override(engine); + +diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c +index 2049a00417afa..a6d0463b18d91 100644 +--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c ++++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c +@@ -500,20 +500,31 @@ void intel_ggtt_unbind_vma(struct i915_address_space *vm, + vm->clear_range(vm, vma_res->start, vma_res->vma_size); + } + ++/* ++ * Reserve the top of the GuC address space for firmware images. Addresses ++ * beyond GUC_GGTT_TOP in the GuC address space are inaccessible by GuC, ++ * which makes for a suitable range to hold GuC/HuC firmware images if the ++ * size of the GGTT is 4G. 
However, on a 32-bit platform the size of the GGTT ++ * is limited to 2G, which is less than GUC_GGTT_TOP, but we reserve a chunk ++ * of the same size anyway, which is far more than needed, to keep the logic ++ * in uc_fw_ggtt_offset() simple. ++ */ ++#define GUC_TOP_RESERVE_SIZE (SZ_4G - GUC_GGTT_TOP) ++ + static int ggtt_reserve_guc_top(struct i915_ggtt *ggtt) + { +- u64 size; ++ u64 offset; + int ret; + + if (!intel_uc_uses_guc(&ggtt->vm.gt->uc)) + return 0; + +- GEM_BUG_ON(ggtt->vm.total <= GUC_GGTT_TOP); +- size = ggtt->vm.total - GUC_GGTT_TOP; ++ GEM_BUG_ON(ggtt->vm.total <= GUC_TOP_RESERVE_SIZE); ++ offset = ggtt->vm.total - GUC_TOP_RESERVE_SIZE; + +- ret = i915_gem_gtt_reserve(&ggtt->vm, NULL, &ggtt->uc_fw, size, +- GUC_GGTT_TOP, I915_COLOR_UNEVICTABLE, +- PIN_NOEVICT); ++ ret = i915_gem_gtt_reserve(&ggtt->vm, NULL, &ggtt->uc_fw, ++ GUC_TOP_RESERVE_SIZE, offset, ++ I915_COLOR_UNEVICTABLE, PIN_NOEVICT); + if (ret) + drm_dbg(&ggtt->vm.i915->drm, + "Failed to reserve top of GGTT for GuC\n"); +diff --git a/drivers/gpu/drm/meson/meson_encoder_hdmi.c b/drivers/gpu/drm/meson/meson_encoder_hdmi.c +index 53231bfdf7e24..b14e6e507c61b 100644 +--- a/drivers/gpu/drm/meson/meson_encoder_hdmi.c ++++ b/drivers/gpu/drm/meson/meson_encoder_hdmi.c +@@ -332,6 +332,8 @@ static void meson_encoder_hdmi_hpd_notify(struct drm_bridge *bridge, + return; + + cec_notifier_set_phys_addr_from_edid(encoder_hdmi->cec_notifier, edid); ++ ++ kfree(edid); + } else + cec_notifier_phys_addr_invalidate(encoder_hdmi->cec_notifier); + } +diff --git a/drivers/gpu/drm/tests/drm_mm_test.c b/drivers/gpu/drm/tests/drm_mm_test.c +index c4b66eeae2039..13fa4a18a11b2 100644 +--- a/drivers/gpu/drm/tests/drm_mm_test.c ++++ b/drivers/gpu/drm/tests/drm_mm_test.c +@@ -939,7 +939,7 @@ static void drm_test_mm_insert_range(struct kunit *test) + KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size, 0, max - 1)); + KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size, 0, max / 2)); 
+ KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size, +- max / 2, max / 2)); ++ max / 2, max)); + KUNIT_ASSERT_FALSE(test, __drm_test_mm_insert_range(test, count, size, + max / 4 + 1, 3 * max / 4 - 1)); + +diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c +index 1fda1eaa6d6ab..da1f6b60f9c9a 100644 +--- a/drivers/i2c/busses/i2c-i801.c ++++ b/drivers/i2c/busses/i2c-i801.c +@@ -1754,6 +1754,7 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id) + "SMBus I801 adapter at %04lx", priv->smba); + err = i2c_add_adapter(&priv->adapter); + if (err) { ++ platform_device_unregister(priv->tco_pdev); + i801_acpi_remove(priv); + return err; + } +diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c +index 83457359ec450..767dd15b3c881 100644 +--- a/drivers/i2c/busses/i2c-npcm7xx.c ++++ b/drivers/i2c/busses/i2c-npcm7xx.c +@@ -696,6 +696,7 @@ static void npcm_i2c_callback(struct npcm_i2c *bus, + { + struct i2c_msg *msgs; + int msgs_num; ++ bool do_complete = false; + + msgs = bus->msgs; + msgs_num = bus->msgs_num; +@@ -724,23 +725,17 @@ static void npcm_i2c_callback(struct npcm_i2c *bus, + msgs[1].flags & I2C_M_RD) + msgs[1].len = info; + } +- if (completion_done(&bus->cmd_complete) == false) +- complete(&bus->cmd_complete); +- break; +- ++ do_complete = true; ++ break; + case I2C_NACK_IND: + /* MASTER transmit got a NACK before tx all bytes */ + bus->cmd_err = -ENXIO; +- if (bus->master_or_slave == I2C_MASTER) +- complete(&bus->cmd_complete); +- ++ do_complete = true; + break; + case I2C_BUS_ERR_IND: + /* Bus error */ + bus->cmd_err = -EAGAIN; +- if (bus->master_or_slave == I2C_MASTER) +- complete(&bus->cmd_complete); +- ++ do_complete = true; + break; + case I2C_WAKE_UP_IND: + /* I2C wake up */ +@@ -754,6 +749,8 @@ static void npcm_i2c_callback(struct npcm_i2c *bus, + if (bus->slave) + bus->master_or_slave = I2C_SLAVE; + #endif ++ if (do_complete) ++ complete(&bus->cmd_complete); + } + + 
static u8 npcm_i2c_fifo_usage(struct npcm_i2c *bus) +diff --git a/drivers/i2c/busses/i2c-xiic.c b/drivers/i2c/busses/i2c-xiic.c +index b41a6709e47f2..b27bfc7765993 100644 +--- a/drivers/i2c/busses/i2c-xiic.c ++++ b/drivers/i2c/busses/i2c-xiic.c +@@ -420,7 +420,7 @@ static irqreturn_t xiic_process(int irq, void *dev_id) + * reset the IP instead of just flush fifos + */ + ret = xiic_reinit(i2c); +- if (!ret) ++ if (ret < 0) + dev_dbg(i2c->adap.dev.parent, "reinit failed\n"); + + if (i2c->rx_msg) { +diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c +index f7a7405d4350a..8e8688e8de0fb 100644 +--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c ++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c +@@ -243,6 +243,10 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev) + + props[i].name = devm_kstrdup(&pdev->dev, "status", GFP_KERNEL); + props[i].value = devm_kstrdup(&pdev->dev, "ok", GFP_KERNEL); ++ if (!props[i].name || !props[i].value) { ++ err = -ENOMEM; ++ goto err_rollback; ++ } + props[i].length = 3; + + of_changeset_init(&priv->chan[i].chgset); +diff --git a/drivers/i2c/muxes/i2c-mux-gpio.c b/drivers/i2c/muxes/i2c-mux-gpio.c +index 73a23e117ebec..0930a51c8c7c0 100644 +--- a/drivers/i2c/muxes/i2c-mux-gpio.c ++++ b/drivers/i2c/muxes/i2c-mux-gpio.c +@@ -105,8 +105,10 @@ static int i2c_mux_gpio_probe_fw(struct gpiomux *mux, + + } else if (is_acpi_node(child)) { + rc = acpi_get_local_address(ACPI_HANDLE_FWNODE(child), values + i); +- if (rc) ++ if (rc) { ++ fwnode_handle_put(child); + return dev_err_probe(dev, rc, "Cannot get address\n"); ++ } + } + + i++; +diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +index 5968a568aae2a..ffba8ce93ff88 100644 +--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c ++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-sva.c +@@ -186,6 +186,15 @@ static void arm_smmu_free_shared_cd(struct arm_smmu_ctx_desc *cd) + } + } + ++/* ++ * Cloned 
from the MAX_TLBI_OPS in arch/arm64/include/asm/tlbflush.h, this ++ * is used as a threshold to replace per-page TLBI commands to issue in the ++ * command queue with an address-space TLBI command, when SMMU w/o a range ++ * invalidation feature handles too many per-page TLBI commands, which will ++ * otherwise result in a soft lockup. ++ */ ++#define CMDQ_MAX_TLBI_OPS (1 << (PAGE_SHIFT - 3)) ++ + static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn, + struct mm_struct *mm, + unsigned long start, unsigned long end) +@@ -200,10 +209,22 @@ static void arm_smmu_mm_invalidate_range(struct mmu_notifier *mn, + * range. So do a simple translation here by calculating size correctly. + */ + size = end - start; ++ if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_RANGE_INV)) { ++ if (size >= CMDQ_MAX_TLBI_OPS * PAGE_SIZE) ++ size = 0; ++ } ++ ++ if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM)) { ++ if (!size) ++ arm_smmu_tlb_inv_asid(smmu_domain->smmu, ++ smmu_mn->cd->asid); ++ else ++ arm_smmu_tlb_inv_range_asid(start, size, ++ smmu_mn->cd->asid, ++ PAGE_SIZE, false, ++ smmu_domain); ++ } + +- if (!(smmu_domain->smmu->features & ARM_SMMU_FEAT_BTM)) +- arm_smmu_tlb_inv_range_asid(start, size, smmu_mn->cd->asid, +- PAGE_SIZE, false, smmu_domain); + arm_smmu_atc_inv_domain(smmu_domain, mm->pasid, start, size); + } + +diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h +index 28c641352de9b..71dcd8fd4050a 100644 +--- a/drivers/md/dm-core.h ++++ b/drivers/md/dm-core.h +@@ -214,6 +214,7 @@ struct dm_table { + + /* a list of devices used by this table */ + struct list_head devices; ++ struct rw_semaphore devices_lock; + + /* events get handed up using this callback */ + void (*event_fn)(void *); +diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c +index 2afd2d2a0f407..206e6ce554dc7 100644 +--- a/drivers/md/dm-ioctl.c ++++ b/drivers/md/dm-ioctl.c +@@ -1566,6 +1566,8 @@ static void retrieve_deps(struct dm_table *table, + struct dm_dev_internal *dd; + 
struct dm_target_deps *deps; + ++ down_read(&table->devices_lock); ++ + deps = get_result_buffer(param, param_size, &len); + + /* +@@ -1580,7 +1582,7 @@ static void retrieve_deps(struct dm_table *table, + needed = struct_size(deps, dev, count); + if (len < needed) { + param->flags |= DM_BUFFER_FULL_FLAG; +- return; ++ goto out; + } + + /* +@@ -1592,6 +1594,9 @@ static void retrieve_deps(struct dm_table *table, + deps->dev[count++] = huge_encode_dev(dd->dm_dev->bdev->bd_dev); + + param->data_size = param->data_start + needed; ++ ++out: ++ up_read(&table->devices_lock); + } + + static int table_deps(struct file *filp, struct dm_ioctl *param, size_t param_size) +diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c +index 288f600ee56dc..dac6a5f25f2be 100644 +--- a/drivers/md/dm-table.c ++++ b/drivers/md/dm-table.c +@@ -134,6 +134,7 @@ int dm_table_create(struct dm_table **result, fmode_t mode, + return -ENOMEM; + + INIT_LIST_HEAD(&t->devices); ++ init_rwsem(&t->devices_lock); + + if (!num_targets) + num_targets = KEYS_PER_NODE; +@@ -362,15 +363,19 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode, + return -ENODEV; + } + ++ down_write(&t->devices_lock); ++ + dd = find_device(&t->devices, dev); + if (!dd) { + dd = kmalloc(sizeof(*dd), GFP_KERNEL); +- if (!dd) +- return -ENOMEM; ++ if (!dd) { ++ r = -ENOMEM; ++ goto unlock_ret_r; ++ } + + if ((r = dm_get_table_device(t->md, dev, mode, &dd->dm_dev))) { + kfree(dd); +- return r; ++ goto unlock_ret_r; + } + + refcount_set(&dd->count, 1); +@@ -380,12 +385,17 @@ int dm_get_device(struct dm_target *ti, const char *path, fmode_t mode, + } else if (dd->dm_dev->mode != (mode | dd->dm_dev->mode)) { + r = upgrade_mode(dd, mode, t->md); + if (r) +- return r; ++ goto unlock_ret_r; + } + refcount_inc(&dd->count); + out: ++ up_write(&t->devices_lock); + *result = dd->dm_dev; + return 0; ++ ++unlock_ret_r: ++ up_write(&t->devices_lock); ++ return r; + } + EXPORT_SYMBOL(dm_get_device); + +@@ -421,9 +431,12 
@@ static int dm_set_device_limits(struct dm_target *ti, struct dm_dev *dev, + void dm_put_device(struct dm_target *ti, struct dm_dev *d) + { + int found = 0; +- struct list_head *devices = &ti->table->devices; ++ struct dm_table *t = ti->table; ++ struct list_head *devices = &t->devices; + struct dm_dev_internal *dd; + ++ down_write(&t->devices_lock); ++ + list_for_each_entry(dd, devices, list) { + if (dd->dm_dev == d) { + found = 1; +@@ -432,14 +445,17 @@ void dm_put_device(struct dm_target *ti, struct dm_dev *d) + } + if (!found) { + DMERR("%s: device %s not in table devices list", +- dm_device_name(ti->table->md), d->name); +- return; ++ dm_device_name(t->md), d->name); ++ goto unlock_ret; + } + if (refcount_dec_and_test(&dd->count)) { +- dm_put_table_device(ti->table->md, d); ++ dm_put_table_device(t->md, d); + list_del(&dd->list); + kfree(dd); + } ++ ++unlock_ret: ++ up_write(&t->devices_lock); + } + EXPORT_SYMBOL(dm_put_device); + +diff --git a/drivers/media/common/videobuf2/frame_vector.c b/drivers/media/common/videobuf2/frame_vector.c +index 144027035892a..07ebe4424df3a 100644 +--- a/drivers/media/common/videobuf2/frame_vector.c ++++ b/drivers/media/common/videobuf2/frame_vector.c +@@ -30,6 +30,10 @@ + * different type underlying the specified range of virtual addresses. + * When the function isn't able to map a single page, it returns error. + * ++ * Note that get_vaddr_frames() cannot follow VM_IO mappings. It used ++ * to be able to do that, but that could (racily) return non-refcounted ++ * pfns. ++ * + * This function takes care of grabbing mmap_lock as necessary. + */ + int get_vaddr_frames(unsigned long start, unsigned int nr_frames, +@@ -55,8 +59,6 @@ int get_vaddr_frames(unsigned long start, unsigned int nr_frames, + if (likely(ret > 0)) + return ret; + +- /* This used to (racily) return non-refcounted pfns. Let people know */ +- WARN_ONCE(1, "get_vaddr_frames() cannot follow VM_IO mapping"); + vec->nr_frames = 0; + return ret ? 
ret : -EFAULT; + } +diff --git a/drivers/media/platform/marvell/Kconfig b/drivers/media/platform/marvell/Kconfig +index ec1a16734a280..d6499ffe30e8b 100644 +--- a/drivers/media/platform/marvell/Kconfig ++++ b/drivers/media/platform/marvell/Kconfig +@@ -7,7 +7,7 @@ config VIDEO_CAFE_CCIC + depends on V4L_PLATFORM_DRIVERS + depends on PCI && I2C && VIDEO_DEV + depends on COMMON_CLK +- select VIDEO_OV7670 ++ select VIDEO_OV7670 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR + select VIDEOBUF2_VMALLOC + select VIDEOBUF2_DMA_CONTIG + select VIDEOBUF2_DMA_SG +@@ -22,7 +22,7 @@ config VIDEO_MMP_CAMERA + depends on I2C && VIDEO_DEV + depends on ARCH_MMP || COMPILE_TEST + depends on COMMON_CLK +- select VIDEO_OV7670 ++ select VIDEO_OV7670 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR + select I2C_GPIO + select VIDEOBUF2_VMALLOC + select VIDEOBUF2_DMA_CONTIG +diff --git a/drivers/media/platform/via/Kconfig b/drivers/media/platform/via/Kconfig +index 8926eb0803b27..6e603c0382487 100644 +--- a/drivers/media/platform/via/Kconfig ++++ b/drivers/media/platform/via/Kconfig +@@ -7,7 +7,7 @@ config VIDEO_VIA_CAMERA + depends on V4L_PLATFORM_DRIVERS + depends on FB_VIA && VIDEO_DEV + select VIDEOBUF2_DMA_SG +- select VIDEO_OV7670 ++ select VIDEO_OV7670 if VIDEO_CAMERA_SENSOR + help + Driver support for the integrated camera controller in VIA + Chrome9 chipsets. 
Currently only tested on OLPC xo-1.5 systems +diff --git a/drivers/media/usb/em28xx/Kconfig b/drivers/media/usb/em28xx/Kconfig +index b3c472b8c5a96..cb61fd6cc6c61 100644 +--- a/drivers/media/usb/em28xx/Kconfig ++++ b/drivers/media/usb/em28xx/Kconfig +@@ -12,8 +12,8 @@ config VIDEO_EM28XX_V4L2 + select VIDEO_SAA711X if MEDIA_SUBDRV_AUTOSELECT + select VIDEO_TVP5150 if MEDIA_SUBDRV_AUTOSELECT + select VIDEO_MSP3400 if MEDIA_SUBDRV_AUTOSELECT +- select VIDEO_MT9V011 if MEDIA_SUBDRV_AUTOSELECT && MEDIA_CAMERA_SUPPORT +- select VIDEO_OV2640 if MEDIA_SUBDRV_AUTOSELECT && MEDIA_CAMERA_SUPPORT ++ select VIDEO_MT9V011 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR ++ select VIDEO_OV2640 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR + help + This is a video4linux driver for Empia 28xx based TV cards. + +diff --git a/drivers/media/usb/go7007/Kconfig b/drivers/media/usb/go7007/Kconfig +index 4ff79940ad8d4..b2a15d9fb1f33 100644 +--- a/drivers/media/usb/go7007/Kconfig ++++ b/drivers/media/usb/go7007/Kconfig +@@ -12,8 +12,8 @@ config VIDEO_GO7007 + select VIDEO_TW2804 if MEDIA_SUBDRV_AUTOSELECT + select VIDEO_TW9903 if MEDIA_SUBDRV_AUTOSELECT + select VIDEO_TW9906 if MEDIA_SUBDRV_AUTOSELECT +- select VIDEO_OV7640 if MEDIA_SUBDRV_AUTOSELECT && MEDIA_CAMERA_SUPPORT + select VIDEO_UDA1342 if MEDIA_SUBDRV_AUTOSELECT ++ select VIDEO_OV7640 if MEDIA_SUBDRV_AUTOSELECT && VIDEO_CAMERA_SENSOR + help + This is a video4linux driver for the WIS GO7007 MPEG + encoder chip. 
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c +index 067b43a1cb3eb..6d7535efc09de 100644 +--- a/drivers/media/usb/uvc/uvc_ctrl.c ++++ b/drivers/media/usb/uvc/uvc_ctrl.c +@@ -1347,6 +1347,9 @@ int uvc_query_v4l2_menu(struct uvc_video_chain *chain, + query_menu->id = id; + query_menu->index = index; + ++ if (index >= BITS_PER_TYPE(mapping->menu_mask)) ++ return -EINVAL; ++ + ret = mutex_lock_interruptible(&chain->ctrl_mutex); + if (ret < 0) + return -ERESTARTSYS; +diff --git a/drivers/misc/cardreader/rts5227.c b/drivers/misc/cardreader/rts5227.c +index 3dae5e3a16976..cd512284bfb39 100644 +--- a/drivers/misc/cardreader/rts5227.c ++++ b/drivers/misc/cardreader/rts5227.c +@@ -83,63 +83,20 @@ static void rts5227_fetch_vendor_settings(struct rtsx_pcr *pcr) + + static void rts5227_init_from_cfg(struct rtsx_pcr *pcr) + { +- struct pci_dev *pdev = pcr->pci; +- int l1ss; +- u32 lval; + struct rtsx_cr_option *option = &pcr->option; + +- l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); +- if (!l1ss) +- return; +- +- pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval); +- + if (CHK_PCI_PID(pcr, 0x522A)) { +- if (0 == (lval & 0x0F)) +- rtsx_pci_enable_oobs_polling(pcr); +- else ++ if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN ++ | PM_L1_1_EN | PM_L1_2_EN)) + rtsx_pci_disable_oobs_polling(pcr); ++ else ++ rtsx_pci_enable_oobs_polling(pcr); + } + +- if (lval & PCI_L1SS_CTL1_ASPM_L1_1) +- rtsx_set_dev_flag(pcr, ASPM_L1_1_EN); +- else +- rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_2) +- rtsx_set_dev_flag(pcr, ASPM_L1_2_EN); +- else +- rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_1) +- rtsx_set_dev_flag(pcr, PM_L1_1_EN); +- else +- rtsx_clear_dev_flag(pcr, PM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_2) +- rtsx_set_dev_flag(pcr, PM_L1_2_EN); +- else +- rtsx_clear_dev_flag(pcr, PM_L1_2_EN); +- + if (option->ltr_en) { +- u16 val; +- +- 
pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &val); +- if (val & PCI_EXP_DEVCTL2_LTR_EN) { +- option->ltr_enabled = true; +- option->ltr_active = true; ++ if (option->ltr_enabled) + rtsx_set_ltr_latency(pcr, option->ltr_active_latency); +- } else { +- option->ltr_enabled = false; +- } + } +- +- if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN +- | PM_L1_1_EN | PM_L1_2_EN)) +- option->force_clkreq_0 = false; +- else +- option->force_clkreq_0 = true; +- + } + + static int rts5227_extra_init_hw(struct rtsx_pcr *pcr) +@@ -195,7 +152,7 @@ static int rts5227_extra_init_hw(struct rtsx_pcr *pcr) + } + } + +- if (option->force_clkreq_0 && pcr->aspm_mode == ASPM_MODE_CFG) ++ if (option->force_clkreq_0) + rtsx_pci_add_cmd(pcr, WRITE_REG_CMD, PETXCFG, + FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW); + else +diff --git a/drivers/misc/cardreader/rts5228.c b/drivers/misc/cardreader/rts5228.c +index f4ab09439da70..0c7f10bcf6f12 100644 +--- a/drivers/misc/cardreader/rts5228.c ++++ b/drivers/misc/cardreader/rts5228.c +@@ -386,59 +386,25 @@ static void rts5228_process_ocp(struct rtsx_pcr *pcr) + + static void rts5228_init_from_cfg(struct rtsx_pcr *pcr) + { +- struct pci_dev *pdev = pcr->pci; +- int l1ss; +- u32 lval; + struct rtsx_cr_option *option = &pcr->option; + +- l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); +- if (!l1ss) +- return; +- +- pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval); +- +- if (0 == (lval & 0x0F)) +- rtsx_pci_enable_oobs_polling(pcr); +- else ++ if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN ++ | PM_L1_1_EN | PM_L1_2_EN)) + rtsx_pci_disable_oobs_polling(pcr); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_1) +- rtsx_set_dev_flag(pcr, ASPM_L1_1_EN); +- else +- rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_2) +- rtsx_set_dev_flag(pcr, ASPM_L1_2_EN); +- else +- rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_1) +- rtsx_set_dev_flag(pcr, PM_L1_1_EN); + else +- 
rtsx_clear_dev_flag(pcr, PM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_2) +- rtsx_set_dev_flag(pcr, PM_L1_2_EN); +- else +- rtsx_clear_dev_flag(pcr, PM_L1_2_EN); ++ rtsx_pci_enable_oobs_polling(pcr); + + rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0xFF, 0); +- if (option->ltr_en) { +- u16 val; + +- pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &val); +- if (val & PCI_EXP_DEVCTL2_LTR_EN) { +- option->ltr_enabled = true; +- option->ltr_active = true; ++ if (option->ltr_en) { ++ if (option->ltr_enabled) + rtsx_set_ltr_latency(pcr, option->ltr_active_latency); +- } else { +- option->ltr_enabled = false; +- } + } + } + + static int rts5228_extra_init_hw(struct rtsx_pcr *pcr) + { ++ struct rtsx_cr_option *option = &pcr->option; + + rtsx_pci_write_register(pcr, RTS5228_AUTOLOAD_CFG1, + CD_RESUME_EN_MASK, CD_RESUME_EN_MASK); +@@ -469,6 +435,17 @@ static int rts5228_extra_init_hw(struct rtsx_pcr *pcr) + else + rtsx_pci_write_register(pcr, PETXCFG, 0x30, 0x00); + ++ /* ++ * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced ++ * to drive low, and we forcibly request clock. 
++ */ ++ if (option->force_clkreq_0) ++ rtsx_pci_write_register(pcr, PETXCFG, ++ FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW); ++ else ++ rtsx_pci_write_register(pcr, PETXCFG, ++ FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH); ++ + rtsx_pci_write_register(pcr, PWD_SUSPEND_EN, 0xFF, 0xFB); + + if (pcr->rtd3_en) { +diff --git a/drivers/misc/cardreader/rts5249.c b/drivers/misc/cardreader/rts5249.c +index 47ab72a43256b..6c81040e18bef 100644 +--- a/drivers/misc/cardreader/rts5249.c ++++ b/drivers/misc/cardreader/rts5249.c +@@ -86,64 +86,22 @@ static void rtsx_base_fetch_vendor_settings(struct rtsx_pcr *pcr) + + static void rts5249_init_from_cfg(struct rtsx_pcr *pcr) + { +- struct pci_dev *pdev = pcr->pci; +- int l1ss; + struct rtsx_cr_option *option = &(pcr->option); +- u32 lval; +- +- l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); +- if (!l1ss) +- return; +- +- pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval); + + if (CHK_PCI_PID(pcr, PID_524A) || CHK_PCI_PID(pcr, PID_525A)) { +- if (0 == (lval & 0x0F)) +- rtsx_pci_enable_oobs_polling(pcr); +- else ++ if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN ++ | PM_L1_1_EN | PM_L1_2_EN)) + rtsx_pci_disable_oobs_polling(pcr); ++ else ++ rtsx_pci_enable_oobs_polling(pcr); + } + +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_1) +- rtsx_set_dev_flag(pcr, ASPM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_2) +- rtsx_set_dev_flag(pcr, ASPM_L1_2_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_1) +- rtsx_set_dev_flag(pcr, PM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_2) +- rtsx_set_dev_flag(pcr, PM_L1_2_EN); +- + if (option->ltr_en) { +- u16 val; +- +- pcie_capability_read_word(pdev, PCI_EXP_DEVCTL2, &val); +- if (val & PCI_EXP_DEVCTL2_LTR_EN) { +- option->ltr_enabled = true; +- option->ltr_active = true; ++ if (option->ltr_enabled) + rtsx_set_ltr_latency(pcr, option->ltr_active_latency); +- } else { +- option->ltr_enabled = false; +- } + } + } + +-static int rts5249_init_from_hw(struct rtsx_pcr *pcr) +-{ +- 
struct rtsx_cr_option *option = &(pcr->option); +- +- if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN +- | PM_L1_1_EN | PM_L1_2_EN)) +- option->force_clkreq_0 = false; +- else +- option->force_clkreq_0 = true; +- +- return 0; +-} +- + static void rts52xa_force_power_down(struct rtsx_pcr *pcr, u8 pm_state, bool runtime) + { + /* Set relink_time to 0 */ +@@ -276,7 +234,6 @@ static int rts5249_extra_init_hw(struct rtsx_pcr *pcr) + struct rtsx_cr_option *option = &(pcr->option); + + rts5249_init_from_cfg(pcr); +- rts5249_init_from_hw(pcr); + + rtsx_pci_init_cmd(pcr); + +@@ -327,11 +284,12 @@ static int rts5249_extra_init_hw(struct rtsx_pcr *pcr) + } + } + ++ + /* + * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced + * to drive low, and we forcibly request clock. + */ +- if (option->force_clkreq_0 && pcr->aspm_mode == ASPM_MODE_CFG) ++ if (option->force_clkreq_0) + rtsx_pci_write_register(pcr, PETXCFG, + FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW); + else +diff --git a/drivers/misc/cardreader/rts5260.c b/drivers/misc/cardreader/rts5260.c +index 79b18f6f73a8a..d2d3a6ccb8f7d 100644 +--- a/drivers/misc/cardreader/rts5260.c ++++ b/drivers/misc/cardreader/rts5260.c +@@ -480,47 +480,19 @@ static void rts5260_pwr_saving_setting(struct rtsx_pcr *pcr) + + static void rts5260_init_from_cfg(struct rtsx_pcr *pcr) + { +- struct pci_dev *pdev = pcr->pci; +- int l1ss; + struct rtsx_cr_option *option = &pcr->option; +- u32 lval; +- +- l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); +- if (!l1ss) +- return; +- +- pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_1) +- rtsx_set_dev_flag(pcr, ASPM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_2) +- rtsx_set_dev_flag(pcr, ASPM_L1_2_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_1) +- rtsx_set_dev_flag(pcr, PM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_2) +- rtsx_set_dev_flag(pcr, PM_L1_2_EN); + + rts5260_pwr_saving_setting(pcr); + + if (option->ltr_en) { 
+- u16 val; +- +- pcie_capability_read_word(pdev, PCI_EXP_DEVCTL2, &val); +- if (val & PCI_EXP_DEVCTL2_LTR_EN) { +- option->ltr_enabled = true; +- option->ltr_active = true; ++ if (option->ltr_enabled) + rtsx_set_ltr_latency(pcr, option->ltr_active_latency); +- } else { +- option->ltr_enabled = false; +- } + } + } + + static int rts5260_extra_init_hw(struct rtsx_pcr *pcr) + { ++ struct rtsx_cr_option *option = &pcr->option; + + /* Set mcu_cnt to 7 to ensure data can be sampled properly */ + rtsx_pci_write_register(pcr, 0xFC03, 0x7F, 0x07); +@@ -539,6 +511,17 @@ static int rts5260_extra_init_hw(struct rtsx_pcr *pcr) + + rts5260_init_hw(pcr); + ++ /* ++ * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced ++ * to drive low, and we forcibly request clock. ++ */ ++ if (option->force_clkreq_0) ++ rtsx_pci_write_register(pcr, PETXCFG, ++ FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW); ++ else ++ rtsx_pci_write_register(pcr, PETXCFG, ++ FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH); ++ + rtsx_pci_write_register(pcr, pcr->reg_pm_ctrl3, 0x10, 0x00); + + return 0; +diff --git a/drivers/misc/cardreader/rts5261.c b/drivers/misc/cardreader/rts5261.c +index 94af6bf8a25a6..67252512a1329 100644 +--- a/drivers/misc/cardreader/rts5261.c ++++ b/drivers/misc/cardreader/rts5261.c +@@ -454,54 +454,17 @@ static void rts5261_init_from_hw(struct rtsx_pcr *pcr) + + static void rts5261_init_from_cfg(struct rtsx_pcr *pcr) + { +- struct pci_dev *pdev = pcr->pci; +- int l1ss; +- u32 lval; + struct rtsx_cr_option *option = &pcr->option; + +- l1ss = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_L1SS); +- if (!l1ss) +- return; +- +- pci_read_config_dword(pdev, l1ss + PCI_L1SS_CTL1, &lval); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_1) +- rtsx_set_dev_flag(pcr, ASPM_L1_1_EN); +- else +- rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_ASPM_L1_2) +- rtsx_set_dev_flag(pcr, ASPM_L1_2_EN); +- else +- rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_1) 
+- rtsx_set_dev_flag(pcr, PM_L1_1_EN); +- else +- rtsx_clear_dev_flag(pcr, PM_L1_1_EN); +- +- if (lval & PCI_L1SS_CTL1_PCIPM_L1_2) +- rtsx_set_dev_flag(pcr, PM_L1_2_EN); +- else +- rtsx_clear_dev_flag(pcr, PM_L1_2_EN); +- +- rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0xFF, 0); + if (option->ltr_en) { +- u16 val; +- +- pcie_capability_read_word(pdev, PCI_EXP_DEVCTL2, &val); +- if (val & PCI_EXP_DEVCTL2_LTR_EN) { +- option->ltr_enabled = true; +- option->ltr_active = true; ++ if (option->ltr_enabled) + rtsx_set_ltr_latency(pcr, option->ltr_active_latency); +- } else { +- option->ltr_enabled = false; +- } + } + } + + static int rts5261_extra_init_hw(struct rtsx_pcr *pcr) + { ++ struct rtsx_cr_option *option = &pcr->option; + u32 val; + + rtsx_pci_write_register(pcr, RTS5261_AUTOLOAD_CFG1, +@@ -547,6 +510,17 @@ static int rts5261_extra_init_hw(struct rtsx_pcr *pcr) + else + rtsx_pci_write_register(pcr, PETXCFG, 0x30, 0x00); + ++ /* ++ * If u_force_clkreq_0 is enabled, CLKREQ# PIN will be forced ++ * to drive low, and we forcibly request clock. ++ */ ++ if (option->force_clkreq_0) ++ rtsx_pci_write_register(pcr, PETXCFG, ++ FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_LOW); ++ else ++ rtsx_pci_write_register(pcr, PETXCFG, ++ FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH); ++ + rtsx_pci_write_register(pcr, PWD_SUSPEND_EN, 0xFF, 0xFB); + + if (pcr->rtd3_en) { +diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c +index a3f4b52bb159f..a30751ad37330 100644 +--- a/drivers/misc/cardreader/rtsx_pcr.c ++++ b/drivers/misc/cardreader/rtsx_pcr.c +@@ -1326,11 +1326,8 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr) + return err; + } + +- if (pcr->aspm_mode == ASPM_MODE_REG) { ++ if (pcr->aspm_mode == ASPM_MODE_REG) + rtsx_pci_write_register(pcr, ASPM_FORCE_CTL, 0x30, 0x30); +- rtsx_pci_write_register(pcr, PETXCFG, +- FORCE_CLKREQ_DELINK_MASK, FORCE_CLKREQ_HIGH); +- } + + /* No CD interrupt if probing driver with card inserted. 
+ * So we need to initialize pcr->card_exist here. +@@ -1345,7 +1342,9 @@ static int rtsx_pci_init_hw(struct rtsx_pcr *pcr) + + static int rtsx_pci_init_chip(struct rtsx_pcr *pcr) + { +- int err; ++ struct rtsx_cr_option *option = &(pcr->option); ++ int err, l1ss; ++ u32 lval; + u16 cfg_val; + u8 val; + +@@ -1430,6 +1429,48 @@ static int rtsx_pci_init_chip(struct rtsx_pcr *pcr) + pcr->aspm_enabled = true; + } + ++ l1ss = pci_find_ext_capability(pcr->pci, PCI_EXT_CAP_ID_L1SS); ++ if (l1ss) { ++ pci_read_config_dword(pcr->pci, l1ss + PCI_L1SS_CTL1, &lval); ++ ++ if (lval & PCI_L1SS_CTL1_ASPM_L1_1) ++ rtsx_set_dev_flag(pcr, ASPM_L1_1_EN); ++ else ++ rtsx_clear_dev_flag(pcr, ASPM_L1_1_EN); ++ ++ if (lval & PCI_L1SS_CTL1_ASPM_L1_2) ++ rtsx_set_dev_flag(pcr, ASPM_L1_2_EN); ++ else ++ rtsx_clear_dev_flag(pcr, ASPM_L1_2_EN); ++ ++ if (lval & PCI_L1SS_CTL1_PCIPM_L1_1) ++ rtsx_set_dev_flag(pcr, PM_L1_1_EN); ++ else ++ rtsx_clear_dev_flag(pcr, PM_L1_1_EN); ++ ++ if (lval & PCI_L1SS_CTL1_PCIPM_L1_2) ++ rtsx_set_dev_flag(pcr, PM_L1_2_EN); ++ else ++ rtsx_clear_dev_flag(pcr, PM_L1_2_EN); ++ ++ pcie_capability_read_word(pcr->pci, PCI_EXP_DEVCTL2, &cfg_val); ++ if (cfg_val & PCI_EXP_DEVCTL2_LTR_EN) { ++ option->ltr_enabled = true; ++ option->ltr_active = true; ++ } else { ++ option->ltr_enabled = false; ++ } ++ ++ if (rtsx_check_dev_flag(pcr, ASPM_L1_1_EN | ASPM_L1_2_EN ++ | PM_L1_1_EN | PM_L1_2_EN)) ++ option->force_clkreq_0 = false; ++ else ++ option->force_clkreq_0 = true; ++ } else { ++ option->ltr_enabled = false; ++ option->force_clkreq_0 = true; ++ } ++ + if (pcr->ops->fetch_vendor_settings) + pcr->ops->fetch_vendor_settings(pcr); + +diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c +index 5ce01ac72637e..42a66b74c1e5b 100644 +--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c ++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c +@@ -1778,6 +1778,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct 
napi_struct *napi, + return work_done; + + error: ++ if (xdp_flags & ENA_XDP_REDIRECT) ++ xdp_do_flush(); ++ + adapter = netdev_priv(rx_ring->netdev); + + if (rc == -ENOSPC) { +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index 969db3c45d176..e81cb825dff4c 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -2654,6 +2654,7 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget) + struct rx_cmp_ext *rxcmp1; + u32 cp_cons, tmp_raw_cons; + u32 raw_cons = cpr->cp_raw_cons; ++ bool flush_xdp = false; + u32 rx_pkts = 0; + u8 event = 0; + +@@ -2688,6 +2689,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget) + rx_pkts++; + else if (rc == -EBUSY) /* partial completion */ + break; ++ if (event & BNXT_REDIRECT_EVENT) ++ flush_xdp = true; + } else if (unlikely(TX_CMP_TYPE(txcmp) == + CMPL_BASE_TYPE_HWRM_DONE)) { + bnxt_hwrm_handler(bp, txcmp); +@@ -2707,6 +2710,8 @@ static int bnxt_poll_nitroa0(struct napi_struct *napi, int budget) + + if (event & BNXT_AGG_EVENT) + bnxt_db_write(bp, &rxr->rx_agg_db, rxr->rx_agg_prod); ++ if (flush_xdp) ++ xdp_do_flush(); + + if (!bnxt_has_work(bp, cpr) && rx_pkts < budget) { + napi_complete_done(napi, rx_pkts); +diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c +index 6bf3cc11d2121..2be518db04270 100644 +--- a/drivers/net/ethernet/engleder/tsnep_main.c ++++ b/drivers/net/ethernet/engleder/tsnep_main.c +@@ -65,8 +65,11 @@ static irqreturn_t tsnep_irq(int irq, void *arg) + + /* handle TX/RX queue 0 interrupt */ + if ((active & adapter->queue[0].irq_mask) != 0) { +- tsnep_disable_irq(adapter, adapter->queue[0].irq_mask); +- napi_schedule(&adapter->queue[0].napi); ++ if (napi_schedule_prep(&adapter->queue[0].napi)) { ++ tsnep_disable_irq(adapter, adapter->queue[0].irq_mask); ++ /* schedule after masking to avoid races */ ++ 
__napi_schedule(&adapter->queue[0].napi); ++ } + } + + return IRQ_HANDLED; +@@ -77,8 +80,11 @@ static irqreturn_t tsnep_irq_txrx(int irq, void *arg) + struct tsnep_queue *queue = arg; + + /* handle TX/RX queue interrupt */ +- tsnep_disable_irq(queue->adapter, queue->irq_mask); +- napi_schedule(&queue->napi); ++ if (napi_schedule_prep(&queue->napi)) { ++ tsnep_disable_irq(queue->adapter, queue->irq_mask); ++ /* schedule after masking to avoid races */ ++ __napi_schedule(&queue->napi); ++ } + + return IRQ_HANDLED; + } +@@ -924,6 +930,10 @@ static int tsnep_poll(struct napi_struct *napi, int budget) + if (queue->tx) + complete = tsnep_tx_poll(queue->tx, budget); + ++ /* handle case where we are called by netpoll with a budget of 0 */ ++ if (unlikely(budget <= 0)) ++ return budget; ++ + if (queue->rx) { + done = tsnep_rx_poll(queue->rx, napi, budget); + if (done >= budget) +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +index 8aae179554a81..04c9baca1b0f8 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +@@ -3352,6 +3352,15 @@ static void hns3_set_default_feature(struct net_device *netdev) + NETIF_F_HW_TC); + + netdev->hw_enc_features |= netdev->vlan_features | NETIF_F_TSO_MANGLEID; ++ ++ /* The device_version V3 hardware can't offload the checksum for IP in ++ * GRE packets, but can do it for NvGRE. So default to disable the ++ * checksum and GSO offload for GRE. 
++ */ ++ if (ae_dev->dev_version > HNAE3_DEVICE_VERSION_V2) { ++ netdev->features &= ~NETIF_F_GSO_GRE; ++ netdev->features &= ~NETIF_F_GSO_GRE_CSUM; ++ } + } + + static int hns3_alloc_buffer(struct hns3_enet_ring *ring, +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 884e45fb6b72e..3e1d202d60ce1 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -3662,9 +3662,14 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) + static void hclge_clear_event_cause(struct hclge_dev *hdev, u32 event_type, + u32 regclr) + { ++#define HCLGE_IMP_RESET_DELAY 5 ++ + switch (event_type) { + case HCLGE_VECTOR0_EVENT_PTP: + case HCLGE_VECTOR0_EVENT_RST: ++ if (regclr == BIT(HCLGE_VECTOR0_IMPRESET_INT_B)) ++ mdelay(HCLGE_IMP_RESET_DELAY); ++ + hclge_write_dev(&hdev->hw, HCLGE_MISC_RESET_STS_REG, regclr); + break; + case HCLGE_VECTOR0_EVENT_MBX: +@@ -7454,6 +7459,12 @@ static int hclge_del_cls_flower(struct hnae3_handle *handle, + ret = hclge_fd_tcam_config(hdev, HCLGE_FD_STAGE_1, true, rule->location, + NULL, false); + if (ret) { ++ /* if tcam config fail, set rule state to TO_DEL, ++ * so the rule will be deleted when periodic ++ * task being scheduled. 
++ */ ++ hclge_update_fd_list(hdev, HCLGE_FD_TO_DEL, rule->location, NULL); ++ set_bit(HCLGE_STATE_FD_TBL_CHANGED, &hdev->state); + spin_unlock_bh(&hdev->fd_rule_lock); + return ret; + } +@@ -8930,7 +8941,7 @@ static void hclge_update_overflow_flags(struct hclge_vport *vport, + if (mac_type == HCLGE_MAC_ADDR_UC) { + if (is_all_added) + vport->overflow_promisc_flags &= ~HNAE3_OVERFLOW_UPE; +- else ++ else if (hclge_is_umv_space_full(vport, true)) + vport->overflow_promisc_flags |= HNAE3_OVERFLOW_UPE; + } else { + if (is_all_added) +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +index b1b14850e958f..72cf5145e15a2 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +@@ -1909,7 +1909,8 @@ static void hclgevf_periodic_service_task(struct hclgevf_dev *hdev) + unsigned long delta = round_jiffies_relative(HZ); + struct hnae3_handle *handle = &hdev->nic; + +- if (test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state)) ++ if (test_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state) || ++ test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state)) + return; + + if (time_is_after_jiffies(hdev->last_serv_processed + HZ)) { +diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +index cb7cf672f6971..547e67d9470b7 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c +@@ -4397,9 +4397,7 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id, + goto error_pvid; + + i40e_vlan_stripping_enable(vsi); +- i40e_vc_reset_vf(vf, true); +- /* During reset the VF got a new VSI, so refresh a pointer. 
*/ +- vsi = pf->vsi[vf->lan_vsi_idx]; ++ + /* Locked once because multiple functions below iterate list */ + spin_lock_bh(&vsi->mac_filter_hash_lock); + +@@ -4485,6 +4483,10 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev, int vf_id, + */ + vf->port_vlan_id = le16_to_cpu(vsi->info.pvid); + ++ i40e_vc_reset_vf(vf, true); ++ /* During reset the VF got a new VSI, so refresh a pointer. */ ++ vsi = pf->vsi[vf->lan_vsi_idx]; ++ + ret = i40e_config_vf_promiscuous_mode(vf, vsi->id, allmulti, alluni); + if (ret) { + dev_err(&pf->pdev->dev, "Unable to config vf promiscuous mode\n"); +diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h +index 543931c06bb17..06cfd567866c2 100644 +--- a/drivers/net/ethernet/intel/iavf/iavf.h ++++ b/drivers/net/ethernet/intel/iavf/iavf.h +@@ -521,7 +521,7 @@ void iavf_down(struct iavf_adapter *adapter); + int iavf_process_config(struct iavf_adapter *adapter); + int iavf_parse_vf_resource_msg(struct iavf_adapter *adapter); + void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags); +-void iavf_schedule_request_stats(struct iavf_adapter *adapter); ++void iavf_schedule_aq_request(struct iavf_adapter *adapter, u64 flags); + void iavf_schedule_finish_config(struct iavf_adapter *adapter); + void iavf_reset(struct iavf_adapter *adapter); + void iavf_set_ethtool_ops(struct net_device *netdev); +diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c +index fe912b1c468ef..c13b4fa659ee9 100644 +--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c ++++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c +@@ -362,7 +362,7 @@ static void iavf_get_ethtool_stats(struct net_device *netdev, + unsigned int i; + + /* Explicitly request stats refresh */ +- iavf_schedule_request_stats(adapter); ++ iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_REQUEST_STATS); + + iavf_add_ethtool_stats(&data, adapter, iavf_gstrings_stats); + +diff --git 
a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c +index 22bc57ee24228..a39f7f0d6ab0b 100644 +--- a/drivers/net/ethernet/intel/iavf/iavf_main.c ++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c +@@ -322,15 +322,13 @@ void iavf_schedule_reset(struct iavf_adapter *adapter, u64 flags) + } + + /** +- * iavf_schedule_request_stats - Set the flags and schedule statistics request ++ * iavf_schedule_aq_request - Set the flags and schedule aq request + * @adapter: board private structure +- * +- * Sets IAVF_FLAG_AQ_REQUEST_STATS flag so iavf_watchdog_task() will explicitly +- * request and refresh ethtool stats ++ * @flags: requested aq flags + **/ +-void iavf_schedule_request_stats(struct iavf_adapter *adapter) ++void iavf_schedule_aq_request(struct iavf_adapter *adapter, u64 flags) + { +- adapter->aq_required |= IAVF_FLAG_AQ_REQUEST_STATS; ++ adapter->aq_required |= flags; + mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0); + } + +@@ -831,7 +829,7 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter, + list_add_tail(&f->list, &adapter->vlan_filter_list); + f->state = IAVF_VLAN_ADD; + adapter->num_vlan_filters++; +- adapter->aq_required |= IAVF_FLAG_AQ_ADD_VLAN_FILTER; ++ iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER); + } + + clearout: +@@ -853,7 +851,7 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan) + f = iavf_find_vlan(adapter, vlan); + if (f) { + f->state = IAVF_VLAN_REMOVE; +- adapter->aq_required |= IAVF_FLAG_AQ_DEL_VLAN_FILTER; ++ iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_DEL_VLAN_FILTER); + } + + spin_unlock_bh(&adapter->mac_vlan_list_lock); +@@ -1433,7 +1431,8 @@ void iavf_down(struct iavf_adapter *adapter) + iavf_clear_fdir_filters(adapter); + iavf_clear_adv_rss_conf(adapter); + +- if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)) { ++ if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) && ++ !(test_bit(__IAVF_IN_REMOVE_TASK, 
&adapter->crit_section))) { + /* cancel any current operation */ + adapter->current_op = VIRTCHNL_OP_UNKNOWN; + /* Schedule operations to close down the HW. Don't wait +diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c +index 511fc3f412087..9166fde40c772 100644 +--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c ++++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c +@@ -867,6 +867,18 @@ static void igc_ethtool_get_stats(struct net_device *netdev, + spin_unlock(&adapter->stats64_lock); + } + ++static int igc_ethtool_get_previous_rx_coalesce(struct igc_adapter *adapter) ++{ ++ return (adapter->rx_itr_setting <= 3) ? ++ adapter->rx_itr_setting : adapter->rx_itr_setting >> 2; ++} ++ ++static int igc_ethtool_get_previous_tx_coalesce(struct igc_adapter *adapter) ++{ ++ return (adapter->tx_itr_setting <= 3) ? ++ adapter->tx_itr_setting : adapter->tx_itr_setting >> 2; ++} ++ + static int igc_ethtool_get_coalesce(struct net_device *netdev, + struct ethtool_coalesce *ec, + struct kernel_ethtool_coalesce *kernel_coal, +@@ -874,17 +886,8 @@ static int igc_ethtool_get_coalesce(struct net_device *netdev, + { + struct igc_adapter *adapter = netdev_priv(netdev); + +- if (adapter->rx_itr_setting <= 3) +- ec->rx_coalesce_usecs = adapter->rx_itr_setting; +- else +- ec->rx_coalesce_usecs = adapter->rx_itr_setting >> 2; +- +- if (!(adapter->flags & IGC_FLAG_QUEUE_PAIRS)) { +- if (adapter->tx_itr_setting <= 3) +- ec->tx_coalesce_usecs = adapter->tx_itr_setting; +- else +- ec->tx_coalesce_usecs = adapter->tx_itr_setting >> 2; +- } ++ ec->rx_coalesce_usecs = igc_ethtool_get_previous_rx_coalesce(adapter); ++ ec->tx_coalesce_usecs = igc_ethtool_get_previous_tx_coalesce(adapter); + + return 0; + } +@@ -909,8 +912,12 @@ static int igc_ethtool_set_coalesce(struct net_device *netdev, + ec->tx_coalesce_usecs == 2) + return -EINVAL; + +- if ((adapter->flags & IGC_FLAG_QUEUE_PAIRS) && ec->tx_coalesce_usecs) ++ if ((adapter->flags & 
IGC_FLAG_QUEUE_PAIRS) && ++ ec->tx_coalesce_usecs != igc_ethtool_get_previous_tx_coalesce(adapter)) { ++ NL_SET_ERR_MSG_MOD(extack, ++ "Queue Pair mode enabled, both Rx and Tx coalescing controlled by rx-usecs"); + return -EINVAL; ++ } + + /* If ITR is disabled, disable DMAC */ + if (ec->rx_coalesce_usecs == 0) { +diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c +index 2f3947cf513bd..1ac836a55cd31 100644 +--- a/drivers/net/ethernet/intel/igc/igc_main.c ++++ b/drivers/net/ethernet/intel/igc/igc_main.c +@@ -6322,7 +6322,7 @@ static int igc_xdp_xmit(struct net_device *dev, int num_frames, + struct igc_ring *ring; + int i, drops; + +- if (unlikely(test_bit(__IGC_DOWN, &adapter->state))) ++ if (unlikely(!netif_carrier_ok(dev))) + return -ENETDOWN; + + if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) +diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c +index d4ec46d1c8cfb..61354f7985035 100644 +--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c ++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c +@@ -726,13 +726,13 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb, + dma_map_sg_err: + if (si > 0) { + dma_unmap_single(iq->dev, sglist[0].dma_ptr[0], +- sglist[0].len[0], DMA_TO_DEVICE); +- sglist[0].len[0] = 0; ++ sglist[0].len[3], DMA_TO_DEVICE); ++ sglist[0].len[3] = 0; + } + while (si > 1) { + dma_unmap_page(iq->dev, sglist[si >> 2].dma_ptr[si & 3], +- sglist[si >> 2].len[si & 3], DMA_TO_DEVICE); +- sglist[si >> 2].len[si & 3] = 0; ++ sglist[si >> 2].len[3 - (si & 3)], DMA_TO_DEVICE); ++ sglist[si >> 2].len[3 - (si & 3)] = 0; + si--; + } + tx_buffer->gather = 0; +diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c +index 5a520d37bea02..d0adb82d65c31 100644 +--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c ++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c +@@ 
-69,12 +69,12 @@ int octep_iq_process_completions(struct octep_iq *iq, u16 budget) + compl_sg++; + + dma_unmap_single(iq->dev, tx_buffer->sglist[0].dma_ptr[0], +- tx_buffer->sglist[0].len[0], DMA_TO_DEVICE); ++ tx_buffer->sglist[0].len[3], DMA_TO_DEVICE); + + i = 1; /* entry 0 is main skb, unmapped above */ + while (frags--) { + dma_unmap_page(iq->dev, tx_buffer->sglist[i >> 2].dma_ptr[i & 3], +- tx_buffer->sglist[i >> 2].len[i & 3], DMA_TO_DEVICE); ++ tx_buffer->sglist[i >> 2].len[3 - (i & 3)], DMA_TO_DEVICE); + i++; + } + +@@ -131,13 +131,13 @@ static void octep_iq_free_pending(struct octep_iq *iq) + + dma_unmap_single(iq->dev, + tx_buffer->sglist[0].dma_ptr[0], +- tx_buffer->sglist[0].len[0], ++ tx_buffer->sglist[0].len[3], + DMA_TO_DEVICE); + + i = 1; /* entry 0 is main skb, unmapped above */ + while (frags--) { + dma_unmap_page(iq->dev, tx_buffer->sglist[i >> 2].dma_ptr[i & 3], +- tx_buffer->sglist[i >> 2].len[i & 3], DMA_TO_DEVICE); ++ tx_buffer->sglist[i >> 2].len[3 - (i & 3)], DMA_TO_DEVICE); + i++; + } + +diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h +index 2ef57980eb47b..21e75ff9f5e71 100644 +--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h ++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h +@@ -17,7 +17,21 @@ + #define TX_BUFTYPE_NET_SG 2 + #define NUM_TX_BUFTYPES 3 + +-/* Hardware format for Scatter/Gather list */ ++/* Hardware format for Scatter/Gather list ++ * ++ * 63 48|47 32|31 16|15 0 ++ * ----------------------------------------- ++ * | Len 0 | Len 1 | Len 2 | Len 3 | ++ * ----------------------------------------- ++ * | Ptr 0 | ++ * ----------------------------------------- ++ * | Ptr 1 | ++ * ----------------------------------------- ++ * | Ptr 2 | ++ * ----------------------------------------- ++ * | Ptr 3 | ++ * ----------------------------------------- ++ */ + struct octep_tx_sglist_desc { + u16 len[4]; + dma_addr_t dma_ptr[4]; +diff --git 
a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +index 7af223b0a37f5..5704fb75fa477 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c ++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c +@@ -29,7 +29,8 @@ + static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf, + struct bpf_prog *prog, + struct nix_cqe_rx_s *cqe, +- struct otx2_cq_queue *cq); ++ struct otx2_cq_queue *cq, ++ bool *need_xdp_flush); + + static int otx2_nix_cq_op_status(struct otx2_nic *pfvf, + struct otx2_cq_queue *cq) +@@ -340,7 +341,7 @@ static bool otx2_check_rcv_errors(struct otx2_nic *pfvf, + static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf, + struct napi_struct *napi, + struct otx2_cq_queue *cq, +- struct nix_cqe_rx_s *cqe) ++ struct nix_cqe_rx_s *cqe, bool *need_xdp_flush) + { + struct nix_rx_parse_s *parse = &cqe->parse; + struct nix_rx_sg_s *sg = &cqe->sg; +@@ -356,7 +357,7 @@ static void otx2_rcv_pkt_handler(struct otx2_nic *pfvf, + } + + if (pfvf->xdp_prog) +- if (otx2_xdp_rcv_pkt_handler(pfvf, pfvf->xdp_prog, cqe, cq)) ++ if (otx2_xdp_rcv_pkt_handler(pfvf, pfvf->xdp_prog, cqe, cq, need_xdp_flush)) + return; + + skb = napi_get_frags(napi); +@@ -389,6 +390,7 @@ static int otx2_rx_napi_handler(struct otx2_nic *pfvf, + struct napi_struct *napi, + struct otx2_cq_queue *cq, int budget) + { ++ bool need_xdp_flush = false; + struct nix_cqe_rx_s *cqe; + int processed_cqe = 0; + +@@ -410,13 +412,15 @@ process_cqe: + cq->cq_head++; + cq->cq_head &= (cq->cqe_cnt - 1); + +- otx2_rcv_pkt_handler(pfvf, napi, cq, cqe); ++ otx2_rcv_pkt_handler(pfvf, napi, cq, cqe, &need_xdp_flush); + + cqe->hdr.cqe_type = NIX_XQE_TYPE_INVALID; + cqe->sg.seg_addr = 0x00; + processed_cqe++; + cq->pend_cqe--; + } ++ if (need_xdp_flush) ++ xdp_do_flush(); + + /* Free CQEs to HW */ + otx2_write64(pfvf, NIX_LF_CQ_OP_DOOR, +@@ -1323,7 +1327,8 @@ bool otx2_xdp_sq_append_pkt(struct otx2_nic *pfvf, u64 iova, int len, u16 qidx) + 
static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf, + struct bpf_prog *prog, + struct nix_cqe_rx_s *cqe, +- struct otx2_cq_queue *cq) ++ struct otx2_cq_queue *cq, ++ bool *need_xdp_flush) + { + unsigned char *hard_start, *data; + int qidx = cq->cq_idx; +@@ -1360,8 +1365,10 @@ static bool otx2_xdp_rcv_pkt_handler(struct otx2_nic *pfvf, + + otx2_dma_unmap_page(pfvf, iova, pfvf->rbsize, + DMA_FROM_DEVICE); +- if (!err) ++ if (!err) { ++ *need_xdp_flush = true; + return true; ++ } + put_page(page); + break; + default: +diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h +index ad8a2a4453b76..93a4258421667 100644 +--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h ++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h +@@ -180,6 +180,7 @@ typedef void (*ionic_desc_cb)(struct ionic_queue *q, + struct ionic_desc_info *desc_info, + struct ionic_cq_info *cq_info, void *cb_arg); + ++#define IONIC_MAX_BUF_LEN ((u16)-1) + #define IONIC_PAGE_SIZE PAGE_SIZE + #define IONIC_PAGE_SPLIT_SZ (PAGE_SIZE / 2) + #define IONIC_PAGE_GFP_MASK (GFP_ATOMIC | __GFP_NOWARN |\ +diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c +index f8f5eb1307681..4684b9f194a68 100644 +--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c ++++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c +@@ -207,7 +207,8 @@ static struct sk_buff *ionic_rx_frags(struct ionic_queue *q, + return NULL; + } + +- frag_len = min_t(u16, len, IONIC_PAGE_SIZE - buf_info->page_offset); ++ frag_len = min_t(u16, len, min_t(u32, IONIC_MAX_BUF_LEN, ++ IONIC_PAGE_SIZE - buf_info->page_offset)); + len -= frag_len; + + dma_sync_single_for_cpu(dev, +@@ -444,7 +445,8 @@ void ionic_rx_fill(struct ionic_queue *q) + + /* fill main descriptor - buf[0] */ + desc->addr = cpu_to_le64(buf_info->dma_addr + buf_info->page_offset); +- frag_len = min_t(u16, len, IONIC_PAGE_SIZE - buf_info->page_offset); ++ frag_len = 
min_t(u16, len, min_t(u32, IONIC_MAX_BUF_LEN, ++ IONIC_PAGE_SIZE - buf_info->page_offset)); + desc->len = cpu_to_le16(frag_len); + remain_len -= frag_len; + buf_info++; +@@ -463,7 +465,9 @@ void ionic_rx_fill(struct ionic_queue *q) + } + + sg_elem->addr = cpu_to_le64(buf_info->dma_addr + buf_info->page_offset); +- frag_len = min_t(u16, remain_len, IONIC_PAGE_SIZE - buf_info->page_offset); ++ frag_len = min_t(u16, remain_len, min_t(u32, IONIC_MAX_BUF_LEN, ++ IONIC_PAGE_SIZE - ++ buf_info->page_offset)); + sg_elem->len = cpu_to_le16(frag_len); + remain_len -= frag_len; + buf_info++; +diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c +index 921ca59822b0f..556b2d1cd2aca 100644 +--- a/drivers/net/team/team.c ++++ b/drivers/net/team/team.c +@@ -2127,7 +2127,12 @@ static const struct ethtool_ops team_ethtool_ops = { + static void team_setup_by_port(struct net_device *dev, + struct net_device *port_dev) + { +- dev->header_ops = port_dev->header_ops; ++ struct team *team = netdev_priv(dev); ++ ++ if (port_dev->type == ARPHRD_ETHER) ++ dev->header_ops = team->header_ops_cache; ++ else ++ dev->header_ops = port_dev->header_ops; + dev->type = port_dev->type; + dev->hard_header_len = port_dev->hard_header_len; + dev->needed_headroom = port_dev->needed_headroom; +@@ -2174,8 +2179,11 @@ static int team_dev_type_check_change(struct net_device *dev, + + static void team_setup(struct net_device *dev) + { ++ struct team *team = netdev_priv(dev); ++ + ether_setup(dev); + dev->max_mtu = ETH_MAX_MTU; ++ team->header_ops_cache = dev->header_ops; + + dev->netdev_ops = &team_netdev_ops; + dev->ethtool_ops = &team_ethtool_ops; +diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c +index 6312f67f260e0..5966e36875def 100644 +--- a/drivers/net/thunderbolt.c ++++ b/drivers/net/thunderbolt.c +@@ -1005,12 +1005,11 @@ static bool tbnet_xmit_csum_and_map(struct tbnet *net, struct sk_buff *skb, + *tucso = ~csum_tcpudp_magic(ip_hdr(skb)->saddr, + ip_hdr(skb)->daddr, 0, + 
ip_hdr(skb)->protocol, 0); +- } else if (skb_is_gso_v6(skb)) { ++ } else if (skb_is_gso(skb) && skb_is_gso_v6(skb)) { + tucso = dest + ((void *)&(tcp_hdr(skb)->check) - data); + *tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr, + &ipv6_hdr(skb)->daddr, 0, + IPPROTO_TCP, 0); +- return false; + } else if (protocol == htons(ETH_P_IPV6)) { + tucso = dest + skb_checksum_start_offset(skb) + skb->csum_offset; + *tucso = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr, +diff --git a/drivers/net/wireless/ath/ath11k/dp.h b/drivers/net/wireless/ath/ath11k/dp.h +index be9eafc872b3b..232fd2e638bf6 100644 +--- a/drivers/net/wireless/ath/ath11k/dp.h ++++ b/drivers/net/wireless/ath/ath11k/dp.h +@@ -303,12 +303,16 @@ struct ath11k_dp { + + #define HTT_TX_WBM_COMP_STATUS_OFFSET 8 + ++#define HTT_INVALID_PEER_ID 0xffff ++ + /* HTT tx completion is overlaid in wbm_release_ring */ + #define HTT_TX_WBM_COMP_INFO0_STATUS GENMASK(12, 9) + #define HTT_TX_WBM_COMP_INFO0_REINJECT_REASON GENMASK(16, 13) + #define HTT_TX_WBM_COMP_INFO0_REINJECT_REASON GENMASK(16, 13) + + #define HTT_TX_WBM_COMP_INFO1_ACK_RSSI GENMASK(31, 24) ++#define HTT_TX_WBM_COMP_INFO2_SW_PEER_ID GENMASK(15, 0) ++#define HTT_TX_WBM_COMP_INFO2_VALID BIT(21) + + struct htt_tx_wbm_completion { + u32 info0; +diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c +index 8afbba2369354..cd24488612454 100644 +--- a/drivers/net/wireless/ath/ath11k/dp_tx.c ++++ b/drivers/net/wireless/ath/ath11k/dp_tx.c +@@ -316,10 +316,12 @@ ath11k_dp_tx_htt_tx_complete_buf(struct ath11k_base *ab, + struct dp_tx_ring *tx_ring, + struct ath11k_dp_htt_wbm_tx_status *ts) + { ++ struct ieee80211_tx_status status = { 0 }; + struct sk_buff *msdu; + struct ieee80211_tx_info *info; + struct ath11k_skb_cb *skb_cb; + struct ath11k *ar; ++ struct ath11k_peer *peer; + + spin_lock(&tx_ring->tx_idr_lock); + msdu = idr_remove(&tx_ring->txbuf_idr, ts->msdu_id); +@@ -341,6 +343,11 @@ ath11k_dp_tx_htt_tx_complete_buf(struct ath11k_base 
*ab, + + dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE); + ++ if (!skb_cb->vif) { ++ ieee80211_free_txskb(ar->hw, msdu); ++ return; ++ } ++ + memset(&info->status, 0, sizeof(info->status)); + + if (ts->acked) { +@@ -355,7 +362,23 @@ ath11k_dp_tx_htt_tx_complete_buf(struct ath11k_base *ab, + } + } + +- ieee80211_tx_status(ar->hw, msdu); ++ spin_lock_bh(&ab->base_lock); ++ peer = ath11k_peer_find_by_id(ab, ts->peer_id); ++ if (!peer || !peer->sta) { ++ ath11k_dbg(ab, ATH11K_DBG_DATA, ++ "dp_tx: failed to find the peer with peer_id %d\n", ++ ts->peer_id); ++ spin_unlock_bh(&ab->base_lock); ++ ieee80211_free_txskb(ar->hw, msdu); ++ return; ++ } ++ spin_unlock_bh(&ab->base_lock); ++ ++ status.sta = peer->sta; ++ status.info = info; ++ status.skb = msdu; ++ ++ ieee80211_tx_status_ext(ar->hw, &status); + } + + static void +@@ -379,7 +402,15 @@ ath11k_dp_tx_process_htt_tx_complete(struct ath11k_base *ab, + ts.msdu_id = msdu_id; + ts.ack_rssi = FIELD_GET(HTT_TX_WBM_COMP_INFO1_ACK_RSSI, + status_desc->info1); ++ ++ if (FIELD_GET(HTT_TX_WBM_COMP_INFO2_VALID, status_desc->info2)) ++ ts.peer_id = FIELD_GET(HTT_TX_WBM_COMP_INFO2_SW_PEER_ID, ++ status_desc->info2); ++ else ++ ts.peer_id = HTT_INVALID_PEER_ID; ++ + ath11k_dp_tx_htt_tx_complete_buf(ab, tx_ring, &ts); ++ + break; + case HAL_WBM_REL_HTT_TX_COMP_STATUS_REINJ: + case HAL_WBM_REL_HTT_TX_COMP_STATUS_INSPECT: +@@ -535,12 +566,12 @@ static void ath11k_dp_tx_complete_msdu(struct ath11k *ar, + dma_unmap_single(ab->dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE); + + if (unlikely(!rcu_access_pointer(ab->pdevs_active[ar->pdev_idx]))) { +- dev_kfree_skb_any(msdu); ++ ieee80211_free_txskb(ar->hw, msdu); + return; + } + + if (unlikely(!skb_cb->vif)) { +- dev_kfree_skb_any(msdu); ++ ieee80211_free_txskb(ar->hw, msdu); + return; + } + +@@ -593,7 +624,7 @@ static void ath11k_dp_tx_complete_msdu(struct ath11k *ar, + "dp_tx: failed to find the peer with peer_id %d\n", + ts->peer_id); + spin_unlock_bh(&ab->base_lock); 
+- dev_kfree_skb_any(msdu); ++ ieee80211_free_txskb(ar->hw, msdu); + return; + } + arsta = (struct ath11k_sta *)peer->sta->drv_priv; +diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.h b/drivers/net/wireless/ath/ath11k/dp_tx.h +index e87d65bfbf06e..68a21ea9b9346 100644 +--- a/drivers/net/wireless/ath/ath11k/dp_tx.h ++++ b/drivers/net/wireless/ath/ath11k/dp_tx.h +@@ -13,6 +13,7 @@ struct ath11k_dp_htt_wbm_tx_status { + u32 msdu_id; + bool acked; + int ack_rssi; ++ u16 peer_id; + }; + + void ath11k_dp_tx_update_txcompl(struct ath11k *ar, struct hal_tx_status *ts); +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 6c3d469eed7e3..177a365b8ec55 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -1911,7 +1911,7 @@ char *nvme_fc_io_getuuid(struct nvmefc_fcp_req *req) + struct nvme_fc_fcp_op *op = fcp_req_to_fcp_op(req); + struct request *rq = op->rq; + +- if (!IS_ENABLED(CONFIG_BLK_CGROUP_FC_APPID) || !rq->bio) ++ if (!IS_ENABLED(CONFIG_BLK_CGROUP_FC_APPID) || !rq || !rq->bio) + return NULL; + return blkcg_get_fc_appid(rq->bio); + } +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index b30269f5e68fb..64990a2cfd0a7 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -392,14 +392,6 @@ static int nvme_pci_npages_sgl(void) + NVME_CTRL_PAGE_SIZE); + } + +-static size_t nvme_pci_iod_alloc_size(void) +-{ +- size_t npages = max(nvme_pci_npages_prp(), nvme_pci_npages_sgl()); +- +- return sizeof(__le64 *) * npages + +- sizeof(struct scatterlist) * NVME_MAX_SEGS; +-} +- + static int nvme_admin_init_hctx(struct blk_mq_hw_ctx *hctx, void *data, + unsigned int hctx_idx) + { +@@ -2775,6 +2767,22 @@ static void nvme_release_prp_pools(struct nvme_dev *dev) + dma_pool_destroy(dev->prp_small_pool); + } + ++static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev) ++{ ++ size_t npages = max(nvme_pci_npages_prp(), nvme_pci_npages_sgl()); ++ size_t alloc_size = sizeof(__le64 *) * npages + ++ sizeof(struct 
scatterlist) * NVME_MAX_SEGS; ++ ++ WARN_ON_ONCE(alloc_size > PAGE_SIZE); ++ dev->iod_mempool = mempool_create_node(1, ++ mempool_kmalloc, mempool_kfree, ++ (void *)alloc_size, GFP_KERNEL, ++ dev_to_node(dev->dev)); ++ if (!dev->iod_mempool) ++ return -ENOMEM; ++ return 0; ++} ++ + static void nvme_free_tagset(struct nvme_dev *dev) + { + if (dev->tagset.tags) +@@ -2782,6 +2790,7 @@ static void nvme_free_tagset(struct nvme_dev *dev) + dev->ctrl.tagset = NULL; + } + ++/* pairs with nvme_pci_alloc_dev */ + static void nvme_pci_free_ctrl(struct nvme_ctrl *ctrl) + { + struct nvme_dev *dev = to_nvme_dev(ctrl); +@@ -3098,20 +3107,20 @@ static void nvme_async_probe(void *data, async_cookie_t cookie) + nvme_put_ctrl(&dev->ctrl); + } + +-static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) ++static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev, ++ const struct pci_device_id *id) + { +- int node, result = -ENOMEM; +- struct nvme_dev *dev; + unsigned long quirks = id->driver_data; +- size_t alloc_size; +- +- node = dev_to_node(&pdev->dev); +- if (node == NUMA_NO_NODE) +- set_dev_node(&pdev->dev, first_memory_node); ++ int node = dev_to_node(&pdev->dev); ++ struct nvme_dev *dev; ++ int ret = -ENOMEM; + + dev = kzalloc_node(sizeof(*dev), GFP_KERNEL, node); + if (!dev) +- return -ENOMEM; ++ return ERR_PTR(-ENOMEM); ++ INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work); ++ INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work); ++ mutex_init(&dev->shutdown_lock); + + dev->nr_write_queues = write_queues; + dev->nr_poll_queues = poll_queues; +@@ -3119,25 +3128,11 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) + dev->queues = kcalloc_node(dev->nr_allocated_queues, + sizeof(struct nvme_queue), GFP_KERNEL, node); + if (!dev->queues) +- goto free; ++ goto out_free_dev; + + dev->dev = get_device(&pdev->dev); +- pci_set_drvdata(pdev, dev); +- +- result = nvme_dev_map(dev); +- if (result) +- goto put_pci; +- +- 
INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work); +- INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work); +- mutex_init(&dev->shutdown_lock); +- +- result = nvme_setup_prp_pools(dev); +- if (result) +- goto unmap; + + quirks |= check_vendor_combination_bug(pdev); +- + if (!noacpi && acpi_storage_d3(&pdev->dev)) { + /* + * Some systems use a bios work around to ask for D3 on +@@ -3147,46 +3142,54 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) + "platform quirk: setting simple suspend\n"); + quirks |= NVME_QUIRK_SIMPLE_SUSPEND; + } ++ ret = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops, ++ quirks); ++ if (ret) ++ goto out_put_device; ++ return dev; + +- /* +- * Double check that our mempool alloc size will cover the biggest +- * command we support. +- */ +- alloc_size = nvme_pci_iod_alloc_size(); +- WARN_ON_ONCE(alloc_size > PAGE_SIZE); ++out_put_device: ++ put_device(dev->dev); ++ kfree(dev->queues); ++out_free_dev: ++ kfree(dev); ++ return ERR_PTR(ret); ++} + +- dev->iod_mempool = mempool_create_node(1, mempool_kmalloc, +- mempool_kfree, +- (void *) alloc_size, +- GFP_KERNEL, node); +- if (!dev->iod_mempool) { +- result = -ENOMEM; +- goto release_pools; +- } ++static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id) ++{ ++ struct nvme_dev *dev; ++ int result = -ENOMEM; ++ ++ dev = nvme_pci_alloc_dev(pdev, id); ++ if (IS_ERR(dev)) ++ return PTR_ERR(dev); ++ ++ result = nvme_dev_map(dev); ++ if (result) ++ goto out_uninit_ctrl; ++ ++ result = nvme_setup_prp_pools(dev); ++ if (result) ++ goto out_dev_unmap; + +- result = nvme_init_ctrl(&dev->ctrl, &pdev->dev, &nvme_pci_ctrl_ops, +- quirks); ++ result = nvme_pci_alloc_iod_mempool(dev); + if (result) +- goto release_mempool; ++ goto out_release_prp_pools; + + dev_info(dev->ctrl.device, "pci function %s\n", dev_name(&pdev->dev)); ++ pci_set_drvdata(pdev, dev); + + nvme_reset_ctrl(&dev->ctrl); + async_schedule(nvme_async_probe, dev); +- + return 0; + +- 
release_mempool: +- mempool_destroy(dev->iod_mempool); +- release_pools: ++out_release_prp_pools: + nvme_release_prp_pools(dev); +- unmap: ++out_dev_unmap: + nvme_dev_unmap(dev); +- put_pci: +- put_device(dev->dev); +- free: +- kfree(dev->queues); +- kfree(dev); ++out_uninit_ctrl: ++ nvme_uninit_ctrl(&dev->ctrl); + return result; + } + +diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c +index bcc1dae007803..890c3c0f3d140 100644 +--- a/drivers/parisc/iosapic.c ++++ b/drivers/parisc/iosapic.c +@@ -202,9 +202,9 @@ static inline void iosapic_write(void __iomem *iosapic, unsigned int reg, u32 va + + static DEFINE_SPINLOCK(iosapic_lock); + +-static inline void iosapic_eoi(void __iomem *addr, unsigned int data) ++static inline void iosapic_eoi(__le32 __iomem *addr, __le32 data) + { +- __raw_writel(data, addr); ++ __raw_writel((__force u32)data, addr); + } + + /* +diff --git a/drivers/parisc/iosapic_private.h b/drivers/parisc/iosapic_private.h +index 73ecc657ad954..bd8ff40162b4b 100644 +--- a/drivers/parisc/iosapic_private.h ++++ b/drivers/parisc/iosapic_private.h +@@ -118,8 +118,8 @@ struct iosapic_irt { + struct vector_info { + struct iosapic_info *iosapic; /* I/O SAPIC this vector is on */ + struct irt_entry *irte; /* IRT entry */ +- u32 __iomem *eoi_addr; /* precalculate EOI reg address */ +- u32 eoi_data; /* IA64: ? PA: swapped txn_data */ ++ __le32 __iomem *eoi_addr; /* precalculate EOI reg address */ ++ __le32 eoi_data; /* IA64: ? 
PA: swapped txn_data */ + int txn_irq; /* virtual IRQ number for processor */ + ulong txn_addr; /* IA64: id_eid PA: partial HPA */ + u32 txn_data; /* CPU interrupt bit */ +diff --git a/drivers/platform/mellanox/Kconfig b/drivers/platform/mellanox/Kconfig +index 30b50920b278c..f7dfa0e785fd6 100644 +--- a/drivers/platform/mellanox/Kconfig ++++ b/drivers/platform/mellanox/Kconfig +@@ -60,6 +60,7 @@ config MLXBF_BOOTCTL + tristate "Mellanox BlueField Firmware Boot Control driver" + depends on ARM64 + depends on ACPI ++ depends on NET + help + The Mellanox BlueField firmware implements functionality to + request swapping the primary and alternate eMMC boot partition, +diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c +index fdf7da06af306..d85d895fee894 100644 +--- a/drivers/platform/x86/asus-nb-wmi.c ++++ b/drivers/platform/x86/asus-nb-wmi.c +@@ -478,6 +478,15 @@ static const struct dmi_system_id asus_quirks[] = { + }, + .driver_data = &quirk_asus_tablet_mode, + }, ++ { ++ .callback = dmi_matched, ++ .ident = "ASUS ROG FLOW X16", ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "GV601V"), ++ }, ++ .driver_data = &quirk_asus_tablet_mode, ++ }, + { + .callback = dmi_matched, + .ident = "ASUS VivoBook E410MA", +diff --git a/drivers/platform/x86/intel_scu_ipc.c b/drivers/platform/x86/intel_scu_ipc.c +index e7a3e34028178..189c5460edd81 100644 +--- a/drivers/platform/x86/intel_scu_ipc.c ++++ b/drivers/platform/x86/intel_scu_ipc.c +@@ -19,6 +19,7 @@ + #include + #include + #include ++#include + #include + #include + +@@ -232,19 +233,15 @@ static inline u32 ipc_data_readl(struct intel_scu_ipc_dev *scu, u32 offset) + /* Wait till scu status is busy */ + static inline int busy_loop(struct intel_scu_ipc_dev *scu) + { +- unsigned long end = jiffies + IPC_TIMEOUT; +- +- do { +- u32 status; +- +- status = ipc_read_status(scu); +- if (!(status & IPC_STATUS_BUSY)) +- return (status & IPC_STATUS_ERR) 
? -EIO : 0; ++ u8 status; ++ int err; + +- usleep_range(50, 100); +- } while (time_before(jiffies, end)); ++ err = readx_poll_timeout(ipc_read_status, scu, status, !(status & IPC_STATUS_BUSY), ++ 100, jiffies_to_usecs(IPC_TIMEOUT)); ++ if (err) ++ return err; + +- return -ETIMEDOUT; ++ return (status & IPC_STATUS_ERR) ? -EIO : 0; + } + + /* Wait till ipc ioc interrupt is received or timeout in 10 HZ */ +@@ -252,10 +249,12 @@ static inline int ipc_wait_for_interrupt(struct intel_scu_ipc_dev *scu) + { + int status; + +- if (!wait_for_completion_timeout(&scu->cmd_complete, IPC_TIMEOUT)) +- return -ETIMEDOUT; ++ wait_for_completion_timeout(&scu->cmd_complete, IPC_TIMEOUT); + + status = ipc_read_status(scu); ++ if (status & IPC_STATUS_BUSY) ++ return -ETIMEDOUT; ++ + if (status & IPC_STATUS_ERR) + return -EIO; + +@@ -267,6 +266,24 @@ static int intel_scu_ipc_check_status(struct intel_scu_ipc_dev *scu) + return scu->irq > 0 ? ipc_wait_for_interrupt(scu) : busy_loop(scu); + } + ++static struct intel_scu_ipc_dev *intel_scu_ipc_get(struct intel_scu_ipc_dev *scu) ++{ ++ u8 status; ++ ++ if (!scu) ++ scu = ipcdev; ++ if (!scu) ++ return ERR_PTR(-ENODEV); ++ ++ status = ipc_read_status(scu); ++ if (status & IPC_STATUS_BUSY) { ++ dev_dbg(&scu->dev, "device is busy\n"); ++ return ERR_PTR(-EBUSY); ++ } ++ ++ return scu; ++} ++ + /* Read/Write power control(PMIC in Langwell, MSIC in PenWell) registers */ + static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data, + u32 count, u32 op, u32 id) +@@ -280,11 +297,10 @@ static int pwr_reg_rdwr(struct intel_scu_ipc_dev *scu, u16 *addr, u8 *data, + memset(cbuf, 0, sizeof(cbuf)); + + mutex_lock(&ipclock); +- if (!scu) +- scu = ipcdev; +- if (!scu) { ++ scu = intel_scu_ipc_get(scu); ++ if (IS_ERR(scu)) { + mutex_unlock(&ipclock); +- return -ENODEV; ++ return PTR_ERR(scu); + } + + for (nc = 0; nc < count; nc++, offset += 2) { +@@ -439,13 +455,12 @@ int intel_scu_ipc_dev_simple_command(struct intel_scu_ipc_dev *scu, int cmd, 
+ int err; + + mutex_lock(&ipclock); +- if (!scu) +- scu = ipcdev; +- if (!scu) { ++ scu = intel_scu_ipc_get(scu); ++ if (IS_ERR(scu)) { + mutex_unlock(&ipclock); +- return -ENODEV; ++ return PTR_ERR(scu); + } +- scu = ipcdev; ++ + cmdval = sub << 12 | cmd; + ipc_command(scu, cmdval); + err = intel_scu_ipc_check_status(scu); +@@ -485,11 +500,10 @@ int intel_scu_ipc_dev_command_with_size(struct intel_scu_ipc_dev *scu, int cmd, + return -EINVAL; + + mutex_lock(&ipclock); +- if (!scu) +- scu = ipcdev; +- if (!scu) { ++ scu = intel_scu_ipc_get(scu); ++ if (IS_ERR(scu)) { + mutex_unlock(&ipclock); +- return -ENODEV; ++ return PTR_ERR(scu); + } + + memcpy(inbuf, in, inlen); +diff --git a/drivers/power/supply/ab8500_btemp.c b/drivers/power/supply/ab8500_btemp.c +index 6f83e99d2eb72..ce36d6ca34226 100644 +--- a/drivers/power/supply/ab8500_btemp.c ++++ b/drivers/power/supply/ab8500_btemp.c +@@ -115,7 +115,6 @@ struct ab8500_btemp { + static enum power_supply_property ab8500_btemp_props[] = { + POWER_SUPPLY_PROP_PRESENT, + POWER_SUPPLY_PROP_ONLINE, +- POWER_SUPPLY_PROP_TECHNOLOGY, + POWER_SUPPLY_PROP_TEMP, + }; + +@@ -532,12 +531,6 @@ static int ab8500_btemp_get_property(struct power_supply *psy, + else + val->intval = 1; + break; +- case POWER_SUPPLY_PROP_TECHNOLOGY: +- if (di->bm->bi) +- val->intval = di->bm->bi->technology; +- else +- val->intval = POWER_SUPPLY_TECHNOLOGY_UNKNOWN; +- break; + case POWER_SUPPLY_PROP_TEMP: + val->intval = ab8500_btemp_get_temp(di); + break; +@@ -662,7 +655,7 @@ static char *supply_interface[] = { + + static const struct power_supply_desc ab8500_btemp_desc = { + .name = "ab8500_btemp", +- .type = POWER_SUPPLY_TYPE_BATTERY, ++ .type = POWER_SUPPLY_TYPE_UNKNOWN, + .properties = ab8500_btemp_props, + .num_properties = ARRAY_SIZE(ab8500_btemp_props), + .get_property = ab8500_btemp_get_property, +diff --git a/drivers/power/supply/ab8500_chargalg.c b/drivers/power/supply/ab8500_chargalg.c +index ea4ad61d4c7e2..2205ea0834a61 100644 +--- 
a/drivers/power/supply/ab8500_chargalg.c ++++ b/drivers/power/supply/ab8500_chargalg.c +@@ -1720,7 +1720,7 @@ static char *supply_interface[] = { + + static const struct power_supply_desc ab8500_chargalg_desc = { + .name = "ab8500_chargalg", +- .type = POWER_SUPPLY_TYPE_BATTERY, ++ .type = POWER_SUPPLY_TYPE_UNKNOWN, + .properties = ab8500_chargalg_props, + .num_properties = ARRAY_SIZE(ab8500_chargalg_props), + .get_property = ab8500_chargalg_get_property, +diff --git a/drivers/power/supply/mt6370-charger.c b/drivers/power/supply/mt6370-charger.c +index f27dae5043f5b..a9641bd3d8cf8 100644 +--- a/drivers/power/supply/mt6370-charger.c ++++ b/drivers/power/supply/mt6370-charger.c +@@ -324,7 +324,7 @@ static int mt6370_chg_toggle_cfo(struct mt6370_priv *priv) + + if (fl_strobe) { + dev_err(priv->dev, "Flash led is still in strobe mode\n"); +- return ret; ++ return -EINVAL; + } + + /* cfo off */ +diff --git a/drivers/power/supply/rk817_charger.c b/drivers/power/supply/rk817_charger.c +index f1b431aa0e4f2..c04b96edcf595 100644 +--- a/drivers/power/supply/rk817_charger.c ++++ b/drivers/power/supply/rk817_charger.c +@@ -1058,6 +1058,13 @@ static void rk817_charging_monitor(struct work_struct *work) + queue_delayed_work(system_wq, &charger->work, msecs_to_jiffies(8000)); + } + ++static void rk817_cleanup_node(void *data) ++{ ++ struct device_node *node = data; ++ ++ of_node_put(node); ++} ++ + static int rk817_charger_probe(struct platform_device *pdev) + { + struct rk808 *rk808 = dev_get_drvdata(pdev->dev.parent); +@@ -1074,11 +1081,13 @@ static int rk817_charger_probe(struct platform_device *pdev) + if (!node) + return -ENODEV; + ++ ret = devm_add_action_or_reset(&pdev->dev, rk817_cleanup_node, node); ++ if (ret) ++ return ret; ++ + charger = devm_kzalloc(&pdev->dev, sizeof(*charger), GFP_KERNEL); +- if (!charger) { +- of_node_put(node); ++ if (!charger) + return -ENOMEM; +- } + + charger->rk808 = rk808; + +@@ -1224,3 +1233,4 @@ MODULE_DESCRIPTION("Battery power supply 
driver for RK817 PMIC"); + MODULE_AUTHOR("Maya Matuszczyk "); + MODULE_AUTHOR("Chris Morgan "); + MODULE_LICENSE("GPL"); ++MODULE_ALIAS("platform:rk817-charger"); +diff --git a/drivers/power/supply/ucs1002_power.c b/drivers/power/supply/ucs1002_power.c +index ef673ec3db568..332cb50d9fb4f 100644 +--- a/drivers/power/supply/ucs1002_power.c ++++ b/drivers/power/supply/ucs1002_power.c +@@ -384,7 +384,8 @@ static int ucs1002_get_property(struct power_supply *psy, + case POWER_SUPPLY_PROP_USB_TYPE: + return ucs1002_get_usb_type(info, val); + case POWER_SUPPLY_PROP_HEALTH: +- return val->intval = info->health; ++ val->intval = info->health; ++ return 0; + case POWER_SUPPLY_PROP_PRESENT: + val->intval = info->present; + return 0; +diff --git a/drivers/s390/crypto/pkey_api.c b/drivers/s390/crypto/pkey_api.c +index 2b92ec20ed68e..df0f19e6d9235 100644 +--- a/drivers/s390/crypto/pkey_api.c ++++ b/drivers/s390/crypto/pkey_api.c +@@ -212,7 +212,8 @@ static int pkey_clr2ep11key(const u8 *clrkey, size_t clrkeylen, + card = apqns[i] >> 16; + dom = apqns[i] & 0xFFFF; + rc = ep11_clr2keyblob(card, dom, clrkeylen * 8, +- 0, clrkey, keybuf, keybuflen); ++ 0, clrkey, keybuf, keybuflen, ++ PKEY_TYPE_EP11); + if (rc == 0) + break; + } +@@ -627,6 +628,11 @@ static int pkey_clr2seckey2(const struct pkey_apqn *apqns, size_t nr_apqns, + if (*keybufsize < MINEP11AESKEYBLOBSIZE) + return -EINVAL; + break; ++ case PKEY_TYPE_EP11_AES: ++ if (*keybufsize < (sizeof(struct ep11kblob_header) + ++ MINEP11AESKEYBLOBSIZE)) ++ return -EINVAL; ++ break; + default: + return -EINVAL; + } +@@ -645,9 +651,11 @@ static int pkey_clr2seckey2(const struct pkey_apqn *apqns, size_t nr_apqns, + for (i = 0, rc = -ENODEV; i < nr_apqns; i++) { + card = apqns[i].card; + dom = apqns[i].domain; +- if (ktype == PKEY_TYPE_EP11) { ++ if (ktype == PKEY_TYPE_EP11 || ++ ktype == PKEY_TYPE_EP11_AES) { + rc = ep11_clr2keyblob(card, dom, ksize, kflags, +- clrkey, keybuf, keybufsize); ++ clrkey, keybuf, keybufsize, ++ ktype); + } 
else if (ktype == PKEY_TYPE_CCA_DATA) { + rc = cca_clr2seckey(card, dom, ksize, + clrkey, keybuf); +@@ -1361,7 +1369,7 @@ static long pkey_unlocked_ioctl(struct file *filp, unsigned int cmd, + apqns = _copy_apqns_from_user(kcs.apqns, kcs.apqn_entries); + if (IS_ERR(apqns)) + return PTR_ERR(apqns); +- kkey = kmalloc(klen, GFP_KERNEL); ++ kkey = kzalloc(klen, GFP_KERNEL); + if (!kkey) { + kfree(apqns); + return -ENOMEM; +diff --git a/drivers/s390/crypto/zcrypt_ep11misc.c b/drivers/s390/crypto/zcrypt_ep11misc.c +index 20bbeec1a1a22..77e1ffaafaea1 100644 +--- a/drivers/s390/crypto/zcrypt_ep11misc.c ++++ b/drivers/s390/crypto/zcrypt_ep11misc.c +@@ -1000,12 +1000,12 @@ out: + return rc; + } + +-static int ep11_unwrapkey(u16 card, u16 domain, +- const u8 *kek, size_t keksize, +- const u8 *enckey, size_t enckeysize, +- u32 mech, const u8 *iv, +- u32 keybitsize, u32 keygenflags, +- u8 *keybuf, size_t *keybufsize) ++static int _ep11_unwrapkey(u16 card, u16 domain, ++ const u8 *kek, size_t keksize, ++ const u8 *enckey, size_t enckeysize, ++ u32 mech, const u8 *iv, ++ u32 keybitsize, u32 keygenflags, ++ u8 *keybuf, size_t *keybufsize) + { + struct uw_req_pl { + struct pl_head head; +@@ -1042,7 +1042,6 @@ static int ep11_unwrapkey(u16 card, u16 domain, + struct ep11_cprb *req = NULL, *rep = NULL; + struct ep11_target_dev target; + struct ep11_urb *urb = NULL; +- struct ep11keyblob *kb; + size_t req_pl_size; + int api, rc = -ENOMEM; + u8 *p; +@@ -1124,14 +1123,9 @@ static int ep11_unwrapkey(u16 card, u16 domain, + goto out; + } + +- /* copy key blob and set header values */ ++ /* copy key blob */ + memcpy(keybuf, rep_pl->data, rep_pl->data_len); + *keybufsize = rep_pl->data_len; +- kb = (struct ep11keyblob *)keybuf; +- kb->head.type = TOKTYPE_NON_CCA; +- kb->head.len = rep_pl->data_len; +- kb->head.version = TOKVER_EP11_AES; +- kb->head.bitlen = keybitsize; + + out: + kfree(req); +@@ -1140,6 +1134,42 @@ out: + return rc; + } + ++static int ep11_unwrapkey(u16 card, u16 domain, ++ 
const u8 *kek, size_t keksize, ++ const u8 *enckey, size_t enckeysize, ++ u32 mech, const u8 *iv, ++ u32 keybitsize, u32 keygenflags, ++ u8 *keybuf, size_t *keybufsize, ++ u8 keybufver) ++{ ++ struct ep11kblob_header *hdr; ++ size_t hdr_size, pl_size; ++ u8 *pl; ++ int rc; ++ ++ rc = ep11_kb_split(keybuf, *keybufsize, keybufver, ++ &hdr, &hdr_size, &pl, &pl_size); ++ if (rc) ++ return rc; ++ ++ rc = _ep11_unwrapkey(card, domain, kek, keksize, enckey, enckeysize, ++ mech, iv, keybitsize, keygenflags, ++ pl, &pl_size); ++ if (rc) ++ return rc; ++ ++ *keybufsize = hdr_size + pl_size; ++ ++ /* update header information */ ++ hdr = (struct ep11kblob_header *)keybuf; ++ hdr->type = TOKTYPE_NON_CCA; ++ hdr->len = *keybufsize; ++ hdr->version = keybufver; ++ hdr->bitlen = keybitsize; ++ ++ return 0; ++} ++ + static int ep11_wrapkey(u16 card, u16 domain, + const u8 *key, size_t keysize, + u32 mech, const u8 *iv, +@@ -1274,7 +1304,8 @@ out: + } + + int ep11_clr2keyblob(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, +- const u8 *clrkey, u8 *keybuf, size_t *keybufsize) ++ const u8 *clrkey, u8 *keybuf, size_t *keybufsize, ++ u32 keytype) + { + int rc; + u8 encbuf[64], *kek = NULL; +@@ -1321,7 +1352,7 @@ int ep11_clr2keyblob(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, + /* Step 3: import the encrypted key value as a new key */ + rc = ep11_unwrapkey(card, domain, kek, keklen, + encbuf, encbuflen, 0, def_iv, +- keybitsize, 0, keybuf, keybufsize); ++ keybitsize, 0, keybuf, keybufsize, keytype); + if (rc) { + DEBUG_ERR( + "%s importing key value as new key failed,, rc=%d\n", +diff --git a/drivers/s390/crypto/zcrypt_ep11misc.h b/drivers/s390/crypto/zcrypt_ep11misc.h +index ed328c354bade..b7f9cbe3d58de 100644 +--- a/drivers/s390/crypto/zcrypt_ep11misc.h ++++ b/drivers/s390/crypto/zcrypt_ep11misc.h +@@ -113,7 +113,8 @@ int ep11_genaeskey(u16 card, u16 domain, u32 keybitsize, u32 keygenflags, + * Generate EP11 AES secure key with given clear key value. 
+ */ + int ep11_clr2keyblob(u16 cardnr, u16 domain, u32 keybitsize, u32 keygenflags, +- const u8 *clrkey, u8 *keybuf, size_t *keybufsize); ++ const u8 *clrkey, u8 *keybuf, size_t *keybufsize, ++ u32 keytype); + + /* + * Build a list of ep11 apqns meeting the following constrains: +diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c +index 8009eab3b7bee..56ade46309707 100644 +--- a/drivers/scsi/iscsi_tcp.c ++++ b/drivers/scsi/iscsi_tcp.c +@@ -724,6 +724,10 @@ iscsi_sw_tcp_conn_bind(struct iscsi_cls_session *cls_session, + return -EEXIST; + } + ++ err = -EINVAL; ++ if (!sk_is_tcp(sock->sk)) ++ goto free_socket; ++ + err = iscsi_conn_bind(cls_session, cls_conn, is_leading); + if (err) + goto free_socket; +diff --git a/drivers/scsi/pm8001/pm8001_hwi.c b/drivers/scsi/pm8001/pm8001_hwi.c +index 628b08ba6770b..e2c52c2d00b33 100644 +--- a/drivers/scsi/pm8001/pm8001_hwi.c ++++ b/drivers/scsi/pm8001/pm8001_hwi.c +@@ -4313,7 +4313,7 @@ pm8001_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id) + payload.sas_identify.dev_type = SAS_END_DEVICE; + payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL; + memcpy(payload.sas_identify.sas_addr, +- pm8001_ha->sas_addr, SAS_ADDR_SIZE); ++ &pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE); + payload.sas_identify.phy_id = phy_id; + + return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload, +diff --git a/drivers/scsi/pm8001/pm80xx_hwi.c b/drivers/scsi/pm8001/pm80xx_hwi.c +index f8b8624458f73..2bf293e8f7472 100644 +--- a/drivers/scsi/pm8001/pm80xx_hwi.c ++++ b/drivers/scsi/pm8001/pm80xx_hwi.c +@@ -3750,10 +3750,12 @@ static int mpi_set_controller_config_resp(struct pm8001_hba_info *pm8001_ha, + (struct set_ctrl_cfg_resp *)(piomb + 4); + u32 status = le32_to_cpu(pPayload->status); + u32 err_qlfr_pgcd = le32_to_cpu(pPayload->err_qlfr_pgcd); ++ u32 tag = le32_to_cpu(pPayload->tag); + + pm8001_dbg(pm8001_ha, MSG, + "SET CONTROLLER RESP: status 0x%x qlfr_pgcd 0x%x\n", + status, err_qlfr_pgcd); ++ 
pm8001_tag_free(pm8001_ha, tag); + + return 0; + } +@@ -4803,7 +4805,7 @@ pm80xx_chip_phy_start_req(struct pm8001_hba_info *pm8001_ha, u8 phy_id) + payload.sas_identify.dev_type = SAS_END_DEVICE; + payload.sas_identify.initiator_bits = SAS_PROTOCOL_ALL; + memcpy(payload.sas_identify.sas_addr, +- &pm8001_ha->sas_addr, SAS_ADDR_SIZE); ++ &pm8001_ha->phy[phy_id].dev_sas_addr, SAS_ADDR_SIZE); + payload.sas_identify.phy_id = phy_id; + + return pm8001_mpi_build_cmd(pm8001_ha, 0, opcode, &payload, +diff --git a/drivers/scsi/qedf/qedf_io.c b/drivers/scsi/qedf/qedf_io.c +index 4750ec5789a80..10fe3383855c0 100644 +--- a/drivers/scsi/qedf/qedf_io.c ++++ b/drivers/scsi/qedf/qedf_io.c +@@ -1904,6 +1904,7 @@ int qedf_initiate_abts(struct qedf_ioreq *io_req, bool return_scsi_cmd_on_abts) + goto drop_rdata_kref; + } + ++ spin_lock_irqsave(&fcport->rport_lock, flags); + if (!test_bit(QEDF_CMD_OUTSTANDING, &io_req->flags) || + test_bit(QEDF_CMD_IN_CLEANUP, &io_req->flags) || + test_bit(QEDF_CMD_IN_ABORT, &io_req->flags)) { +@@ -1911,17 +1912,20 @@ int qedf_initiate_abts(struct qedf_ioreq *io_req, bool return_scsi_cmd_on_abts) + "io_req xid=0x%x sc_cmd=%p already in cleanup or abort processing or already completed.\n", + io_req->xid, io_req->sc_cmd); + rc = 1; ++ spin_unlock_irqrestore(&fcport->rport_lock, flags); + goto drop_rdata_kref; + } + ++ /* Set the command type to abort */ ++ io_req->cmd_type = QEDF_ABTS; ++ spin_unlock_irqrestore(&fcport->rport_lock, flags); ++ + kref_get(&io_req->refcount); + + xid = io_req->xid; + qedf->control_requests++; + qedf->packet_aborts++; + +- /* Set the command type to abort */ +- io_req->cmd_type = QEDF_ABTS; + io_req->return_scsi_cmd_on_abts = return_scsi_cmd_on_abts; + + set_bit(QEDF_CMD_IN_ABORT, &io_req->flags); +@@ -2210,7 +2214,9 @@ process_els: + refcount, fcport, fcport->rdata->ids.port_id); + + /* Cleanup cmds re-use the same TID as the original I/O */ ++ spin_lock_irqsave(&fcport->rport_lock, flags); + io_req->cmd_type = QEDF_CLEANUP; 
++ spin_unlock_irqrestore(&fcport->rport_lock, flags); + io_req->return_scsi_cmd_on_abts = return_scsi_cmd_on_abts; + + init_completion(&io_req->cleanup_done); +diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c +index c4f293d39f228..d969b0dc97326 100644 +--- a/drivers/scsi/qedf/qedf_main.c ++++ b/drivers/scsi/qedf/qedf_main.c +@@ -2807,6 +2807,8 @@ void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe) + struct qedf_ioreq *io_req; + struct qedf_rport *fcport; + u32 comp_type; ++ u8 io_comp_type; ++ unsigned long flags; + + comp_type = (cqe->cqe_data >> FCOE_CQE_CQE_TYPE_SHIFT) & + FCOE_CQE_CQE_TYPE_MASK; +@@ -2840,11 +2842,14 @@ void qedf_process_cqe(struct qedf_ctx *qedf, struct fcoe_cqe *cqe) + return; + } + ++ spin_lock_irqsave(&fcport->rport_lock, flags); ++ io_comp_type = io_req->cmd_type; ++ spin_unlock_irqrestore(&fcport->rport_lock, flags); + + switch (comp_type) { + case FCOE_GOOD_COMPLETION_CQE_TYPE: + atomic_inc(&fcport->free_sqes); +- switch (io_req->cmd_type) { ++ switch (io_comp_type) { + case QEDF_SCSI_CMD: + qedf_scsi_completion(qedf, cqe, io_req); + break; +diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h +index 7d282906598f3..1713588f671f3 100644 +--- a/drivers/scsi/qla2xxx/qla_def.h ++++ b/drivers/scsi/qla2xxx/qla_def.h +@@ -3475,6 +3475,7 @@ struct qla_msix_entry { + int have_irq; + int in_use; + uint32_t vector; ++ uint32_t vector_base0; + uint16_t entry; + char name[30]; + void *handle; +@@ -3804,6 +3805,7 @@ struct qla_qpair { + uint64_t retry_term_jiff; + struct qla_tgt_counters tgt_counters; + uint16_t cpuid; ++ bool cpu_mapped; + struct qla_fw_resources fwres ____cacheline_aligned; + u32 cmd_cnt; + u32 cmd_completion_cnt; +@@ -4133,6 +4135,7 @@ struct qla_hw_data { + struct req_que **req_q_map; + struct rsp_que **rsp_q_map; + struct qla_qpair **queue_pair_map; ++ struct qla_qpair **qp_cpu_map; + unsigned long req_qid_map[(QLA_MAX_QUEUES / 8) / sizeof(unsigned long)]; + 
unsigned long rsp_qid_map[(QLA_MAX_QUEUES / 8) / sizeof(unsigned long)]; + unsigned long qpair_qid_map[(QLA_MAX_QUEUES / 8) +diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c +index 36abdb0de1694..884ed77259f85 100644 +--- a/drivers/scsi/qla2xxx/qla_init.c ++++ b/drivers/scsi/qla2xxx/qla_init.c +@@ -9758,8 +9758,9 @@ struct qla_qpair *qla2xxx_create_qpair(struct scsi_qla_host *vha, int qos, + qpair->req = ha->req_q_map[req_id]; + qpair->rsp->req = qpair->req; + qpair->rsp->qpair = qpair; +- /* init qpair to this cpu. Will adjust at run time. */ +- qla_cpu_update(qpair, raw_smp_processor_id()); ++ ++ if (!qpair->cpu_mapped) ++ qla_cpu_update(qpair, raw_smp_processor_id()); + + if (IS_T10_PI_CAPABLE(ha) && ql2xenabledif) { + if (ha->fw_attributes & BIT_4) +diff --git a/drivers/scsi/qla2xxx/qla_inline.h b/drivers/scsi/qla2xxx/qla_inline.h +index a7b5d11146827..a4a56ab0ba747 100644 +--- a/drivers/scsi/qla2xxx/qla_inline.h ++++ b/drivers/scsi/qla2xxx/qla_inline.h +@@ -573,3 +573,61 @@ fcport_is_bigger(fc_port_t *fcport) + { + return !fcport_is_smaller(fcport); + } ++ ++static inline struct qla_qpair * ++qla_mapq_nvme_select_qpair(struct qla_hw_data *ha, struct qla_qpair *qpair) ++{ ++ int cpuid = raw_smp_processor_id(); ++ ++ if (qpair->cpuid != cpuid && ++ ha->qp_cpu_map[cpuid]) { ++ qpair = ha->qp_cpu_map[cpuid]; ++ } ++ return qpair; ++} ++ ++static inline void ++qla_mapq_init_qp_cpu_map(struct qla_hw_data *ha, ++ struct qla_msix_entry *msix, ++ struct qla_qpair *qpair) ++{ ++ const struct cpumask *mask; ++ unsigned int cpu; ++ ++ if (!ha->qp_cpu_map) ++ return; ++ mask = pci_irq_get_affinity(ha->pdev, msix->vector_base0); ++ if (!mask) ++ return; ++ qpair->cpuid = cpumask_first(mask); ++ for_each_cpu(cpu, mask) { ++ ha->qp_cpu_map[cpu] = qpair; ++ } ++ msix->cpuid = qpair->cpuid; ++ qpair->cpu_mapped = true; ++} ++ ++static inline void ++qla_mapq_free_qp_cpu_map(struct qla_hw_data *ha) ++{ ++ if (ha->qp_cpu_map) { ++ 
kfree(ha->qp_cpu_map); ++ ha->qp_cpu_map = NULL; ++ } ++} ++ ++static inline int qla_mapq_alloc_qp_cpu_map(struct qla_hw_data *ha) ++{ ++ scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev); ++ ++ if (!ha->qp_cpu_map) { ++ ha->qp_cpu_map = kcalloc(NR_CPUS, sizeof(struct qla_qpair *), ++ GFP_KERNEL); ++ if (!ha->qp_cpu_map) { ++ ql_log(ql_log_fatal, vha, 0x0180, ++ "Unable to allocate memory for qp_cpu_map ptrs.\n"); ++ return -1; ++ } ++ } ++ return 0; ++} +diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c +index 0111249cc8774..db65dbab3a9fa 100644 +--- a/drivers/scsi/qla2xxx/qla_isr.c ++++ b/drivers/scsi/qla2xxx/qla_isr.c +@@ -3817,9 +3817,11 @@ void qla24xx_process_response_queue(struct scsi_qla_host *vha, + if (!ha->flags.fw_started) + return; + +- if (rsp->qpair->cpuid != smp_processor_id() || !rsp->qpair->rcv_intr) { ++ if (rsp->qpair->cpuid != raw_smp_processor_id() || !rsp->qpair->rcv_intr) { + rsp->qpair->rcv_intr = 1; +- qla_cpu_update(rsp->qpair, smp_processor_id()); ++ ++ if (!rsp->qpair->cpu_mapped) ++ qla_cpu_update(rsp->qpair, raw_smp_processor_id()); + } + + #define __update_rsp_in(_is_shadow_hba, _rsp, _rsp_in) \ +@@ -4306,7 +4308,7 @@ qla2xxx_msix_rsp_q(int irq, void *dev_id) + } + ha = qpair->hw; + +- queue_work_on(smp_processor_id(), ha->wq, &qpair->q_work); ++ queue_work(ha->wq, &qpair->q_work); + + return IRQ_HANDLED; + } +@@ -4332,7 +4334,7 @@ qla2xxx_msix_rsp_q_hs(int irq, void *dev_id) + wrt_reg_dword(®->hccr, HCCRX_CLR_RISC_INT); + spin_unlock_irqrestore(&ha->hardware_lock, flags); + +- queue_work_on(smp_processor_id(), ha->wq, &qpair->q_work); ++ queue_work(ha->wq, &qpair->q_work); + + return IRQ_HANDLED; + } +@@ -4425,6 +4427,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp) + for (i = 0; i < ha->msix_count; i++) { + qentry = &ha->msix_entries[i]; + qentry->vector = pci_irq_vector(ha->pdev, i); ++ qentry->vector_base0 = i; + qentry->entry = i; + qentry->have_irq = 0; + qentry->in_use = 0; +@@ 
-4652,5 +4655,6 @@ int qla25xx_request_irq(struct qla_hw_data *ha, struct qla_qpair *qpair, + } + msix->have_irq = 1; + msix->handle = qpair; ++ qla_mapq_init_qp_cpu_map(ha, msix, qpair); + return ret; + } +diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c +index c9a6fc882a801..9941b38eac93c 100644 +--- a/drivers/scsi/qla2xxx/qla_nvme.c ++++ b/drivers/scsi/qla2xxx/qla_nvme.c +@@ -609,6 +609,7 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport, + fc_port_t *fcport; + struct srb_iocb *nvme; + struct scsi_qla_host *vha; ++ struct qla_hw_data *ha; + int rval; + srb_t *sp; + struct qla_qpair *qpair = hw_queue_handle; +@@ -629,6 +630,7 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport, + return -ENODEV; + + vha = fcport->vha; ++ ha = vha->hw; + + if (test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags)) + return -EBUSY; +@@ -643,6 +645,8 @@ static int qla_nvme_post_cmd(struct nvme_fc_local_port *lport, + if (fcport->nvme_flag & NVME_FLAG_RESETTING) + return -EBUSY; + ++ qpair = qla_mapq_nvme_select_qpair(ha, qpair); ++ + /* Alloc SRB structure */ + sp = qla2xxx_get_qpair_sp(vha, qpair, fcport, GFP_ATOMIC); + if (!sp) +diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c +index 78f7cd16967fa..b33ffec1cb75e 100644 +--- a/drivers/scsi/qla2xxx/qla_os.c ++++ b/drivers/scsi/qla2xxx/qla_os.c +@@ -480,6 +480,11 @@ static int qla2x00_alloc_queues(struct qla_hw_data *ha, struct req_que *req, + "Unable to allocate memory for queue pair ptrs.\n"); + goto fail_qpair_map; + } ++ if (qla_mapq_alloc_qp_cpu_map(ha) != 0) { ++ kfree(ha->queue_pair_map); ++ ha->queue_pair_map = NULL; ++ goto fail_qpair_map; ++ } + } + + /* +@@ -554,6 +559,7 @@ static void qla2x00_free_queues(struct qla_hw_data *ha) + ha->base_qpair = NULL; + } + ++ qla_mapq_free_qp_cpu_map(ha); + spin_lock_irqsave(&ha->hardware_lock, flags); + for (cnt = 0; cnt < ha->max_req_queues; cnt++) { + if (!test_bit(cnt, ha->req_qid_map)) +diff --git 
a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c +index 545473a0ffc84..5a5beb41786ed 100644 +--- a/drivers/scsi/qla2xxx/qla_target.c ++++ b/drivers/scsi/qla2xxx/qla_target.c +@@ -4442,8 +4442,7 @@ static int qlt_handle_cmd_for_atio(struct scsi_qla_host *vha, + queue_work_on(cmd->se_cmd.cpuid, qla_tgt_wq, &cmd->work); + } else if (ha->msix_count) { + if (cmd->atio.u.isp24.fcp_cmnd.rddata) +- queue_work_on(smp_processor_id(), qla_tgt_wq, +- &cmd->work); ++ queue_work(qla_tgt_wq, &cmd->work); + else + queue_work_on(cmd->se_cmd.cpuid, qla_tgt_wq, + &cmd->work); +diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c +index 8fa0056b56ddb..e54ee6770e79f 100644 +--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c ++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c +@@ -310,7 +310,7 @@ static void tcm_qla2xxx_free_cmd(struct qla_tgt_cmd *cmd) + cmd->trc_flags |= TRC_CMD_DONE; + + INIT_WORK(&cmd->work, tcm_qla2xxx_complete_free); +- queue_work_on(smp_processor_id(), tcm_qla2xxx_free_wq, &cmd->work); ++ queue_work(tcm_qla2xxx_free_wq, &cmd->work); + } + + /* +@@ -557,7 +557,7 @@ static void tcm_qla2xxx_handle_data(struct qla_tgt_cmd *cmd) + cmd->trc_flags |= TRC_DATA_IN; + cmd->cmd_in_wq = 1; + INIT_WORK(&cmd->work, tcm_qla2xxx_handle_data_work); +- queue_work_on(smp_processor_id(), tcm_qla2xxx_free_wq, &cmd->work); ++ queue_work(tcm_qla2xxx_free_wq, &cmd->work); + } + + static int tcm_qla2xxx_chk_dif_tags(uint32_t tag) +diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c +index 32ed9dc88e455..08197b03955dd 100644 +--- a/drivers/soc/imx/soc-imx8m.c ++++ b/drivers/soc/imx/soc-imx8m.c +@@ -100,6 +100,7 @@ static void __init imx8mm_soc_uid(void) + { + void __iomem *ocotp_base; + struct device_node *np; ++ struct clk *clk; + u32 offset = of_machine_is_compatible("fsl,imx8mp") ? 
+ IMX8MP_OCOTP_UID_OFFSET : 0; + +@@ -109,11 +110,20 @@ static void __init imx8mm_soc_uid(void) + + ocotp_base = of_iomap(np, 0); + WARN_ON(!ocotp_base); ++ clk = of_clk_get_by_name(np, NULL); ++ if (IS_ERR(clk)) { ++ WARN_ON(IS_ERR(clk)); ++ return; ++ } ++ ++ clk_prepare_enable(clk); + + soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset); + soc_uid <<= 32; + soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset); + ++ clk_disable_unprepare(clk); ++ clk_put(clk); + iounmap(ocotp_base); + of_node_put(np); + } +diff --git a/drivers/spi/spi-gxp.c b/drivers/spi/spi-gxp.c +index c900c2f39b578..21b07e2518513 100644 +--- a/drivers/spi/spi-gxp.c ++++ b/drivers/spi/spi-gxp.c +@@ -195,7 +195,7 @@ static ssize_t gxp_spi_write(struct gxp_spi_chip *chip, const struct spi_mem_op + return ret; + } + +- return write_len; ++ return 0; + } + + static int do_gxp_exec_mem_op(struct spi_mem *mem, const struct spi_mem_op *op) +diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c +index f0d532ea40e82..b718a74fa3edc 100644 +--- a/drivers/spi/spi-intel-pci.c ++++ b/drivers/spi/spi-intel-pci.c +@@ -72,6 +72,7 @@ static const struct pci_device_id intel_spi_pci_ids[] = { + { PCI_VDEVICE(INTEL, 0x4da4), (unsigned long)&bxt_info }, + { PCI_VDEVICE(INTEL, 0x51a4), (unsigned long)&cnl_info }, + { PCI_VDEVICE(INTEL, 0x54a4), (unsigned long)&cnl_info }, ++ { PCI_VDEVICE(INTEL, 0x5794), (unsigned long)&cnl_info }, + { PCI_VDEVICE(INTEL, 0x7a24), (unsigned long)&cnl_info }, + { PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info }, + { PCI_VDEVICE(INTEL, 0x7e23), (unsigned long)&cnl_info }, +diff --git a/drivers/spi/spi-nxp-fspi.c b/drivers/spi/spi-nxp-fspi.c +index d6a65a989ef80..c7a4a3606547e 100644 +--- a/drivers/spi/spi-nxp-fspi.c ++++ b/drivers/spi/spi-nxp-fspi.c +@@ -1029,6 +1029,13 @@ static int nxp_fspi_default_setup(struct nxp_fspi *f) + fspi_writel(f, FSPI_AHBCR_PREF_EN | FSPI_AHBCR_RDADDROPT, + base + FSPI_AHBCR); + ++ /* Reset the FLSHxCR1 registers. 
*/ ++ reg = FSPI_FLSHXCR1_TCSH(0x3) | FSPI_FLSHXCR1_TCSS(0x3); ++ fspi_writel(f, reg, base + FSPI_FLSHA1CR1); ++ fspi_writel(f, reg, base + FSPI_FLSHA2CR1); ++ fspi_writel(f, reg, base + FSPI_FLSHB1CR1); ++ fspi_writel(f, reg, base + FSPI_FLSHB2CR1); ++ + /* AHB Read - Set lut sequence ID for all CS. */ + fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA1CR2); + fspi_writel(f, SEQID_LUT, base + FSPI_FLSHA2CR2); +diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c +index def09cf0dc147..12241815510d4 100644 +--- a/drivers/spi/spi-stm32.c ++++ b/drivers/spi/spi-stm32.c +@@ -268,6 +268,7 @@ struct stm32_spi_cfg { + * @fifo_size: size of the embedded fifo in bytes + * @cur_midi: master inter-data idleness in ns + * @cur_speed: speed configured in Hz ++ * @cur_half_period: time of a half bit in us + * @cur_bpw: number of bits in a single SPI data frame + * @cur_fthlv: fifo threshold level (data frames in a single data packet) + * @cur_comm: SPI communication mode +@@ -294,6 +295,7 @@ struct stm32_spi { + + unsigned int cur_midi; + unsigned int cur_speed; ++ unsigned int cur_half_period; + unsigned int cur_bpw; + unsigned int cur_fthlv; + unsigned int cur_comm; +@@ -454,6 +456,8 @@ static int stm32_spi_prepare_mbr(struct stm32_spi *spi, u32 speed_hz, + + spi->cur_speed = spi->clk_rate / (1 << mbrdiv); + ++ spi->cur_half_period = DIV_ROUND_CLOSEST(USEC_PER_SEC, 2 * spi->cur_speed); ++ + return mbrdiv - 1; + } + +@@ -695,6 +699,10 @@ static void stm32h7_spi_disable(struct stm32_spi *spi) + return; + } + ++ /* Add a delay to make sure that transmission is ended. 
*/ ++ if (spi->cur_half_period) ++ udelay(spi->cur_half_period); ++ + if (spi->cur_usedma && spi->dma_tx) + dmaengine_terminate_all(spi->dma_tx); + if (spi->cur_usedma && spi->dma_rx) +diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c +index 23ad052528dbe..d79853ba7792a 100644 +--- a/drivers/spi/spi-sun6i.c ++++ b/drivers/spi/spi-sun6i.c +@@ -95,6 +95,7 @@ struct sun6i_spi { + struct reset_control *rstc; + + struct completion done; ++ struct completion dma_rx_done; + + const u8 *tx_buf; + u8 *rx_buf; +@@ -189,6 +190,13 @@ static size_t sun6i_spi_max_transfer_size(struct spi_device *spi) + return SUN6I_MAX_XFER_SIZE - 1; + } + ++static void sun6i_spi_dma_rx_cb(void *param) ++{ ++ struct sun6i_spi *sspi = param; ++ ++ complete(&sspi->dma_rx_done); ++} ++ + static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi, + struct spi_transfer *tfr) + { +@@ -200,7 +208,7 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi, + struct dma_slave_config rxconf = { + .direction = DMA_DEV_TO_MEM, + .src_addr = sspi->dma_addr_rx, +- .src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES, ++ .src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE, + .src_maxburst = 8, + }; + +@@ -213,6 +221,8 @@ static int sun6i_spi_prepare_dma(struct sun6i_spi *sspi, + DMA_PREP_INTERRUPT); + if (!rxdesc) + return -EINVAL; ++ rxdesc->callback_param = sspi; ++ rxdesc->callback = sun6i_spi_dma_rx_cb; + } + + txdesc = NULL; +@@ -268,6 +278,7 @@ static int sun6i_spi_transfer_one(struct spi_master *master, + return -EINVAL; + + reinit_completion(&sspi->done); ++ reinit_completion(&sspi->dma_rx_done); + sspi->tx_buf = tfr->tx_buf; + sspi->rx_buf = tfr->rx_buf; + sspi->len = tfr->len; +@@ -426,6 +437,22 @@ static int sun6i_spi_transfer_one(struct spi_master *master, + start = jiffies; + timeout = wait_for_completion_timeout(&sspi->done, + msecs_to_jiffies(tx_time)); ++ ++ if (!use_dma) { ++ sun6i_spi_drain_fifo(sspi); ++ } else { ++ if (timeout && rx_len) { ++ /* ++ * Even though RX on the peripheral side has 
finished ++ * RX DMA might still be in flight ++ */ ++ timeout = wait_for_completion_timeout(&sspi->dma_rx_done, ++ timeout); ++ if (!timeout) ++ dev_warn(&master->dev, "RX DMA timeout\n"); ++ } ++ } ++ + end = jiffies; + if (!timeout) { + dev_warn(&master->dev, +@@ -453,7 +480,6 @@ static irqreturn_t sun6i_spi_handler(int irq, void *dev_id) + /* Transfer complete */ + if (status & SUN6I_INT_CTL_TC) { + sun6i_spi_write(sspi, SUN6I_INT_STA_REG, SUN6I_INT_CTL_TC); +- sun6i_spi_drain_fifo(sspi); + complete(&sspi->done); + return IRQ_HANDLED; + } +@@ -611,6 +637,7 @@ static int sun6i_spi_probe(struct platform_device *pdev) + } + + init_completion(&sspi->done); ++ init_completion(&sspi->dma_rx_done); + + sspi->rstc = devm_reset_control_get_exclusive(&pdev->dev, NULL); + if (IS_ERR(sspi->rstc)) { +diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c +index 762d1990180bf..4104743dbc17e 100644 +--- a/drivers/thermal/thermal_of.c ++++ b/drivers/thermal/thermal_of.c +@@ -149,8 +149,10 @@ static int of_find_trip_id(struct device_node *np, struct device_node *trip) + */ + for_each_child_of_node(trips, t) { + +- if (t == trip) ++ if (t == trip) { ++ of_node_put(t); + goto out; ++ } + i++; + } + +@@ -519,8 +521,10 @@ static int thermal_of_for_each_cooling_maps(struct thermal_zone_device *tz, + + for_each_child_of_node(cm_np, child) { + ret = thermal_of_for_each_cooling_device(tz_np, child, tz, cdev, action); +- if (ret) ++ if (ret) { ++ of_node_put(child); + break; ++ } + } + + of_node_put(cm_np); +diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c +index c1fa20a4e3420..4b43589304704 100644 +--- a/drivers/tty/n_gsm.c ++++ b/drivers/tty/n_gsm.c +@@ -2509,10 +2509,8 @@ static void gsm_cleanup_mux(struct gsm_mux *gsm, bool disc) + gsm->has_devices = false; + } + for (i = NUM_DLCI - 1; i >= 0; i--) +- if (gsm->dlci[i]) { ++ if (gsm->dlci[i]) + gsm_dlci_release(gsm->dlci[i]); +- gsm->dlci[i] = NULL; +- } + mutex_unlock(&gsm->mutex); + /* Now wipe the queues */ 
+ tty_ldisc_flush(gsm->tty); +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index 38760bd6e0c29..8efe31448df3c 100644 +--- a/drivers/tty/serial/8250/8250_port.c ++++ b/drivers/tty/serial/8250/8250_port.c +@@ -1953,7 +1953,10 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) + skip_rx = true; + + if (status & (UART_LSR_DR | UART_LSR_BI) && !skip_rx) { +- if (irqd_is_wakeup_set(irq_get_irq_data(port->irq))) ++ struct irq_data *d; ++ ++ d = irq_get_irq_data(port->irq); ++ if (d && irqd_is_wakeup_set(d)) + pm_wakeup_event(tport->tty->dev, 0); + if (!up->dma || handle_rx_dma(up, iir)) + status = serial8250_rx_chars(up, status); +diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c +index 36437d39b93c8..b4e3f14b9a3d7 100644 +--- a/drivers/ufs/core/ufshcd.c ++++ b/drivers/ufs/core/ufshcd.c +@@ -22,6 +22,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -2254,7 +2255,11 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba) + */ + static inline bool ufshcd_ready_for_uic_cmd(struct ufs_hba *hba) + { +- return ufshcd_readl(hba, REG_CONTROLLER_STATUS) & UIC_COMMAND_READY; ++ u32 val; ++ int ret = read_poll_timeout(ufshcd_readl, val, val & UIC_COMMAND_READY, ++ 500, UIC_CMD_TIMEOUT * 1000, false, hba, ++ REG_CONTROLLER_STATUS); ++ return ret == 0 ? 
true : false; + } + + /** +@@ -2346,7 +2351,6 @@ __ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd, + bool completion) + { + lockdep_assert_held(&hba->uic_cmd_mutex); +- lockdep_assert_held(hba->host->host_lock); + + if (!ufshcd_ready_for_uic_cmd(hba)) { + dev_err(hba->dev, +@@ -2373,7 +2377,6 @@ __ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd, + int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd) + { + int ret; +- unsigned long flags; + + if (hba->quirks & UFSHCD_QUIRK_BROKEN_UIC_CMD) + return 0; +@@ -2382,9 +2385,7 @@ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd) + mutex_lock(&hba->uic_cmd_mutex); + ufshcd_add_delay_before_dme_cmd(hba); + +- spin_lock_irqsave(hba->host->host_lock, flags); + ret = __ufshcd_send_uic_cmd(hba, uic_cmd, true); +- spin_unlock_irqrestore(hba->host->host_lock, flags); + if (!ret) + ret = ufshcd_wait_for_uic_cmd(hba, uic_cmd); + +@@ -4076,8 +4077,8 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd) + wmb(); + reenable_intr = true; + } +- ret = __ufshcd_send_uic_cmd(hba, cmd, false); + spin_unlock_irqrestore(hba->host->host_lock, flags); ++ ret = __ufshcd_send_uic_cmd(hba, cmd, false); + if (ret) { + dev_err(hba->dev, + "pwr ctrl cmd 0x%x with mode 0x%x uic error %d\n", +diff --git a/drivers/vfio/mdev/mdev_sysfs.c b/drivers/vfio/mdev/mdev_sysfs.c +index abe3359dd477f..16b007c6bbb56 100644 +--- a/drivers/vfio/mdev/mdev_sysfs.c ++++ b/drivers/vfio/mdev/mdev_sysfs.c +@@ -233,7 +233,8 @@ int parent_create_sysfs_files(struct mdev_parent *parent) + out_err: + while (--i >= 0) + mdev_type_remove(parent->types[i]); +- return 0; ++ kset_unregister(parent->mdev_types_kset); ++ return ret; + } + + static ssize_t remove_store(struct device *dev, struct device_attribute *attr, +diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig +index 974e862cd20d6..ff95f19224901 100644 +--- a/drivers/video/fbdev/Kconfig ++++ 
b/drivers/video/fbdev/Kconfig +@@ -2015,7 +2015,7 @@ config FB_COBALT + + config FB_SH7760 + bool "SH7760/SH7763/SH7720/SH7721 LCDC support" +- depends on FB && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \ ++ depends on FB=y && (CPU_SUBTYPE_SH7760 || CPU_SUBTYPE_SH7763 \ + || CPU_SUBTYPE_SH7720 || CPU_SUBTYPE_SH7721) + select FB_CFB_FILLRECT + select FB_CFB_COPYAREA +diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c +index 069f12cc7634c..2aecd4ffb13b3 100644 +--- a/fs/binfmt_elf_fdpic.c ++++ b/fs/binfmt_elf_fdpic.c +@@ -345,10 +345,9 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm) + /* there's now no turning back... the old userspace image is dead, + * defunct, deceased, etc. + */ ++ SET_PERSONALITY(exec_params.hdr); + if (elf_check_fdpic(&exec_params.hdr)) +- set_personality(PER_LINUX_FDPIC); +- else +- set_personality(PER_LINUX); ++ current->personality |= PER_LINUX_FDPIC; + if (elf_read_implies_exec(&exec_params.hdr, executable_stack)) + current->personality |= READ_IMPLIES_EXEC; + +diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c +index d2cbb7733c7d6..1331e56e8e84f 100644 +--- a/fs/btrfs/delayed-inode.c ++++ b/fs/btrfs/delayed-inode.c +@@ -407,6 +407,7 @@ static void finish_one_item(struct btrfs_delayed_root *delayed_root) + + static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item) + { ++ struct btrfs_delayed_node *delayed_node = delayed_item->delayed_node; + struct rb_root_cached *root; + struct btrfs_delayed_root *delayed_root; + +@@ -414,18 +415,21 @@ static void __btrfs_remove_delayed_item(struct btrfs_delayed_item *delayed_item) + if (RB_EMPTY_NODE(&delayed_item->rb_node)) + return; + +- delayed_root = delayed_item->delayed_node->root->fs_info->delayed_root; ++ /* If it's in a rbtree, then we need to have delayed node locked. 
*/ ++ lockdep_assert_held(&delayed_node->mutex); ++ ++ delayed_root = delayed_node->root->fs_info->delayed_root; + + BUG_ON(!delayed_root); + + if (delayed_item->type == BTRFS_DELAYED_INSERTION_ITEM) +- root = &delayed_item->delayed_node->ins_root; ++ root = &delayed_node->ins_root; + else +- root = &delayed_item->delayed_node->del_root; ++ root = &delayed_node->del_root; + + rb_erase_cached(&delayed_item->rb_node, root); + RB_CLEAR_NODE(&delayed_item->rb_node); +- delayed_item->delayed_node->count--; ++ delayed_node->count--; + + finish_one_item(delayed_root); + } +@@ -1421,7 +1425,29 @@ void btrfs_balance_delayed_items(struct btrfs_fs_info *fs_info) + btrfs_wq_run_delayed_node(delayed_root, fs_info, BTRFS_DELAYED_BATCH); + } + +-/* Will return 0 or -ENOMEM */ ++static void btrfs_release_dir_index_item_space(struct btrfs_trans_handle *trans) ++{ ++ struct btrfs_fs_info *fs_info = trans->fs_info; ++ const u64 bytes = btrfs_calc_insert_metadata_size(fs_info, 1); ++ ++ if (test_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags)) ++ return; ++ ++ /* ++ * Adding the new dir index item does not require touching another ++ * leaf, so we can release 1 unit of metadata that was previously ++ * reserved when starting the transaction. This applies only to ++ * the case where we had a transaction start and excludes the ++ * transaction join case (when replaying log trees). ++ */ ++ trace_btrfs_space_reservation(fs_info, "transaction", ++ trans->transid, bytes, 0); ++ btrfs_block_rsv_release(fs_info, trans->block_rsv, bytes, NULL); ++ ASSERT(trans->bytes_reserved >= bytes); ++ trans->bytes_reserved -= bytes; ++} ++ ++/* Will return 0, -ENOMEM or -EEXIST (index number collision, unexpected). 
*/ + int btrfs_insert_delayed_dir_index(struct btrfs_trans_handle *trans, + const char *name, int name_len, + struct btrfs_inode *dir, +@@ -1463,6 +1489,27 @@ int btrfs_insert_delayed_dir_index(struct btrfs_trans_handle *trans, + + mutex_lock(&delayed_node->mutex); + ++ /* ++ * First attempt to insert the delayed item. This is to make the error ++ * handling path simpler in case we fail (-EEXIST). There's no risk of ++ * any other task coming in and running the delayed item before we do ++ * the metadata space reservation below, because we are holding the ++ * delayed node's mutex and that mutex must also be locked before the ++ * node's delayed items can be run. ++ */ ++ ret = __btrfs_add_delayed_item(delayed_node, delayed_item); ++ if (unlikely(ret)) { ++ btrfs_err(trans->fs_info, ++"error adding delayed dir index item, name: %.*s, index: %llu, root: %llu, dir: %llu, dir->index_cnt: %llu, delayed_node->index_cnt: %llu, error: %d", ++ name_len, name, index, btrfs_root_id(delayed_node->root), ++ delayed_node->inode_id, dir->index_cnt, ++ delayed_node->index_cnt, ret); ++ btrfs_release_delayed_item(delayed_item); ++ btrfs_release_dir_index_item_space(trans); ++ mutex_unlock(&delayed_node->mutex); ++ goto release_node; ++ } ++ + if (delayed_node->index_item_leaves == 0 || + delayed_node->curr_index_batch_size + data_len > leaf_data_size) { + delayed_node->curr_index_batch_size = data_len; +@@ -1480,36 +1527,14 @@ int btrfs_insert_delayed_dir_index(struct btrfs_trans_handle *trans, + * impossible. 
+ */ + if (WARN_ON(ret)) { +- mutex_unlock(&delayed_node->mutex); + btrfs_release_delayed_item(delayed_item); ++ mutex_unlock(&delayed_node->mutex); + goto release_node; + } + + delayed_node->index_item_leaves++; +- } else if (!test_bit(BTRFS_FS_LOG_RECOVERING, &fs_info->flags)) { +- const u64 bytes = btrfs_calc_insert_metadata_size(fs_info, 1); +- +- /* +- * Adding the new dir index item does not require touching another +- * leaf, so we can release 1 unit of metadata that was previously +- * reserved when starting the transaction. This applies only to +- * the case where we had a transaction start and excludes the +- * transaction join case (when replaying log trees). +- */ +- trace_btrfs_space_reservation(fs_info, "transaction", +- trans->transid, bytes, 0); +- btrfs_block_rsv_release(fs_info, trans->block_rsv, bytes, NULL); +- ASSERT(trans->bytes_reserved >= bytes); +- trans->bytes_reserved -= bytes; +- } +- +- ret = __btrfs_add_delayed_item(delayed_node, delayed_item); +- if (unlikely(ret)) { +- btrfs_err(trans->fs_info, +- "err add delayed dir index item(name: %.*s) into the insertion tree of the delayed node(root id: %llu, inode id: %llu, errno: %d)", +- name_len, name, delayed_node->root->root_key.objectid, +- delayed_node->inode_id, ret); +- BUG(); ++ } else { ++ btrfs_release_dir_index_item_space(trans); + } + mutex_unlock(&delayed_node->mutex); + +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 0ad69041954ff..afcc96a1f4276 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -5184,8 +5184,14 @@ void read_extent_buffer(const struct extent_buffer *eb, void *dstv, + char *dst = (char *)dstv; + unsigned long i = get_eb_page_index(start); + +- if (check_eb_range(eb, start, len)) ++ if (check_eb_range(eb, start, len)) { ++ /* ++ * Invalid range hit, reset the memory, so callers won't get ++ * some random garbage for their uninitialzed memory. 
++ */ ++ memset(dstv, 0, len); + return; ++ } + + offset = get_eb_offset_in_page(eb, start); + +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index 6438300fa2461..582b71b7fa779 100644 +--- a/fs/btrfs/super.c ++++ b/fs/btrfs/super.c +@@ -2418,7 +2418,7 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf) + * calculated f_bavail. + */ + if (!mixed && block_rsv->space_info->full && +- total_free_meta - thresh < block_rsv->size) ++ (total_free_meta < thresh || total_free_meta - thresh < block_rsv->size)) + buf->f_bavail = 0; + + buf->f_type = BTRFS_SUPER_MAGIC; +diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c +index 4a9ad5ff726d4..36052a3626830 100644 +--- a/fs/ceph/caps.c ++++ b/fs/ceph/caps.c +@@ -4100,6 +4100,9 @@ void ceph_handle_caps(struct ceph_mds_session *session, + + dout("handle_caps from mds%d\n", session->s_mds); + ++ if (!ceph_inc_mds_stopping_blocker(mdsc, session)) ++ return; ++ + /* decode */ + end = msg->front.iov_base + msg->front.iov_len; + if (msg->front.iov_len < sizeof(*h)) +@@ -4196,7 +4199,6 @@ void ceph_handle_caps(struct ceph_mds_session *session, + vino.snap, inode); + + mutex_lock(&session->s_mutex); +- inc_session_sequence(session); + dout(" mds%d seq %lld cap seq %u\n", session->s_mds, session->s_seq, + (unsigned)seq); + +@@ -4299,6 +4301,8 @@ done: + done_unlocked: + iput(inode); + out: ++ ceph_dec_mds_stopping_blocker(mdsc); ++ + ceph_put_string(extra_info.pool_ns); + + /* Defer closing the sessions after s_mutex lock being released */ +diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c +index 5399a9ea5b4f1..f6a7fd47efd7a 100644 +--- a/fs/ceph/mds_client.c ++++ b/fs/ceph/mds_client.c +@@ -4546,6 +4546,9 @@ static void handle_lease(struct ceph_mds_client *mdsc, + + dout("handle_lease from mds%d\n", mds); + ++ if (!ceph_inc_mds_stopping_blocker(mdsc, session)) ++ return; ++ + /* decode */ + if (msg->front.iov_len < sizeof(*h) + sizeof(u32)) + goto bad; +@@ -4564,8 +4567,6 @@ static void handle_lease(struct 
ceph_mds_client *mdsc, + dname.len, dname.name); + + mutex_lock(&session->s_mutex); +- inc_session_sequence(session); +- + if (!inode) { + dout("handle_lease no inode %llx\n", vino.ino); + goto release; +@@ -4627,9 +4628,13 @@ release: + out: + mutex_unlock(&session->s_mutex); + iput(inode); ++ ++ ceph_dec_mds_stopping_blocker(mdsc); + return; + + bad: ++ ceph_dec_mds_stopping_blocker(mdsc); ++ + pr_err("corrupt lease message\n"); + ceph_msg_dump(msg); + } +@@ -4825,6 +4830,9 @@ int ceph_mdsc_init(struct ceph_fs_client *fsc) + } + + init_completion(&mdsc->safe_umount_waiters); ++ spin_lock_init(&mdsc->stopping_lock); ++ atomic_set(&mdsc->stopping_blockers, 0); ++ init_completion(&mdsc->stopping_waiter); + init_waitqueue_head(&mdsc->session_close_wq); + INIT_LIST_HEAD(&mdsc->waiting_for_map); + mdsc->quotarealms_inodes = RB_ROOT; +diff --git a/fs/ceph/mds_client.h b/fs/ceph/mds_client.h +index 9a80658f41679..0913959ccfa64 100644 +--- a/fs/ceph/mds_client.h ++++ b/fs/ceph/mds_client.h +@@ -381,8 +381,9 @@ struct cap_wait { + }; + + enum { +- CEPH_MDSC_STOPPING_BEGIN = 1, +- CEPH_MDSC_STOPPING_FLUSHED = 2, ++ CEPH_MDSC_STOPPING_BEGIN = 1, ++ CEPH_MDSC_STOPPING_FLUSHING = 2, ++ CEPH_MDSC_STOPPING_FLUSHED = 3, + }; + + /* +@@ -401,7 +402,11 @@ struct ceph_mds_client { + struct ceph_mds_session **sessions; /* NULL for mds if no session */ + atomic_t num_sessions; + int max_sessions; /* len of sessions array */ +- int stopping; /* true if shutting down */ ++ ++ spinlock_t stopping_lock; /* protect snap_empty */ ++ int stopping; /* the stage of shutting down */ ++ atomic_t stopping_blockers; ++ struct completion stopping_waiter; + + atomic64_t quotarealms_count; /* # realms with quota */ + /* +diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c +index 64592adfe48fb..f7fcf7f08ec64 100644 +--- a/fs/ceph/quota.c ++++ b/fs/ceph/quota.c +@@ -47,25 +47,23 @@ void ceph_handle_quota(struct ceph_mds_client *mdsc, + struct inode *inode; + struct ceph_inode_info *ci; + ++ if 
(!ceph_inc_mds_stopping_blocker(mdsc, session)) ++ return; ++ + if (msg->front.iov_len < sizeof(*h)) { + pr_err("%s corrupt message mds%d len %d\n", __func__, + session->s_mds, (int)msg->front.iov_len); + ceph_msg_dump(msg); +- return; ++ goto out; + } + +- /* increment msg sequence number */ +- mutex_lock(&session->s_mutex); +- inc_session_sequence(session); +- mutex_unlock(&session->s_mutex); +- + /* lookup inode */ + vino.ino = le64_to_cpu(h->ino); + vino.snap = CEPH_NOSNAP; + inode = ceph_find_inode(sb, vino); + if (!inode) { + pr_warn("Failed to find inode %llu\n", vino.ino); +- return; ++ goto out; + } + ci = ceph_inode(inode); + +@@ -78,6 +76,8 @@ void ceph_handle_quota(struct ceph_mds_client *mdsc, + spin_unlock(&ci->i_ceph_lock); + + iput(inode); ++out: ++ ceph_dec_mds_stopping_blocker(mdsc); + } + + static struct ceph_quotarealm_inode * +diff --git a/fs/ceph/snap.c b/fs/ceph/snap.c +index 2e73ba62bd7aa..82f7592e1747b 100644 +--- a/fs/ceph/snap.c ++++ b/fs/ceph/snap.c +@@ -1012,6 +1012,9 @@ void ceph_handle_snap(struct ceph_mds_client *mdsc, + int locked_rwsem = 0; + bool close_sessions = false; + ++ if (!ceph_inc_mds_stopping_blocker(mdsc, session)) ++ return; ++ + /* decode */ + if (msg->front.iov_len < sizeof(*h)) + goto bad; +@@ -1027,10 +1030,6 @@ void ceph_handle_snap(struct ceph_mds_client *mdsc, + dout("%s from mds%d op %s split %llx tracelen %d\n", __func__, + mds, ceph_snap_op_name(op), split, trace_len); + +- mutex_lock(&session->s_mutex); +- inc_session_sequence(session); +- mutex_unlock(&session->s_mutex); +- + down_write(&mdsc->snap_rwsem); + locked_rwsem = 1; + +@@ -1148,6 +1147,7 @@ skip_inode: + up_write(&mdsc->snap_rwsem); + + flush_snaps(mdsc); ++ ceph_dec_mds_stopping_blocker(mdsc); + return; + + bad: +@@ -1157,6 +1157,8 @@ out: + if (locked_rwsem) + up_write(&mdsc->snap_rwsem); + ++ ceph_dec_mds_stopping_blocker(mdsc); ++ + if (close_sessions) + ceph_mdsc_close_sessions(mdsc); + return; +diff --git a/fs/ceph/super.c b/fs/ceph/super.c 
+index a5f52013314d6..281b493fdac8e 100644 +--- a/fs/ceph/super.c ++++ b/fs/ceph/super.c +@@ -1365,25 +1365,90 @@ nomem: + return -ENOMEM; + } + ++/* ++ * Return true if it successfully increases the blocker counter, ++ * or false if the mdsc is in stopping and flushed state. ++ */ ++static bool __inc_stopping_blocker(struct ceph_mds_client *mdsc) ++{ ++ spin_lock(&mdsc->stopping_lock); ++ if (mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHING) { ++ spin_unlock(&mdsc->stopping_lock); ++ return false; ++ } ++ atomic_inc(&mdsc->stopping_blockers); ++ spin_unlock(&mdsc->stopping_lock); ++ return true; ++} ++ ++static void __dec_stopping_blocker(struct ceph_mds_client *mdsc) ++{ ++ spin_lock(&mdsc->stopping_lock); ++ if (!atomic_dec_return(&mdsc->stopping_blockers) && ++ mdsc->stopping >= CEPH_MDSC_STOPPING_FLUSHING) ++ complete_all(&mdsc->stopping_waiter); ++ spin_unlock(&mdsc->stopping_lock); ++} ++ ++/* For metadata IO requests */ ++bool ceph_inc_mds_stopping_blocker(struct ceph_mds_client *mdsc, ++ struct ceph_mds_session *session) ++{ ++ mutex_lock(&session->s_mutex); ++ inc_session_sequence(session); ++ mutex_unlock(&session->s_mutex); ++ ++ return __inc_stopping_blocker(mdsc); ++} ++ ++void ceph_dec_mds_stopping_blocker(struct ceph_mds_client *mdsc) ++{ ++ __dec_stopping_blocker(mdsc); ++} ++ + static void ceph_kill_sb(struct super_block *s) + { + struct ceph_fs_client *fsc = ceph_sb_to_client(s); ++ struct ceph_mds_client *mdsc = fsc->mdsc; ++ bool wait; + + dout("kill_sb %p\n", s); + +- ceph_mdsc_pre_umount(fsc->mdsc); ++ ceph_mdsc_pre_umount(mdsc); + flush_fs_workqueues(fsc); + + /* + * Though the kill_anon_super() will finally trigger the +- * sync_filesystem() anyway, we still need to do it here +- * and then bump the stage of shutdown to stop the work +- * queue as earlier as possible. ++ * sync_filesystem() anyway, we still need to do it here and ++ * then bump the stage of shutdown. 
This will allow us to ++ * drop any further message, which will increase the inodes' ++ * i_count reference counters but makes no sense any more, ++ * from MDSs. ++ * ++ * Without this when evicting the inodes it may fail in the ++ * kill_anon_super(), which will trigger a warning when ++ * destroying the fscrypt keyring and then possibly trigger ++ * a further crash in ceph module when the iput() tries to ++ * evict the inodes later. + */ + sync_filesystem(s); + +- fsc->mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED; ++ spin_lock(&mdsc->stopping_lock); ++ mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHING; ++ wait = !!atomic_read(&mdsc->stopping_blockers); ++ spin_unlock(&mdsc->stopping_lock); ++ ++ if (wait && atomic_read(&mdsc->stopping_blockers)) { ++ long timeleft = wait_for_completion_killable_timeout( ++ &mdsc->stopping_waiter, ++ fsc->client->options->mount_timeout); ++ if (!timeleft) /* timed out */ ++ pr_warn("umount timed out, %ld\n", timeleft); ++ else if (timeleft < 0) /* killed */ ++ pr_warn("umount was killed, %ld\n", timeleft); ++ } + ++ mdsc->stopping = CEPH_MDSC_STOPPING_FLUSHED; + kill_anon_super(s); + + fsc->client->extra_mon_dispatch = NULL; +diff --git a/fs/ceph/super.h b/fs/ceph/super.h +index 562f42f4a77d7..7ca74f5f70be5 100644 +--- a/fs/ceph/super.h ++++ b/fs/ceph/super.h +@@ -1374,4 +1374,7 @@ extern bool ceph_quota_update_statfs(struct ceph_fs_client *fsc, + struct kstatfs *buf); + extern void ceph_cleanup_quotarealms_inodes(struct ceph_mds_client *mdsc); + ++bool ceph_inc_mds_stopping_blocker(struct ceph_mds_client *mdsc, ++ struct ceph_mds_session *session); ++void ceph_dec_mds_stopping_blocker(struct ceph_mds_client *mdsc); + #endif /* _FS_CEPH_SUPER_H */ +diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c +index 016925b1a0908..3c8300e08f412 100644 +--- a/fs/ext4/mballoc.c ++++ b/fs/ext4/mballoc.c +@@ -16,6 +16,7 @@ + #include + #include + #include ++#include + #include + + /* +@@ -6420,6 +6421,21 @@ __acquires(bitlock) + return ret; + } + 
++static ext4_grpblk_t ext4_last_grp_cluster(struct super_block *sb, ++ ext4_group_t grp) ++{ ++ if (grp < ext4_get_groups_count(sb)) ++ return EXT4_CLUSTERS_PER_GROUP(sb) - 1; ++ return (ext4_blocks_count(EXT4_SB(sb)->s_es) - ++ ext4_group_first_block_no(sb, grp) - 1) >> ++ EXT4_CLUSTER_BITS(sb); ++} ++ ++static bool ext4_trim_interrupted(void) ++{ ++ return fatal_signal_pending(current) || freezing(current); ++} ++ + static int ext4_try_to_trim_range(struct super_block *sb, + struct ext4_buddy *e4b, ext4_grpblk_t start, + ext4_grpblk_t max, ext4_grpblk_t minblocks) +@@ -6427,11 +6443,13 @@ __acquires(ext4_group_lock_ptr(sb, e4b->bd_group)) + __releases(ext4_group_lock_ptr(sb, e4b->bd_group)) + { + ext4_grpblk_t next, count, free_count; ++ bool set_trimmed = false; + void *bitmap; + + bitmap = e4b->bd_bitmap; +- start = (e4b->bd_info->bb_first_free > start) ? +- e4b->bd_info->bb_first_free : start; ++ if (start == 0 && max >= ext4_last_grp_cluster(sb, e4b->bd_group)) ++ set_trimmed = true; ++ start = max(e4b->bd_info->bb_first_free, start); + count = 0; + free_count = 0; + +@@ -6445,16 +6463,14 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group)) + int ret = ext4_trim_extent(sb, start, next - start, e4b); + + if (ret && ret != -EOPNOTSUPP) +- break; ++ return count; + count += next - start; + } + free_count += next - start; + start = next + 1; + +- if (fatal_signal_pending(current)) { +- count = -ERESTARTSYS; +- break; +- } ++ if (ext4_trim_interrupted()) ++ return count; + + if (need_resched()) { + ext4_unlock_group(sb, e4b->bd_group); +@@ -6466,6 +6482,9 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group)) + break; + } + ++ if (set_trimmed) ++ EXT4_MB_GRP_SET_TRIMMED(e4b->bd_info); ++ + return count; + } + +@@ -6476,7 +6495,6 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group)) + * @start: first group block to examine + * @max: last group block to examine + * @minblocks: minimum extent block count +- * @set_trimmed: set the trimmed flag if at least one block 
is trimmed + * + * ext4_trim_all_free walks through group's block bitmap searching for free + * extents. When the free extent is found, mark it as used in group buddy +@@ -6486,7 +6504,7 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group)) + static ext4_grpblk_t + ext4_trim_all_free(struct super_block *sb, ext4_group_t group, + ext4_grpblk_t start, ext4_grpblk_t max, +- ext4_grpblk_t minblocks, bool set_trimmed) ++ ext4_grpblk_t minblocks) + { + struct ext4_buddy e4b; + int ret; +@@ -6503,13 +6521,10 @@ ext4_trim_all_free(struct super_block *sb, ext4_group_t group, + ext4_lock_group(sb, group); + + if (!EXT4_MB_GRP_WAS_TRIMMED(e4b.bd_info) || +- minblocks < EXT4_SB(sb)->s_last_trim_minblks) { ++ minblocks < EXT4_SB(sb)->s_last_trim_minblks) + ret = ext4_try_to_trim_range(sb, &e4b, start, max, minblocks); +- if (ret >= 0 && set_trimmed) +- EXT4_MB_GRP_SET_TRIMMED(e4b.bd_info); +- } else { ++ else + ret = 0; +- } + + ext4_unlock_group(sb, group); + ext4_mb_unload_buddy(&e4b); +@@ -6542,7 +6557,6 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range) + ext4_fsblk_t first_data_blk = + le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block); + ext4_fsblk_t max_blks = ext4_blocks_count(EXT4_SB(sb)->s_es); +- bool whole_group, eof = false; + int ret = 0; + + start = range->start >> sb->s_blocksize_bits; +@@ -6561,10 +6575,8 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range) + if (minlen > EXT4_CLUSTERS_PER_GROUP(sb)) + goto out; + } +- if (end >= max_blks - 1) { ++ if (end >= max_blks - 1) + end = max_blks - 1; +- eof = true; +- } + if (end <= first_data_blk) + goto out; + if (start < first_data_blk) +@@ -6578,9 +6590,10 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range) + + /* end now represents the last cluster to discard in this group */ + end = EXT4_CLUSTERS_PER_GROUP(sb) - 1; +- whole_group = true; + + for (group = first_group; group <= last_group; group++) { ++ if (ext4_trim_interrupted()) ++ break; + grp = 
ext4_get_group_info(sb, group); + if (!grp) + continue; +@@ -6597,13 +6610,11 @@ int ext4_trim_fs(struct super_block *sb, struct fstrim_range *range) + * change it for the last group, note that last_cluster is + * already computed earlier by ext4_get_group_no_and_offset() + */ +- if (group == last_group) { ++ if (group == last_group) + end = last_cluster; +- whole_group = eof ? true : end == EXT4_CLUSTERS_PER_GROUP(sb) - 1; +- } + if (grp->bb_free >= minlen) { + cnt = ext4_trim_all_free(sb, group, first_cluster, +- end, minlen, whole_group); ++ end, minlen); + if (cnt < 0) { + ret = cnt; + break; +@@ -6648,8 +6659,7 @@ ext4_mballoc_query_range( + + ext4_lock_group(sb, group); + +- start = (e4b.bd_info->bb_first_free > start) ? +- e4b.bd_info->bb_first_free : start; ++ start = max(e4b.bd_info->bb_first_free, start); + if (end >= EXT4_CLUSTERS_PER_GROUP(sb)) + end = EXT4_CLUSTERS_PER_GROUP(sb) - 1; + +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index c230824ab5e6e..a982f91b71eb2 100644 +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -1212,7 +1212,8 @@ int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index) + } + + struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index, +- blk_opf_t op_flags, bool for_write) ++ blk_opf_t op_flags, bool for_write, ++ pgoff_t *next_pgofs) + { + struct address_space *mapping = inode->i_mapping; + struct dnode_of_data dn; +@@ -1238,12 +1239,17 @@ struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index, + + set_new_dnode(&dn, inode, NULL, NULL, 0); + err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE); +- if (err) ++ if (err) { ++ if (err == -ENOENT && next_pgofs) ++ *next_pgofs = f2fs_get_next_page_offset(&dn, index); + goto put_err; ++ } + f2fs_put_dnode(&dn); + + if (unlikely(dn.data_blkaddr == NULL_ADDR)) { + err = -ENOENT; ++ if (next_pgofs) ++ *next_pgofs = index + 1; + goto put_err; + } + if (dn.data_blkaddr != NEW_ADDR && +@@ -1287,7 +1293,8 @@ put_err: + return ERR_PTR(err); + } + +-struct 
page *f2fs_find_data_page(struct inode *inode, pgoff_t index) ++struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index, ++ pgoff_t *next_pgofs) + { + struct address_space *mapping = inode->i_mapping; + struct page *page; +@@ -1297,7 +1304,7 @@ struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index) + return page; + f2fs_put_page(page, 0); + +- page = f2fs_get_read_data_page(inode, index, 0, false); ++ page = f2fs_get_read_data_page(inode, index, 0, false, next_pgofs); + if (IS_ERR(page)) + return page; + +@@ -1322,18 +1329,14 @@ struct page *f2fs_get_lock_data_page(struct inode *inode, pgoff_t index, + { + struct address_space *mapping = inode->i_mapping; + struct page *page; +-repeat: +- page = f2fs_get_read_data_page(inode, index, 0, for_write); ++ ++ page = f2fs_get_read_data_page(inode, index, 0, for_write, NULL); + if (IS_ERR(page)) + return page; + + /* wait for read completion */ + lock_page(page); +- if (unlikely(page->mapping != mapping)) { +- f2fs_put_page(page, 1); +- goto repeat; +- } +- if (unlikely(!PageUptodate(page))) { ++ if (unlikely(page->mapping != mapping || !PageUptodate(page))) { + f2fs_put_page(page, 1); + return ERR_PTR(-EIO); + } +diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c +index bf5ba75b75d24..8373eba3a1337 100644 +--- a/fs/f2fs/dir.c ++++ b/fs/f2fs/dir.c +@@ -340,6 +340,7 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir, + unsigned int bidx, end_block; + struct page *dentry_page; + struct f2fs_dir_entry *de = NULL; ++ pgoff_t next_pgofs; + bool room = false; + int max_slots; + +@@ -350,12 +351,13 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir, + le32_to_cpu(fname->hash) % nbucket); + end_block = bidx + nblock; + +- for (; bidx < end_block; bidx++) { ++ while (bidx < end_block) { + /* no need to allocate new dentry pages to all the indices */ +- dentry_page = f2fs_find_data_page(dir, bidx); ++ dentry_page = f2fs_find_data_page(dir, bidx, &next_pgofs); + if (IS_ERR(dentry_page)) { 
+ if (PTR_ERR(dentry_page) == -ENOENT) { + room = true; ++ bidx = next_pgofs; + continue; + } else { + *res_page = dentry_page; +@@ -376,6 +378,8 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir, + if (max_slots >= s) + room = true; + f2fs_put_page(dentry_page, 0); ++ ++ bidx++; + } + + if (!de && room && F2FS_I(dir)->chash != fname->hash) { +@@ -963,7 +967,7 @@ void f2fs_delete_entry(struct f2fs_dir_entry *dentry, struct page *page, + + bool f2fs_empty_dir(struct inode *dir) + { +- unsigned long bidx; ++ unsigned long bidx = 0; + struct page *dentry_page; + unsigned int bit_pos; + struct f2fs_dentry_block *dentry_blk; +@@ -972,13 +976,17 @@ bool f2fs_empty_dir(struct inode *dir) + if (f2fs_has_inline_dentry(dir)) + return f2fs_empty_inline_dir(dir); + +- for (bidx = 0; bidx < nblock; bidx++) { +- dentry_page = f2fs_get_lock_data_page(dir, bidx, false); ++ while (bidx < nblock) { ++ pgoff_t next_pgofs; ++ ++ dentry_page = f2fs_find_data_page(dir, bidx, &next_pgofs); + if (IS_ERR(dentry_page)) { +- if (PTR_ERR(dentry_page) == -ENOENT) ++ if (PTR_ERR(dentry_page) == -ENOENT) { ++ bidx = next_pgofs; + continue; +- else ++ } else { + return false; ++ } + } + + dentry_blk = page_address(dentry_page); +@@ -990,10 +998,12 @@ bool f2fs_empty_dir(struct inode *dir) + NR_DENTRY_IN_BLOCK, + bit_pos); + +- f2fs_put_page(dentry_page, 1); ++ f2fs_put_page(dentry_page, 0); + + if (bit_pos < NR_DENTRY_IN_BLOCK) + return false; ++ ++ bidx++; + } + return true; + } +@@ -1111,7 +1121,8 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) + goto out_free; + } + +- for (; n < npages; n++, ctx->pos = n * NR_DENTRY_IN_BLOCK) { ++ for (; n < npages; ctx->pos = n * NR_DENTRY_IN_BLOCK) { ++ pgoff_t next_pgofs; + + /* allow readdir() to be interrupted */ + if (fatal_signal_pending(current)) { +@@ -1125,11 +1136,12 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) + page_cache_sync_readahead(inode->i_mapping, ra, file, n, + min(npages - 
n, (pgoff_t)MAX_DIR_RA_PAGES)); + +- dentry_page = f2fs_find_data_page(inode, n); ++ dentry_page = f2fs_find_data_page(inode, n, &next_pgofs); + if (IS_ERR(dentry_page)) { + err = PTR_ERR(dentry_page); + if (err == -ENOENT) { + err = 0; ++ n = next_pgofs; + continue; + } else { + goto out_free; +@@ -1148,6 +1160,8 @@ static int f2fs_readdir(struct file *file, struct dir_context *ctx) + } + + f2fs_put_page(dentry_page, 0); ++ ++ n++; + } + out_free: + fscrypt_fname_free_buffer(&fstr); +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 37dca728ff967..f56abb39601ac 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -3784,8 +3784,9 @@ int f2fs_reserve_new_block(struct dnode_of_data *dn); + int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index); + int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index); + struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index, +- blk_opf_t op_flags, bool for_write); +-struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index); ++ blk_opf_t op_flags, bool for_write, pgoff_t *next_pgofs); ++struct page *f2fs_find_data_page(struct inode *inode, pgoff_t index, ++ pgoff_t *next_pgofs); + struct page *f2fs_get_lock_data_page(struct inode *inode, pgoff_t index, + bool for_write); + struct page *f2fs_get_new_data_page(struct inode *inode, +diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c +index aa4d513daa8f8..ec7212f7a9b73 100644 +--- a/fs/f2fs/gc.c ++++ b/fs/f2fs/gc.c +@@ -1600,8 +1600,8 @@ next_step: + continue; + } + +- data_page = f2fs_get_read_data_page(inode, +- start_bidx, REQ_RAHEAD, true); ++ data_page = f2fs_get_read_data_page(inode, start_bidx, ++ REQ_RAHEAD, true, NULL); + f2fs_up_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); + if (IS_ERR(data_page)) { + iput(inode); +diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c +index 7679a68e81930..caa0a053e8a9d 100644 +--- a/fs/netfs/buffered_read.c ++++ b/fs/netfs/buffered_read.c +@@ -47,12 +47,14 @@ void netfs_rreq_unlock_folios(struct 
netfs_io_request *rreq) + xas_for_each(&xas, folio, last_page) { + loff_t pg_end; + bool pg_failed = false; ++ bool folio_started; + + if (xas_retry(&xas, folio)) + continue; + + pg_end = folio_pos(folio) + folio_size(folio) - 1; + ++ folio_started = false; + for (;;) { + loff_t sreq_end; + +@@ -60,8 +62,10 @@ void netfs_rreq_unlock_folios(struct netfs_io_request *rreq) + pg_failed = true; + break; + } +- if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) ++ if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) { + folio_start_fscache(folio); ++ folio_started = true; ++ } + pg_failed |= subreq_failed; + sreq_end = subreq->start + subreq->len - 1; + if (pg_end < sreq_end) +diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c +index 3bb530d4bb5ce..5a976fa343df1 100644 +--- a/fs/nfs/direct.c ++++ b/fs/nfs/direct.c +@@ -93,12 +93,10 @@ nfs_direct_handle_truncated(struct nfs_direct_req *dreq, + dreq->max_count = dreq_len; + if (dreq->count > dreq_len) + dreq->count = dreq_len; +- +- if (test_bit(NFS_IOHDR_ERROR, &hdr->flags)) +- dreq->error = hdr->error; +- else /* Clear outstanding error if this is EOF */ +- dreq->error = 0; + } ++ ++ if (test_bit(NFS_IOHDR_ERROR, &hdr->flags) && !dreq->error) ++ dreq->error = hdr->error; + } + + static void +@@ -120,6 +118,18 @@ nfs_direct_count_bytes(struct nfs_direct_req *dreq, + dreq->count = dreq_len; + } + ++static void nfs_direct_truncate_request(struct nfs_direct_req *dreq, ++ struct nfs_page *req) ++{ ++ loff_t offs = req_offset(req); ++ size_t req_start = (size_t)(offs - dreq->io_start); ++ ++ if (req_start < dreq->max_count) ++ dreq->max_count = req_start; ++ if (req_start < dreq->count) ++ dreq->count = req_start; ++} ++ + /** + * nfs_swap_rw - NFS address space operation for swap I/O + * @iocb: target I/O control block +@@ -490,7 +500,9 @@ static void nfs_direct_add_page_head(struct list_head *list, + kref_get(&head->wb_kref); + } + +-static void nfs_direct_join_group(struct list_head *list, struct inode 
*inode) ++static void nfs_direct_join_group(struct list_head *list, ++ struct nfs_commit_info *cinfo, ++ struct inode *inode) + { + struct nfs_page *req, *subreq; + +@@ -512,7 +524,7 @@ static void nfs_direct_join_group(struct list_head *list, struct inode *inode) + nfs_release_request(subreq); + } + } while ((subreq = subreq->wb_this_page) != req); +- nfs_join_page_group(req, inode); ++ nfs_join_page_group(req, cinfo, inode); + } + } + +@@ -530,20 +542,15 @@ nfs_direct_write_scan_commit_list(struct inode *inode, + static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq) + { + struct nfs_pageio_descriptor desc; +- struct nfs_page *req, *tmp; ++ struct nfs_page *req; + LIST_HEAD(reqs); + struct nfs_commit_info cinfo; +- LIST_HEAD(failed); + + nfs_init_cinfo_from_dreq(&cinfo, dreq); + nfs_direct_write_scan_commit_list(dreq->inode, &reqs, &cinfo); + +- nfs_direct_join_group(&reqs, dreq->inode); ++ nfs_direct_join_group(&reqs, &cinfo, dreq->inode); + +- dreq->count = 0; +- dreq->max_count = 0; +- list_for_each_entry(req, &reqs, wb_list) +- dreq->max_count += req->wb_bytes; + nfs_clear_pnfs_ds_commit_verifiers(&dreq->ds_cinfo); + get_dreq(dreq); + +@@ -551,27 +558,40 @@ static void nfs_direct_write_reschedule(struct nfs_direct_req *dreq) + &nfs_direct_write_completion_ops); + desc.pg_dreq = dreq; + +- list_for_each_entry_safe(req, tmp, &reqs, wb_list) { ++ while (!list_empty(&reqs)) { ++ req = nfs_list_entry(reqs.next); + /* Bump the transmission count */ + req->wb_nio++; + if (!nfs_pageio_add_request(&desc, req)) { +- nfs_list_move_request(req, &failed); +- spin_lock(&cinfo.inode->i_lock); +- dreq->flags = 0; +- if (desc.pg_error < 0) ++ spin_lock(&dreq->lock); ++ if (dreq->error < 0) { ++ desc.pg_error = dreq->error; ++ } else if (desc.pg_error != -EAGAIN) { ++ dreq->flags = 0; ++ if (!desc.pg_error) ++ desc.pg_error = -EIO; + dreq->error = desc.pg_error; +- else +- dreq->error = -EIO; +- spin_unlock(&cinfo.inode->i_lock); ++ } else ++ dreq->flags = 
NFS_ODIRECT_RESCHED_WRITES; ++ spin_unlock(&dreq->lock); ++ break; + } + nfs_release_request(req); + } + nfs_pageio_complete(&desc); + +- while (!list_empty(&failed)) { +- req = nfs_list_entry(failed.next); ++ while (!list_empty(&reqs)) { ++ req = nfs_list_entry(reqs.next); + nfs_list_remove_request(req); + nfs_unlock_and_release_request(req); ++ if (desc.pg_error == -EAGAIN) { ++ nfs_mark_request_commit(req, NULL, &cinfo, 0); ++ } else { ++ spin_lock(&dreq->lock); ++ nfs_direct_truncate_request(dreq, req); ++ spin_unlock(&dreq->lock); ++ nfs_release_request(req); ++ } + } + + if (put_dreq(dreq)) +@@ -591,8 +611,6 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data) + if (status < 0) { + /* Errors in commit are fatal */ + dreq->error = status; +- dreq->max_count = 0; +- dreq->count = 0; + dreq->flags = NFS_ODIRECT_DONE; + } else { + status = dreq->error; +@@ -603,7 +621,12 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data) + while (!list_empty(&data->pages)) { + req = nfs_list_entry(data->pages.next); + nfs_list_remove_request(req); +- if (status >= 0 && !nfs_write_match_verf(verf, req)) { ++ if (status < 0) { ++ spin_lock(&dreq->lock); ++ nfs_direct_truncate_request(dreq, req); ++ spin_unlock(&dreq->lock); ++ nfs_release_request(req); ++ } else if (!nfs_write_match_verf(verf, req)) { + dreq->flags = NFS_ODIRECT_RESCHED_WRITES; + /* + * Despite the reboot, the write was successful, +@@ -611,7 +634,7 @@ static void nfs_direct_commit_complete(struct nfs_commit_data *data) + */ + req->wb_nio = 0; + nfs_mark_request_commit(req, NULL, &cinfo, 0); +- } else /* Error or match */ ++ } else + nfs_release_request(req); + nfs_unlock_and_release_request(req); + } +@@ -664,6 +687,7 @@ static void nfs_direct_write_clear_reqs(struct nfs_direct_req *dreq) + while (!list_empty(&reqs)) { + req = nfs_list_entry(reqs.next); + nfs_list_remove_request(req); ++ nfs_direct_truncate_request(dreq, req); + nfs_release_request(req); + 
nfs_unlock_and_release_request(req); + } +@@ -713,7 +737,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr) + } + + nfs_direct_count_bytes(dreq, hdr); +- if (test_bit(NFS_IOHDR_UNSTABLE_WRITES, &hdr->flags)) { ++ if (test_bit(NFS_IOHDR_UNSTABLE_WRITES, &hdr->flags) && ++ !test_bit(NFS_IOHDR_ERROR, &hdr->flags)) { + if (!dreq->flags) + dreq->flags = NFS_ODIRECT_DO_COMMIT; + flags = dreq->flags; +@@ -757,18 +782,23 @@ static void nfs_write_sync_pgio_error(struct list_head *head, int error) + static void nfs_direct_write_reschedule_io(struct nfs_pgio_header *hdr) + { + struct nfs_direct_req *dreq = hdr->dreq; ++ struct nfs_page *req; ++ struct nfs_commit_info cinfo; + + trace_nfs_direct_write_reschedule_io(dreq); + ++ nfs_init_cinfo_from_dreq(&cinfo, dreq); + spin_lock(&dreq->lock); +- if (dreq->error == 0) { ++ if (dreq->error == 0) + dreq->flags = NFS_ODIRECT_RESCHED_WRITES; +- /* fake unstable write to let common nfs resend pages */ +- hdr->verf.committed = NFS_UNSTABLE; +- hdr->good_bytes = hdr->args.offset + hdr->args.count - +- hdr->io_start; +- } ++ set_bit(NFS_IOHDR_REDO, &hdr->flags); + spin_unlock(&dreq->lock); ++ while (!list_empty(&hdr->pages)) { ++ req = nfs_list_entry(hdr->pages.next); ++ nfs_list_remove_request(req); ++ nfs_unlock_request(req); ++ nfs_mark_request_commit(req, NULL, &cinfo, 0); ++ } + } + + static const struct nfs_pgio_completion_ops nfs_direct_write_completion_ops = { +@@ -796,9 +826,11 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq, + { + struct nfs_pageio_descriptor desc; + struct inode *inode = dreq->inode; ++ struct nfs_commit_info cinfo; + ssize_t result = 0; + size_t requested_bytes = 0; + size_t wsize = max_t(size_t, NFS_SERVER(inode)->wsize, PAGE_SIZE); ++ bool defer = false; + + trace_nfs_direct_write_schedule_iovec(dreq); + +@@ -839,19 +871,39 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq, + break; + } + ++ pgbase = 0; ++ bytes -= req_len; ++ 
requested_bytes += req_len; ++ pos += req_len; ++ dreq->bytes_left -= req_len; ++ ++ if (defer) { ++ nfs_mark_request_commit(req, NULL, &cinfo, 0); ++ continue; ++ } ++ + nfs_lock_request(req); + req->wb_index = pos >> PAGE_SHIFT; + req->wb_offset = pos & ~PAGE_MASK; +- if (!nfs_pageio_add_request(&desc, req)) { ++ if (nfs_pageio_add_request(&desc, req)) ++ continue; ++ ++ /* Exit on hard errors */ ++ if (desc.pg_error < 0 && desc.pg_error != -EAGAIN) { + result = desc.pg_error; + nfs_unlock_and_release_request(req); + break; + } +- pgbase = 0; +- bytes -= req_len; +- requested_bytes += req_len; +- pos += req_len; +- dreq->bytes_left -= req_len; ++ ++ /* If the error is soft, defer remaining requests */ ++ nfs_init_cinfo_from_dreq(&cinfo, dreq); ++ spin_lock(&dreq->lock); ++ dreq->flags = NFS_ODIRECT_RESCHED_WRITES; ++ spin_unlock(&dreq->lock); ++ nfs_unlock_request(req); ++ nfs_mark_request_commit(req, NULL, &cinfo, 0); ++ desc.pg_error = 0; ++ defer = true; + } + nfs_direct_release_pages(pagevec, npages); + kvfree(pagevec); +diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c +index 1ec79ccf89ad2..5c69a6e9ab3e1 100644 +--- a/fs/nfs/flexfilelayout/flexfilelayout.c ++++ b/fs/nfs/flexfilelayout/flexfilelayout.c +@@ -1235,6 +1235,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg, + case -EPFNOSUPPORT: + case -EPROTONOSUPPORT: + case -EOPNOTSUPP: ++ case -EINVAL: + case -ECONNREFUSED: + case -ECONNRESET: + case -EHOSTDOWN: +diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c +index d3051b051a564..84b345efcec00 100644 +--- a/fs/nfs/nfs4client.c ++++ b/fs/nfs/nfs4client.c +@@ -231,6 +231,8 @@ struct nfs_client *nfs4_alloc_client(const struct nfs_client_initdata *cl_init) + __set_bit(NFS_CS_DISCRTRY, &clp->cl_flags); + __set_bit(NFS_CS_NO_RETRANS_TIMEOUT, &clp->cl_flags); + ++ if (test_bit(NFS_CS_DS, &cl_init->init_flags)) ++ __set_bit(NFS_CS_DS, &clp->cl_flags); + /* + * Set up the connection to the 
server before we add add to the + * global list. +@@ -414,6 +416,8 @@ static void nfs4_add_trunk(struct nfs_client *clp, struct nfs_client *old) + .net = old->cl_net, + .servername = old->cl_hostname, + }; ++ int max_connect = test_bit(NFS_CS_PNFS, &clp->cl_flags) ? ++ clp->cl_max_connect : old->cl_max_connect; + + if (clp->cl_proto != old->cl_proto) + return; +@@ -427,7 +431,7 @@ static void nfs4_add_trunk(struct nfs_client *clp, struct nfs_client *old) + xprt_args.addrlen = clp_salen; + + rpc_clnt_add_xprt(old->cl_rpcclient, &xprt_args, +- rpc_clnt_test_and_add_xprt, NULL); ++ rpc_clnt_test_and_add_xprt, &max_connect); + } + + /** +@@ -993,6 +997,9 @@ struct nfs_client *nfs4_set_ds_client(struct nfs_server *mds_srv, + if (mds_srv->flags & NFS_MOUNT_NORESVPORT) + __set_bit(NFS_CS_NORESVPORT, &cl_init.init_flags); + ++ __set_bit(NFS_CS_DS, &cl_init.init_flags); ++ __set_bit(NFS_CS_PNFS, &cl_init.init_flags); ++ cl_init.max_connect = NFS_MAX_TRANSPORTS; + /* + * Set an authflavor equual to the MDS value. 
Use the MDS nfs_client + * cl_ipaddr so as to use the same EXCHANGE_ID co_ownerid as the MDS +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 2dec0fed1ba16..be570c65ae154 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -2708,8 +2708,12 @@ static int _nfs4_proc_open(struct nfs4_opendata *data, + return status; + } + if (!(o_res->f_attr->valid & NFS_ATTR_FATTR)) { ++ struct nfs_fh *fh = &o_res->fh; ++ + nfs4_sequence_free_slot(&o_res->seq_res); +- nfs4_proc_getattr(server, &o_res->fh, o_res->f_attr, NULL); ++ if (o_arg->claim == NFS4_OPEN_CLAIM_FH) ++ fh = NFS_FH(d_inode(data->dentry)); ++ nfs4_proc_getattr(server, fh, o_res->f_attr, NULL); + } + return 0; + } +@@ -8794,6 +8798,8 @@ nfs4_run_exchange_id(struct nfs_client *clp, const struct cred *cred, + #ifdef CONFIG_NFS_V4_1_MIGRATION + calldata->args.flags |= EXCHGID4_FLAG_SUPP_MOVED_MIGR; + #endif ++ if (test_bit(NFS_CS_DS, &clp->cl_flags)) ++ calldata->args.flags |= EXCHGID4_FLAG_USE_PNFS_DS; + msg.rpc_argp = &calldata->args; + msg.rpc_resp = &calldata->res; + task_setup_data.callback_data = calldata; +@@ -8871,6 +8877,8 @@ static int _nfs4_proc_exchange_id(struct nfs_client *clp, const struct cred *cre + /* Save the EXCHANGE_ID verifier session trunk tests */ + memcpy(clp->cl_confirm.data, argp->verifier.data, + sizeof(clp->cl_confirm.data)); ++ if (resp->flags & EXCHGID4_FLAG_USE_PNFS_DS) ++ set_bit(NFS_CS_DS, &clp->cl_flags); + out: + trace_nfs4_exchange_id(clp, status); + rpc_put_task(task); +diff --git a/fs/nfs/write.c b/fs/nfs/write.c +index f41d24b54fd1f..0a8aed0ac9945 100644 +--- a/fs/nfs/write.c ++++ b/fs/nfs/write.c +@@ -58,7 +58,8 @@ static const struct nfs_pgio_completion_ops nfs_async_write_completion_ops; + static const struct nfs_commit_completion_ops nfs_commit_completion_ops; + static const struct nfs_rw_ops nfs_rw_write_ops; + static void nfs_inode_remove_request(struct nfs_page *req); +-static void nfs_clear_request_commit(struct nfs_page *req); ++static void 
nfs_clear_request_commit(struct nfs_commit_info *cinfo, ++ struct nfs_page *req); + static void nfs_init_cinfo_from_inode(struct nfs_commit_info *cinfo, + struct inode *inode); + static struct nfs_page * +@@ -502,8 +503,8 @@ nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list, + * the (former) group. All subrequests are removed from any write or commit + * lists, unlinked from the group and destroyed. + */ +-void +-nfs_join_page_group(struct nfs_page *head, struct inode *inode) ++void nfs_join_page_group(struct nfs_page *head, struct nfs_commit_info *cinfo, ++ struct inode *inode) + { + struct nfs_page *subreq; + struct nfs_page *destroy_list = NULL; +@@ -533,7 +534,7 @@ nfs_join_page_group(struct nfs_page *head, struct inode *inode) + * Commit list removal accounting is done after locks are dropped */ + subreq = head; + do { +- nfs_clear_request_commit(subreq); ++ nfs_clear_request_commit(cinfo, subreq); + subreq = subreq->wb_this_page; + } while (subreq != head); + +@@ -567,8 +568,10 @@ nfs_lock_and_join_requests(struct page *page) + { + struct inode *inode = page_file_mapping(page)->host; + struct nfs_page *head; ++ struct nfs_commit_info cinfo; + int ret; + ++ nfs_init_cinfo_from_inode(&cinfo, inode); + /* + * A reference is taken only on the head request which acts as a + * reference to the whole page group - the group will not be destroyed +@@ -585,7 +588,7 @@ nfs_lock_and_join_requests(struct page *page) + return ERR_PTR(ret); + } + +- nfs_join_page_group(head, inode); ++ nfs_join_page_group(head, &cinfo, inode); + + return head; + } +@@ -956,18 +959,16 @@ nfs_clear_page_commit(struct page *page) + } + + /* Called holding the request lock on @req */ +-static void +-nfs_clear_request_commit(struct nfs_page *req) ++static void nfs_clear_request_commit(struct nfs_commit_info *cinfo, ++ struct nfs_page *req) + { + if (test_bit(PG_CLEAN, &req->wb_flags)) { + struct nfs_open_context *ctx = nfs_req_openctx(req); + struct inode *inode = 
d_inode(ctx->dentry); +- struct nfs_commit_info cinfo; + +- nfs_init_cinfo_from_inode(&cinfo, inode); + mutex_lock(&NFS_I(inode)->commit_mutex); +- if (!pnfs_clear_request_commit(req, &cinfo)) { +- nfs_request_remove_commit_list(req, &cinfo); ++ if (!pnfs_clear_request_commit(req, cinfo)) { ++ nfs_request_remove_commit_list(req, cinfo); + } + mutex_unlock(&NFS_I(inode)->commit_mutex); + nfs_clear_page_commit(req->wb_page); +diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c +index b0d22ff24b674..fcd13da5d0125 100644 +--- a/fs/nilfs2/gcinode.c ++++ b/fs/nilfs2/gcinode.c +@@ -73,10 +73,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff, + struct the_nilfs *nilfs = inode->i_sb->s_fs_info; + + err = nilfs_dat_translate(nilfs->ns_dat, vbn, &pbn); +- if (unlikely(err)) { /* -EIO, -ENOMEM, -ENOENT */ +- brelse(bh); ++ if (unlikely(err)) /* -EIO, -ENOMEM, -ENOENT */ + goto failed; +- } + } + + lock_buffer(bh); +@@ -102,6 +100,8 @@ int nilfs_gccache_submit_read_data(struct inode *inode, sector_t blkoff, + failed: + unlock_page(bh->b_page); + put_page(bh->b_page); ++ if (unlikely(err)) ++ brelse(bh); + return err; + } + +diff --git a/fs/proc/internal.h b/fs/proc/internal.h +index b701d0207edf0..6b921826d85b6 100644 +--- a/fs/proc/internal.h ++++ b/fs/proc/internal.h +@@ -289,9 +289,7 @@ struct proc_maps_private { + struct inode *inode; + struct task_struct *task; + struct mm_struct *mm; +-#ifdef CONFIG_MMU + struct vma_iterator iter; +-#endif + #ifdef CONFIG_NUMA + struct mempolicy *task_mempolicy; + #endif +diff --git a/fs/proc/task_nommu.c b/fs/proc/task_nommu.c +index 2fd06f52b6a44..dc05780f93e13 100644 +--- a/fs/proc/task_nommu.c ++++ b/fs/proc/task_nommu.c +@@ -188,15 +188,28 @@ static int show_map(struct seq_file *m, void *_p) + return nommu_vma_show(m, _p); + } + +-static void *m_start(struct seq_file *m, loff_t *pos) ++static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv, ++ loff_t *ppos) ++{ ++ struct vm_area_struct 
*vma = vma_next(&priv->iter); ++ ++ if (vma) { ++ *ppos = vma->vm_start; ++ } else { ++ *ppos = -1UL; ++ } ++ ++ return vma; ++} ++ ++static void *m_start(struct seq_file *m, loff_t *ppos) + { + struct proc_maps_private *priv = m->private; ++ unsigned long last_addr = *ppos; + struct mm_struct *mm; +- struct vm_area_struct *vma; +- unsigned long addr = *pos; + +- /* See m_next(). Zero at the start or after lseek. */ +- if (addr == -1UL) ++ /* See proc_get_vma(). Zero at the start or after lseek. */ ++ if (last_addr == -1UL) + return NULL; + + /* pin the task and mm whilst we play with them */ +@@ -205,44 +218,41 @@ static void *m_start(struct seq_file *m, loff_t *pos) + return ERR_PTR(-ESRCH); + + mm = priv->mm; +- if (!mm || !mmget_not_zero(mm)) ++ if (!mm || !mmget_not_zero(mm)) { ++ put_task_struct(priv->task); ++ priv->task = NULL; + return NULL; ++ } + + if (mmap_read_lock_killable(mm)) { + mmput(mm); ++ put_task_struct(priv->task); ++ priv->task = NULL; + return ERR_PTR(-EINTR); + } + +- /* start the next element from addr */ +- vma = find_vma(mm, addr); +- if (vma) +- return vma; ++ vma_iter_init(&priv->iter, mm, last_addr); + +- mmap_read_unlock(mm); +- mmput(mm); +- return NULL; ++ return proc_get_vma(priv, ppos); + } + +-static void m_stop(struct seq_file *m, void *_vml) ++static void m_stop(struct seq_file *m, void *v) + { + struct proc_maps_private *priv = m->private; ++ struct mm_struct *mm = priv->mm; + +- if (!IS_ERR_OR_NULL(_vml)) { +- mmap_read_unlock(priv->mm); +- mmput(priv->mm); +- } +- if (priv->task) { +- put_task_struct(priv->task); +- priv->task = NULL; +- } ++ if (!priv->task) ++ return; ++ ++ mmap_read_unlock(mm); ++ mmput(mm); ++ put_task_struct(priv->task); ++ priv->task = NULL; + } + +-static void *m_next(struct seq_file *m, void *_p, loff_t *pos) ++static void *m_next(struct seq_file *m, void *_p, loff_t *ppos) + { +- struct vm_area_struct *vma = _p; +- +- *pos = vma->vm_end; +- return find_vma(vma->vm_mm, vma->vm_end); ++ return 
proc_get_vma(m->private, ppos); + } + + static const struct seq_operations proc_pid_maps_ops = { +diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h +index 03f34ec63e10d..39602f39aea8f 100644 +--- a/fs/smb/client/cifsglob.h ++++ b/fs/smb/client/cifsglob.h +@@ -1776,6 +1776,7 @@ static inline bool is_retryable_error(int error) + #define MID_RETRY_NEEDED 8 /* session closed while this request out */ + #define MID_RESPONSE_MALFORMED 0x10 + #define MID_SHUTDOWN 0x20 ++#define MID_RESPONSE_READY 0x40 /* ready for other process handle the rsp */ + + /* Flags */ + #define MID_WAIT_CANCELLED 1 /* Cancelled while waiting for response */ +diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c +index e2e2ef0fa9a0f..f4818599c00a2 100644 +--- a/fs/smb/client/fs_context.c ++++ b/fs/smb/client/fs_context.c +@@ -1487,6 +1487,7 @@ static int smb3_fs_context_parse_param(struct fs_context *fc, + + cifs_parse_mount_err: + kfree_sensitive(ctx->password); ++ ctx->password = NULL; + return -EINVAL; + } + +diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c +index 92c1ed9304be7..9531ea2430899 100644 +--- a/fs/smb/client/inode.c ++++ b/fs/smb/client/inode.c +@@ -2605,7 +2605,7 @@ int cifs_fiemap(struct inode *inode, struct fiemap_extent_info *fei, u64 start, + } + + cifsFileInfo_put(cfile); +- return -ENOTSUPP; ++ return -EOPNOTSUPP; + } + + int cifs_truncate_page(struct address_space *mapping, loff_t from) +diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c +index 1387d5126f53b..efff7137412b4 100644 +--- a/fs/smb/client/smb2ops.c ++++ b/fs/smb/client/smb2ops.c +@@ -292,7 +292,7 @@ smb2_adjust_credits(struct TCP_Server_Info *server, + cifs_server_dbg(VFS, "request has less credits (%d) than required (%d)", + credits->value, new_val); + +- return -ENOTSUPP; ++ return -EOPNOTSUPP; + } + + spin_lock(&server->req_lock); +@@ -1155,7 +1155,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon, + /* Use a fudge factor of 256 bytes in case 
we collide + * with a different set_EAs command. + */ +- if(CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE - ++ if (CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE - + MAX_SMB2_CLOSE_RESPONSE_SIZE - 256 < + used_len + ea_name_len + ea_value_len + 1) { + rc = -ENOSPC; +@@ -4721,7 +4721,7 @@ handle_read_data(struct TCP_Server_Info *server, struct mid_q_entry *mid, + + if (shdr->Command != SMB2_READ) { + cifs_server_dbg(VFS, "only big read responses are supported\n"); +- return -ENOTSUPP; ++ return -EOPNOTSUPP; + } + + if (server->ops->is_session_expired && +diff --git a/fs/smb/client/transport.c b/fs/smb/client/transport.c +index e03ffcf7e201c..87aea456ee903 100644 +--- a/fs/smb/client/transport.c ++++ b/fs/smb/client/transport.c +@@ -35,6 +35,8 @@ + void + cifs_wake_up_task(struct mid_q_entry *mid) + { ++ if (mid->mid_state == MID_RESPONSE_RECEIVED) ++ mid->mid_state = MID_RESPONSE_READY; + wake_up_process(mid->callback_data); + } + +@@ -87,7 +89,8 @@ static void __release_mid(struct kref *refcount) + struct TCP_Server_Info *server = midEntry->server; + + if (midEntry->resp_buf && (midEntry->mid_flags & MID_WAIT_CANCELLED) && +- midEntry->mid_state == MID_RESPONSE_RECEIVED && ++ (midEntry->mid_state == MID_RESPONSE_RECEIVED || ++ midEntry->mid_state == MID_RESPONSE_READY) && + server->ops->handle_cancelled_mid) + server->ops->handle_cancelled_mid(midEntry, server); + +@@ -759,7 +762,8 @@ wait_for_response(struct TCP_Server_Info *server, struct mid_q_entry *midQ) + int error; + + error = wait_event_state(server->response_q, +- midQ->mid_state != MID_REQUEST_SUBMITTED, ++ midQ->mid_state != MID_REQUEST_SUBMITTED && ++ midQ->mid_state != MID_RESPONSE_RECEIVED, + (TASK_KILLABLE|TASK_FREEZABLE_UNSAFE)); + if (error < 0) + return -ERESTARTSYS; +@@ -912,7 +916,7 @@ cifs_sync_mid_result(struct mid_q_entry *mid, struct TCP_Server_Info *server) + + spin_lock(&server->mid_lock); + switch (mid->mid_state) { +- case MID_RESPONSE_RECEIVED: ++ case MID_RESPONSE_READY: + 
spin_unlock(&server->mid_lock); + return rc; + case MID_RETRY_NEEDED: +@@ -1011,6 +1015,9 @@ cifs_compound_callback(struct mid_q_entry *mid) + credits.instance = server->reconnect_instance; + + add_credits(server, &credits, mid->optype); ++ ++ if (mid->mid_state == MID_RESPONSE_RECEIVED) ++ mid->mid_state = MID_RESPONSE_READY; + } + + static void +@@ -1206,7 +1213,8 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses, + send_cancel(server, &rqst[i], midQ[i]); + spin_lock(&server->mid_lock); + midQ[i]->mid_flags |= MID_WAIT_CANCELLED; +- if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED) { ++ if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED || ++ midQ[i]->mid_state == MID_RESPONSE_RECEIVED) { + midQ[i]->callback = cifs_cancelled_callback; + cancelled_mid[i] = true; + credits[i].value = 0; +@@ -1227,7 +1235,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses, + } + + if (!midQ[i]->resp_buf || +- midQ[i]->mid_state != MID_RESPONSE_RECEIVED) { ++ midQ[i]->mid_state != MID_RESPONSE_READY) { + rc = -EIO; + cifs_dbg(FYI, "Bad MID state?\n"); + goto out; +@@ -1414,7 +1422,8 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses, + if (rc != 0) { + send_cancel(server, &rqst, midQ); + spin_lock(&server->mid_lock); +- if (midQ->mid_state == MID_REQUEST_SUBMITTED) { ++ if (midQ->mid_state == MID_REQUEST_SUBMITTED || ++ midQ->mid_state == MID_RESPONSE_RECEIVED) { + /* no longer considered to be "in-flight" */ + midQ->callback = release_mid; + spin_unlock(&server->mid_lock); +@@ -1431,7 +1440,7 @@ SendReceive(const unsigned int xid, struct cifs_ses *ses, + } + + if (!midQ->resp_buf || !out_buf || +- midQ->mid_state != MID_RESPONSE_RECEIVED) { ++ midQ->mid_state != MID_RESPONSE_READY) { + rc = -EIO; + cifs_server_dbg(VFS, "Bad MID state?\n"); + goto out; +@@ -1555,14 +1564,16 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon, + + /* Wait for a reply - allow signals to interrupt. 
*/ + rc = wait_event_interruptible(server->response_q, +- (!(midQ->mid_state == MID_REQUEST_SUBMITTED)) || ++ (!(midQ->mid_state == MID_REQUEST_SUBMITTED || ++ midQ->mid_state == MID_RESPONSE_RECEIVED)) || + ((server->tcpStatus != CifsGood) && + (server->tcpStatus != CifsNew))); + + /* Were we interrupted by a signal ? */ + spin_lock(&server->srv_lock); + if ((rc == -ERESTARTSYS) && +- (midQ->mid_state == MID_REQUEST_SUBMITTED) && ++ (midQ->mid_state == MID_REQUEST_SUBMITTED || ++ midQ->mid_state == MID_RESPONSE_RECEIVED) && + ((server->tcpStatus == CifsGood) || + (server->tcpStatus == CifsNew))) { + spin_unlock(&server->srv_lock); +@@ -1593,7 +1604,8 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon, + if (rc) { + send_cancel(server, &rqst, midQ); + spin_lock(&server->mid_lock); +- if (midQ->mid_state == MID_REQUEST_SUBMITTED) { ++ if (midQ->mid_state == MID_REQUEST_SUBMITTED || ++ midQ->mid_state == MID_RESPONSE_RECEIVED) { + /* no longer considered to be "in-flight" */ + midQ->callback = release_mid; + spin_unlock(&server->mid_lock); +@@ -1613,7 +1625,7 @@ SendReceiveBlockingLock(const unsigned int xid, struct cifs_tcon *tcon, + return rc; + + /* rcvd frame is ok */ +- if (out_buf == NULL || midQ->mid_state != MID_RESPONSE_RECEIVED) { ++ if (out_buf == NULL || midQ->mid_state != MID_RESPONSE_READY) { + rc = -EIO; + cifs_tcon_dbg(VFS, "Bad MID state?\n"); + goto out; +diff --git a/include/linux/bpf.h b/include/linux/bpf.h +index b3d3aa8437dce..1ed2ec035e779 100644 +--- a/include/linux/bpf.h ++++ b/include/linux/bpf.h +@@ -301,7 +301,7 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size) + + size /= sizeof(long); + while (size--) +- *ldst++ = *lsrc++; ++ data_race(*ldst++ = *lsrc++); + } + + /* copy everything but bpf_spin_lock, bpf_timer, and kptrs. There could be one of each. 
*/ +diff --git a/include/linux/btf_ids.h b/include/linux/btf_ids.h +index 2b98720084285..0f02bbb205735 100644 +--- a/include/linux/btf_ids.h ++++ b/include/linux/btf_ids.h +@@ -49,7 +49,7 @@ word \ + ____BTF_ID(symbol, word) + + #define __ID(prefix) \ +- __PASTE(prefix, __COUNTER__) ++ __PASTE(__PASTE(prefix, __COUNTER__), __LINE__) + + /* + * The BTF_ID defines unique symbol for each ID pointing +diff --git a/include/linux/if_team.h b/include/linux/if_team.h +index 8de6b6e678295..34bcba5a70677 100644 +--- a/include/linux/if_team.h ++++ b/include/linux/if_team.h +@@ -189,6 +189,8 @@ struct team { + struct net_device *dev; /* associated netdevice */ + struct team_pcpu_stats __percpu *pcpu_stats; + ++ const struct header_ops *header_ops_cache; ++ + struct mutex lock; /* used for overall locking, e.g. port lists write */ + + /* +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index a92bce40b04b3..4a1dc88ddbff9 100644 +--- a/include/linux/interrupt.h ++++ b/include/linux/interrupt.h +@@ -569,8 +569,12 @@ enum + * 2) rcu_report_dead() reports the final quiescent states. + * + * _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue ++ * ++ * _ (HR)TIMER_SOFTIRQ: (hr)timers_dead_cpu() migrates the queue + */ +-#define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ)) ++#define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(TIMER_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ) |\ ++ BIT(HRTIMER_SOFTIRQ) | BIT(RCU_SOFTIRQ)) ++ + + /* map softirq index to softirq name. update 'softirq_to_name' in + * kernel/softirq.c when adding a new softirq. +diff --git a/include/linux/libata.h b/include/linux/libata.h +index 4c9b322bb3d88..a9ec8d97a715b 100644 +--- a/include/linux/libata.h ++++ b/include/linux/libata.h +@@ -253,7 +253,7 @@ enum { + * advised to wait only for the following duration before + * doing SRST. 
+ */ +- ATA_TMOUT_PMP_SRST_WAIT = 5000, ++ ATA_TMOUT_PMP_SRST_WAIT = 10000, + + /* When the LPM policy is set to ATA_LPM_MAX_POWER, there might + * be a spurious PHY event, so ignore the first PHY event that +@@ -1136,6 +1136,7 @@ extern int ata_std_bios_param(struct scsi_device *sdev, + struct block_device *bdev, + sector_t capacity, int geom[]); + extern void ata_scsi_unlock_native_capacity(struct scsi_device *sdev); ++extern int ata_scsi_slave_alloc(struct scsi_device *sdev); + extern int ata_scsi_slave_config(struct scsi_device *sdev); + extern void ata_scsi_slave_destroy(struct scsi_device *sdev); + extern int ata_scsi_change_queue_depth(struct scsi_device *sdev, +@@ -1384,6 +1385,7 @@ extern const struct attribute_group *ata_common_sdev_groups[]; + .this_id = ATA_SHT_THIS_ID, \ + .emulated = ATA_SHT_EMULATED, \ + .proc_name = drv_name, \ ++ .slave_alloc = ata_scsi_slave_alloc, \ + .slave_destroy = ata_scsi_slave_destroy, \ + .bios_param = ata_std_bios_param, \ + .unlock_native_capacity = ata_scsi_unlock_native_capacity,\ +diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h +index 099521835cd14..50a078a31734c 100644 +--- a/include/linux/memcontrol.h ++++ b/include/linux/memcontrol.h +@@ -902,7 +902,7 @@ unsigned long mem_cgroup_get_zone_lru_size(struct lruvec *lruvec, + return READ_ONCE(mz->lru_zone_size[zone_idx][lru]); + } + +-void mem_cgroup_handle_over_high(void); ++void mem_cgroup_handle_over_high(gfp_t gfp_mask); + + unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg); + +@@ -1437,7 +1437,7 @@ static inline void mem_cgroup_unlock_pages(void) + rcu_read_unlock(); + } + +-static inline void mem_cgroup_handle_over_high(void) ++static inline void mem_cgroup_handle_over_high(gfp_t gfp_mask) + { + } + +diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h +index ea2f7e6b1b0b5..ef8ba5fbc6503 100644 +--- a/include/linux/nfs_fs_sb.h ++++ b/include/linux/nfs_fs_sb.h +@@ -48,6 +48,7 @@ struct nfs_client { + #define NFS_CS_NOPING 
6 /* - don't ping on connect */ + #define NFS_CS_DS 7 /* - Server is a DS */ + #define NFS_CS_REUSEPORT 8 /* - reuse src port on reconnect */ ++#define NFS_CS_PNFS 9 /* - Server used for pnfs */ + struct sockaddr_storage cl_addr; /* server identifier */ + size_t cl_addrlen; + char * cl_hostname; /* hostname of server */ +diff --git a/include/linux/nfs_page.h b/include/linux/nfs_page.h +index ba7e2e4b09264..e39a8cf8b1797 100644 +--- a/include/linux/nfs_page.h ++++ b/include/linux/nfs_page.h +@@ -145,7 +145,9 @@ extern void nfs_unlock_request(struct nfs_page *req); + extern void nfs_unlock_and_release_request(struct nfs_page *); + extern struct nfs_page *nfs_page_group_lock_head(struct nfs_page *req); + extern int nfs_page_group_lock_subrequests(struct nfs_page *head); +-extern void nfs_join_page_group(struct nfs_page *head, struct inode *inode); ++extern void nfs_join_page_group(struct nfs_page *head, ++ struct nfs_commit_info *cinfo, ++ struct inode *inode); + extern int nfs_page_group_lock(struct nfs_page *); + extern void nfs_page_group_unlock(struct nfs_page *); + extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int); +diff --git a/include/linux/resume_user_mode.h b/include/linux/resume_user_mode.h +index 2851894544496..f8f3e958e9cf2 100644 +--- a/include/linux/resume_user_mode.h ++++ b/include/linux/resume_user_mode.h +@@ -55,7 +55,7 @@ static inline void resume_user_mode_work(struct pt_regs *regs) + } + #endif + +- mem_cgroup_handle_over_high(); ++ mem_cgroup_handle_over_high(GFP_KERNEL); + blkcg_maybe_throttle_current(); + + rseq_handle_notify_resume(NULL, regs); +diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h +index 3926e90279477..d778af83c8f36 100644 +--- a/include/linux/seqlock.h ++++ b/include/linux/seqlock.h +@@ -512,8 +512,8 @@ do { \ + + static inline void do_write_seqcount_begin_nested(seqcount_t *s, int subclass) + { +- do_raw_write_seqcount_begin(s); + seqcount_acquire(&s->dep_map, subclass, 0, _RET_IP_); ++ 
do_raw_write_seqcount_begin(s); + } + + /** +diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h +index c752b6f509791..d1f81a6d7773b 100644 +--- a/include/net/netfilter/nf_tables.h ++++ b/include/net/netfilter/nf_tables.h +@@ -507,6 +507,7 @@ struct nft_set_elem_expr { + * + * @list: table set list node + * @bindings: list of set bindings ++ * @refs: internal refcounting for async set destruction + * @table: table this set belongs to + * @net: netnamespace this set belongs to + * @name: name of the set +@@ -528,6 +529,7 @@ struct nft_set_elem_expr { + * @expr: stateful expression + * @ops: set ops + * @flags: set flags ++ * @dead: set will be freed, never cleared + * @genmask: generation mask + * @klen: key length + * @dlen: data length +@@ -536,6 +538,7 @@ struct nft_set_elem_expr { + struct nft_set { + struct list_head list; + struct list_head bindings; ++ refcount_t refs; + struct nft_table *table; + possible_net_t net; + char *name; +@@ -557,7 +560,8 @@ struct nft_set { + struct list_head pending_update; + /* runtime data below here */ + const struct nft_set_ops *ops ____cacheline_aligned; +- u16 flags:14, ++ u16 flags:13, ++ dead:1, + genmask:2; + u8 klen; + u8 dlen; +@@ -578,6 +582,11 @@ static inline void *nft_set_priv(const struct nft_set *set) + return (void *)set->data; + } + ++static inline bool nft_set_gc_is_pending(const struct nft_set *s) ++{ ++ return refcount_read(&s->refs) != 1; ++} ++ + static inline struct nft_set *nft_set_container_of(const void *priv) + { + return (void *)priv - offsetof(struct nft_set, data); +@@ -591,7 +600,6 @@ struct nft_set *nft_set_lookup_global(const struct net *net, + + struct nft_set_ext *nft_set_catchall_lookup(const struct net *net, + const struct nft_set *set); +-void *nft_set_catchall_gc(const struct nft_set *set); + + static inline unsigned long nft_set_gc_interval(const struct nft_set *set) + { +@@ -808,62 +816,6 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem, + 
void nf_tables_set_elem_destroy(const struct nft_ctx *ctx, + const struct nft_set *set, void *elem); + +-/** +- * struct nft_set_gc_batch_head - nf_tables set garbage collection batch +- * +- * @rcu: rcu head +- * @set: set the elements belong to +- * @cnt: count of elements +- */ +-struct nft_set_gc_batch_head { +- struct rcu_head rcu; +- const struct nft_set *set; +- unsigned int cnt; +-}; +- +-#define NFT_SET_GC_BATCH_SIZE ((PAGE_SIZE - \ +- sizeof(struct nft_set_gc_batch_head)) / \ +- sizeof(void *)) +- +-/** +- * struct nft_set_gc_batch - nf_tables set garbage collection batch +- * +- * @head: GC batch head +- * @elems: garbage collection elements +- */ +-struct nft_set_gc_batch { +- struct nft_set_gc_batch_head head; +- void *elems[NFT_SET_GC_BATCH_SIZE]; +-}; +- +-struct nft_set_gc_batch *nft_set_gc_batch_alloc(const struct nft_set *set, +- gfp_t gfp); +-void nft_set_gc_batch_release(struct rcu_head *rcu); +- +-static inline void nft_set_gc_batch_complete(struct nft_set_gc_batch *gcb) +-{ +- if (gcb != NULL) +- call_rcu(&gcb->head.rcu, nft_set_gc_batch_release); +-} +- +-static inline struct nft_set_gc_batch * +-nft_set_gc_batch_check(const struct nft_set *set, struct nft_set_gc_batch *gcb, +- gfp_t gfp) +-{ +- if (gcb != NULL) { +- if (gcb->head.cnt + 1 < ARRAY_SIZE(gcb->elems)) +- return gcb; +- nft_set_gc_batch_complete(gcb); +- } +- return nft_set_gc_batch_alloc(set, gfp); +-} +- +-static inline void nft_set_gc_batch_add(struct nft_set_gc_batch *gcb, +- void *elem) +-{ +- gcb->elems[gcb->head.cnt++] = elem; +-} +- + struct nft_expr_ops; + /** + * struct nft_expr_type - nf_tables expression type +@@ -1542,39 +1494,30 @@ static inline void nft_set_elem_change_active(const struct net *net, + + #endif /* IS_ENABLED(CONFIG_NF_TABLES) */ + +-/* +- * We use a free bit in the genmask field to indicate the element +- * is busy, meaning it is currently being processed either by +- * the netlink API or GC. 
+- * +- * Even though the genmask is only a single byte wide, this works +- * because the extension structure if fully constant once initialized, +- * so there are no non-atomic write accesses unless it is already +- * marked busy. +- */ +-#define NFT_SET_ELEM_BUSY_MASK (1 << 2) ++#define NFT_SET_ELEM_DEAD_MASK (1 << 2) + + #if defined(__LITTLE_ENDIAN_BITFIELD) +-#define NFT_SET_ELEM_BUSY_BIT 2 ++#define NFT_SET_ELEM_DEAD_BIT 2 + #elif defined(__BIG_ENDIAN_BITFIELD) +-#define NFT_SET_ELEM_BUSY_BIT (BITS_PER_LONG - BITS_PER_BYTE + 2) ++#define NFT_SET_ELEM_DEAD_BIT (BITS_PER_LONG - BITS_PER_BYTE + 2) + #else + #error + #endif + +-static inline int nft_set_elem_mark_busy(struct nft_set_ext *ext) ++static inline void nft_set_elem_dead(struct nft_set_ext *ext) + { + unsigned long *word = (unsigned long *)ext; + + BUILD_BUG_ON(offsetof(struct nft_set_ext, genmask) != 0); +- return test_and_set_bit(NFT_SET_ELEM_BUSY_BIT, word); ++ set_bit(NFT_SET_ELEM_DEAD_BIT, word); + } + +-static inline void nft_set_elem_clear_busy(struct nft_set_ext *ext) ++static inline int nft_set_elem_is_dead(const struct nft_set_ext *ext) + { + unsigned long *word = (unsigned long *)ext; + +- clear_bit(NFT_SET_ELEM_BUSY_BIT, word); ++ BUILD_BUG_ON(offsetof(struct nft_set_ext, genmask) != 0); ++ return test_bit(NFT_SET_ELEM_DEAD_BIT, word); + } + + /** +@@ -1708,6 +1651,39 @@ struct nft_trans_flowtable { + #define nft_trans_flowtable_flags(trans) \ + (((struct nft_trans_flowtable *)trans->data)->flags) + ++#define NFT_TRANS_GC_BATCHCOUNT 256 ++ ++struct nft_trans_gc { ++ struct list_head list; ++ struct net *net; ++ struct nft_set *set; ++ u32 seq; ++ u16 count; ++ void *priv[NFT_TRANS_GC_BATCHCOUNT]; ++ struct rcu_head rcu; ++}; ++ ++struct nft_trans_gc *nft_trans_gc_alloc(struct nft_set *set, ++ unsigned int gc_seq, gfp_t gfp); ++void nft_trans_gc_destroy(struct nft_trans_gc *trans); ++ ++struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc, ++ unsigned int gc_seq, gfp_t gfp); 
++void nft_trans_gc_queue_async_done(struct nft_trans_gc *gc); ++ ++struct nft_trans_gc *nft_trans_gc_queue_sync(struct nft_trans_gc *gc, gfp_t gfp); ++void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans); ++ ++void nft_trans_gc_elem_add(struct nft_trans_gc *gc, void *priv); ++ ++struct nft_trans_gc *nft_trans_gc_catchall_async(struct nft_trans_gc *gc, ++ unsigned int gc_seq); ++struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc); ++ ++void nft_setelem_data_deactivate(const struct net *net, ++ const struct nft_set *set, ++ struct nft_set_elem *elem); ++ + int __init nft_chain_filter_init(void); + void nft_chain_filter_fini(void); + +@@ -1735,6 +1711,7 @@ struct nftables_pernet { + u64 table_handle; + unsigned int base_seq; + u8 validate_state; ++ unsigned int gc_seq; + }; + + extern unsigned int nf_tables_net_id; +diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h +index 51b9aa640ad2a..53bc487947197 100644 +--- a/include/uapi/linux/bpf.h ++++ b/include/uapi/linux/bpf.h +@@ -1837,7 +1837,9 @@ union bpf_attr { + * performed again, if the helper is used in combination with + * direct packet access. + * Return +- * 0 on success, or a negative error in case of failure. ++ * 0 on success, or a negative error in case of failure. Positive ++ * error indicates a potential drop or congestion in the target ++ * device. The particular positive error codes are not defined. 
+ * + * u64 bpf_get_current_pid_tgid(void) + * Description +diff --git a/io_uring/fs.c b/io_uring/fs.c +index 7100c293c13a8..27676e0150049 100644 +--- a/io_uring/fs.c ++++ b/io_uring/fs.c +@@ -243,7 +243,7 @@ int io_linkat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) + struct io_link *lnk = io_kiocb_to_cmd(req, struct io_link); + const char __user *oldf, *newf; + +- if (sqe->rw_flags || sqe->buf_index || sqe->splice_fd_in) ++ if (sqe->buf_index || sqe->splice_fd_in) + return -EINVAL; + if (unlikely(req->flags & REQ_F_FIXED_FILE)) + return -EBADF; +diff --git a/kernel/bpf/queue_stack_maps.c b/kernel/bpf/queue_stack_maps.c +index 8a5e060de63bc..a8fe640318c6c 100644 +--- a/kernel/bpf/queue_stack_maps.c ++++ b/kernel/bpf/queue_stack_maps.c +@@ -102,7 +102,12 @@ static int __queue_map_get(struct bpf_map *map, void *value, bool delete) + int err = 0; + void *ptr; + +- raw_spin_lock_irqsave(&qs->lock, flags); ++ if (in_nmi()) { ++ if (!raw_spin_trylock_irqsave(&qs->lock, flags)) ++ return -EBUSY; ++ } else { ++ raw_spin_lock_irqsave(&qs->lock, flags); ++ } + + if (queue_stack_map_is_empty(qs)) { + memset(value, 0, qs->map.value_size); +@@ -132,7 +137,12 @@ static int __stack_map_get(struct bpf_map *map, void *value, bool delete) + void *ptr; + u32 index; + +- raw_spin_lock_irqsave(&qs->lock, flags); ++ if (in_nmi()) { ++ if (!raw_spin_trylock_irqsave(&qs->lock, flags)) ++ return -EBUSY; ++ } else { ++ raw_spin_lock_irqsave(&qs->lock, flags); ++ } + + if (queue_stack_map_is_empty(qs)) { + memset(value, 0, qs->map.value_size); +@@ -197,7 +207,12 @@ static int queue_stack_map_push_elem(struct bpf_map *map, void *value, + if (flags & BPF_NOEXIST || flags > BPF_EXIST) + return -EINVAL; + +- raw_spin_lock_irqsave(&qs->lock, irq_flags); ++ if (in_nmi()) { ++ if (!raw_spin_trylock_irqsave(&qs->lock, irq_flags)) ++ return -EBUSY; ++ } else { ++ raw_spin_lock_irqsave(&qs->lock, irq_flags); ++ } + + if (queue_stack_map_is_full(qs)) { + if (!replace) { +diff --git 
a/kernel/dma/debug.c b/kernel/dma/debug.c +index 18c93c2276cae..3ff7089d11a92 100644 +--- a/kernel/dma/debug.c ++++ b/kernel/dma/debug.c +@@ -603,15 +603,19 @@ static struct dma_debug_entry *__dma_entry_alloc(void) + return entry; + } + +-static void __dma_entry_alloc_check_leak(void) ++/* ++ * This should be called outside of free_entries_lock scope to avoid potential ++ * deadlocks with serial consoles that use DMA. ++ */ ++static void __dma_entry_alloc_check_leak(u32 nr_entries) + { +- u32 tmp = nr_total_entries % nr_prealloc_entries; ++ u32 tmp = nr_entries % nr_prealloc_entries; + + /* Shout each time we tick over some multiple of the initial pool */ + if (tmp < DMA_DEBUG_DYNAMIC_ENTRIES) { + pr_info("dma_debug_entry pool grown to %u (%u00%%)\n", +- nr_total_entries, +- (nr_total_entries / nr_prealloc_entries)); ++ nr_entries, ++ (nr_entries / nr_prealloc_entries)); + } + } + +@@ -622,8 +626,10 @@ static void __dma_entry_alloc_check_leak(void) + */ + static struct dma_debug_entry *dma_entry_alloc(void) + { ++ bool alloc_check_leak = false; + struct dma_debug_entry *entry; + unsigned long flags; ++ u32 nr_entries; + + spin_lock_irqsave(&free_entries_lock, flags); + if (num_free_entries == 0) { +@@ -633,13 +639,17 @@ static struct dma_debug_entry *dma_entry_alloc(void) + pr_err("debugging out of memory - disabling\n"); + return NULL; + } +- __dma_entry_alloc_check_leak(); ++ alloc_check_leak = true; ++ nr_entries = nr_total_entries; + } + + entry = __dma_entry_alloc(); + + spin_unlock_irqrestore(&free_entries_lock, flags); + ++ if (alloc_check_leak) ++ __dma_entry_alloc_check_leak(nr_entries); ++ + #ifdef CONFIG_STACKTRACE + entry->stack_len = stack_trace_save(entry->stack_entries, + ARRAY_SIZE(entry->stack_entries), +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 0f6a92737c912..55d13980e29fd 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -9019,7 +9019,7 @@ void __init init_idle(struct task_struct *idle, int cpu) + * PF_KTHREAD 
should already be set at this point; regardless, make it + * look like a proper per-CPU kthread. + */ +- idle->flags |= PF_IDLE | PF_KTHREAD | PF_NO_SETAFFINITY; ++ idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY; + kthread_set_per_cpu(idle, cpu); + + #ifdef CONFIG_SMP +diff --git a/kernel/sched/cpupri.c b/kernel/sched/cpupri.c +index a286e726eb4b8..42c40cfdf8363 100644 +--- a/kernel/sched/cpupri.c ++++ b/kernel/sched/cpupri.c +@@ -101,6 +101,7 @@ static inline int __cpupri_find(struct cpupri *cp, struct task_struct *p, + + if (lowest_mask) { + cpumask_and(lowest_mask, &p->cpus_mask, vec->mask); ++ cpumask_and(lowest_mask, lowest_mask, cpu_active_mask); + + /* + * We have to ensure that we have at least one bit +diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c +index f26ab2675f7d7..200a0fac03b8e 100644 +--- a/kernel/sched/idle.c ++++ b/kernel/sched/idle.c +@@ -394,6 +394,7 @@ EXPORT_SYMBOL_GPL(play_idle_precise); + + void cpu_startup_entry(enum cpuhp_state state) + { ++ current->flags |= PF_IDLE; + arch_cpu_idle_prepare(); + cpuhp_online_idle(state); + while (1) +diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c +index 9fc5db194027b..8c77c54e6348b 100644 +--- a/kernel/trace/bpf_trace.c ++++ b/kernel/trace/bpf_trace.c +@@ -2684,6 +2684,17 @@ static void symbols_swap_r(void *a, void *b, int size, const void *priv) + } + } + ++static int addrs_check_error_injection_list(unsigned long *addrs, u32 cnt) ++{ ++ u32 i; ++ ++ for (i = 0; i < cnt; i++) { ++ if (!within_error_injection_list(addrs[i])) ++ return -EINVAL; ++ } ++ return 0; ++} ++ + int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog) + { + struct bpf_kprobe_multi_link *link = NULL; +@@ -2761,6 +2772,11 @@ int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *pr + goto error; + } + ++ if (prog->kprobe_override && addrs_check_error_injection_list(addrs, cnt)) { ++ err = -EINVAL; ++ goto error; ++ } ++ + link = kzalloc(sizeof(*link), 
GFP_KERNEL); + if (!link) { + err = -ENOMEM; +diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c +index de55107aef5d5..2f562cf961e0a 100644 +--- a/kernel/trace/ring_buffer.c ++++ b/kernel/trace/ring_buffer.c +@@ -1142,6 +1142,9 @@ __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu, + if (full) { + poll_wait(filp, &work->full_waiters, poll_table); + work->full_waiters_pending = true; ++ if (!cpu_buffer->shortest_full || ++ cpu_buffer->shortest_full > full) ++ cpu_buffer->shortest_full = full; + } else { + poll_wait(filp, &work->waiters, poll_table); + work->waiters_pending = true; +@@ -2212,6 +2215,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size, + err = -ENOMEM; + goto out_err; + } ++ ++ cond_resched(); + } + + cpus_read_lock(); +@@ -2386,6 +2391,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter) + */ + commit = rb_page_commit(iter_head_page); + smp_rmb(); ++ ++ /* An event needs to be at least 8 bytes in size */ ++ if (iter->head > commit - 8) ++ goto reset; ++ + event = __rb_page_index(iter_head_page, iter->head); + length = rb_event_length(event); + +diff --git a/mm/damon/vaddr-test.h b/mm/damon/vaddr-test.h +index bce37c4875402..e939598aff94b 100644 +--- a/mm/damon/vaddr-test.h ++++ b/mm/damon/vaddr-test.h +@@ -140,6 +140,8 @@ static void damon_do_test_apply_three_regions(struct kunit *test, + KUNIT_EXPECT_EQ(test, r->ar.start, expected[i * 2]); + KUNIT_EXPECT_EQ(test, r->ar.end, expected[i * 2 + 1]); + } ++ ++ damon_destroy_target(t); + } + + /* +diff --git a/mm/memcontrol.c b/mm/memcontrol.c +index 67b6d8238b3ed..dacbaf4f7b2c4 100644 +--- a/mm/memcontrol.c ++++ b/mm/memcontrol.c +@@ -2545,7 +2545,7 @@ static unsigned long calculate_high_delay(struct mem_cgroup *memcg, + * Scheduled by try_charge() to be executed from the userland return path + * and reclaims memory over the high limit. 
+ */ +-void mem_cgroup_handle_over_high(void) ++void mem_cgroup_handle_over_high(gfp_t gfp_mask) + { + unsigned long penalty_jiffies; + unsigned long pflags; +@@ -2573,7 +2573,7 @@ retry_reclaim: + */ + nr_reclaimed = reclaim_high(memcg, + in_retry ? SWAP_CLUSTER_MAX : nr_pages, +- GFP_KERNEL); ++ gfp_mask); + + /* + * memory.high is breached and reclaim is unable to keep up. Throttle +@@ -2809,7 +2809,7 @@ done_restock: + if (current->memcg_nr_pages_over_high > MEMCG_CHARGE_BATCH && + !(current->flags & PF_MEMALLOC) && + gfpflags_allow_blocking(gfp_mask)) { +- mem_cgroup_handle_over_high(); ++ mem_cgroup_handle_over_high(gfp_mask); + } + return 0; + } +@@ -3842,8 +3842,11 @@ static ssize_t mem_cgroup_write(struct kernfs_open_file *of, + ret = mem_cgroup_resize_max(memcg, nr_pages, true); + break; + case _KMEM: +- /* kmem.limit_in_bytes is deprecated. */ +- ret = -EOPNOTSUPP; ++ pr_warn_once("kmem.limit_in_bytes is deprecated and will be removed. " ++ "Writing any value to this file has no effect. 
" ++ "Please report your usecase to linux-mm@kvack.org if you " ++ "depend on this functionality.\n"); ++ ret = 0; + break; + case _TCP: + ret = memcg_update_tcp_max(memcg, nr_pages); +diff --git a/mm/slab_common.c b/mm/slab_common.c +index 0042fb2730d1e..4736c0e6093fa 100644 +--- a/mm/slab_common.c ++++ b/mm/slab_common.c +@@ -474,7 +474,7 @@ void slab_kmem_cache_release(struct kmem_cache *s) + + void kmem_cache_destroy(struct kmem_cache *s) + { +- int refcnt; ++ int err = -EBUSY; + bool rcu_set; + + if (unlikely(!s) || !kasan_check_byte(s)) +@@ -485,17 +485,17 @@ void kmem_cache_destroy(struct kmem_cache *s) + + rcu_set = s->flags & SLAB_TYPESAFE_BY_RCU; + +- refcnt = --s->refcount; +- if (refcnt) ++ s->refcount--; ++ if (s->refcount) + goto out_unlock; + +- WARN(shutdown_cache(s), +- "%s %s: Slab cache still has objects when called from %pS", ++ err = shutdown_cache(s); ++ WARN(err, "%s %s: Slab cache still has objects when called from %pS", + __func__, s->name, (void *)_RET_IP_); + out_unlock: + mutex_unlock(&slab_mutex); + cpus_read_unlock(); +- if (!refcnt && !rcu_set) ++ if (!err && !rcu_set) + kmem_cache_release(s); + } + EXPORT_SYMBOL(kmem_cache_destroy); +diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c +index bd54f17e3c3d8..4e3394a7d7d45 100644 +--- a/net/bridge/br_forward.c ++++ b/net/bridge/br_forward.c +@@ -124,7 +124,7 @@ static int deliver_clone(const struct net_bridge_port *prev, + + skb = skb_clone(skb, GFP_ATOMIC); + if (!skb) { +- dev->stats.tx_dropped++; ++ DEV_STATS_INC(dev, tx_dropped); + return -ENOMEM; + } + +@@ -263,7 +263,7 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb, + + skb = skb_copy(skb, GFP_ATOMIC); + if (!skb) { +- dev->stats.tx_dropped++; ++ DEV_STATS_INC(dev, tx_dropped); + return; + } + +diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c +index 68b3e850bcb9d..6bb272894c960 100644 +--- a/net/bridge/br_input.c ++++ b/net/bridge/br_input.c +@@ -164,12 +164,12 @@ int 
br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb + if ((mdst && mdst->host_joined) || + br_multicast_is_router(brmctx, skb)) { + local_rcv = true; +- br->dev->stats.multicast++; ++ DEV_STATS_INC(br->dev, multicast); + } + mcast_hit = true; + } else { + local_rcv = true; +- br->dev->stats.multicast++; ++ DEV_STATS_INC(br->dev, multicast); + } + break; + case BR_PKT_UNICAST: +diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c +index 3288490590f27..0c85c8a9e752f 100644 +--- a/net/core/flow_dissector.c ++++ b/net/core/flow_dissector.c +@@ -1366,7 +1366,7 @@ proto_again: + break; + } + +- nhoff += ntohs(hdr->message_length); ++ nhoff += sizeof(struct ptp_header); + fdret = FLOW_DISSECT_RET_OUT_GOOD; + break; + } +diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c +index 8f5d3c0881118..247179d4c8865 100644 +--- a/net/dccp/ipv4.c ++++ b/net/dccp/ipv4.c +@@ -255,13 +255,8 @@ static int dccp_v4_err(struct sk_buff *skb, u32 info) + int err; + struct net *net = dev_net(skb->dev); + +- /* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x, +- * which is in byte 7 of the dccp header. +- * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us. +- * +- * Later on, we want to access the sequence number fields, which are +- * beyond 8 bytes, so we have to pskb_may_pull() ourselves. +- */ ++ if (!pskb_may_pull(skb, offset + sizeof(*dh))) ++ return -EINVAL; + dh = (struct dccp_hdr *)(skb->data + offset); + if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh))) + return -EINVAL; +diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c +index 2b09e2644b13f..6fb34eaf1237a 100644 +--- a/net/dccp/ipv6.c ++++ b/net/dccp/ipv6.c +@@ -83,13 +83,8 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, + __u64 seq; + struct net *net = dev_net(skb->dev); + +- /* For the first __dccp_basic_hdr_len() check, we only need dh->dccph_x, +- * which is in byte 7 of the dccp header. 
+- * Our caller (icmpv6_notify()) already pulled 8 bytes for us. +- * +- * Later on, we want to access the sequence number fields, which are +- * beyond 8 bytes, so we have to pskb_may_pull() ourselves. +- */ ++ if (!pskb_may_pull(skb, offset + sizeof(*dh))) ++ return -EINVAL; + dh = (struct dccp_hdr *)(skb->data + offset); + if (!pskb_may_pull(skb, offset + __dccp_basic_hdr_len(dh))) + return -EINVAL; +diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c +index a16f0445023aa..0b01998780952 100644 +--- a/net/hsr/hsr_framereg.c ++++ b/net/hsr/hsr_framereg.c +@@ -295,13 +295,13 @@ void hsr_handle_sup_frame(struct hsr_frame_info *frame) + + /* And leave the HSR tag. */ + if (ethhdr->h_proto == htons(ETH_P_HSR)) { +- pull_size = sizeof(struct ethhdr); ++ pull_size = sizeof(struct hsr_tag); + skb_pull(skb, pull_size); + total_pull_size += pull_size; + } + + /* And leave the HSR sup tag. */ +- pull_size = sizeof(struct hsr_tag); ++ pull_size = sizeof(struct hsr_sup_tag); + skb_pull(skb, pull_size); + total_pull_size += pull_size; + +diff --git a/net/hsr/hsr_main.h b/net/hsr/hsr_main.h +index 16ae9fb09ccd2..58a5a8b3891ff 100644 +--- a/net/hsr/hsr_main.h ++++ b/net/hsr/hsr_main.h +@@ -83,7 +83,7 @@ struct hsr_vlan_ethhdr { + struct hsr_sup_tlv { + u8 HSR_TLV_type; + u8 HSR_TLV_length; +-}; ++} __packed; + + /* HSR/PRP Supervision Frame data types. + * Field names as defined in the IEC:2010 standard for HSR. 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c +index a04ffc128e22b..84a0a71a6f4e7 100644 +--- a/net/ipv4/route.c ++++ b/net/ipv4/route.c +@@ -1213,6 +1213,7 @@ EXPORT_INDIRECT_CALLABLE(ipv4_dst_check); + + static void ipv4_send_dest_unreach(struct sk_buff *skb) + { ++ struct net_device *dev; + struct ip_options opt; + int res; + +@@ -1230,7 +1231,8 @@ static void ipv4_send_dest_unreach(struct sk_buff *skb) + opt.optlen = ip_hdr(skb)->ihl * 4 - sizeof(struct iphdr); + + rcu_read_lock(); +- res = __ip_options_compile(dev_net(skb->dev), &opt, skb, NULL); ++ dev = skb->dev ? skb->dev : skb_rtable(skb)->dst.dev; ++ res = __ip_options_compile(dev_net(dev), &opt, skb, NULL); + rcu_read_unlock(); + + if (res) +diff --git a/net/mptcp/options.c b/net/mptcp/options.c +index 6b2ef3bb53a3d..0c786ceda5ee6 100644 +--- a/net/mptcp/options.c ++++ b/net/mptcp/options.c +@@ -1248,12 +1248,13 @@ static void mptcp_set_rwin(struct tcp_sock *tp, struct tcphdr *th) + + if (rcv_wnd == rcv_wnd_old) + break; +- if (before64(rcv_wnd_new, rcv_wnd)) { ++ ++ rcv_wnd_old = rcv_wnd; ++ if (before64(rcv_wnd_new, rcv_wnd_old)) { + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICTUPDATE); + goto raise_win; + } + MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICT); +- rcv_wnd_old = rcv_wnd; + } + return; + } +diff --git a/net/ncsi/ncsi-aen.c b/net/ncsi/ncsi-aen.c +index 62fb1031763d1..f8854bff286cb 100644 +--- a/net/ncsi/ncsi-aen.c ++++ b/net/ncsi/ncsi-aen.c +@@ -89,6 +89,11 @@ static int ncsi_aen_handler_lsc(struct ncsi_dev_priv *ndp, + if ((had_link == has_link) || chained) + return 0; + ++ if (had_link) ++ netif_carrier_off(ndp->ndev.dev); ++ else ++ netif_carrier_on(ndp->ndev.dev); ++ + if (!ndp->multi_package && !nc->package->multi_channel) { + if (had_link) { + ndp->flags |= NCSI_DEV_RESHUFFLE; +diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c +index 9a6b64779e644..20eede37d5228 100644 +--- a/net/netfilter/ipset/ip_set_core.c ++++ 
b/net/netfilter/ipset/ip_set_core.c +@@ -682,6 +682,14 @@ __ip_set_put(struct ip_set *set) + /* set->ref can be swapped out by ip_set_swap, netlink events (like dump) need + * a separate reference counter + */ ++static void ++__ip_set_get_netlink(struct ip_set *set) ++{ ++ write_lock_bh(&ip_set_ref_lock); ++ set->ref_netlink++; ++ write_unlock_bh(&ip_set_ref_lock); ++} ++ + static void + __ip_set_put_netlink(struct ip_set *set) + { +@@ -1695,11 +1703,11 @@ call_ad(struct net *net, struct sock *ctnl, struct sk_buff *skb, + + do { + if (retried) { +- __ip_set_get(set); ++ __ip_set_get_netlink(set); + nfnl_unlock(NFNL_SUBSYS_IPSET); + cond_resched(); + nfnl_lock(NFNL_SUBSYS_IPSET); +- __ip_set_put(set); ++ __ip_set_put_netlink(set); + } + + ip_set_lock(set); +diff --git a/net/netfilter/nf_conntrack_bpf.c b/net/netfilter/nf_conntrack_bpf.c +index 8639e7efd0e22..816283f0aa593 100644 +--- a/net/netfilter/nf_conntrack_bpf.c ++++ b/net/netfilter/nf_conntrack_bpf.c +@@ -384,6 +384,8 @@ struct nf_conn *bpf_ct_insert_entry(struct nf_conn___init *nfct_i) + struct nf_conn *nfct = (struct nf_conn *)nfct_i; + int err; + ++ if (!nf_ct_is_confirmed(nfct)) ++ nfct->timeout += nfct_time_stamp; + nfct->status |= IPS_CONFIRMED; + err = nf_conntrack_hash_check_insert(nfct); + if (err < 0) { +diff --git a/net/netfilter/nf_conntrack_extend.c b/net/netfilter/nf_conntrack_extend.c +index 0b513f7bf9f39..dd62cc12e7750 100644 +--- a/net/netfilter/nf_conntrack_extend.c ++++ b/net/netfilter/nf_conntrack_extend.c +@@ -40,10 +40,10 @@ static const u8 nf_ct_ext_type_len[NF_CT_EXT_NUM] = { + [NF_CT_EXT_ECACHE] = sizeof(struct nf_conntrack_ecache), + #endif + #ifdef CONFIG_NF_CONNTRACK_TIMESTAMP +- [NF_CT_EXT_TSTAMP] = sizeof(struct nf_conn_acct), ++ [NF_CT_EXT_TSTAMP] = sizeof(struct nf_conn_tstamp), + #endif + #ifdef CONFIG_NF_CONNTRACK_TIMEOUT +- [NF_CT_EXT_TIMEOUT] = sizeof(struct nf_conn_tstamp), ++ [NF_CT_EXT_TIMEOUT] = sizeof(struct nf_conn_timeout), + #endif + #ifdef 
CONFIG_NF_CONNTRACK_LABELS + [NF_CT_EXT_LABELS] = sizeof(struct nf_conn_labels), +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 3c5cac9bd9b70..52b81dc1fcf5b 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -31,7 +31,9 @@ static LIST_HEAD(nf_tables_expressions); + static LIST_HEAD(nf_tables_objects); + static LIST_HEAD(nf_tables_flowtables); + static LIST_HEAD(nf_tables_destroy_list); ++static LIST_HEAD(nf_tables_gc_list); + static DEFINE_SPINLOCK(nf_tables_destroy_list_lock); ++static DEFINE_SPINLOCK(nf_tables_gc_list_lock); + + enum { + NFT_VALIDATE_SKIP = 0, +@@ -122,6 +124,9 @@ static void nft_validate_state_update(struct net *net, u8 new_validate_state) + static void nf_tables_trans_destroy_work(struct work_struct *w); + static DECLARE_WORK(trans_destroy_work, nf_tables_trans_destroy_work); + ++static void nft_trans_gc_work(struct work_struct *work); ++static DECLARE_WORK(trans_gc_work, nft_trans_gc_work); ++ + static void nft_ctx_init(struct nft_ctx *ctx, + struct net *net, + const struct sk_buff *skb, +@@ -583,10 +588,6 @@ static int nft_trans_set_add(const struct nft_ctx *ctx, int msg_type, + return __nft_trans_set_add(ctx, msg_type, set, NULL); + } + +-static void nft_setelem_data_deactivate(const struct net *net, +- const struct nft_set *set, +- struct nft_set_elem *elem); +- + static int nft_mapelem_deactivate(const struct nft_ctx *ctx, + struct nft_set *set, + const struct nft_set_iter *iter, +@@ -1210,6 +1211,10 @@ static int nf_tables_updtable(struct nft_ctx *ctx) + flags & NFT_TABLE_F_OWNER)) + return -EOPNOTSUPP; + ++ /* No dormant off/on/off/on games in single transaction */ ++ if (ctx->table->flags & __NFT_TABLE_F_UPDATE) ++ return -EINVAL; ++ + trans = nft_trans_alloc(ctx, NFT_MSG_NEWTABLE, + sizeof(struct nft_trans_table)); + if (trans == NULL) +@@ -1422,7 +1427,7 @@ static int nft_flush_table(struct nft_ctx *ctx) + if (!nft_is_active_next(ctx->net, chain)) + continue; + +- 
if (nft_chain_is_bound(chain)) ++ if (nft_chain_binding(chain)) + continue; + + ctx->chain = chain; +@@ -1436,8 +1441,7 @@ static int nft_flush_table(struct nft_ctx *ctx) + if (!nft_is_active_next(ctx->net, set)) + continue; + +- if (nft_set_is_anonymous(set) && +- !list_empty(&set->bindings)) ++ if (nft_set_is_anonymous(set)) + continue; + + err = nft_delset(ctx, set); +@@ -1467,7 +1471,7 @@ static int nft_flush_table(struct nft_ctx *ctx) + if (!nft_is_active_next(ctx->net, chain)) + continue; + +- if (nft_chain_is_bound(chain)) ++ if (nft_chain_binding(chain)) + continue; + + ctx->chain = chain; +@@ -2788,6 +2792,9 @@ static int nf_tables_delchain(struct sk_buff *skb, const struct nfnl_info *info, + return PTR_ERR(chain); + } + ++ if (nft_chain_binding(chain)) ++ return -EOPNOTSUPP; ++ + if (info->nlh->nlmsg_flags & NLM_F_NONREC && + chain->use > 0) + return -EBUSY; +@@ -3767,6 +3774,11 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info, + } + + if (info->nlh->nlmsg_flags & NLM_F_REPLACE) { ++ if (nft_chain_binding(chain)) { ++ err = -EOPNOTSUPP; ++ goto err_destroy_flow_rule; ++ } ++ + err = nft_delrule(&ctx, old_rule); + if (err < 0) + goto err_destroy_flow_rule; +@@ -3870,7 +3882,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info, + NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]); + return PTR_ERR(chain); + } +- if (nft_chain_is_bound(chain)) ++ if (nft_chain_binding(chain)) + return -EOPNOTSUPP; + } + +@@ -3900,7 +3912,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info, + list_for_each_entry(chain, &table->chains, list) { + if (!nft_is_active_next(net, chain)) + continue; +- if (nft_chain_is_bound(chain)) ++ if (nft_chain_binding(chain)) + continue; + + ctx.chain = chain; +@@ -4854,6 +4866,7 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info, + + INIT_LIST_HEAD(&set->bindings); + INIT_LIST_HEAD(&set->catchall_list); ++ 
refcount_set(&set->refs, 1); + set->table = table; + write_pnet(&set->net, net); + set->ops = ops; +@@ -4921,6 +4934,14 @@ static void nft_set_catchall_destroy(const struct nft_ctx *ctx, + } + } + ++static void nft_set_put(struct nft_set *set) ++{ ++ if (refcount_dec_and_test(&set->refs)) { ++ kfree(set->name); ++ kvfree(set); ++ } ++} ++ + static void nft_set_destroy(const struct nft_ctx *ctx, struct nft_set *set) + { + int i; +@@ -4933,8 +4954,7 @@ static void nft_set_destroy(const struct nft_ctx *ctx, struct nft_set *set) + + set->ops->destroy(ctx, set); + nft_set_catchall_destroy(ctx, set); +- kfree(set->name); +- kvfree(set); ++ nft_set_put(set); + } + + static int nf_tables_delset(struct sk_buff *skb, const struct nfnl_info *info, +@@ -5386,8 +5406,12 @@ static int nf_tables_dump_setelem(const struct nft_ctx *ctx, + const struct nft_set_iter *iter, + struct nft_set_elem *elem) + { ++ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); + struct nft_set_dump_args *args; + ++ if (nft_set_elem_expired(ext)) ++ return 0; ++ + args = container_of(iter, struct nft_set_dump_args, iter); + return nf_tables_fill_setelem(args->skb, set, elem); + } +@@ -6047,7 +6071,8 @@ struct nft_set_ext *nft_set_catchall_lookup(const struct net *net, + list_for_each_entry_rcu(catchall, &set->catchall_list, list) { + ext = nft_set_elem_ext(set, catchall->elem); + if (nft_set_elem_active(ext, genmask) && +- !nft_set_elem_expired(ext)) ++ !nft_set_elem_expired(ext) && ++ !nft_set_elem_is_dead(ext)) + return ext; + } + +@@ -6055,29 +6080,6 @@ struct nft_set_ext *nft_set_catchall_lookup(const struct net *net, + } + EXPORT_SYMBOL_GPL(nft_set_catchall_lookup); + +-void *nft_set_catchall_gc(const struct nft_set *set) +-{ +- struct nft_set_elem_catchall *catchall, *next; +- struct nft_set_ext *ext; +- void *elem = NULL; +- +- list_for_each_entry_safe(catchall, next, &set->catchall_list, list) { +- ext = nft_set_elem_ext(set, catchall->elem); +- +- if (!nft_set_elem_expired(ext) 
|| +- nft_set_elem_mark_busy(ext)) +- continue; +- +- elem = catchall->elem; +- list_del_rcu(&catchall->list); +- kfree_rcu(catchall, rcu); +- break; +- } +- +- return elem; +-} +-EXPORT_SYMBOL_GPL(nft_set_catchall_gc); +- + static int nft_setelem_catchall_insert(const struct net *net, + struct nft_set *set, + const struct nft_set_elem *elem, +@@ -6139,7 +6141,6 @@ static void nft_setelem_activate(struct net *net, struct nft_set *set, + + if (nft_setelem_is_catchall(set, elem)) { + nft_set_elem_change_active(net, set, ext); +- nft_set_elem_clear_busy(ext); + } else { + set->ops->activate(net, set, elem); + } +@@ -6154,8 +6155,7 @@ static int nft_setelem_catchall_deactivate(const struct net *net, + + list_for_each_entry(catchall, &set->catchall_list, list) { + ext = nft_set_elem_ext(set, catchall->elem); +- if (!nft_is_active(net, ext) || +- nft_set_elem_mark_busy(ext)) ++ if (!nft_is_active(net, ext)) + continue; + + kfree(elem->priv); +@@ -6550,7 +6550,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set, + goto err_elem_free; + } + +- ext->genmask = nft_genmask_cur(ctx->net) | NFT_SET_ELEM_BUSY_MASK; ++ ext->genmask = nft_genmask_cur(ctx->net); + + err = nft_setelem_insert(ctx->net, set, &elem, &ext2, flags); + if (err) { +@@ -6700,9 +6700,9 @@ static void nft_setelem_data_activate(const struct net *net, + nft_use_inc_restore(&(*nft_set_ext_obj(ext))->use); + } + +-static void nft_setelem_data_deactivate(const struct net *net, +- const struct nft_set *set, +- struct nft_set_elem *elem) ++void nft_setelem_data_deactivate(const struct net *net, ++ const struct nft_set *set, ++ struct nft_set_elem *elem) + { + const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv); + +@@ -6866,8 +6866,7 @@ static int nft_set_catchall_flush(const struct nft_ctx *ctx, + + list_for_each_entry_rcu(catchall, &set->catchall_list, list) { + ext = nft_set_elem_ext(set, catchall->elem); +- if (!nft_set_elem_active(ext, genmask) || +- 
nft_set_elem_mark_busy(ext)) ++ if (!nft_set_elem_active(ext, genmask)) + continue; + + elem.priv = catchall->elem; +@@ -6919,8 +6918,10 @@ static int nf_tables_delsetelem(struct sk_buff *skb, + if (IS_ERR(set)) + return PTR_ERR(set); + +- if (!list_empty(&set->bindings) && +- (set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS))) ++ if (nft_set_is_anonymous(set)) ++ return -EOPNOTSUPP; ++ ++ if (!list_empty(&set->bindings) && (set->flags & NFT_SET_CONSTANT)) + return -EBUSY; + + nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla); +@@ -6938,29 +6939,6 @@ static int nf_tables_delsetelem(struct sk_buff *skb, + return err; + } + +-void nft_set_gc_batch_release(struct rcu_head *rcu) +-{ +- struct nft_set_gc_batch *gcb; +- unsigned int i; +- +- gcb = container_of(rcu, struct nft_set_gc_batch, head.rcu); +- for (i = 0; i < gcb->head.cnt; i++) +- nft_set_elem_destroy(gcb->head.set, gcb->elems[i], true); +- kfree(gcb); +-} +- +-struct nft_set_gc_batch *nft_set_gc_batch_alloc(const struct nft_set *set, +- gfp_t gfp) +-{ +- struct nft_set_gc_batch *gcb; +- +- gcb = kzalloc(sizeof(*gcb), gfp); +- if (gcb == NULL) +- return gcb; +- gcb->head.set = set; +- return gcb; +-} +- + /* + * Stateful objects + */ +@@ -9089,6 +9067,234 @@ void nft_chain_del(struct nft_chain *chain) + list_del_rcu(&chain->list); + } + ++static void nft_trans_gc_setelem_remove(struct nft_ctx *ctx, ++ struct nft_trans_gc *trans) ++{ ++ void **priv = trans->priv; ++ unsigned int i; ++ ++ for (i = 0; i < trans->count; i++) { ++ struct nft_set_elem elem = { ++ .priv = priv[i], ++ }; ++ ++ nft_setelem_data_deactivate(ctx->net, trans->set, &elem); ++ nft_setelem_remove(ctx->net, trans->set, &elem); ++ } ++} ++ ++void nft_trans_gc_destroy(struct nft_trans_gc *trans) ++{ ++ nft_set_put(trans->set); ++ put_net(trans->net); ++ kfree(trans); ++} ++ ++static void nft_trans_gc_trans_free(struct rcu_head *rcu) ++{ ++ struct nft_set_elem elem = {}; ++ struct nft_trans_gc *trans; ++ struct nft_ctx ctx = 
{}; ++ unsigned int i; ++ ++ trans = container_of(rcu, struct nft_trans_gc, rcu); ++ ctx.net = read_pnet(&trans->set->net); ++ ++ for (i = 0; i < trans->count; i++) { ++ elem.priv = trans->priv[i]; ++ if (!nft_setelem_is_catchall(trans->set, &elem)) ++ atomic_dec(&trans->set->nelems); ++ ++ nf_tables_set_elem_destroy(&ctx, trans->set, elem.priv); ++ } ++ ++ nft_trans_gc_destroy(trans); ++} ++ ++static bool nft_trans_gc_work_done(struct nft_trans_gc *trans) ++{ ++ struct nftables_pernet *nft_net; ++ struct nft_ctx ctx = {}; ++ ++ nft_net = nft_pernet(trans->net); ++ ++ mutex_lock(&nft_net->commit_mutex); ++ ++ /* Check for race with transaction, otherwise this batch refers to ++ * stale objects that might not be there anymore. Skip transaction if ++ * set has been destroyed from control plane transaction in case gc ++ * worker loses race. ++ */ ++ if (READ_ONCE(nft_net->gc_seq) != trans->seq || trans->set->dead) { ++ mutex_unlock(&nft_net->commit_mutex); ++ return false; ++ } ++ ++ ctx.net = trans->net; ++ ctx.table = trans->set->table; ++ ++ nft_trans_gc_setelem_remove(&ctx, trans); ++ mutex_unlock(&nft_net->commit_mutex); ++ ++ return true; ++} ++ ++static void nft_trans_gc_work(struct work_struct *work) ++{ ++ struct nft_trans_gc *trans, *next; ++ LIST_HEAD(trans_gc_list); ++ ++ spin_lock(&nf_tables_gc_list_lock); ++ list_splice_init(&nf_tables_gc_list, &trans_gc_list); ++ spin_unlock(&nf_tables_gc_list_lock); ++ ++ list_for_each_entry_safe(trans, next, &trans_gc_list, list) { ++ list_del(&trans->list); ++ if (!nft_trans_gc_work_done(trans)) { ++ nft_trans_gc_destroy(trans); ++ continue; ++ } ++ call_rcu(&trans->rcu, nft_trans_gc_trans_free); ++ } ++} ++ ++struct nft_trans_gc *nft_trans_gc_alloc(struct nft_set *set, ++ unsigned int gc_seq, gfp_t gfp) ++{ ++ struct net *net = read_pnet(&set->net); ++ struct nft_trans_gc *trans; ++ ++ trans = kzalloc(sizeof(*trans), gfp); ++ if (!trans) ++ return NULL; ++ ++ trans->net = maybe_get_net(net); ++ if (!trans->net) { ++ 
kfree(trans); ++ return NULL; ++ } ++ ++ refcount_inc(&set->refs); ++ trans->set = set; ++ trans->seq = gc_seq; ++ ++ return trans; ++} ++ ++void nft_trans_gc_elem_add(struct nft_trans_gc *trans, void *priv) ++{ ++ trans->priv[trans->count++] = priv; ++} ++ ++static void nft_trans_gc_queue_work(struct nft_trans_gc *trans) ++{ ++ spin_lock(&nf_tables_gc_list_lock); ++ list_add_tail(&trans->list, &nf_tables_gc_list); ++ spin_unlock(&nf_tables_gc_list_lock); ++ ++ schedule_work(&trans_gc_work); ++} ++ ++static int nft_trans_gc_space(struct nft_trans_gc *trans) ++{ ++ return NFT_TRANS_GC_BATCHCOUNT - trans->count; ++} ++ ++struct nft_trans_gc *nft_trans_gc_queue_async(struct nft_trans_gc *gc, ++ unsigned int gc_seq, gfp_t gfp) ++{ ++ struct nft_set *set; ++ ++ if (nft_trans_gc_space(gc)) ++ return gc; ++ ++ set = gc->set; ++ nft_trans_gc_queue_work(gc); ++ ++ return nft_trans_gc_alloc(set, gc_seq, gfp); ++} ++ ++void nft_trans_gc_queue_async_done(struct nft_trans_gc *trans) ++{ ++ if (trans->count == 0) { ++ nft_trans_gc_destroy(trans); ++ return; ++ } ++ ++ nft_trans_gc_queue_work(trans); ++} ++ ++struct nft_trans_gc *nft_trans_gc_queue_sync(struct nft_trans_gc *gc, gfp_t gfp) ++{ ++ struct nft_set *set; ++ ++ if (WARN_ON_ONCE(!lockdep_commit_lock_is_held(gc->net))) ++ return NULL; ++ ++ if (nft_trans_gc_space(gc)) ++ return gc; ++ ++ set = gc->set; ++ call_rcu(&gc->rcu, nft_trans_gc_trans_free); ++ ++ return nft_trans_gc_alloc(set, 0, gfp); ++} ++ ++void nft_trans_gc_queue_sync_done(struct nft_trans_gc *trans) ++{ ++ WARN_ON_ONCE(!lockdep_commit_lock_is_held(trans->net)); ++ ++ if (trans->count == 0) { ++ nft_trans_gc_destroy(trans); ++ return; ++ } ++ ++ call_rcu(&trans->rcu, nft_trans_gc_trans_free); ++} ++ ++static struct nft_trans_gc *nft_trans_gc_catchall(struct nft_trans_gc *gc, ++ unsigned int gc_seq, ++ bool sync) ++{ ++ struct nft_set_elem_catchall *catchall; ++ const struct nft_set *set = gc->set; ++ struct nft_set_ext *ext; ++ ++ 
list_for_each_entry_rcu(catchall, &set->catchall_list, list) { ++ ext = nft_set_elem_ext(set, catchall->elem); ++ ++ if (!nft_set_elem_expired(ext)) ++ continue; ++ if (nft_set_elem_is_dead(ext)) ++ goto dead_elem; ++ ++ nft_set_elem_dead(ext); ++dead_elem: ++ if (sync) ++ gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC); ++ else ++ gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC); ++ ++ if (!gc) ++ return NULL; ++ ++ nft_trans_gc_elem_add(gc, catchall->elem); ++ } ++ ++ return gc; ++} ++ ++struct nft_trans_gc *nft_trans_gc_catchall_async(struct nft_trans_gc *gc, ++ unsigned int gc_seq) ++{ ++ return nft_trans_gc_catchall(gc, gc_seq, false); ++} ++ ++struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc) ++{ ++ return nft_trans_gc_catchall(gc, 0, true); ++} ++ + static void nf_tables_module_autoload_cleanup(struct net *net) + { + struct nftables_pernet *nft_net = nft_pernet(net); +@@ -9247,15 +9453,31 @@ static void nft_set_commit_update(struct list_head *set_update_list) + } + } + ++static unsigned int nft_gc_seq_begin(struct nftables_pernet *nft_net) ++{ ++ unsigned int gc_seq; ++ ++ /* Bump gc counter, it becomes odd, this is the busy mark. */ ++ gc_seq = READ_ONCE(nft_net->gc_seq); ++ WRITE_ONCE(nft_net->gc_seq, ++gc_seq); ++ ++ return gc_seq; ++} ++ ++static void nft_gc_seq_end(struct nftables_pernet *nft_net, unsigned int gc_seq) ++{ ++ WRITE_ONCE(nft_net->gc_seq, ++gc_seq); ++} ++ + static int nf_tables_commit(struct net *net, struct sk_buff *skb) + { + struct nftables_pernet *nft_net = nft_pernet(net); + struct nft_trans *trans, *next; ++ unsigned int base_seq, gc_seq; + LIST_HEAD(set_update_list); + struct nft_trans_elem *te; + struct nft_chain *chain; + struct nft_table *table; +- unsigned int base_seq; + LIST_HEAD(adl); + int err; + +@@ -9332,6 +9554,8 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb) + + WRITE_ONCE(nft_net->base_seq, base_seq); + ++ gc_seq = nft_gc_seq_begin(nft_net); ++ + /* step 3. 
Start new generation, rules_gen_X now in use. */ + net->nft.gencursor = nft_gencursor_next(net); + +@@ -9420,6 +9644,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb) + nft_trans_destroy(trans); + break; + case NFT_MSG_DELSET: ++ nft_trans_set(trans)->dead = 1; + list_del_rcu(&nft_trans_set(trans)->list); + nf_tables_set_notify(&trans->ctx, nft_trans_set(trans), + NFT_MSG_DELSET, GFP_KERNEL); +@@ -9519,6 +9744,8 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb) + nft_commit_notify(net, NETLINK_CB(skb).portid); + nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN); + nf_tables_commit_audit_log(&adl, nft_net->base_seq); ++ ++ nft_gc_seq_end(nft_net, gc_seq); + nf_tables_commit_release(net); + + return 0; +@@ -9777,7 +10004,12 @@ static int nf_tables_abort(struct net *net, struct sk_buff *skb, + enum nfnl_abort_action action) + { + struct nftables_pernet *nft_net = nft_pernet(net); +- int ret = __nf_tables_abort(net, action); ++ unsigned int gc_seq; ++ int ret; ++ ++ gc_seq = nft_gc_seq_begin(nft_net); ++ ret = __nf_tables_abort(net, action); ++ nft_gc_seq_end(nft_net, gc_seq); + + mutex_unlock(&nft_net->commit_mutex); + +@@ -10440,7 +10672,7 @@ static void __nft_release_table(struct net *net, struct nft_table *table) + ctx.family = table->family; + ctx.table = table; + list_for_each_entry(chain, &table->chains, list) { +- if (nft_chain_is_bound(chain)) ++ if (nft_chain_binding(chain)) + continue; + + ctx.chain = chain; +@@ -10501,6 +10733,7 @@ static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event, + struct net *net = n->net; + unsigned int deleted; + bool restart = false; ++ unsigned int gc_seq; + + if (event != NETLINK_URELEASE || n->protocol != NETLINK_NETFILTER) + return NOTIFY_DONE; +@@ -10508,6 +10741,9 @@ static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event, + nft_net = nft_pernet(net); + deleted = 0; + mutex_lock(&nft_net->commit_mutex); ++ ++ gc_seq = nft_gc_seq_begin(nft_net); 
++ + if (!list_empty(&nf_tables_destroy_list)) + nf_tables_trans_destroy_flush_work(); + again: +@@ -10530,6 +10766,8 @@ again: + if (restart) + goto again; + } ++ nft_gc_seq_end(nft_net, gc_seq); ++ + mutex_unlock(&nft_net->commit_mutex); + + return NOTIFY_DONE; +@@ -10551,6 +10789,7 @@ static int __net_init nf_tables_init_net(struct net *net) + mutex_init(&nft_net->commit_mutex); + nft_net->base_seq = 1; + nft_net->validate_state = NFT_VALIDATE_SKIP; ++ nft_net->gc_seq = 0; + + return 0; + } +@@ -10567,22 +10806,36 @@ static void __net_exit nf_tables_pre_exit_net(struct net *net) + static void __net_exit nf_tables_exit_net(struct net *net) + { + struct nftables_pernet *nft_net = nft_pernet(net); ++ unsigned int gc_seq; + + mutex_lock(&nft_net->commit_mutex); ++ ++ gc_seq = nft_gc_seq_begin(nft_net); ++ + if (!list_empty(&nft_net->commit_list) || + !list_empty(&nft_net->module_list)) + __nf_tables_abort(net, NFNL_ABORT_NONE); ++ + __nft_release_tables(net); ++ ++ nft_gc_seq_end(nft_net, gc_seq); ++ + mutex_unlock(&nft_net->commit_mutex); + WARN_ON_ONCE(!list_empty(&nft_net->tables)); + WARN_ON_ONCE(!list_empty(&nft_net->module_list)); + WARN_ON_ONCE(!list_empty(&nft_net->notify_list)); + } + ++static void nf_tables_exit_batch(struct list_head *net_exit_list) ++{ ++ flush_work(&trans_gc_work); ++} ++ + static struct pernet_operations nf_tables_net_ops = { + .init = nf_tables_init_net, + .pre_exit = nf_tables_pre_exit_net, + .exit = nf_tables_exit_net, ++ .exit_batch = nf_tables_exit_batch, + .id = &nf_tables_net_id, + .size = sizeof(struct nftables_pernet), + }; +@@ -10654,6 +10907,7 @@ static void __exit nf_tables_module_exit(void) + nft_chain_filter_fini(); + nft_chain_route_fini(); + unregister_pernet_subsys(&nf_tables_net_ops); ++ cancel_work_sync(&trans_gc_work); + cancel_work_sync(&trans_destroy_work); + rcu_barrier(); + rhltable_destroy(&nft_objname_ht); +diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c +index 
0b73cb0e752f7..2013de934cef0 100644 +--- a/net/netfilter/nft_set_hash.c ++++ b/net/netfilter/nft_set_hash.c +@@ -59,6 +59,8 @@ static inline int nft_rhash_cmp(struct rhashtable_compare_arg *arg, + + if (memcmp(nft_set_ext_key(&he->ext), x->key, x->set->klen)) + return 1; ++ if (nft_set_elem_is_dead(&he->ext)) ++ return 1; + if (nft_set_elem_expired(&he->ext)) + return 1; + if (!nft_set_elem_active(&he->ext, x->genmask)) +@@ -188,7 +190,6 @@ static void nft_rhash_activate(const struct net *net, const struct nft_set *set, + struct nft_rhash_elem *he = elem->priv; + + nft_set_elem_change_active(net, set, &he->ext); +- nft_set_elem_clear_busy(&he->ext); + } + + static bool nft_rhash_flush(const struct net *net, +@@ -196,12 +197,9 @@ static bool nft_rhash_flush(const struct net *net, + { + struct nft_rhash_elem *he = priv; + +- if (!nft_set_elem_mark_busy(&he->ext) || +- !nft_is_active(net, &he->ext)) { +- nft_set_elem_change_active(net, set, &he->ext); +- return true; +- } +- return false; ++ nft_set_elem_change_active(net, set, &he->ext); ++ ++ return true; + } + + static void *nft_rhash_deactivate(const struct net *net, +@@ -218,9 +216,8 @@ static void *nft_rhash_deactivate(const struct net *net, + + rcu_read_lock(); + he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params); +- if (he != NULL && +- !nft_rhash_flush(net, set, he)) +- he = NULL; ++ if (he) ++ nft_set_elem_change_active(net, set, &he->ext); + + rcu_read_unlock(); + +@@ -252,7 +249,9 @@ static bool nft_rhash_delete(const struct nft_set *set, + if (he == NULL) + return false; + +- return rhashtable_remove_fast(&priv->ht, &he->node, nft_rhash_params) == 0; ++ nft_set_elem_dead(&he->ext); ++ ++ return true; + } + + static void nft_rhash_walk(const struct nft_ctx *ctx, struct nft_set *set, +@@ -278,8 +277,6 @@ static void nft_rhash_walk(const struct nft_ctx *ctx, struct nft_set *set, + + if (iter->count < iter->skip) + goto cont; +- if (nft_set_elem_expired(&he->ext)) +- goto cont; + if 
(!nft_set_elem_active(&he->ext, iter->genmask)) + goto cont; + +@@ -314,25 +311,48 @@ static bool nft_rhash_expr_needs_gc_run(const struct nft_set *set, + + static void nft_rhash_gc(struct work_struct *work) + { ++ struct nftables_pernet *nft_net; + struct nft_set *set; + struct nft_rhash_elem *he; + struct nft_rhash *priv; +- struct nft_set_gc_batch *gcb = NULL; + struct rhashtable_iter hti; ++ struct nft_trans_gc *gc; ++ struct net *net; ++ u32 gc_seq; + + priv = container_of(work, struct nft_rhash, gc_work.work); + set = nft_set_container_of(priv); ++ net = read_pnet(&set->net); ++ nft_net = nft_pernet(net); ++ gc_seq = READ_ONCE(nft_net->gc_seq); ++ ++ if (nft_set_gc_is_pending(set)) ++ goto done; ++ ++ gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL); ++ if (!gc) ++ goto done; + + rhashtable_walk_enter(&priv->ht, &hti); + rhashtable_walk_start(&hti); + + while ((he = rhashtable_walk_next(&hti))) { + if (IS_ERR(he)) { +- if (PTR_ERR(he) != -EAGAIN) +- break; +- continue; ++ nft_trans_gc_destroy(gc); ++ gc = NULL; ++ goto try_later; ++ } ++ ++ /* Ruleset has been updated, try later. 
*/ ++ if (READ_ONCE(nft_net->gc_seq) != gc_seq) { ++ nft_trans_gc_destroy(gc); ++ gc = NULL; ++ goto try_later; + } + ++ if (nft_set_elem_is_dead(&he->ext)) ++ goto dead_elem; ++ + if (nft_set_ext_exists(&he->ext, NFT_SET_EXT_EXPRESSIONS) && + nft_rhash_expr_needs_gc_run(set, &he->ext)) + goto needs_gc_run; +@@ -340,26 +360,26 @@ static void nft_rhash_gc(struct work_struct *work) + if (!nft_set_elem_expired(&he->ext)) + continue; + needs_gc_run: +- if (nft_set_elem_mark_busy(&he->ext)) +- continue; ++ nft_set_elem_dead(&he->ext); ++dead_elem: ++ gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC); ++ if (!gc) ++ goto try_later; + +- gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC); +- if (gcb == NULL) +- break; +- rhashtable_remove_fast(&priv->ht, &he->node, nft_rhash_params); +- atomic_dec(&set->nelems); +- nft_set_gc_batch_add(gcb, he); ++ nft_trans_gc_elem_add(gc, he); + } ++ ++ gc = nft_trans_gc_catchall_async(gc, gc_seq); ++ ++try_later: ++ /* catchall list iteration requires rcu read side lock. 
*/ + rhashtable_walk_stop(&hti); + rhashtable_walk_exit(&hti); + +- he = nft_set_catchall_gc(set); +- if (he) { +- gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC); +- if (gcb) +- nft_set_gc_batch_add(gcb, he); +- } +- nft_set_gc_batch_complete(gcb); ++ if (gc) ++ nft_trans_gc_queue_async_done(gc); ++ ++done: + queue_delayed_work(system_power_efficient_wq, &priv->gc_work, + nft_set_gc_interval(set)); + } +@@ -394,7 +414,7 @@ static int nft_rhash_init(const struct nft_set *set, + return err; + + INIT_DEFERRABLE_WORK(&priv->gc_work, nft_rhash_gc); +- if (set->flags & NFT_SET_TIMEOUT) ++ if (set->flags & (NFT_SET_TIMEOUT | NFT_SET_EVAL)) + nft_rhash_gc_init(set); + + return 0; +@@ -422,7 +442,6 @@ static void nft_rhash_destroy(const struct nft_ctx *ctx, + }; + + cancel_delayed_work_sync(&priv->gc_work); +- rcu_barrier(); + rhashtable_free_and_destroy(&priv->ht, nft_rhash_elem_destroy, + (void *)&rhash_ctx); + } +diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c +index 8c16681884b7e..deea6196d9925 100644 +--- a/net/netfilter/nft_set_pipapo.c ++++ b/net/netfilter/nft_set_pipapo.c +@@ -566,8 +566,9 @@ next_match: + goto out; + + if (last) { +- if (nft_set_elem_expired(&f->mt[b].e->ext) || +- (genmask && ++ if (nft_set_elem_expired(&f->mt[b].e->ext)) ++ goto next_match; ++ if ((genmask && + !nft_set_elem_active(&f->mt[b].e->ext, genmask))) + goto next_match; + +@@ -602,7 +603,7 @@ static void *nft_pipapo_get(const struct net *net, const struct nft_set *set, + const struct nft_set_elem *elem, unsigned int flags) + { + return pipapo_get(net, set, (const u8 *)elem->key.val.data, +- nft_genmask_cur(net)); ++ nft_genmask_cur(net)); + } + + /** +@@ -1536,16 +1537,34 @@ static void pipapo_drop(struct nft_pipapo_match *m, + } + } + ++static void nft_pipapo_gc_deactivate(struct net *net, struct nft_set *set, ++ struct nft_pipapo_elem *e) ++ ++{ ++ struct nft_set_elem elem = { ++ .priv = e, ++ }; ++ ++ nft_setelem_data_deactivate(net, set, &elem); ++} 
++ + /** + * pipapo_gc() - Drop expired entries from set, destroy start and end elements +- * @set: nftables API set representation ++ * @_set: nftables API set representation + * @m: Matching data + */ +-static void pipapo_gc(const struct nft_set *set, struct nft_pipapo_match *m) ++static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m) + { ++ struct nft_set *set = (struct nft_set *) _set; + struct nft_pipapo *priv = nft_set_priv(set); ++ struct net *net = read_pnet(&set->net); + int rules_f0, first_rule = 0; + struct nft_pipapo_elem *e; ++ struct nft_trans_gc *gc; ++ ++ gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL); ++ if (!gc) ++ return; + + while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) { + union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS]; +@@ -1569,13 +1588,20 @@ static void pipapo_gc(const struct nft_set *set, struct nft_pipapo_match *m) + f--; + i--; + e = f->mt[rulemap[i].to].e; +- if (nft_set_elem_expired(&e->ext) && +- !nft_set_elem_mark_busy(&e->ext)) { ++ ++ /* synchronous gc never fails, there is no need to set on ++ * NFT_SET_ELEM_DEAD_BIT. ++ */ ++ if (nft_set_elem_expired(&e->ext)) { + priv->dirty = true; +- pipapo_drop(m, rulemap); + +- rcu_barrier(); +- nft_set_elem_destroy(set, e, true); ++ gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC); ++ if (!gc) ++ return; ++ ++ nft_pipapo_gc_deactivate(net, set, e); ++ pipapo_drop(m, rulemap); ++ nft_trans_gc_elem_add(gc, e); + + /* And check again current first rule, which is now the + * first we haven't checked. 
+@@ -1585,11 +1611,11 @@ static void pipapo_gc(const struct nft_set *set, struct nft_pipapo_match *m) + } + } + +- e = nft_set_catchall_gc(set); +- if (e) +- nft_set_elem_destroy(set, e, true); +- +- priv->last_gc = jiffies; ++ gc = nft_trans_gc_catchall_sync(gc); ++ if (gc) { ++ nft_trans_gc_queue_sync_done(gc); ++ priv->last_gc = jiffies; ++ } + } + + /** +@@ -1718,14 +1744,9 @@ static void nft_pipapo_activate(const struct net *net, + const struct nft_set *set, + const struct nft_set_elem *elem) + { +- struct nft_pipapo_elem *e; +- +- e = pipapo_get(net, set, (const u8 *)elem->key.val.data, 0); +- if (IS_ERR(e)) +- return; ++ struct nft_pipapo_elem *e = elem->priv; + + nft_set_elem_change_active(net, set, &e->ext); +- nft_set_elem_clear_busy(&e->ext); + } + + /** +@@ -1937,10 +1958,6 @@ static void nft_pipapo_remove(const struct net *net, const struct nft_set *set, + + data = (const u8 *)nft_set_ext_key(&e->ext); + +- e = pipapo_get(net, set, data, 0); +- if (IS_ERR(e)) +- return; +- + while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) { + union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS]; + const u8 *match_start, *match_end; +@@ -2024,8 +2041,6 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set, + goto cont; + + e = f->mt[r].e; +- if (nft_set_elem_expired(&e->ext)) +- goto cont; + + elem.priv = e; + +diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c +index 8d73fffd2d09d..487572dcd6144 100644 +--- a/net/netfilter/nft_set_rbtree.c ++++ b/net/netfilter/nft_set_rbtree.c +@@ -46,6 +46,12 @@ static int nft_rbtree_cmp(const struct nft_set *set, + set->klen); + } + ++static bool nft_rbtree_elem_expired(const struct nft_rbtree_elem *rbe) ++{ ++ return nft_set_elem_expired(&rbe->ext) || ++ nft_set_elem_is_dead(&rbe->ext); ++} ++ + static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set, + const u32 *key, const struct nft_set_ext **ext, + unsigned int seq) +@@ -80,7 +86,7 @@ 
static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set + continue; + } + +- if (nft_set_elem_expired(&rbe->ext)) ++ if (nft_rbtree_elem_expired(rbe)) + return false; + + if (nft_rbtree_interval_end(rbe)) { +@@ -98,7 +104,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set + + if (set->flags & NFT_SET_INTERVAL && interval != NULL && + nft_set_elem_active(&interval->ext, genmask) && +- !nft_set_elem_expired(&interval->ext) && ++ !nft_rbtree_elem_expired(interval) && + nft_rbtree_interval_start(interval)) { + *ext = &interval->ext; + return true; +@@ -215,6 +221,18 @@ static void *nft_rbtree_get(const struct net *net, const struct nft_set *set, + return rbe; + } + ++static void nft_rbtree_gc_remove(struct net *net, struct nft_set *set, ++ struct nft_rbtree *priv, ++ struct nft_rbtree_elem *rbe) ++{ ++ struct nft_set_elem elem = { ++ .priv = rbe, ++ }; ++ ++ nft_setelem_data_deactivate(net, set, &elem); ++ rb_erase(&rbe->node, &priv->root); ++} ++ + static int nft_rbtree_gc_elem(const struct nft_set *__set, + struct nft_rbtree *priv, + struct nft_rbtree_elem *rbe, +@@ -222,11 +240,12 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set, + { + struct nft_set *set = (struct nft_set *)__set; + struct rb_node *prev = rb_prev(&rbe->node); ++ struct net *net = read_pnet(&set->net); + struct nft_rbtree_elem *rbe_prev; +- struct nft_set_gc_batch *gcb; ++ struct nft_trans_gc *gc; + +- gcb = nft_set_gc_batch_check(set, NULL, GFP_ATOMIC); +- if (!gcb) ++ gc = nft_trans_gc_alloc(set, 0, GFP_ATOMIC); ++ if (!gc) + return -ENOMEM; + + /* search for end interval coming before this element. 
+@@ -244,17 +263,28 @@ static int nft_rbtree_gc_elem(const struct nft_set *__set, + + if (prev) { + rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node); ++ nft_rbtree_gc_remove(net, set, priv, rbe_prev); ++ ++ /* There is always room in this trans gc for this element, ++ * memory allocation never actually happens, hence, the warning ++ * splat in such case. No need to set NFT_SET_ELEM_DEAD_BIT, ++ * this is synchronous gc which never fails. ++ */ ++ gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC); ++ if (WARN_ON_ONCE(!gc)) ++ return -ENOMEM; + +- rb_erase(&rbe_prev->node, &priv->root); +- atomic_dec(&set->nelems); +- nft_set_gc_batch_add(gcb, rbe_prev); ++ nft_trans_gc_elem_add(gc, rbe_prev); + } + +- rb_erase(&rbe->node, &priv->root); +- atomic_dec(&set->nelems); ++ nft_rbtree_gc_remove(net, set, priv, rbe); ++ gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC); ++ if (WARN_ON_ONCE(!gc)) ++ return -ENOMEM; ++ ++ nft_trans_gc_elem_add(gc, rbe); + +- nft_set_gc_batch_add(gcb, rbe); +- nft_set_gc_batch_complete(gcb); ++ nft_trans_gc_queue_sync_done(gc); + + return 0; + } +@@ -282,6 +312,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, + struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL; + struct rb_node *node, *next, *parent, **p, *first = NULL; + struct nft_rbtree *priv = nft_set_priv(set); ++ u8 cur_genmask = nft_genmask_cur(net); + u8 genmask = nft_genmask_next(net); + int d, err; + +@@ -327,8 +358,11 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set, + if (!nft_set_elem_active(&rbe->ext, genmask)) + continue; + +- /* perform garbage collection to avoid bogus overlap reports. */ +- if (nft_set_elem_expired(&rbe->ext)) { ++ /* perform garbage collection to avoid bogus overlap reports ++ * but skip new elements in this transaction. 
++ */ ++ if (nft_set_elem_expired(&rbe->ext) && ++ nft_set_elem_active(&rbe->ext, cur_genmask)) { + err = nft_rbtree_gc_elem(set, priv, rbe, genmask); + if (err < 0) + return err; +@@ -482,7 +516,6 @@ static void nft_rbtree_activate(const struct net *net, + struct nft_rbtree_elem *rbe = elem->priv; + + nft_set_elem_change_active(net, set, &rbe->ext); +- nft_set_elem_clear_busy(&rbe->ext); + } + + static bool nft_rbtree_flush(const struct net *net, +@@ -490,12 +523,9 @@ static bool nft_rbtree_flush(const struct net *net, + { + struct nft_rbtree_elem *rbe = priv; + +- if (!nft_set_elem_mark_busy(&rbe->ext) || +- !nft_is_active(net, &rbe->ext)) { +- nft_set_elem_change_active(net, set, &rbe->ext); +- return true; +- } +- return false; ++ nft_set_elem_change_active(net, set, &rbe->ext); ++ ++ return true; + } + + static void *nft_rbtree_deactivate(const struct net *net, +@@ -552,8 +582,6 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx, + + if (iter->count < iter->skip) + goto cont; +- if (nft_set_elem_expired(&rbe->ext)) +- goto cont; + if (!nft_set_elem_active(&rbe->ext, iter->genmask)) + goto cont; + +@@ -572,26 +600,42 @@ cont: + + static void nft_rbtree_gc(struct work_struct *work) + { +- struct nft_rbtree_elem *rbe, *rbe_end = NULL, *rbe_prev = NULL; +- struct nft_set_gc_batch *gcb = NULL; ++ struct nft_rbtree_elem *rbe, *rbe_end = NULL; ++ struct nftables_pernet *nft_net; + struct nft_rbtree *priv; ++ struct nft_trans_gc *gc; + struct rb_node *node; + struct nft_set *set; ++ unsigned int gc_seq; + struct net *net; +- u8 genmask; + + priv = container_of(work, struct nft_rbtree, gc_work.work); + set = nft_set_container_of(priv); + net = read_pnet(&set->net); +- genmask = nft_genmask_cur(net); ++ nft_net = nft_pernet(net); ++ gc_seq = READ_ONCE(nft_net->gc_seq); + +- write_lock_bh(&priv->lock); +- write_seqcount_begin(&priv->count); ++ if (nft_set_gc_is_pending(set)) ++ goto done; ++ ++ gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL); ++ if (!gc) ++ goto 
done; ++ ++ read_lock_bh(&priv->lock); + for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) { ++ ++ /* Ruleset has been updated, try later. */ ++ if (READ_ONCE(nft_net->gc_seq) != gc_seq) { ++ nft_trans_gc_destroy(gc); ++ gc = NULL; ++ goto try_later; ++ } ++ + rbe = rb_entry(node, struct nft_rbtree_elem, node); + +- if (!nft_set_elem_active(&rbe->ext, genmask)) +- continue; ++ if (nft_set_elem_is_dead(&rbe->ext)) ++ goto dead_elem; + + /* elements are reversed in the rbtree for historical reasons, + * from highest to lowest value, that is why end element is +@@ -604,46 +648,35 @@ static void nft_rbtree_gc(struct work_struct *work) + if (!nft_set_elem_expired(&rbe->ext)) + continue; + +- if (nft_set_elem_mark_busy(&rbe->ext)) { +- rbe_end = NULL; ++ nft_set_elem_dead(&rbe->ext); ++ ++ if (!rbe_end) + continue; +- } + +- if (rbe_prev) { +- rb_erase(&rbe_prev->node, &priv->root); +- rbe_prev = NULL; +- } +- gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC); +- if (!gcb) +- break; ++ nft_set_elem_dead(&rbe_end->ext); + +- atomic_dec(&set->nelems); +- nft_set_gc_batch_add(gcb, rbe); +- rbe_prev = rbe; ++ gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC); ++ if (!gc) ++ goto try_later; + +- if (rbe_end) { +- atomic_dec(&set->nelems); +- nft_set_gc_batch_add(gcb, rbe_end); +- rb_erase(&rbe_end->node, &priv->root); +- rbe_end = NULL; +- } +- node = rb_next(node); +- if (!node) +- break; +- } +- if (rbe_prev) +- rb_erase(&rbe_prev->node, &priv->root); +- write_seqcount_end(&priv->count); +- write_unlock_bh(&priv->lock); ++ nft_trans_gc_elem_add(gc, rbe_end); ++ rbe_end = NULL; ++dead_elem: ++ gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC); ++ if (!gc) ++ goto try_later; + +- rbe = nft_set_catchall_gc(set); +- if (rbe) { +- gcb = nft_set_gc_batch_check(set, gcb, GFP_ATOMIC); +- if (gcb) +- nft_set_gc_batch_add(gcb, rbe); ++ nft_trans_gc_elem_add(gc, rbe); + } +- nft_set_gc_batch_complete(gcb); + ++ gc = nft_trans_gc_catchall_async(gc, 
gc_seq); ++ ++try_later: ++ read_unlock_bh(&priv->lock); ++ ++ if (gc) ++ nft_trans_gc_queue_async_done(gc); ++done: + queue_delayed_work(system_power_efficient_wq, &priv->gc_work, + nft_set_gc_interval(set)); + } +diff --git a/net/rds/rdma_transport.c b/net/rds/rdma_transport.c +index d36f3f6b43510..b15cf316b23a2 100644 +--- a/net/rds/rdma_transport.c ++++ b/net/rds/rdma_transport.c +@@ -86,11 +86,13 @@ static int rds_rdma_cm_event_handler_cmn(struct rdma_cm_id *cm_id, + break; + + case RDMA_CM_EVENT_ADDR_RESOLVED: +- rdma_set_service_type(cm_id, conn->c_tos); +- rdma_set_min_rnr_timer(cm_id, IB_RNR_TIMER_000_32); +- /* XXX do we need to clean up if this fails? */ +- ret = rdma_resolve_route(cm_id, +- RDS_RDMA_RESOLVE_TIMEOUT_MS); ++ if (conn) { ++ rdma_set_service_type(cm_id, conn->c_tos); ++ rdma_set_min_rnr_timer(cm_id, IB_RNR_TIMER_000_32); ++ /* XXX do we need to clean up if this fails? */ ++ ret = rdma_resolve_route(cm_id, ++ RDS_RDMA_RESOLVE_TIMEOUT_MS); ++ } + break; + + case RDMA_CM_EVENT_ROUTE_RESOLVED: +diff --git a/net/smc/smc_stats.h b/net/smc/smc_stats.h +index 84b7ecd8c05ca..4dbc237b7c19e 100644 +--- a/net/smc/smc_stats.h ++++ b/net/smc/smc_stats.h +@@ -244,8 +244,9 @@ while (0) + #define SMC_STAT_SERV_SUCC_INC(net, _ini) \ + do { \ + typeof(_ini) i = (_ini); \ +- bool is_v2 = (i->smcd_version & SMC_V2); \ + bool is_smcd = (i->is_smcd); \ ++ u8 version = is_smcd ? 
i->smcd_version : i->smcr_version; \ ++ bool is_v2 = (version & SMC_V2); \ + typeof(net->smc.smc_stats) smc_stats = (net)->smc.smc_stats; \ + if (is_v2 && is_smcd) \ + this_cpu_inc(smc_stats->smc[SMC_TYPE_D].srv_v2_succ_cnt); \ +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c +index b0258507b236c..2b803383c7b31 100644 +--- a/net/sunrpc/clnt.c ++++ b/net/sunrpc/clnt.c +@@ -2462,8 +2462,7 @@ call_status(struct rpc_task *task) + goto out_exit; + } + task->tk_action = call_encode; +- if (status != -ECONNRESET && status != -ECONNABORTED) +- rpc_check_timeout(task); ++ rpc_check_timeout(task); + return; + out_exit: + rpc_call_rpcerror(task, status); +@@ -2736,6 +2735,7 @@ out_msg_denied: + case rpc_autherr_rejectedverf: + case rpcsec_gsserr_credproblem: + case rpcsec_gsserr_ctxproblem: ++ rpcauth_invalcred(task); + if (!task->tk_cred_retry) + break; + task->tk_cred_retry--; +@@ -2889,19 +2889,22 @@ static const struct rpc_call_ops rpc_cb_add_xprt_call_ops = { + * @clnt: pointer to struct rpc_clnt + * @xps: pointer to struct rpc_xprt_switch, + * @xprt: pointer struct rpc_xprt +- * @dummy: unused ++ * @in_max_connect: pointer to the max_connect value for the passed in xprt transport + */ + int rpc_clnt_test_and_add_xprt(struct rpc_clnt *clnt, + struct rpc_xprt_switch *xps, struct rpc_xprt *xprt, +- void *dummy) ++ void *in_max_connect) + { + struct rpc_cb_add_xprt_calldata *data; + struct rpc_task *task; ++ int max_connect = clnt->cl_max_connect; + +- if (xps->xps_nunique_destaddr_xprts + 1 > clnt->cl_max_connect) { ++ if (in_max_connect) ++ max_connect = *(int *)in_max_connect; ++ if (xps->xps_nunique_destaddr_xprts + 1 > max_connect) { + rcu_read_lock(); + pr_warn("SUNRPC: reached max allowed number (%d) did not add " +- "transport to server: %s\n", clnt->cl_max_connect, ++ "transport to server: %s\n", max_connect, + rpc_peeraddr2str(clnt, RPC_DISPLAY_ADDR)); + rcu_read_unlock(); + return -EINVAL; +diff --git a/security/smack/smack.h b/security/smack/smack.h +index 
e2239be7bd60a..aa15ff56ed6e7 100644 +--- a/security/smack/smack.h ++++ b/security/smack/smack.h +@@ -120,6 +120,7 @@ struct inode_smack { + struct task_smack { + struct smack_known *smk_task; /* label for access control */ + struct smack_known *smk_forked; /* label when forked */ ++ struct smack_known *smk_transmuted;/* label when transmuted */ + struct list_head smk_rules; /* per task access rules */ + struct mutex smk_rules_lock; /* lock for the rules */ + struct list_head smk_relabel; /* transit allowed labels */ +diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c +index 67dcd31cd3f3d..cd6a03e945eb7 100644 +--- a/security/smack/smack_lsm.c ++++ b/security/smack/smack_lsm.c +@@ -999,8 +999,9 @@ static int smack_inode_init_security(struct inode *inode, struct inode *dir, + const struct qstr *qstr, const char **name, + void **value, size_t *len) + { ++ struct task_smack *tsp = smack_cred(current_cred()); + struct inode_smack *issp = smack_inode(inode); +- struct smack_known *skp = smk_of_current(); ++ struct smack_known *skp = smk_of_task(tsp); + struct smack_known *isp = smk_of_inode(inode); + struct smack_known *dsp = smk_of_inode(dir); + int may; +@@ -1009,20 +1010,34 @@ static int smack_inode_init_security(struct inode *inode, struct inode *dir, + *name = XATTR_SMACK_SUFFIX; + + if (value && len) { +- rcu_read_lock(); +- may = smk_access_entry(skp->smk_known, dsp->smk_known, +- &skp->smk_rules); +- rcu_read_unlock(); ++ /* ++ * If equal, transmuting already occurred in ++ * smack_dentry_create_files_as(). No need to check again. ++ */ ++ if (tsp->smk_task != tsp->smk_transmuted) { ++ rcu_read_lock(); ++ may = smk_access_entry(skp->smk_known, dsp->smk_known, ++ &skp->smk_rules); ++ rcu_read_unlock(); ++ } + + /* +- * If the access rule allows transmutation and +- * the directory requests transmutation then +- * by all means transmute. 
++ * In addition to having smk_task equal to smk_transmuted, ++ * if the access rule allows transmutation and the directory ++ * requests transmutation then by all means transmute. + * Mark the inode as changed. + */ +- if (may > 0 && ((may & MAY_TRANSMUTE) != 0) && +- smk_inode_transmutable(dir)) { +- isp = dsp; ++ if ((tsp->smk_task == tsp->smk_transmuted) || ++ (may > 0 && ((may & MAY_TRANSMUTE) != 0) && ++ smk_inode_transmutable(dir))) { ++ /* ++ * The caller of smack_dentry_create_files_as() ++ * should have overridden the current cred, so the ++ * inode label was already set correctly in ++ * smack_inode_alloc_security(). ++ */ ++ if (tsp->smk_task != tsp->smk_transmuted) ++ isp = dsp; + issp->smk_flags |= SMK_INODE_CHANGED; + } + +@@ -1461,10 +1476,19 @@ static int smack_inode_getsecurity(struct user_namespace *mnt_userns, + struct super_block *sbp; + struct inode *ip = (struct inode *)inode; + struct smack_known *isp; ++ struct inode_smack *ispp; ++ size_t label_len; ++ char *label = NULL; + +- if (strcmp(name, XATTR_SMACK_SUFFIX) == 0) ++ if (strcmp(name, XATTR_SMACK_SUFFIX) == 0) { + isp = smk_of_inode(inode); +- else { ++ } else if (strcmp(name, XATTR_SMACK_TRANSMUTE) == 0) { ++ ispp = smack_inode(inode); ++ if (ispp->smk_flags & SMK_INODE_TRANSMUTE) ++ label = TRANS_TRUE; ++ else ++ label = ""; ++ } else { + /* + * The rest of the Smack xattrs are only on sockets. 
+ */ +@@ -1486,13 +1510,18 @@ static int smack_inode_getsecurity(struct user_namespace *mnt_userns, + return -EOPNOTSUPP; + } + ++ if (!label) ++ label = isp->smk_known; ++ ++ label_len = strlen(label); ++ + if (alloc) { +- *buffer = kstrdup(isp->smk_known, GFP_KERNEL); ++ *buffer = kstrdup(label, GFP_KERNEL); + if (*buffer == NULL) + return -ENOMEM; + } + +- return strlen(isp->smk_known); ++ return label_len; + } + + +@@ -4750,8 +4779,10 @@ static int smack_dentry_create_files_as(struct dentry *dentry, int mode, + * providing access is transmuting use the containing + * directory label instead of the process label. + */ +- if (may > 0 && (may & MAY_TRANSMUTE)) ++ if (may > 0 && (may & MAY_TRANSMUTE)) { + ntsp->smk_task = isp->smk_inode; ++ ntsp->smk_transmuted = ntsp->smk_task; ++ } + } + return 0; + } +diff --git a/sound/hda/intel-sdw-acpi.c b/sound/hda/intel-sdw-acpi.c +index 5cb92f7ccbcac..b57d72ea4503f 100644 +--- a/sound/hda/intel-sdw-acpi.c ++++ b/sound/hda/intel-sdw-acpi.c +@@ -23,7 +23,7 @@ static int ctrl_link_mask; + module_param_named(sdw_link_mask, ctrl_link_mask, int, 0444); + MODULE_PARM_DESC(sdw_link_mask, "Intel link mask (one bit per link)"); + +-static bool is_link_enabled(struct fwnode_handle *fw_node, int i) ++static bool is_link_enabled(struct fwnode_handle *fw_node, u8 idx) + { + struct fwnode_handle *link; + char name[32]; +@@ -31,7 +31,7 @@ static bool is_link_enabled(struct fwnode_handle *fw_node, int i) + + /* Find master handle */ + snprintf(name, sizeof(name), +- "mipi-sdw-link-%d-subproperties", i); ++ "mipi-sdw-link-%hhu-subproperties", idx); + + link = fwnode_get_named_child_node(fw_node, name); + if (!link) +@@ -51,8 +51,8 @@ static int + sdw_intel_scan_controller(struct sdw_intel_acpi_info *info) + { + struct acpi_device *adev = acpi_fetch_acpi_dev(info->handle); +- int ret, i; +- u8 count; ++ u8 count, i; ++ int ret; + + if (!adev) + return -EINVAL; +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 
3226691ac923c..54f4b593a1158 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -2208,6 +2208,7 @@ static const struct snd_pci_quirk power_save_denylist[] = { + SND_PCI_QUIRK(0x8086, 0x2068, "Intel NUC7i3BNB", 0), + /* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */ + SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0), ++ SND_PCI_QUIRK(0x17aa, 0x316e, "Lenovo ThinkCentre M70q", 0), + /* https://bugzilla.redhat.com/show_bug.cgi?id=1689623 */ + SND_PCI_QUIRK(0x17aa, 0x367b, "Lenovo IdeaCentre B550", 0), + /* https://bugzilla.redhat.com/show_bug.cgi?id=1572975 */ +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index f70e0ad81607e..57e07aa4e136c 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -9657,7 +9657,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1043, 0x1d1f, "ASUS ROG Strix G17 2023 (G713PV)", ALC287_FIXUP_CS35L41_I2C_2), + SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401), + SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE), +- SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2), ++ SND_PCI_QUIRK(0x1043, 0x16a3, "ASUS UX3402VA", ALC245_FIXUP_CS35L41_SPI_2), + SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), + SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2), + SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS), +diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c +index 9a9571c3f08c0..533250efcbd83 100644 +--- a/sound/soc/amd/yc/acp6x-mach.c ++++ b/sound/soc/amd/yc/acp6x-mach.c +@@ -213,6 +213,20 @@ static const struct dmi_system_id yc_acp_quirk_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "21J6"), + } + }, ++ { ++ .driver_data = &acp6x_card, ++ .matches = { ++ 
DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "82TL"), ++ } ++ }, ++ { ++ .driver_data = &acp6x_card, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "82QF"), ++ } ++ }, + { + .driver_data = &acp6x_card, + .matches = { +@@ -220,6 +234,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "82V2"), + } + }, ++ { ++ .driver_data = &acp6x_card, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "82UG"), ++ } ++ }, + { + .driver_data = &acp6x_card, + .matches = { +diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c +index 2fefbcf7bd130..735061690ded0 100644 +--- a/sound/soc/codecs/cs42l42.c ++++ b/sound/soc/codecs/cs42l42.c +@@ -2280,6 +2280,16 @@ int cs42l42_common_probe(struct cs42l42_private *cs42l42, + + if (cs42l42->reset_gpio) { + dev_dbg(cs42l42->dev, "Found reset GPIO\n"); ++ ++ /* ++ * ACPI can override the default GPIO state we requested ++ * so ensure that we start with RESET low. 
++ */ ++ gpiod_set_value_cansleep(cs42l42->reset_gpio, 0); ++ ++ /* Ensure minimum reset pulse width */ ++ usleep_range(10, 500); ++ + gpiod_set_value_cansleep(cs42l42->reset_gpio, 1); + } + usleep_range(CS42L42_BOOT_TIME_US, CS42L42_BOOT_TIME_US * 2); +diff --git a/sound/soc/codecs/rt5640.c b/sound/soc/codecs/rt5640.c +index a7071d0a2562f..37ea4d854cb58 100644 +--- a/sound/soc/codecs/rt5640.c ++++ b/sound/soc/codecs/rt5640.c +@@ -2562,10 +2562,9 @@ static void rt5640_enable_jack_detect(struct snd_soc_component *component, + if (jack_data && jack_data->use_platform_clock) + rt5640->use_platform_clock = jack_data->use_platform_clock; + +- ret = devm_request_threaded_irq(component->dev, rt5640->irq, +- NULL, rt5640_irq, +- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT, +- "rt5640", rt5640); ++ ret = request_irq(rt5640->irq, rt5640_irq, ++ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT, ++ "rt5640", rt5640); + if (ret) { + dev_warn(component->dev, "Failed to reguest IRQ %d: %d\n", rt5640->irq, ret); + rt5640_disable_jack_detect(component); +@@ -2618,14 +2617,14 @@ static void rt5640_enable_hda_jack_detect( + + rt5640->jack = jack; + +- ret = devm_request_threaded_irq(component->dev, rt5640->irq, +- NULL, rt5640_irq, IRQF_TRIGGER_RISING | IRQF_ONESHOT, +- "rt5640", rt5640); ++ ret = request_irq(rt5640->irq, rt5640_irq, ++ IRQF_TRIGGER_RISING | IRQF_ONESHOT, "rt5640", rt5640); + if (ret) { + dev_warn(component->dev, "Failed to reguest IRQ %d: %d\n", rt5640->irq, ret); + rt5640->irq = -ENXIO; + return; + } ++ rt5640->irq_requested = true; + + /* sync initial jack state */ + queue_delayed_work(system_long_wq, &rt5640->jack_work, 0); +diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c +index d8e99b263ab21..cbe24d5b4e46a 100644 +--- a/sound/soc/fsl/imx-audmix.c ++++ b/sound/soc/fsl/imx-audmix.c +@@ -320,7 +320,7 @@ static int imx_audmix_probe(struct platform_device *pdev) + if (IS_ERR(priv->cpu_mclk)) { + ret = 
PTR_ERR(priv->cpu_mclk); + dev_err(&cpu_pdev->dev, "failed to get DAI mclk1: %d\n", ret); +- return -EINVAL; ++ return ret; + } + + priv->audmix_pdev = audmix_pdev; +diff --git a/sound/soc/fsl/imx-pcm-rpmsg.c b/sound/soc/fsl/imx-pcm-rpmsg.c +index 35049043e5322..933bac7ea1864 100644 +--- a/sound/soc/fsl/imx-pcm-rpmsg.c ++++ b/sound/soc/fsl/imx-pcm-rpmsg.c +@@ -19,6 +19,7 @@ + static struct snd_pcm_hardware imx_rpmsg_pcm_hardware = { + .info = SNDRV_PCM_INFO_INTERLEAVED | + SNDRV_PCM_INFO_BLOCK_TRANSFER | ++ SNDRV_PCM_INFO_BATCH | + SNDRV_PCM_INFO_MMAP | + SNDRV_PCM_INFO_MMAP_VALID | + SNDRV_PCM_INFO_NO_PERIOD_WAKEUP | +diff --git a/sound/soc/fsl/imx-rpmsg.c b/sound/soc/fsl/imx-rpmsg.c +index 4d99f4858a14f..76c6febf24990 100644 +--- a/sound/soc/fsl/imx-rpmsg.c ++++ b/sound/soc/fsl/imx-rpmsg.c +@@ -88,6 +88,14 @@ static int imx_rpmsg_probe(struct platform_device *pdev) + SND_SOC_DAIFMT_NB_NF | + SND_SOC_DAIFMT_CBC_CFC; + ++ /* ++ * i.MX rpmsg sound cards work on codec slave mode. MCLK will be ++ * disabled by CPU DAI driver in hw_free(). Some codec requires MCLK ++ * present at power up/down sequence. So need to set ignore_pmdown_time ++ * to power down codec immediately before MCLK is turned off. 
++ */ ++ data->dai.ignore_pmdown_time = 1; ++ + /* Optional codec node */ + ret = of_parse_phandle_with_fixed_args(np, "audio-codec", 0, 0, &args); + if (ret) { +diff --git a/sound/soc/intel/avs/boards/hdaudio.c b/sound/soc/intel/avs/boards/hdaudio.c +index 073663ba140d0..a65939f30ac47 100644 +--- a/sound/soc/intel/avs/boards/hdaudio.c ++++ b/sound/soc/intel/avs/boards/hdaudio.c +@@ -54,6 +54,9 @@ static int avs_create_dai_links(struct device *dev, struct hda_codec *codec, int + return -ENOMEM; + + dl[i].codecs->name = devm_kstrdup(dev, cname, GFP_KERNEL); ++ if (!dl[i].codecs->name) ++ return -ENOMEM; ++ + dl[i].codecs->dai_name = pcm->name; + dl[i].num_codecs = 1; + dl[i].num_cpus = 1; +diff --git a/sound/soc/meson/axg-spdifin.c b/sound/soc/meson/axg-spdifin.c +index e2cc4c4be7586..97e81ec4a78ce 100644 +--- a/sound/soc/meson/axg-spdifin.c ++++ b/sound/soc/meson/axg-spdifin.c +@@ -112,34 +112,6 @@ static int axg_spdifin_prepare(struct snd_pcm_substream *substream, + return 0; + } + +-static int axg_spdifin_startup(struct snd_pcm_substream *substream, +- struct snd_soc_dai *dai) +-{ +- struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai); +- int ret; +- +- ret = clk_prepare_enable(priv->refclk); +- if (ret) { +- dev_err(dai->dev, +- "failed to enable spdifin reference clock\n"); +- return ret; +- } +- +- regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, +- SPDIFIN_CTRL0_EN); +- +- return 0; +-} +- +-static void axg_spdifin_shutdown(struct snd_pcm_substream *substream, +- struct snd_soc_dai *dai) +-{ +- struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai); +- +- regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, 0); +- clk_disable_unprepare(priv->refclk); +-} +- + static void axg_spdifin_write_mode_param(struct regmap *map, int mode, + unsigned int val, + unsigned int num_per_reg, +@@ -251,25 +223,38 @@ static int axg_spdifin_dai_probe(struct snd_soc_dai *dai) + ret = axg_spdifin_sample_mode_config(dai, priv); + if (ret) { + 
dev_err(dai->dev, "mode configuration failed\n"); +- clk_disable_unprepare(priv->pclk); +- return ret; ++ goto pclk_err; + } + ++ ret = clk_prepare_enable(priv->refclk); ++ if (ret) { ++ dev_err(dai->dev, ++ "failed to enable spdifin reference clock\n"); ++ goto pclk_err; ++ } ++ ++ regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, ++ SPDIFIN_CTRL0_EN); ++ + return 0; ++ ++pclk_err: ++ clk_disable_unprepare(priv->pclk); ++ return ret; + } + + static int axg_spdifin_dai_remove(struct snd_soc_dai *dai) + { + struct axg_spdifin *priv = snd_soc_dai_get_drvdata(dai); + ++ regmap_update_bits(priv->map, SPDIFIN_CTRL0, SPDIFIN_CTRL0_EN, 0); ++ clk_disable_unprepare(priv->refclk); + clk_disable_unprepare(priv->pclk); + return 0; + } + + static const struct snd_soc_dai_ops axg_spdifin_ops = { + .prepare = axg_spdifin_prepare, +- .startup = axg_spdifin_startup, +- .shutdown = axg_spdifin_shutdown, + }; + + static int axg_spdifin_iec958_info(struct snd_kcontrol *kcontrol, +diff --git a/sound/soc/sof/core.c b/sound/soc/sof/core.c +index 75a1e2c6539f2..eaa16755a2704 100644 +--- a/sound/soc/sof/core.c ++++ b/sound/soc/sof/core.c +@@ -461,10 +461,9 @@ int snd_sof_device_remove(struct device *dev) + snd_sof_ipc_free(sdev); + snd_sof_free_debug(sdev); + snd_sof_remove(sdev); ++ sof_ops_free(sdev); + } + +- sof_ops_free(sdev); +- + /* release firmware */ + snd_sof_fw_unload(sdev); + +diff --git a/sound/soc/sof/intel/mtl.c b/sound/soc/sof/intel/mtl.c +index 10298532816fe..d7048f1d6a048 100644 +--- a/sound/soc/sof/intel/mtl.c ++++ b/sound/soc/sof/intel/mtl.c +@@ -453,7 +453,7 @@ static int mtl_dsp_cl_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_bo + /* step 3: wait for IPC DONE bit from ROM */ + ret = snd_sof_dsp_read_poll_timeout(sdev, HDA_DSP_BAR, chip->ipc_ack, status, + ((status & chip->ipc_ack_mask) == chip->ipc_ack_mask), +- HDA_DSP_REG_POLL_INTERVAL_US, MTL_DSP_PURGE_TIMEOUT_US); ++ HDA_DSP_REG_POLL_INTERVAL_US, HDA_DSP_INIT_TIMEOUT_US); + if (ret < 0) { 
+ if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS) + dev_err(sdev->dev, "timeout waiting for purge IPC done\n"); +diff --git a/sound/soc/sof/intel/mtl.h b/sound/soc/sof/intel/mtl.h +index 788bf0e3ea879..00e3526889d3d 100644 +--- a/sound/soc/sof/intel/mtl.h ++++ b/sound/soc/sof/intel/mtl.h +@@ -54,7 +54,6 @@ + #define MTL_DSP_IRQSTS_IPC BIT(0) + #define MTL_DSP_IRQSTS_SDW BIT(6) + +-#define MTL_DSP_PURGE_TIMEOUT_US 20000000 /* 20s */ + #define MTL_DSP_REG_POLL_INTERVAL_US 10 /* 10 us */ + + /* Memory windows */ +diff --git a/tools/include/linux/btf_ids.h b/tools/include/linux/btf_ids.h +index 71e54b1e37964..2f882d5cb30f5 100644 +--- a/tools/include/linux/btf_ids.h ++++ b/tools/include/linux/btf_ids.h +@@ -38,7 +38,7 @@ asm( \ + ____BTF_ID(symbol) + + #define __ID(prefix) \ +- __PASTE(prefix, __COUNTER__) ++ __PASTE(__PASTE(prefix, __COUNTER__), __LINE__) + + /* + * The BTF_ID defines unique symbol for each ID pointing +diff --git a/tools/include/linux/mm.h b/tools/include/linux/mm.h +index a03d9bba51514..43be27bcc897d 100644 +--- a/tools/include/linux/mm.h ++++ b/tools/include/linux/mm.h +@@ -11,8 +11,6 @@ + + #define PHYS_ADDR_MAX (~(phys_addr_t)0) + +-#define __ALIGN_KERNEL(x, a) __ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1) +-#define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask)) + #define ALIGN(x, a) __ALIGN_KERNEL((x), (a)) + #define ALIGN_DOWN(x, a) __ALIGN_KERNEL((x) - ((a) - 1), (a)) + +diff --git a/tools/include/linux/seq_file.h b/tools/include/linux/seq_file.h +index 102fd9217f1f9..f6bc226af0c1d 100644 +--- a/tools/include/linux/seq_file.h ++++ b/tools/include/linux/seq_file.h +@@ -1,4 +1,6 @@ + #ifndef _TOOLS_INCLUDE_LINUX_SEQ_FILE_H + #define _TOOLS_INCLUDE_LINUX_SEQ_FILE_H + ++struct seq_file; ++ + #endif /* _TOOLS_INCLUDE_LINUX_SEQ_FILE_H */ +diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h +index 51b9aa640ad2a..53bc487947197 100644 +--- a/tools/include/uapi/linux/bpf.h ++++ b/tools/include/uapi/linux/bpf.h +@@ -1837,7 
+1837,9 @@ union bpf_attr { + * performed again, if the helper is used in combination with + * direct packet access. + * Return +- * 0 on success, or a negative error in case of failure. ++ * 0 on success, or a negative error in case of failure. Positive ++ * error indicates a potential drop or congestion in the target ++ * device. The particular positive error codes are not defined. + * + * u64 bpf_get_current_pid_tgid(void) + * Description +diff --git a/tools/perf/util/Build b/tools/perf/util/Build +index e315ecaec3233..2c364a9087a22 100644 +--- a/tools/perf/util/Build ++++ b/tools/perf/util/Build +@@ -276,6 +276,12 @@ ifeq ($(BISON_GE_35),1) + else + bison_flags += -w + endif ++ ++BISON_LT_381 := $(shell expr $(shell $(BISON) --version | grep bison | sed -e 's/.\+ \([0-9]\+\).\([0-9]\+\).\([0-9]\+\)/\1\2\3/g') \< 381) ++ifeq ($(BISON_LT_381),1) ++ bison_flags += -DYYNOMEM=YYABORT ++endif ++ + CFLAGS_parse-events-bison.o += $(bison_flags) + CFLAGS_pmu-bison.o += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags) + CFLAGS_expr-bison.o += -DYYLTYPE_IS_TRIVIAL=0 $(bison_flags) +diff --git a/tools/testing/memblock/tests/basic_api.c b/tools/testing/memblock/tests/basic_api.c +index a13a57ba0815f..7ce628e31a431 100644 +--- a/tools/testing/memblock/tests/basic_api.c ++++ b/tools/testing/memblock/tests/basic_api.c +@@ -1,7 +1,7 @@ + // SPDX-License-Identifier: GPL-2.0-or-later ++#include "basic_api.h" + #include + #include +-#include "basic_api.h" + + #define EXPECTED_MEMBLOCK_REGIONS 128 + #define FUNC_ADD "memblock_add" +diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h +index d6bbbe63bfc36..4c33ce04c0645 100644 +--- a/tools/testing/memblock/tests/common.h ++++ b/tools/testing/memblock/tests/common.h +@@ -5,6 +5,7 @@ + #include + #include + #include ++#include + #include + #include + #include +diff --git a/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc b/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc 
+index 0eb47fbb3f44d..42422e4251078 100644 +--- a/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc ++++ b/tools/testing/selftests/ftrace/test.d/instances/instance-event.tc +@@ -39,7 +39,7 @@ instance_read() { + + instance_set() { + while :; do +- echo 1 > foo/events/sched/sched_switch ++ echo 1 > foo/events/sched/sched_switch/enable + done 2> /dev/null + } + +diff --git a/tools/testing/selftests/kselftest_deps.sh b/tools/testing/selftests/kselftest_deps.sh +index 708cb54296336..47a1281a3b702 100755 +--- a/tools/testing/selftests/kselftest_deps.sh ++++ b/tools/testing/selftests/kselftest_deps.sh +@@ -46,11 +46,11 @@ fi + print_targets=0 + + while getopts "p" arg; do +- case $arg in +- p) ++ case $arg in ++ p) + print_targets=1 + shift;; +- esac ++ esac + done + + if [ $# -eq 0 ] +@@ -92,6 +92,10 @@ pass_cnt=0 + # Get all TARGETS from selftests Makefile + targets=$(egrep "^TARGETS +|^TARGETS =" Makefile | cut -d "=" -f2) + ++# Initially, in LDLIBS related lines, the dep checker needs ++# to ignore lines containing the following strings: ++filter="\$(VAR_LDLIBS)\|pkg-config\|PKG_CONFIG\|IOURING_EXTRA_LIBS" ++ + # Single test case + if [ $# -eq 2 ] + then +@@ -100,6 +104,8 @@ then + l1_test $test + l2_test $test + l3_test $test ++ l4_test $test ++ l5_test $test + + print_results $1 $2 + exit $? +@@ -113,7 +119,7 @@ fi + # Append space at the end of the list to append more tests. + + l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \ +- grep -v "VAR_LDLIBS" | awk -F: '{print $1}') ++ grep -v "$filter" | awk -F: '{print $1}' | uniq) + + # Level 2: LDLIBS set dynamically. + # +@@ -126,7 +132,7 @@ l1_tests=$(grep -r --include=Makefile "^LDLIBS" | \ + # Append space at the end of the list to append more tests. 
+ + l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \ +- grep -v "VAR_LDLIBS" | awk -F: '{print $1}') ++ grep -v "$filter" | awk -F: '{print $1}' | uniq) + + # Level 3 + # memfd and others use pkg-config to find mount and fuse libs +@@ -138,11 +144,32 @@ l2_tests=$(grep -r --include=Makefile ": LDLIBS" | \ + # VAR_LDLIBS := $(shell pkg-config fuse --libs 2>/dev/null) + + l3_tests=$(grep -r --include=Makefile "^VAR_LDLIBS" | \ +- grep -v "pkg-config" | awk -F: '{print $1}') ++ grep -v "pkg-config\|PKG_CONFIG" | awk -F: '{print $1}' | uniq) + +-#echo $l1_tests +-#echo $l2_1_tests +-#echo $l3_tests ++# Level 4 ++# some tests may fall back to default using `|| echo -l` ++# if pkg-config doesn't find the libs, instead of using VAR_LDLIBS ++# as per level 3 checks. ++# e.g: ++# netfilter/Makefile ++# LDLIBS += $(shell $(HOSTPKG_CONFIG) --libs libmnl 2>/dev/null || echo -lmnl) ++l4_tests=$(grep -r --include=Makefile "^LDLIBS" | \ ++ grep "pkg-config\|PKG_CONFIG" | awk -F: '{print $1}' | uniq) ++ ++# Level 5 ++# some tests may use IOURING_EXTRA_LIBS to add extra libs to LDLIBS, ++# which in turn may be defined in a sub-Makefile ++# e.g.: ++# mm/Makefile ++# $(OUTPUT)/gup_longterm: LDLIBS += $(IOURING_EXTRA_LIBS) ++l5_tests=$(grep -r --include=Makefile "LDLIBS +=.*\$(IOURING_EXTRA_LIBS)" | \ ++ awk -F: '{print $1}' | uniq) ++ ++#echo l1_tests $l1_tests ++#echo l2_tests $l2_tests ++#echo l3_tests $l3_tests ++#echo l4_tests $l4_tests ++#echo l5_tests $l5_tests + + all_tests + print_results $1 $2 +@@ -164,24 +191,32 @@ all_tests() + for test in $l3_tests; do + l3_test $test + done ++ ++ for test in $l4_tests; do ++ l4_test $test ++ done ++ ++ for test in $l5_tests; do ++ l5_test $test ++ done + } + + # Use same parsing used for l1_tests and pick libraries this time. 
+ l1_test() + { + test_libs=$(grep --include=Makefile "^LDLIBS" $test | \ +- grep -v "VAR_LDLIBS" | \ ++ grep -v "$filter" | \ + sed -e 's/\:/ /' | \ + sed -e 's/+/ /' | cut -d "=" -f 2) + + check_libs $test $test_libs + } + +-# Use same parsing used for l2__tests and pick libraries this time. ++# Use same parsing used for l2_tests and pick libraries this time. + l2_test() + { + test_libs=$(grep --include=Makefile ": LDLIBS" $test | \ +- grep -v "VAR_LDLIBS" | \ ++ grep -v "$filter" | \ + sed -e 's/\:/ /' | sed -e 's/+/ /' | \ + cut -d "=" -f 2) + +@@ -197,6 +232,24 @@ l3_test() + check_libs $test $test_libs + } + ++l4_test() ++{ ++ test_libs=$(grep --include=Makefile "^VAR_LDLIBS\|^LDLIBS" $test | \ ++ grep "\(pkg-config\|PKG_CONFIG\).*|| echo " | \ ++ sed -e 's/.*|| echo //' | sed -e 's/)$//') ++ ++ check_libs $test $test_libs ++} ++ ++l5_test() ++{ ++ tests=$(find $(dirname "$test") -type f -name "*.mk") ++ test_libs=$(grep "^IOURING_EXTRA_LIBS +\?=" $tests | \ ++ cut -d "=" -f 2) ++ ++ check_libs $test $test_libs ++} ++ + check_libs() + { + +diff --git a/tools/testing/selftests/net/tls.c b/tools/testing/selftests/net/tls.c +index c0ad8385441f2..5b80fb155d549 100644 +--- a/tools/testing/selftests/net/tls.c ++++ b/tools/testing/selftests/net/tls.c +@@ -551,11 +551,11 @@ TEST_F(tls, sendmsg_large) + + msg.msg_iov = &vec; + msg.msg_iovlen = 1; +- EXPECT_EQ(sendmsg(self->cfd, &msg, 0), send_len); ++ EXPECT_EQ(sendmsg(self->fd, &msg, 0), send_len); + } + + while (recvs++ < sends) { +- EXPECT_NE(recv(self->fd, mem, send_len, 0), -1); ++ EXPECT_NE(recv(self->cfd, mem, send_len, 0), -1); + } + + free(mem); +@@ -584,9 +584,9 @@ TEST_F(tls, sendmsg_multiple) + msg.msg_iov = vec; + msg.msg_iovlen = iov_len; + +- EXPECT_EQ(sendmsg(self->cfd, &msg, 0), total_len); ++ EXPECT_EQ(sendmsg(self->fd, &msg, 0), total_len); + buf = malloc(total_len); +- EXPECT_NE(recv(self->fd, buf, total_len, 0), -1); ++ EXPECT_NE(recv(self->cfd, buf, total_len, 0), -1); + for (i = 0; i < iov_len; 
i++) { + EXPECT_EQ(memcmp(test_strs[i], buf + len_cmp, + strlen(test_strs[i])), +diff --git a/tools/testing/selftests/powerpc/Makefile b/tools/testing/selftests/powerpc/Makefile +index 6ba95cd19e423..c8c085fa05b05 100644 +--- a/tools/testing/selftests/powerpc/Makefile ++++ b/tools/testing/selftests/powerpc/Makefile +@@ -45,28 +45,27 @@ $(SUB_DIRS): + include ../lib.mk + + override define RUN_TESTS +- @for TARGET in $(SUB_DIRS); do \ ++ +@for TARGET in $(SUB_DIRS); do \ + BUILD_TARGET=$(OUTPUT)/$$TARGET; \ + $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests;\ + done; + endef + + override define INSTALL_RULE +- @for TARGET in $(SUB_DIRS); do \ ++ +@for TARGET in $(SUB_DIRS); do \ + BUILD_TARGET=$(OUTPUT)/$$TARGET; \ + $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install;\ + done; + endef + +-override define EMIT_TESTS +- @for TARGET in $(SUB_DIRS); do \ ++emit_tests: ++ +@for TARGET in $(SUB_DIRS); do \ + BUILD_TARGET=$(OUTPUT)/$$TARGET; \ +- $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests;\ ++ $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET $@;\ + done; +-endef + + override define CLEAN +- @for TARGET in $(SUB_DIRS); do \ ++ +@for TARGET in $(SUB_DIRS); do \ + BUILD_TARGET=$(OUTPUT)/$$TARGET; \ + $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean; \ + done; +@@ -76,4 +75,4 @@ endef + tags: + find . 
-name '*.c' -o -name '*.h' | xargs ctags + +-.PHONY: tags $(SUB_DIRS) ++.PHONY: tags $(SUB_DIRS) emit_tests +diff --git a/tools/testing/selftests/powerpc/pmu/Makefile b/tools/testing/selftests/powerpc/pmu/Makefile +index 30803353bd7cc..a284fa874a9f1 100644 +--- a/tools/testing/selftests/powerpc/pmu/Makefile ++++ b/tools/testing/selftests/powerpc/pmu/Makefile +@@ -25,32 +25,36 @@ $(OUTPUT)/per_event_excludes: ../utils.c + DEFAULT_RUN_TESTS := $(RUN_TESTS) + override define RUN_TESTS + $(DEFAULT_RUN_TESTS) +- TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests +- TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests +- TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests ++ +TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests ++ +TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests ++ +TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET run_tests + endef + +-DEFAULT_EMIT_TESTS := $(EMIT_TESTS) +-override define EMIT_TESTS +- $(DEFAULT_EMIT_TESTS) +- TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests +- TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests +- TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests +-endef ++emit_tests: ++ for TEST in $(TEST_GEN_PROGS); do \ ++ BASENAME_TEST=`basename $$TEST`; \ ++ echo "$(COLLECTION):$$BASENAME_TEST"; \ ++ done ++ +TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests ++ +TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests ++ +TARGET=event_code_tests; 
BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -s -C $$TARGET emit_tests + + DEFAULT_INSTALL_RULE := $(INSTALL_RULE) + override define INSTALL_RULE + $(DEFAULT_INSTALL_RULE) +- TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install +- TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install +- TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install ++ +TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install ++ +TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install ++ +TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET install + endef + +-clean: ++DEFAULT_CLEAN := $(CLEAN) ++override define CLEAN ++ $(DEFAULT_CLEAN) + $(RM) $(TEST_GEN_PROGS) $(OUTPUT)/loop.o +- TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean +- TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean +- TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean ++ +TARGET=ebb; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean ++ +TARGET=sampling_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean ++ +TARGET=event_code_tests; BUILD_TARGET=$$OUTPUT/$$TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -C $$TARGET clean ++endef + + ebb: + TARGET=$@; BUILD_TARGET=$$OUTPUT/$$TARGET; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $$TARGET all +@@ -61,4 +65,4 @@ sampling_tests: + event_code_tests: + TARGET=$@; BUILD_TARGET=$$OUTPUT/$$TARGET; mkdir -p $$BUILD_TARGET; $(MAKE) OUTPUT=$$BUILD_TARGET -k -C $$TARGET all + +-.PHONY: all run_tests clean ebb sampling_tests event_code_tests ++.PHONY: all run_tests ebb sampling_tests 
event_code_tests emit_tests