From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id CB1251382C5 for ; Thu, 4 Mar 2021 12:04:48 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id E0E5EE0877; Thu, 4 Mar 2021 12:04:46 +0000 (UTC) Received: from smtp.gentoo.org (woodpecker.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 465F8E0877 for ; Thu, 4 Mar 2021 12:04:46 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 5B72B335C77 for ; Thu, 4 Mar 2021 12:04:44 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id E92F2478 for ; Thu, 4 Mar 2021 12:04:42 +0000 (UTC) From: "Alice Ferrazzi" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Alice Ferrazzi" Message-ID: <1614859464.8f0e5b98da760bc5b682dd22470dd161336ed39c.alicef@gentoo> Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1019_linux-5.10.20.patch X-VCS-Directories: / X-VCS-Committer: alicef X-VCS-Committer-Name: Alice Ferrazzi X-VCS-Revision: 8f0e5b98da760bc5b682dd22470dd161336ed39c X-VCS-Branch: 5.10 Date: Thu, 4 Mar 2021 12:04:42 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org 
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 67659298-5d49-434c-aaa2-78321fa964ad X-Archives-Hash: dcbbeba3e0e31eabe1abe68ce6f04e20 commit: 8f0e5b98da760bc5b682dd22470dd161336ed39c Author: Alice Ferrazzi gentoo org> AuthorDate: Thu Mar 4 12:04:11 2021 +0000 Commit: Alice Ferrazzi gentoo org> CommitDate: Thu Mar 4 12:04:24 2021 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8f0e5b98 Linux patch 5.10.20 Signed-off-by: Alice Ferrazzi gentoo.org> 0000_README | 4 + 1019_linux-5.10.20.patch | 25078 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 25082 insertions(+) diff --git a/0000_README b/0000_README index 19abff2..c338847 100644 --- a/0000_README +++ b/0000_README @@ -119,6 +119,10 @@ Patch: 1018_linux-5.10.19.patch From: http://www.kernel.org Desc: Linux 5.10.19 +Patch: 1019_linux-5.10.20.patch +From: http://www.kernel.org +Desc: Linux 5.10.20 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1019_linux-5.10.20.patch b/1019_linux-5.10.20.patch new file mode 100644 index 0000000..4f20d62 --- /dev/null +++ b/1019_linux-5.10.20.patch @@ -0,0 +1,25078 @@ +diff --git a/Documentation/admin-guide/perf/arm-cmn.rst b/Documentation/admin-guide/perf/arm-cmn.rst +index 0e48093460140..796e25b7027b2 100644 +--- a/Documentation/admin-guide/perf/arm-cmn.rst ++++ b/Documentation/admin-guide/perf/arm-cmn.rst +@@ -17,7 +17,7 @@ PMU events + ---------- + + The PMU driver registers a single PMU device for the whole interconnect, +-see /sys/bus/event_source/devices/arm_cmn. Multi-chip systems may link ++see /sys/bus/event_source/devices/arm_cmn_0. Multi-chip systems may link + more than one CMN together via external CCIX links - in this situation, + each mesh counts its own events entirely independently, and additional + PMU devices will be named arm_cmn_{1..n}. 
+diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst +index f455fa00c00fa..06027c6a233ab 100644 +--- a/Documentation/admin-guide/sysctl/vm.rst ++++ b/Documentation/admin-guide/sysctl/vm.rst +@@ -978,11 +978,11 @@ that benefit from having their data cached, zone_reclaim_mode should be + left disabled as the caching effect is likely to be more important than + data locality. + +-zone_reclaim may be enabled if it's known that the workload is partitioned +-such that each partition fits within a NUMA node and that accessing remote +-memory would cause a measurable performance reduction. The page allocator +-will then reclaim easily reusable pages (those page cache pages that are +-currently not used) before allocating off node pages. ++Consider enabling one or more zone_reclaim mode bits if it's known that the ++workload is partitioned such that each partition fits within a NUMA node ++and that accessing remote memory would cause a measurable performance ++reduction. The page allocator will take additional actions before ++allocating off node pages. + + Allowing zone reclaim to write out pages stops processes that are + writing large amounts of data from dirtying pages on other nodes. Zone +diff --git a/Documentation/filesystems/seq_file.rst b/Documentation/filesystems/seq_file.rst +index 56856481dc8d8..a6726082a7c25 100644 +--- a/Documentation/filesystems/seq_file.rst ++++ b/Documentation/filesystems/seq_file.rst +@@ -217,6 +217,12 @@ between the calls to start() and stop(), so holding a lock during that time + is a reasonable thing to do. The seq_file code will also avoid taking any + other locks while the iterator is active. + ++The iterater value returned by start() or next() is guaranteed to be ++passed to a subsequent next() or stop() call. This allows resources ++such as locks that were taken to be reliably released. 
There is *no* ++guarantee that the iterator will be passed to show(), though in practice ++it often will be. ++ + + Formatted output + ================ +diff --git a/Documentation/scsi/libsas.rst b/Documentation/scsi/libsas.rst +index 7216b5d258001..f9b77c7879dbb 100644 +--- a/Documentation/scsi/libsas.rst ++++ b/Documentation/scsi/libsas.rst +@@ -189,7 +189,6 @@ num_phys + The event interface:: + + /* LLDD calls these to notify the class of an event. */ +- void (*notify_ha_event)(struct sas_ha_struct *, enum ha_event); + void (*notify_port_event)(struct sas_phy *, enum port_event); + void (*notify_phy_event)(struct sas_phy *, enum phy_event); + +diff --git a/Documentation/security/keys/core.rst b/Documentation/security/keys/core.rst +index aa0081685ee11..b3ed5c581034c 100644 +--- a/Documentation/security/keys/core.rst ++++ b/Documentation/security/keys/core.rst +@@ -1040,8 +1040,8 @@ The keyctl syscall functions are: + + "key" is the ID of the key to be watched. + +- "queue_fd" is a file descriptor referring to an open "/dev/watch_queue" +- which manages the buffer into which notifications will be delivered. ++ "queue_fd" is a file descriptor referring to an open pipe which ++ manages the buffer into which notifications will be delivered. + + "filter" is either NULL to remove a watch or a filter specification to + indicate what events are required from the key. 
+diff --git a/Makefile b/Makefile +index f700bdea626d9..1ebc8a6bf9b06 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 10 +-SUBLEVEL = 19 ++SUBLEVEL = 20 + EXTRAVERSION = + NAME = Dare mighty things + +diff --git a/arch/arm/boot/compressed/head.S b/arch/arm/boot/compressed/head.S +index 3a392983ac079..a0de09f994d88 100644 +--- a/arch/arm/boot/compressed/head.S ++++ b/arch/arm/boot/compressed/head.S +@@ -1175,9 +1175,9 @@ __armv4_mmu_cache_off: + __armv7_mmu_cache_off: + mrc p15, 0, r0, c1, c0 + #ifdef CONFIG_MMU +- bic r0, r0, #0x000d ++ bic r0, r0, #0x0005 + #else +- bic r0, r0, #0x000c ++ bic r0, r0, #0x0004 + #endif + mcr p15, 0, r0, c1, c0 @ turn MMU and cache off + mov r0, #0 +diff --git a/arch/arm/boot/dts/armada-388-helios4.dts b/arch/arm/boot/dts/armada-388-helios4.dts +index fb49df2a3bce7..a7ff774d797c8 100644 +--- a/arch/arm/boot/dts/armada-388-helios4.dts ++++ b/arch/arm/boot/dts/armada-388-helios4.dts +@@ -70,6 +70,9 @@ + + system-leds { + compatible = "gpio-leds"; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&helios_system_led_pins>; ++ + status-led { + label = "helios4:green:status"; + gpios = <&gpio0 24 GPIO_ACTIVE_LOW>; +@@ -86,6 +89,9 @@ + + io-leds { + compatible = "gpio-leds"; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&helios_io_led_pins>; ++ + sata1-led { + label = "helios4:green:ata1"; + gpios = <&gpio1 17 GPIO_ACTIVE_LOW>; +@@ -121,11 +127,15 @@ + fan1: j10-pwm { + compatible = "pwm-fan"; + pwms = <&gpio1 9 40000>; /* Target freq:25 kHz */ ++ pinctrl-names = "default"; ++ pinctrl-0 = <&helios_fan1_pins>; + }; + + fan2: j17-pwm { + compatible = "pwm-fan"; + pwms = <&gpio1 23 40000>; /* Target freq:25 kHz */ ++ pinctrl-names = "default"; ++ pinctrl-0 = <&helios_fan2_pins>; + }; + + usb2_phy: usb2-phy { +@@ -286,16 +296,22 @@ + "mpp39", "mpp40"; + marvell,function = "sd0"; + }; +- helios_led_pins: helios-led-pins { +- marvell,pins = "mpp24", "mpp25", +- "mpp49", 
"mpp50", ++ helios_system_led_pins: helios-system-led-pins { ++ marvell,pins = "mpp24", "mpp25"; ++ marvell,function = "gpio"; ++ }; ++ helios_io_led_pins: helios-io-led-pins { ++ marvell,pins = "mpp49", "mpp50", + "mpp52", "mpp53", + "mpp54"; + marvell,function = "gpio"; + }; +- helios_fan_pins: helios-fan-pins { +- marvell,pins = "mpp41", "mpp43", +- "mpp48", "mpp55"; ++ helios_fan1_pins: helios_fan1_pins { ++ marvell,pins = "mpp41", "mpp43"; ++ marvell,function = "gpio"; ++ }; ++ helios_fan2_pins: helios_fan2_pins { ++ marvell,pins = "mpp48", "mpp55"; + marvell,function = "gpio"; + }; + microsom_spi1_cs_pins: spi1-cs-pins { +diff --git a/arch/arm/boot/dts/aspeed-g4.dtsi b/arch/arm/boot/dts/aspeed-g4.dtsi +index 82f0213e3a3c3..f81a540a35296 100644 +--- a/arch/arm/boot/dts/aspeed-g4.dtsi ++++ b/arch/arm/boot/dts/aspeed-g4.dtsi +@@ -370,6 +370,7 @@ + compatible = "aspeed,ast2400-lpc-snoop"; + reg = <0x10 0x8>; + interrupts = <8>; ++ clocks = <&syscon ASPEED_CLK_GATE_LCLK>; + status = "disabled"; + }; + +diff --git a/arch/arm/boot/dts/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed-g5.dtsi +index a93009aa2f040..39e690be17044 100644 +--- a/arch/arm/boot/dts/aspeed-g5.dtsi ++++ b/arch/arm/boot/dts/aspeed-g5.dtsi +@@ -492,6 +492,7 @@ + compatible = "aspeed,ast2500-lpc-snoop"; + reg = <0x10 0x8>; + interrupts = <8>; ++ clocks = <&syscon ASPEED_CLK_GATE_LCLK>; + status = "disabled"; + }; + +diff --git a/arch/arm/boot/dts/aspeed-g6.dtsi b/arch/arm/boot/dts/aspeed-g6.dtsi +index bf97aaad7be9b..1cf71bdb4fabe 100644 +--- a/arch/arm/boot/dts/aspeed-g6.dtsi ++++ b/arch/arm/boot/dts/aspeed-g6.dtsi +@@ -513,6 +513,7 @@ + compatible = "aspeed,ast2600-lpc-snoop"; + reg = <0x0 0x80>; + interrupts = ; ++ clocks = <&syscon ASPEED_CLK_GATE_LCLK>; + status = "disabled"; + }; + +diff --git a/arch/arm/boot/dts/exynos3250-artik5.dtsi b/arch/arm/boot/dts/exynos3250-artik5.dtsi +index 12887b3924af8..ad525f2accbb4 100644 +--- a/arch/arm/boot/dts/exynos3250-artik5.dtsi ++++ 
b/arch/arm/boot/dts/exynos3250-artik5.dtsi +@@ -79,7 +79,7 @@ + s2mps14_pmic@66 { + compatible = "samsung,s2mps14-pmic"; + interrupt-parent = <&gpx3>; +- interrupts = <5 IRQ_TYPE_NONE>; ++ interrupts = <5 IRQ_TYPE_LEVEL_LOW>; + pinctrl-names = "default"; + pinctrl-0 = <&s2mps14_irq>; + reg = <0x66>; +diff --git a/arch/arm/boot/dts/exynos3250-monk.dts b/arch/arm/boot/dts/exynos3250-monk.dts +index c1a68e6120370..7e99e5812a4d3 100644 +--- a/arch/arm/boot/dts/exynos3250-monk.dts ++++ b/arch/arm/boot/dts/exynos3250-monk.dts +@@ -200,7 +200,7 @@ + s2mps14_pmic@66 { + compatible = "samsung,s2mps14-pmic"; + interrupt-parent = <&gpx0>; +- interrupts = <7 IRQ_TYPE_NONE>; ++ interrupts = <7 IRQ_TYPE_LEVEL_LOW>; + reg = <0x66>; + wakeup-source; + +diff --git a/arch/arm/boot/dts/exynos3250-rinato.dts b/arch/arm/boot/dts/exynos3250-rinato.dts +index b55afaaa691e8..f9e3b13d3aac2 100644 +--- a/arch/arm/boot/dts/exynos3250-rinato.dts ++++ b/arch/arm/boot/dts/exynos3250-rinato.dts +@@ -270,7 +270,7 @@ + s2mps14_pmic@66 { + compatible = "samsung,s2mps14-pmic"; + interrupt-parent = <&gpx0>; +- interrupts = <7 IRQ_TYPE_NONE>; ++ interrupts = <7 IRQ_TYPE_LEVEL_LOW>; + reg = <0x66>; + wakeup-source; + +diff --git a/arch/arm/boot/dts/exynos5250-spring.dts b/arch/arm/boot/dts/exynos5250-spring.dts +index a92ade33779cf..5a9c936407ea3 100644 +--- a/arch/arm/boot/dts/exynos5250-spring.dts ++++ b/arch/arm/boot/dts/exynos5250-spring.dts +@@ -109,7 +109,7 @@ + compatible = "samsung,s5m8767-pmic"; + reg = <0x66>; + interrupt-parent = <&gpx3>; +- interrupts = <2 IRQ_TYPE_NONE>; ++ interrupts = <2 IRQ_TYPE_LEVEL_LOW>; + pinctrl-names = "default"; + pinctrl-0 = <&s5m8767_irq &s5m8767_dvs &s5m8767_ds>; + wakeup-source; +diff --git a/arch/arm/boot/dts/exynos5420-arndale-octa.dts b/arch/arm/boot/dts/exynos5420-arndale-octa.dts +index dd7f8385d81e7..3d9b93d2b242c 100644 +--- a/arch/arm/boot/dts/exynos5420-arndale-octa.dts ++++ b/arch/arm/boot/dts/exynos5420-arndale-octa.dts +@@ -349,7 +349,7 @@ + reg = 
<0x66>; + + interrupt-parent = <&gpx3>; +- interrupts = <2 IRQ_TYPE_EDGE_FALLING>; ++ interrupts = <2 IRQ_TYPE_LEVEL_LOW>; + pinctrl-names = "default"; + pinctrl-0 = <&s2mps11_irq>; + +diff --git a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi +index b1cf9414ce17f..d51c1d8620a09 100644 +--- a/arch/arm/boot/dts/exynos5422-odroid-core.dtsi ++++ b/arch/arm/boot/dts/exynos5422-odroid-core.dtsi +@@ -509,7 +509,7 @@ + samsung,s2mps11-acokb-ground; + + interrupt-parent = <&gpx0>; +- interrupts = <4 IRQ_TYPE_EDGE_FALLING>; ++ interrupts = <4 IRQ_TYPE_LEVEL_LOW>; + pinctrl-names = "default"; + pinctrl-0 = <&s2mps11_irq>; + +diff --git a/arch/arm/boot/dts/omap443x.dtsi b/arch/arm/boot/dts/omap443x.dtsi +index cb309743de5da..dd8ef58cbaed4 100644 +--- a/arch/arm/boot/dts/omap443x.dtsi ++++ b/arch/arm/boot/dts/omap443x.dtsi +@@ -33,10 +33,12 @@ + }; + + ocp { ++ /* 4430 has only gpio_86 tshut and no talert interrupt */ + bandgap: bandgap@4a002260 { + reg = <0x4a002260 0x4 + 0x4a00232C 0x4>; + compatible = "ti,omap4430-bandgap"; ++ gpios = <&gpio3 22 GPIO_ACTIVE_HIGH>; + + #thermal-sensor-cells = <0>; + }; +diff --git a/arch/arm/kernel/sys_oabi-compat.c b/arch/arm/kernel/sys_oabi-compat.c +index 0203e545bbc8d..075a2e0ed2c15 100644 +--- a/arch/arm/kernel/sys_oabi-compat.c ++++ b/arch/arm/kernel/sys_oabi-compat.c +@@ -248,6 +248,7 @@ struct oabi_epoll_event { + __u64 data; + } __attribute__ ((packed,aligned(4))); + ++#ifdef CONFIG_EPOLL + asmlinkage long sys_oabi_epoll_ctl(int epfd, int op, int fd, + struct oabi_epoll_event __user *event) + { +@@ -298,6 +299,20 @@ asmlinkage long sys_oabi_epoll_wait(int epfd, + kfree(kbuf); + return err ? 
-EFAULT : ret; + } ++#else ++asmlinkage long sys_oabi_epoll_ctl(int epfd, int op, int fd, ++ struct oabi_epoll_event __user *event) ++{ ++ return -EINVAL; ++} ++ ++asmlinkage long sys_oabi_epoll_wait(int epfd, ++ struct oabi_epoll_event __user *events, ++ int maxevents, int timeout) ++{ ++ return -EINVAL; ++} ++#endif + + struct oabi_sembuf { + unsigned short sem_num; +diff --git a/arch/arm/mach-at91/pm_suspend.S b/arch/arm/mach-at91/pm_suspend.S +index 0184de05c1be1..b683c2caa40b9 100644 +--- a/arch/arm/mach-at91/pm_suspend.S ++++ b/arch/arm/mach-at91/pm_suspend.S +@@ -442,7 +442,7 @@ ENDPROC(at91_backup_mode) + str tmp1, [pmc, #AT91_PMC_PLL_UPDT] + + /* step 2. */ +- ldr tmp1, =#AT91_PMC_PLL_ACR_DEFAULT_PLLA ++ ldr tmp1, =AT91_PMC_PLL_ACR_DEFAULT_PLLA + str tmp1, [pmc, #AT91_PMC_PLL_ACR] + + /* step 3. */ +diff --git a/arch/arm/mach-ixp4xx/Kconfig b/arch/arm/mach-ixp4xx/Kconfig +index f7211b57b1e78..165c184801e19 100644 +--- a/arch/arm/mach-ixp4xx/Kconfig ++++ b/arch/arm/mach-ixp4xx/Kconfig +@@ -13,7 +13,6 @@ config MACH_IXP4XX_OF + select I2C + select I2C_IOP3XX + select PCI +- select TIMER_OF + select USE_OF + help + Say 'Y' here to support Device Tree-based IXP4xx platforms. 
+diff --git a/arch/arm/mach-s3c/irq-s3c24xx-fiq.S b/arch/arm/mach-s3c/irq-s3c24xx-fiq.S +index b54cbd0122413..5d238d9a798e1 100644 +--- a/arch/arm/mach-s3c/irq-s3c24xx-fiq.S ++++ b/arch/arm/mach-s3c/irq-s3c24xx-fiq.S +@@ -35,7 +35,6 @@ + @ and an offset to the irq acknowledgment word + + ENTRY(s3c24xx_spi_fiq_rx) +-s3c24xx_spi_fix_rx: + .word fiq_rx_end - fiq_rx_start + .word fiq_rx_irq_ack - fiq_rx_start + fiq_rx_start: +@@ -49,7 +48,7 @@ fiq_rx_start: + strb fiq_rtmp, [ fiq_rspi, # S3C2410_SPTDAT ] + + subs fiq_rcount, fiq_rcount, #1 +- subnes pc, lr, #4 @@ return, still have work to do ++ subsne pc, lr, #4 @@ return, still have work to do + + @@ set IRQ controller so that next op will trigger IRQ + mov fiq_rtmp, #0 +@@ -61,7 +60,6 @@ fiq_rx_irq_ack: + fiq_rx_end: + + ENTRY(s3c24xx_spi_fiq_txrx) +-s3c24xx_spi_fiq_txrx: + .word fiq_txrx_end - fiq_txrx_start + .word fiq_txrx_irq_ack - fiq_txrx_start + fiq_txrx_start: +@@ -76,7 +74,7 @@ fiq_txrx_start: + strb fiq_rtmp, [ fiq_rspi, # S3C2410_SPTDAT ] + + subs fiq_rcount, fiq_rcount, #1 +- subnes pc, lr, #4 @@ return, still have work to do ++ subsne pc, lr, #4 @@ return, still have work to do + + mov fiq_rtmp, #0 + str fiq_rtmp, [ fiq_rirq, # S3C2410_INTMOD - S3C24XX_VA_IRQ ] +@@ -88,7 +86,6 @@ fiq_txrx_irq_ack: + fiq_txrx_end: + + ENTRY(s3c24xx_spi_fiq_tx) +-s3c24xx_spi_fix_tx: + .word fiq_tx_end - fiq_tx_start + .word fiq_tx_irq_ack - fiq_tx_start + fiq_tx_start: +@@ -101,7 +98,7 @@ fiq_tx_start: + strb fiq_rtmp, [ fiq_rspi, # S3C2410_SPTDAT ] + + subs fiq_rcount, fiq_rcount, #1 +- subnes pc, lr, #4 @@ return, still have work to do ++ subsne pc, lr, #4 @@ return, still have work to do + + mov fiq_rtmp, #0 + str fiq_rtmp, [ fiq_rirq, # S3C2410_INTMOD - S3C24XX_VA_IRQ ] +diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig +index a6b5b7ef40aea..afe4bc55d4eba 100644 +--- a/arch/arm64/Kconfig ++++ b/arch/arm64/Kconfig +@@ -520,7 +520,7 @@ config ARM64_ERRATUM_1024718 + help + This option adds a workaround for ARM 
Cortex-A55 Erratum 1024718. + +- Affected Cortex-A55 cores (r0p0, r0p1, r1p0) could cause incorrect ++ Affected Cortex-A55 cores (all revisions) could cause incorrect + update of the hardware dirty bit when the DBM/AP bits are updated + without a break-before-make. The workaround is to disable the usage + of hardware DBM locally on the affected cores. CPUs not affected by +diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts +index 896f34fd9fc3a..7ae16541d14f5 100644 +--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts ++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts +@@ -126,8 +126,6 @@ + }; + + &ehci0 { +- phys = <&usbphy 0>; +- phy-names = "usb"; + status = "okay"; + }; + +@@ -169,6 +167,7 @@ + pinctrl-0 = <&mmc2_pins>, <&mmc2_ds_pin>; + vmmc-supply = <®_dcdc1>; + vqmmc-supply = <®_eldo1>; ++ max-frequency = <200000000>; + bus-width = <8>; + non-removable; + cap-mmc-hw-reset; +@@ -177,8 +176,6 @@ + }; + + &ohci0 { +- phys = <&usbphy 0>; +- phy-names = "usb"; + status = "okay"; + }; + +diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi +index c48692b06e1fa..3402cec87035b 100644 +--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi ++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-sopine.dtsi +@@ -32,7 +32,6 @@ + pinctrl-names = "default"; + pinctrl-0 = <&mmc0_pins>; + vmmc-supply = <®_dcdc1>; +- non-removable; + disable-wp; + bus-width = <4>; + cd-gpios = <&pio 5 6 GPIO_ACTIVE_LOW>; /* PF6 */ +diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi +index dc238814013cb..7a41015a9ce59 100644 +--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi ++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi +@@ -514,7 +514,7 @@ + resets = <&ccu RST_BUS_MMC2>; + reset-names = "ahb"; + interrupts = ; +- max-frequency = <200000000>; ++ max-frequency = <150000000>; + 
status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; +@@ -593,6 +593,8 @@ + <&ccu CLK_USB_OHCI0>; + resets = <&ccu RST_BUS_OHCI0>, + <&ccu RST_BUS_EHCI0>; ++ phys = <&usbphy 0>; ++ phy-names = "usb"; + status = "disabled"; + }; + +@@ -603,6 +605,8 @@ + clocks = <&ccu CLK_BUS_OHCI0>, + <&ccu CLK_USB_OHCI0>; + resets = <&ccu RST_BUS_OHCI0>; ++ phys = <&usbphy 0>; ++ phy-names = "usb"; + status = "disabled"; + }; + +diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi +index 28c77d6872f64..4592fb7a6161d 100644 +--- a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi ++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi +@@ -436,6 +436,7 @@ + interrupts = ; + pinctrl-names = "default"; + pinctrl-0 = <&mmc0_pins>; ++ max-frequency = <150000000>; + status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; +@@ -452,6 +453,7 @@ + interrupts = ; + pinctrl-names = "default"; + pinctrl-0 = <&mmc1_pins>; ++ max-frequency = <150000000>; + status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; +@@ -468,6 +470,7 @@ + interrupts = ; + pinctrl-names = "default"; + pinctrl-0 = <&mmc2_pins>; ++ max-frequency = <150000000>; + status = "disabled"; + #address-cells = <1>; + #size-cells = <0>; +@@ -667,6 +670,8 @@ + <&ccu CLK_USB_OHCI0>; + resets = <&ccu RST_BUS_OHCI0>, + <&ccu RST_BUS_EHCI0>; ++ phys = <&usb2phy 0>; ++ phy-names = "usb"; + status = "disabled"; + }; + +@@ -677,6 +682,8 @@ + clocks = <&ccu CLK_BUS_OHCI0>, + <&ccu CLK_USB_OHCI0>; + resets = <&ccu RST_BUS_OHCI0>; ++ phys = <&usb2phy 0>; ++ phy-names = "usb"; + status = "disabled"; + }; + +diff --git a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts +index 4b517ca720597..06de0b1ce7267 100644 +--- a/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts ++++ b/arch/arm64/boot/dts/amlogic/meson-sm1-khadas-vim3l.dts +@@ -89,13 +89,12 @@ + status = "okay"; + }; + +-&sd_emmc_a { +- 
sd-uhs-sdr50; +-}; +- + &usb { + phys = <&usb2_phy0>, <&usb2_phy1>; + phy-names = "usb2-phy0", "usb2-phy1"; + }; + */ + ++&sd_emmc_a { ++ sd-uhs-sdr50; ++}; +diff --git a/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi b/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi +index 829fea23d4ab1..106397a99da6b 100644 +--- a/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi ++++ b/arch/arm64/boot/dts/exynos/exynos5433-tm2-common.dtsi +@@ -389,7 +389,7 @@ + s2mps13-pmic@66 { + compatible = "samsung,s2mps13-pmic"; + interrupt-parent = <&gpa0>; +- interrupts = <7 IRQ_TYPE_NONE>; ++ interrupts = <7 IRQ_TYPE_LEVEL_LOW>; + reg = <0x66>; + samsung,s2mps11-wrstbi-ground; + +diff --git a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts +index 92fecc539c6c7..358b7b6ea84f1 100644 +--- a/arch/arm64/boot/dts/exynos/exynos7-espresso.dts ++++ b/arch/arm64/boot/dts/exynos/exynos7-espresso.dts +@@ -90,7 +90,7 @@ + s2mps15_pmic@66 { + compatible = "samsung,s2mps15-pmic"; + reg = <0x66>; +- interrupts = <2 IRQ_TYPE_NONE>; ++ interrupts = <2 IRQ_TYPE_LEVEL_LOW>; + interrupt-parent = <&gpa0>; + pinctrl-names = "default"; + pinctrl-0 = <&pmic_irq>; +diff --git a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi +index e1c0fcba5c206..07c099b4ed5b5 100644 +--- a/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi ++++ b/arch/arm64/boot/dts/intel/socfpga_agilex.dtsi +@@ -166,7 +166,7 @@ + rx-fifo-depth = <16384>; + snps,multicast-filter-bins = <256>; + iommus = <&smmu 2>; +- altr,sysmgr-syscon = <&sysmgr 0x48 8>; ++ altr,sysmgr-syscon = <&sysmgr 0x48 0>; + clocks = <&clkmgr AGILEX_EMAC1_CLK>, <&clkmgr AGILEX_EMAC_PTP_CLK>; + clock-names = "stmmaceth", "ptp_ref"; + status = "disabled"; +@@ -184,7 +184,7 @@ + rx-fifo-depth = <16384>; + snps,multicast-filter-bins = <256>; + iommus = <&smmu 3>; +- altr,sysmgr-syscon = <&sysmgr 0x4c 16>; ++ altr,sysmgr-syscon = <&sysmgr 0x4c 0>; + clocks = <&clkmgr 
AGILEX_EMAC2_CLK>, <&clkmgr AGILEX_EMAC_PTP_CLK>; + clock-names = "stmmaceth", "ptp_ref"; + status = "disabled"; +diff --git a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts +index bf76ebe463794..cca143e4b6bf8 100644 +--- a/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts ++++ b/arch/arm64/boot/dts/marvell/armada-3720-turris-mox.dts +@@ -204,7 +204,7 @@ + }; + + partition@20000 { +- label = "u-boot"; ++ label = "a53-firmware"; + reg = <0x20000 0x160000>; + }; + +diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi +index 5b9ec032ce8d8..7c6d871538a63 100644 +--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi ++++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi +@@ -698,6 +698,8 @@ + clocks = <&pericfg CLK_PERI_MSDC30_1_PD>, + <&topckgen CLK_TOP_AXI_SEL>; + clock-names = "source", "hclk"; ++ resets = <&pericfg MT7622_PERI_MSDC1_SW_RST>; ++ reset-names = "hrst"; + status = "disabled"; + }; + +diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi +index f7ac4c4033db6..7bf2cb01513e3 100644 +--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi ++++ b/arch/arm64/boot/dts/qcom/msm8916-samsung-a2015-common.dtsi +@@ -106,6 +106,9 @@ + interrupt-parent = <&msmgpio>; + interrupts = <115 IRQ_TYPE_EDGE_RISING>; + ++ vdd-supply = <&pm8916_l17>; ++ vddio-supply = <&pm8916_l5>; ++ + pinctrl-names = "default"; + pinctrl-0 = <&accel_int_default>; + }; +@@ -113,6 +116,9 @@ + magnetometer@12 { + compatible = "bosch,bmc150_magn"; + reg = <0x12>; ++ ++ vdd-supply = <&pm8916_l17>; ++ vddio-supply = <&pm8916_l5>; + }; + }; + +diff --git a/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts b/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts +index e39c04d977c25..dd35c3344358c 100644 +--- a/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts ++++ 
b/arch/arm64/boot/dts/qcom/msm8916-samsung-a5u-eur.dts +@@ -38,7 +38,7 @@ + + &pronto { + iris { +- compatible = "qcom,wcn3680"; ++ compatible = "qcom,wcn3660b"; + }; + }; + +diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi +index aaa21899f1a63..0e34ed48b9fae 100644 +--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi ++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi +@@ -55,7 +55,7 @@ + no-map; + }; + +- reserved@8668000 { ++ reserved@86680000 { + reg = <0x0 0x86680000 0x0 0x80000>; + no-map; + }; +@@ -68,7 +68,7 @@ + qcom,client-id = <1>; + }; + +- rfsa@867e00000 { ++ rfsa@867e0000 { + reg = <0x0 0x867e0000 0x0 0x20000>; + no-map; + }; +diff --git a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts +index 1528a865f1f8e..949fee6949e61 100644 +--- a/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts ++++ b/arch/arm64/boot/dts/qcom/qrb5165-rb5.dts +@@ -114,7 +114,7 @@ + + &apps_rsc { + pm8009-rpmh-regulators { +- compatible = "qcom,pm8009-rpmh-regulators"; ++ compatible = "qcom,pm8009-1-rpmh-regulators"; + qcom,pmic-id = "f"; + + vdd-s1-supply = <&vph_pwr>; +@@ -123,6 +123,13 @@ + vdd-l5-l6-supply = <&vreg_bob>; + vdd-l7-supply = <&vreg_s4a_1p8>; + ++ vreg_s2f_0p95: smps2 { ++ regulator-name = "vreg_s2f_0p95"; ++ regulator-min-microvolt = <900000>; ++ regulator-max-microvolt = <952000>; ++ regulator-initial-mode = ; ++ }; ++ + vreg_l1f_1p1: ldo1 { + regulator-name = "vreg_l1f_1p1"; + regulator-min-microvolt = <1104000>; +diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts +index c0b93813ea9ac..c4ac6f5dc008d 100644 +--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts ++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts +@@ -1114,11 +1114,11 @@ + reg = <0x10>; + + // CAM0_RST_N +- reset-gpios = <&tlmm 9 0>; ++ reset-gpios = <&tlmm 9 GPIO_ACTIVE_LOW>; + pinctrl-names = "default"; + pinctrl-0 = <&cam0_default>; + gpios = <&tlmm 13 0>, +- <&tlmm 9 0>; ++ <&tlmm 9 
GPIO_ACTIVE_LOW>; + + clocks = <&clock_camcc CAM_CC_MCLK0_CLK>; + clock-names = "xvclk"; +diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi +index 66c9153b31015..597388f871272 100644 +--- a/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi ++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-baseboard.dtsi +@@ -150,7 +150,7 @@ + regulator-name = "audio-1.8V"; + regulator-min-microvolt = <1800000>; + regulator-max-microvolt = <1800000>; +- gpio = <&gpio_exp2 7 GPIO_ACTIVE_HIGH>; ++ gpio = <&gpio_exp4 1 GPIO_ACTIVE_HIGH>; + enable-active-high; + }; + +diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi +index 97272f5fa0abf..289cf711307d6 100644 +--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi ++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi +@@ -88,7 +88,6 @@ + pinctrl-names = "default"; + uart-has-rtscts; + status = "okay"; +- max-speed = <4000000>; + + bluetooth { + compatible = "brcm,bcm43438-bt"; +@@ -97,6 +96,7 @@ + device-wakeup-gpios = <&pca9654 5 GPIO_ACTIVE_HIGH>; + clocks = <&osc_32k>; + clock-names = "extclk"; ++ max-speed = <4000000>; + }; + }; + +@@ -147,7 +147,7 @@ + }; + + eeprom@50 { +- compatible = "microchip,at24c64", "atmel,24c64"; ++ compatible = "microchip,24c64", "atmel,24c64"; + pagesize = <32>; + read-only; /* Manufacturing EEPROM programmed at factory */ + reg = <0x50>; +diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi +index db0d5c8e5f96a..93c734d8a46c2 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi ++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi +@@ -928,6 +928,7 @@ + phy-mode = "rmii"; + phy-handle = <&phy>; + snps,txpbl = <0x4>; ++ clock_in_out = "output"; + status = "disabled"; + + mdio { +diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c +index 395bbf64b2abb..53c92e060c3dd 100644 +--- 
a/arch/arm64/crypto/aes-glue.c ++++ b/arch/arm64/crypto/aes-glue.c +@@ -55,7 +55,7 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions"); + #define aes_mac_update neon_aes_mac_update + MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 NEON"); + #endif +-#if defined(USE_V8_CRYPTO_EXTENSIONS) || !defined(CONFIG_CRYPTO_AES_ARM64_BS) ++#if defined(USE_V8_CRYPTO_EXTENSIONS) || !IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS) + MODULE_ALIAS_CRYPTO("ecb(aes)"); + MODULE_ALIAS_CRYPTO("cbc(aes)"); + MODULE_ALIAS_CRYPTO("ctr(aes)"); +@@ -650,7 +650,7 @@ static int __maybe_unused xts_decrypt(struct skcipher_request *req) + } + + static struct skcipher_alg aes_algs[] = { { +-#if defined(USE_V8_CRYPTO_EXTENSIONS) || !defined(CONFIG_CRYPTO_AES_ARM64_BS) ++#if defined(USE_V8_CRYPTO_EXTENSIONS) || !IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS) + .base = { + .cra_name = "__ecb(aes)", + .cra_driver_name = "__ecb-aes-" MODE, +diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c +index c63b99211db3d..8baf8d1846b64 100644 +--- a/arch/arm64/crypto/sha1-ce-glue.c ++++ b/arch/arm64/crypto/sha1-ce-glue.c +@@ -19,6 +19,7 @@ + MODULE_DESCRIPTION("SHA1 secure hash using ARMv8 Crypto Extensions"); + MODULE_AUTHOR("Ard Biesheuvel "); + MODULE_LICENSE("GPL v2"); ++MODULE_ALIAS_CRYPTO("sha1"); + + struct sha1_ce_state { + struct sha1_state sst; +diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c +index 5e956d7582a56..d33d3ee92cc98 100644 +--- a/arch/arm64/crypto/sha2-ce-glue.c ++++ b/arch/arm64/crypto/sha2-ce-glue.c +@@ -19,6 +19,8 @@ + MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensions"); + MODULE_AUTHOR("Ard Biesheuvel "); + MODULE_LICENSE("GPL v2"); ++MODULE_ALIAS_CRYPTO("sha224"); ++MODULE_ALIAS_CRYPTO("sha256"); + + struct sha256_ce_state { + struct sha256_state sst; +diff --git a/arch/arm64/crypto/sha3-ce-glue.c b/arch/arm64/crypto/sha3-ce-glue.c +index 9a4bbfc45f407..ddf7aca9ff459 100644 +--- 
a/arch/arm64/crypto/sha3-ce-glue.c ++++ b/arch/arm64/crypto/sha3-ce-glue.c +@@ -23,6 +23,10 @@ + MODULE_DESCRIPTION("SHA3 secure hash using ARMv8 Crypto Extensions"); + MODULE_AUTHOR("Ard Biesheuvel "); + MODULE_LICENSE("GPL v2"); ++MODULE_ALIAS_CRYPTO("sha3-224"); ++MODULE_ALIAS_CRYPTO("sha3-256"); ++MODULE_ALIAS_CRYPTO("sha3-384"); ++MODULE_ALIAS_CRYPTO("sha3-512"); + + asmlinkage void sha3_ce_transform(u64 *st, const u8 *data, int blocks, + int md_len); +diff --git a/arch/arm64/crypto/sha512-ce-glue.c b/arch/arm64/crypto/sha512-ce-glue.c +index dc890a719f54c..57c6f086dfb4c 100644 +--- a/arch/arm64/crypto/sha512-ce-glue.c ++++ b/arch/arm64/crypto/sha512-ce-glue.c +@@ -23,6 +23,8 @@ + MODULE_DESCRIPTION("SHA-384/SHA-512 secure hash using ARMv8 Crypto Extensions"); + MODULE_AUTHOR("Ard Biesheuvel "); + MODULE_LICENSE("GPL v2"); ++MODULE_ALIAS_CRYPTO("sha384"); ++MODULE_ALIAS_CRYPTO("sha512"); + + asmlinkage void sha512_ce_transform(struct sha512_state *sst, u8 const *src, + int blocks); +diff --git a/arch/arm64/include/asm/module.lds.h b/arch/arm64/include/asm/module.lds.h +index 691f15af788e4..810045628c66e 100644 +--- a/arch/arm64/include/asm/module.lds.h ++++ b/arch/arm64/include/asm/module.lds.h +@@ -1,7 +1,7 @@ + #ifdef CONFIG_ARM64_MODULE_PLTS + SECTIONS { +- .plt (NOLOAD) : { BYTE(0) } +- .init.plt (NOLOAD) : { BYTE(0) } +- .text.ftrace_trampoline (NOLOAD) : { BYTE(0) } ++ .plt 0 (NOLOAD) : { BYTE(0) } ++ .init.plt 0 (NOLOAD) : { BYTE(0) } ++ .text.ftrace_trampoline 0 (NOLOAD) : { BYTE(0) } + } + #endif +diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c +index 65a522fbd8743..7da9a7cee4cef 100644 +--- a/arch/arm64/kernel/cpufeature.c ++++ b/arch/arm64/kernel/cpufeature.c +@@ -1457,7 +1457,7 @@ static bool cpu_has_broken_dbm(void) + /* List of CPUs which have broken DBM support. 
*/ + static const struct midr_range cpus[] = { + #ifdef CONFIG_ARM64_ERRATUM_1024718 +- MIDR_RANGE(MIDR_CORTEX_A55, 0, 0, 1, 0), // A55 r0p0 -r1p0 ++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55), + /* Kryo4xx Silver (rdpe => r1p0) */ + MIDR_REV(MIDR_QCOM_KRYO_4XX_SILVER, 0xd, 0xe), + #endif +diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S +index d8d9caf02834e..e7550a5289fef 100644 +--- a/arch/arm64/kernel/head.S ++++ b/arch/arm64/kernel/head.S +@@ -985,6 +985,7 @@ SYM_FUNC_START_LOCAL(__primary_switch) + + tlbi vmalle1 // Remove any stale TLB entries + dsb nsh ++ isb + + msr sctlr_el1, x19 // re-enable the MMU + isb +diff --git a/arch/arm64/kernel/machine_kexec_file.c b/arch/arm64/kernel/machine_kexec_file.c +index 03210f6447900..0cde47a63bebf 100644 +--- a/arch/arm64/kernel/machine_kexec_file.c ++++ b/arch/arm64/kernel/machine_kexec_file.c +@@ -182,8 +182,10 @@ static int create_dtb(struct kimage *image, + + /* duplicate a device tree blob */ + ret = fdt_open_into(initial_boot_params, buf, buf_size); +- if (ret) ++ if (ret) { ++ vfree(buf); + return -EINVAL; ++ } + + ret = setup_dtb(image, initrd_load_addr, initrd_len, + cmdline, buf); +diff --git a/arch/arm64/kernel/probes/uprobes.c b/arch/arm64/kernel/probes/uprobes.c +index a412d8edbcd24..2c247634552b1 100644 +--- a/arch/arm64/kernel/probes/uprobes.c ++++ b/arch/arm64/kernel/probes/uprobes.c +@@ -38,7 +38,7 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm, + + /* TODO: Currently we do not support AARCH32 instruction probing */ + if (mm->context.flags & MMCF_AARCH32) +- return -ENOTSUPP; ++ return -EOPNOTSUPP; + else if (!IS_ALIGNED(addr, AARCH64_INSN_SIZE)) + return -EINVAL; + +diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c +index f49b349e16a34..66256603bd596 100644 +--- a/arch/arm64/kernel/ptrace.c ++++ b/arch/arm64/kernel/ptrace.c +@@ -1799,7 +1799,7 @@ int syscall_trace_enter(struct pt_regs *regs) + + if (flags & (_TIF_SYSCALL_EMU | 
_TIF_SYSCALL_TRACE)) { + tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER); +- if (!in_syscall(regs) || (flags & _TIF_SYSCALL_EMU)) ++ if (flags & _TIF_SYSCALL_EMU) + return NO_SYSCALL; + } + +diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c +index 96cd347c7a465..9f8cdeccd1ba9 100644 +--- a/arch/arm64/kernel/suspend.c ++++ b/arch/arm64/kernel/suspend.c +@@ -120,7 +120,7 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long)) + if (!ret) + ret = -EOPNOTSUPP; + } else { +- __cpu_suspend_exit(); ++ RCU_NONIDLE(__cpu_suspend_exit()); + } + + unpause_graph_tracing(); +diff --git a/arch/csky/kernel/ptrace.c b/arch/csky/kernel/ptrace.c +index d822144906ac1..a4cf2e2ac15ac 100644 +--- a/arch/csky/kernel/ptrace.c ++++ b/arch/csky/kernel/ptrace.c +@@ -83,7 +83,7 @@ static int gpr_get(struct task_struct *target, + /* Abiv1 regs->tls is fake and we need sync here. */ + regs->tls = task_thread_info(target)->tp_value; + +- return membuf_write(&to, regs, sizeof(regs)); ++ return membuf_write(&to, regs, sizeof(*regs)); + } + + static int gpr_set(struct task_struct *target, +diff --git a/arch/mips/Makefile b/arch/mips/Makefile +index 0d0f29d662c9a..686990fcc5f0f 100644 +--- a/arch/mips/Makefile ++++ b/arch/mips/Makefile +@@ -136,6 +136,25 @@ cflags-$(CONFIG_SB1XXX_CORELIS) += $(call cc-option,-mno-sched-prolog) \ + # + cflags-y += -fno-stack-check + ++# binutils from v2.35 when built with --enable-mips-fix-loongson3-llsc=yes, ++# supports an -mfix-loongson3-llsc flag which emits a sync prior to each ll ++# instruction to work around a CPU bug (see __SYNC_loongson3_war in asm/sync.h ++# for a description). ++# ++# We disable this in order to prevent the assembler meddling with the ++# instruction that labels refer to, ie. 
if we label an ll instruction: ++# ++# 1: ll v0, 0(a0) ++# ++# ...then with the assembler fix applied the label may actually point at a sync ++# instruction inserted by the assembler, and if we were using the label in an ++# exception table the table would no longer contain the address of the ll ++# instruction. ++# ++# Avoid this by explicitly disabling that assembler behaviour. ++# ++cflags-y += $(call as-option,-Wa$(comma)-mno-fix-loongson3-llsc,) ++ + # + # CPU-dependent compiler/assembler options for optimization. + # +diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c +index 561389d3fadb2..b329cdb6134d2 100644 +--- a/arch/mips/cavium-octeon/setup.c ++++ b/arch/mips/cavium-octeon/setup.c +@@ -1158,12 +1158,15 @@ void __init device_tree_init(void) + bool do_prune; + bool fill_mac; + +- if (fw_passed_dtb) { +- fdt = (void *)fw_passed_dtb; ++#ifdef CONFIG_MIPS_ELF_APPENDED_DTB ++ if (!fdt_check_header(&__appended_dtb)) { ++ fdt = &__appended_dtb; + do_prune = false; + fill_mac = true; + pr_info("Using appended Device Tree.\n"); +- } else if (octeon_bootinfo->minor_version >= 3 && octeon_bootinfo->fdt_addr) { ++ } else ++#endif ++ if (octeon_bootinfo->minor_version >= 3 && octeon_bootinfo->fdt_addr) { + fdt = phys_to_virt(octeon_bootinfo->fdt_addr); + if (fdt_check_header(fdt)) + panic("Corrupt Device Tree passed to kernel."); +diff --git a/arch/mips/include/asm/asm.h b/arch/mips/include/asm/asm.h +index 3682d1a0bb808..ea4b62ece3366 100644 +--- a/arch/mips/include/asm/asm.h ++++ b/arch/mips/include/asm/asm.h +@@ -20,10 +20,27 @@ + #include + #include + ++#ifndef __VDSO__ ++/* ++ * Emit CFI data in .debug_frame sections, not .eh_frame sections. ++ * We don't do DWARF unwinding at runtime, so only the offline DWARF ++ * information is useful to anyone. Note we should change this if we ++ * ever decide to enable DWARF unwinding at runtime. 
++ */ ++#define CFI_SECTIONS .cfi_sections .debug_frame ++#else ++ /* ++ * For the vDSO, emit both runtime unwind information and debug ++ * symbols for the .dbg file. ++ */ ++#define CFI_SECTIONS ++#endif ++ + /* + * LEAF - declare leaf routine + */ + #define LEAF(symbol) \ ++ CFI_SECTIONS; \ + .globl symbol; \ + .align 2; \ + .type symbol, @function; \ +@@ -36,6 +53,7 @@ symbol: .frame sp, 0, ra; \ + * NESTED - declare nested routine entry point + */ + #define NESTED(symbol, framesize, rpc) \ ++ CFI_SECTIONS; \ + .globl symbol; \ + .align 2; \ + .type symbol, @function; \ +diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h +index f904084fcb1fd..27ad767915390 100644 +--- a/arch/mips/include/asm/atomic.h ++++ b/arch/mips/include/asm/atomic.h +@@ -248,7 +248,7 @@ static __inline__ int pfx##_sub_if_positive(type i, pfx##_t * v) \ + * bltz that can branch to code outside of the LL/SC loop. As \ + * such, we don't need to emit another barrier here. \ + */ \ +- if (!__SYNC_loongson3_war) \ ++ if (__SYNC_loongson3_war == 0) \ + smp_mb__after_atomic(); \ + \ + return result; \ +diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h +index 5b0b3a6777ea5..ed8f3f3c4304a 100644 +--- a/arch/mips/include/asm/cmpxchg.h ++++ b/arch/mips/include/asm/cmpxchg.h +@@ -99,7 +99,7 @@ unsigned long __xchg(volatile void *ptr, unsigned long x, int size) + * contains a completion barrier prior to the LL, so we don't \ + * need to emit an extra one here. \ + */ \ +- if (!__SYNC_loongson3_war) \ ++ if (__SYNC_loongson3_war == 0) \ + smp_mb__before_llsc(); \ + \ + __res = (__typeof__(*(ptr))) \ +@@ -191,7 +191,7 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old, + * contains a completion barrier prior to the LL, so we don't \ + * need to emit an extra one here. 
\ + */ \ +- if (!__SYNC_loongson3_war) \ ++ if (__SYNC_loongson3_war == 0) \ + smp_mb__before_llsc(); \ + \ + __res = cmpxchg_local((ptr), (old), (new)); \ +@@ -201,7 +201,7 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old, + * contains a completion barrier after the SC, so we don't \ + * need to emit an extra one here. \ + */ \ +- if (!__SYNC_loongson3_war) \ ++ if (__SYNC_loongson3_war == 0) \ + smp_llsc_mb(); \ + \ + __res; \ +diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c +index e6853697a0561..31cb9199197ca 100644 +--- a/arch/mips/kernel/cpu-probe.c ++++ b/arch/mips/kernel/cpu-probe.c +@@ -1830,16 +1830,17 @@ static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu) + */ + case PRID_COMP_INGENIC_D0: + c->isa_level &= ~MIPS_CPU_ISA_M32R2; +- break; ++ fallthrough; + + /* + * The config0 register in the XBurst CPUs with a processor ID of +- * PRID_COMP_INGENIC_D1 has an abandoned huge page tlb mode, this +- * mode is not compatible with the MIPS standard, it will cause +- * tlbmiss and into an infinite loop (line 21 in the tlb-funcs.S) +- * when starting the init process. After chip reset, the default +- * is HPTLB mode, Write 0xa9000000 to cp0 register 5 sel 4 to +- * switch back to VTLB mode to prevent getting stuck. ++ * PRID_COMP_INGENIC_D0 or PRID_COMP_INGENIC_D1 has an abandoned ++ * huge page tlb mode, this mode is not compatible with the MIPS ++ * standard, it will cause tlbmiss and into an infinite loop ++ * (line 21 in the tlb-funcs.S) when starting the init process. ++ * After chip reset, the default is HPTLB mode, Write 0xa9000000 ++ * to cp0 register 5 sel 4 to switch back to VTLB mode to prevent ++ * getting stuck. 
+ */ + case PRID_COMP_INGENIC_D1: + write_c0_page_ctrl(XBURST_PAGECTRL_HPTLB_DIS); +diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S +index 5e97e9d02f98d..09fa4705ce8eb 100644 +--- a/arch/mips/kernel/vmlinux.lds.S ++++ b/arch/mips/kernel/vmlinux.lds.S +@@ -90,6 +90,7 @@ SECTIONS + + INIT_TASK_DATA(THREAD_SIZE) + NOSAVE_DATA ++ PAGE_ALIGNED_DATA(PAGE_SIZE) + CACHELINE_ALIGNED_DATA(1 << CONFIG_MIPS_L1_CACHE_SHIFT) + READ_MOSTLY_DATA(1 << CONFIG_MIPS_L1_CACHE_SHIFT) + DATA_DATA +@@ -223,6 +224,5 @@ SECTIONS + *(.options) + *(.pdr) + *(.reginfo) +- *(.eh_frame) + } + } +diff --git a/arch/mips/lantiq/irq.c b/arch/mips/lantiq/irq.c +index df8eed3875f6d..43c2f271e6ab4 100644 +--- a/arch/mips/lantiq/irq.c ++++ b/arch/mips/lantiq/irq.c +@@ -302,7 +302,7 @@ static void ltq_hw_irq_handler(struct irq_desc *desc) + generic_handle_irq(irq_linear_revmap(ltq_domain, hwirq)); + + /* if this is a EBU irq, we need to ack it or get a deadlock */ +- if ((irq == LTQ_ICU_EBU_IRQ) && (module == 0) && LTQ_EBU_PCC_ISTAT) ++ if (irq == LTQ_ICU_EBU_IRQ && !module && LTQ_EBU_PCC_ISTAT != 0) + ltq_ebu_w32(ltq_ebu_r32(LTQ_EBU_PCC_ISTAT) | 0x10, + LTQ_EBU_PCC_ISTAT); + } +diff --git a/arch/mips/loongson64/Platform b/arch/mips/loongson64/Platform +index ec42c5085905c..e2354e128d9a0 100644 +--- a/arch/mips/loongson64/Platform ++++ b/arch/mips/loongson64/Platform +@@ -5,28 +5,6 @@ + + cflags-$(CONFIG_CPU_LOONGSON64) += -Wa,--trap + +-# +-# Some versions of binutils, not currently mainline as of 2019/02/04, support +-# an -mfix-loongson3-llsc flag which emits a sync prior to each ll instruction +-# to work around a CPU bug (see __SYNC_loongson3_war in asm/sync.h for a +-# description). +-# +-# We disable this in order to prevent the assembler meddling with the +-# instruction that labels refer to, ie. 
if we label an ll instruction: +-# +-# 1: ll v0, 0(a0) +-# +-# ...then with the assembler fix applied the label may actually point at a sync +-# instruction inserted by the assembler, and if we were using the label in an +-# exception table the table would no longer contain the address of the ll +-# instruction. +-# +-# Avoid this by explicitly disabling that assembler behaviour. If upstream +-# binutils does not merge support for the flag then we can revisit & remove +-# this later - for now it ensures vendor toolchains don't cause problems. +-# +-cflags-$(CONFIG_CPU_LOONGSON64) += $(call as-option,-Wa$(comma)-mno-fix-loongson3-llsc,) +- + # + # binutils from v2.25 on and gcc starting from v4.9.0 treat -march=loongson3a + # as MIPS64 R2; older versions as just R1. This leaves the possibility open +diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c +index c9644c38ec28f..96adc3d23bd2d 100644 +--- a/arch/mips/mm/c-r4k.c ++++ b/arch/mips/mm/c-r4k.c +@@ -1593,7 +1593,7 @@ static int probe_scache(void) + return 1; + } + +-static void __init loongson2_sc_init(void) ++static void loongson2_sc_init(void) + { + struct cpuinfo_mips *c = ¤t_cpu_data; + +diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile +index 5810cc12bc1d9..2131d3fd73333 100644 +--- a/arch/mips/vdso/Makefile ++++ b/arch/mips/vdso/Makefile +@@ -16,16 +16,13 @@ ccflags-vdso := \ + $(filter -march=%,$(KBUILD_CFLAGS)) \ + $(filter -m%-float,$(KBUILD_CFLAGS)) \ + $(filter -mno-loongson-%,$(KBUILD_CFLAGS)) \ ++ $(CLANG_FLAGS) \ + -D__VDSO__ + + ifndef CONFIG_64BIT + ccflags-vdso += -DBUILD_VDSO32 + endif + +-ifdef CONFIG_CC_IS_CLANG +-ccflags-vdso += $(filter --target=%,$(KBUILD_CFLAGS)) +-endif +- + # + # The -fno-jump-tables flag only prevents the compiler from generating + # jump tables but does not prevent the compiler from emitting absolute +diff --git a/arch/nios2/kernel/entry.S b/arch/nios2/kernel/entry.S +index da8442450e460..0794cd7803dfe 100644 +--- a/arch/nios2/kernel/entry.S ++++ 
b/arch/nios2/kernel/entry.S +@@ -389,7 +389,10 @@ ENTRY(ret_from_interrupt) + */ + ENTRY(sys_clone) + SAVE_SWITCH_STACK ++ subi sp, sp, 4 /* make space for tls pointer */ ++ stw r8, 0(sp) /* pass tls pointer (r8) via stack (5th argument) */ + call nios2_clone ++ addi sp, sp, 4 + RESTORE_SWITCH_STACK + ret + +diff --git a/arch/nios2/kernel/sys_nios2.c b/arch/nios2/kernel/sys_nios2.c +index cd390ec4f88bf..b1ca856999521 100644 +--- a/arch/nios2/kernel/sys_nios2.c ++++ b/arch/nios2/kernel/sys_nios2.c +@@ -22,6 +22,7 @@ asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len, + unsigned int op) + { + struct vm_area_struct *vma; ++ struct mm_struct *mm = current->mm; + + if (len == 0) + return 0; +@@ -34,16 +35,22 @@ asmlinkage int sys_cacheflush(unsigned long addr, unsigned long len, + if (addr + len < addr) + return -EFAULT; + ++ if (mmap_read_lock_killable(mm)) ++ return -EINTR; ++ + /* + * Verify that the specified address region actually belongs + * to this process. + */ +- vma = find_vma(current->mm, addr); +- if (vma == NULL || addr < vma->vm_start || addr + len > vma->vm_end) ++ vma = find_vma(mm, addr); ++ if (vma == NULL || addr < vma->vm_start || addr + len > vma->vm_end) { ++ mmap_read_unlock(mm); + return -EFAULT; ++ } + + flush_cache_range(vma, addr, addr + len); + ++ mmap_read_unlock(mm); + return 0; + } + +diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig +index 5181872f94523..31ed8083571ff 100644 +--- a/arch/powerpc/Kconfig ++++ b/arch/powerpc/Kconfig +@@ -761,7 +761,7 @@ config PPC_64K_PAGES + + config PPC_256K_PAGES + bool "256k page size" +- depends on 44x && !STDBINUTILS ++ depends on 44x && !STDBINUTILS && !PPC_47x + help + Make the page size 256k. 
+ +diff --git a/arch/powerpc/include/asm/kexec.h b/arch/powerpc/include/asm/kexec.h +index 55d6ede30c19a..9ab344d29a545 100644 +--- a/arch/powerpc/include/asm/kexec.h ++++ b/arch/powerpc/include/asm/kexec.h +@@ -136,6 +136,7 @@ int load_crashdump_segments_ppc64(struct kimage *image, + int setup_purgatory_ppc64(struct kimage *image, const void *slave_code, + const void *fdt, unsigned long kernel_load_addr, + unsigned long fdt_load_addr); ++unsigned int kexec_fdt_totalsize_ppc64(struct kimage *image); + int setup_new_fdt_ppc64(const struct kimage *image, void *fdt, + unsigned long initrd_load_addr, + unsigned long initrd_len, const char *cmdline); +diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h +index 501c9a79038c0..f53bfefb4a577 100644 +--- a/arch/powerpc/include/asm/uaccess.h ++++ b/arch/powerpc/include/asm/uaccess.h +@@ -216,8 +216,6 @@ do { \ + #define __put_user_nocheck_goto(x, ptr, size, label) \ + do { \ + __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ +- if (!is_kernel_addr((unsigned long)__pu_addr)) \ +- might_fault(); \ + __chk_user_ptr(ptr); \ + __put_user_size_goto((x), __pu_addr, (size), label); \ + } while (0) +@@ -313,7 +311,7 @@ do { \ + __typeof__(size) __gu_size = (size); \ + \ + __chk_user_ptr(__gu_addr); \ +- if (!is_kernel_addr((unsigned long)__gu_addr)) \ ++ if (do_allow && !is_kernel_addr((unsigned long)__gu_addr)) \ + might_fault(); \ + barrier_nospec(); \ + if (do_allow) \ +@@ -508,6 +506,9 @@ static __must_check inline bool user_access_begin(const void __user *ptr, size_t + { + if (unlikely(!access_ok(ptr, len))) + return false; ++ ++ might_fault(); ++ + allow_read_write_user((void __user *)ptr, ptr, len); + return true; + } +@@ -521,6 +522,9 @@ user_read_access_begin(const void __user *ptr, size_t len) + { + if (unlikely(!access_ok(ptr, len))) + return false; ++ ++ might_fault(); ++ + allow_read_from_user(ptr, len); + return true; + } +@@ -532,6 +536,9 @@ user_write_access_begin(const void __user *ptr, 
size_t len) + { + if (unlikely(!access_ok(ptr, len))) + return false; ++ ++ might_fault(); ++ + allow_write_to_user((void __user *)ptr, len); + return true; + } +diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S +index 8cdc8bcde7038..459f5d00b9904 100644 +--- a/arch/powerpc/kernel/entry_32.S ++++ b/arch/powerpc/kernel/entry_32.S +@@ -347,6 +347,9 @@ trace_syscall_entry_irq_off: + + .globl transfer_to_syscall + transfer_to_syscall: ++#ifdef CONFIG_PPC_BOOK3S_32 ++ kuep_lock r11, r12 ++#endif + #ifdef CONFIG_TRACE_IRQFLAGS + andi. r12,r9,MSR_EE + beq- trace_syscall_entry_irq_off +diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h +index c88e66adecb52..fef0b34a77c9d 100644 +--- a/arch/powerpc/kernel/head_32.h ++++ b/arch/powerpc/kernel/head_32.h +@@ -56,7 +56,7 @@ + 1: + tophys_novmstack r11, r11 + #ifdef CONFIG_VMAP_STACK +- mtcrf 0x7f, r1 ++ mtcrf 0x3f, r1 + bt 32 - THREAD_ALIGN_SHIFT, stack_overflow + #endif + .endm +diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S +index ee0bfebc375f2..ce5fd93499a74 100644 +--- a/arch/powerpc/kernel/head_8xx.S ++++ b/arch/powerpc/kernel/head_8xx.S +@@ -175,7 +175,7 @@ SystemCall: + /* On the MPC8xx, this is a software emulation interrupt. It occurs + * for all unimplemented and illegal instructions. + */ +- EXCEPTION(0x1000, SoftEmu, program_check_exception, EXC_XFER_STD) ++ EXCEPTION(0x1000, SoftEmu, emulation_assist_interrupt, EXC_XFER_STD) + + . 
= 0x1100 + /* +diff --git a/arch/powerpc/kernel/head_book3s_32.S b/arch/powerpc/kernel/head_book3s_32.S +index d66da35f2e8d3..2729d8fa6e77c 100644 +--- a/arch/powerpc/kernel/head_book3s_32.S ++++ b/arch/powerpc/kernel/head_book3s_32.S +@@ -280,12 +280,6 @@ MachineCheck: + 7: EXCEPTION_PROLOG_2 + addi r3,r1,STACK_FRAME_OVERHEAD + #ifdef CONFIG_PPC_CHRP +-#ifdef CONFIG_VMAP_STACK +- mfspr r4, SPRN_SPRG_THREAD +- tovirt(r4, r4) +- lwz r4, RTAS_SP(r4) +- cmpwi cr1, r4, 0 +-#endif + beq cr1, machine_check_tramp + twi 31, 0, 0 + #else +diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c +index cc7a6271b6b4e..e8a548447dd68 100644 +--- a/arch/powerpc/kernel/irq.c ++++ b/arch/powerpc/kernel/irq.c +@@ -269,6 +269,31 @@ again: + } + } + ++#if defined(CONFIG_PPC_BOOK3S_64) && defined(CONFIG_PPC_KUAP) ++static inline void replay_soft_interrupts_irqrestore(void) ++{ ++ unsigned long kuap_state = get_kuap(); ++ ++ /* ++ * Check if anything calls local_irq_enable/restore() when KUAP is ++ * disabled (user access enabled). We handle that case here by saving ++ * and re-locking AMR but we shouldn't get here in the first place, ++ * hence the warning. 
++ */ ++ kuap_check_amr(); ++ ++ if (kuap_state != AMR_KUAP_BLOCKED) ++ set_kuap(AMR_KUAP_BLOCKED); ++ ++ replay_soft_interrupts(); ++ ++ if (kuap_state != AMR_KUAP_BLOCKED) ++ set_kuap(kuap_state); ++} ++#else ++#define replay_soft_interrupts_irqrestore() replay_soft_interrupts() ++#endif ++ + notrace void arch_local_irq_restore(unsigned long mask) + { + unsigned char irq_happened; +@@ -332,7 +357,7 @@ notrace void arch_local_irq_restore(unsigned long mask) + irq_soft_mask_set(IRQS_ALL_DISABLED); + trace_hardirqs_off(); + +- replay_soft_interrupts(); ++ replay_soft_interrupts_irqrestore(); + local_paca->irq_happened = 0; + + trace_hardirqs_on(); +diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c +index 38ae5933d9174..7e337c570ea6b 100644 +--- a/arch/powerpc/kernel/prom_init.c ++++ b/arch/powerpc/kernel/prom_init.c +@@ -1330,14 +1330,10 @@ static void __init prom_check_platform_support(void) + if (prop_len > sizeof(vec)) + prom_printf("WARNING: ibm,arch-vec-5-platform-support longer than expected (len: %d)\n", + prop_len); +- prom_getprop(prom.chosen, "ibm,arch-vec-5-platform-support", +- &vec, sizeof(vec)); +- for (i = 0; i < sizeof(vec); i += 2) { +- prom_debug("%d: index = 0x%x val = 0x%x\n", i / 2 +- , vec[i] +- , vec[i + 1]); +- prom_parse_platform_support(vec[i], vec[i + 1], +- &supported); ++ prom_getprop(prom.chosen, "ibm,arch-vec-5-platform-support", &vec, sizeof(vec)); ++ for (i = 0; i < prop_len; i += 2) { ++ prom_debug("%d: index = 0x%x val = 0x%x\n", i / 2, vec[i], vec[i + 1]); ++ prom_parse_platform_support(vec[i], vec[i + 1], &supported); + } + } + +diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c +index 7d372ff3504b2..1d20f0f77a920 100644 +--- a/arch/powerpc/kernel/time.c ++++ b/arch/powerpc/kernel/time.c +@@ -53,6 +53,7 @@ + #include + #include + #include ++#include + #include + #include + +@@ -1095,6 +1096,7 @@ void __init time_init(void) + tick_setup_hrtimer_broadcast(); + + of_clk_init(NULL); ++ 
enable_sched_clock_irqtime(); + } + + /* +diff --git a/arch/powerpc/kexec/elf_64.c b/arch/powerpc/kexec/elf_64.c +index d0e459bb2f05a..9842e33533df1 100644 +--- a/arch/powerpc/kexec/elf_64.c ++++ b/arch/powerpc/kexec/elf_64.c +@@ -102,7 +102,7 @@ static void *elf64_load(struct kimage *image, char *kernel_buf, + pr_debug("Loaded initrd at 0x%lx\n", initrd_load_addr); + } + +- fdt_size = fdt_totalsize(initial_boot_params) * 2; ++ fdt_size = kexec_fdt_totalsize_ppc64(image); + fdt = kmalloc(fdt_size, GFP_KERNEL); + if (!fdt) { + pr_err("Not enough memory for the device tree.\n"); +diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c +index c69bcf9b547a8..02b9e4d0dc40b 100644 +--- a/arch/powerpc/kexec/file_load_64.c ++++ b/arch/powerpc/kexec/file_load_64.c +@@ -21,6 +21,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -925,6 +926,40 @@ out: + return ret; + } + ++/** ++ * kexec_fdt_totalsize_ppc64 - Return the estimated size needed to setup FDT ++ * for kexec/kdump kernel. ++ * @image: kexec image being loaded. ++ * ++ * Returns the estimated size needed for kexec/kdump kernel FDT. ++ */ ++unsigned int kexec_fdt_totalsize_ppc64(struct kimage *image) ++{ ++ unsigned int fdt_size; ++ u64 usm_entries; ++ ++ /* ++ * The below estimate more than accounts for a typical kexec case where ++ * the additional space is to accommodate things like kexec cmdline, ++ * chosen node with properties for initrd start & end addresses and ++ * a property to indicate kexec boot.. ++ */ ++ fdt_size = fdt_totalsize(initial_boot_params) + (2 * COMMAND_LINE_SIZE); ++ if (image->type != KEXEC_TYPE_CRASH) ++ return fdt_size; ++ ++ /* ++ * For kdump kernel, also account for linux,usable-memory and ++ * linux,drconf-usable-memory properties. Get an approximate on the ++ * number of usable memory entries and use for FDT size estimation. 
++ */ ++ usm_entries = ((memblock_end_of_DRAM() / drmem_lmb_size()) + ++ (2 * (resource_size(&crashk_res) / drmem_lmb_size()))); ++ fdt_size += (unsigned int)(usm_entries * sizeof(u64)); ++ ++ return fdt_size; ++} ++ + /** + * setup_new_fdt_ppc64 - Update the flattend device-tree of the kernel + * being loaded. +diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c +index 13999123b7358..32fa0fa3d4ff5 100644 +--- a/arch/powerpc/kvm/powerpc.c ++++ b/arch/powerpc/kvm/powerpc.c +@@ -1518,7 +1518,7 @@ int kvmppc_handle_vmx_load(struct kvm_vcpu *vcpu, + return emulated; + } + +-int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val) ++static int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val) + { + union kvmppc_one_reg reg; + int vmx_offset = 0; +@@ -1536,7 +1536,7 @@ int kvmppc_get_vmx_dword(struct kvm_vcpu *vcpu, int index, u64 *val) + return result; + } + +-int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val) ++static int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val) + { + union kvmppc_one_reg reg; + int vmx_offset = 0; +@@ -1554,7 +1554,7 @@ int kvmppc_get_vmx_word(struct kvm_vcpu *vcpu, int index, u64 *val) + return result; + } + +-int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val) ++static int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val) + { + union kvmppc_one_reg reg; + int vmx_offset = 0; +@@ -1572,7 +1572,7 @@ int kvmppc_get_vmx_hword(struct kvm_vcpu *vcpu, int index, u64 *val) + return result; + } + +-int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val) ++static int kvmppc_get_vmx_byte(struct kvm_vcpu *vcpu, int index, u64 *val) + { + union kvmppc_one_reg reg; + int vmx_offset = 0; +diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c +index 16e86ba8aa209..f6b7749d6ada7 100644 +--- a/arch/powerpc/platforms/pseries/dlpar.c ++++ b/arch/powerpc/platforms/pseries/dlpar.c +@@ -127,7 +127,6 @@ void 
dlpar_free_cc_nodes(struct device_node *dn) + #define NEXT_PROPERTY 3 + #define PREV_PARENT 4 + #define MORE_MEMORY 5 +-#define CALL_AGAIN -2 + #define ERR_CFG_USE -9003 + + struct device_node *dlpar_configure_connector(__be32 drc_index, +@@ -168,6 +167,9 @@ struct device_node *dlpar_configure_connector(__be32 drc_index, + + spin_unlock(&rtas_data_buf_lock); + ++ if (rtas_busy_delay(rc)) ++ continue; ++ + switch (rc) { + case COMPLETE: + break; +@@ -216,9 +218,6 @@ struct device_node *dlpar_configure_connector(__be32 drc_index, + last_dn = last_dn->parent; + break; + +- case CALL_AGAIN: +- break; +- + case MORE_MEMORY: + case ERR_CFG_USE: + default: +diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile +index 0cfd6da784f84..71a315e73cbe7 100644 +--- a/arch/riscv/kernel/vdso/Makefile ++++ b/arch/riscv/kernel/vdso/Makefile +@@ -32,9 +32,10 @@ CPPFLAGS_vdso.lds += -P -C -U$(ARCH) + # Disable -pg to prevent insert call site + CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os + +-# Disable gcov profiling for VDSO code ++# Disable profiling and instrumentation for VDSO code + GCOV_PROFILE := n + KCOV_INSTRUMENT := n ++KASAN_SANITIZE := n + + # Force dependency + $(obj)/vdso.o: $(obj)/vdso.so +diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c +index 8df10d3c8f6cf..7b3af2d6b9baa 100644 +--- a/arch/s390/kernel/vtime.c ++++ b/arch/s390/kernel/vtime.c +@@ -136,7 +136,8 @@ static int do_account_vtime(struct task_struct *tsk) + " stck %1" /* Store current tod clock value */ + #endif + : "=Q" (S390_lowcore.last_update_timer), +- "=Q" (S390_lowcore.last_update_clock)); ++ "=Q" (S390_lowcore.last_update_clock) ++ : : "cc"); + clock = S390_lowcore.last_update_clock - clock; + timer -= S390_lowcore.last_update_timer; + +diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig +index a6ca135442f9a..530b7ec5d3ca9 100644 +--- a/arch/sparc/Kconfig ++++ b/arch/sparc/Kconfig +@@ -496,7 +496,7 @@ config COMPAT + bool + depends on SPARC64 + default 
y +- select COMPAT_BINFMT_ELF ++ select COMPAT_BINFMT_ELF if BINFMT_ELF + select HAVE_UID16 + select ARCH_WANT_OLD_COMPAT_IPC + select COMPAT_OLD_SIGACTION +diff --git a/arch/sparc/kernel/led.c b/arch/sparc/kernel/led.c +index bd48575172c32..3a66e62eb2a0e 100644 +--- a/arch/sparc/kernel/led.c ++++ b/arch/sparc/kernel/led.c +@@ -50,6 +50,7 @@ static void led_blink(struct timer_list *unused) + add_timer(&led_blink_timer); + } + ++#ifdef CONFIG_PROC_FS + static int led_proc_show(struct seq_file *m, void *v) + { + if (get_auxio() & AUXIO_LED) +@@ -111,6 +112,7 @@ static const struct proc_ops led_proc_ops = { + .proc_release = single_release, + .proc_write = led_proc_write, + }; ++#endif + + static struct proc_dir_entry *led; + +diff --git a/arch/sparc/lib/memset.S b/arch/sparc/lib/memset.S +index b89d42b29e344..f427f34b8b79b 100644 +--- a/arch/sparc/lib/memset.S ++++ b/arch/sparc/lib/memset.S +@@ -142,6 +142,7 @@ __bzero: + ZERO_LAST_BLOCKS(%o0, 0x48, %g2) + ZERO_LAST_BLOCKS(%o0, 0x08, %g2) + 13: ++ EXT(12b, 13b, 21f) + be 8f + andcc %o1, 4, %g0 + +diff --git a/arch/um/include/shared/skas/mm_id.h b/arch/um/include/shared/skas/mm_id.h +index 4337b4ced0954..e82e203f5f419 100644 +--- a/arch/um/include/shared/skas/mm_id.h ++++ b/arch/um/include/shared/skas/mm_id.h +@@ -12,6 +12,7 @@ struct mm_id { + int pid; + } u; + unsigned long stack; ++ int kill; + }; + + #endif +diff --git a/arch/um/kernel/tlb.c b/arch/um/kernel/tlb.c +index 61776790cd678..5be1b0da9f3be 100644 +--- a/arch/um/kernel/tlb.c ++++ b/arch/um/kernel/tlb.c +@@ -125,6 +125,9 @@ static int add_mmap(unsigned long virt, unsigned long phys, unsigned long len, + struct host_vm_op *last; + int fd = -1, ret = 0; + ++ if (virt + len > STUB_START && virt < STUB_END) ++ return -EINVAL; ++ + if (hvc->userspace) + fd = phys_mapping(phys, &offset); + else +@@ -162,7 +165,7 @@ static int add_munmap(unsigned long addr, unsigned long len, + struct host_vm_op *last; + int ret = 0; + +- if ((addr >= STUB_START) && (addr < 
STUB_END)) ++ if (addr + len > STUB_START && addr < STUB_END) + return -EINVAL; + + if (hvc->index != 0) { +@@ -192,6 +195,9 @@ static int add_mprotect(unsigned long addr, unsigned long len, + struct host_vm_op *last; + int ret = 0; + ++ if (addr + len > STUB_START && addr < STUB_END) ++ return -EINVAL; ++ + if (hvc->index != 0) { + last = &hvc->ops[hvc->index - 1]; + if ((last->type == MPROTECT) && +@@ -346,12 +352,11 @@ void fix_range_common(struct mm_struct *mm, unsigned long start_addr, + + /* This is not an else because ret is modified above */ + if (ret) { ++ struct mm_id *mm_idp = ¤t->mm->context.id; ++ + printk(KERN_ERR "fix_range_common: failed, killing current " + "process: %d\n", task_tgid_vnr(current)); +- /* We are under mmap_lock, release it such that current can terminate */ +- mmap_write_unlock(current->mm); +- force_sig(SIGKILL); +- do_signal(¤t->thread.regs); ++ mm_idp->kill = 1; + } + } + +@@ -472,6 +477,10 @@ void flush_tlb_page(struct vm_area_struct *vma, unsigned long address) + struct mm_id *mm_id; + + address &= PAGE_MASK; ++ ++ if (address >= STUB_START && address < STUB_END) ++ goto kill; ++ + pgd = pgd_offset(mm, address); + if (!pgd_present(*pgd)) + goto kill; +diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c +index 4fb877b99dded..94a7c4125ebc8 100644 +--- a/arch/um/os-Linux/skas/process.c ++++ b/arch/um/os-Linux/skas/process.c +@@ -249,6 +249,7 @@ static int userspace_tramp(void *stack) + } + + int userspace_pid[NR_CPUS]; ++int kill_userspace_mm[NR_CPUS]; + + /** + * start_userspace() - prepare a new userspace process +@@ -342,6 +343,8 @@ void userspace(struct uml_pt_regs *regs, unsigned long *aux_fp_regs) + interrupt_end(); + + while (1) { ++ if (kill_userspace_mm[0]) ++ fatal_sigsegv(); + + /* + * This can legitimately fail if the process loads a +@@ -650,4 +653,5 @@ void reboot_skas(void) + void __switch_mm(struct mm_id *mm_idp) + { + userspace_pid[0] = mm_idp->u.pid; ++ kill_userspace_mm[0] = 
mm_idp->kill; + } +diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c +index ad8a7188a2bf7..f9a1d98e75349 100644 +--- a/arch/x86/crypto/aesni-intel_glue.c ++++ b/arch/x86/crypto/aesni-intel_glue.c +@@ -686,7 +686,8 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + unsigned long auth_tag_len = crypto_aead_authsize(tfm); + const struct aesni_gcm_tfm_s *gcm_tfm = aesni_gcm_tfm; +- struct gcm_context_data data AESNI_ALIGN_ATTR; ++ u8 databuf[sizeof(struct gcm_context_data) + (AESNI_ALIGN - 8)] __aligned(8); ++ struct gcm_context_data *data = PTR_ALIGN((void *)databuf, AESNI_ALIGN); + struct scatter_walk dst_sg_walk = {}; + unsigned long left = req->cryptlen; + unsigned long len, srclen, dstlen; +@@ -735,8 +736,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, + } + + kernel_fpu_begin(); +- gcm_tfm->init(aes_ctx, &data, iv, +- hash_subkey, assoc, assoclen); ++ gcm_tfm->init(aes_ctx, data, iv, hash_subkey, assoc, assoclen); + if (req->src != req->dst) { + while (left) { + src = scatterwalk_map(&src_sg_walk); +@@ -746,10 +746,10 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, + len = min(srclen, dstlen); + if (len) { + if (enc) +- gcm_tfm->enc_update(aes_ctx, &data, ++ gcm_tfm->enc_update(aes_ctx, data, + dst, src, len); + else +- gcm_tfm->dec_update(aes_ctx, &data, ++ gcm_tfm->dec_update(aes_ctx, data, + dst, src, len); + } + left -= len; +@@ -767,10 +767,10 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, + len = scatterwalk_clamp(&src_sg_walk, left); + if (len) { + if (enc) +- gcm_tfm->enc_update(aes_ctx, &data, ++ gcm_tfm->enc_update(aes_ctx, data, + src, src, len); + else +- gcm_tfm->dec_update(aes_ctx, &data, ++ gcm_tfm->dec_update(aes_ctx, data, + src, src, len); + } + left -= len; +@@ -779,7 +779,7 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, + 
scatterwalk_done(&src_sg_walk, 1, left); + } + } +- gcm_tfm->finalize(aes_ctx, &data, authTag, auth_tag_len); ++ gcm_tfm->finalize(aes_ctx, data, authTag, auth_tag_len); + kernel_fpu_end(); + + if (!assocmem) +@@ -828,7 +828,8 @@ static int helper_rfc4106_encrypt(struct aead_request *req) + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm); + void *aes_ctx = &(ctx->aes_key_expanded); +- u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN))); ++ u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); ++ u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + unsigned int i; + __be32 counter = cpu_to_be32(1); + +@@ -855,7 +856,8 @@ static int helper_rfc4106_decrypt(struct aead_request *req) + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm); + void *aes_ctx = &(ctx->aes_key_expanded); +- u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN))); ++ u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); ++ u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + unsigned int i; + + if (unlikely(req->assoclen != 16 && req->assoclen != 20)) +@@ -985,7 +987,8 @@ static int generic_gcmaes_encrypt(struct aead_request *req) + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm); + void *aes_ctx = &(ctx->aes_key_expanded); +- u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN))); ++ u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); ++ u8 *iv = PTR_ALIGN(&ivbuf[0], AESNI_ALIGN); + __be32 counter = cpu_to_be32(1); + + memcpy(iv, req->iv, 12); +@@ -1001,7 +1004,8 @@ static int generic_gcmaes_decrypt(struct aead_request *req) + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm); + void *aes_ctx = &(ctx->aes_key_expanded); +- u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN))); ++ u8 ivbuf[16 + (AESNI_ALIGN - 8)] __aligned(8); ++ u8 *iv = PTR_ALIGN(&ivbuf[0], 
AESNI_ALIGN); + + memcpy(iv, req->iv, 12); + *((__be32 *)(iv+12)) = counter; +diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c +index 94c6e6330e043..de5358671750d 100644 +--- a/arch/x86/entry/common.c ++++ b/arch/x86/entry/common.c +@@ -304,7 +304,7 @@ __visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs) + + instrumentation_begin(); + run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs); +- instrumentation_begin(); ++ instrumentation_end(); + + set_irq_regs(old_regs); + +diff --git a/arch/x86/include/asm/virtext.h b/arch/x86/include/asm/virtext.h +index 9aad0e0876fba..fda3e7747c223 100644 +--- a/arch/x86/include/asm/virtext.h ++++ b/arch/x86/include/asm/virtext.h +@@ -30,15 +30,22 @@ static inline int cpu_has_vmx(void) + } + + +-/** Disable VMX on the current CPU ++/** ++ * cpu_vmxoff() - Disable VMX on the current CPU + * +- * vmxoff causes a undefined-opcode exception if vmxon was not run +- * on the CPU previously. Only call this function if you know VMX +- * is enabled. ++ * Disable VMX and clear CR4.VMXE (even if VMXOFF faults) ++ * ++ * Note, VMXOFF causes a #UD if the CPU is !post-VMXON, but it's impossible to ++ * atomically track post-VMXON state, e.g. this may be called in NMI context. ++ * Eat all faults as all other faults on VMXOFF faults are mode related, i.e. ++ * faults are guaranteed to be due to the !post-VMXON check unless the CPU is ++ * magically in RM, VM86, compat mode, or at CPL>0. 
+ */ + static inline void cpu_vmxoff(void) + { +- asm volatile ("vmxoff"); ++ asm_volatile_goto("1: vmxoff\n\t" ++ _ASM_EXTABLE(1b, %l[fault]) :::: fault); ++fault: + cr4_clear_bits(X86_CR4_VMXE); + } + +diff --git a/arch/x86/kernel/msr.c b/arch/x86/kernel/msr.c +index c0d4098106589..79f900ffde4c5 100644 +--- a/arch/x86/kernel/msr.c ++++ b/arch/x86/kernel/msr.c +@@ -184,6 +184,13 @@ static long msr_ioctl(struct file *file, unsigned int ioc, unsigned long arg) + err = security_locked_down(LOCKDOWN_MSR); + if (err) + break; ++ ++ err = filter_write(regs[1]); ++ if (err) ++ return err; ++ ++ add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK); ++ + err = wrmsr_safe_regs_on_cpu(cpu, regs); + if (err) + break; +diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c +index db115943e8bdc..efbaef8b4de98 100644 +--- a/arch/x86/kernel/reboot.c ++++ b/arch/x86/kernel/reboot.c +@@ -538,31 +538,21 @@ static void emergency_vmx_disable_all(void) + local_irq_disable(); + + /* +- * We need to disable VMX on all CPUs before rebooting, otherwise +- * we risk hanging up the machine, because the CPU ignores INIT +- * signals when VMX is enabled. ++ * Disable VMX on all CPUs before rebooting, otherwise we risk hanging ++ * the machine, because the CPU blocks INIT when it's in VMX root. + * +- * We can't take any locks and we may be on an inconsistent +- * state, so we use NMIs as IPIs to tell the other CPUs to disable +- * VMX and halt. ++ * We can't take any locks and we may be on an inconsistent state, so ++ * use NMIs as IPIs to tell the other CPUs to exit VMX root and halt. + * +- * For safety, we will avoid running the nmi_shootdown_cpus() +- * stuff unnecessarily, but we don't have a way to check +- * if other CPUs have VMX enabled. So we will call it only if the +- * CPU we are running on has VMX enabled. +- * +- * We will miss cases where VMX is not enabled on all CPUs. This +- * shouldn't do much harm because KVM always enable VMX on all +- * CPUs anyway. 
But we can miss it on the small window where KVM +- * is still enabling VMX. ++ * Do the NMI shootdown even if VMX if off on _this_ CPU, as that ++ * doesn't prevent a different CPU from being in VMX root operation. + */ +- if (cpu_has_vmx() && cpu_vmx_enabled()) { +- /* Disable VMX on this CPU. */ +- cpu_vmxoff(); ++ if (cpu_has_vmx()) { ++ /* Safely force _this_ CPU out of VMX root operation. */ ++ __cpu_emergency_vmxoff(); + +- /* Halt and disable VMX on the other CPUs */ ++ /* Halt and exit VMX root operation on the other CPUs. */ + nmi_shootdown_cpus(vmxoff_nmi); +- + } + } + +diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c +index 66a08322988f2..1453b9b794425 100644 +--- a/arch/x86/kvm/emulate.c ++++ b/arch/x86/kvm/emulate.c +@@ -2564,12 +2564,12 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt, + ctxt->_eip = GET_SMSTATE(u64, smstate, 0x7f78); + ctxt->eflags = GET_SMSTATE(u32, smstate, 0x7f70) | X86_EFLAGS_FIXED; + +- val = GET_SMSTATE(u32, smstate, 0x7f68); ++ val = GET_SMSTATE(u64, smstate, 0x7f68); + + if (ctxt->ops->set_dr(ctxt, 6, (val & DR6_VOLATILE) | DR6_FIXED_1)) + return X86EMUL_UNHANDLEABLE; + +- val = GET_SMSTATE(u32, smstate, 0x7f60); ++ val = GET_SMSTATE(u64, smstate, 0x7f60); + + if (ctxt->ops->set_dr(ctxt, 7, (val & DR7_VOLATILE) | DR7_FIXED_1)) + return X86EMUL_UNHANDLEABLE; +diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c +index c842d17240ccb..ffa0bd0e033fb 100644 +--- a/arch/x86/kvm/mmu/tdp_mmu.c ++++ b/arch/x86/kvm/mmu/tdp_mmu.c +@@ -1055,7 +1055,8 @@ static void zap_collapsible_spte_range(struct kvm *kvm, + + pfn = spte_to_pfn(iter.old_spte); + if (kvm_is_reserved_pfn(pfn) || +- !PageTransCompoundMap(pfn_to_page(pfn))) ++ (!PageCompound(pfn_to_page(pfn)) && ++ !kvm_is_zone_device_pfn(pfn))) + continue; + + tdp_mmu_set_spte(kvm, &iter, 0); +diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c +index 4fbe190c79159..1008cc6cb66c5 100644 +--- a/arch/x86/kvm/svm/nested.c ++++ 
b/arch/x86/kvm/svm/nested.c +@@ -51,6 +51,23 @@ static void nested_svm_inject_npf_exit(struct kvm_vcpu *vcpu, + nested_svm_vmexit(svm); + } + ++static void svm_inject_page_fault_nested(struct kvm_vcpu *vcpu, struct x86_exception *fault) ++{ ++ struct vcpu_svm *svm = to_svm(vcpu); ++ WARN_ON(!is_guest_mode(vcpu)); ++ ++ if (vmcb_is_intercept(&svm->nested.ctl, INTERCEPT_EXCEPTION_OFFSET + PF_VECTOR) && ++ !svm->nested.nested_run_pending) { ++ svm->vmcb->control.exit_code = SVM_EXIT_EXCP_BASE + PF_VECTOR; ++ svm->vmcb->control.exit_code_hi = 0; ++ svm->vmcb->control.exit_info_1 = fault->error_code; ++ svm->vmcb->control.exit_info_2 = fault->address; ++ nested_svm_vmexit(svm); ++ } else { ++ kvm_inject_page_fault(vcpu, fault); ++ } ++} ++ + static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index) + { + struct vcpu_svm *svm = to_svm(vcpu); +@@ -58,7 +75,7 @@ static u64 nested_svm_get_tdp_pdptr(struct kvm_vcpu *vcpu, int index) + u64 pdpte; + int ret; + +- ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(__sme_clr(cr3)), &pdpte, ++ ret = kvm_vcpu_read_guest_page(vcpu, gpa_to_gfn(cr3), &pdpte, + offset_in_page(cr3) + index * 8, 8); + if (ret) + return 0; +@@ -446,6 +463,9 @@ int enter_svm_guest_mode(struct vcpu_svm *svm, u64 vmcb12_gpa, + if (ret) + return ret; + ++ if (!npt_enabled) ++ svm->vcpu.arch.mmu->inject_page_fault = svm_inject_page_fault_nested; ++ + svm_set_gif(svm, true); + + return 0; +diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c +index f4ae3871e412a..76ab1ee0784ae 100644 +--- a/arch/x86/kvm/svm/svm.c ++++ b/arch/x86/kvm/svm/svm.c +@@ -1092,12 +1092,12 @@ static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) + static void svm_check_invpcid(struct vcpu_svm *svm) + { + /* +- * Intercept INVPCID instruction only if shadow page table is +- * enabled. Interception is not required with nested page table +- * enabled. 
++ * Intercept INVPCID if shadow paging is enabled to sync/free shadow ++ * roots, or if INVPCID is disabled in the guest to inject #UD. + */ + if (kvm_cpu_cap_has(X86_FEATURE_INVPCID)) { +- if (!npt_enabled) ++ if (!npt_enabled || ++ !guest_cpuid_has(&svm->vcpu, X86_FEATURE_INVPCID)) + svm_set_intercept(svm, INTERCEPT_INVPCID); + else + svm_clr_intercept(svm, INTERCEPT_INVPCID); +diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c +index 82bf37a5c9ecc..9c1545c376e9b 100644 +--- a/arch/x86/mm/fault.c ++++ b/arch/x86/mm/fault.c +@@ -53,7 +53,7 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr) + * 32-bit mode: + * + * Sometimes AMD Athlon/Opteron CPUs report invalid exceptions on prefetch. +- * Check that here and ignore it. ++ * Check that here and ignore it. This is AMD erratum #91. + * + * 64-bit mode: + * +@@ -82,11 +82,7 @@ check_prefetch_opcode(struct pt_regs *regs, unsigned char *instr, + #ifdef CONFIG_X86_64 + case 0x40: + /* +- * In AMD64 long mode 0x40..0x4F are valid REX prefixes +- * Need to figure out under what instruction mode the +- * instruction was issued. Could check the LDT for lm, +- * but for now it's good enough to assume that long +- * mode only uses well known segments or kernel. ++ * In 64-bit mode 0x40..0x4F are valid REX prefixes + */ + return (!user_mode(regs) || user_64bit_mode(regs)); + #endif +@@ -126,20 +122,31 @@ is_prefetch(struct pt_regs *regs, unsigned long error_code, unsigned long addr) + instr = (void *)convert_ip_to_linear(current, regs); + max_instr = instr + 15; + +- if (user_mode(regs) && instr >= (unsigned char *)TASK_SIZE_MAX) +- return 0; ++ /* ++ * This code has historically always bailed out if IP points to a ++ * not-present page (e.g. due to a race). No one has ever ++ * complained about this. 
++ */ ++ pagefault_disable(); + + while (instr < max_instr) { + unsigned char opcode; + +- if (get_kernel_nofault(opcode, instr)) +- break; ++ if (user_mode(regs)) { ++ if (get_user(opcode, instr)) ++ break; ++ } else { ++ if (get_kernel_nofault(opcode, instr)) ++ break; ++ } + + instr++; + + if (!check_prefetch_opcode(regs, instr, opcode, &prefetch)) + break; + } ++ ++ pagefault_enable(); + return prefetch; + } + +diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c +index 8f665c352bf0d..ca311aaa67b88 100644 +--- a/arch/x86/mm/pat/memtype.c ++++ b/arch/x86/mm/pat/memtype.c +@@ -1164,12 +1164,14 @@ static void *memtype_seq_start(struct seq_file *seq, loff_t *pos) + + static void *memtype_seq_next(struct seq_file *seq, void *v, loff_t *pos) + { ++ kfree(v); + ++*pos; + return memtype_get_idx(*pos); + } + + static void memtype_seq_stop(struct seq_file *seq, void *v) + { ++ kfree(v); + } + + static int memtype_seq_show(struct seq_file *seq, void *v) +@@ -1181,8 +1183,6 @@ static int memtype_seq_show(struct seq_file *seq, void *v) + entry_print->end, + cattr_name(entry_print->type)); + +- kfree(entry_print); +- + return 0; + } + +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index 9e81d1052091f..5720978e4d09b 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -2937,6 +2937,7 @@ static void __bfq_set_in_service_queue(struct bfq_data *bfqd, + } + + bfqd->in_service_queue = bfqq; ++ bfqd->in_serv_last_pos = 0; + } + + /* +diff --git a/block/blk-settings.c b/block/blk-settings.c +index 659cdb8a07fef..c3aa7f8ee3883 100644 +--- a/block/blk-settings.c ++++ b/block/blk-settings.c +@@ -468,6 +468,14 @@ void blk_queue_io_opt(struct request_queue *q, unsigned int opt) + } + EXPORT_SYMBOL(blk_queue_io_opt); + ++static unsigned int blk_round_down_sectors(unsigned int sectors, unsigned int lbs) ++{ ++ sectors = round_down(sectors, lbs >> SECTOR_SHIFT); ++ if (sectors < PAGE_SIZE >> SECTOR_SHIFT) ++ sectors = PAGE_SIZE >> SECTOR_SHIFT; ++ return 
sectors; ++} ++ + /** + * blk_stack_limits - adjust queue_limits for stacked devices + * @t: the stacking driver limits (top device) +@@ -594,6 +602,10 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, + ret = -1; + } + ++ t->max_sectors = blk_round_down_sectors(t->max_sectors, t->logical_block_size); ++ t->max_hw_sectors = blk_round_down_sectors(t->max_hw_sectors, t->logical_block_size); ++ t->max_dev_sectors = blk_round_down_sectors(t->max_dev_sectors, t->logical_block_size); ++ + /* Discard alignment and granularity */ + if (b->discard_granularity) { + alignment = queue_limit_discard_alignment(b, start); +diff --git a/block/bsg.c b/block/bsg.c +index d7bae94b64d95..3d78e843a83f6 100644 +--- a/block/bsg.c ++++ b/block/bsg.c +@@ -157,8 +157,10 @@ static int bsg_sg_io(struct request_queue *q, fmode_t mode, void __user *uarg) + return PTR_ERR(rq); + + ret = q->bsg_dev.ops->fill_hdr(rq, &hdr, mode); +- if (ret) ++ if (ret) { ++ blk_put_request(rq); + return ret; ++ } + + rq->timeout = msecs_to_jiffies(hdr.timeout); + if (!rq->timeout) +diff --git a/block/ioctl.c b/block/ioctl.c +index 3fbc382eb926d..3be4d0e2a96c3 100644 +--- a/block/ioctl.c ++++ b/block/ioctl.c +@@ -90,20 +90,27 @@ static int compat_blkpg_ioctl(struct block_device *bdev, + } + #endif + +-static int blkdev_reread_part(struct block_device *bdev) ++static int blkdev_reread_part(struct block_device *bdev, fmode_t mode) + { +- int ret; ++ struct block_device *tmp; + + if (!disk_part_scan_enabled(bdev->bd_disk) || bdev_is_partition(bdev)) + return -EINVAL; + if (!capable(CAP_SYS_ADMIN)) + return -EACCES; + +- mutex_lock(&bdev->bd_mutex); +- ret = bdev_disk_changed(bdev, false); +- mutex_unlock(&bdev->bd_mutex); ++ /* ++ * Reopen the device to revalidate the driver state and force a ++ * partition rescan. 
++ */ ++ mode &= ~FMODE_EXCL; ++ set_bit(GD_NEED_PART_SCAN, &bdev->bd_disk->state); + +- return ret; ++ tmp = blkdev_get_by_dev(bdev->bd_dev, mode, NULL); ++ if (IS_ERR(tmp)) ++ return PTR_ERR(tmp); ++ blkdev_put(tmp, mode); ++ return 0; + } + + static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode, +@@ -549,7 +556,7 @@ static int blkdev_common_ioctl(struct block_device *bdev, fmode_t mode, + bdev->bd_bdi->ra_pages = (arg * 512) / PAGE_SIZE; + return 0; + case BLKRRPART: +- return blkdev_reread_part(bdev); ++ return blkdev_reread_part(bdev, mode); + case BLKTRACESTART: + case BLKTRACESTOP: + case BLKTRACETEARDOWN: +diff --git a/certs/blacklist.c b/certs/blacklist.c +index 6514f9ebc943f..f1c434b04b5e4 100644 +--- a/certs/blacklist.c ++++ b/certs/blacklist.c +@@ -162,7 +162,7 @@ static int __init blacklist_init(void) + KEY_USR_VIEW | KEY_USR_READ | + KEY_USR_SEARCH, + KEY_ALLOC_NOT_IN_QUOTA | +- KEY_FLAG_KEEP, ++ KEY_ALLOC_SET_KEEP, + NULL, NULL); + if (IS_ERR(blacklist_keyring)) + panic("Can't allocate system blacklist keyring\n"); +diff --git a/crypto/ecdh_helper.c b/crypto/ecdh_helper.c +index 66fcb2ea81544..fca63b559f655 100644 +--- a/crypto/ecdh_helper.c ++++ b/crypto/ecdh_helper.c +@@ -67,6 +67,9 @@ int crypto_ecdh_decode_key(const char *buf, unsigned int len, + if (secret.type != CRYPTO_KPP_SECRET_TYPE_ECDH) + return -EINVAL; + ++ if (unlikely(len < secret.len)) ++ return -EINVAL; ++ + ptr = ecdh_unpack_data(&params->curve_id, ptr, sizeof(params->curve_id)); + ptr = ecdh_unpack_data(&params->key_size, ptr, sizeof(params->key_size)); + if (secret.len != crypto_ecdh_key_len(params)) +diff --git a/crypto/michael_mic.c b/crypto/michael_mic.c +index 63350c4ad4617..f4c31049601c9 100644 +--- a/crypto/michael_mic.c ++++ b/crypto/michael_mic.c +@@ -7,7 +7,7 @@ + * Copyright (c) 2004 Jouni Malinen + */ + #include +-#include ++#include + #include + #include + #include +@@ -19,7 +19,7 @@ struct michael_mic_ctx { + }; + + struct michael_mic_desc_ctx { +- u8 
pending[4]; ++ __le32 pending; + size_t pending_len; + + u32 l, r; +@@ -60,13 +60,12 @@ static int michael_update(struct shash_desc *desc, const u8 *data, + unsigned int len) + { + struct michael_mic_desc_ctx *mctx = shash_desc_ctx(desc); +- const __le32 *src; + + if (mctx->pending_len) { + int flen = 4 - mctx->pending_len; + if (flen > len) + flen = len; +- memcpy(&mctx->pending[mctx->pending_len], data, flen); ++ memcpy((u8 *)&mctx->pending + mctx->pending_len, data, flen); + mctx->pending_len += flen; + data += flen; + len -= flen; +@@ -74,23 +73,21 @@ static int michael_update(struct shash_desc *desc, const u8 *data, + if (mctx->pending_len < 4) + return 0; + +- src = (const __le32 *)mctx->pending; +- mctx->l ^= le32_to_cpup(src); ++ mctx->l ^= le32_to_cpu(mctx->pending); + michael_block(mctx->l, mctx->r); + mctx->pending_len = 0; + } + +- src = (const __le32 *)data; +- + while (len >= 4) { +- mctx->l ^= le32_to_cpup(src++); ++ mctx->l ^= get_unaligned_le32(data); + michael_block(mctx->l, mctx->r); ++ data += 4; + len -= 4; + } + + if (len > 0) { + mctx->pending_len = len; +- memcpy(mctx->pending, src, len); ++ memcpy(&mctx->pending, data, len); + } + + return 0; +@@ -100,8 +97,7 @@ static int michael_update(struct shash_desc *desc, const u8 *data, + static int michael_final(struct shash_desc *desc, u8 *out) + { + struct michael_mic_desc_ctx *mctx = shash_desc_ctx(desc); +- u8 *data = mctx->pending; +- __le32 *dst = (__le32 *)out; ++ u8 *data = (u8 *)&mctx->pending; + + /* Last block and padding (0x5a, 4..7 x 0) */ + switch (mctx->pending_len) { +@@ -123,8 +119,8 @@ static int michael_final(struct shash_desc *desc, u8 *out) + /* l ^= 0; */ + michael_block(mctx->l, mctx->r); + +- dst[0] = cpu_to_le32(mctx->l); +- dst[1] = cpu_to_le32(mctx->r); ++ put_unaligned_le32(mctx->l, out); ++ put_unaligned_le32(mctx->r, out + 4); + + return 0; + } +@@ -135,13 +131,11 @@ static int michael_setkey(struct crypto_shash *tfm, const u8 *key, + { + struct michael_mic_ctx *mctx = 
crypto_shash_ctx(tfm); + +- const __le32 *data = (const __le32 *)key; +- + if (keylen != 8) + return -EINVAL; + +- mctx->l = le32_to_cpu(data[0]); +- mctx->r = le32_to_cpu(data[1]); ++ mctx->l = get_unaligned_le32(key); ++ mctx->r = get_unaligned_le32(key + 4); + return 0; + } + +@@ -156,7 +150,6 @@ static struct shash_alg alg = { + .cra_name = "michael_mic", + .cra_driver_name = "michael_mic-generic", + .cra_blocksize = 8, +- .cra_alignmask = 3, + .cra_ctxsize = sizeof(struct michael_mic_ctx), + .cra_module = THIS_MODULE, + } +diff --git a/drivers/acpi/acpi_configfs.c b/drivers/acpi/acpi_configfs.c +index cf91f49101eac..3a14859dbb757 100644 +--- a/drivers/acpi/acpi_configfs.c ++++ b/drivers/acpi/acpi_configfs.c +@@ -268,7 +268,12 @@ static int __init acpi_configfs_init(void) + + acpi_table_group = configfs_register_default_group(root, "table", + &acpi_tables_type); +- return PTR_ERR_OR_ZERO(acpi_table_group); ++ if (IS_ERR(acpi_table_group)) { ++ configfs_unregister_subsystem(&acpi_configfs); ++ return PTR_ERR(acpi_table_group); ++ } ++ ++ return 0; + } + module_init(acpi_configfs_init); + +diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c +index d04de10a63e4d..e3dd64aa43737 100644 +--- a/drivers/acpi/property.c ++++ b/drivers/acpi/property.c +@@ -787,9 +787,6 @@ static int acpi_data_prop_read_single(const struct acpi_device_data *data, + const union acpi_object *obj; + int ret; + +- if (!val) +- return -EINVAL; +- + if (proptype >= DEV_PROP_U8 && proptype <= DEV_PROP_U64) { + ret = acpi_data_get_property(data, propname, ACPI_TYPE_INTEGER, &obj); + if (ret) +@@ -799,28 +796,43 @@ static int acpi_data_prop_read_single(const struct acpi_device_data *data, + case DEV_PROP_U8: + if (obj->integer.value > U8_MAX) + return -EOVERFLOW; +- *(u8 *)val = obj->integer.value; ++ ++ if (val) ++ *(u8 *)val = obj->integer.value; ++ + break; + case DEV_PROP_U16: + if (obj->integer.value > U16_MAX) + return -EOVERFLOW; +- *(u16 *)val = obj->integer.value; ++ ++ if 
(val) ++ *(u16 *)val = obj->integer.value; ++ + break; + case DEV_PROP_U32: + if (obj->integer.value > U32_MAX) + return -EOVERFLOW; +- *(u32 *)val = obj->integer.value; ++ ++ if (val) ++ *(u32 *)val = obj->integer.value; ++ + break; + default: +- *(u64 *)val = obj->integer.value; ++ if (val) ++ *(u64 *)val = obj->integer.value; ++ + break; + } ++ ++ if (!val) ++ return 1; + } else if (proptype == DEV_PROP_STRING) { + ret = acpi_data_get_property(data, propname, ACPI_TYPE_STRING, &obj); + if (ret) + return ret; + +- *(char **)val = obj->string.pointer; ++ if (val) ++ *(char **)val = obj->string.pointer; + + return 1; + } else { +@@ -834,7 +846,7 @@ int acpi_dev_prop_read_single(struct acpi_device *adev, const char *propname, + { + int ret; + +- if (!adev) ++ if (!adev || !val) + return -EINVAL; + + ret = acpi_data_prop_read_single(&adev->data, propname, proptype, val); +@@ -928,10 +940,20 @@ static int acpi_data_prop_read(const struct acpi_device_data *data, + const union acpi_object *items; + int ret; + +- if (val && nval == 1) { ++ if (nval == 1 || !val) { + ret = acpi_data_prop_read_single(data, propname, proptype, val); +- if (ret >= 0) ++ /* ++ * The overflow error means that the property is there and it is ++ * single-value, but its type does not match, so return. ++ */ ++ if (ret >= 0 || ret == -EOVERFLOW) + return ret; ++ ++ /* ++ * Reading this property as a single-value one failed, but its ++ * value may still be represented as one-element array, so ++ * continue. 
++ */ + } + + ret = acpi_data_get_property_array(data, propname, ACPI_TYPE_ANY, &obj); +diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c +index ecc304149067c..b5f5ca4e3f343 100644 +--- a/drivers/amba/bus.c ++++ b/drivers/amba/bus.c +@@ -299,10 +299,11 @@ static int amba_remove(struct device *dev) + { + struct amba_device *pcdev = to_amba_device(dev); + struct amba_driver *drv = to_amba_driver(dev->driver); +- int ret; ++ int ret = 0; + + pm_runtime_get_sync(dev); +- ret = drv->remove(pcdev); ++ if (drv->remove) ++ ret = drv->remove(pcdev); + pm_runtime_put_noidle(dev); + + /* Undo the runtime PM settings in amba_probe() */ +@@ -319,7 +320,9 @@ static int amba_remove(struct device *dev) + static void amba_shutdown(struct device *dev) + { + struct amba_driver *drv = to_amba_driver(dev->driver); +- drv->shutdown(to_amba_device(dev)); ++ ++ if (drv->shutdown) ++ drv->shutdown(to_amba_device(dev)); + } + + /** +@@ -332,12 +335,13 @@ static void amba_shutdown(struct device *dev) + */ + int amba_driver_register(struct amba_driver *drv) + { +- drv->drv.bus = &amba_bustype; ++ if (!drv->probe) ++ return -EINVAL; + +-#define SETFN(fn) if (drv->fn) drv->drv.fn = amba_##fn +- SETFN(probe); +- SETFN(remove); +- SETFN(shutdown); ++ drv->drv.bus = &amba_bustype; ++ drv->drv.probe = amba_probe; ++ drv->drv.remove = amba_remove; ++ drv->drv.shutdown = amba_shutdown; + + return driver_register(&drv->drv); + } +diff --git a/drivers/ata/ahci_brcm.c b/drivers/ata/ahci_brcm.c +index 49f7acbfcf01e..5b32df5d33adc 100644 +--- a/drivers/ata/ahci_brcm.c ++++ b/drivers/ata/ahci_brcm.c +@@ -377,6 +377,10 @@ static int __maybe_unused brcm_ahci_resume(struct device *dev) + if (ret) + return ret; + ++ ret = ahci_platform_enable_regulators(hpriv); ++ if (ret) ++ goto out_disable_clks; ++ + brcm_sata_init(priv); + brcm_sata_phys_enable(priv); + brcm_sata_alpm_init(hpriv); +@@ -406,6 +410,8 @@ out_disable_platform_phys: + ahci_platform_disable_phys(hpriv); + out_disable_phys: + 
brcm_sata_phys_disable(priv); ++ ahci_platform_disable_regulators(hpriv); ++out_disable_clks: + ahci_platform_disable_clks(hpriv); + return ret; + } +@@ -490,6 +496,10 @@ static int brcm_ahci_probe(struct platform_device *pdev) + if (ret) + goto out_reset; + ++ ret = ahci_platform_enable_regulators(hpriv); ++ if (ret) ++ goto out_disable_clks; ++ + /* Must be first so as to configure endianness including that + * of the standard AHCI register space. + */ +@@ -499,7 +509,7 @@ static int brcm_ahci_probe(struct platform_device *pdev) + priv->port_mask = brcm_ahci_get_portmask(hpriv, priv); + if (!priv->port_mask) { + ret = -ENODEV; +- goto out_disable_clks; ++ goto out_disable_regulators; + } + + /* Must be done before ahci_platform_enable_phys() */ +@@ -524,6 +534,8 @@ out_disable_platform_phys: + ahci_platform_disable_phys(hpriv); + out_disable_phys: + brcm_sata_phys_disable(priv); ++out_disable_regulators: ++ ahci_platform_disable_regulators(hpriv); + out_disable_clks: + ahci_platform_disable_clks(hpriv); + out_reset: +diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c +index d951d54b26f52..d8602843e8a53 100644 +--- a/drivers/auxdisplay/ht16k33.c ++++ b/drivers/auxdisplay/ht16k33.c +@@ -117,8 +117,7 @@ static void ht16k33_fb_queue(struct ht16k33_priv *priv) + { + struct ht16k33_fbdev *fbdev = &priv->fbdev; + +- schedule_delayed_work(&fbdev->work, +- msecs_to_jiffies(HZ / fbdev->refresh_rate)); ++ schedule_delayed_work(&fbdev->work, HZ / fbdev->refresh_rate); + } + + /* +diff --git a/drivers/base/regmap/regmap-sdw.c b/drivers/base/regmap/regmap-sdw.c +index c92d614b49432..4b8d2d010cab9 100644 +--- a/drivers/base/regmap/regmap-sdw.c ++++ b/drivers/base/regmap/regmap-sdw.c +@@ -11,7 +11,7 @@ static int regmap_sdw_write(void *context, unsigned int reg, unsigned int val) + struct device *dev = context; + struct sdw_slave *slave = dev_to_sdw_dev(dev); + +- return sdw_write(slave, reg, val); ++ return sdw_write_no_pm(slave, reg, val); + } + + static 
int regmap_sdw_read(void *context, unsigned int reg, unsigned int *val) +@@ -20,7 +20,7 @@ static int regmap_sdw_read(void *context, unsigned int reg, unsigned int *val) + struct sdw_slave *slave = dev_to_sdw_dev(dev); + int read; + +- read = sdw_read(slave, reg); ++ read = sdw_read_no_pm(slave, reg); + if (read < 0) + return read; + +diff --git a/drivers/base/swnode.c b/drivers/base/swnode.c +index 010828fc785bc..615a0c93e1166 100644 +--- a/drivers/base/swnode.c ++++ b/drivers/base/swnode.c +@@ -443,14 +443,18 @@ software_node_get_next_child(const struct fwnode_handle *fwnode, + struct swnode *c = to_swnode(child); + + if (!p || list_empty(&p->children) || +- (c && list_is_last(&c->entry, &p->children))) ++ (c && list_is_last(&c->entry, &p->children))) { ++ fwnode_handle_put(child); + return NULL; ++ } + + if (c) + c = list_next_entry(c, entry); + else + c = list_first_entry(&p->children, struct swnode, entry); +- return &c->fwnode; ++ ++ fwnode_handle_put(child); ++ return fwnode_handle_get(&c->fwnode); + } + + static struct fwnode_handle * +diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c +index 7df79ae6b0a1e..295da442329f3 100644 +--- a/drivers/block/floppy.c ++++ b/drivers/block/floppy.c +@@ -4120,23 +4120,23 @@ static int floppy_open(struct block_device *bdev, fmode_t mode) + if (fdc_state[FDC(drive)].rawcmd == 1) + fdc_state[FDC(drive)].rawcmd = 2; + +- if (!(mode & FMODE_NDELAY)) { +- if (mode & (FMODE_READ|FMODE_WRITE)) { +- drive_state[drive].last_checked = 0; +- clear_bit(FD_OPEN_SHOULD_FAIL_BIT, +- &drive_state[drive].flags); +- if (bdev_check_media_change(bdev)) +- floppy_revalidate(bdev->bd_disk); +- if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags)) +- goto out; +- if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags)) +- goto out; +- } +- res = -EROFS; +- if ((mode & FMODE_WRITE) && +- !test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags)) ++ if (mode & (FMODE_READ|FMODE_WRITE)) { ++ 
drive_state[drive].last_checked = 0; ++ clear_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags); ++ if (bdev_check_media_change(bdev)) ++ floppy_revalidate(bdev->bd_disk); ++ if (test_bit(FD_DISK_CHANGED_BIT, &drive_state[drive].flags)) ++ goto out; ++ if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_state[drive].flags)) + goto out; + } ++ ++ res = -EROFS; ++ ++ if ((mode & FMODE_WRITE) && ++ !test_bit(FD_DISK_WRITABLE_BIT, &drive_state[drive].flags)) ++ goto out; ++ + mutex_unlock(&open_lock); + mutex_unlock(&floppy_mutex); + return 0; +diff --git a/drivers/bluetooth/btqcomsmd.c b/drivers/bluetooth/btqcomsmd.c +index 98d53764871f5..2acb719e596f5 100644 +--- a/drivers/bluetooth/btqcomsmd.c ++++ b/drivers/bluetooth/btqcomsmd.c +@@ -142,12 +142,16 @@ static int btqcomsmd_probe(struct platform_device *pdev) + + btq->cmd_channel = qcom_wcnss_open_channel(wcnss, "APPS_RIVA_BT_CMD", + btqcomsmd_cmd_callback, btq); +- if (IS_ERR(btq->cmd_channel)) +- return PTR_ERR(btq->cmd_channel); ++ if (IS_ERR(btq->cmd_channel)) { ++ ret = PTR_ERR(btq->cmd_channel); ++ goto destroy_acl_channel; ++ } + + hdev = hci_alloc_dev(); +- if (!hdev) +- return -ENOMEM; ++ if (!hdev) { ++ ret = -ENOMEM; ++ goto destroy_cmd_channel; ++ } + + hci_set_drvdata(hdev, btq); + btq->hdev = hdev; +@@ -161,14 +165,21 @@ static int btqcomsmd_probe(struct platform_device *pdev) + hdev->set_bdaddr = qca_set_bdaddr_rome; + + ret = hci_register_dev(hdev); +- if (ret < 0) { +- hci_free_dev(hdev); +- return ret; +- } ++ if (ret < 0) ++ goto hci_free_dev; + + platform_set_drvdata(pdev, btq); + + return 0; ++ ++hci_free_dev: ++ hci_free_dev(hdev); ++destroy_cmd_channel: ++ rpmsg_destroy_ept(btq->cmd_channel); ++destroy_acl_channel: ++ rpmsg_destroy_ept(btq->acl_channel); ++ ++ return ret; + } + + static int btqcomsmd_remove(struct platform_device *pdev) +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c +index 1c942869baacc..2953b96b3ceda 100644 +--- a/drivers/bluetooth/btusb.c ++++ 
b/drivers/bluetooth/btusb.c +@@ -2827,7 +2827,7 @@ static void btusb_mtk_wmt_recv(struct urb *urb) + skb = bt_skb_alloc(HCI_WMT_MAX_EVENT_SIZE, GFP_ATOMIC); + if (!skb) { + hdev->stat.err_rx++; +- goto err_out; ++ return; + } + + hci_skb_pkt_type(skb) = HCI_EVENT_PKT; +@@ -2845,13 +2845,18 @@ static void btusb_mtk_wmt_recv(struct urb *urb) + */ + if (test_bit(BTUSB_TX_WAIT_VND_EVT, &data->flags)) { + data->evt_skb = skb_clone(skb, GFP_ATOMIC); +- if (!data->evt_skb) +- goto err_out; ++ if (!data->evt_skb) { ++ kfree_skb(skb); ++ return; ++ } + } + + err = hci_recv_frame(hdev, skb); +- if (err < 0) +- goto err_free_skb; ++ if (err < 0) { ++ kfree_skb(data->evt_skb); ++ data->evt_skb = NULL; ++ return; ++ } + + if (test_and_clear_bit(BTUSB_TX_WAIT_VND_EVT, + &data->flags)) { +@@ -2860,11 +2865,6 @@ static void btusb_mtk_wmt_recv(struct urb *urb) + wake_up_bit(&data->flags, + BTUSB_TX_WAIT_VND_EVT); + } +-err_out: +- return; +-err_free_skb: +- kfree_skb(data->evt_skb); +- data->evt_skb = NULL; + return; + } else if (urb->status == -ENOENT) { + /* Avoid suspend failed when usb_kill_urb */ +diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c +index f83d67eafc9f0..637c5b8c2aa1a 100644 +--- a/drivers/bluetooth/hci_ldisc.c ++++ b/drivers/bluetooth/hci_ldisc.c +@@ -127,10 +127,9 @@ int hci_uart_tx_wakeup(struct hci_uart *hu) + if (!test_bit(HCI_UART_PROTO_READY, &hu->flags)) + goto no_schedule; + +- if (test_and_set_bit(HCI_UART_SENDING, &hu->tx_state)) { +- set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state); ++ set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state); ++ if (test_and_set_bit(HCI_UART_SENDING, &hu->tx_state)) + goto no_schedule; +- } + + BT_DBG(""); + +@@ -174,10 +173,10 @@ restart: + kfree_skb(skb); + } + ++ clear_bit(HCI_UART_SENDING, &hu->tx_state); + if (test_bit(HCI_UART_TX_WAKEUP, &hu->tx_state)) + goto restart; + +- clear_bit(HCI_UART_SENDING, &hu->tx_state); + wake_up_bit(&hu->tx_state, HCI_UART_SENDING); + } + +@@ -802,7 +801,8 @@ static int 
hci_uart_tty_ioctl(struct tty_struct *tty, struct file *file, + * We don't provide read/write/poll interface for user space. + */ + static ssize_t hci_uart_tty_read(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t nr) ++ unsigned char *buf, size_t nr, ++ void **cookie, unsigned long offset) + { + return 0; + } +@@ -819,29 +819,28 @@ static __poll_t hci_uart_tty_poll(struct tty_struct *tty, + return 0; + } + ++static struct tty_ldisc_ops hci_uart_ldisc = { ++ .owner = THIS_MODULE, ++ .magic = TTY_LDISC_MAGIC, ++ .name = "n_hci", ++ .open = hci_uart_tty_open, ++ .close = hci_uart_tty_close, ++ .read = hci_uart_tty_read, ++ .write = hci_uart_tty_write, ++ .ioctl = hci_uart_tty_ioctl, ++ .compat_ioctl = hci_uart_tty_ioctl, ++ .poll = hci_uart_tty_poll, ++ .receive_buf = hci_uart_tty_receive, ++ .write_wakeup = hci_uart_tty_wakeup, ++}; ++ + static int __init hci_uart_init(void) + { +- static struct tty_ldisc_ops hci_uart_ldisc; + int err; + + BT_INFO("HCI UART driver ver %s", VERSION); + + /* Register the tty discipline */ +- +- memset(&hci_uart_ldisc, 0, sizeof(hci_uart_ldisc)); +- hci_uart_ldisc.magic = TTY_LDISC_MAGIC; +- hci_uart_ldisc.name = "n_hci"; +- hci_uart_ldisc.open = hci_uart_tty_open; +- hci_uart_ldisc.close = hci_uart_tty_close; +- hci_uart_ldisc.read = hci_uart_tty_read; +- hci_uart_ldisc.write = hci_uart_tty_write; +- hci_uart_ldisc.ioctl = hci_uart_tty_ioctl; +- hci_uart_ldisc.compat_ioctl = hci_uart_tty_ioctl; +- hci_uart_ldisc.poll = hci_uart_tty_poll; +- hci_uart_ldisc.receive_buf = hci_uart_tty_receive; +- hci_uart_ldisc.write_wakeup = hci_uart_tty_wakeup; +- hci_uart_ldisc.owner = THIS_MODULE; +- + err = tty_register_ldisc(N_HCI, &hci_uart_ldisc); + if (err) { + BT_ERR("HCI line discipline registration failed. 
(%d)", err); +diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c +index 244b8feba5232..5c26c7d941731 100644 +--- a/drivers/bluetooth/hci_qca.c ++++ b/drivers/bluetooth/hci_qca.c +@@ -1020,7 +1020,9 @@ static void qca_controller_memdump(struct work_struct *work) + dump_size = __le32_to_cpu(dump->dump_size); + if (!(dump_size)) { + bt_dev_err(hu->hdev, "Rx invalid memdump size"); ++ kfree(qca_memdump); + kfree_skb(skb); ++ qca->qca_memdump = NULL; + mutex_unlock(&qca->hci_memdump_lock); + return; + } +diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c +index ef96ad06fa54e..9e03402ef1b37 100644 +--- a/drivers/bluetooth/hci_serdev.c ++++ b/drivers/bluetooth/hci_serdev.c +@@ -83,9 +83,9 @@ static void hci_uart_write_work(struct work_struct *work) + hci_uart_tx_complete(hu, hci_skb_pkt_type(skb)); + kfree_skb(skb); + } +- } while (test_bit(HCI_UART_TX_WAKEUP, &hu->tx_state)); + +- clear_bit(HCI_UART_SENDING, &hu->tx_state); ++ clear_bit(HCI_UART_SENDING, &hu->tx_state); ++ } while (test_bit(HCI_UART_TX_WAKEUP, &hu->tx_state)); + } + + /* ------- Interface to HCI layer ------ */ +diff --git a/drivers/char/hw_random/ingenic-trng.c b/drivers/char/hw_random/ingenic-trng.c +index 954a8411d67d2..0eb80f786f4dd 100644 +--- a/drivers/char/hw_random/ingenic-trng.c ++++ b/drivers/char/hw_random/ingenic-trng.c +@@ -113,13 +113,17 @@ static int ingenic_trng_probe(struct platform_device *pdev) + ret = hwrng_register(&trng->rng); + if (ret) { + dev_err(&pdev->dev, "Failed to register hwrng\n"); +- return ret; ++ goto err_unprepare_clk; + } + + platform_set_drvdata(pdev, trng); + + dev_info(&pdev->dev, "Ingenic DTRNG driver registered\n"); + return 0; ++ ++err_unprepare_clk: ++ clk_disable_unprepare(trng->clk); ++ return ret; + } + + static int ingenic_trng_remove(struct platform_device *pdev) +diff --git a/drivers/char/hw_random/timeriomem-rng.c b/drivers/char/hw_random/timeriomem-rng.c +index e262445fed5f5..f35f0f31f52ad 100644 +--- 
a/drivers/char/hw_random/timeriomem-rng.c ++++ b/drivers/char/hw_random/timeriomem-rng.c +@@ -69,7 +69,7 @@ static int timeriomem_rng_read(struct hwrng *hwrng, void *data, + */ + if (retval > 0) + usleep_range(period_us, +- period_us + min(1, period_us / 100)); ++ period_us + max(1, period_us / 100)); + + *(u32 *)data = readl(priv->io_base); + retval += sizeof(u32); +diff --git a/drivers/char/random.c b/drivers/char/random.c +index 2a41b21623ae4..f462b9d2f5a52 100644 +--- a/drivers/char/random.c ++++ b/drivers/char/random.c +@@ -1972,7 +1972,7 @@ static long random_ioctl(struct file *f, unsigned int cmd, unsigned long arg) + return -EPERM; + if (crng_init < 2) + return -ENODATA; +- crng_reseed(&primary_crng, NULL); ++ crng_reseed(&primary_crng, &input_pool); + crng_global_init_time = jiffies - 1; + return 0; + default: +diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h +index 947d1db0a5ccf..283f78211c3a7 100644 +--- a/drivers/char/tpm/tpm.h ++++ b/drivers/char/tpm/tpm.h +@@ -164,8 +164,6 @@ extern const struct file_operations tpmrm_fops; + extern struct idr dev_nums_idr; + + ssize_t tpm_transmit(struct tpm_chip *chip, u8 *buf, size_t bufsiz); +-ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf, +- size_t min_rsp_body_length, const char *desc); + int tpm_get_timeouts(struct tpm_chip *); + int tpm_auto_startup(struct tpm_chip *chip); + +@@ -194,8 +192,6 @@ static inline void tpm_msleep(unsigned int delay_msec) + int tpm_chip_start(struct tpm_chip *chip); + void tpm_chip_stop(struct tpm_chip *chip); + struct tpm_chip *tpm_find_get_ops(struct tpm_chip *chip); +-__must_check int tpm_try_get_ops(struct tpm_chip *chip); +-void tpm_put_ops(struct tpm_chip *chip); + + struct tpm_chip *tpm_chip_alloc(struct device *dev, + const struct tpm_class_ops *ops); +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c +index 92c51c6cfd1b7..431919d5f48af 100644 +--- a/drivers/char/tpm/tpm_tis_core.c ++++ 
b/drivers/char/tpm/tpm_tis_core.c +@@ -125,7 +125,8 @@ static bool check_locality(struct tpm_chip *chip, int l) + if (rc < 0) + return false; + +- if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) == ++ if ((access & (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID ++ | TPM_ACCESS_REQUEST_USE)) == + (TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID)) { + priv->locality = l; + return true; +@@ -134,58 +135,13 @@ static bool check_locality(struct tpm_chip *chip, int l) + return false; + } + +-static bool locality_inactive(struct tpm_chip *chip, int l) +-{ +- struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); +- int rc; +- u8 access; +- +- rc = tpm_tis_read8(priv, TPM_ACCESS(l), &access); +- if (rc < 0) +- return false; +- +- if ((access & (TPM_ACCESS_VALID | TPM_ACCESS_ACTIVE_LOCALITY)) +- == TPM_ACCESS_VALID) +- return true; +- +- return false; +-} +- + static int release_locality(struct tpm_chip *chip, int l) + { + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); +- unsigned long stop, timeout; +- long rc; + + tpm_tis_write8(priv, TPM_ACCESS(l), TPM_ACCESS_ACTIVE_LOCALITY); + +- stop = jiffies + chip->timeout_a; +- +- if (chip->flags & TPM_CHIP_FLAG_IRQ) { +-again: +- timeout = stop - jiffies; +- if ((long)timeout <= 0) +- return -1; +- +- rc = wait_event_interruptible_timeout(priv->int_queue, +- (locality_inactive(chip, l)), +- timeout); +- +- if (rc > 0) +- return 0; +- +- if (rc == -ERESTARTSYS && freezing(current)) { +- clear_thread_flag(TIF_SIGPENDING); +- goto again; +- } +- } else { +- do { +- if (locality_inactive(chip, l)) +- return 0; +- tpm_msleep(TPM_TIMEOUT); +- } while (time_before(jiffies, stop)); +- } +- return -1; ++ return 0; + } + + static int request_locality(struct tpm_chip *chip, int l) +diff --git a/drivers/clk/clk-ast2600.c b/drivers/clk/clk-ast2600.c +index 177368cac6dd6..a55b37fc2c8bd 100644 +--- a/drivers/clk/clk-ast2600.c ++++ b/drivers/clk/clk-ast2600.c +@@ -17,7 +17,8 @@ + + #define ASPEED_G6_NUM_CLKS 71 + 
+-#define ASPEED_G6_SILICON_REV 0x004 ++#define ASPEED_G6_SILICON_REV 0x014 ++#define CHIP_REVISION_ID GENMASK(23, 16) + + #define ASPEED_G6_RESET_CTRL 0x040 + #define ASPEED_G6_RESET_CTRL2 0x050 +@@ -190,18 +191,34 @@ static struct clk_hw *ast2600_calc_pll(const char *name, u32 val) + static struct clk_hw *ast2600_calc_apll(const char *name, u32 val) + { + unsigned int mult, div; ++ u32 chip_id = readl(scu_g6_base + ASPEED_G6_SILICON_REV); + +- if (val & BIT(20)) { +- /* Pass through mode */ +- mult = div = 1; ++ if (((chip_id & CHIP_REVISION_ID) >> 16) >= 2) { ++ if (val & BIT(24)) { ++ /* Pass through mode */ ++ mult = div = 1; ++ } else { ++ /* F = 25Mhz * [(m + 1) / (n + 1)] / (p + 1) */ ++ u32 m = val & 0x1fff; ++ u32 n = (val >> 13) & 0x3f; ++ u32 p = (val >> 19) & 0xf; ++ ++ mult = (m + 1); ++ div = (n + 1) * (p + 1); ++ } + } else { +- /* F = 25Mhz * (2-od) * [(m + 2) / (n + 1)] */ +- u32 m = (val >> 5) & 0x3f; +- u32 od = (val >> 4) & 0x1; +- u32 n = val & 0xf; ++ if (val & BIT(20)) { ++ /* Pass through mode */ ++ mult = div = 1; ++ } else { ++ /* F = 25Mhz * (2-od) * [(m + 2) / (n + 1)] */ ++ u32 m = (val >> 5) & 0x3f; ++ u32 od = (val >> 4) & 0x1; ++ u32 n = val & 0xf; + +- mult = (2 - od) * (m + 2); +- div = n + 1; ++ mult = (2 - od) * (m + 2); ++ div = n + 1; ++ } + } + return clk_hw_register_fixed_factor(NULL, name, "clkin", 0, + mult, div); +diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c +index 8de12cb0c43d8..f32157cb40138 100644 +--- a/drivers/clk/clk-divider.c ++++ b/drivers/clk/clk-divider.c +@@ -493,8 +493,13 @@ struct clk_hw *__clk_hw_register_divider(struct device *dev, + else + init.ops = &clk_divider_ops; + init.flags = flags; +- init.parent_names = (parent_name ? &parent_name: NULL); +- init.num_parents = (parent_name ? 1 : 0); ++ init.parent_names = parent_name ? &parent_name : NULL; ++ init.parent_hws = parent_hw ? 
&parent_hw : NULL; ++ init.parent_data = parent_data; ++ if (parent_name || parent_hw || parent_data) ++ init.num_parents = 1; ++ else ++ init.num_parents = 0; + + /* struct clk_divider assignments */ + div->reg = reg; +diff --git a/drivers/clk/meson/clk-pll.c b/drivers/clk/meson/clk-pll.c +index b17a13e9337c4..49f27fe532139 100644 +--- a/drivers/clk/meson/clk-pll.c ++++ b/drivers/clk/meson/clk-pll.c +@@ -365,13 +365,14 @@ static int meson_clk_pll_set_rate(struct clk_hw *hw, unsigned long rate, + { + struct clk_regmap *clk = to_clk_regmap(hw); + struct meson_clk_pll_data *pll = meson_clk_pll_data(clk); +- unsigned int enabled, m, n, frac = 0, ret; ++ unsigned int enabled, m, n, frac = 0; + unsigned long old_rate; ++ int ret; + + if (parent_rate == 0 || rate == 0) + return -EINVAL; + +- old_rate = rate; ++ old_rate = clk_hw_get_rate(hw); + + ret = meson_clk_get_pll_settings(rate, parent_rate, &m, &n, pll); + if (ret) +@@ -393,7 +394,8 @@ static int meson_clk_pll_set_rate(struct clk_hw *hw, unsigned long rate, + if (!enabled) + return 0; + +- if (meson_clk_pll_enable(hw)) { ++ ret = meson_clk_pll_enable(hw); ++ if (ret) { + pr_warn("%s: pll did not lock, trying to restore old rate %lu\n", + __func__, old_rate); + /* +@@ -405,7 +407,7 @@ static int meson_clk_pll_set_rate(struct clk_hw *hw, unsigned long rate, + meson_clk_pll_set_rate(hw, old_rate, parent_rate); + } + +- return 0; ++ return ret; + } + + /* +diff --git a/drivers/clk/qcom/gcc-msm8998.c b/drivers/clk/qcom/gcc-msm8998.c +index 9d7016bcd6800..b8dcfe62312bb 100644 +--- a/drivers/clk/qcom/gcc-msm8998.c ++++ b/drivers/clk/qcom/gcc-msm8998.c +@@ -135,7 +135,7 @@ static struct pll_vco fabia_vco[] = { + + static struct clk_alpha_pll gpll0 = { + .offset = 0x0, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .vco_table = fabia_vco, + .num_vco = ARRAY_SIZE(fabia_vco), + .clkr = { +@@ -145,58 +145,58 @@ static struct clk_alpha_pll gpll0 = { + .name 
= "gpll0", + .parent_names = (const char *[]){ "xo" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_ops, ++ .ops = &clk_alpha_pll_fixed_fabia_ops, + } + }, + }; + + static struct clk_alpha_pll_postdiv gpll0_out_even = { + .offset = 0x0, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll0_out_even", + .parent_names = (const char *[]){ "gpll0" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll0_out_main = { + .offset = 0x0, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll0_out_main", + .parent_names = (const char *[]){ "gpll0" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll0_out_odd = { + .offset = 0x0, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll0_out_odd", + .parent_names = (const char *[]){ "gpll0" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll0_out_test = { + .offset = 0x0, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll0_out_test", + .parent_names = (const char *[]){ "gpll0" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll gpll1 = { + .offset = 0x1000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = 
clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .vco_table = fabia_vco, + .num_vco = ARRAY_SIZE(fabia_vco), + .clkr = { +@@ -206,58 +206,58 @@ static struct clk_alpha_pll gpll1 = { + .name = "gpll1", + .parent_names = (const char *[]){ "xo" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_ops, ++ .ops = &clk_alpha_pll_fixed_fabia_ops, + } + }, + }; + + static struct clk_alpha_pll_postdiv gpll1_out_even = { + .offset = 0x1000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll1_out_even", + .parent_names = (const char *[]){ "gpll1" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll1_out_main = { + .offset = 0x1000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll1_out_main", + .parent_names = (const char *[]){ "gpll1" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll1_out_odd = { + .offset = 0x1000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll1_out_odd", + .parent_names = (const char *[]){ "gpll1" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll1_out_test = { + .offset = 0x1000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll1_out_test", + .parent_names = (const char *[]){ "gpll1" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ 
.ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll gpll2 = { + .offset = 0x2000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .vco_table = fabia_vco, + .num_vco = ARRAY_SIZE(fabia_vco), + .clkr = { +@@ -267,58 +267,58 @@ static struct clk_alpha_pll gpll2 = { + .name = "gpll2", + .parent_names = (const char *[]){ "xo" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_ops, ++ .ops = &clk_alpha_pll_fixed_fabia_ops, + } + }, + }; + + static struct clk_alpha_pll_postdiv gpll2_out_even = { + .offset = 0x2000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll2_out_even", + .parent_names = (const char *[]){ "gpll2" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll2_out_main = { + .offset = 0x2000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll2_out_main", + .parent_names = (const char *[]){ "gpll2" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll2_out_odd = { + .offset = 0x2000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll2_out_odd", + .parent_names = (const char *[]){ "gpll2" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll2_out_test = { + .offset = 0x2000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = 
clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll2_out_test", + .parent_names = (const char *[]){ "gpll2" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll gpll3 = { + .offset = 0x3000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .vco_table = fabia_vco, + .num_vco = ARRAY_SIZE(fabia_vco), + .clkr = { +@@ -328,58 +328,58 @@ static struct clk_alpha_pll gpll3 = { + .name = "gpll3", + .parent_names = (const char *[]){ "xo" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_ops, ++ .ops = &clk_alpha_pll_fixed_fabia_ops, + } + }, + }; + + static struct clk_alpha_pll_postdiv gpll3_out_even = { + .offset = 0x3000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll3_out_even", + .parent_names = (const char *[]){ "gpll3" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll3_out_main = { + .offset = 0x3000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll3_out_main", + .parent_names = (const char *[]){ "gpll3" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll3_out_odd = { + .offset = 0x3000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll3_out_odd", + .parent_names = (const char *[]){ "gpll3" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = 
&clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll3_out_test = { + .offset = 0x3000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll3_out_test", + .parent_names = (const char *[]){ "gpll3" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll gpll4 = { + .offset = 0x77000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .vco_table = fabia_vco, + .num_vco = ARRAY_SIZE(fabia_vco), + .clkr = { +@@ -389,52 +389,52 @@ static struct clk_alpha_pll gpll4 = { + .name = "gpll4", + .parent_names = (const char *[]){ "xo" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_ops, ++ .ops = &clk_alpha_pll_fixed_fabia_ops, + } + }, + }; + + static struct clk_alpha_pll_postdiv gpll4_out_even = { + .offset = 0x77000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll4_out_even", + .parent_names = (const char *[]){ "gpll4" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll4_out_main = { + .offset = 0x77000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll4_out_main", + .parent_names = (const char *[]){ "gpll4" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll4_out_odd = { + .offset = 0x77000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], 
+ .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll4_out_odd", + .parent_names = (const char *[]){ "gpll4" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + + static struct clk_alpha_pll_postdiv gpll4_out_test = { + .offset = 0x77000, +- .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], ++ .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_FABIA], + .clkr.hw.init = &(struct clk_init_data){ + .name = "gpll4_out_test", + .parent_names = (const char *[]){ "gpll4" }, + .num_parents = 1, +- .ops = &clk_alpha_pll_postdiv_ops, ++ .ops = &clk_alpha_pll_postdiv_fabia_ops, + }, + }; + +diff --git a/drivers/clk/renesas/r8a779a0-cpg-mssr.c b/drivers/clk/renesas/r8a779a0-cpg-mssr.c +index 046d79416b7d0..4ee2706c9c6a0 100644 +--- a/drivers/clk/renesas/r8a779a0-cpg-mssr.c ++++ b/drivers/clk/renesas/r8a779a0-cpg-mssr.c +@@ -69,7 +69,6 @@ enum clk_ids { + CLK_PLL5_DIV2, + CLK_PLL5_DIV4, + CLK_S1, +- CLK_S2, + CLK_S3, + CLK_SDSRC, + CLK_RPCSRC, +@@ -137,7 +136,7 @@ static const struct cpg_core_clk r8a779a0_core_clks[] __initconst = { + DEF_FIXED("icu", R8A779A0_CLK_ICU, CLK_PLL5_DIV4, 2, 1), + DEF_FIXED("icud2", R8A779A0_CLK_ICUD2, CLK_PLL5_DIV4, 4, 1), + DEF_FIXED("vcbus", R8A779A0_CLK_VCBUS, CLK_PLL5_DIV4, 1, 1), +- DEF_FIXED("cbfusa", R8A779A0_CLK_CBFUSA, CLK_MAIN, 2, 1), ++ DEF_FIXED("cbfusa", R8A779A0_CLK_CBFUSA, CLK_EXTAL, 2, 1), + + DEF_DIV6P1("mso", R8A779A0_CLK_MSO, CLK_PLL5_DIV4, 0x87c), + DEF_DIV6P1("canfd", R8A779A0_CLK_CANFD, CLK_PLL5_DIV4, 0x878), +diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c +index f2497d0a4683a..bff446b782907 100644 +--- a/drivers/clk/sunxi-ng/ccu-sun50i-h6.c ++++ b/drivers/clk/sunxi-ng/ccu-sun50i-h6.c +@@ -237,7 +237,7 @@ static const char * const psi_ahb1_ahb2_parents[] = { "osc24M", "osc32k", + static SUNXI_CCU_MP_WITH_MUX(psi_ahb1_ahb2_clk, "psi-ahb1-ahb2", + psi_ahb1_ahb2_parents, + 0x510, +- 0, 5, /* M */ ++ 0, 2, /* M */ + 8, 
2, /* P */ + 24, 2, /* mux */ + 0); +@@ -246,19 +246,19 @@ static const char * const ahb3_apb1_apb2_parents[] = { "osc24M", "osc32k", + "psi-ahb1-ahb2", + "pll-periph0" }; + static SUNXI_CCU_MP_WITH_MUX(ahb3_clk, "ahb3", ahb3_apb1_apb2_parents, 0x51c, +- 0, 5, /* M */ ++ 0, 2, /* M */ + 8, 2, /* P */ + 24, 2, /* mux */ + 0); + + static SUNXI_CCU_MP_WITH_MUX(apb1_clk, "apb1", ahb3_apb1_apb2_parents, 0x520, +- 0, 5, /* M */ ++ 0, 2, /* M */ + 8, 2, /* P */ + 24, 2, /* mux */ + 0); + + static SUNXI_CCU_MP_WITH_MUX(apb2_clk, "apb2", ahb3_apb1_apb2_parents, 0x524, +- 0, 5, /* M */ ++ 0, 2, /* M */ + 8, 2, /* P */ + 24, 2, /* mux */ + 0); +@@ -682,7 +682,7 @@ static struct ccu_mux hdmi_cec_clk = { + + .common = { + .reg = 0xb10, +- .features = CCU_FEATURE_VARIABLE_PREDIV, ++ .features = CCU_FEATURE_FIXED_PREDIV, + .hw.init = CLK_HW_INIT_PARENTS("hdmi-cec", + hdmi_cec_parents, + &ccu_mux_ops, +diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig +index 2be849bb794ac..39f4d88662002 100644 +--- a/drivers/clocksource/Kconfig ++++ b/drivers/clocksource/Kconfig +@@ -79,6 +79,7 @@ config IXP4XX_TIMER + bool "Intel XScale IXP4xx timer driver" if COMPILE_TEST + depends on HAS_IOMEM + select CLKSRC_MMIO ++ select TIMER_OF if OF + help + Enables support for the Intel XScale IXP4xx SoC timer. 
+ +diff --git a/drivers/clocksource/mxs_timer.c b/drivers/clocksource/mxs_timer.c +index bc96a4cbf26c6..e52e12d27d2aa 100644 +--- a/drivers/clocksource/mxs_timer.c ++++ b/drivers/clocksource/mxs_timer.c +@@ -131,10 +131,7 @@ static void mxs_irq_clear(char *state) + + /* Clear pending interrupt */ + timrot_irq_acknowledge(); +- +-#ifdef DEBUG +- pr_info("%s: changing mode to %s\n", __func__, state) +-#endif /* DEBUG */ ++ pr_debug("%s: changing mode to %s\n", __func__, state); + } + + static int mxs_shutdown(struct clock_event_device *evt) +diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c +index d3e5a6fceb61b..d1bbc16fba4b4 100644 +--- a/drivers/cpufreq/acpi-cpufreq.c ++++ b/drivers/cpufreq/acpi-cpufreq.c +@@ -54,7 +54,6 @@ struct acpi_cpufreq_data { + unsigned int resume; + unsigned int cpu_feature; + unsigned int acpi_perf_cpu; +- unsigned int first_perf_state; + cpumask_var_t freqdomain_cpus; + void (*cpu_freq_write)(struct acpi_pct_register *reg, u32 val); + u32 (*cpu_freq_read)(struct acpi_pct_register *reg); +@@ -223,10 +222,10 @@ static unsigned extract_msr(struct cpufreq_policy *policy, u32 msr) + + perf = to_perf_data(data); + +- cpufreq_for_each_entry(pos, policy->freq_table + data->first_perf_state) ++ cpufreq_for_each_entry(pos, policy->freq_table) + if (msr == perf->states[pos->driver_data].status) + return pos->frequency; +- return policy->freq_table[data->first_perf_state].frequency; ++ return policy->freq_table[0].frequency; + } + + static unsigned extract_freq(struct cpufreq_policy *policy, u32 val) +@@ -365,7 +364,6 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu) + struct cpufreq_policy *policy; + unsigned int freq; + unsigned int cached_freq; +- unsigned int state; + + pr_debug("%s (%d)\n", __func__, cpu); + +@@ -377,11 +375,7 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu) + if (unlikely(!data || !policy->freq_table)) + return 0; + +- state = to_perf_data(data)->state; +- if (state < 
data->first_perf_state) +- state = data->first_perf_state; +- +- cached_freq = policy->freq_table[state].frequency; ++ cached_freq = policy->freq_table[to_perf_data(data)->state].frequency; + freq = extract_freq(policy, get_cur_val(cpumask_of(cpu), data)); + if (freq != cached_freq) { + /* +@@ -680,7 +674,6 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) + struct cpuinfo_x86 *c = &cpu_data(cpu); + unsigned int valid_states = 0; + unsigned int result = 0; +- unsigned int state_count; + u64 max_boost_ratio; + unsigned int i; + #ifdef CONFIG_SMP +@@ -795,28 +788,8 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) + goto err_unreg; + } + +- state_count = perf->state_count + 1; +- +- max_boost_ratio = get_max_boost_ratio(cpu); +- if (max_boost_ratio) { +- /* +- * Make a room for one more entry to represent the highest +- * available "boost" frequency. +- */ +- state_count++; +- valid_states++; +- data->first_perf_state = valid_states; +- } else { +- /* +- * If the maximum "boost" frequency is unknown, ask the arch +- * scale-invariance code to use the "nominal" performance for +- * CPU utilization scaling so as to prevent the schedutil +- * governor from selecting inadequate CPU frequencies. 
+- */ +- arch_set_max_freq_ratio(true); +- } +- +- freq_table = kcalloc(state_count, sizeof(*freq_table), GFP_KERNEL); ++ freq_table = kcalloc(perf->state_count + 1, sizeof(*freq_table), ++ GFP_KERNEL); + if (!freq_table) { + result = -ENOMEM; + goto err_unreg; +@@ -851,27 +824,25 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy) + } + freq_table[valid_states].frequency = CPUFREQ_TABLE_END; + ++ max_boost_ratio = get_max_boost_ratio(cpu); + if (max_boost_ratio) { +- unsigned int state = data->first_perf_state; +- unsigned int freq = freq_table[state].frequency; ++ unsigned int freq = freq_table[0].frequency; + + /* + * Because the loop above sorts the freq_table entries in the + * descending order, freq is the maximum frequency in the table. + * Assume that it corresponds to the CPPC nominal frequency and +- * use it to populate the frequency field of the extra "boost" +- * frequency entry. ++ * use it to set cpuinfo.max_freq. + */ +- freq_table[0].frequency = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT; ++ policy->cpuinfo.max_freq = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT; ++ } else { + /* +- * The purpose of the extra "boost" frequency entry is to make +- * the rest of cpufreq aware of the real maximum frequency, but +- * the way to request it is the same as for the first_perf_state +- * entry that is expected to cover the entire range of "boost" +- * frequencies of the CPU, so copy the driver_data value from +- * that entry. ++ * If the maximum "boost" frequency is unknown, ask the arch ++ * scale-invariance code to use the "nominal" performance for ++ * CPU utilization scaling so as to prevent the schedutil ++ * governor from selecting inadequate CPU frequencies. 
+ */ +- freq_table[0].driver_data = freq_table[state].driver_data; ++ arch_set_max_freq_ratio(true); + } + + policy->freq_table = freq_table; +@@ -947,8 +918,7 @@ static void acpi_cpufreq_cpu_ready(struct cpufreq_policy *policy) + { + struct acpi_processor_performance *perf = per_cpu_ptr(acpi_perf_data, + policy->cpu); +- struct acpi_cpufreq_data *data = policy->driver_data; +- unsigned int freq = policy->freq_table[data->first_perf_state].frequency; ++ unsigned int freq = policy->freq_table[0].frequency; + + if (perf->states[0].core_frequency * 1000 != freq) + pr_warn(FW_WARN "P-state 0 is not max freq\n"); +diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c +index 3e31e5d28b79c..4153150e20db5 100644 +--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c ++++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c +@@ -597,6 +597,16 @@ unmap_base: + return ret; + } + ++static void brcm_avs_prepare_uninit(struct platform_device *pdev) ++{ ++ struct private_data *priv; ++ ++ priv = platform_get_drvdata(pdev); ++ ++ iounmap(priv->avs_intr_base); ++ iounmap(priv->base); ++} ++ + static int brcm_avs_cpufreq_init(struct cpufreq_policy *policy) + { + struct cpufreq_frequency_table *freq_table; +@@ -732,21 +742,21 @@ static int brcm_avs_cpufreq_probe(struct platform_device *pdev) + + brcm_avs_driver.driver_data = pdev; + +- return cpufreq_register_driver(&brcm_avs_driver); ++ ret = cpufreq_register_driver(&brcm_avs_driver); ++ if (ret) ++ brcm_avs_prepare_uninit(pdev); ++ ++ return ret; + } + + static int brcm_avs_cpufreq_remove(struct platform_device *pdev) + { +- struct private_data *priv; + int ret; + + ret = cpufreq_unregister_driver(&brcm_avs_driver); +- if (ret) +- return ret; ++ WARN_ON(ret); + +- priv = platform_get_drvdata(pdev); +- iounmap(priv->base); +- iounmap(priv->avs_intr_base); ++ brcm_avs_prepare_uninit(pdev); + + return 0; + } +diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c +index f839dc9852c08..d3f756f7b5a05 
100644 +--- a/drivers/cpufreq/freq_table.c ++++ b/drivers/cpufreq/freq_table.c +@@ -52,7 +52,13 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy, + } + + policy->min = policy->cpuinfo.min_freq = min_freq; +- policy->max = policy->cpuinfo.max_freq = max_freq; ++ policy->max = max_freq; ++ /* ++ * If the driver has set its own cpuinfo.max_freq above max_freq, leave ++ * it as is. ++ */ ++ if (policy->cpuinfo.max_freq < max_freq) ++ policy->max = policy->cpuinfo.max_freq = max_freq; + + if (policy->min == ~0) + return -EINVAL; +diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c +index cb95da684457f..c8ae8554f4c91 100644 +--- a/drivers/cpufreq/intel_pstate.c ++++ b/drivers/cpufreq/intel_pstate.c +@@ -829,13 +829,13 @@ static struct freq_attr *hwp_cpufreq_attrs[] = { + NULL, + }; + +-static void intel_pstate_get_hwp_max(unsigned int cpu, int *phy_max, ++static void intel_pstate_get_hwp_max(struct cpudata *cpu, int *phy_max, + int *current_max) + { + u64 cap; + +- rdmsrl_on_cpu(cpu, MSR_HWP_CAPABILITIES, &cap); +- WRITE_ONCE(all_cpu_data[cpu]->hwp_cap_cached, cap); ++ rdmsrl_on_cpu(cpu->cpu, MSR_HWP_CAPABILITIES, &cap); ++ WRITE_ONCE(cpu->hwp_cap_cached, cap); + if (global.no_turbo || global.turbo_disabled) + *current_max = HWP_GUARANTEED_PERF(cap); + else +@@ -1223,7 +1223,7 @@ static void update_qos_request(enum freq_qos_req_type type) + continue; + + if (hwp_active) +- intel_pstate_get_hwp_max(i, &turbo_max, &max_state); ++ intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); + else + turbo_max = cpu->pstate.turbo_pstate; + +@@ -1724,21 +1724,22 @@ static void intel_pstate_max_within_limits(struct cpudata *cpu) + static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) + { + cpu->pstate.min_pstate = pstate_funcs.get_min(); +- cpu->pstate.max_pstate = pstate_funcs.get_max(); + cpu->pstate.max_pstate_physical = pstate_funcs.get_max_physical(); + cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); + 
cpu->pstate.scaling = pstate_funcs.get_scaling(); +- cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling; + + if (hwp_active && !hwp_mode_bdw) { + unsigned int phy_max, current_max; + +- intel_pstate_get_hwp_max(cpu->cpu, &phy_max, &current_max); ++ intel_pstate_get_hwp_max(cpu, &phy_max, &current_max); + cpu->pstate.turbo_freq = phy_max * cpu->pstate.scaling; + cpu->pstate.turbo_pstate = phy_max; ++ cpu->pstate.max_pstate = HWP_GUARANTEED_PERF(READ_ONCE(cpu->hwp_cap_cached)); + } else { + cpu->pstate.turbo_freq = cpu->pstate.turbo_pstate * cpu->pstate.scaling; ++ cpu->pstate.max_pstate = pstate_funcs.get_max(); + } ++ cpu->pstate.max_freq = cpu->pstate.max_pstate * cpu->pstate.scaling; + + if (pstate_funcs.get_aperf_mperf_shift) + cpu->aperf_mperf_shift = pstate_funcs.get_aperf_mperf_shift(); +@@ -2217,7 +2218,7 @@ static void intel_pstate_update_perf_limits(struct cpudata *cpu, + * rather than pure ratios. + */ + if (hwp_active) { +- intel_pstate_get_hwp_max(cpu->cpu, &turbo_max, &max_state); ++ intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); + } else { + max_state = global.no_turbo || global.turbo_disabled ?
+ cpu->pstate.max_pstate : cpu->pstate.turbo_pstate; +@@ -2332,7 +2333,7 @@ static void intel_pstate_verify_cpu_policy(struct cpudata *cpu, + if (hwp_active) { + int max_state, turbo_max; + +- intel_pstate_get_hwp_max(cpu->cpu, &turbo_max, &max_state); ++ intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); + max_freq = max_state * cpu->pstate.scaling; + } else { + max_freq = intel_pstate_get_max_freq(cpu); +@@ -2675,7 +2676,7 @@ static int intel_cpufreq_cpu_init(struct cpufreq_policy *policy) + if (hwp_active) { + u64 value; + +- intel_pstate_get_hwp_max(policy->cpu, &turbo_max, &max_state); ++ intel_pstate_get_hwp_max(cpu, &turbo_max, &max_state); + policy->transition_delay_us = INTEL_CPUFREQ_TRANSITION_DELAY_HWP; + rdmsrl_on_cpu(cpu->cpu, MSR_HWP_REQUEST, &value); + WRITE_ONCE(cpu->hwp_req_cached, value); +diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c +index 9ed5341dc515b..2726e77c9e5a9 100644 +--- a/drivers/cpufreq/qcom-cpufreq-hw.c ++++ b/drivers/cpufreq/qcom-cpufreq-hw.c +@@ -32,6 +32,7 @@ struct qcom_cpufreq_soc_data { + + struct qcom_cpufreq_data { + void __iomem *base; ++ struct resource *res; + const struct qcom_cpufreq_soc_data *soc_data; + }; + +@@ -280,6 +281,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy) + struct of_phandle_args args; + struct device_node *cpu_np; + struct device *cpu_dev; ++ struct resource *res; + void __iomem *base; + struct qcom_cpufreq_data *data; + int ret, index; +@@ -303,18 +305,33 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy) + + index = args.args[0]; + +- base = devm_platform_ioremap_resource(pdev, index); +- if (IS_ERR(base)) +- return PTR_ERR(base); ++ res = platform_get_resource(pdev, IORESOURCE_MEM, index); ++ if (!res) { ++ dev_err(dev, "failed to get mem resource %d\n", index); ++ return -ENODEV; ++ } ++ ++ if (!request_mem_region(res->start, resource_size(res), res->name)) { ++ dev_err(dev, "failed to request resource %pR\n", res); 
++ return -EBUSY; ++ } + +- data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); ++ base = ioremap(res->start, resource_size(res)); ++ if (IS_ERR(base)) { ++ dev_err(dev, "failed to map resource %pR\n", res); ++ ret = PTR_ERR(base); ++ goto release_region; ++ } ++ ++ data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) { + ret = -ENOMEM; +- goto error; ++ goto unmap_base; + } + + data->soc_data = of_device_get_match_data(&pdev->dev); + data->base = base; ++ data->res = res; + + /* HW should be in enabled state to proceed */ + if (!(readl_relaxed(base + data->soc_data->reg_enable) & 0x1)) { +@@ -349,7 +366,11 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy) + + return 0; + error: +- devm_iounmap(dev, base); ++ kfree(data); ++unmap_base: ++ iounmap(data->base); ++release_region: ++ release_mem_region(res->start, resource_size(res)); + return ret; + } + +@@ -357,12 +378,15 @@ static int qcom_cpufreq_hw_cpu_exit(struct cpufreq_policy *policy) + { + struct device *cpu_dev = get_cpu_device(policy->cpu); + struct qcom_cpufreq_data *data = policy->driver_data; +- struct platform_device *pdev = cpufreq_get_driver_data(); ++ struct resource *res = data->res; ++ void __iomem *base = data->base; + + dev_pm_opp_remove_all_dynamic(cpu_dev); + dev_pm_opp_of_cpumask_remove_table(policy->related_cpus); + kfree(policy->freq_table); +- devm_iounmap(&pdev->dev, data->base); ++ kfree(data); ++ iounmap(base); ++ release_mem_region(res->start, resource_size(res)); + + return 0; + } +diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c +index b72de8939497b..ffa628c89e21f 100644 +--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c ++++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss-cipher.c +@@ -20,6 +20,7 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq) + unsigned int ivsize = crypto_skcipher_ivsize(tfm); + struct sun4i_cipher_req_ctx *ctx = 
skcipher_request_ctx(areq); + u32 mode = ctx->mode; ++ void *backup_iv = NULL; + /* when activating SS, the default FIFO space is SS_RX_DEFAULT(32) */ + u32 rx_cnt = SS_RX_DEFAULT; + u32 tx_cnt = 0; +@@ -30,6 +31,8 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq) + unsigned int ileft = areq->cryptlen; + unsigned int oleft = areq->cryptlen; + unsigned int todo; ++ unsigned long pi = 0, po = 0; /* progress for in and out */ ++ bool miter_err; + struct sg_mapping_iter mi, mo; + unsigned int oi, oo; /* offset for in and out */ + unsigned long flags; +@@ -42,52 +45,71 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq) + return -EINVAL; + } + ++ if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) { ++ backup_iv = kzalloc(ivsize, GFP_KERNEL); ++ if (!backup_iv) ++ return -ENOMEM; ++ scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0); ++ } ++ + spin_lock_irqsave(&ss->slock, flags); + +- for (i = 0; i < op->keylen; i += 4) +- writel(*(op->key + i / 4), ss->base + SS_KEY0 + i); ++ for (i = 0; i < op->keylen / 4; i++) ++ writesl(ss->base + SS_KEY0 + i * 4, &op->key[i], 1); + + if (areq->iv) { + for (i = 0; i < 4 && i < ivsize / 4; i++) { + v = *(u32 *)(areq->iv + i * 4); +- writel(v, ss->base + SS_IV0 + i * 4); ++ writesl(ss->base + SS_IV0 + i * 4, &v, 1); + } + } + writel(mode, ss->base + SS_CTL); + +- sg_miter_start(&mi, areq->src, sg_nents(areq->src), +- SG_MITER_FROM_SG | SG_MITER_ATOMIC); +- sg_miter_start(&mo, areq->dst, sg_nents(areq->dst), +- SG_MITER_TO_SG | SG_MITER_ATOMIC); +- sg_miter_next(&mi); +- sg_miter_next(&mo); +- if (!mi.addr || !mo.addr) { +- dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n"); +- err = -EINVAL; +- goto release_ss; +- } + + ileft = areq->cryptlen / 4; + oleft = areq->cryptlen / 4; + oi = 0; + oo = 0; + do { +- todo = min(rx_cnt, ileft); +- todo = min_t(size_t, todo, (mi.length - oi) / 4); +- if (todo) { +- ileft -= todo; +- 
writesl(ss->base + SS_RXFIFO, mi.addr + oi, todo); +- oi += todo * 4; +- } +- if (oi == mi.length) { +- sg_miter_next(&mi); +- oi = 0; ++ if (ileft) { ++ sg_miter_start(&mi, areq->src, sg_nents(areq->src), ++ SG_MITER_FROM_SG | SG_MITER_ATOMIC); ++ if (pi) ++ sg_miter_skip(&mi, pi); ++ miter_err = sg_miter_next(&mi); ++ if (!miter_err || !mi.addr) { ++ dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n"); ++ err = -EINVAL; ++ goto release_ss; ++ } ++ todo = min(rx_cnt, ileft); ++ todo = min_t(size_t, todo, (mi.length - oi) / 4); ++ if (todo) { ++ ileft -= todo; ++ writesl(ss->base + SS_RXFIFO, mi.addr + oi, todo); ++ oi += todo * 4; ++ } ++ if (oi == mi.length) { ++ pi += mi.length; ++ oi = 0; ++ } ++ sg_miter_stop(&mi); + } + + spaces = readl(ss->base + SS_FCSR); + rx_cnt = SS_RXFIFO_SPACES(spaces); + tx_cnt = SS_TXFIFO_SPACES(spaces); + ++ sg_miter_start(&mo, areq->dst, sg_nents(areq->dst), ++ SG_MITER_TO_SG | SG_MITER_ATOMIC); ++ if (po) ++ sg_miter_skip(&mo, po); ++ miter_err = sg_miter_next(&mo); ++ if (!miter_err || !mo.addr) { ++ dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n"); ++ err = -EINVAL; ++ goto release_ss; ++ } + todo = min(tx_cnt, oleft); + todo = min_t(size_t, todo, (mo.length - oo) / 4); + if (todo) { +@@ -96,21 +118,23 @@ static int noinline_for_stack sun4i_ss_opti_poll(struct skcipher_request *areq) + oo += todo * 4; + } + if (oo == mo.length) { +- sg_miter_next(&mo); + oo = 0; ++ po += mo.length; + } ++ sg_miter_stop(&mo); + } while (oleft); + + if (areq->iv) { +- for (i = 0; i < 4 && i < ivsize / 4; i++) { +- v = readl(ss->base + SS_IV0 + i * 4); +- *(u32 *)(areq->iv + i * 4) = v; ++ if (mode & SS_DECRYPTION) { ++ memcpy(areq->iv, backup_iv, ivsize); ++ kfree_sensitive(backup_iv); ++ } else { ++ scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize, ++ ivsize, 0); + } + } + + release_ss: +- sg_miter_stop(&mi); +- sg_miter_stop(&mo); + writel(0, ss->base + SS_CTL); + spin_unlock_irqrestore(&ss->slock, 
flags); + return err; +@@ -161,13 +185,16 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq) + unsigned int ileft = areq->cryptlen; + unsigned int oleft = areq->cryptlen; + unsigned int todo; ++ void *backup_iv = NULL; + struct sg_mapping_iter mi, mo; ++ unsigned long pi = 0, po = 0; /* progress for in and out */ ++ bool miter_err; + unsigned int oi, oo; /* offset for in and out */ + unsigned int ob = 0; /* offset in buf */ + unsigned int obo = 0; /* offset in bufo*/ + unsigned int obl = 0; /* length of data in bufo */ + unsigned long flags; +- bool need_fallback; ++ bool need_fallback = false; + + if (!areq->cryptlen) + return 0; +@@ -186,12 +213,12 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq) + * we can use the SS optimized function + */ + while (in_sg && no_chunk == 1) { +- if (in_sg->length % 4) ++ if ((in_sg->length | in_sg->offset) & 3u) + no_chunk = 0; + in_sg = sg_next(in_sg); + } + while (out_sg && no_chunk == 1) { +- if (out_sg->length % 4) ++ if ((out_sg->length | out_sg->offset) & 3u) + no_chunk = 0; + out_sg = sg_next(out_sg); + } +@@ -202,30 +229,26 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq) + if (need_fallback) + return sun4i_ss_cipher_poll_fallback(areq); + ++ if (areq->iv && ivsize > 0 && mode & SS_DECRYPTION) { ++ backup_iv = kzalloc(ivsize, GFP_KERNEL); ++ if (!backup_iv) ++ return -ENOMEM; ++ scatterwalk_map_and_copy(backup_iv, areq->src, areq->cryptlen - ivsize, ivsize, 0); ++ } ++ + spin_lock_irqsave(&ss->slock, flags); + +- for (i = 0; i < op->keylen; i += 4) +- writel(*(op->key + i / 4), ss->base + SS_KEY0 + i); ++ for (i = 0; i < op->keylen / 4; i++) ++ writesl(ss->base + SS_KEY0 + i * 4, &op->key[i], 1); + + if (areq->iv) { + for (i = 0; i < 4 && i < ivsize / 4; i++) { + v = *(u32 *)(areq->iv + i * 4); +- writel(v, ss->base + SS_IV0 + i * 4); ++ writesl(ss->base + SS_IV0 + i * 4, &v, 1); + } + } + writel(mode, ss->base + SS_CTL); + +- sg_miter_start(&mi, areq->src, 
sg_nents(areq->src), +- SG_MITER_FROM_SG | SG_MITER_ATOMIC); +- sg_miter_start(&mo, areq->dst, sg_nents(areq->dst), +- SG_MITER_TO_SG | SG_MITER_ATOMIC); +- sg_miter_next(&mi); +- sg_miter_next(&mo); +- if (!mi.addr || !mo.addr) { +- dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n"); +- err = -EINVAL; +- goto release_ss; +- } + ileft = areq->cryptlen; + oleft = areq->cryptlen; + oi = 0; +@@ -233,8 +256,16 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq) + + while (oleft) { + if (ileft) { +- char buf[4 * SS_RX_MAX];/* buffer for linearize SG src */ +- ++ sg_miter_start(&mi, areq->src, sg_nents(areq->src), ++ SG_MITER_FROM_SG | SG_MITER_ATOMIC); ++ if (pi) ++ sg_miter_skip(&mi, pi); ++ miter_err = sg_miter_next(&mi); ++ if (!miter_err || !mi.addr) { ++ dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n"); ++ err = -EINVAL; ++ goto release_ss; ++ } + /* + * todo is the number of consecutive 4byte word that we + * can read from current SG +@@ -256,52 +287,57 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq) + */ + todo = min(rx_cnt * 4 - ob, ileft); + todo = min_t(size_t, todo, mi.length - oi); +- memcpy(buf + ob, mi.addr + oi, todo); ++ memcpy(ss->buf + ob, mi.addr + oi, todo); + ileft -= todo; + oi += todo; + ob += todo; + if (!(ob % 4)) { +- writesl(ss->base + SS_RXFIFO, buf, ++ writesl(ss->base + SS_RXFIFO, ss->buf, + ob / 4); + ob = 0; + } + } + if (oi == mi.length) { +- sg_miter_next(&mi); ++ pi += mi.length; + oi = 0; + } ++ sg_miter_stop(&mi); + } + + spaces = readl(ss->base + SS_FCSR); + rx_cnt = SS_RXFIFO_SPACES(spaces); + tx_cnt = SS_TXFIFO_SPACES(spaces); +- dev_dbg(ss->dev, +- "%x %u/%zu %u/%u cnt=%u %u/%zu %u/%u cnt=%u %u\n", +- mode, +- oi, mi.length, ileft, areq->cryptlen, rx_cnt, +- oo, mo.length, oleft, areq->cryptlen, tx_cnt, ob); + + if (!tx_cnt) + continue; ++ sg_miter_start(&mo, areq->dst, sg_nents(areq->dst), ++ SG_MITER_TO_SG | SG_MITER_ATOMIC); ++ if (po) ++ sg_miter_skip(&mo, po); ++ 
miter_err = sg_miter_next(&mo); ++ if (!miter_err || !mo.addr) { ++ dev_err_ratelimited(ss->dev, "ERROR: sg_miter return null\n"); ++ err = -EINVAL; ++ goto release_ss; ++ } + /* todo in 4bytes word */ + todo = min(tx_cnt, oleft / 4); + todo = min_t(size_t, todo, (mo.length - oo) / 4); ++ + if (todo) { + readsl(ss->base + SS_TXFIFO, mo.addr + oo, todo); + oleft -= todo * 4; + oo += todo * 4; + if (oo == mo.length) { +- sg_miter_next(&mo); ++ po += mo.length; + oo = 0; + } + } else { +- char bufo[4 * SS_TX_MAX]; /* buffer for linearize SG dst */ +- + /* + * read obl bytes in bufo, we read at maximum for + * emptying the device + */ +- readsl(ss->base + SS_TXFIFO, bufo, tx_cnt); ++ readsl(ss->base + SS_TXFIFO, ss->bufo, tx_cnt); + obl = tx_cnt * 4; + obo = 0; + do { +@@ -313,28 +349,31 @@ static int sun4i_ss_cipher_poll(struct skcipher_request *areq) + */ + todo = min_t(size_t, + mo.length - oo, obl - obo); +- memcpy(mo.addr + oo, bufo + obo, todo); ++ memcpy(mo.addr + oo, ss->bufo + obo, todo); + oleft -= todo; + obo += todo; + oo += todo; + if (oo == mo.length) { ++ po += mo.length; + sg_miter_next(&mo); + oo = 0; + } + } while (obo < obl); + /* bufo must be fully used here */ + } ++ sg_miter_stop(&mo); + } + if (areq->iv) { +- for (i = 0; i < 4 && i < ivsize / 4; i++) { +- v = readl(ss->base + SS_IV0 + i * 4); +- *(u32 *)(areq->iv + i * 4) = v; ++ if (mode & SS_DECRYPTION) { ++ memcpy(areq->iv, backup_iv, ivsize); ++ kfree_sensitive(backup_iv); ++ } else { ++ scatterwalk_map_and_copy(areq->iv, areq->dst, areq->cryptlen - ivsize, ++ ivsize, 0); + } + } + + release_ss: +- sg_miter_stop(&mi); +- sg_miter_stop(&mo); + writel(0, ss->base + SS_CTL); + spin_unlock_irqrestore(&ss->slock, flags); + +diff --git a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h +index 163962f9e2845..02105b39fbfec 100644 +--- a/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h ++++ b/drivers/crypto/allwinner/sun4i-ss/sun4i-ss.h +@@ -148,6 +148,8 @@ 
struct sun4i_ss_ctx { + struct reset_control *reset; + struct device *dev; + struct resource *res; ++ char buf[4 * SS_RX_MAX];/* buffer for linearize SG src */ ++ char bufo[4 * SS_TX_MAX]; /* buffer for linearize SG dst */ + spinlock_t slock; /* control the use of the device */ + #ifdef CONFIG_CRYPTO_DEV_SUN4I_SS_PRNG + u32 seed[SS_SEED_LEN / BITS_PER_LONG]; +diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c +index 50d169e61b41d..1cb310a133b3f 100644 +--- a/drivers/crypto/bcm/cipher.c ++++ b/drivers/crypto/bcm/cipher.c +@@ -41,7 +41,7 @@ + + /* ================= Device Structure ================== */ + +-struct device_private iproc_priv; ++struct bcm_device_private iproc_priv; + + /* ==================== Parameters ===================== */ + +diff --git a/drivers/crypto/bcm/cipher.h b/drivers/crypto/bcm/cipher.h +index 035c8389cb3dd..892823ef4a019 100644 +--- a/drivers/crypto/bcm/cipher.h ++++ b/drivers/crypto/bcm/cipher.h +@@ -419,7 +419,7 @@ struct spu_hw { + u32 num_chan; + }; + +-struct device_private { ++struct bcm_device_private { + struct platform_device *pdev; + + struct spu_hw spu; +@@ -466,6 +466,6 @@ struct device_private { + struct mbox_chan **mbox; + }; + +-extern struct device_private iproc_priv; ++extern struct bcm_device_private iproc_priv; + + #endif +diff --git a/drivers/crypto/bcm/util.c b/drivers/crypto/bcm/util.c +index 2b304fc780595..77aeedb840555 100644 +--- a/drivers/crypto/bcm/util.c ++++ b/drivers/crypto/bcm/util.c +@@ -348,7 +348,7 @@ char *spu_alg_name(enum spu_cipher_alg alg, enum spu_cipher_mode mode) + static ssize_t spu_debugfs_read(struct file *filp, char __user *ubuf, + size_t count, loff_t *offp) + { +- struct device_private *ipriv; ++ struct bcm_device_private *ipriv; + char *buf; + ssize_t ret, out_offset, out_count; + int i; +diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c +index a713a35dc5022..ae86557291c3f 100644 +--- a/drivers/crypto/talitos.c ++++ b/drivers/crypto/talitos.c +@@ -1092,11 
+1092,12 @@ static void ipsec_esp_decrypt_hwauth_done(struct device *dev, + */ + static int sg_to_link_tbl_offset(struct scatterlist *sg, int sg_count, + unsigned int offset, int datalen, int elen, +- struct talitos_ptr *link_tbl_ptr) ++ struct talitos_ptr *link_tbl_ptr, int align) + { + int n_sg = elen ? sg_count + 1 : sg_count; + int count = 0; + int cryptlen = datalen + elen; ++ int padding = ALIGN(cryptlen, align) - cryptlen; + + while (cryptlen && sg && n_sg--) { + unsigned int len = sg_dma_len(sg); +@@ -1120,7 +1121,7 @@ static int sg_to_link_tbl_offset(struct scatterlist *sg, int sg_count, + offset += datalen; + } + to_talitos_ptr(link_tbl_ptr + count, +- sg_dma_address(sg) + offset, len, 0); ++ sg_dma_address(sg) + offset, sg_next(sg) ? len : len + padding, 0); + to_talitos_ptr_ext_set(link_tbl_ptr + count, 0, 0); + count++; + cryptlen -= len; +@@ -1143,10 +1144,11 @@ static int talitos_sg_map_ext(struct device *dev, struct scatterlist *src, + unsigned int len, struct talitos_edesc *edesc, + struct talitos_ptr *ptr, int sg_count, + unsigned int offset, int tbl_off, int elen, +- bool force) ++ bool force, int align) + { + struct talitos_private *priv = dev_get_drvdata(dev); + bool is_sec1 = has_ftr_sec1(priv); ++ int aligned_len = ALIGN(len, align); + + if (!src) { + to_talitos_ptr(ptr, 0, 0, is_sec1); +@@ -1154,22 +1156,22 @@ static int talitos_sg_map_ext(struct device *dev, struct scatterlist *src, + } + to_talitos_ptr_ext_set(ptr, elen, is_sec1); + if (sg_count == 1 && !force) { +- to_talitos_ptr(ptr, sg_dma_address(src) + offset, len, is_sec1); ++ to_talitos_ptr(ptr, sg_dma_address(src) + offset, aligned_len, is_sec1); + return sg_count; + } + if (is_sec1) { +- to_talitos_ptr(ptr, edesc->dma_link_tbl + offset, len, is_sec1); ++ to_talitos_ptr(ptr, edesc->dma_link_tbl + offset, aligned_len, is_sec1); + return sg_count; + } + sg_count = sg_to_link_tbl_offset(src, sg_count, offset, len, elen, +- &edesc->link_tbl[tbl_off]); ++ &edesc->link_tbl[tbl_off], 
align); + if (sg_count == 1 && !force) { + /* Only one segment now, so no link tbl needed*/ + copy_talitos_ptr(ptr, &edesc->link_tbl[tbl_off], is_sec1); + return sg_count; + } + to_talitos_ptr(ptr, edesc->dma_link_tbl + +- tbl_off * sizeof(struct talitos_ptr), len, is_sec1); ++ tbl_off * sizeof(struct talitos_ptr), aligned_len, is_sec1); + to_talitos_ptr_ext_or(ptr, DESC_PTR_LNKTBL_JUMP, is_sec1); + + return sg_count; +@@ -1181,7 +1183,7 @@ static int talitos_sg_map(struct device *dev, struct scatterlist *src, + unsigned int offset, int tbl_off) + { + return talitos_sg_map_ext(dev, src, len, edesc, ptr, sg_count, offset, +- tbl_off, 0, false); ++ tbl_off, 0, false, 1); + } + + /* +@@ -1250,7 +1252,7 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq, + + ret = talitos_sg_map_ext(dev, areq->src, cryptlen, edesc, &desc->ptr[4], + sg_count, areq->assoclen, tbl_off, elen, +- false); ++ false, 1); + + if (ret > 1) { + tbl_off += ret; +@@ -1270,7 +1272,7 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq, + elen = 0; + ret = talitos_sg_map_ext(dev, areq->dst, cryptlen, edesc, &desc->ptr[5], + sg_count, areq->assoclen, tbl_off, elen, +- is_ipsec_esp && !encrypt); ++ is_ipsec_esp && !encrypt, 1); + tbl_off += ret; + + if (!encrypt && is_ipsec_esp) { +@@ -1576,6 +1578,8 @@ static int common_nonsnoop(struct talitos_edesc *edesc, + bool sync_needed = false; + struct talitos_private *priv = dev_get_drvdata(dev); + bool is_sec1 = has_ftr_sec1(priv); ++ bool is_ctr = (desc->hdr & DESC_HDR_SEL0_MASK) == DESC_HDR_SEL0_AESU && ++ (desc->hdr & DESC_HDR_MODE0_AESU_MASK) == DESC_HDR_MODE0_AESU_CTR; + + /* first DWORD empty */ + +@@ -1596,8 +1600,8 @@ static int common_nonsnoop(struct talitos_edesc *edesc, + /* + * cipher in + */ +- sg_count = talitos_sg_map(dev, areq->src, cryptlen, edesc, +- &desc->ptr[3], sg_count, 0, 0); ++ sg_count = talitos_sg_map_ext(dev, areq->src, cryptlen, edesc, &desc->ptr[3], ++ sg_count, 0, 0, 0, 
false, is_ctr ? 16 : 1); + if (sg_count > 1) + sync_needed = true; + +@@ -2760,6 +2764,22 @@ static struct talitos_alg_template driver_algs[] = { + DESC_HDR_SEL0_AESU | + DESC_HDR_MODE0_AESU_CTR, + }, ++ { .type = CRYPTO_ALG_TYPE_SKCIPHER, ++ .alg.skcipher = { ++ .base.cra_name = "ctr(aes)", ++ .base.cra_driver_name = "ctr-aes-talitos", ++ .base.cra_blocksize = 1, ++ .base.cra_flags = CRYPTO_ALG_ASYNC | ++ CRYPTO_ALG_ALLOCATES_MEMORY, ++ .min_keysize = AES_MIN_KEY_SIZE, ++ .max_keysize = AES_MAX_KEY_SIZE, ++ .ivsize = AES_BLOCK_SIZE, ++ .setkey = skcipher_aes_setkey, ++ }, ++ .desc_hdr_template = DESC_HDR_TYPE_COMMON_NONSNOOP_NO_AFEU | ++ DESC_HDR_SEL0_AESU | ++ DESC_HDR_MODE0_AESU_CTR, ++ }, + { .type = CRYPTO_ALG_TYPE_SKCIPHER, + .alg.skcipher = { + .base.cra_name = "ecb(des)", +@@ -3177,6 +3197,12 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev, + t_alg->algt.alg.skcipher.setkey ?: skcipher_setkey; + t_alg->algt.alg.skcipher.encrypt = skcipher_encrypt; + t_alg->algt.alg.skcipher.decrypt = skcipher_decrypt; ++ if (!strcmp(alg->cra_name, "ctr(aes)") && !has_ftr_sec1(priv) && ++ DESC_TYPE(t_alg->algt.desc_hdr_template) != ++ DESC_TYPE(DESC_HDR_TYPE_AESU_CTR_NONSNOOP)) { ++ devm_kfree(dev, t_alg); ++ return ERR_PTR(-ENOTSUPP); ++ } + break; + case CRYPTO_ALG_TYPE_AEAD: + alg = &t_alg->algt.alg.aead.base; +diff --git a/drivers/crypto/talitos.h b/drivers/crypto/talitos.h +index 1469b956948ab..32825119e8805 100644 +--- a/drivers/crypto/talitos.h ++++ b/drivers/crypto/talitos.h +@@ -344,6 +344,7 @@ static inline bool has_ftr_sec1(struct talitos_private *priv) + + /* primary execution unit mode (MODE0) and derivatives */ + #define DESC_HDR_MODE0_ENCRYPT cpu_to_be32(0x00100000) ++#define DESC_HDR_MODE0_AESU_MASK cpu_to_be32(0x00600000) + #define DESC_HDR_MODE0_AESU_CBC cpu_to_be32(0x00200000) + #define DESC_HDR_MODE0_AESU_CTR cpu_to_be32(0x00600000) + #define DESC_HDR_MODE0_DEU_CBC cpu_to_be32(0x00400000) +diff --git a/drivers/dax/bus.c 
b/drivers/dax/bus.c +index de7b74505e75e..c1d379bd7af33 100644 +--- a/drivers/dax/bus.c ++++ b/drivers/dax/bus.c +@@ -1046,7 +1046,7 @@ static ssize_t range_parse(const char *opt, size_t len, struct range *range) + { + unsigned long long addr = 0; + char *start, *end, *str; +- ssize_t rc = EINVAL; ++ ssize_t rc = -EINVAL; + + str = kstrdup(opt, GFP_KERNEL); + if (!str) +diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c +index 0feb323bae1e3..f8459cc5315df 100644 +--- a/drivers/dma/fsldma.c ++++ b/drivers/dma/fsldma.c +@@ -1214,6 +1214,7 @@ static int fsldma_of_probe(struct platform_device *op) + { + struct fsldma_device *fdev; + struct device_node *child; ++ unsigned int i; + int err; + + fdev = kzalloc(sizeof(*fdev), GFP_KERNEL); +@@ -1292,6 +1293,10 @@ static int fsldma_of_probe(struct platform_device *op) + return 0; + + out_free_fdev: ++ for (i = 0; i < FSL_DMA_MAX_CHANS_PER_DEVICE; i++) { ++ if (fdev->chan[i]) ++ fsl_dma_chan_remove(fdev->chan[i]); ++ } + irq_dispose_mapping(fdev->irq); + iounmap(fdev->regs); + out_free: +@@ -1314,6 +1319,7 @@ static int fsldma_of_remove(struct platform_device *op) + if (fdev->chan[i]) + fsl_dma_chan_remove(fdev->chan[i]); + } ++ irq_dispose_mapping(fdev->irq); + + iounmap(fdev->regs); + kfree(fdev); +diff --git a/drivers/dma/hsu/pci.c b/drivers/dma/hsu/pci.c +index 07cc7320a614f..9045a6f7f5893 100644 +--- a/drivers/dma/hsu/pci.c ++++ b/drivers/dma/hsu/pci.c +@@ -26,22 +26,12 @@ + static irqreturn_t hsu_pci_irq(int irq, void *dev) + { + struct hsu_dma_chip *chip = dev; +- struct pci_dev *pdev = to_pci_dev(chip->dev); + u32 dmaisr; + u32 status; + unsigned short i; + int ret = 0; + int err; + +- /* +- * On Intel Tangier B0 and Anniedale the interrupt line, disregarding +- * to have different numbers, is shared between HSU DMA and UART IPs. +- * Thus on such SoCs we are expecting that IRQ handler is called in +- * UART driver only. 
+- */ +- if (pdev->device == PCI_DEVICE_ID_INTEL_MRFLD_HSU_DMA) +- return IRQ_HANDLED; +- + dmaisr = readl(chip->regs + HSU_PCI_DMAISR); + for (i = 0; i < chip->hsu->nr_channels; i++) { + if (dmaisr & 0x1) { +@@ -105,6 +95,17 @@ static int hsu_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) + if (ret) + goto err_register_irq; + ++ /* ++ * On Intel Tangier B0 and Anniedale the interrupt line, disregarding ++ * to have different numbers, is shared between HSU DMA and UART IPs. ++ * Thus on such SoCs we are expecting that IRQ handler is called in ++ * UART driver only. Instead of handling the spurious interrupt ++ * from HSU DMA here and waste CPU time and delay HSU UART interrupt ++ * handling, disable the interrupt entirely. ++ */ ++ if (pdev->device == PCI_DEVICE_ID_INTEL_MRFLD_HSU_DMA) ++ disable_irq_nosync(chip->irq); ++ + pci_set_drvdata(pdev, chip); + + return 0; +diff --git a/drivers/dma/idxd/dma.c b/drivers/dma/idxd/dma.c +index 8b14ba0bae1cd..ec177a535d6dd 100644 +--- a/drivers/dma/idxd/dma.c ++++ b/drivers/dma/idxd/dma.c +@@ -174,6 +174,7 @@ int idxd_register_dma_device(struct idxd_device *idxd) + INIT_LIST_HEAD(&dma->channels); + dma->dev = &idxd->pdev->dev; + ++ dma_cap_set(DMA_PRIVATE, dma->cap_mask); + dma_cap_set(DMA_COMPLETION_NO_ORDER, dma->cap_mask); + dma->device_release = idxd_dma_release; + +diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c +index 9fede32641e9e..04202d75f4eed 100644 +--- a/drivers/dma/owl-dma.c ++++ b/drivers/dma/owl-dma.c +@@ -1245,6 +1245,7 @@ static int owl_dma_remove(struct platform_device *pdev) + owl_dma_free(od); + + clk_disable_unprepare(od->clk); ++ dma_pool_destroy(od->lli_pool); + + return 0; + } +diff --git a/drivers/firmware/arm_scmi/driver.c b/drivers/firmware/arm_scmi/driver.c +index 3dfd8b6a0ebf7..6b2ce3f28f7b9 100644 +--- a/drivers/firmware/arm_scmi/driver.c ++++ b/drivers/firmware/arm_scmi/driver.c +@@ -847,8 +847,6 @@ static int scmi_remove(struct platform_device *pdev) + struct 
scmi_info *info = platform_get_drvdata(pdev); + struct idr *idr = &info->tx_idr; + +- scmi_notification_exit(&info->handle); +- + mutex_lock(&scmi_list_mutex); + if (info->users) + ret = -EBUSY; +@@ -859,6 +857,8 @@ static int scmi_remove(struct platform_device *pdev) + if (ret) + return ret; + ++ scmi_notification_exit(&info->handle); ++ + /* Safe to free channels since no more users */ + ret = idr_for_each(idr, info->desc->ops->chan_free, idr); + idr_destroy(&info->tx_idr); +diff --git a/drivers/gpio/gpio-pcf857x.c b/drivers/gpio/gpio-pcf857x.c +index a2a8d155c75e3..b7568ee33696d 100644 +--- a/drivers/gpio/gpio-pcf857x.c ++++ b/drivers/gpio/gpio-pcf857x.c +@@ -332,7 +332,7 @@ static int pcf857x_probe(struct i2c_client *client, + * reset state. Otherwise it flags pins to be driven low. + */ + gpio->out = ~n_latch; +- gpio->status = gpio->out; ++ gpio->status = gpio->read(gpio->client); + + /* Enable irqchip if we have an interrupt */ + if (client->irq) { +diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig +index 147d61b9674ea..16f73c1023943 100644 +--- a/drivers/gpu/drm/Kconfig ++++ b/drivers/gpu/drm/Kconfig +@@ -15,6 +15,9 @@ menuconfig DRM + select I2C_ALGOBIT + select DMA_SHARED_BUFFER + select SYNC_FILE ++# gallium uses SYS_kcmp for os_same_file_description() to de-duplicate ++# device and dmabuf fd. Let's make sure that is available for our userspace. ++ select KCMP + help + Kernel-level support for the Direct Rendering Infrastructure (DRI) + introduced in XFree86 4.0. 
If you say Y here, you need to select +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c +index 82cd8e55595af..eb22a190c2423 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c +@@ -844,7 +844,7 @@ static int amdgpu_ras_error_inject_xgmi(struct amdgpu_device *adev, + if (amdgpu_dpm_allow_xgmi_power_down(adev, true)) + dev_warn(adev->dev, "Failed to allow XGMI power down"); + +- if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_DISALLOW)) ++ if (amdgpu_dpm_set_df_cstate(adev, DF_CSTATE_ALLOW)) + dev_warn(adev->dev, "Failed to allow df cstate"); + + return ret; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h +index ee9480d14cbc3..86cfb3d55477f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_trace.h +@@ -21,7 +21,7 @@ + * + */ + +-#if !defined(_AMDGPU_TRACE_H) || defined(TRACE_HEADER_MULTI_READ) ++#if !defined(_AMDGPU_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ) + #define _AMDGPU_TRACE_H_ + + #include +diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c +index 41cd108214d6d..7efc618887e21 100644 +--- a/drivers/gpu/drm/amd/amdgpu/soc15.c ++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c +@@ -246,6 +246,8 @@ static u32 soc15_get_xclk(struct amdgpu_device *adev) + { + u32 reference_clock = adev->clock.spll.reference_freq; + ++ if (adev->asic_type == CHIP_RENOIR) ++ return 10000; + if (adev->asic_type == CHIP_RAVEN) + return reference_clock / 4; + +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h +index 16262e5d93f5c..7351dd195274e 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h +@@ -243,11 +243,11 @@ get_sh_mem_bases_nybble_64(struct kfd_process_device *pdd) + static inline void dqm_lock(struct 
device_queue_manager *dqm) + { + mutex_lock(&dqm->lock_hidden); +- dqm->saved_flags = memalloc_nofs_save(); ++ dqm->saved_flags = memalloc_noreclaim_save(); + } + static inline void dqm_unlock(struct device_queue_manager *dqm) + { +- memalloc_nofs_restore(dqm->saved_flags); ++ memalloc_noreclaim_restore(dqm->saved_flags); + mutex_unlock(&dqm->lock_hidden); + } + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index fdca76fc598c0..bffaefaf5a292 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -1096,7 +1096,7 @@ static void amdgpu_dm_fini(struct amdgpu_device *adev) + + #ifdef CONFIG_DRM_AMD_DC_HDCP + if (adev->dm.hdcp_workqueue) { +- hdcp_destroy(adev->dm.hdcp_workqueue); ++ hdcp_destroy(&adev->dev->kobj, adev->dm.hdcp_workqueue); + adev->dm.hdcp_workqueue = NULL; + } + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c +index c2cd184f0bbd4..79de68ac03f20 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c +@@ -376,7 +376,7 @@ static void event_cpirq(struct work_struct *work) + } + + +-void hdcp_destroy(struct hdcp_workqueue *hdcp_work) ++void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *hdcp_work) + { + int i = 0; + +@@ -385,6 +385,7 @@ void hdcp_destroy(struct hdcp_workqueue *hdcp_work) + cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork); + } + ++ sysfs_remove_bin_file(kobj, &hdcp_work[0].attr); + kfree(hdcp_work->srm); + kfree(hdcp_work->srm_temp); + kfree(hdcp_work); +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h +index 5159b3a5e5b03..09294ff122fea 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h ++++ 
b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.h +@@ -69,7 +69,7 @@ void hdcp_update_display(struct hdcp_workqueue *hdcp_work, + + void hdcp_reset_display(struct hdcp_workqueue *work, unsigned int link_index); + void hdcp_handle_cpirq(struct hdcp_workqueue *work, unsigned int link_index); +-void hdcp_destroy(struct hdcp_workqueue *work); ++void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *work); + + struct hdcp_workqueue *hdcp_create_workqueue(struct amdgpu_device *adev, struct cp_psp *cp_psp, struct dc *dc); + +diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table.c b/drivers/gpu/drm/amd/display/dc/bios/command_table.c +index 070459e3e4070..afc10b954ffa7 100644 +--- a/drivers/gpu/drm/amd/display/dc/bios/command_table.c ++++ b/drivers/gpu/drm/amd/display/dc/bios/command_table.c +@@ -245,6 +245,23 @@ static enum bp_result encoder_control_digx_v3( + cntl->enable_dp_audio); + params.ucLaneNum = (uint8_t)(cntl->lanes_number); + ++ switch (cntl->color_depth) { ++ case COLOR_DEPTH_888: ++ params.ucBitPerColor = PANEL_8BIT_PER_COLOR; ++ break; ++ case COLOR_DEPTH_101010: ++ params.ucBitPerColor = PANEL_10BIT_PER_COLOR; ++ break; ++ case COLOR_DEPTH_121212: ++ params.ucBitPerColor = PANEL_12BIT_PER_COLOR; ++ break; ++ case COLOR_DEPTH_161616: ++ params.ucBitPerColor = PANEL_16BIT_PER_COLOR; ++ break; ++ default: ++ break; ++ } ++ + if (EXEC_BIOS_CMD_TABLE(DIGxEncoderControl, params)) + result = BP_RESULT_OK; + +@@ -274,6 +291,23 @@ static enum bp_result encoder_control_digx_v4( + cntl->enable_dp_audio)); + params.ucLaneNum = (uint8_t)(cntl->lanes_number); + ++ switch (cntl->color_depth) { ++ case COLOR_DEPTH_888: ++ params.ucBitPerColor = PANEL_8BIT_PER_COLOR; ++ break; ++ case COLOR_DEPTH_101010: ++ params.ucBitPerColor = PANEL_10BIT_PER_COLOR; ++ break; ++ case COLOR_DEPTH_121212: ++ params.ucBitPerColor = PANEL_12BIT_PER_COLOR; ++ break; ++ case COLOR_DEPTH_161616: ++ params.ucBitPerColor = PANEL_16BIT_PER_COLOR; ++ break; ++ default: ++ 
break; ++ } ++ + if (EXEC_BIOS_CMD_TABLE(DIGxEncoderControl, params)) + result = BP_RESULT_OK; + +@@ -1057,6 +1091,19 @@ static enum bp_result set_pixel_clock_v5( + * driver choose program it itself, i.e. here we program it + * to 888 by default. + */ ++ if (bp_params->signal_type == SIGNAL_TYPE_HDMI_TYPE_A) ++ switch (bp_params->color_depth) { ++ case TRANSMITTER_COLOR_DEPTH_30: ++ /* yes this is correct, the atom define is wrong */ ++ clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_32BPP; ++ break; ++ case TRANSMITTER_COLOR_DEPTH_36: ++ /* yes this is correct, the atom define is wrong */ ++ clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V5_MISC_HDMI_30BPP; ++ break; ++ default: ++ break; ++ } + + if (EXEC_BIOS_CMD_TABLE(SetPixelClock, clk)) + result = BP_RESULT_OK; +@@ -1135,6 +1182,20 @@ static enum bp_result set_pixel_clock_v6( + * driver choose program it itself, i.e. here we pass required + * target rate that includes deep color. + */ ++ if (bp_params->signal_type == SIGNAL_TYPE_HDMI_TYPE_A) ++ switch (bp_params->color_depth) { ++ case TRANSMITTER_COLOR_DEPTH_30: ++ clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_30BPP_V6; ++ break; ++ case TRANSMITTER_COLOR_DEPTH_36: ++ clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_36BPP_V6; ++ break; ++ case TRANSMITTER_COLOR_DEPTH_48: ++ clk.sPCLKInput.ucMiscInfo |= PIXEL_CLOCK_V6_MISC_HDMI_48BPP; ++ break; ++ default: ++ break; ++ } + + if (EXEC_BIOS_CMD_TABLE(SetPixelClock, clk)) + result = BP_RESULT_OK; +diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c +index 49ae5ff12da63..bae3a146b2cc2 100644 +--- a/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c ++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_clock_source.c +@@ -871,6 +871,20 @@ static bool dce110_program_pix_clk( + bp_pc_params.flags.SET_EXTERNAL_REF_DIV_SRC = + pll_settings->use_external_clk; + ++ switch (pix_clk_params->color_depth) { ++ case COLOR_DEPTH_101010: ++ 
bp_pc_params.color_depth = TRANSMITTER_COLOR_DEPTH_30; ++ break; ++ case COLOR_DEPTH_121212: ++ bp_pc_params.color_depth = TRANSMITTER_COLOR_DEPTH_36; ++ break; ++ case COLOR_DEPTH_161616: ++ bp_pc_params.color_depth = TRANSMITTER_COLOR_DEPTH_48; ++ break; ++ default: ++ break; ++ } ++ + if (clk_src->bios->funcs->set_pixel_clock( + clk_src->bios, &bp_pc_params) != BP_RESULT_OK) + return false; +diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c +index 5054bb567b748..99ad475fc1ff5 100644 +--- a/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c ++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_stream_encoder.c +@@ -564,6 +564,7 @@ static void dce110_stream_encoder_hdmi_set_stream_attribute( + cntl.enable_dp_audio = enable_audio; + cntl.pixel_clock = actual_pix_clk_khz; + cntl.lanes_number = LANE_COUNT_FOUR; ++ cntl.color_depth = crtc_timing->display_color_depth; + + if (enc110->base.bp->funcs->encoder_control( + enc110->base.bp, &cntl) != BP_RESULT_OK) +diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c +index 2a32b66959ba2..e2e79025825f8 100644 +--- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c ++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c +@@ -601,12 +601,12 @@ static void set_clamp( + clamp_max = 0x3FC0; + break; + case COLOR_DEPTH_101010: +- /* 10bit MSB aligned on 14 bit bus '11 1111 1111 1100' */ +- clamp_max = 0x3FFC; ++ /* 10bit MSB aligned on 14 bit bus '11 1111 1111 0000' */ ++ clamp_max = 0x3FF0; + break; + case COLOR_DEPTH_121212: +- /* 12bit MSB aligned on 14 bit bus '11 1111 1111 1111' */ +- clamp_max = 0x3FFF; ++ /* 12bit MSB aligned on 14 bit bus '11 1111 1111 1100' */ ++ clamp_max = 0x3FFC; + break; + default: + clamp_max = 0x3FC0; +diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c +index 
81db0179f7ea8..85dc2b16c9418 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_link_encoder.c +@@ -480,7 +480,6 @@ unsigned int dcn10_get_dig_frontend(struct link_encoder *enc) + break; + default: + // invalid source select DIG +- ASSERT(false); + result = ENGINE_ID_UNKNOWN; + } + +diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c +index 121643ddb719b..4ea53c543e082 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c +@@ -408,8 +408,8 @@ static struct _vcs_dpi_soc_bounding_box_st dcn2_0_nv14_soc = { + }, + }, + .num_states = 5, +- .sr_exit_time_us = 11.6, +- .sr_enter_plus_exit_time_us = 13.9, ++ .sr_exit_time_us = 8.6, ++ .sr_enter_plus_exit_time_us = 10.9, + .urgent_latency_us = 4.0, + .urgent_latency_pixel_data_only_us = 4.0, + .urgent_latency_pixel_mixed_with_vm_data_us = 4.0, +@@ -3248,7 +3248,7 @@ restore_dml_state: + bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context, + bool fast_validate) + { +- bool voltage_supported = false; ++ bool voltage_supported; + DC_FP_START(); + voltage_supported = dcn20_validate_bandwidth_fp(dc, context, fast_validate); + DC_FP_END(); +diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c +index c993854404124..4e2dcf259428f 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c +@@ -1173,8 +1173,8 @@ void dcn21_calculate_wm( + } + + +-bool dcn21_validate_bandwidth(struct dc *dc, struct dc_state *context, +- bool fast_validate) ++static noinline bool dcn21_validate_bandwidth_fp(struct dc *dc, ++ struct dc_state *context, bool fast_validate) + { + bool out = false; + +@@ -1227,6 +1227,22 @@ validate_out: + + return out; + } ++ ++/* ++ * Some of the 
functions further below use the FPU, so we need to wrap this ++ * with DC_FP_START()/DC_FP_END(). Use the same approach as for ++ * dcn20_validate_bandwidth in dcn20_resource.c. ++ */ ++bool dcn21_validate_bandwidth(struct dc *dc, struct dc_state *context, ++ bool fast_validate) ++{ ++ bool voltage_supported; ++ DC_FP_START(); ++ voltage_supported = dcn21_validate_bandwidth_fp(dc, context, fast_validate); ++ DC_FP_END(); ++ return voltage_supported; ++} ++ + static void dcn21_destroy_resource_pool(struct resource_pool **pool) + { + struct dcn21_resource_pool *dcn21_pool = TO_DCN21_RES_POOL(*pool); +diff --git a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c +index 204773ffc376f..97909d5aab344 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_hwseq.c +@@ -526,6 +526,8 @@ void dcn30_init_hw(struct dc *dc) + + fe = dc->links[i]->link_enc->funcs->get_dig_frontend( + dc->links[i]->link_enc); ++ if (fe == ENGINE_ID_UNKNOWN) ++ continue; + + for (j = 0; j < dc->res_pool->stream_enc_count; j++) { + if (fe == dc->res_pool->stream_enc[j]->id) { +diff --git a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c +index 1b971265418b6..0e0f494fbb5e1 100644 +--- a/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c ++++ b/drivers/gpu/drm/amd/display/dc/irq/dcn21/irq_service_dcn21.c +@@ -168,6 +168,11 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = { + .ack = NULL + }; + ++static const struct irq_source_info_funcs vupdate_no_lock_irq_info_funcs = { ++ .set = NULL, ++ .ack = NULL ++}; ++ + #undef BASE_INNER + #define BASE_INNER(seg) DMU_BASE__INST0_SEG ## seg + +@@ -230,6 +235,17 @@ static const struct irq_source_info_funcs vblank_irq_info_funcs = { + .funcs = &vblank_irq_info_funcs\ + } + ++/* vupdate_no_lock_int_entry maps to DC_IRQ_SOURCE_VUPDATEx, to match 
semantic ++ * of DCE's DC_IRQ_SOURCE_VUPDATEx. ++ */ ++#define vupdate_no_lock_int_entry(reg_num)\ ++ [DC_IRQ_SOURCE_VUPDATE1 + reg_num] = {\ ++ IRQ_REG_ENTRY(OTG, reg_num,\ ++ OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_INT_EN,\ ++ OTG_GLOBAL_SYNC_STATUS, VUPDATE_NO_LOCK_EVENT_CLEAR),\ ++ .funcs = &vupdate_no_lock_irq_info_funcs\ ++ } ++ + #define vblank_int_entry(reg_num)\ + [DC_IRQ_SOURCE_VBLANK1 + reg_num] = {\ + IRQ_REG_ENTRY(OTG, reg_num,\ +@@ -338,6 +354,12 @@ irq_source_info_dcn21[DAL_IRQ_SOURCES_NUMBER] = { + vupdate_int_entry(3), + vupdate_int_entry(4), + vupdate_int_entry(5), ++ vupdate_no_lock_int_entry(0), ++ vupdate_no_lock_int_entry(1), ++ vupdate_no_lock_int_entry(2), ++ vupdate_no_lock_int_entry(3), ++ vupdate_no_lock_int_entry(4), ++ vupdate_no_lock_int_entry(5), + vblank_int_entry(0), + vblank_int_entry(1), + vblank_int_entry(2), +diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c +index 529816637c731..9f383b9041d28 100644 +--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c ++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c +@@ -1070,7 +1070,7 @@ static ssize_t amdgpu_get_pp_dpm_sclk(struct device *dev, + static ssize_t amdgpu_read_mask(const char *buf, size_t count, uint32_t *mask) + { + int ret; +- long level; ++ unsigned long level; + char *sub_str = NULL; + char *tmp; + char buf_cpy[AMDGPU_MASK_BUF_MAX + 1]; +@@ -1086,8 +1086,8 @@ static ssize_t amdgpu_read_mask(const char *buf, size_t count, uint32_t *mask) + while (tmp[0]) { + sub_str = strsep(&tmp, delimiter); + if (strlen(sub_str)) { +- ret = kstrtol(sub_str, 0, &level); +- if (ret) ++ ret = kstrtoul(sub_str, 0, &level); ++ if (ret || level > 31) + return -EINVAL; + *mask |= 1 << level; + } else +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c +index 17bdad95978a1..9cf35dab25273 100644 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c +@@ -2302,7 +2302,8 @@ drm_dp_mst_port_add_connector(struct 
drm_dp_mst_branch *mstb, + } + + if (port->pdt != DP_PEER_DEVICE_NONE && +- drm_dp_mst_is_end_device(port->pdt, port->mcs)) { ++ drm_dp_mst_is_end_device(port->pdt, port->mcs) && ++ port->port_num >= DP_MST_LOGICAL_PORT_0) { + port->cached_edid = drm_get_edid(port->connector, + &port->aux.ddc); + drm_connector_set_tile_property(port->connector); +diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c +index 1543d9d109705..8033467db4bee 100644 +--- a/drivers/gpu/drm/drm_fb_helper.c ++++ b/drivers/gpu/drm/drm_fb_helper.c +@@ -923,11 +923,15 @@ static int setcmap_legacy(struct fb_cmap *cmap, struct fb_info *info) + drm_modeset_lock_all(fb_helper->dev); + drm_client_for_each_modeset(modeset, &fb_helper->client) { + crtc = modeset->crtc; +- if (!crtc->funcs->gamma_set || !crtc->gamma_size) +- return -EINVAL; ++ if (!crtc->funcs->gamma_set || !crtc->gamma_size) { ++ ret = -EINVAL; ++ goto out; ++ } + +- if (cmap->start + cmap->len > crtc->gamma_size) +- return -EINVAL; ++ if (cmap->start + cmap->len > crtc->gamma_size) { ++ ret = -EINVAL; ++ goto out; ++ } + + r = crtc->gamma_store; + g = r + crtc->gamma_size; +@@ -940,8 +944,9 @@ static int setcmap_legacy(struct fb_cmap *cmap, struct fb_info *info) + ret = crtc->funcs->gamma_set(crtc, r, g, b, + crtc->gamma_size, NULL); + if (ret) +- return ret; ++ goto out; + } ++out: + drm_modeset_unlock_all(fb_helper->dev); + + return ret; +diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c +index 501b4fe55a3db..511cde5c7fa6f 100644 +--- a/drivers/gpu/drm/drm_modes.c ++++ b/drivers/gpu/drm/drm_modes.c +@@ -762,7 +762,7 @@ int drm_mode_vrefresh(const struct drm_display_mode *mode) + if (mode->htotal == 0 || mode->vtotal == 0) + return 0; + +- num = mode->clock * 1000; ++ num = mode->clock; + den = mode->htotal * mode->vtotal; + + if (mode->flags & DRM_MODE_FLAG_INTERLACE) +@@ -772,7 +772,7 @@ int drm_mode_vrefresh(const struct drm_display_mode *mode) + if (mode->vscan > 1) + den *= 
mode->vscan; + +- return DIV_ROUND_CLOSEST(num, den); ++ return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(num, 1000), den); + } + EXPORT_SYMBOL(drm_mode_vrefresh); + +diff --git a/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c b/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c +index e281070611480..fc9a34ed58bd1 100644 +--- a/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c ++++ b/drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c +@@ -279,11 +279,8 @@ int oaktrail_hdmi_i2c_init(struct pci_dev *dev) + hdmi_dev = pci_get_drvdata(dev); + + i2c_dev = kzalloc(sizeof(struct hdmi_i2c_dev), GFP_KERNEL); +- if (i2c_dev == NULL) { +- DRM_ERROR("Can't allocate interface\n"); +- ret = -ENOMEM; +- goto exit; +- } ++ if (!i2c_dev) ++ return -ENOMEM; + + i2c_dev->adap = &oaktrail_hdmi_i2c_adapter; + i2c_dev->status = I2C_STAT_INIT; +@@ -300,16 +297,23 @@ int oaktrail_hdmi_i2c_init(struct pci_dev *dev) + oaktrail_hdmi_i2c_adapter.name, hdmi_dev); + if (ret) { + DRM_ERROR("Failed to request IRQ for I2C controller\n"); +- goto err; ++ goto free_dev; + } + + /* Adapter registration */ + ret = i2c_add_numbered_adapter(&oaktrail_hdmi_i2c_adapter); +- return ret; ++ if (ret) { ++ DRM_ERROR("Failed to add I2C adapter\n"); ++ goto free_irq; ++ } + +-err: ++ return 0; ++ ++free_irq: ++ free_irq(dev->irq, hdmi_dev); ++free_dev: + kfree(i2c_dev); +-exit: ++ + return ret; + } + +diff --git a/drivers/gpu/drm/gma500/psb_drv.c b/drivers/gpu/drm/gma500/psb_drv.c +index 34b4aae9a15e3..074f403d7ca07 100644 +--- a/drivers/gpu/drm/gma500/psb_drv.c ++++ b/drivers/gpu/drm/gma500/psb_drv.c +@@ -313,6 +313,8 @@ static int psb_driver_load(struct drm_device *dev, unsigned long flags) + if (ret) + goto out_err; + ++ ret = -ENOMEM; ++ + dev_priv->mmu = psb_mmu_driver_init(dev, 1, 0, 0); + if (!dev_priv->mmu) + goto out_err; +diff --git a/drivers/gpu/drm/i915/display/intel_hdmi.c b/drivers/gpu/drm/i915/display/intel_hdmi.c +index 3f2008d845c20..1d616da4f1657 100644 +--- a/drivers/gpu/drm/i915/display/intel_hdmi.c ++++ 
b/drivers/gpu/drm/i915/display/intel_hdmi.c +@@ -2216,7 +2216,11 @@ hdmi_port_clock_valid(struct intel_hdmi *hdmi, + has_hdmi_sink)) + return MODE_CLOCK_HIGH; + +- /* BXT DPLL can't generate 223-240 MHz */ ++ /* GLK DPLL can't generate 446-480 MHz */ ++ if (IS_GEMINILAKE(dev_priv) && clock > 446666 && clock < 480000) ++ return MODE_CLOCK_RANGE; ++ ++ /* BXT/GLK DPLL can't generate 223-240 MHz */ + if (IS_GEN9_LP(dev_priv) && clock > 223333 && clock < 240000) + return MODE_CLOCK_RANGE; + +diff --git a/drivers/gpu/drm/i915/gt/gen7_renderclear.c b/drivers/gpu/drm/i915/gt/gen7_renderclear.c +index e961ad6a31294..4adbc2bba97fb 100644 +--- a/drivers/gpu/drm/i915/gt/gen7_renderclear.c ++++ b/drivers/gpu/drm/i915/gt/gen7_renderclear.c +@@ -240,7 +240,7 @@ gen7_emit_state_base_address(struct batch_chunk *batch, + /* general */ + *cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY; + /* surface */ +- *cs++ = batch_addr(batch) | surface_state_base | BASE_ADDRESS_MODIFY; ++ *cs++ = (batch_addr(batch) + surface_state_base) | BASE_ADDRESS_MODIFY; + /* dynamic */ + *cs++ = batch_addr(batch) | BASE_ADDRESS_MODIFY; + /* indirect */ +@@ -353,19 +353,21 @@ static void gen7_emit_pipeline_flush(struct batch_chunk *batch) + + static void gen7_emit_pipeline_invalidate(struct batch_chunk *batch) + { +- u32 *cs = batch_alloc_items(batch, 0, 8); ++ u32 *cs = batch_alloc_items(batch, 0, 10); + + /* ivb: Stall before STATE_CACHE_INVALIDATE */ +- *cs++ = GFX_OP_PIPE_CONTROL(4); ++ *cs++ = GFX_OP_PIPE_CONTROL(5); + *cs++ = PIPE_CONTROL_STALL_AT_SCOREBOARD | + PIPE_CONTROL_CS_STALL; + *cs++ = 0; + *cs++ = 0; ++ *cs++ = 0; + +- *cs++ = GFX_OP_PIPE_CONTROL(4); ++ *cs++ = GFX_OP_PIPE_CONTROL(5); + *cs++ = PIPE_CONTROL_STATE_CACHE_INVALIDATE; + *cs++ = 0; + *cs++ = 0; ++ *cs++ = 0; + + batch_advance(batch, cs); + } +@@ -391,12 +393,14 @@ static void emit_batch(struct i915_vma * const vma, + desc_count); + + /* Reset inherited context registers */ ++ gen7_emit_pipeline_flush(&cmds); + 
gen7_emit_pipeline_invalidate(&cmds); + batch_add(&cmds, MI_LOAD_REGISTER_IMM(2)); + batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_0_GEN7)); + batch_add(&cmds, 0xffff0000); + batch_add(&cmds, i915_mmio_reg_offset(CACHE_MODE_1)); + batch_add(&cmds, 0xffff0000 | PIXEL_SUBSPAN_COLLECT_OPT_DISABLE); ++ gen7_emit_pipeline_invalidate(&cmds); + gen7_emit_pipeline_flush(&cmds); + + /* Switch to the media pipeline and our base address */ +diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c +index dc6df9e9a40d8..f6e7a88a56f1b 100644 +--- a/drivers/gpu/drm/lima/lima_sched.c ++++ b/drivers/gpu/drm/lima/lima_sched.c +@@ -200,7 +200,7 @@ static int lima_pm_busy(struct lima_device *ldev) + int ret; + + /* resume GPU if it has been suspended by runtime PM */ +- ret = pm_runtime_get_sync(ldev->dev); ++ ret = pm_runtime_resume_and_get(ldev->dev); + if (ret < 0) + return ret; + +diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c +index 28651bc579bc9..faff41183d173 100644 +--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c ++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c +@@ -266,7 +266,7 @@ static void mtk_ovl_layer_config(struct mtk_ddp_comp *comp, unsigned int idx, + } + + con = ovl_fmt_convert(ovl, fmt); +- if (state->base.fb->format->has_alpha) ++ if (state->base.fb && state->base.fb->format->has_alpha) + con |= OVL_CON_AEN | OVL_CON_ALPHA; + + if (pending->rotation & DRM_MODE_REFLECT_Y) { +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +index 491fee410dafe..8d78d95d29fcd 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +@@ -266,6 +266,16 @@ int a6xx_gmu_set_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state) + } + name = "GPU_SET"; + break; ++ case GMU_OOB_PERFCOUNTER_SET: ++ if (gmu->legacy) { ++ request = GMU_OOB_PERFCOUNTER_REQUEST; ++ ack = GMU_OOB_PERFCOUNTER_ACK; ++ } else { ++ request = 
GMU_OOB_PERFCOUNTER_REQUEST_NEW; ++ ack = GMU_OOB_PERFCOUNTER_ACK_NEW; ++ } ++ name = "PERFCOUNTER"; ++ break; + case GMU_OOB_BOOT_SLUMBER: + request = GMU_OOB_BOOT_SLUMBER_REQUEST; + ack = GMU_OOB_BOOT_SLUMBER_ACK; +@@ -303,9 +313,14 @@ int a6xx_gmu_set_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state) + void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state) + { + if (!gmu->legacy) { +- WARN_ON(state != GMU_OOB_GPU_SET); +- gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, +- 1 << GMU_OOB_GPU_SET_CLEAR_NEW); ++ if (state == GMU_OOB_GPU_SET) { ++ gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, ++ 1 << GMU_OOB_GPU_SET_CLEAR_NEW); ++ } else { ++ WARN_ON(state != GMU_OOB_PERFCOUNTER_SET); ++ gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, ++ 1 << GMU_OOB_PERFCOUNTER_CLEAR_NEW); ++ } + return; + } + +@@ -314,6 +329,10 @@ void a6xx_gmu_clear_oob(struct a6xx_gmu *gmu, enum a6xx_gmu_oob_state state) + gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, + 1 << GMU_OOB_GPU_SET_CLEAR); + break; ++ case GMU_OOB_PERFCOUNTER_SET: ++ gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, ++ 1 << GMU_OOB_PERFCOUNTER_CLEAR); ++ break; + case GMU_OOB_BOOT_SLUMBER: + gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, + 1 << GMU_OOB_BOOT_SLUMBER_CLEAR); +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +index c6d2bced8e5de..9fa278de2106a 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +@@ -156,6 +156,7 @@ enum a6xx_gmu_oob_state { + GMU_OOB_BOOT_SLUMBER = 0, + GMU_OOB_GPU_SET, + GMU_OOB_DCVS_SET, ++ GMU_OOB_PERFCOUNTER_SET, + }; + + /* These are the interrupt / ack bits for each OOB request that are set +@@ -190,6 +191,13 @@ enum a6xx_gmu_oob_state { + #define GMU_OOB_GPU_SET_ACK_NEW 31 + #define GMU_OOB_GPU_SET_CLEAR_NEW 31 + ++#define GMU_OOB_PERFCOUNTER_REQUEST 17 ++#define GMU_OOB_PERFCOUNTER_ACK 25 ++#define GMU_OOB_PERFCOUNTER_CLEAR 25 ++ ++#define GMU_OOB_PERFCOUNTER_REQUEST_NEW 
28 ++#define GMU_OOB_PERFCOUNTER_ACK_NEW 30 ++#define GMU_OOB_PERFCOUNTER_CLEAR_NEW 30 + + void a6xx_hfi_init(struct a6xx_gmu *gmu); + int a6xx_hfi_start(struct a6xx_gmu *gmu, int boot_state); +diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +index 420ca4a0eb5f7..83b50f6d6bb78 100644 +--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +@@ -1066,14 +1066,18 @@ static int a6xx_get_timestamp(struct msm_gpu *gpu, uint64_t *value) + { + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); + struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); ++ static DEFINE_MUTEX(perfcounter_oob); ++ ++ mutex_lock(&perfcounter_oob); + + /* Force the GPU power on so we can read this register */ +- a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET); ++ a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); + + *value = gpu_read64(gpu, REG_A6XX_RBBM_PERFCTR_CP_0_LO, + REG_A6XX_RBBM_PERFCTR_CP_0_HI); + +- a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET); ++ a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_PERFCOUNTER_SET); ++ mutex_unlock(&perfcounter_oob); + return 0; + } + +diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +index c39dad151bb6d..7d7668998501a 100644 +--- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c ++++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +@@ -1176,7 +1176,7 @@ static void mdp5_crtc_pp_done_irq(struct mdp_irq *irq, uint32_t irqstatus) + struct mdp5_crtc *mdp5_crtc = container_of(irq, struct mdp5_crtc, + pp_done); + +- complete(&mdp5_crtc->pp_completion); ++ complete_all(&mdp5_crtc->pp_completion); + } + + static void mdp5_crtc_wait_for_pp_done(struct drm_crtc *crtc) +diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c +index fe0279542a1c2..a2db14f852f11 100644 +--- a/drivers/gpu/drm/msm/dp/dp_display.c ++++ b/drivers/gpu/drm/msm/dp/dp_display.c +@@ -620,8 +620,8 @@ static int dp_hpd_unplug_handle(struct 
dp_display_private *dp, u32 data) + dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND); + + /* signal the disconnect event early to ensure proper teardown */ +- dp_display_handle_plugged_change(g_dp_display, false); + reinit_completion(&dp->audio_comp); ++ dp_display_handle_plugged_change(g_dp_display, false); + + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_PLUG_INT_MASK | + DP_DP_IRQ_HPD_INT_MASK, true); +@@ -840,6 +840,9 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data) + + /* wait only if audio was enabled */ + if (dp_display->audio_enabled) { ++ /* signal the disconnect event */ ++ reinit_completion(&dp->audio_comp); ++ dp_display_handle_plugged_change(dp_display, false); + if (!wait_for_completion_timeout(&dp->audio_comp, + HZ * 5)) + DRM_ERROR("audio comp timeout\n"); +diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c +index 1afb7c579dbbb..eca86bf448f74 100644 +--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c ++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c +@@ -139,7 +139,7 @@ const struct msm_dsi_phy_cfg dsi_phy_20nm_cfgs = { + .disable = dsi_20nm_phy_disable, + .init = msm_dsi_phy_init_common, + }, +- .io_start = { 0xfd998300, 0xfd9a0300 }, ++ .io_start = { 0xfd998500, 0xfd9a0500 }, + .num_dsi_phy = 2, + }; + +diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c +index d556c353e5aea..3d0adfa6736a5 100644 +--- a/drivers/gpu/drm/msm/msm_drv.c ++++ b/drivers/gpu/drm/msm/msm_drv.c +@@ -775,9 +775,10 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev, + struct drm_file *file, struct drm_gem_object *obj, + uint64_t *iova) + { ++ struct msm_drm_private *priv = dev->dev_private; + struct msm_file_private *ctx = file->driver_priv; + +- if (!ctx->aspace) ++ if (!priv->gpu) + return -EINVAL; + + /* +diff --git a/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h b/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h +index 
f5f59261ea819..d1beaad0c82b6 100644 +--- a/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h ++++ b/drivers/gpu/drm/nouveau/include/nvkm/subdev/bios/conn.h +@@ -14,6 +14,7 @@ enum dcb_connector_type { + DCB_CONNECTOR_LVDS_SPWG = 0x41, + DCB_CONNECTOR_DP = 0x46, + DCB_CONNECTOR_eDP = 0x47, ++ DCB_CONNECTOR_mDP = 0x48, + DCB_CONNECTOR_HDMI_0 = 0x60, + DCB_CONNECTOR_HDMI_1 = 0x61, + DCB_CONNECTOR_HDMI_C = 0x63, +diff --git a/drivers/gpu/drm/nouveau/nouveau_chan.c b/drivers/gpu/drm/nouveau/nouveau_chan.c +index 8f099601d2f2d..9b6f2c1414d72 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_chan.c ++++ b/drivers/gpu/drm/nouveau/nouveau_chan.c +@@ -533,6 +533,7 @@ nouveau_channel_new(struct nouveau_drm *drm, struct nvif_device *device, + if (ret) { + NV_PRINTK(err, cli, "channel failed to initialise, %d\n", ret); + nouveau_channel_del(pchan); ++ goto done; + } + + ret = nouveau_svmm_join((*pchan)->vmm->svmm, (*pchan)->inst); +diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c +index 8b4b3688c7ae3..4c992fd5bd68a 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_connector.c ++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c +@@ -1210,6 +1210,7 @@ drm_conntype_from_dcb(enum dcb_connector_type dcb) + case DCB_CONNECTOR_DMS59_DP0: + case DCB_CONNECTOR_DMS59_DP1: + case DCB_CONNECTOR_DP : ++ case DCB_CONNECTOR_mDP : + case DCB_CONNECTOR_USB_C : return DRM_MODE_CONNECTOR_DisplayPort; + case DCB_CONNECTOR_eDP : return DRM_MODE_CONNECTOR_eDP; + case DCB_CONNECTOR_HDMI_0 : +diff --git a/drivers/gpu/drm/panel/panel-elida-kd35t133.c b/drivers/gpu/drm/panel/panel-elida-kd35t133.c +index bc36aa3c11234..fe5ac3ef90185 100644 +--- a/drivers/gpu/drm/panel/panel-elida-kd35t133.c ++++ b/drivers/gpu/drm/panel/panel-elida-kd35t133.c +@@ -265,7 +265,8 @@ static int kd35t133_probe(struct mipi_dsi_device *dsi) + dsi->lanes = 1; + dsi->format = MIPI_DSI_FMT_RGB888; + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | +- 
MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_EOT_PACKET; ++ MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_EOT_PACKET | ++ MIPI_DSI_CLOCK_NON_CONTINUOUS; + + drm_panel_init(&ctx->panel, &dsi->dev, &kd35t133_funcs, + DRM_MODE_CONNECTOR_DSI); +diff --git a/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c b/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c +index 0c5f22e95c2db..624d17b96a693 100644 +--- a/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c ++++ b/drivers/gpu/drm/panel/panel-mantix-mlaf057we51.c +@@ -22,6 +22,7 @@ + /* Manufacturer specific Commands send via DSI */ + #define MANTIX_CMD_OTP_STOP_RELOAD_MIPI 0x41 + #define MANTIX_CMD_INT_CANCEL 0x4C ++#define MANTIX_CMD_SPI_FINISH 0x90 + + struct mantix { + struct device *dev; +@@ -66,6 +67,10 @@ static int mantix_init_sequence(struct mantix *ctx) + dsi_generic_write_seq(dsi, 0x80, 0x64, 0x00, 0x64, 0x00, 0x00); + msleep(20); + ++ dsi_generic_write_seq(dsi, MANTIX_CMD_SPI_FINISH, 0xA5); ++ dsi_generic_write_seq(dsi, MANTIX_CMD_OTP_STOP_RELOAD_MIPI, 0x00, 0x2F); ++ msleep(20); ++ + dev_dbg(dev, "Panel init sequence done\n"); + return 0; + } +diff --git a/drivers/gpu/drm/rcar-du/rcar_cmm.c b/drivers/gpu/drm/rcar-du/rcar_cmm.c +index c578095b09a53..382d53f8a22e8 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_cmm.c ++++ b/drivers/gpu/drm/rcar-du/rcar_cmm.c +@@ -122,7 +122,7 @@ int rcar_cmm_enable(struct platform_device *pdev) + { + int ret; + +- ret = pm_runtime_get_sync(&pdev->dev); ++ ret = pm_runtime_resume_and_get(&pdev->dev); + if (ret < 0) + return ret; + +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c +index fe86a3e677571..1b9738e44909d 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c +@@ -727,13 +727,10 @@ static void rcar_du_crtc_atomic_enable(struct drm_crtc *crtc, + */ + if (rcdu->info->lvds_clk_mask & BIT(rcrtc->index) && + rstate->outputs == BIT(RCAR_DU_OUTPUT_DPAD0)) { +- struct rcar_du_encoder *encoder = +- 
rcdu->encoders[RCAR_DU_OUTPUT_LVDS0 + rcrtc->index]; ++ struct drm_bridge *bridge = rcdu->lvds[rcrtc->index]; + const struct drm_display_mode *mode = + &crtc->state->adjusted_mode; +- struct drm_bridge *bridge; + +- bridge = drm_bridge_chain_get_first_bridge(&encoder->base); + rcar_lvds_clk_enable(bridge, mode->clock * 1000); + } + +@@ -759,15 +756,12 @@ static void rcar_du_crtc_atomic_disable(struct drm_crtc *crtc, + + if (rcdu->info->lvds_clk_mask & BIT(rcrtc->index) && + rstate->outputs == BIT(RCAR_DU_OUTPUT_DPAD0)) { +- struct rcar_du_encoder *encoder = +- rcdu->encoders[RCAR_DU_OUTPUT_LVDS0 + rcrtc->index]; +- struct drm_bridge *bridge; ++ struct drm_bridge *bridge = rcdu->lvds[rcrtc->index]; + + /* + * Disable the LVDS clock output, see + * rcar_du_crtc_atomic_enable(). + */ +- bridge = drm_bridge_chain_get_first_bridge(&encoder->base); + rcar_lvds_clk_disable(bridge); + } + +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.h b/drivers/gpu/drm/rcar-du/rcar_du_drv.h +index 61504c54e2ecf..3597a179bfb78 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.h ++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.h +@@ -20,10 +20,10 @@ + + struct clk; + struct device; ++struct drm_bridge; + struct drm_device; + struct drm_property; + struct rcar_du_device; +-struct rcar_du_encoder; + + #define RCAR_DU_FEATURE_CRTC_IRQ_CLOCK BIT(0) /* Per-CRTC IRQ and clock */ + #define RCAR_DU_FEATURE_VSP1_SOURCE BIT(1) /* Has inputs from VSP1 */ +@@ -71,6 +71,7 @@ struct rcar_du_device_info { + #define RCAR_DU_MAX_CRTCS 4 + #define RCAR_DU_MAX_GROUPS DIV_ROUND_UP(RCAR_DU_MAX_CRTCS, 2) + #define RCAR_DU_MAX_VSPS 4 ++#define RCAR_DU_MAX_LVDS 2 + + struct rcar_du_device { + struct device *dev; +@@ -83,11 +84,10 @@ struct rcar_du_device { + struct rcar_du_crtc crtcs[RCAR_DU_MAX_CRTCS]; + unsigned int num_crtcs; + +- struct rcar_du_encoder *encoders[RCAR_DU_OUTPUT_MAX]; +- + struct rcar_du_group groups[RCAR_DU_MAX_GROUPS]; + struct platform_device *cmms[RCAR_DU_MAX_CRTCS]; + struct rcar_du_vsp 
vsps[RCAR_DU_MAX_VSPS]; ++ struct drm_bridge *lvds[RCAR_DU_MAX_LVDS]; + + struct { + struct drm_property *colorkey; +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c +index b0335da0c1614..50fc14534fa4d 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_encoder.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_encoder.c +@@ -57,7 +57,6 @@ int rcar_du_encoder_init(struct rcar_du_device *rcdu, + if (renc == NULL) + return -ENOMEM; + +- rcdu->encoders[output] = renc; + renc->output = output; + encoder = rcar_encoder_to_drm_encoder(renc); + +@@ -91,6 +90,10 @@ int rcar_du_encoder_init(struct rcar_du_device *rcdu, + ret = -EPROBE_DEFER; + goto done; + } ++ ++ if (output == RCAR_DU_OUTPUT_LVDS0 || ++ output == RCAR_DU_OUTPUT_LVDS1) ++ rcdu->lvds[output - RCAR_DU_OUTPUT_LVDS0] = bridge; + } + + /* +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_kms.c b/drivers/gpu/drm/rcar-du/rcar_du_kms.c +index 72dda446355fe..7015e22872bbe 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_kms.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_kms.c +@@ -700,10 +700,10 @@ static int rcar_du_cmm_init(struct rcar_du_device *rcdu) + int ret; + + cmm = of_parse_phandle(np, "renesas,cmms", i); +- if (IS_ERR(cmm)) { ++ if (!cmm) { + dev_err(rcdu->dev, + "Failed to parse 'renesas,cmms' property\n"); +- return PTR_ERR(cmm); ++ return -EINVAL; + } + + if (!of_device_is_available(cmm)) { +@@ -713,10 +713,10 @@ static int rcar_du_cmm_init(struct rcar_du_device *rcdu) + } + + pdev = of_find_device_by_node(cmm); +- if (IS_ERR(pdev)) { ++ if (!pdev) { + dev_err(rcdu->dev, "No device found for CMM%u\n", i); + of_node_put(cmm); +- return PTR_ERR(pdev); ++ return -EINVAL; + } + + of_node_put(cmm); +diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h +index 4a2099cb582e1..857d97cdc67c6 100644 +--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.h ++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.h +@@ -17,9 +17,20 @@ + + #define 
NUM_YUV2YUV_COEFFICIENTS 12 + ++/* AFBC supports a number of configurable modes. Relevant to us is block size ++ * (16x16 or 32x8), storage modifiers (SPARSE, SPLIT), and the YUV-like ++ * colourspace transform (YTR). 16x16 SPARSE mode is always used. SPLIT mode ++ * could be enabled via the hreg_block_split register, but is not currently ++ * handled. The colourspace transform is implicitly always assumed by the ++ * decoder, so consumers must use this transform as well. ++ * ++ * Failure to match modifiers will cause errors displaying AFBC buffers ++ * produced by conformant AFBC producers, including Mesa. ++ */ + #define ROCKCHIP_AFBC_MOD \ + DRM_FORMAT_MOD_ARM_AFBC( \ + AFBC_FORMAT_MOD_BLOCK_SIZE_16x16 | AFBC_FORMAT_MOD_SPARSE \ ++ | AFBC_FORMAT_MOD_YTR \ + ) + + enum vop_data_format { +diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c +index 9a0d77a680180..7111e0f527b0b 100644 +--- a/drivers/gpu/drm/scheduler/sched_main.c ++++ b/drivers/gpu/drm/scheduler/sched_main.c +@@ -890,6 +890,9 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched) + if (sched->thread) + kthread_stop(sched->thread); + ++ /* Confirm no work left behind accessing device structures */ ++ cancel_delayed_work_sync(&sched->work_tdr); ++ + sched->ready = false; + } + EXPORT_SYMBOL(drm_sched_fini); +diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.c b/drivers/gpu/drm/sun4i/sun4i_tcon.c +index 1e643bc7e786a..9f06dec0fc61d 100644 +--- a/drivers/gpu/drm/sun4i/sun4i_tcon.c ++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.c +@@ -569,30 +569,13 @@ static void sun4i_tcon0_mode_set_rgb(struct sun4i_tcon *tcon, + if (info->bus_flags & DRM_BUS_FLAG_DE_LOW) + val |= SUN4I_TCON0_IO_POL_DE_NEGATIVE; + +- /* +- * On A20 and similar SoCs, the only way to achieve Positive Edge +- * (Rising Edge), is setting dclk clock phase to 2/3(240°). +- * By default TCON works in Negative Edge(Falling Edge), +- * this is why phase is set to 0 in that case. 
+- * Unfortunately there's no way to logically invert dclk through +- * IO_POL register. +- * The only acceptable way to work, triple checked with scope, +- * is using clock phase set to 0° for Negative Edge and set to 240° +- * for Positive Edge. +- * On A33 and similar SoCs there would be a 90° phase option, +- * but it divides also dclk by 2. +- * Following code is a way to avoid quirks all around TCON +- * and DOTCLOCK drivers. +- */ +- if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE) +- clk_set_phase(tcon->dclk, 240); +- + if (info->bus_flags & DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE) +- clk_set_phase(tcon->dclk, 0); ++ val |= SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE; + + regmap_update_bits(tcon->regs, SUN4I_TCON0_IO_POL_REG, + SUN4I_TCON0_IO_POL_HSYNC_POSITIVE | + SUN4I_TCON0_IO_POL_VSYNC_POSITIVE | ++ SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE | + SUN4I_TCON0_IO_POL_DE_NEGATIVE, + val); + +diff --git a/drivers/gpu/drm/sun4i/sun4i_tcon.h b/drivers/gpu/drm/sun4i/sun4i_tcon.h +index ee555318e3c2f..e624f6977eb84 100644 +--- a/drivers/gpu/drm/sun4i/sun4i_tcon.h ++++ b/drivers/gpu/drm/sun4i/sun4i_tcon.h +@@ -113,6 +113,7 @@ + #define SUN4I_TCON0_IO_POL_REG 0x88 + #define SUN4I_TCON0_IO_POL_DCLK_PHASE(phase) ((phase & 3) << 28) + #define SUN4I_TCON0_IO_POL_DE_NEGATIVE BIT(27) ++#define SUN4I_TCON0_IO_POL_DCLK_DRIVE_NEGEDGE BIT(26) + #define SUN4I_TCON0_IO_POL_HSYNC_POSITIVE BIT(25) + #define SUN4I_TCON0_IO_POL_VSYNC_POSITIVE BIT(24) + +diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c +index 424ad60b4f388..b2c8c68b7e261 100644 +--- a/drivers/gpu/drm/tegra/dc.c ++++ b/drivers/gpu/drm/tegra/dc.c +@@ -2184,7 +2184,7 @@ static int tegra_dc_runtime_resume(struct host1x_client *client) + struct device *dev = client->dev; + int err; + +- err = pm_runtime_get_sync(dev); ++ err = pm_runtime_resume_and_get(dev); + if (err < 0) { + dev_err(dev, "failed to get runtime PM: %d\n", err); + return err; +diff --git a/drivers/gpu/drm/tegra/dsi.c 
b/drivers/gpu/drm/tegra/dsi.c +index 5691ef1b0e586..f46d377f0c304 100644 +--- a/drivers/gpu/drm/tegra/dsi.c ++++ b/drivers/gpu/drm/tegra/dsi.c +@@ -1111,7 +1111,7 @@ static int tegra_dsi_runtime_resume(struct host1x_client *client) + struct device *dev = client->dev; + int err; + +- err = pm_runtime_get_sync(dev); ++ err = pm_runtime_resume_and_get(dev); + if (err < 0) { + dev_err(dev, "failed to get runtime PM: %d\n", err); + return err; +diff --git a/drivers/gpu/drm/tegra/hdmi.c b/drivers/gpu/drm/tegra/hdmi.c +index d09a24931c87c..e5d2a40260288 100644 +--- a/drivers/gpu/drm/tegra/hdmi.c ++++ b/drivers/gpu/drm/tegra/hdmi.c +@@ -1510,7 +1510,7 @@ static int tegra_hdmi_runtime_resume(struct host1x_client *client) + struct device *dev = client->dev; + int err; + +- err = pm_runtime_get_sync(dev); ++ err = pm_runtime_resume_and_get(dev); + if (err < 0) { + dev_err(dev, "failed to get runtime PM: %d\n", err); + return err; +diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c +index 22a03f7ffdc12..5ce771cba1335 100644 +--- a/drivers/gpu/drm/tegra/hub.c ++++ b/drivers/gpu/drm/tegra/hub.c +@@ -789,7 +789,7 @@ static int tegra_display_hub_runtime_resume(struct host1x_client *client) + unsigned int i; + int err; + +- err = pm_runtime_get_sync(dev); ++ err = pm_runtime_resume_and_get(dev); + if (err < 0) { + dev_err(dev, "failed to get runtime PM: %d\n", err); + return err; +diff --git a/drivers/gpu/drm/tegra/sor.c b/drivers/gpu/drm/tegra/sor.c +index cc2aa2308a515..f02a035dda453 100644 +--- a/drivers/gpu/drm/tegra/sor.c ++++ b/drivers/gpu/drm/tegra/sor.c +@@ -3218,7 +3218,7 @@ static int tegra_sor_runtime_resume(struct host1x_client *client) + struct device *dev = client->dev; + int err; + +- err = pm_runtime_get_sync(dev); ++ err = pm_runtime_resume_and_get(dev); + if (err < 0) { + dev_err(dev, "failed to get runtime PM: %d\n", err); + return err; +diff --git a/drivers/gpu/drm/tegra/vic.c b/drivers/gpu/drm/tegra/vic.c +index ade56b860cf9d..b77f726303d89 
100644 +--- a/drivers/gpu/drm/tegra/vic.c ++++ b/drivers/gpu/drm/tegra/vic.c +@@ -314,7 +314,7 @@ static int vic_open_channel(struct tegra_drm_client *client, + struct vic *vic = to_vic(client); + int err; + +- err = pm_runtime_get_sync(vic->dev); ++ err = pm_runtime_resume_and_get(vic->dev); + if (err < 0) + return err; + +diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c +index eaba98e15de46..af5f01eff872c 100644 +--- a/drivers/gpu/drm/vc4/vc4_hdmi.c ++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c +@@ -119,24 +119,57 @@ static void vc5_hdmi_reset(struct vc4_hdmi *vc4_hdmi) + HDMI_READ(HDMI_CLOCK_STOP) | VC4_DVP_HT_CLOCK_STOP_PIXEL); + } + ++#ifdef CONFIG_DRM_VC4_HDMI_CEC ++static void vc4_hdmi_cec_update_clk_div(struct vc4_hdmi *vc4_hdmi) ++{ ++ u16 clk_cnt; ++ u32 value; ++ ++ value = HDMI_READ(HDMI_CEC_CNTRL_1); ++ value &= ~VC4_HDMI_CEC_DIV_CLK_CNT_MASK; ++ ++ /* ++ * Set the clock divider: the hsm_clock rate and this divider ++ * setting will give a 40 kHz CEC clock. 
++ */ ++ clk_cnt = clk_get_rate(vc4_hdmi->hsm_clock) / CEC_CLOCK_FREQ; ++ value |= clk_cnt << VC4_HDMI_CEC_DIV_CLK_CNT_SHIFT; ++ HDMI_WRITE(HDMI_CEC_CNTRL_1, value); ++} ++#else ++static void vc4_hdmi_cec_update_clk_div(struct vc4_hdmi *vc4_hdmi) {} ++#endif ++ + static enum drm_connector_status + vc4_hdmi_connector_detect(struct drm_connector *connector, bool force) + { + struct vc4_hdmi *vc4_hdmi = connector_to_vc4_hdmi(connector); ++ bool connected = false; + + if (vc4_hdmi->hpd_gpio) { + if (gpio_get_value_cansleep(vc4_hdmi->hpd_gpio) ^ + vc4_hdmi->hpd_active_low) +- return connector_status_connected; +- cec_phys_addr_invalidate(vc4_hdmi->cec_adap); +- return connector_status_disconnected; ++ connected = true; ++ } else if (drm_probe_ddc(vc4_hdmi->ddc)) { ++ connected = true; ++ } else if (HDMI_READ(HDMI_HOTPLUG) & VC4_HDMI_HOTPLUG_CONNECTED) { ++ connected = true; + } + +- if (drm_probe_ddc(vc4_hdmi->ddc)) +- return connector_status_connected; ++ if (connected) { ++ if (connector->status != connector_status_connected) { ++ struct edid *edid = drm_get_edid(connector, vc4_hdmi->ddc); ++ ++ if (edid) { ++ cec_s_phys_addr_from_edid(vc4_hdmi->cec_adap, edid); ++ vc4_hdmi->encoder.hdmi_monitor = drm_detect_hdmi_monitor(edid); ++ kfree(edid); ++ } ++ } + +- if (HDMI_READ(HDMI_HOTPLUG) & VC4_HDMI_HOTPLUG_CONNECTED) + return connector_status_connected; ++ } ++ + cec_phys_addr_invalidate(vc4_hdmi->cec_adap); + return connector_status_disconnected; + } +@@ -640,6 +673,8 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder) + return; + } + ++ vc4_hdmi_cec_update_clk_div(vc4_hdmi); ++ + /* + * FIXME: When the pixel freq is 594MHz (4k60), this needs to be setup + * at 300MHz. 
+@@ -661,9 +696,6 @@ static void vc4_hdmi_encoder_pre_crtc_configure(struct drm_encoder *encoder) + return; + } + +- if (vc4_hdmi->variant->reset) +- vc4_hdmi->variant->reset(vc4_hdmi); +- + if (vc4_hdmi->variant->phy_init) + vc4_hdmi->variant->phy_init(vc4_hdmi, mode); + +@@ -791,6 +823,9 @@ static int vc4_hdmi_encoder_atomic_check(struct drm_encoder *encoder, + pixel_rate = mode->clock * 1000; + } + ++ if (mode->flags & DRM_MODE_FLAG_DBLCLK) ++ pixel_rate = pixel_rate * 2; ++ + if (pixel_rate > vc4_hdmi->variant->max_pixel_clock) + return -EINVAL; + +@@ -1313,13 +1348,20 @@ static irqreturn_t vc4_cec_irq_handler_thread(int irq, void *priv) + + static void vc4_cec_read_msg(struct vc4_hdmi *vc4_hdmi, u32 cntrl1) + { ++ struct drm_device *dev = vc4_hdmi->connector.dev; + struct cec_msg *msg = &vc4_hdmi->cec_rx_msg; + unsigned int i; + + msg->len = 1 + ((cntrl1 & VC4_HDMI_CEC_REC_WRD_CNT_MASK) >> + VC4_HDMI_CEC_REC_WRD_CNT_SHIFT); ++ ++ if (msg->len > 16) { ++ drm_err(dev, "Attempting to read too much data (%d)\n", msg->len); ++ return; ++ } ++ + for (i = 0; i < msg->len; i += 4) { +- u32 val = HDMI_READ(HDMI_CEC_RX_DATA_1 + i); ++ u32 val = HDMI_READ(HDMI_CEC_RX_DATA_1 + (i >> 2)); + + msg->msg[i] = val & 0xff; + msg->msg[i + 1] = (val >> 8) & 0xff; +@@ -1412,11 +1454,17 @@ static int vc4_hdmi_cec_adap_transmit(struct cec_adapter *adap, u8 attempts, + u32 signal_free_time, struct cec_msg *msg) + { + struct vc4_hdmi *vc4_hdmi = cec_get_drvdata(adap); ++ struct drm_device *dev = vc4_hdmi->connector.dev; + u32 val; + unsigned int i; + ++ if (msg->len > 16) { ++ drm_err(dev, "Attempting to transmit too much data (%d)\n", msg->len); ++ return -ENOMEM; ++ } ++ + for (i = 0; i < msg->len; i += 4) +- HDMI_WRITE(HDMI_CEC_TX_DATA_1 + i, ++ HDMI_WRITE(HDMI_CEC_TX_DATA_1 + (i >> 2), + (msg->msg[i]) | + (msg->msg[i + 1] << 8) | + (msg->msg[i + 2] << 16) | +@@ -1461,16 +1509,14 @@ static int vc4_hdmi_cec_init(struct vc4_hdmi *vc4_hdmi) + cec_s_conn_info(vc4_hdmi->cec_adap, 
&conn_info); + + HDMI_WRITE(HDMI_CEC_CPU_MASK_SET, 0xffffffff); ++ + value = HDMI_READ(HDMI_CEC_CNTRL_1); +- value &= ~VC4_HDMI_CEC_DIV_CLK_CNT_MASK; +- /* +- * Set the logical address to Unregistered and set the clock +- * divider: the hsm_clock rate and this divider setting will +- * give a 40 kHz CEC clock. +- */ +- value |= VC4_HDMI_CEC_ADDR_MASK | +- (4091 << VC4_HDMI_CEC_DIV_CLK_CNT_SHIFT); ++ /* Set the logical address to Unregistered */ ++ value |= VC4_HDMI_CEC_ADDR_MASK; + HDMI_WRITE(HDMI_CEC_CNTRL_1, value); ++ ++ vc4_hdmi_cec_update_clk_div(vc4_hdmi); ++ + ret = devm_request_threaded_irq(&pdev->dev, platform_get_irq(pdev, 0), + vc4_cec_irq_handler, + vc4_cec_irq_handler_thread, 0, +@@ -1741,6 +1787,9 @@ static int vc4_hdmi_bind(struct device *dev, struct device *master, void *data) + vc4_hdmi->disable_wifi_frequencies = + of_property_read_bool(dev->of_node, "wifi-2.4ghz-coexistence"); + ++ if (vc4_hdmi->variant->reset) ++ vc4_hdmi->variant->reset(vc4_hdmi); ++ + pm_runtime_enable(dev); + + drm_simple_encoder_init(drm, encoder, DRM_MODE_ENCODER_TMDS); +diff --git a/drivers/gpu/drm/vc4/vc4_hdmi_regs.h b/drivers/gpu/drm/vc4/vc4_hdmi_regs.h +index 7c6b4818f2455..6c0dfbbe1a7ef 100644 +--- a/drivers/gpu/drm/vc4/vc4_hdmi_regs.h ++++ b/drivers/gpu/drm/vc4/vc4_hdmi_regs.h +@@ -29,6 +29,7 @@ enum vc4_hdmi_field { + HDMI_CEC_CPU_MASK_SET, + HDMI_CEC_CPU_MASK_STATUS, + HDMI_CEC_CPU_STATUS, ++ HDMI_CEC_CPU_SET, + + /* + * Transmit data, first byte is low byte of the 32-bit reg. 
+@@ -196,9 +197,10 @@ static const struct vc4_hdmi_register vc4_hdmi_fields[] = { + VC4_HDMI_REG(HDMI_TX_PHY_RESET_CTL, 0x02c0), + VC4_HDMI_REG(HDMI_TX_PHY_CTL_0, 0x02c4), + VC4_HDMI_REG(HDMI_CEC_CPU_STATUS, 0x0340), ++ VC4_HDMI_REG(HDMI_CEC_CPU_SET, 0x0344), + VC4_HDMI_REG(HDMI_CEC_CPU_CLEAR, 0x0348), + VC4_HDMI_REG(HDMI_CEC_CPU_MASK_STATUS, 0x034c), +- VC4_HDMI_REG(HDMI_CEC_CPU_MASK_SET, 0x034c), ++ VC4_HDMI_REG(HDMI_CEC_CPU_MASK_SET, 0x0350), + VC4_HDMI_REG(HDMI_CEC_CPU_MASK_CLEAR, 0x0354), + VC4_HDMI_REG(HDMI_RAM_PACKET_START, 0x0400), + }; +diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c b/drivers/gpu/drm/virtio/virtgpu_gem.c +index c30c75ee83fce..8502400b2f9c9 100644 +--- a/drivers/gpu/drm/virtio/virtgpu_gem.c ++++ b/drivers/gpu/drm/virtio/virtgpu_gem.c +@@ -39,9 +39,6 @@ static int virtio_gpu_gem_create(struct drm_file *file, + int ret; + u32 handle; + +- if (vgdev->has_virgl_3d) +- virtio_gpu_create_context(dev, file); +- + ret = virtio_gpu_object_create(vgdev, params, &obj, NULL); + if (ret < 0) + return ret; +@@ -119,6 +116,11 @@ int virtio_gpu_gem_object_open(struct drm_gem_object *obj, + if (!vgdev->has_virgl_3d) + goto out_notify; + ++ /* the context might still be missing when the first ioctl is ++ * DRM_IOCTL_MODE_CREATE_DUMB or DRM_IOCTL_PRIME_FD_TO_HANDLE ++ */ ++ virtio_gpu_create_context(obj->dev, file); ++ + objs = virtio_gpu_array_alloc(1); + if (!objs) + return -ENOMEM; +diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c +index 8a8b2b982f83c..097cb1ee31268 100644 +--- a/drivers/hid/hid-core.c ++++ b/drivers/hid/hid-core.c +@@ -1307,6 +1307,9 @@ EXPORT_SYMBOL_GPL(hid_open_report); + + static s32 snto32(__u32 value, unsigned n) + { ++ if (!value || !n) ++ return 0; ++ + switch (n) { + case 8: return ((__s8)value); + case 16: return ((__s16)value); +diff --git a/drivers/hid/hid-logitech-dj.c b/drivers/hid/hid-logitech-dj.c +index 45e7e0bdd382b..fcdc922bc9733 100644 +--- a/drivers/hid/hid-logitech-dj.c ++++ 
b/drivers/hid/hid-logitech-dj.c +@@ -980,6 +980,7 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev, + case 0x07: + device_type = "eQUAD step 4 Gaming"; + logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem); ++ workitem.reports_supported |= STD_KEYBOARD; + break; + case 0x08: + device_type = "eQUAD step 4 for gamepads"; +diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c +index 1bd0eb71559ca..44d715c12f6ab 100644 +--- a/drivers/hid/wacom_wac.c ++++ b/drivers/hid/wacom_wac.c +@@ -2600,7 +2600,12 @@ static void wacom_wac_finger_event(struct hid_device *hdev, + wacom_wac->is_invalid_bt_frame = !value; + return; + case HID_DG_CONTACTMAX: +- features->touch_max = value; ++ if (!features->touch_max) { ++ features->touch_max = value; ++ } else { ++ hid_warn(hdev, "%s: ignoring attempt to overwrite non-zero touch_max " ++ "%d -> %d\n", __func__, features->touch_max, value); ++ } + return; + } + +diff --git a/drivers/hsi/controllers/omap_ssi_core.c b/drivers/hsi/controllers/omap_ssi_core.c +index 7596dc1646484..44a3f5660c109 100644 +--- a/drivers/hsi/controllers/omap_ssi_core.c ++++ b/drivers/hsi/controllers/omap_ssi_core.c +@@ -424,7 +424,7 @@ static int ssi_hw_init(struct hsi_controller *ssi) + struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi); + int err; + +- err = pm_runtime_get_sync(ssi->device.parent); ++ err = pm_runtime_resume_and_get(ssi->device.parent); + if (err < 0) { + dev_err(&ssi->device, "runtime PM failed %d\n", err); + return err; +diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c +index 1d44bb635bb84..6be9f56cb6270 100644 +--- a/drivers/hv/channel_mgmt.c ++++ b/drivers/hv/channel_mgmt.c +@@ -1102,8 +1102,7 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr) + vmbus_device_unregister(channel->device_obj); + put_device(dev); + } +- } +- if (channel->primary_channel != NULL) { ++ } else if (channel->primary_channel != NULL) { + /* + * Sub-channel is being 
rescinded. Following is the channel + * close sequence when initiated from the driveri (refer to +diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c +index 95b54b0a36252..74d3e2fe43d46 100644 +--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c ++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c +@@ -131,7 +131,8 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata) + writel_relaxed(0x0, drvdata->base + TRCAUXCTLR); + writel_relaxed(config->eventctrl0, drvdata->base + TRCEVENTCTL0R); + writel_relaxed(config->eventctrl1, drvdata->base + TRCEVENTCTL1R); +- writel_relaxed(config->stall_ctrl, drvdata->base + TRCSTALLCTLR); ++ if (drvdata->stallctl) ++ writel_relaxed(config->stall_ctrl, drvdata->base + TRCSTALLCTLR); + writel_relaxed(config->ts_ctrl, drvdata->base + TRCTSCTLR); + writel_relaxed(config->syncfreq, drvdata->base + TRCSYNCPR); + writel_relaxed(config->ccctlr, drvdata->base + TRCCCCTLR); +@@ -1187,7 +1188,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata) + state->trcauxctlr = readl(drvdata->base + TRCAUXCTLR); + state->trceventctl0r = readl(drvdata->base + TRCEVENTCTL0R); + state->trceventctl1r = readl(drvdata->base + TRCEVENTCTL1R); +- state->trcstallctlr = readl(drvdata->base + TRCSTALLCTLR); ++ if (drvdata->stallctl) ++ state->trcstallctlr = readl(drvdata->base + TRCSTALLCTLR); + state->trctsctlr = readl(drvdata->base + TRCTSCTLR); + state->trcsyncpr = readl(drvdata->base + TRCSYNCPR); + state->trcccctlr = readl(drvdata->base + TRCCCCTLR); +@@ -1254,7 +1256,8 @@ static int etm4_cpu_save(struct etmv4_drvdata *drvdata) + + state->trcclaimset = readl(drvdata->base + TRCCLAIMCLR); + +- state->trcpdcr = readl(drvdata->base + TRCPDCR); ++ if (!drvdata->skip_power_up) ++ state->trcpdcr = readl(drvdata->base + TRCPDCR); + + /* wait for TRCSTATR.IDLE to go up */ + if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 1)) { +@@ -1272,9 +1275,9 @@ static int 
etm4_cpu_save(struct etmv4_drvdata *drvdata) + * potentially save power on systems that respect the TRCPDCR_PU + * despite requesting software to save/restore state. + */ +- writel_relaxed((state->trcpdcr & ~TRCPDCR_PU), +- drvdata->base + TRCPDCR); +- ++ if (!drvdata->skip_power_up) ++ writel_relaxed((state->trcpdcr & ~TRCPDCR_PU), ++ drvdata->base + TRCPDCR); + out: + CS_LOCK(drvdata->base); + return ret; +@@ -1296,7 +1299,8 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata) + writel_relaxed(state->trcauxctlr, drvdata->base + TRCAUXCTLR); + writel_relaxed(state->trceventctl0r, drvdata->base + TRCEVENTCTL0R); + writel_relaxed(state->trceventctl1r, drvdata->base + TRCEVENTCTL1R); +- writel_relaxed(state->trcstallctlr, drvdata->base + TRCSTALLCTLR); ++ if (drvdata->stallctl) ++ writel_relaxed(state->trcstallctlr, drvdata->base + TRCSTALLCTLR); + writel_relaxed(state->trctsctlr, drvdata->base + TRCTSCTLR); + writel_relaxed(state->trcsyncpr, drvdata->base + TRCSYNCPR); + writel_relaxed(state->trcccctlr, drvdata->base + TRCCCCTLR); +@@ -1368,7 +1372,8 @@ static void etm4_cpu_restore(struct etmv4_drvdata *drvdata) + + writel_relaxed(state->trcclaimset, drvdata->base + TRCCLAIMSET); + +- writel_relaxed(state->trcpdcr, drvdata->base + TRCPDCR); ++ if (!drvdata->skip_power_up) ++ writel_relaxed(state->trcpdcr, drvdata->base + TRCPDCR); + + drvdata->state_needs_restore = false; + +diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c +index 989ce7b8ade7c..4682f26139961 100644 +--- a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c ++++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c +@@ -389,7 +389,7 @@ static ssize_t mode_store(struct device *dev, + config->eventctrl1 &= ~BIT(12); + + /* bit[8], Instruction stall bit */ +- if (config->mode & ETM_MODE_ISTALL_EN) ++ if ((config->mode & ETM_MODE_ISTALL_EN) && (drvdata->stallctl == true)) + config->stall_ctrl |= BIT(8); + else + 
config->stall_ctrl &= ~BIT(8); +diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c +index d8295b1c379d1..35baca2f62c4e 100644 +--- a/drivers/i2c/busses/i2c-bcm-iproc.c ++++ b/drivers/i2c/busses/i2c-bcm-iproc.c +@@ -159,6 +159,11 @@ + + #define IE_S_ALL_INTERRUPT_SHIFT 21 + #define IE_S_ALL_INTERRUPT_MASK 0x3f ++/* ++ * It takes ~18us to reading 10bytes of data, hence to keep tasklet ++ * running for less time, max slave read per tasklet is set to 10 bytes. ++ */ ++#define MAX_SLAVE_RX_PER_INT 10 + + enum i2c_slave_read_status { + I2C_SLAVE_RX_FIFO_EMPTY = 0, +@@ -205,8 +210,18 @@ struct bcm_iproc_i2c_dev { + /* bytes that have been read */ + unsigned int rx_bytes; + unsigned int thld_bytes; ++ ++ bool slave_rx_only; ++ bool rx_start_rcvd; ++ bool slave_read_complete; ++ u32 tx_underrun; ++ u32 slave_int_mask; ++ struct tasklet_struct slave_rx_tasklet; + }; + ++/* tasklet to process slave rx data */ ++static void slave_rx_tasklet_fn(unsigned long); ++ + /* + * Can be expanded in the future if more interrupt status bits are utilized + */ +@@ -215,7 +230,8 @@ struct bcm_iproc_i2c_dev { + + #define ISR_MASK_SLAVE (BIT(IS_S_START_BUSY_SHIFT)\ + | BIT(IS_S_RX_EVENT_SHIFT) | BIT(IS_S_RD_EVENT_SHIFT)\ +- | BIT(IS_S_TX_UNDERRUN_SHIFT)) ++ | BIT(IS_S_TX_UNDERRUN_SHIFT) | BIT(IS_S_RX_FIFO_FULL_SHIFT)\ ++ | BIT(IS_S_RX_THLD_SHIFT)) + + static int bcm_iproc_i2c_reg_slave(struct i2c_client *slave); + static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave); +@@ -259,6 +275,7 @@ static void bcm_iproc_i2c_slave_init( + { + u32 val; + ++ iproc_i2c->tx_underrun = 0; + if (need_reset) { + /* put controller in reset */ + val = iproc_i2c_rd_reg(iproc_i2c, CFG_OFFSET); +@@ -295,8 +312,11 @@ static void bcm_iproc_i2c_slave_init( + + /* Enable interrupt register to indicate a valid byte in receive fifo */ + val = BIT(IE_S_RX_EVENT_SHIFT); ++ /* Enable interrupt register to indicate a Master read transaction */ ++ val |= BIT(IE_S_RD_EVENT_SHIFT); + 
/* Enable interrupt register for the Slave BUSY command */ + val |= BIT(IE_S_START_BUSY_SHIFT); ++ iproc_i2c->slave_int_mask = val; + iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val); + } + +@@ -321,76 +341,176 @@ static void bcm_iproc_i2c_check_slave_status( + } + } + +-static bool bcm_iproc_i2c_slave_isr(struct bcm_iproc_i2c_dev *iproc_i2c, +- u32 status) ++static void bcm_iproc_i2c_slave_read(struct bcm_iproc_i2c_dev *iproc_i2c) + { ++ u8 rx_data, rx_status; ++ u32 rx_bytes = 0; + u32 val; +- u8 value, rx_status; + +- /* Slave RX byte receive */ +- if (status & BIT(IS_S_RX_EVENT_SHIFT)) { ++ while (rx_bytes < MAX_SLAVE_RX_PER_INT) { + val = iproc_i2c_rd_reg(iproc_i2c, S_RX_OFFSET); + rx_status = (val >> S_RX_STATUS_SHIFT) & S_RX_STATUS_MASK; ++ rx_data = ((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK); ++ + if (rx_status == I2C_SLAVE_RX_START) { +- /* Start of SMBUS for Master write */ ++ /* Start of SMBUS Master write */ + i2c_slave_event(iproc_i2c->slave, +- I2C_SLAVE_WRITE_REQUESTED, &value); +- +- val = iproc_i2c_rd_reg(iproc_i2c, S_RX_OFFSET); +- value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK); ++ I2C_SLAVE_WRITE_REQUESTED, &rx_data); ++ iproc_i2c->rx_start_rcvd = true; ++ iproc_i2c->slave_read_complete = false; ++ } else if (rx_status == I2C_SLAVE_RX_DATA && ++ iproc_i2c->rx_start_rcvd) { ++ /* Middle of SMBUS Master write */ + i2c_slave_event(iproc_i2c->slave, +- I2C_SLAVE_WRITE_RECEIVED, &value); +- } else if (status & BIT(IS_S_RD_EVENT_SHIFT)) { +- /* Start of SMBUS for Master Read */ +- i2c_slave_event(iproc_i2c->slave, +- I2C_SLAVE_READ_REQUESTED, &value); +- iproc_i2c_wr_reg(iproc_i2c, S_TX_OFFSET, value); ++ I2C_SLAVE_WRITE_RECEIVED, &rx_data); ++ } else if (rx_status == I2C_SLAVE_RX_END && ++ iproc_i2c->rx_start_rcvd) { ++ /* End of SMBUS Master write */ ++ if (iproc_i2c->slave_rx_only) ++ i2c_slave_event(iproc_i2c->slave, ++ I2C_SLAVE_WRITE_RECEIVED, ++ &rx_data); ++ ++ i2c_slave_event(iproc_i2c->slave, I2C_SLAVE_STOP, ++ &rx_data); ++ } else if 
(rx_status == I2C_SLAVE_RX_FIFO_EMPTY) { ++ iproc_i2c->rx_start_rcvd = false; ++ iproc_i2c->slave_read_complete = true; ++ break; ++ } + +- val = BIT(S_CMD_START_BUSY_SHIFT); +- iproc_i2c_wr_reg(iproc_i2c, S_CMD_OFFSET, val); ++ rx_bytes++; ++ } ++} + +- /* +- * Enable interrupt for TX FIFO becomes empty and +- * less than PKT_LENGTH bytes were output on the SMBUS +- */ +- val = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET); +- val |= BIT(IE_S_TX_UNDERRUN_SHIFT); +- iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val); +- } else { +- /* Master write other than start */ +- value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK); ++static void slave_rx_tasklet_fn(unsigned long data) ++{ ++ struct bcm_iproc_i2c_dev *iproc_i2c = (struct bcm_iproc_i2c_dev *)data; ++ u32 int_clr; ++ ++ bcm_iproc_i2c_slave_read(iproc_i2c); ++ ++ /* clear pending IS_S_RX_EVENT_SHIFT interrupt */ ++ int_clr = BIT(IS_S_RX_EVENT_SHIFT); ++ ++ if (!iproc_i2c->slave_rx_only && iproc_i2c->slave_read_complete) { ++ /* ++ * In case of single byte master-read request, ++ * IS_S_TX_UNDERRUN_SHIFT event is generated before ++ * IS_S_START_BUSY_SHIFT event. Hence start slave data send ++ * from first IS_S_TX_UNDERRUN_SHIFT event. ++ * ++ * This means don't send any data from slave when ++ * IS_S_RD_EVENT_SHIFT event is generated else it will increment ++ * eeprom or other backend slave driver read pointer twice. 
++ */ ++ iproc_i2c->tx_underrun = 0; ++ iproc_i2c->slave_int_mask |= BIT(IE_S_TX_UNDERRUN_SHIFT); ++ ++ /* clear IS_S_RD_EVENT_SHIFT interrupt */ ++ int_clr |= BIT(IS_S_RD_EVENT_SHIFT); ++ } ++ ++ /* clear slave interrupt */ ++ iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, int_clr); ++ /* enable slave interrupts */ ++ iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, iproc_i2c->slave_int_mask); ++} ++ ++static bool bcm_iproc_i2c_slave_isr(struct bcm_iproc_i2c_dev *iproc_i2c, ++ u32 status) ++{ ++ u32 val; ++ u8 value; ++ ++ /* ++ * Slave events in case of master-write, master-write-read and, ++ * master-read ++ * ++ * Master-write : only IS_S_RX_EVENT_SHIFT event ++ * Master-write-read: both IS_S_RX_EVENT_SHIFT and IS_S_RD_EVENT_SHIFT ++ * events ++ * Master-read : both IS_S_RX_EVENT_SHIFT and IS_S_RD_EVENT_SHIFT ++ * events or only IS_S_RD_EVENT_SHIFT ++ */ ++ if (status & BIT(IS_S_RX_EVENT_SHIFT) || ++ status & BIT(IS_S_RD_EVENT_SHIFT)) { ++ /* disable slave interrupts */ ++ val = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET); ++ val &= ~iproc_i2c->slave_int_mask; ++ iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val); ++ ++ if (status & BIT(IS_S_RD_EVENT_SHIFT)) ++ /* Master-write-read request */ ++ iproc_i2c->slave_rx_only = false; ++ else ++ /* Master-write request only */ ++ iproc_i2c->slave_rx_only = true; ++ ++ /* schedule tasklet to read data later */ ++ tasklet_schedule(&iproc_i2c->slave_rx_tasklet); ++ ++ /* clear only IS_S_RX_EVENT_SHIFT interrupt */ ++ iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, ++ BIT(IS_S_RX_EVENT_SHIFT)); ++ } ++ ++ if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) { ++ iproc_i2c->tx_underrun++; ++ if (iproc_i2c->tx_underrun == 1) ++ /* Start of SMBUS for Master Read */ + i2c_slave_event(iproc_i2c->slave, +- I2C_SLAVE_WRITE_RECEIVED, &value); +- if (rx_status == I2C_SLAVE_RX_END) +- i2c_slave_event(iproc_i2c->slave, +- I2C_SLAVE_STOP, &value); +- } +- } else if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) { +- /* Master read other than start */ +- i2c_slave_event(iproc_i2c->slave, +- 
I2C_SLAVE_READ_PROCESSED, &value); ++ I2C_SLAVE_READ_REQUESTED, ++ &value); ++ else ++ /* Master read other than start */ ++ i2c_slave_event(iproc_i2c->slave, ++ I2C_SLAVE_READ_PROCESSED, ++ &value); + + iproc_i2c_wr_reg(iproc_i2c, S_TX_OFFSET, value); ++ /* start transfer */ + val = BIT(S_CMD_START_BUSY_SHIFT); + iproc_i2c_wr_reg(iproc_i2c, S_CMD_OFFSET, val); ++ ++ /* clear interrupt */ ++ iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, ++ BIT(IS_S_TX_UNDERRUN_SHIFT)); + } + +- /* Stop */ ++ /* Stop received from master in case of master read transaction */ + if (status & BIT(IS_S_START_BUSY_SHIFT)) { +- i2c_slave_event(iproc_i2c->slave, I2C_SLAVE_STOP, &value); + /* + * Enable interrupt for TX FIFO becomes empty and + * less than PKT_LENGTH bytes were output on the SMBUS + */ +- val = iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET); +- val &= ~BIT(IE_S_TX_UNDERRUN_SHIFT); +- iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, val); ++ iproc_i2c->slave_int_mask &= ~BIT(IE_S_TX_UNDERRUN_SHIFT); ++ iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, ++ iproc_i2c->slave_int_mask); ++ ++ /* End of SMBUS for Master Read */ ++ val = BIT(S_TX_WR_STATUS_SHIFT); ++ iproc_i2c_wr_reg(iproc_i2c, S_TX_OFFSET, val); ++ ++ val = BIT(S_CMD_START_BUSY_SHIFT); ++ iproc_i2c_wr_reg(iproc_i2c, S_CMD_OFFSET, val); ++ ++ /* flush TX FIFOs */ ++ val = iproc_i2c_rd_reg(iproc_i2c, S_FIFO_CTRL_OFFSET); ++ val |= (BIT(S_FIFO_TX_FLUSH_SHIFT)); ++ iproc_i2c_wr_reg(iproc_i2c, S_FIFO_CTRL_OFFSET, val); ++ ++ i2c_slave_event(iproc_i2c->slave, I2C_SLAVE_STOP, &value); ++ ++ /* clear interrupt */ ++ iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, ++ BIT(IS_S_START_BUSY_SHIFT)); + } + +- /* clear interrupt status */ +- iproc_i2c_wr_reg(iproc_i2c, IS_OFFSET, status); ++ /* check slave transmit status only if slave is transmitting */ ++ if (!iproc_i2c->slave_rx_only) ++ bcm_iproc_i2c_check_slave_status(iproc_i2c); + +- bcm_iproc_i2c_check_slave_status(iproc_i2c); + return true; + } + +@@ -505,12 +625,17 @@ static void 
bcm_iproc_i2c_process_m_event(struct bcm_iproc_i2c_dev *iproc_i2c, + static irqreturn_t bcm_iproc_i2c_isr(int irq, void *data) + { + struct bcm_iproc_i2c_dev *iproc_i2c = data; +- u32 status = iproc_i2c_rd_reg(iproc_i2c, IS_OFFSET); ++ u32 slave_status; ++ u32 status; + bool ret; +- u32 sl_status = status & ISR_MASK_SLAVE; + +- if (sl_status) { +- ret = bcm_iproc_i2c_slave_isr(iproc_i2c, sl_status); ++ status = iproc_i2c_rd_reg(iproc_i2c, IS_OFFSET); ++ /* process only slave interrupt which are enabled */ ++ slave_status = status & iproc_i2c_rd_reg(iproc_i2c, IE_OFFSET) & ++ ISR_MASK_SLAVE; ++ ++ if (slave_status) { ++ ret = bcm_iproc_i2c_slave_isr(iproc_i2c, slave_status); + if (ret) + return IRQ_HANDLED; + else +@@ -1066,6 +1191,10 @@ static int bcm_iproc_i2c_reg_slave(struct i2c_client *slave) + return -EAFNOSUPPORT; + + iproc_i2c->slave = slave; ++ ++ tasklet_init(&iproc_i2c->slave_rx_tasklet, slave_rx_tasklet_fn, ++ (unsigned long)iproc_i2c); ++ + bcm_iproc_i2c_slave_init(iproc_i2c, false); + return 0; + } +@@ -1086,6 +1215,8 @@ static int bcm_iproc_i2c_unreg_slave(struct i2c_client *slave) + IE_S_ALL_INTERRUPT_SHIFT); + iproc_i2c_wr_reg(iproc_i2c, IE_OFFSET, tmp); + ++ tasklet_kill(&iproc_i2c->slave_rx_tasklet); ++ + /* Erase the slave address programmed */ + tmp = iproc_i2c_rd_reg(iproc_i2c, S_CFG_SMBUS_ADDR_OFFSET); + tmp &= ~BIT(S_CFG_EN_NIC_SMB_ADDR3_SHIFT); +diff --git a/drivers/i2c/busses/i2c-brcmstb.c b/drivers/i2c/busses/i2c-brcmstb.c +index d4e0a0f6732ae..ba766d24219ef 100644 +--- a/drivers/i2c/busses/i2c-brcmstb.c ++++ b/drivers/i2c/busses/i2c-brcmstb.c +@@ -316,7 +316,7 @@ static int brcmstb_send_i2c_cmd(struct brcmstb_i2c_dev *dev, + goto cmd_out; + } + +- if ((CMD_RD || CMD_WR) && ++ if ((cmd == CMD_RD || cmd == CMD_WR) && + bsc_readl(dev, iic_enable) & BSC_IIC_EN_NOACK_MASK) { + rc = -EREMOTEIO; + dev_dbg(dev->device, "controller received NOACK intr for %s\n", +diff --git a/drivers/i2c/busses/i2c-exynos5.c b/drivers/i2c/busses/i2c-exynos5.c 
+index 6ce3ec03b5952..b6f2c63776140 100644 +--- a/drivers/i2c/busses/i2c-exynos5.c ++++ b/drivers/i2c/busses/i2c-exynos5.c +@@ -606,6 +606,7 @@ static void exynos5_i2c_message_start(struct exynos5_i2c *i2c, int stop) + u32 i2c_ctl; + u32 int_en = 0; + u32 i2c_auto_conf = 0; ++ u32 i2c_addr = 0; + u32 fifo_ctl; + unsigned long flags; + unsigned short trig_lvl; +@@ -640,7 +641,12 @@ static void exynos5_i2c_message_start(struct exynos5_i2c *i2c, int stop) + int_en |= HSI2C_INT_TX_ALMOSTEMPTY_EN; + } + +- writel(HSI2C_SLV_ADDR_MAS(i2c->msg->addr), i2c->regs + HSI2C_ADDR); ++ i2c_addr = HSI2C_SLV_ADDR_MAS(i2c->msg->addr); ++ ++ if (i2c->op_clock >= I2C_MAX_FAST_MODE_PLUS_FREQ) ++ i2c_addr |= HSI2C_MASTER_ID(MASTER_ID(i2c->adap.nr)); ++ ++ writel(i2c_addr, i2c->regs + HSI2C_ADDR); + + writel(fifo_ctl, i2c->regs + HSI2C_FIFO_CTL); + writel(i2c_ctl, i2c->regs + HSI2C_CTL); +diff --git a/drivers/i2c/busses/i2c-qcom-geni.c b/drivers/i2c/busses/i2c-qcom-geni.c +index dce75b85253c1..4a6dd05d6dbf9 100644 +--- a/drivers/i2c/busses/i2c-qcom-geni.c ++++ b/drivers/i2c/busses/i2c-qcom-geni.c +@@ -86,6 +86,9 @@ struct geni_i2c_dev { + u32 clk_freq_out; + const struct geni_i2c_clk_fld *clk_fld; + int suspended; ++ void *dma_buf; ++ size_t xfer_len; ++ dma_addr_t dma_addr; + }; + + struct geni_i2c_err_log { +@@ -348,14 +351,39 @@ static void geni_i2c_tx_fsm_rst(struct geni_i2c_dev *gi2c) + dev_err(gi2c->se.dev, "Timeout resetting TX_FSM\n"); + } + ++static void geni_i2c_rx_msg_cleanup(struct geni_i2c_dev *gi2c, ++ struct i2c_msg *cur) ++{ ++ gi2c->cur_rd = 0; ++ if (gi2c->dma_buf) { ++ if (gi2c->err) ++ geni_i2c_rx_fsm_rst(gi2c); ++ geni_se_rx_dma_unprep(&gi2c->se, gi2c->dma_addr, gi2c->xfer_len); ++ i2c_put_dma_safe_msg_buf(gi2c->dma_buf, cur, !gi2c->err); ++ } ++} ++ ++static void geni_i2c_tx_msg_cleanup(struct geni_i2c_dev *gi2c, ++ struct i2c_msg *cur) ++{ ++ gi2c->cur_wr = 0; ++ if (gi2c->dma_buf) { ++ if (gi2c->err) ++ geni_i2c_tx_fsm_rst(gi2c); ++ 
geni_se_tx_dma_unprep(&gi2c->se, gi2c->dma_addr, gi2c->xfer_len); ++ i2c_put_dma_safe_msg_buf(gi2c->dma_buf, cur, !gi2c->err); ++ } ++} ++ + static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg, + u32 m_param) + { +- dma_addr_t rx_dma; ++ dma_addr_t rx_dma = 0; + unsigned long time_left; + void *dma_buf = NULL; + struct geni_se *se = &gi2c->se; + size_t len = msg->len; ++ struct i2c_msg *cur; + + if (!of_machine_is_compatible("lenovo,yoga-c630")) + dma_buf = i2c_get_dma_safe_msg_buf(msg, 32); +@@ -372,19 +400,18 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg, + geni_se_select_mode(se, GENI_SE_FIFO); + i2c_put_dma_safe_msg_buf(dma_buf, msg, false); + dma_buf = NULL; ++ } else { ++ gi2c->xfer_len = len; ++ gi2c->dma_addr = rx_dma; ++ gi2c->dma_buf = dma_buf; + } + ++ cur = gi2c->cur; + time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT); + if (!time_left) + geni_i2c_abort_xfer(gi2c); + +- gi2c->cur_rd = 0; +- if (dma_buf) { +- if (gi2c->err) +- geni_i2c_rx_fsm_rst(gi2c); +- geni_se_rx_dma_unprep(se, rx_dma, len); +- i2c_put_dma_safe_msg_buf(dma_buf, msg, !gi2c->err); +- } ++ geni_i2c_rx_msg_cleanup(gi2c, cur); + + return gi2c->err; + } +@@ -392,11 +419,12 @@ static int geni_i2c_rx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg, + static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg, + u32 m_param) + { +- dma_addr_t tx_dma; ++ dma_addr_t tx_dma = 0; + unsigned long time_left; + void *dma_buf = NULL; + struct geni_se *se = &gi2c->se; + size_t len = msg->len; ++ struct i2c_msg *cur; + + if (!of_machine_is_compatible("lenovo,yoga-c630")) + dma_buf = i2c_get_dma_safe_msg_buf(msg, 32); +@@ -413,22 +441,21 @@ static int geni_i2c_tx_one_msg(struct geni_i2c_dev *gi2c, struct i2c_msg *msg, + geni_se_select_mode(se, GENI_SE_FIFO); + i2c_put_dma_safe_msg_buf(dma_buf, msg, false); + dma_buf = NULL; ++ } else { ++ gi2c->xfer_len = len; ++ gi2c->dma_addr = tx_dma; ++ 
gi2c->dma_buf = dma_buf; + } + + if (!dma_buf) /* Get FIFO IRQ */ + writel_relaxed(1, se->base + SE_GENI_TX_WATERMARK_REG); + ++ cur = gi2c->cur; + time_left = wait_for_completion_timeout(&gi2c->done, XFER_TIMEOUT); + if (!time_left) + geni_i2c_abort_xfer(gi2c); + +- gi2c->cur_wr = 0; +- if (dma_buf) { +- if (gi2c->err) +- geni_i2c_tx_fsm_rst(gi2c); +- geni_se_tx_dma_unprep(se, tx_dma, len); +- i2c_put_dma_safe_msg_buf(dma_buf, msg, !gi2c->err); +- } ++ geni_i2c_tx_msg_cleanup(gi2c, cur); + + return gi2c->err; + } +diff --git a/drivers/ide/falconide.c b/drivers/ide/falconide.c +index dbeb2605e5f6e..607c44bc50f1b 100644 +--- a/drivers/ide/falconide.c ++++ b/drivers/ide/falconide.c +@@ -166,6 +166,7 @@ static int __init falconide_init(struct platform_device *pdev) + if (rc) + goto err_free; + ++ platform_set_drvdata(pdev, host); + return 0; + err_free: + ide_host_free(host); +@@ -176,7 +177,7 @@ err: + + static int falconide_remove(struct platform_device *pdev) + { +- struct ide_host *host = dev_get_drvdata(&pdev->dev); ++ struct ide_host *host = platform_get_drvdata(pdev); + + ide_host_remove(host); + +diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c +index 5afd142fe8c78..8e578f73a074c 100644 +--- a/drivers/infiniband/core/cm.c ++++ b/drivers/infiniband/core/cm.c +@@ -4332,7 +4332,7 @@ static int cm_add_one(struct ib_device *ib_device) + unsigned long flags; + int ret; + int count = 0; +- u8 i; ++ unsigned int i; + + cm_dev = kzalloc(struct_size(cm_dev, port, ib_device->phys_port_cnt), + GFP_KERNEL); +@@ -4344,7 +4344,7 @@ static int cm_add_one(struct ib_device *ib_device) + cm_dev->going_down = 0; + + set_bit(IB_MGMT_METHOD_SEND, reg_req.method_mask); +- for (i = 1; i <= ib_device->phys_port_cnt; i++) { ++ rdma_for_each_port (ib_device, i) { + if (!rdma_cap_ib_cm(ib_device, i)) + continue; + +@@ -4430,7 +4430,7 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data) + .clr_port_cap_mask = IB_PORT_CM_SUP + }; + unsigned 
long flags; +- int i; ++ unsigned int i; + + write_lock_irqsave(&cm.device_lock, flags); + list_del(&cm_dev->list); +@@ -4440,7 +4440,7 @@ static void cm_remove_one(struct ib_device *ib_device, void *client_data) + cm_dev->going_down = 1; + spin_unlock_irq(&cm.lock); + +- for (i = 1; i <= ib_device->phys_port_cnt; i++) { ++ rdma_for_each_port (ib_device, i) { + if (!rdma_cap_ib_cm(ib_device, i)) + continue; + +diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c +index c51b84b2d2f37..e3638f80e1d52 100644 +--- a/drivers/infiniband/core/cma.c ++++ b/drivers/infiniband/core/cma.c +@@ -352,7 +352,13 @@ struct ib_device *cma_get_ib_dev(struct cma_device *cma_dev) + + struct cma_multicast { + struct rdma_id_private *id_priv; +- struct ib_sa_multicast *sa_mc; ++ union { ++ struct ib_sa_multicast *sa_mc; ++ struct { ++ struct work_struct work; ++ struct rdma_cm_event event; ++ } iboe_join; ++ }; + struct list_head list; + void *context; + struct sockaddr_storage addr; +@@ -1823,6 +1829,8 @@ static void destroy_mc(struct rdma_id_private *id_priv, + cma_igmp_send(ndev, &mgid, false); + dev_put(ndev); + } ++ ++ cancel_work_sync(&mc->iboe_join.work); + } + kfree(mc); + } +@@ -2683,6 +2691,28 @@ static int cma_query_ib_route(struct rdma_id_private *id_priv, + return (id_priv->query_id < 0) ? 
id_priv->query_id : 0; + } + ++static void cma_iboe_join_work_handler(struct work_struct *work) ++{ ++ struct cma_multicast *mc = ++ container_of(work, struct cma_multicast, iboe_join.work); ++ struct rdma_cm_event *event = &mc->iboe_join.event; ++ struct rdma_id_private *id_priv = mc->id_priv; ++ int ret; ++ ++ mutex_lock(&id_priv->handler_mutex); ++ if (READ_ONCE(id_priv->state) == RDMA_CM_DESTROYING || ++ READ_ONCE(id_priv->state) == RDMA_CM_DEVICE_REMOVAL) ++ goto out_unlock; ++ ++ ret = cma_cm_event_handler(id_priv, event); ++ WARN_ON(ret); ++ ++out_unlock: ++ mutex_unlock(&id_priv->handler_mutex); ++ if (event->event == RDMA_CM_EVENT_MULTICAST_JOIN) ++ rdma_destroy_ah_attr(&event->param.ud.ah_attr); ++} ++ + static void cma_work_handler(struct work_struct *_work) + { + struct cma_work *work = container_of(_work, struct cma_work, work); +@@ -4478,10 +4508,7 @@ static int cma_ib_mc_handler(int status, struct ib_sa_multicast *multicast) + cma_make_mc_event(status, id_priv, multicast, &event, mc); + ret = cma_cm_event_handler(id_priv, &event); + rdma_destroy_ah_attr(&event.param.ud.ah_attr); +- if (ret) { +- destroy_id_handler_unlock(id_priv); +- return 0; +- } ++ WARN_ON(ret); + + out: + mutex_unlock(&id_priv->handler_mutex); +@@ -4604,7 +4631,6 @@ static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid, + static int cma_iboe_join_multicast(struct rdma_id_private *id_priv, + struct cma_multicast *mc) + { +- struct cma_work *work; + struct rdma_dev_addr *dev_addr = &id_priv->id.route.addr.dev_addr; + int err = 0; + struct sockaddr *addr = (struct sockaddr *)&mc->addr; +@@ -4618,10 +4644,6 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv, + if (cma_zero_addr(addr)) + return -EINVAL; + +- work = kzalloc(sizeof *work, GFP_KERNEL); +- if (!work) +- return -ENOMEM; +- + gid_type = id_priv->cma_dev->default_gid_type[id_priv->id.port_num - + rdma_start_port(id_priv->cma_dev->device)]; + cma_iboe_set_mgid(addr, &ib.rec.mgid, 
gid_type); +@@ -4632,10 +4654,9 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv, + + if (dev_addr->bound_dev_if) + ndev = dev_get_by_index(dev_addr->net, dev_addr->bound_dev_if); +- if (!ndev) { +- err = -ENODEV; +- goto err_free; +- } ++ if (!ndev) ++ return -ENODEV; ++ + ib.rec.rate = iboe_get_rate(ndev); + ib.rec.hop_limit = 1; + ib.rec.mtu = iboe_get_mtu(ndev->mtu); +@@ -4653,24 +4674,15 @@ static int cma_iboe_join_multicast(struct rdma_id_private *id_priv, + err = -ENOTSUPP; + } + dev_put(ndev); +- if (err || !ib.rec.mtu) { +- if (!err) +- err = -EINVAL; +- goto err_free; +- } ++ if (err || !ib.rec.mtu) ++ return err ?: -EINVAL; ++ + rdma_ip2gid((struct sockaddr *)&id_priv->id.route.addr.src_addr, + &ib.rec.port_gid); +- work->id = id_priv; +- INIT_WORK(&work->work, cma_work_handler); +- cma_make_mc_event(0, id_priv, &ib, &work->event, mc); +- /* Balances with cma_id_put() in cma_work_handler */ +- cma_id_get(id_priv); +- queue_work(cma_wq, &work->work); ++ INIT_WORK(&mc->iboe_join.work, cma_iboe_join_work_handler); ++ cma_make_mc_event(0, id_priv, &ib, &mc->iboe_join.event, mc); ++ queue_work(cma_wq, &mc->iboe_join.work); + return 0; +- +-err_free: +- kfree(work); +- return err; + } + + int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr, +diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c +index b0d0b522cc764..4688a6657c875 100644 +--- a/drivers/infiniband/core/user_mad.c ++++ b/drivers/infiniband/core/user_mad.c +@@ -379,6 +379,11 @@ static ssize_t ib_umad_read(struct file *filp, char __user *buf, + + mutex_lock(&file->mutex); + ++ if (file->agents_dead) { ++ mutex_unlock(&file->mutex); ++ return -EIO; ++ } ++ + while (list_empty(&file->recv_list)) { + mutex_unlock(&file->mutex); + +@@ -392,6 +397,11 @@ static ssize_t ib_umad_read(struct file *filp, char __user *buf, + mutex_lock(&file->mutex); + } + ++ if (file->agents_dead) { ++ mutex_unlock(&file->mutex); ++ return -EIO; ++ } ++ + 
packet = list_entry(file->recv_list.next, struct ib_umad_packet, list); + list_del(&packet->list); + +@@ -524,7 +534,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf, + + agent = __get_agent(file, packet->mad.hdr.id); + if (!agent) { +- ret = -EINVAL; ++ ret = -EIO; + goto err_up; + } + +@@ -653,10 +663,14 @@ static __poll_t ib_umad_poll(struct file *filp, struct poll_table_struct *wait) + /* we will always be able to post a MAD send */ + __poll_t mask = EPOLLOUT | EPOLLWRNORM; + ++ mutex_lock(&file->mutex); + poll_wait(filp, &file->recv_wait, wait); + + if (!list_empty(&file->recv_list)) + mask |= EPOLLIN | EPOLLRDNORM; ++ if (file->agents_dead) ++ mask = EPOLLERR; ++ mutex_unlock(&file->mutex); + + return mask; + } +@@ -1336,6 +1350,7 @@ static void ib_umad_kill_port(struct ib_umad_port *port) + list_for_each_entry(file, &port->file_list, port_list) { + mutex_lock(&file->mutex); + file->agents_dead = 1; ++ wake_up_interruptible(&file->recv_wait); + mutex_unlock(&file->mutex); + + for (id = 0; id < IB_UMAD_MAX_AGENTS; ++id) +diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h +index 1ea87f92aabbe..d9aa7424d2902 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_device.h ++++ b/drivers/infiniband/hw/hns/hns_roce_device.h +@@ -632,7 +632,7 @@ struct hns_roce_qp { + struct hns_roce_db sdb; + unsigned long en_flags; + u32 doorbell_qpn; +- u32 sq_signal_bits; ++ enum ib_sig_type sq_signal_bits; + struct hns_roce_wq sq; + + struct hns_roce_mtr mtr; +diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +index 5c29c7d8c50e6..ebcf26dec1e30 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c ++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c +@@ -1232,7 +1232,7 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev, + u32 timeout = 0; + int handle = 0; + u16 desc_ret; +- int ret = 0; ++ int ret; + int ntc; + + spin_lock_bh(&csq->lock); +@@ 
-1277,15 +1277,14 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev, + if (hns_roce_cmq_csq_done(hr_dev)) { + complete = true; + handle = 0; ++ ret = 0; + while (handle < num) { + /* get the result of hardware write back */ + desc_to_use = &csq->desc[ntc]; + desc[handle] = *desc_to_use; + dev_dbg(hr_dev->dev, "Get cmq desc:\n"); + desc_ret = le16_to_cpu(desc[handle].retval); +- if (desc_ret == CMD_EXEC_SUCCESS) +- ret = 0; +- else ++ if (unlikely(desc_ret != CMD_EXEC_SUCCESS)) + ret = -EIO; + priv->cmq.last_status = desc_ret; + ntc++; +@@ -1847,7 +1846,6 @@ static void set_default_caps(struct hns_roce_dev *hr_dev) + + caps->flags = HNS_ROCE_CAP_FLAG_REREG_MR | + HNS_ROCE_CAP_FLAG_ROCE_V1_V2 | +- HNS_ROCE_CAP_FLAG_RQ_INLINE | + HNS_ROCE_CAP_FLAG_RECORD_DB | + HNS_ROCE_CAP_FLAG_SQ_RECORD_DB; + +diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c +index ae721fa61e0e4..ba65823a5c0bb 100644 +--- a/drivers/infiniband/hw/hns/hns_roce_main.c ++++ b/drivers/infiniband/hw/hns/hns_roce_main.c +@@ -781,8 +781,7 @@ static int hns_roce_setup_hca(struct hns_roce_dev *hr_dev) + return 0; + + err_qp_table_free: +- if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) +- hns_roce_cleanup_qp_table(hr_dev); ++ hns_roce_cleanup_qp_table(hr_dev); + + err_cq_table_free: + hns_roce_cleanup_cq_table(hr_dev); +diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c +index 9e3d8b8264980..26564e7d34572 100644 +--- a/drivers/infiniband/hw/mlx5/devx.c ++++ b/drivers/infiniband/hw/mlx5/devx.c +@@ -1067,7 +1067,9 @@ static void devx_obj_build_destroy_cmd(void *in, void *out, void *din, + MLX5_SET(general_obj_in_cmd_hdr, din, opcode, MLX5_CMD_OP_DESTROY_RQT); + break; + case MLX5_CMD_OP_CREATE_TIR: +- MLX5_SET(general_obj_in_cmd_hdr, din, opcode, MLX5_CMD_OP_DESTROY_TIR); ++ *obj_id = MLX5_GET(create_tir_out, out, tirn); ++ MLX5_SET(destroy_tir_in, din, opcode, MLX5_CMD_OP_DESTROY_TIR); ++ MLX5_SET(destroy_tir_in, din, 
tirn, *obj_id); + break; + case MLX5_CMD_OP_CREATE_TIS: + MLX5_SET(general_obj_in_cmd_hdr, din, opcode, MLX5_CMD_OP_DESTROY_TIS); +diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c +index e317d7d6d5c0d..beec0d7c0d6e8 100644 +--- a/drivers/infiniband/hw/mlx5/main.c ++++ b/drivers/infiniband/hw/mlx5/main.c +@@ -3921,7 +3921,7 @@ static void mlx5_ib_stage_init_cleanup(struct mlx5_ib_dev *dev) + mlx5_ib_cleanup_multiport_master(dev); + WARN_ON(!xa_empty(&dev->odp_mkeys)); + cleanup_srcu_struct(&dev->odp_srcu); +- ++ mutex_destroy(&dev->cap_mask_mutex); + WARN_ON(!xa_empty(&dev->sig_mrs)); + WARN_ON(!bitmap_empty(dev->dm.memic_alloc_pages, MLX5_MAX_MEMIC_PAGES)); + } +@@ -3972,6 +3972,10 @@ static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev) + dev->ib_dev.dev.parent = mdev->device; + dev->ib_dev.lag_flags = RDMA_LAG_FLAGS_HASH_ALL_SLAVES; + ++ err = init_srcu_struct(&dev->odp_srcu); ++ if (err) ++ goto err_mp; ++ + mutex_init(&dev->cap_mask_mutex); + INIT_LIST_HEAD(&dev->qp_list); + spin_lock_init(&dev->reset_flow_resource_lock); +@@ -3981,17 +3985,11 @@ static int mlx5_ib_stage_init_init(struct mlx5_ib_dev *dev) + + spin_lock_init(&dev->dm.lock); + dev->dm.dev = mdev; +- +- err = init_srcu_struct(&dev->odp_srcu); +- if (err) +- goto err_mp; +- + return 0; + + err_mp: + mlx5_ib_cleanup_multiport_master(dev); +- +- return -ENOMEM; ++ return err; + } + + static int mlx5_ib_enable_driver(struct ib_device *dev) +diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c +index 943914c2a50c7..bce44502ab0ed 100644 +--- a/drivers/infiniband/sw/rxe/rxe_net.c ++++ b/drivers/infiniband/sw/rxe/rxe_net.c +@@ -414,6 +414,11 @@ int rxe_send(struct rxe_pkt_info *pkt, struct sk_buff *skb) + + void rxe_loopback(struct sk_buff *skb) + { ++ if (skb->protocol == htons(ETH_P_IP)) ++ skb_pull(skb, sizeof(struct iphdr)); ++ else ++ skb_pull(skb, sizeof(struct ipv6hdr)); ++ + rxe_rcv(skb); + } + +diff --git 
a/drivers/infiniband/sw/rxe/rxe_recv.c b/drivers/infiniband/sw/rxe/rxe_recv.c +index c9984a28eecc7..cb69a125e2806 100644 +--- a/drivers/infiniband/sw/rxe/rxe_recv.c ++++ b/drivers/infiniband/sw/rxe/rxe_recv.c +@@ -9,21 +9,26 @@ + #include "rxe.h" + #include "rxe_loc.h" + ++/* check that QP matches packet opcode type and is in a valid state */ + static int check_type_state(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, + struct rxe_qp *qp) + { ++ unsigned int pkt_type; ++ + if (unlikely(!qp->valid)) + goto err1; + ++ pkt_type = pkt->opcode & 0xe0; ++ + switch (qp_type(qp)) { + case IB_QPT_RC: +- if (unlikely((pkt->opcode & IB_OPCODE_RC) != 0)) { ++ if (unlikely(pkt_type != IB_OPCODE_RC)) { + pr_warn_ratelimited("bad qp type\n"); + goto err1; + } + break; + case IB_QPT_UC: +- if (unlikely(!(pkt->opcode & IB_OPCODE_UC))) { ++ if (unlikely(pkt_type != IB_OPCODE_UC)) { + pr_warn_ratelimited("bad qp type\n"); + goto err1; + } +@@ -31,7 +36,7 @@ static int check_type_state(struct rxe_dev *rxe, struct rxe_pkt_info *pkt, + case IB_QPT_UD: + case IB_QPT_SMI: + case IB_QPT_GSI: +- if (unlikely(!(pkt->opcode & IB_OPCODE_UD))) { ++ if (unlikely(pkt_type != IB_OPCODE_UD)) { + pr_warn_ratelimited("bad qp type\n"); + goto err1; + } +@@ -252,7 +257,6 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) + + list_for_each_entry(mce, &mcg->qp_list, qp_list) { + qp = mce->qp; +- pkt = SKB_TO_PKT(skb); + + /* validate qp for incoming packet */ + err = check_type_state(rxe, pkt, qp); +@@ -264,12 +268,18 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) + continue; + + /* for all but the last qp create a new clone of the +- * skb and pass to the qp. ++ * skb and pass to the qp. If an error occurs in the ++ * checks for the last qp in the list we need to ++ * free the skb since it hasn't been passed on to ++ * rxe_rcv_pkt() which would free it later. 
+ */ +- if (mce->qp_list.next != &mcg->qp_list) ++ if (mce->qp_list.next != &mcg->qp_list) { + per_qp_skb = skb_clone(skb, GFP_ATOMIC); +- else ++ } else { + per_qp_skb = skb; ++ /* show we have consumed the skb */ ++ skb = NULL; ++ } + + if (unlikely(!per_qp_skb)) + continue; +@@ -284,9 +294,8 @@ static void rxe_rcv_mcast_pkt(struct rxe_dev *rxe, struct sk_buff *skb) + + rxe_drop_ref(mcg); /* drop ref from rxe_pool_get_key. */ + +- return; +- + err1: ++ /* free skb if not consumed */ + kfree_skb(skb); + } + +diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h +index adda789962196..368959ae9a8cc 100644 +--- a/drivers/infiniband/sw/siw/siw.h ++++ b/drivers/infiniband/sw/siw/siw.h +@@ -653,7 +653,7 @@ static inline struct siw_sqe *orq_get_free(struct siw_qp *qp) + { + struct siw_sqe *orq_e = orq_get_tail(qp); + +- if (orq_e && READ_ONCE(orq_e->flags) == 0) ++ if (READ_ONCE(orq_e->flags) == 0) + return orq_e; + + return NULL; +diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c +index 9d152e198a59b..32a553a1b905e 100644 +--- a/drivers/infiniband/sw/siw/siw_main.c ++++ b/drivers/infiniband/sw/siw/siw_main.c +@@ -135,7 +135,7 @@ static struct { + + static int siw_init_cpulist(void) + { +- int i, num_nodes = num_possible_nodes(); ++ int i, num_nodes = nr_node_ids; + + memset(siw_tx_thread, 0, sizeof(siw_tx_thread)); + +diff --git a/drivers/infiniband/sw/siw/siw_qp.c b/drivers/infiniband/sw/siw/siw_qp.c +index 875d36d4b1c61..ddb2e66f9f133 100644 +--- a/drivers/infiniband/sw/siw/siw_qp.c ++++ b/drivers/infiniband/sw/siw/siw_qp.c +@@ -199,26 +199,26 @@ void siw_qp_llp_write_space(struct sock *sk) + + static int siw_qp_readq_init(struct siw_qp *qp, int irq_size, int orq_size) + { +- irq_size = roundup_pow_of_two(irq_size); +- orq_size = roundup_pow_of_two(orq_size); +- +- qp->attrs.irq_size = irq_size; +- qp->attrs.orq_size = orq_size; +- +- qp->irq = vzalloc(irq_size * sizeof(struct siw_sqe)); +- if (!qp->irq) 
{ +- siw_dbg_qp(qp, "irq malloc for %d failed\n", irq_size); +- qp->attrs.irq_size = 0; +- return -ENOMEM; ++ if (irq_size) { ++ irq_size = roundup_pow_of_two(irq_size); ++ qp->irq = vzalloc(irq_size * sizeof(struct siw_sqe)); ++ if (!qp->irq) { ++ qp->attrs.irq_size = 0; ++ return -ENOMEM; ++ } + } +- qp->orq = vzalloc(orq_size * sizeof(struct siw_sqe)); +- if (!qp->orq) { +- siw_dbg_qp(qp, "orq malloc for %d failed\n", orq_size); +- qp->attrs.orq_size = 0; +- qp->attrs.irq_size = 0; +- vfree(qp->irq); +- return -ENOMEM; ++ if (orq_size) { ++ orq_size = roundup_pow_of_two(orq_size); ++ qp->orq = vzalloc(orq_size * sizeof(struct siw_sqe)); ++ if (!qp->orq) { ++ qp->attrs.orq_size = 0; ++ qp->attrs.irq_size = 0; ++ vfree(qp->irq); ++ return -ENOMEM; ++ } + } ++ qp->attrs.irq_size = irq_size; ++ qp->attrs.orq_size = orq_size; + siw_dbg_qp(qp, "ORD %d, IRD %d\n", orq_size, irq_size); + return 0; + } +@@ -288,13 +288,14 @@ int siw_qp_mpa_rts(struct siw_qp *qp, enum mpa_v2_ctrl ctrl) + if (ctrl & MPA_V2_RDMA_WRITE_RTR) + wqe->sqe.opcode = SIW_OP_WRITE; + else if (ctrl & MPA_V2_RDMA_READ_RTR) { +- struct siw_sqe *rreq; ++ struct siw_sqe *rreq = NULL; + + wqe->sqe.opcode = SIW_OP_READ; + + spin_lock(&qp->orq_lock); + +- rreq = orq_get_free(qp); ++ if (qp->attrs.orq_size) ++ rreq = orq_get_free(qp); + if (rreq) { + siw_read_to_orq(rreq, &wqe->sqe); + qp->orq_put++; +@@ -877,135 +878,88 @@ void siw_read_to_orq(struct siw_sqe *rreq, struct siw_sqe *sqe) + rreq->num_sge = 1; + } + +-/* +- * Must be called with SQ locked. +- * To avoid complete SQ starvation by constant inbound READ requests, +- * the active IRQ will not be served after qp->irq_burst, if the +- * SQ has pending work. 
+- */ +-int siw_activate_tx(struct siw_qp *qp) ++static int siw_activate_tx_from_sq(struct siw_qp *qp) + { +- struct siw_sqe *irqe, *sqe; ++ struct siw_sqe *sqe; + struct siw_wqe *wqe = tx_wqe(qp); + int rv = 1; + +- irqe = &qp->irq[qp->irq_get % qp->attrs.irq_size]; +- +- if (irqe->flags & SIW_WQE_VALID) { +- sqe = sq_get_next(qp); +- +- /* +- * Avoid local WQE processing starvation in case +- * of constant inbound READ request stream +- */ +- if (sqe && ++qp->irq_burst >= SIW_IRQ_MAXBURST_SQ_ACTIVE) { +- qp->irq_burst = 0; +- goto skip_irq; +- } +- memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE); +- wqe->wr_status = SIW_WR_QUEUED; +- +- /* start READ RESPONSE */ +- wqe->sqe.opcode = SIW_OP_READ_RESPONSE; +- wqe->sqe.flags = 0; +- if (irqe->num_sge) { +- wqe->sqe.num_sge = 1; +- wqe->sqe.sge[0].length = irqe->sge[0].length; +- wqe->sqe.sge[0].laddr = irqe->sge[0].laddr; +- wqe->sqe.sge[0].lkey = irqe->sge[0].lkey; +- } else { +- wqe->sqe.num_sge = 0; +- } +- +- /* Retain original RREQ's message sequence number for +- * potential error reporting cases. 
+- */ +- wqe->sqe.sge[1].length = irqe->sge[1].length; +- +- wqe->sqe.rkey = irqe->rkey; +- wqe->sqe.raddr = irqe->raddr; ++ sqe = sq_get_next(qp); ++ if (!sqe) ++ return 0; + +- wqe->processed = 0; +- qp->irq_get++; ++ memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE); ++ wqe->wr_status = SIW_WR_QUEUED; + +- /* mark current IRQ entry free */ +- smp_store_mb(irqe->flags, 0); ++ /* First copy SQE to kernel private memory */ ++ memcpy(&wqe->sqe, sqe, sizeof(*sqe)); + ++ if (wqe->sqe.opcode >= SIW_NUM_OPCODES) { ++ rv = -EINVAL; + goto out; + } +- sqe = sq_get_next(qp); +- if (sqe) { +-skip_irq: +- memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE); +- wqe->wr_status = SIW_WR_QUEUED; +- +- /* First copy SQE to kernel private memory */ +- memcpy(&wqe->sqe, sqe, sizeof(*sqe)); +- +- if (wqe->sqe.opcode >= SIW_NUM_OPCODES) { ++ if (wqe->sqe.flags & SIW_WQE_INLINE) { ++ if (wqe->sqe.opcode != SIW_OP_SEND && ++ wqe->sqe.opcode != SIW_OP_WRITE) { + rv = -EINVAL; + goto out; + } +- if (wqe->sqe.flags & SIW_WQE_INLINE) { +- if (wqe->sqe.opcode != SIW_OP_SEND && +- wqe->sqe.opcode != SIW_OP_WRITE) { +- rv = -EINVAL; +- goto out; +- } +- if (wqe->sqe.sge[0].length > SIW_MAX_INLINE) { +- rv = -EINVAL; +- goto out; +- } +- wqe->sqe.sge[0].laddr = (uintptr_t)&wqe->sqe.sge[1]; +- wqe->sqe.sge[0].lkey = 0; +- wqe->sqe.num_sge = 1; ++ if (wqe->sqe.sge[0].length > SIW_MAX_INLINE) { ++ rv = -EINVAL; ++ goto out; + } +- if (wqe->sqe.flags & SIW_WQE_READ_FENCE) { +- /* A READ cannot be fenced */ +- if (unlikely(wqe->sqe.opcode == SIW_OP_READ || +- wqe->sqe.opcode == +- SIW_OP_READ_LOCAL_INV)) { +- siw_dbg_qp(qp, "cannot fence read\n"); +- rv = -EINVAL; +- goto out; +- } +- spin_lock(&qp->orq_lock); ++ wqe->sqe.sge[0].laddr = (uintptr_t)&wqe->sqe.sge[1]; ++ wqe->sqe.sge[0].lkey = 0; ++ wqe->sqe.num_sge = 1; ++ } ++ if (wqe->sqe.flags & SIW_WQE_READ_FENCE) { ++ /* A READ cannot be fenced */ ++ if (unlikely(wqe->sqe.opcode == SIW_OP_READ || ++ wqe->sqe.opcode == ++ 
SIW_OP_READ_LOCAL_INV)) { ++ siw_dbg_qp(qp, "cannot fence read\n"); ++ rv = -EINVAL; ++ goto out; ++ } ++ spin_lock(&qp->orq_lock); + +- if (!siw_orq_empty(qp)) { +- qp->tx_ctx.orq_fence = 1; +- rv = 0; +- } +- spin_unlock(&qp->orq_lock); ++ if (qp->attrs.orq_size && !siw_orq_empty(qp)) { ++ qp->tx_ctx.orq_fence = 1; ++ rv = 0; ++ } ++ spin_unlock(&qp->orq_lock); + +- } else if (wqe->sqe.opcode == SIW_OP_READ || +- wqe->sqe.opcode == SIW_OP_READ_LOCAL_INV) { +- struct siw_sqe *rreq; ++ } else if (wqe->sqe.opcode == SIW_OP_READ || ++ wqe->sqe.opcode == SIW_OP_READ_LOCAL_INV) { ++ struct siw_sqe *rreq; + +- wqe->sqe.num_sge = 1; ++ if (unlikely(!qp->attrs.orq_size)) { ++ /* We negotiated not to send READ req's */ ++ rv = -EINVAL; ++ goto out; ++ } ++ wqe->sqe.num_sge = 1; + +- spin_lock(&qp->orq_lock); ++ spin_lock(&qp->orq_lock); + +- rreq = orq_get_free(qp); +- if (rreq) { +- /* +- * Make an immediate copy in ORQ to be ready +- * to process loopback READ reply +- */ +- siw_read_to_orq(rreq, &wqe->sqe); +- qp->orq_put++; +- } else { +- qp->tx_ctx.orq_fence = 1; +- rv = 0; +- } +- spin_unlock(&qp->orq_lock); ++ rreq = orq_get_free(qp); ++ if (rreq) { ++ /* ++ * Make an immediate copy in ORQ to be ready ++ * to process loopback READ reply ++ */ ++ siw_read_to_orq(rreq, &wqe->sqe); ++ qp->orq_put++; ++ } else { ++ qp->tx_ctx.orq_fence = 1; ++ rv = 0; + } +- +- /* Clear SQE, can be re-used by application */ +- smp_store_mb(sqe->flags, 0); +- qp->sq_get++; +- } else { +- rv = 0; ++ spin_unlock(&qp->orq_lock); + } ++ ++ /* Clear SQE, can be re-used by application */ ++ smp_store_mb(sqe->flags, 0); ++ qp->sq_get++; + out: + if (unlikely(rv < 0)) { + siw_dbg_qp(qp, "error %d\n", rv); +@@ -1014,6 +968,65 @@ out: + return rv; + } + ++/* ++ * Must be called with SQ locked. ++ * To avoid complete SQ starvation by constant inbound READ requests, ++ * the active IRQ will not be served after qp->irq_burst, if the ++ * SQ has pending work. 
++ */ ++int siw_activate_tx(struct siw_qp *qp) ++{ ++ struct siw_sqe *irqe; ++ struct siw_wqe *wqe = tx_wqe(qp); ++ ++ if (!qp->attrs.irq_size) ++ return siw_activate_tx_from_sq(qp); ++ ++ irqe = &qp->irq[qp->irq_get % qp->attrs.irq_size]; ++ ++ if (!(irqe->flags & SIW_WQE_VALID)) ++ return siw_activate_tx_from_sq(qp); ++ ++ /* ++ * Avoid local WQE processing starvation in case ++ * of constant inbound READ request stream ++ */ ++ if (sq_get_next(qp) && ++qp->irq_burst >= SIW_IRQ_MAXBURST_SQ_ACTIVE) { ++ qp->irq_burst = 0; ++ return siw_activate_tx_from_sq(qp); ++ } ++ memset(wqe->mem, 0, sizeof(*wqe->mem) * SIW_MAX_SGE); ++ wqe->wr_status = SIW_WR_QUEUED; ++ ++ /* start READ RESPONSE */ ++ wqe->sqe.opcode = SIW_OP_READ_RESPONSE; ++ wqe->sqe.flags = 0; ++ if (irqe->num_sge) { ++ wqe->sqe.num_sge = 1; ++ wqe->sqe.sge[0].length = irqe->sge[0].length; ++ wqe->sqe.sge[0].laddr = irqe->sge[0].laddr; ++ wqe->sqe.sge[0].lkey = irqe->sge[0].lkey; ++ } else { ++ wqe->sqe.num_sge = 0; ++ } ++ ++ /* Retain original RREQ's message sequence number for ++ * potential error reporting cases. ++ */ ++ wqe->sqe.sge[1].length = irqe->sge[1].length; ++ ++ wqe->sqe.rkey = irqe->rkey; ++ wqe->sqe.raddr = irqe->raddr; ++ ++ wqe->processed = 0; ++ qp->irq_get++; ++ ++ /* mark current IRQ entry free */ ++ smp_store_mb(irqe->flags, 0); ++ ++ return 1; ++} ++ + /* + * Check if current CQ state qualifies for calling CQ completion + * handler. Must be called with CQ lock held. 
+diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c +index 4bd1f1f84057b..60116f20653c7 100644 +--- a/drivers/infiniband/sw/siw/siw_qp_rx.c ++++ b/drivers/infiniband/sw/siw/siw_qp_rx.c +@@ -680,6 +680,10 @@ static int siw_init_rresp(struct siw_qp *qp, struct siw_rx_stream *srx) + } + spin_lock_irqsave(&qp->sq_lock, flags); + ++ if (unlikely(!qp->attrs.irq_size)) { ++ run_sq = 0; ++ goto error_irq; ++ } + if (tx_work->wr_status == SIW_WR_IDLE) { + /* + * immediately schedule READ response w/o +@@ -712,8 +716,9 @@ static int siw_init_rresp(struct siw_qp *qp, struct siw_rx_stream *srx) + /* RRESP now valid as current TX wqe or placed into IRQ */ + smp_store_mb(resp->flags, SIW_WQE_VALID); + } else { +- pr_warn("siw: [QP %u]: irq %d exceeded %d\n", qp_id(qp), +- qp->irq_put % qp->attrs.irq_size, qp->attrs.irq_size); ++error_irq: ++ pr_warn("siw: [QP %u]: IRQ exceeded or null, size %d\n", ++ qp_id(qp), qp->attrs.irq_size); + + siw_init_terminate(qp, TERM_ERROR_LAYER_RDMAP, + RDMAP_ETYPE_REMOTE_OPERATION, +@@ -740,6 +745,9 @@ static int siw_orqe_start_rx(struct siw_qp *qp) + struct siw_sqe *orqe; + struct siw_wqe *wqe = NULL; + ++ if (unlikely(!qp->attrs.orq_size)) ++ return -EPROTO; ++ + /* make sure ORQ indices are current */ + smp_mb(); + +@@ -796,8 +804,8 @@ int siw_proc_rresp(struct siw_qp *qp) + */ + rv = siw_orqe_start_rx(qp); + if (rv) { +- pr_warn("siw: [QP %u]: ORQ empty at idx %d\n", +- qp_id(qp), qp->orq_get % qp->attrs.orq_size); ++ pr_warn("siw: [QP %u]: ORQ empty, size %d\n", ++ qp_id(qp), qp->attrs.orq_size); + goto error_term; + } + rv = siw_rresp_check_ntoh(srx, frx); +@@ -1290,11 +1298,13 @@ static int siw_rdmap_complete(struct siw_qp *qp, int error) + wc_status); + siw_wqe_put_mem(wqe, SIW_OP_READ); + +- if (!error) ++ if (!error) { + rv = siw_check_tx_fence(qp); +- else +- /* Disable current ORQ eleement */ +- WRITE_ONCE(orq_get_current(qp)->flags, 0); ++ } else { ++ /* Disable current ORQ element */ ++ if 
(qp->attrs.orq_size) ++ WRITE_ONCE(orq_get_current(qp)->flags, 0); ++ } + break; + + case RDMAP_RDMA_READ_REQ: +diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c +index d19d8325588b5..7989c4043db4e 100644 +--- a/drivers/infiniband/sw/siw/siw_qp_tx.c ++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c +@@ -1107,8 +1107,8 @@ next_wqe: + /* + * RREQ may have already been completed by inbound RRESP! + */ +- if (tx_type == SIW_OP_READ || +- tx_type == SIW_OP_READ_LOCAL_INV) { ++ if ((tx_type == SIW_OP_READ || ++ tx_type == SIW_OP_READ_LOCAL_INV) && qp->attrs.orq_size) { + /* Cleanup pending entry in ORQ */ + qp->orq_put--; + qp->orq[qp->orq_put % qp->attrs.orq_size].flags = 0; +diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c +index 7cf3242ffb41f..fb25e8011f5a4 100644 +--- a/drivers/infiniband/sw/siw/siw_verbs.c ++++ b/drivers/infiniband/sw/siw/siw_verbs.c +@@ -362,13 +362,23 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd, + if (rv) + goto err_out; + ++ num_sqe = attrs->cap.max_send_wr; ++ num_rqe = attrs->cap.max_recv_wr; ++ + /* All queue indices are derived from modulo operations + * on a free running 'get' (consumer) and 'put' (producer) + * unsigned counter. Having queue sizes at power of two + * avoids handling counter wrap around. 
+ */ +- num_sqe = roundup_pow_of_two(attrs->cap.max_send_wr); +- num_rqe = roundup_pow_of_two(attrs->cap.max_recv_wr); ++ if (num_sqe) ++ num_sqe = roundup_pow_of_two(num_sqe); ++ else { ++ /* Zero sized SQ is not supported */ ++ rv = -EINVAL; ++ goto err_out; ++ } ++ if (num_rqe) ++ num_rqe = roundup_pow_of_two(num_rqe); + + if (udata) + qp->sendq = vmalloc_user(num_sqe * sizeof(struct siw_sqe)); +@@ -376,7 +386,6 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd, + qp->sendq = vzalloc(num_sqe * sizeof(struct siw_sqe)); + + if (qp->sendq == NULL) { +- siw_dbg(base_dev, "SQ size %d alloc failed\n", num_sqe); + rv = -ENOMEM; + goto err_out_xa; + } +@@ -410,7 +419,6 @@ struct ib_qp *siw_create_qp(struct ib_pd *pd, + qp->recvq = vzalloc(num_rqe * sizeof(struct siw_rqe)); + + if (qp->recvq == NULL) { +- siw_dbg(base_dev, "RQ size %d alloc failed\n", num_rqe); + rv = -ENOMEM; + goto err_out_xa; + } +@@ -960,9 +968,9 @@ int siw_post_receive(struct ib_qp *base_qp, const struct ib_recv_wr *wr, + unsigned long flags; + int rv = 0; + +- if (qp->srq) { ++ if (qp->srq || qp->attrs.rq_size == 0) { + *bad_wr = wr; +- return -EOPNOTSUPP; /* what else from errno.h? 
*/ ++ return -EINVAL; + } + if (!rdma_is_kernel_res(&qp->base_qp.res)) { + siw_dbg_qp(qp, "no kernel post_recv for user mapped rq\n"); +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c +index ac4c49cbf1538..2ee3806f2df5b 100644 +--- a/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c ++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt-sysfs.c +@@ -408,6 +408,7 @@ int rtrs_clt_create_sess_files(struct rtrs_clt_sess *sess) + "%s", str); + if (err) { + pr_err("kobject_init_and_add: %d\n", err); ++ kobject_put(&sess->kobj); + return err; + } + err = sysfs_create_group(&sess->kobj, &rtrs_clt_sess_attr_group); +@@ -419,6 +420,7 @@ int rtrs_clt_create_sess_files(struct rtrs_clt_sess *sess) + &sess->kobj, "stats"); + if (err) { + pr_err("kobject_init_and_add: %d\n", err); ++ kobject_put(&sess->stats->kobj_stats); + goto remove_group; + } + +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c +index d54a77ebe1184..fc0e90915678a 100644 +--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c ++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c +@@ -31,6 +31,8 @@ + */ + #define RTRS_RECONNECT_SEED 8 + ++#define FIRST_CONN 0x01 ++ + MODULE_DESCRIPTION("RDMA Transport Client"); + MODULE_LICENSE("GPL"); + +@@ -1516,7 +1518,7 @@ static void destroy_con(struct rtrs_clt_con *con) + static int create_con_cq_qp(struct rtrs_clt_con *con) + { + struct rtrs_clt_sess *sess = to_clt_sess(con->c.sess); +- u16 wr_queue_size; ++ u32 max_send_wr, max_recv_wr, cq_size; + int err, cq_vector; + struct rtrs_msg_rkey_rsp *rsp; + +@@ -1536,7 +1538,8 @@ static int create_con_cq_qp(struct rtrs_clt_con *con) + * + 2 for drain and heartbeat + * in case qp gets into error state + */ +- wr_queue_size = SERVICE_CON_QUEUE_DEPTH * 3 + 2; ++ max_send_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2; ++ max_recv_wr = SERVICE_CON_QUEUE_DEPTH * 2 + 2; + /* We must be the first here */ + if (WARN_ON(sess->s.dev)) + return -EINVAL; +@@ -1568,25 +1571,29 @@ 
static int create_con_cq_qp(struct rtrs_clt_con *con) + + /* Shared between connections */ + sess->s.dev_ref++; +- wr_queue_size = ++ max_send_wr = + min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr, + /* QD * (REQ + RSP + FR REGS or INVS) + drain */ + sess->queue_depth * 3 + 1); ++ max_recv_wr = ++ min_t(int, sess->s.dev->ib_dev->attrs.max_qp_wr, ++ sess->queue_depth * 3 + 1); + } + /* alloc iu to recv new rkey reply when server reports flags set */ + if (sess->flags == RTRS_MSG_NEW_RKEY_F || con->c.cid == 0) { +- con->rsp_ius = rtrs_iu_alloc(wr_queue_size, sizeof(*rsp), ++ con->rsp_ius = rtrs_iu_alloc(max_recv_wr, sizeof(*rsp), + GFP_KERNEL, sess->s.dev->ib_dev, + DMA_FROM_DEVICE, + rtrs_clt_rdma_done); + if (!con->rsp_ius) + return -ENOMEM; +- con->queue_size = wr_queue_size; ++ con->queue_size = max_recv_wr; + } ++ cq_size = max_send_wr + max_recv_wr; + cq_vector = con->cpu % sess->s.dev->ib_dev->num_comp_vectors; + err = rtrs_cq_qp_create(&sess->s, &con->c, sess->max_send_sge, +- cq_vector, wr_queue_size, wr_queue_size, +- IB_POLL_SOFTIRQ); ++ cq_vector, cq_size, max_send_wr, ++ max_recv_wr, IB_POLL_SOFTIRQ); + /* + * In case of error we do not bother to clean previous allocations, + * since destroy_con_cq_qp() must be called. +@@ -1669,6 +1676,7 @@ static int rtrs_rdma_route_resolved(struct rtrs_clt_con *con) + .cid_num = cpu_to_le16(sess->s.con_num), + .recon_cnt = cpu_to_le16(sess->s.recon_cnt), + }; ++ msg.first_conn = sess->for_new_clt ? 
FIRST_CONN : 0; + uuid_copy(&msg.sess_uuid, &sess->s.uuid); + uuid_copy(&msg.paths_uuid, &clt->paths_uuid); + +@@ -1754,6 +1762,8 @@ static int rtrs_rdma_conn_established(struct rtrs_clt_con *con, + scnprintf(sess->hca_name, sizeof(sess->hca_name), + sess->s.dev->ib_dev->name); + sess->s.src_addr = con->c.cm_id->route.addr.src_addr; ++ /* set for_new_clt, to allow future reconnect on any path */ ++ sess->for_new_clt = 1; + } + + return 0; +@@ -2571,11 +2581,8 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num, + clt->dev.class = rtrs_clt_dev_class; + clt->dev.release = rtrs_clt_dev_release; + err = dev_set_name(&clt->dev, "%s", sessname); +- if (err) { +- free_percpu(clt->pcpu_path); +- kfree(clt); +- return ERR_PTR(err); +- } ++ if (err) ++ goto err; + /* + * Suppress user space notification until + * sysfs files are created +@@ -2583,29 +2590,31 @@ static struct rtrs_clt *alloc_clt(const char *sessname, size_t paths_num, + dev_set_uevent_suppress(&clt->dev, true); + err = device_register(&clt->dev); + if (err) { +- free_percpu(clt->pcpu_path); + put_device(&clt->dev); +- return ERR_PTR(err); ++ goto err; + } + + clt->kobj_paths = kobject_create_and_add("paths", &clt->dev.kobj); + if (!clt->kobj_paths) { +- free_percpu(clt->pcpu_path); +- device_unregister(&clt->dev); +- return NULL; ++ err = -ENOMEM; ++ goto err_dev; + } + err = rtrs_clt_create_sysfs_root_files(clt); + if (err) { +- free_percpu(clt->pcpu_path); + kobject_del(clt->kobj_paths); + kobject_put(clt->kobj_paths); +- device_unregister(&clt->dev); +- return ERR_PTR(err); ++ goto err_dev; + } + dev_set_uevent_suppress(&clt->dev, false); + kobject_uevent(&clt->dev.kobj, KOBJ_ADD); + + return clt; ++err_dev: ++ device_unregister(&clt->dev); ++err: ++ free_percpu(clt->pcpu_path); ++ kfree(clt); ++ return ERR_PTR(err); + } + + static void wait_for_inflight_permits(struct rtrs_clt *clt) +@@ -2678,6 +2687,8 @@ struct rtrs_clt *rtrs_clt_open(struct rtrs_clt_ops *ops, + err = 
PTR_ERR(sess); + goto close_all_sess; + } ++ if (!i) ++ sess->for_new_clt = 1; + list_add_tail_rcu(&sess->s.entry, &clt->paths_list); + + err = init_sess(sess); +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.h b/drivers/infiniband/ulp/rtrs/rtrs-clt.h +index 167acd3c90fcc..22da5d50c22c4 100644 +--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.h ++++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.h +@@ -142,6 +142,7 @@ struct rtrs_clt_sess { + int max_send_sge; + u32 flags; + struct kobject kobj; ++ u8 for_new_clt; + struct rtrs_clt_stats *stats; + /* cache hca_port and hca_name to display in sysfs */ + u8 hca_port; +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-pri.h b/drivers/infiniband/ulp/rtrs/rtrs-pri.h +index b8e43dc4d95ab..2e1d2f7e372ac 100644 +--- a/drivers/infiniband/ulp/rtrs/rtrs-pri.h ++++ b/drivers/infiniband/ulp/rtrs/rtrs-pri.h +@@ -188,7 +188,9 @@ struct rtrs_msg_conn_req { + __le16 recon_cnt; + uuid_t sess_uuid; + uuid_t paths_uuid; +- u8 reserved[12]; ++ u8 first_conn : 1; ++ u8 reserved_bits : 7; ++ u8 reserved[11]; + }; + + /** +@@ -304,8 +306,9 @@ int rtrs_post_rdma_write_imm_empty(struct rtrs_con *con, struct ib_cqe *cqe, + struct ib_send_wr *head); + + int rtrs_cq_qp_create(struct rtrs_sess *rtrs_sess, struct rtrs_con *con, +- u32 max_send_sge, int cq_vector, u16 cq_size, +- u16 wr_queue_size, enum ib_poll_context poll_ctx); ++ u32 max_send_sge, int cq_vector, int cq_size, ++ u32 max_send_wr, u32 max_recv_wr, ++ enum ib_poll_context poll_ctx); + void rtrs_cq_qp_destroy(struct rtrs_con *con); + + void rtrs_init_hb(struct rtrs_sess *sess, struct ib_cqe *cqe, +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c +index 07fbb063555d3..39708ab4f26e5 100644 +--- a/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c ++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv-sysfs.c +@@ -53,6 +53,8 @@ static ssize_t rtrs_srv_disconnect_store(struct kobject *kobj, + sockaddr_to_str((struct sockaddr *)&sess->s.dst_addr, str, 
sizeof(str)); + + rtrs_info(s, "disconnect for path %s requested\n", str); ++ /* first remove sysfs itself to avoid deadlock */ ++ sysfs_remove_file_self(&sess->kobj, &attr->attr); + close_sess(sess); + + return count; +@@ -184,6 +186,7 @@ static int rtrs_srv_create_once_sysfs_root_folders(struct rtrs_srv_sess *sess) + err = -ENOMEM; + pr_err("kobject_create_and_add(): %d\n", err); + device_del(&srv->dev); ++ put_device(&srv->dev); + goto unlock; + } + dev_set_uevent_suppress(&srv->dev, false); +@@ -209,6 +212,7 @@ rtrs_srv_destroy_once_sysfs_root_folders(struct rtrs_srv_sess *sess) + kobject_put(srv->kobj_paths); + mutex_unlock(&srv->paths_mutex); + device_del(&srv->dev); ++ put_device(&srv->dev); + } else { + mutex_unlock(&srv->paths_mutex); + } +@@ -237,6 +241,7 @@ static int rtrs_srv_create_stats_files(struct rtrs_srv_sess *sess) + &sess->kobj, "stats"); + if (err) { + rtrs_err(s, "kobject_init_and_add(): %d\n", err); ++ kobject_put(&sess->stats->kobj_stats); + return err; + } + err = sysfs_create_group(&sess->stats->kobj_stats, +@@ -293,8 +298,8 @@ remove_group: + sysfs_remove_group(&sess->kobj, &rtrs_srv_sess_attr_group); + put_kobj: + kobject_del(&sess->kobj); +- kobject_put(&sess->kobj); + destroy_root: ++ kobject_put(&sess->kobj); + rtrs_srv_destroy_once_sysfs_root_folders(sess); + + return err; +@@ -305,7 +310,7 @@ void rtrs_srv_destroy_sess_files(struct rtrs_srv_sess *sess) + if (sess->kobj.state_in_sysfs) { + kobject_del(&sess->stats->kobj_stats); + kobject_put(&sess->stats->kobj_stats); +- kobject_del(&sess->kobj); ++ sysfs_remove_group(&sess->kobj, &rtrs_srv_sess_attr_group); + kobject_put(&sess->kobj); + + rtrs_srv_destroy_once_sysfs_root_folders(sess); +diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c +index 1cb778aff3c59..f009a6907169c 100644 +--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c ++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c +@@ -232,7 +232,8 @@ static int rdma_write_sg(struct rtrs_srv_op *id) + 
dma_addr_t dma_addr = sess->dma_addr[id->msg_id]; + struct rtrs_srv_mr *srv_mr; + struct rtrs_srv *srv = sess->srv; +- struct ib_send_wr inv_wr, imm_wr; ++ struct ib_send_wr inv_wr; ++ struct ib_rdma_wr imm_wr; + struct ib_rdma_wr *wr = NULL; + enum ib_send_flags flags; + size_t sg_cnt; +@@ -277,21 +278,22 @@ static int rdma_write_sg(struct rtrs_srv_op *id) + WARN_ON_ONCE(rkey != wr->rkey); + + wr->wr.opcode = IB_WR_RDMA_WRITE; ++ wr->wr.wr_cqe = &io_comp_cqe; + wr->wr.ex.imm_data = 0; + wr->wr.send_flags = 0; + + if (need_inval && always_invalidate) { + wr->wr.next = &rwr.wr; + rwr.wr.next = &inv_wr; +- inv_wr.next = &imm_wr; ++ inv_wr.next = &imm_wr.wr; + } else if (always_invalidate) { + wr->wr.next = &rwr.wr; +- rwr.wr.next = &imm_wr; ++ rwr.wr.next = &imm_wr.wr; + } else if (need_inval) { + wr->wr.next = &inv_wr; +- inv_wr.next = &imm_wr; ++ inv_wr.next = &imm_wr.wr; + } else { +- wr->wr.next = &imm_wr; ++ wr->wr.next = &imm_wr.wr; + } + /* + * From time to time we have to post signaled sends, +@@ -304,16 +306,18 @@ static int rdma_write_sg(struct rtrs_srv_op *id) + inv_wr.sg_list = NULL; + inv_wr.num_sge = 0; + inv_wr.opcode = IB_WR_SEND_WITH_INV; ++ inv_wr.wr_cqe = &io_comp_cqe; + inv_wr.send_flags = 0; + inv_wr.ex.invalidate_rkey = rkey; + } + +- imm_wr.next = NULL; ++ imm_wr.wr.next = NULL; + if (always_invalidate) { + struct rtrs_msg_rkey_rsp *msg; + + srv_mr = &sess->mrs[id->msg_id]; + rwr.wr.opcode = IB_WR_REG_MR; ++ rwr.wr.wr_cqe = &local_reg_cqe; + rwr.wr.num_sge = 0; + rwr.mr = srv_mr->mr; + rwr.wr.send_flags = 0; +@@ -328,22 +332,22 @@ static int rdma_write_sg(struct rtrs_srv_op *id) + list.addr = srv_mr->iu->dma_addr; + list.length = sizeof(*msg); + list.lkey = sess->s.dev->ib_pd->local_dma_lkey; +- imm_wr.sg_list = &list; +- imm_wr.num_sge = 1; +- imm_wr.opcode = IB_WR_SEND_WITH_IMM; ++ imm_wr.wr.sg_list = &list; ++ imm_wr.wr.num_sge = 1; ++ imm_wr.wr.opcode = IB_WR_SEND_WITH_IMM; + ib_dma_sync_single_for_device(sess->s.dev->ib_dev, + 
srv_mr->iu->dma_addr, + srv_mr->iu->size, DMA_TO_DEVICE); + } else { +- imm_wr.sg_list = NULL; +- imm_wr.num_sge = 0; +- imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM; ++ imm_wr.wr.sg_list = NULL; ++ imm_wr.wr.num_sge = 0; ++ imm_wr.wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM; + } +- imm_wr.send_flags = flags; +- imm_wr.ex.imm_data = cpu_to_be32(rtrs_to_io_rsp_imm(id->msg_id, ++ imm_wr.wr.send_flags = flags; ++ imm_wr.wr.ex.imm_data = cpu_to_be32(rtrs_to_io_rsp_imm(id->msg_id, + 0, need_inval)); + +- imm_wr.wr_cqe = &io_comp_cqe; ++ imm_wr.wr.wr_cqe = &io_comp_cqe; + ib_dma_sync_single_for_device(sess->s.dev->ib_dev, dma_addr, + offset, DMA_BIDIRECTIONAL); + +@@ -370,7 +374,8 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id, + { + struct rtrs_sess *s = con->c.sess; + struct rtrs_srv_sess *sess = to_srv_sess(s); +- struct ib_send_wr inv_wr, imm_wr, *wr = NULL; ++ struct ib_send_wr inv_wr, *wr = NULL; ++ struct ib_rdma_wr imm_wr; + struct ib_reg_wr rwr; + struct rtrs_srv *srv = sess->srv; + struct rtrs_srv_mr *srv_mr; +@@ -389,6 +394,7 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id, + + if (need_inval) { + if (likely(sg_cnt)) { ++ inv_wr.wr_cqe = &io_comp_cqe; + inv_wr.sg_list = NULL; + inv_wr.num_sge = 0; + inv_wr.opcode = IB_WR_SEND_WITH_INV; +@@ -406,15 +412,15 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id, + if (need_inval && always_invalidate) { + wr = &inv_wr; + inv_wr.next = &rwr.wr; +- rwr.wr.next = &imm_wr; ++ rwr.wr.next = &imm_wr.wr; + } else if (always_invalidate) { + wr = &rwr.wr; +- rwr.wr.next = &imm_wr; ++ rwr.wr.next = &imm_wr.wr; + } else if (need_inval) { + wr = &inv_wr; +- inv_wr.next = &imm_wr; ++ inv_wr.next = &imm_wr.wr; + } else { +- wr = &imm_wr; ++ wr = &imm_wr.wr; + } + /* + * From time to time we have to post signalled sends, +@@ -423,14 +429,15 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id, + flags = 
(atomic_inc_return(&con->wr_cnt) % srv->queue_depth) ? + 0 : IB_SEND_SIGNALED; + imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval); +- imm_wr.next = NULL; ++ imm_wr.wr.next = NULL; + if (always_invalidate) { + struct ib_sge list; + struct rtrs_msg_rkey_rsp *msg; + + srv_mr = &sess->mrs[id->msg_id]; +- rwr.wr.next = &imm_wr; ++ rwr.wr.next = &imm_wr.wr; + rwr.wr.opcode = IB_WR_REG_MR; ++ rwr.wr.wr_cqe = &local_reg_cqe; + rwr.wr.num_sge = 0; + rwr.wr.send_flags = 0; + rwr.mr = srv_mr->mr; +@@ -445,21 +452,21 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id, + list.addr = srv_mr->iu->dma_addr; + list.length = sizeof(*msg); + list.lkey = sess->s.dev->ib_pd->local_dma_lkey; +- imm_wr.sg_list = &list; +- imm_wr.num_sge = 1; +- imm_wr.opcode = IB_WR_SEND_WITH_IMM; ++ imm_wr.wr.sg_list = &list; ++ imm_wr.wr.num_sge = 1; ++ imm_wr.wr.opcode = IB_WR_SEND_WITH_IMM; + ib_dma_sync_single_for_device(sess->s.dev->ib_dev, + srv_mr->iu->dma_addr, + srv_mr->iu->size, DMA_TO_DEVICE); + } else { +- imm_wr.sg_list = NULL; +- imm_wr.num_sge = 0; +- imm_wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM; ++ imm_wr.wr.sg_list = NULL; ++ imm_wr.wr.num_sge = 0; ++ imm_wr.wr.opcode = IB_WR_RDMA_WRITE_WITH_IMM; + } +- imm_wr.send_flags = flags; +- imm_wr.wr_cqe = &io_comp_cqe; ++ imm_wr.wr.send_flags = flags; ++ imm_wr.wr.wr_cqe = &io_comp_cqe; + +- imm_wr.ex.imm_data = cpu_to_be32(imm); ++ imm_wr.wr.ex.imm_data = cpu_to_be32(imm); + + err = ib_post_send(id->con->c.qp, wr, NULL); + if (unlikely(err)) +@@ -1343,7 +1350,8 @@ static void free_srv(struct rtrs_srv *srv) + } + + static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx, +- const uuid_t *paths_uuid) ++ const uuid_t *paths_uuid, ++ bool first_conn) + { + struct rtrs_srv *srv; + int i; +@@ -1356,13 +1364,18 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx, + return srv; + } + } ++ mutex_unlock(&ctx->srv_mutex); ++ /* ++ * If this request is not the first connection request from the 
++ * client for this session then fail and return error. ++ */ ++ if (!first_conn) ++ return ERR_PTR(-ENXIO); + + /* need to allocate a new srv */ + srv = kzalloc(sizeof(*srv), GFP_KERNEL); +- if (!srv) { +- mutex_unlock(&ctx->srv_mutex); +- return NULL; +- } ++ if (!srv) ++ return ERR_PTR(-ENOMEM); + + INIT_LIST_HEAD(&srv->paths_list); + mutex_init(&srv->paths_mutex); +@@ -1372,8 +1385,6 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx, + srv->ctx = ctx; + device_initialize(&srv->dev); + srv->dev.release = rtrs_srv_dev_release; +- list_add(&srv->ctx_list, &ctx->srv_list); +- mutex_unlock(&ctx->srv_mutex); + + srv->chunks = kcalloc(srv->queue_depth, sizeof(*srv->chunks), + GFP_KERNEL); +@@ -1386,6 +1397,9 @@ static struct rtrs_srv *get_or_create_srv(struct rtrs_srv_ctx *ctx, + goto err_free_chunks; + } + refcount_set(&srv->refcount, 1); ++ mutex_lock(&ctx->srv_mutex); ++ list_add(&srv->ctx_list, &ctx->srv_list); ++ mutex_unlock(&ctx->srv_mutex); + + return srv; + +@@ -1396,7 +1410,7 @@ err_free_chunks: + + err_free_srv: + kfree(srv); +- return NULL; ++ return ERR_PTR(-ENOMEM); + } + + static void put_srv(struct rtrs_srv *srv) +@@ -1476,10 +1490,12 @@ static bool __is_path_w_addr_exists(struct rtrs_srv *srv, + + static void free_sess(struct rtrs_srv_sess *sess) + { +- if (sess->kobj.state_in_sysfs) ++ if (sess->kobj.state_in_sysfs) { ++ kobject_del(&sess->kobj); + kobject_put(&sess->kobj); +- else ++ } else { + kfree(sess); ++ } + } + + static void rtrs_srv_close_work(struct work_struct *work) +@@ -1601,7 +1617,7 @@ static int create_con(struct rtrs_srv_sess *sess, + struct rtrs_sess *s = &sess->s; + struct rtrs_srv_con *con; + +- u16 cq_size, wr_queue_size; ++ u32 cq_size, wr_queue_size; + int err, cq_vector; + + con = kzalloc(sizeof(*con), GFP_KERNEL); +@@ -1615,7 +1631,7 @@ static int create_con(struct rtrs_srv_sess *sess, + con->c.cm_id = cm_id; + con->c.sess = &sess->s; + con->c.cid = cid; +- atomic_set(&con->wr_cnt, 0); ++ 
atomic_set(&con->wr_cnt, 1); + + if (con->c.cid == 0) { + /* +@@ -1645,7 +1661,8 @@ static int create_con(struct rtrs_srv_sess *sess, + + /* TODO: SOFTIRQ can be faster, but be careful with softirq context */ + err = rtrs_cq_qp_create(&sess->s, &con->c, 1, cq_vector, cq_size, +- wr_queue_size, IB_POLL_WORKQUEUE); ++ wr_queue_size, wr_queue_size, ++ IB_POLL_WORKQUEUE); + if (err) { + rtrs_err(s, "rtrs_cq_qp_create(), err: %d\n", err); + goto free_con; +@@ -1796,13 +1813,9 @@ static int rtrs_rdma_connect(struct rdma_cm_id *cm_id, + goto reject_w_econnreset; + } + recon_cnt = le16_to_cpu(msg->recon_cnt); +- srv = get_or_create_srv(ctx, &msg->paths_uuid); +- /* +- * "refcount == 0" happens if a previous thread calls get_or_create_srv +- * allocate srv, but chunks of srv are not allocated yet. +- */ +- if (!srv || refcount_read(&srv->refcount) == 0) { +- err = -ENOMEM; ++ srv = get_or_create_srv(ctx, &msg->paths_uuid, msg->first_conn); ++ if (IS_ERR(srv)) { ++ err = PTR_ERR(srv); + goto reject_w_err; + } + mutex_lock(&srv->paths_mutex); +@@ -1877,8 +1890,8 @@ reject_w_econnreset: + return rtrs_rdma_do_reject(cm_id, -ECONNRESET); + + close_and_return_err: +- close_sess(sess); + mutex_unlock(&srv->paths_mutex); ++ close_sess(sess); + + return err; + } +diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c +index ff1093d6e4bc9..23e5452e10c46 100644 +--- a/drivers/infiniband/ulp/rtrs/rtrs.c ++++ b/drivers/infiniband/ulp/rtrs/rtrs.c +@@ -246,14 +246,14 @@ static int create_cq(struct rtrs_con *con, int cq_vector, u16 cq_size, + } + + static int create_qp(struct rtrs_con *con, struct ib_pd *pd, +- u16 wr_queue_size, u32 max_sge) ++ u32 max_send_wr, u32 max_recv_wr, u32 max_sge) + { + struct ib_qp_init_attr init_attr = {NULL}; + struct rdma_cm_id *cm_id = con->cm_id; + int ret; + +- init_attr.cap.max_send_wr = wr_queue_size; +- init_attr.cap.max_recv_wr = wr_queue_size; ++ init_attr.cap.max_send_wr = max_send_wr; ++ init_attr.cap.max_recv_wr = 
max_recv_wr; + init_attr.cap.max_recv_sge = 1; + init_attr.event_handler = qp_event_handler; + init_attr.qp_context = con; +@@ -275,8 +275,9 @@ static int create_qp(struct rtrs_con *con, struct ib_pd *pd, + } + + int rtrs_cq_qp_create(struct rtrs_sess *sess, struct rtrs_con *con, +- u32 max_send_sge, int cq_vector, u16 cq_size, +- u16 wr_queue_size, enum ib_poll_context poll_ctx) ++ u32 max_send_sge, int cq_vector, int cq_size, ++ u32 max_send_wr, u32 max_recv_wr, ++ enum ib_poll_context poll_ctx) + { + int err; + +@@ -284,7 +285,8 @@ int rtrs_cq_qp_create(struct rtrs_sess *sess, struct rtrs_con *con, + if (err) + return err; + +- err = create_qp(con, sess->dev->ib_pd, wr_queue_size, max_send_sge); ++ err = create_qp(con, sess->dev->ib_pd, max_send_wr, max_recv_wr, ++ max_send_sge); + if (err) { + ib_free_cq(con->cq); + con->cq = NULL; +diff --git a/drivers/input/joydev.c b/drivers/input/joydev.c +index a2b5fbba2d3b3..430dc69750048 100644 +--- a/drivers/input/joydev.c ++++ b/drivers/input/joydev.c +@@ -456,7 +456,7 @@ static int joydev_handle_JSIOCSAXMAP(struct joydev *joydev, + if (IS_ERR(abspam)) + return PTR_ERR(abspam); + +- for (i = 0; i < joydev->nabs; i++) { ++ for (i = 0; i < len && i < joydev->nabs; i++) { + if (abspam[i] > ABS_MAX) { + retval = -EINVAL; + goto out; +@@ -480,6 +480,9 @@ static int joydev_handle_JSIOCSBTNMAP(struct joydev *joydev, + int i; + int retval = 0; + ++ if (len % sizeof(*keypam)) ++ return -EINVAL; ++ + len = min(len, sizeof(joydev->keypam)); + + /* Validate the map. 
*/ +@@ -487,7 +490,7 @@ static int joydev_handle_JSIOCSBTNMAP(struct joydev *joydev, + if (IS_ERR(keypam)) + return PTR_ERR(keypam); + +- for (i = 0; i < joydev->nkey; i++) { ++ for (i = 0; i < (len / 2) && i < joydev->nkey; i++) { + if (keypam[i] > KEY_MAX || keypam[i] < BTN_MISC) { + retval = -EINVAL; + goto out; +diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c +index 3d004ca76b6ed..e5f1e3cf9179f 100644 +--- a/drivers/input/joystick/xpad.c ++++ b/drivers/input/joystick/xpad.c +@@ -305,6 +305,7 @@ static const struct xpad_device { + { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 }, + { 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 }, + { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE }, ++ { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE }, + { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 }, + { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE }, + { 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, +diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h +index c74b020796a94..9119e12a57784 100644 +--- a/drivers/input/serio/i8042-x86ia64io.h ++++ b/drivers/input/serio/i8042-x86ia64io.h +@@ -588,6 +588,10 @@ static const struct dmi_system_id i8042_dmi_noselftest_table[] = { + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), + DMI_MATCH(DMI_CHASSIS_TYPE, "10"), /* Notebook */ + }, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_MATCH(DMI_CHASSIS_TYPE, "31"), /* Convertible Notebook */ ++ }, + }, + { } + }; +diff --git a/drivers/input/serio/serport.c b/drivers/input/serio/serport.c +index 8ac970a423de6..33e9d9bfd036f 100644 +--- a/drivers/input/serio/serport.c ++++ b/drivers/input/serio/serport.c +@@ -156,7 +156,9 @@ out: + * returning 0 characters. 
+ */ + +-static ssize_t serport_ldisc_read(struct tty_struct * tty, struct file * file, unsigned char __user * buf, size_t nr) ++static ssize_t serport_ldisc_read(struct tty_struct * tty, struct file * file, ++ unsigned char *kbuf, size_t nr, ++ void **cookie, unsigned long offset) + { + struct serport *serport = (struct serport*) tty->disc_data; + struct serio *serio; +diff --git a/drivers/input/touchscreen/elo.c b/drivers/input/touchscreen/elo.c +index e0bacd34866ad..96173232e53fe 100644 +--- a/drivers/input/touchscreen/elo.c ++++ b/drivers/input/touchscreen/elo.c +@@ -341,8 +341,10 @@ static int elo_connect(struct serio *serio, struct serio_driver *drv) + switch (elo->id) { + + case 0: /* 10-byte protocol */ +- if (elo_setup_10(elo)) ++ if (elo_setup_10(elo)) { ++ err = -EIO; + goto fail3; ++ } + + break; + +diff --git a/drivers/input/touchscreen/raydium_i2c_ts.c b/drivers/input/touchscreen/raydium_i2c_ts.c +index 603a948460d64..4d2d22a869773 100644 +--- a/drivers/input/touchscreen/raydium_i2c_ts.c ++++ b/drivers/input/touchscreen/raydium_i2c_ts.c +@@ -445,6 +445,7 @@ static int raydium_i2c_write_object(struct i2c_client *client, + enum raydium_bl_ack state) + { + int error; ++ static const u8 cmd[] = { 0xFF, 0x39 }; + + error = raydium_i2c_send(client, RM_CMD_BOOT_WRT, data, len); + if (error) { +@@ -453,7 +454,7 @@ static int raydium_i2c_write_object(struct i2c_client *client, + return error; + } + +- error = raydium_i2c_send(client, RM_CMD_BOOT_ACK, NULL, 0); ++ error = raydium_i2c_send(client, RM_CMD_BOOT_ACK, cmd, sizeof(cmd)); + if (error) { + dev_err(&client->dev, "Ack obj command failed: %d\n", error); + return error; +diff --git a/drivers/input/touchscreen/sur40.c b/drivers/input/touchscreen/sur40.c +index 620cdd7d214a6..12f2562b0141b 100644 +--- a/drivers/input/touchscreen/sur40.c ++++ b/drivers/input/touchscreen/sur40.c +@@ -787,6 +787,7 @@ static int sur40_probe(struct usb_interface *interface, + dev_err(&interface->dev, + "Unable to register video 
controls."); + v4l2_ctrl_handler_free(&sur40->hdl); ++ error = sur40->hdl.error; + goto err_unreg_v4l2; + } + +diff --git a/drivers/input/touchscreen/zinitix.c b/drivers/input/touchscreen/zinitix.c +index 1acc2eb2bcb33..fd8b4e9f08a21 100644 +--- a/drivers/input/touchscreen/zinitix.c ++++ b/drivers/input/touchscreen/zinitix.c +@@ -190,7 +190,7 @@ static int zinitix_write_cmd(struct i2c_client *client, u16 reg) + return 0; + } + +-static bool zinitix_init_touch(struct bt541_ts_data *bt541) ++static int zinitix_init_touch(struct bt541_ts_data *bt541) + { + struct i2c_client *client = bt541->client; + int i; +diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +index e634bbe605730..7067b7c116260 100644 +--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c ++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c +@@ -2267,7 +2267,7 @@ static void arm_smmu_iotlb_sync(struct iommu_domain *domain, + { + struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain); + +- arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start, ++ arm_smmu_tlb_inv_range(gather->start, gather->end - gather->start + 1, + gather->pgsize, true, smmu_domain); + } + +diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c +index 0eba5e883e3f1..63f7173b241f0 100644 +--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c ++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c +@@ -65,6 +65,8 @@ static int qcom_smmu_cfg_probe(struct arm_smmu_device *smmu) + smr = arm_smmu_gr0_read(smmu, ARM_SMMU_GR0_SMR(i)); + + if (FIELD_GET(ARM_SMMU_SMR_VALID, smr)) { ++ /* Ignore valid bit for SMR mask extraction. 
*/ ++ smr &= ~ARM_SMMU_SMR_VALID; + smmu->smrs[i].id = FIELD_GET(ARM_SMMU_SMR_ID, smr); + smmu->smrs[i].mask = FIELD_GET(ARM_SMMU_SMR_MASK, smr); + smmu->smrs[i].valid = true; +diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c +index 0f4dc25d46c92..0d9adce6d812f 100644 +--- a/drivers/iommu/iommu.c ++++ b/drivers/iommu/iommu.c +@@ -2409,9 +2409,6 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova, + size -= pgsize; + } + +- if (ops->iotlb_sync_map) +- ops->iotlb_sync_map(domain); +- + /* unroll mapping in case something went wrong */ + if (ret) + iommu_unmap(domain, orig_iova, orig_size - size); +@@ -2421,18 +2418,31 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova, + return ret; + } + ++static int _iommu_map(struct iommu_domain *domain, unsigned long iova, ++ phys_addr_t paddr, size_t size, int prot, gfp_t gfp) ++{ ++ const struct iommu_ops *ops = domain->ops; ++ int ret; ++ ++ ret = __iommu_map(domain, iova, paddr, size, prot, gfp); ++ if (ret == 0 && ops->iotlb_sync_map) ++ ops->iotlb_sync_map(domain); ++ ++ return ret; ++} ++ + int iommu_map(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot) + { + might_sleep(); +- return __iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL); ++ return _iommu_map(domain, iova, paddr, size, prot, GFP_KERNEL); + } + EXPORT_SYMBOL_GPL(iommu_map); + + int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova, + phys_addr_t paddr, size_t size, int prot) + { +- return __iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC); ++ return _iommu_map(domain, iova, paddr, size, prot, GFP_ATOMIC); + } + EXPORT_SYMBOL_GPL(iommu_map_atomic); + +@@ -2516,6 +2526,7 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova, + struct scatterlist *sg, unsigned int nents, int prot, + gfp_t gfp) + { ++ const struct iommu_ops *ops = domain->ops; + size_t len = 0, mapped = 0; + phys_addr_t start; + unsigned int i = 0; 
+@@ -2546,6 +2557,8 @@ static size_t __iommu_map_sg(struct iommu_domain *domain, unsigned long iova, + sg = sg_next(sg); + } + ++ if (ops->iotlb_sync_map) ++ ops->iotlb_sync_map(domain); + return mapped; + + out_err: +diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c +index c072cee532c20..19387d2bc4b4f 100644 +--- a/drivers/iommu/mtk_iommu.c ++++ b/drivers/iommu/mtk_iommu.c +@@ -445,7 +445,7 @@ static void mtk_iommu_iotlb_sync(struct iommu_domain *domain, + struct iommu_iotlb_gather *gather) + { + struct mtk_iommu_data *data = mtk_iommu_get_m4u_data(); +- size_t length = gather->end - gather->start; ++ size_t length = gather->end - gather->start + 1; + + if (gather->start == ULONG_MAX) + return; +diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig +index 2aa79c32ee228..6156a065681bc 100644 +--- a/drivers/irqchip/Kconfig ++++ b/drivers/irqchip/Kconfig +@@ -464,7 +464,8 @@ config IMX_IRQSTEER + Support for the i.MX IRQSTEER interrupt multiplexer/remapper. + + config IMX_INTMUX +- def_bool y if ARCH_MXC || COMPILE_TEST ++ bool "i.MX INTMUX support" if COMPILE_TEST ++ default y if ARCH_MXC + select IRQ_DOMAIN + help + Support for the i.MX INTMUX interrupt multiplexer. 
+diff --git a/drivers/irqchip/irq-loongson-pch-msi.c b/drivers/irqchip/irq-loongson-pch-msi.c +index 12aeeab432893..32562b7e681b5 100644 +--- a/drivers/irqchip/irq-loongson-pch-msi.c ++++ b/drivers/irqchip/irq-loongson-pch-msi.c +@@ -225,7 +225,7 @@ static int pch_msi_init(struct device_node *node, + goto err_priv; + } + +- priv->msi_map = bitmap_alloc(priv->num_irqs, GFP_KERNEL); ++ priv->msi_map = bitmap_zalloc(priv->num_irqs, GFP_KERNEL); + if (!priv->msi_map) { + ret = -ENOMEM; + goto err_priv; +diff --git a/drivers/macintosh/adb-iop.c b/drivers/macintosh/adb-iop.c +index 0ee3272491501..2633bc254935c 100644 +--- a/drivers/macintosh/adb-iop.c ++++ b/drivers/macintosh/adb-iop.c +@@ -19,6 +19,7 @@ + #include + #include + #include ++#include + + #include + +@@ -249,7 +250,7 @@ static void adb_iop_set_ap_complete(struct iop_msg *msg) + { + struct adb_iopmsg *amsg = (struct adb_iopmsg *)msg->message; + +- autopoll_devs = (amsg->data[1] << 8) | amsg->data[0]; ++ autopoll_devs = get_unaligned_be16(amsg->data); + if (autopoll_devs & (1 << autopoll_addr)) + return; + autopoll_addr = autopoll_devs ? (ffs(autopoll_devs) - 1) : 0; +@@ -266,8 +267,7 @@ static int adb_iop_autopoll(int devs) + amsg.flags = ADB_IOP_SET_AUTOPOLL | (mask ? 
ADB_IOP_AUTOPOLL : 0); + amsg.count = 2; + amsg.cmd = 0; +- amsg.data[0] = mask & 0xFF; +- amsg.data[1] = (mask >> 8) & 0xFF; ++ put_unaligned_be16(mask, amsg.data); + + iop_send_message(ADB_IOP, ADB_CHAN, NULL, sizeof(amsg), (__u8 *)&amsg, + adb_iop_set_ap_complete); +diff --git a/drivers/mailbox/sprd-mailbox.c b/drivers/mailbox/sprd-mailbox.c +index f6fab24ae8a9a..4c325301a2fe8 100644 +--- a/drivers/mailbox/sprd-mailbox.c ++++ b/drivers/mailbox/sprd-mailbox.c +@@ -35,7 +35,7 @@ + #define SPRD_MBOX_IRQ_CLR BIT(0) + + /* Bit and mask definiation for outbox's SPRD_MBOX_FIFO_STS register */ +-#define SPRD_OUTBOX_FIFO_FULL BIT(0) ++#define SPRD_OUTBOX_FIFO_FULL BIT(2) + #define SPRD_OUTBOX_FIFO_WR_SHIFT 16 + #define SPRD_OUTBOX_FIFO_RD_SHIFT 24 + #define SPRD_OUTBOX_FIFO_POS_MASK GENMASK(7, 0) +diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h +index 1d57f48307e66..e8bf4f752e8be 100644 +--- a/drivers/md/bcache/bcache.h ++++ b/drivers/md/bcache/bcache.h +@@ -1001,6 +1001,7 @@ void bch_write_bdev_super(struct cached_dev *dc, struct closure *parent); + + extern struct workqueue_struct *bcache_wq; + extern struct workqueue_struct *bch_journal_wq; ++extern struct workqueue_struct *bch_flush_wq; + extern struct mutex bch_register_lock; + extern struct list_head bch_cache_sets; + +@@ -1042,5 +1043,7 @@ void bch_debug_exit(void); + void bch_debug_init(void); + void bch_request_exit(void); + int bch_request_init(void); ++void bch_btree_exit(void); ++int bch_btree_init(void); + + #endif /* _BCACHE_H */ +diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c +index 910df242c83df..fe6dce125aba2 100644 +--- a/drivers/md/bcache/btree.c ++++ b/drivers/md/bcache/btree.c +@@ -99,6 +99,8 @@ + #define PTR_HASH(c, k) \ + (((k)->ptr[0] >> c->bucket_bits) | PTR_GEN(k, 0)) + ++static struct workqueue_struct *btree_io_wq; ++ + #define insert_lock(s, b) ((b)->level <= (s)->lock) + + +@@ -308,7 +310,7 @@ static void __btree_node_write_done(struct closure *cl) + 
btree_complete_write(b, w); + + if (btree_node_dirty(b)) +- schedule_delayed_work(&b->work, 30 * HZ); ++ queue_delayed_work(btree_io_wq, &b->work, 30 * HZ); + + closure_return_with_destructor(cl, btree_node_write_unlock); + } +@@ -481,7 +483,7 @@ static void bch_btree_leaf_dirty(struct btree *b, atomic_t *journal_ref) + BUG_ON(!i->keys); + + if (!btree_node_dirty(b)) +- schedule_delayed_work(&b->work, 30 * HZ); ++ queue_delayed_work(btree_io_wq, &b->work, 30 * HZ); + + set_btree_node_dirty(b); + +@@ -2764,3 +2766,18 @@ void bch_keybuf_init(struct keybuf *buf) + spin_lock_init(&buf->lock); + array_allocator_init(&buf->freelist); + } ++ ++void bch_btree_exit(void) ++{ ++ if (btree_io_wq) ++ destroy_workqueue(btree_io_wq); ++} ++ ++int __init bch_btree_init(void) ++{ ++ btree_io_wq = alloc_workqueue("bch_btree_io", WQ_MEM_RECLAIM, 0); ++ if (!btree_io_wq) ++ return -ENOMEM; ++ ++ return 0; ++} +diff --git a/drivers/md/bcache/journal.c b/drivers/md/bcache/journal.c +index aefbdb7e003bc..c6613e8173337 100644 +--- a/drivers/md/bcache/journal.c ++++ b/drivers/md/bcache/journal.c +@@ -932,8 +932,8 @@ atomic_t *bch_journal(struct cache_set *c, + journal_try_write(c); + } else if (!w->dirty) { + w->dirty = true; +- schedule_delayed_work(&c->journal.work, +- msecs_to_jiffies(c->journal_delay_ms)); ++ queue_delayed_work(bch_flush_wq, &c->journal.work, ++ msecs_to_jiffies(c->journal_delay_ms)); + spin_unlock(&c->journal.lock); + } else { + spin_unlock(&c->journal.lock); +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index a148b92ad8563..248bda63f0852 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -49,6 +49,7 @@ static int bcache_major; + static DEFINE_IDA(bcache_device_idx); + static wait_queue_head_t unregister_wait; + struct workqueue_struct *bcache_wq; ++struct workqueue_struct *bch_flush_wq; + struct workqueue_struct *bch_journal_wq; + + +@@ -2833,6 +2834,9 @@ static void bcache_exit(void) + destroy_workqueue(bcache_wq); + 
if (bch_journal_wq) + destroy_workqueue(bch_journal_wq); ++ if (bch_flush_wq) ++ destroy_workqueue(bch_flush_wq); ++ bch_btree_exit(); + + if (bcache_major) + unregister_blkdev(bcache_major, "bcache"); +@@ -2888,10 +2892,26 @@ static int __init bcache_init(void) + return bcache_major; + } + ++ if (bch_btree_init()) ++ goto err; ++ + bcache_wq = alloc_workqueue("bcache", WQ_MEM_RECLAIM, 0); + if (!bcache_wq) + goto err; + ++ /* ++ * Let's not make this `WQ_MEM_RECLAIM` for the following reasons: ++ * ++ * 1. It used `system_wq` before which also does no memory reclaim. ++ * 2. With `WQ_MEM_RECLAIM` desktop stalls, increased boot times, and ++ * reduced throughput can be observed. ++ * ++ * We still want to user our own queue to not congest the `system_wq`. ++ */ ++ bch_flush_wq = alloc_workqueue("bch_flush", 0, 0); ++ if (!bch_flush_wq) ++ goto err; ++ + bch_journal_wq = alloc_workqueue("bch_journal", WQ_MEM_RECLAIM, 0); + if (!bch_journal_wq) + goto err; +diff --git a/drivers/md/dm-core.h b/drivers/md/dm-core.h +index d522093cb39dd..3db92d9a030b9 100644 +--- a/drivers/md/dm-core.h ++++ b/drivers/md/dm-core.h +@@ -109,6 +109,10 @@ struct mapped_device { + + struct block_device *bdev; + ++ int swap_bios; ++ struct semaphore swap_bios_semaphore; ++ struct mutex swap_bios_lock; ++ + struct dm_stats stats; + + /* for blk-mq request-based DM support */ +diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c +index 875823d6ee7e0..70ae6f3aede94 100644 +--- a/drivers/md/dm-crypt.c ++++ b/drivers/md/dm-crypt.c +@@ -3324,6 +3324,7 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv) + wake_up_process(cc->write_thread); + + ti->num_flush_bios = 1; ++ ti->limit_swap_bios = true; + + return 0; + +diff --git a/drivers/md/dm-era-target.c b/drivers/md/dm-era-target.c +index b24e3839bb3a1..d9ac7372108c9 100644 +--- a/drivers/md/dm-era-target.c ++++ b/drivers/md/dm-era-target.c +@@ -47,6 +47,7 @@ struct writeset { + static void writeset_free(struct 
writeset *ws) + { + vfree(ws->bits); ++ ws->bits = NULL; + } + + static int setup_on_disk_bitset(struct dm_disk_bitset *info, +@@ -71,8 +72,6 @@ static size_t bitset_size(unsigned nr_bits) + */ + static int writeset_alloc(struct writeset *ws, dm_block_t nr_blocks) + { +- ws->md.nr_bits = nr_blocks; +- ws->md.root = INVALID_WRITESET_ROOT; + ws->bits = vzalloc(bitset_size(nr_blocks)); + if (!ws->bits) { + DMERR("%s: couldn't allocate in memory bitset", __func__); +@@ -85,12 +84,14 @@ static int writeset_alloc(struct writeset *ws, dm_block_t nr_blocks) + /* + * Wipes the in-core bitset, and creates a new on disk bitset. + */ +-static int writeset_init(struct dm_disk_bitset *info, struct writeset *ws) ++static int writeset_init(struct dm_disk_bitset *info, struct writeset *ws, ++ dm_block_t nr_blocks) + { + int r; + +- memset(ws->bits, 0, bitset_size(ws->md.nr_bits)); ++ memset(ws->bits, 0, bitset_size(nr_blocks)); + ++ ws->md.nr_bits = nr_blocks; + r = setup_on_disk_bitset(info, ws->md.nr_bits, &ws->md.root); + if (r) { + DMERR("%s: setup_on_disk_bitset failed", __func__); +@@ -134,7 +135,7 @@ static int writeset_test_and_set(struct dm_disk_bitset *info, + { + int r; + +- if (!test_and_set_bit(block, ws->bits)) { ++ if (!test_bit(block, ws->bits)) { + r = dm_bitset_set_bit(info, ws->md.root, block, &ws->md.root); + if (r) { + /* FIXME: fail mode */ +@@ -388,7 +389,7 @@ static void ws_dec(void *context, const void *value) + + static int ws_eq(void *context, const void *value1, const void *value2) + { +- return !memcmp(value1, value2, sizeof(struct writeset_metadata)); ++ return !memcmp(value1, value2, sizeof(struct writeset_disk)); + } + + /*----------------------------------------------------------------*/ +@@ -564,6 +565,15 @@ static int open_metadata(struct era_metadata *md) + } + + disk = dm_block_data(sblock); ++ ++ /* Verify the data block size hasn't changed */ ++ if (le32_to_cpu(disk->data_block_size) != md->block_size) { ++ DMERR("changing the data block size 
(from %u to %llu) is not supported", ++ le32_to_cpu(disk->data_block_size), md->block_size); ++ r = -EINVAL; ++ goto bad; ++ } ++ + r = dm_tm_open_with_sm(md->bm, SUPERBLOCK_LOCATION, + disk->metadata_space_map_root, + sizeof(disk->metadata_space_map_root), +@@ -575,10 +585,10 @@ static int open_metadata(struct era_metadata *md) + + setup_infos(md); + +- md->block_size = le32_to_cpu(disk->data_block_size); + md->nr_blocks = le32_to_cpu(disk->nr_blocks); + md->current_era = le32_to_cpu(disk->current_era); + ++ ws_unpack(&disk->current_writeset, &md->current_writeset->md); + md->writeset_tree_root = le64_to_cpu(disk->writeset_tree_root); + md->era_array_root = le64_to_cpu(disk->era_array_root); + md->metadata_snap = le64_to_cpu(disk->metadata_snap); +@@ -746,6 +756,12 @@ static int metadata_digest_lookup_writeset(struct era_metadata *md, + ws_unpack(&disk, &d->writeset); + d->value = cpu_to_le32(key); + ++ /* ++ * We initialise another bitset info to avoid any caching side effects ++ * with the previous one. ++ */ ++ dm_disk_bitset_init(md->tm, &d->info); ++ + d->nr_bits = min(d->writeset.nr_bits, md->nr_blocks); + d->current_bit = 0; + d->step = metadata_digest_transcribe_writeset; +@@ -759,12 +775,6 @@ static int metadata_digest_start(struct era_metadata *md, struct digest *d) + return 0; + + memset(d, 0, sizeof(*d)); +- +- /* +- * We initialise another bitset info to avoid any caching side +- * effects with the previous one. 
+- */ +- dm_disk_bitset_init(md->tm, &d->info); + d->step = metadata_digest_lookup_writeset; + + return 0; +@@ -802,6 +812,8 @@ static struct era_metadata *metadata_open(struct block_device *bdev, + + static void metadata_close(struct era_metadata *md) + { ++ writeset_free(&md->writesets[0]); ++ writeset_free(&md->writesets[1]); + destroy_persistent_data_objects(md); + kfree(md); + } +@@ -839,6 +851,7 @@ static int metadata_resize(struct era_metadata *md, void *arg) + r = writeset_alloc(&md->writesets[1], *new_size); + if (r) { + DMERR("%s: writeset_alloc failed for writeset 1", __func__); ++ writeset_free(&md->writesets[0]); + return r; + } + +@@ -849,6 +862,8 @@ static int metadata_resize(struct era_metadata *md, void *arg) + &value, &md->era_array_root); + if (r) { + DMERR("%s: dm_array_resize failed", __func__); ++ writeset_free(&md->writesets[0]); ++ writeset_free(&md->writesets[1]); + return r; + } + +@@ -870,7 +885,6 @@ static int metadata_era_archive(struct era_metadata *md) + } + + ws_pack(&md->current_writeset->md, &value); +- md->current_writeset->md.root = INVALID_WRITESET_ROOT; + + keys[0] = md->current_era; + __dm_bless_for_disk(&value); +@@ -882,6 +896,7 @@ static int metadata_era_archive(struct era_metadata *md) + return r; + } + ++ md->current_writeset->md.root = INVALID_WRITESET_ROOT; + md->archived_writesets = true; + + return 0; +@@ -898,7 +913,7 @@ static int metadata_new_era(struct era_metadata *md) + int r; + struct writeset *new_writeset = next_writeset(md); + +- r = writeset_init(&md->bitset_info, new_writeset); ++ r = writeset_init(&md->bitset_info, new_writeset, md->nr_blocks); + if (r) { + DMERR("%s: writeset_init failed", __func__); + return r; +@@ -951,7 +966,7 @@ static int metadata_commit(struct era_metadata *md) + int r; + struct dm_block *sblock; + +- if (md->current_writeset->md.root != SUPERBLOCK_LOCATION) { ++ if (md->current_writeset->md.root != INVALID_WRITESET_ROOT) { + r = dm_bitset_flush(&md->bitset_info, 
md->current_writeset->md.root, + &md->current_writeset->md.root); + if (r) { +@@ -1225,8 +1240,10 @@ static void process_deferred_bios(struct era *era) + int r; + struct bio_list deferred_bios, marked_bios; + struct bio *bio; ++ struct blk_plug plug; + bool commit_needed = false; + bool failed = false; ++ struct writeset *ws = era->md->current_writeset; + + bio_list_init(&deferred_bios); + bio_list_init(&marked_bios); +@@ -1236,9 +1253,11 @@ static void process_deferred_bios(struct era *era) + bio_list_init(&era->deferred_bios); + spin_unlock(&era->deferred_lock); + ++ if (bio_list_empty(&deferred_bios)) ++ return; ++ + while ((bio = bio_list_pop(&deferred_bios))) { +- r = writeset_test_and_set(&era->md->bitset_info, +- era->md->current_writeset, ++ r = writeset_test_and_set(&era->md->bitset_info, ws, + get_block(era, bio)); + if (r < 0) { + /* +@@ -1246,7 +1265,6 @@ static void process_deferred_bios(struct era *era) + * FIXME: finish. + */ + failed = true; +- + } else if (r == 0) + commit_needed = true; + +@@ -1262,9 +1280,19 @@ static void process_deferred_bios(struct era *era) + if (failed) + while ((bio = bio_list_pop(&marked_bios))) + bio_io_error(bio); +- else +- while ((bio = bio_list_pop(&marked_bios))) ++ else { ++ blk_start_plug(&plug); ++ while ((bio = bio_list_pop(&marked_bios))) { ++ /* ++ * Only update the in-core writeset if the on-disk one ++ * was updated too. 
++ */ ++ if (commit_needed) ++ set_bit(get_block(era, bio), ws->bits); + submit_bio_noacct(bio); ++ } ++ blk_finish_plug(&plug); ++ } + } + + static void process_rpc_calls(struct era *era) +@@ -1473,15 +1501,6 @@ static int era_ctr(struct dm_target *ti, unsigned argc, char **argv) + } + era->md = md; + +- era->nr_blocks = calc_nr_blocks(era); +- +- r = metadata_resize(era->md, &era->nr_blocks); +- if (r) { +- ti->error = "couldn't resize metadata"; +- era_destroy(era); +- return -ENOMEM; +- } +- + era->wq = alloc_ordered_workqueue("dm-" DM_MSG_PREFIX, WQ_MEM_RECLAIM); + if (!era->wq) { + ti->error = "could not create workqueue for metadata object"; +@@ -1556,16 +1575,24 @@ static int era_preresume(struct dm_target *ti) + dm_block_t new_size = calc_nr_blocks(era); + + if (era->nr_blocks != new_size) { +- r = in_worker1(era, metadata_resize, &new_size); +- if (r) ++ r = metadata_resize(era->md, &new_size); ++ if (r) { ++ DMERR("%s: metadata_resize failed", __func__); ++ return r; ++ } ++ ++ r = metadata_commit(era->md); ++ if (r) { ++ DMERR("%s: metadata_commit failed", __func__); + return r; ++ } + + era->nr_blocks = new_size; + } + + start_worker(era); + +- r = in_worker0(era, metadata_new_era); ++ r = in_worker0(era, metadata_era_rollover); + if (r) { + DMERR("%s: metadata_era_rollover failed", __func__); + return r; +diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c +index 09ded08cbb609..9b824c21580a4 100644 +--- a/drivers/md/dm-table.c ++++ b/drivers/md/dm-table.c +@@ -827,24 +827,24 @@ void dm_table_set_type(struct dm_table *t, enum dm_queue_mode type) + EXPORT_SYMBOL_GPL(dm_table_set_type); + + /* validate the dax capability of the target device span */ +-int device_supports_dax(struct dm_target *ti, struct dm_dev *dev, ++int device_not_dax_capable(struct dm_target *ti, struct dm_dev *dev, + sector_t start, sector_t len, void *data) + { + int blocksize = *(int *) data, id; + bool rc; + + id = dax_read_lock(); +- rc = dax_supported(dev->dax_dev, 
dev->bdev, blocksize, start, len); ++ rc = !dax_supported(dev->dax_dev, dev->bdev, blocksize, start, len); + dax_read_unlock(id); + + return rc; + } + + /* Check devices support synchronous DAX */ +-static int device_dax_synchronous(struct dm_target *ti, struct dm_dev *dev, +- sector_t start, sector_t len, void *data) ++static int device_not_dax_synchronous_capable(struct dm_target *ti, struct dm_dev *dev, ++ sector_t start, sector_t len, void *data) + { +- return dev->dax_dev && dax_synchronous(dev->dax_dev); ++ return !dev->dax_dev || !dax_synchronous(dev->dax_dev); + } + + bool dm_table_supports_dax(struct dm_table *t, +@@ -861,7 +861,7 @@ bool dm_table_supports_dax(struct dm_table *t, + return false; + + if (!ti->type->iterate_devices || +- !ti->type->iterate_devices(ti, iterate_fn, blocksize)) ++ ti->type->iterate_devices(ti, iterate_fn, blocksize)) + return false; + } + +@@ -932,7 +932,7 @@ static int dm_table_determine_type(struct dm_table *t) + verify_bio_based: + /* We must use this table as bio-based */ + t->type = DM_TYPE_BIO_BASED; +- if (dm_table_supports_dax(t, device_supports_dax, &page_size) || ++ if (dm_table_supports_dax(t, device_not_dax_capable, &page_size) || + (list_empty(devices) && live_md_type == DM_TYPE_DAX_BIO_BASED)) { + t->type = DM_TYPE_DAX_BIO_BASED; + } +@@ -1302,6 +1302,46 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector) + return &t->targets[(KEYS_PER_NODE * n) + k]; + } + ++/* ++ * type->iterate_devices() should be called when the sanity check needs to ++ * iterate and check all underlying data devices. iterate_devices() will ++ * iterate all underlying data devices until it encounters a non-zero return ++ * code, returned by whether the input iterate_devices_callout_fn, or ++ * iterate_devices() itself internally. ++ * ++ * For some target type (e.g. 
dm-stripe), one call of iterate_devices() may ++ * iterate multiple underlying devices internally, in which case a non-zero ++ * return code returned by iterate_devices_callout_fn will stop the iteration ++ * in advance. ++ * ++ * Cases requiring _any_ underlying device supporting some kind of attribute, ++ * should use the iteration structure like dm_table_any_dev_attr(), or call ++ * it directly. @func should handle semantics of positive examples, e.g. ++ * capable of something. ++ * ++ * Cases requiring _all_ underlying devices supporting some kind of attribute, ++ * should use the iteration structure like dm_table_supports_nowait() or ++ * dm_table_supports_discards(). Or introduce dm_table_all_devs_attr() that ++ * uses an @anti_func that handle semantics of counter examples, e.g. not ++ * capable of something. So: return !dm_table_any_dev_attr(t, anti_func, data); ++ */ ++static bool dm_table_any_dev_attr(struct dm_table *t, ++ iterate_devices_callout_fn func, void *data) ++{ ++ struct dm_target *ti; ++ unsigned int i; ++ ++ for (i = 0; i < dm_table_get_num_targets(t); i++) { ++ ti = dm_table_get_target(t, i); ++ ++ if (ti->type->iterate_devices && ++ ti->type->iterate_devices(ti, func, data)) ++ return true; ++ } ++ ++ return false; ++} ++ + static int count_device(struct dm_target *ti, struct dm_dev *dev, + sector_t start, sector_t len, void *data) + { +@@ -1338,13 +1378,13 @@ bool dm_table_has_no_data_devices(struct dm_table *table) + return true; + } + +-static int device_is_zoned_model(struct dm_target *ti, struct dm_dev *dev, +- sector_t start, sector_t len, void *data) ++static int device_not_zoned_model(struct dm_target *ti, struct dm_dev *dev, ++ sector_t start, sector_t len, void *data) + { + struct request_queue *q = bdev_get_queue(dev->bdev); + enum blk_zoned_model *zoned_model = data; + +- return q && blk_queue_zoned_model(q) == *zoned_model; ++ return !q || blk_queue_zoned_model(q) != *zoned_model; + } + + static bool 
dm_table_supports_zoned_model(struct dm_table *t, +@@ -1361,37 +1401,20 @@ static bool dm_table_supports_zoned_model(struct dm_table *t, + return false; + + if (!ti->type->iterate_devices || +- !ti->type->iterate_devices(ti, device_is_zoned_model, &zoned_model)) ++ ti->type->iterate_devices(ti, device_not_zoned_model, &zoned_model)) + return false; + } + + return true; + } + +-static int device_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev, +- sector_t start, sector_t len, void *data) ++static int device_not_matches_zone_sectors(struct dm_target *ti, struct dm_dev *dev, ++ sector_t start, sector_t len, void *data) + { + struct request_queue *q = bdev_get_queue(dev->bdev); + unsigned int *zone_sectors = data; + +- return q && blk_queue_zone_sectors(q) == *zone_sectors; +-} +- +-static bool dm_table_matches_zone_sectors(struct dm_table *t, +- unsigned int zone_sectors) +-{ +- struct dm_target *ti; +- unsigned i; +- +- for (i = 0; i < dm_table_get_num_targets(t); i++) { +- ti = dm_table_get_target(t, i); +- +- if (!ti->type->iterate_devices || +- !ti->type->iterate_devices(ti, device_matches_zone_sectors, &zone_sectors)) +- return false; +- } +- +- return true; ++ return !q || blk_queue_zone_sectors(q) != *zone_sectors; + } + + static int validate_hardware_zoned_model(struct dm_table *table, +@@ -1411,7 +1434,7 @@ static int validate_hardware_zoned_model(struct dm_table *table, + if (!zone_sectors || !is_power_of_2(zone_sectors)) + return -EINVAL; + +- if (!dm_table_matches_zone_sectors(table, zone_sectors)) { ++ if (dm_table_any_dev_attr(table, device_not_matches_zone_sectors, &zone_sectors)) { + DMERR("%s: zone sectors is not consistent across all devices", + dm_device_name(table->md)); + return -EINVAL; +@@ -1585,29 +1608,12 @@ static int device_dax_write_cache_enabled(struct dm_target *ti, + return false; + } + +-static int dm_table_supports_dax_write_cache(struct dm_table *t) +-{ +- struct dm_target *ti; +- unsigned i; +- +- for (i = 0; i < 
dm_table_get_num_targets(t); i++) { +- ti = dm_table_get_target(t, i); +- +- if (ti->type->iterate_devices && +- ti->type->iterate_devices(ti, +- device_dax_write_cache_enabled, NULL)) +- return true; +- } +- +- return false; +-} +- +-static int device_is_nonrot(struct dm_target *ti, struct dm_dev *dev, +- sector_t start, sector_t len, void *data) ++static int device_is_rotational(struct dm_target *ti, struct dm_dev *dev, ++ sector_t start, sector_t len, void *data) + { + struct request_queue *q = bdev_get_queue(dev->bdev); + +- return q && blk_queue_nonrot(q); ++ return q && !blk_queue_nonrot(q); + } + + static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev, +@@ -1618,23 +1624,6 @@ static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev, + return q && !blk_queue_add_random(q); + } + +-static bool dm_table_all_devices_attribute(struct dm_table *t, +- iterate_devices_callout_fn func) +-{ +- struct dm_target *ti; +- unsigned i; +- +- for (i = 0; i < dm_table_get_num_targets(t); i++) { +- ti = dm_table_get_target(t, i); +- +- if (!ti->type->iterate_devices || +- !ti->type->iterate_devices(ti, func, NULL)) +- return false; +- } +- +- return true; +-} +- + static int device_not_write_same_capable(struct dm_target *ti, struct dm_dev *dev, + sector_t start, sector_t len, void *data) + { +@@ -1786,27 +1775,6 @@ static int device_requires_stable_pages(struct dm_target *ti, + return q && blk_queue_stable_writes(q); + } + +-/* +- * If any underlying device requires stable pages, a table must require +- * them as well. Only targets that support iterate_devices are considered: +- * don't want error, zero, etc to require stable pages. 
+- */ +-static bool dm_table_requires_stable_pages(struct dm_table *t) +-{ +- struct dm_target *ti; +- unsigned i; +- +- for (i = 0; i < dm_table_get_num_targets(t); i++) { +- ti = dm_table_get_target(t, i); +- +- if (ti->type->iterate_devices && +- ti->type->iterate_devices(ti, device_requires_stable_pages, NULL)) +- return true; +- } +- +- return false; +-} +- + void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + struct queue_limits *limits) + { +@@ -1844,22 +1812,22 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + } + blk_queue_write_cache(q, wc, fua); + +- if (dm_table_supports_dax(t, device_supports_dax, &page_size)) { ++ if (dm_table_supports_dax(t, device_not_dax_capable, &page_size)) { + blk_queue_flag_set(QUEUE_FLAG_DAX, q); +- if (dm_table_supports_dax(t, device_dax_synchronous, NULL)) ++ if (dm_table_supports_dax(t, device_not_dax_synchronous_capable, NULL)) + set_dax_synchronous(t->md->dax_dev); + } + else + blk_queue_flag_clear(QUEUE_FLAG_DAX, q); + +- if (dm_table_supports_dax_write_cache(t)) ++ if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL)) + dax_write_cache(t->md->dax_dev, true); + + /* Ensure that all underlying devices are non-rotational. */ +- if (dm_table_all_devices_attribute(t, device_is_nonrot)) +- blk_queue_flag_set(QUEUE_FLAG_NONROT, q); +- else ++ if (dm_table_any_dev_attr(t, device_is_rotational, NULL)) + blk_queue_flag_clear(QUEUE_FLAG_NONROT, q); ++ else ++ blk_queue_flag_set(QUEUE_FLAG_NONROT, q); + + if (!dm_table_supports_write_same(t)) + q->limits.max_write_same_sectors = 0; +@@ -1871,8 +1839,11 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + /* + * Some devices don't use blk_integrity but still want stable pages + * because they do their own checksumming. ++ * If any underlying device requires stable pages, a table must require ++ * them as well. 
Only targets that support iterate_devices are considered: ++ * don't want error, zero, etc to require stable pages. + */ +- if (dm_table_requires_stable_pages(t)) ++ if (dm_table_any_dev_attr(t, device_requires_stable_pages, NULL)) + blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q); + else + blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q); +@@ -1883,7 +1854,8 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q, + * Clear QUEUE_FLAG_ADD_RANDOM if any underlying device does not + * have it set. + */ +- if (blk_queue_add_random(q) && dm_table_all_devices_attribute(t, device_is_not_random)) ++ if (blk_queue_add_random(q) && ++ dm_table_any_dev_attr(t, device_is_not_random, NULL)) + blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); + + /* +diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c +index d5223a0e5cc51..8628c4aa2e854 100644 +--- a/drivers/md/dm-writecache.c ++++ b/drivers/md/dm-writecache.c +@@ -148,6 +148,7 @@ struct dm_writecache { + size_t metadata_sectors; + size_t n_blocks; + uint64_t seq_count; ++ sector_t data_device_sectors; + void *block_start; + struct wc_entry *entries; + unsigned block_size; +@@ -159,14 +160,22 @@ struct dm_writecache { + bool overwrote_committed:1; + bool memory_vmapped:1; + ++ bool start_sector_set:1; + bool high_wm_percent_set:1; + bool low_wm_percent_set:1; + bool max_writeback_jobs_set:1; + bool autocommit_blocks_set:1; + bool autocommit_time_set:1; ++ bool max_age_set:1; + bool writeback_fua_set:1; + bool flush_on_suspend:1; + bool cleaner:1; ++ bool cleaner_set:1; ++ ++ unsigned high_wm_percent_value; ++ unsigned low_wm_percent_value; ++ unsigned autocommit_time_value; ++ unsigned max_age_value; + + unsigned writeback_all; + struct workqueue_struct *writeback_wq; +@@ -523,7 +532,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc) + + region.bdev = wc->ssd_dev->bdev; + region.sector = 0; +- region.count = PAGE_SIZE; ++ region.count = PAGE_SIZE >> SECTOR_SHIFT; + + if 
(unlikely(region.sector + region.count > wc->metadata_sectors)) + region.count = wc->metadata_sectors - region.sector; +@@ -969,6 +978,8 @@ static void writecache_resume(struct dm_target *ti) + + wc_lock(wc); + ++ wc->data_device_sectors = i_size_read(wc->dev->bdev->bd_inode) >> SECTOR_SHIFT; ++ + if (WC_MODE_PMEM(wc)) { + persistent_memory_invalidate_cache(wc->memory_map, wc->memory_map_size); + } else { +@@ -1638,6 +1649,10 @@ static bool wc_add_block(struct writeback_struct *wb, struct wc_entry *e, gfp_t + void *address = memory_data(wc, e); + + persistent_memory_flush_cache(address, block_size); ++ ++ if (unlikely(bio_end_sector(&wb->bio) >= wc->data_device_sectors)) ++ return true; ++ + return bio_add_page(&wb->bio, persistent_memory_page(address), + block_size, persistent_memory_page_offset(address)) != 0; + } +@@ -1709,6 +1724,9 @@ static void __writecache_writeback_pmem(struct dm_writecache *wc, struct writeba + if (writecache_has_error(wc)) { + bio->bi_status = BLK_STS_IOERR; + bio_endio(bio); ++ } else if (unlikely(!bio_sectors(bio))) { ++ bio->bi_status = BLK_STS_OK; ++ bio_endio(bio); + } else { + submit_bio(bio); + } +@@ -1752,6 +1770,14 @@ static void __writecache_writeback_ssd(struct dm_writecache *wc, struct writebac + e = f; + } + ++ if (unlikely(to.sector + to.count > wc->data_device_sectors)) { ++ if (to.sector >= wc->data_device_sectors) { ++ writecache_copy_endio(0, 0, c); ++ continue; ++ } ++ from.count = to.count = wc->data_device_sectors - to.sector; ++ } ++ + dm_kcopyd_copy(wc->dm_kcopyd, &from, 1, &to, 0, writecache_copy_endio, c); + + __writeback_throttle(wc, wbl); +@@ -2205,6 +2231,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) + if (sscanf(string, "%llu%c", &start_sector, &dummy) != 1) + goto invalid_optional; + wc->start_sector = start_sector; ++ wc->start_sector_set = true; + if (wc->start_sector != start_sector || + wc->start_sector >= wc->memory_map_size >> SECTOR_SHIFT) + goto invalid_optional; +@@ 
-2214,6 +2241,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) + goto invalid_optional; + if (high_wm_percent < 0 || high_wm_percent > 100) + goto invalid_optional; ++ wc->high_wm_percent_value = high_wm_percent; + wc->high_wm_percent_set = true; + } else if (!strcasecmp(string, "low_watermark") && opt_params >= 1) { + string = dm_shift_arg(&as), opt_params--; +@@ -2221,6 +2249,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) + goto invalid_optional; + if (low_wm_percent < 0 || low_wm_percent > 100) + goto invalid_optional; ++ wc->low_wm_percent_value = low_wm_percent; + wc->low_wm_percent_set = true; + } else if (!strcasecmp(string, "writeback_jobs") && opt_params >= 1) { + string = dm_shift_arg(&as), opt_params--; +@@ -2240,6 +2269,7 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) + if (autocommit_msecs > 3600000) + goto invalid_optional; + wc->autocommit_jiffies = msecs_to_jiffies(autocommit_msecs); ++ wc->autocommit_time_value = autocommit_msecs; + wc->autocommit_time_set = true; + } else if (!strcasecmp(string, "max_age") && opt_params >= 1) { + unsigned max_age_msecs; +@@ -2249,7 +2279,10 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv) + if (max_age_msecs > 86400000) + goto invalid_optional; + wc->max_age = msecs_to_jiffies(max_age_msecs); ++ wc->max_age_set = true; ++ wc->max_age_value = max_age_msecs; + } else if (!strcasecmp(string, "cleaner")) { ++ wc->cleaner_set = true; + wc->cleaner = true; + } else if (!strcasecmp(string, "fua")) { + if (WC_MODE_PMEM(wc)) { +@@ -2455,7 +2488,6 @@ static void writecache_status(struct dm_target *ti, status_type_t type, + struct dm_writecache *wc = ti->private; + unsigned extra_args; + unsigned sz = 0; +- uint64_t x; + + switch (type) { + case STATUSTYPE_INFO: +@@ -2467,11 +2499,11 @@ static void writecache_status(struct dm_target *ti, status_type_t type, + DMEMIT("%c %s %s %u ", WC_MODE_PMEM(wc) 
? 'p' : 's', + wc->dev->name, wc->ssd_dev->name, wc->block_size); + extra_args = 0; +- if (wc->start_sector) ++ if (wc->start_sector_set) + extra_args += 2; +- if (wc->high_wm_percent_set && !wc->cleaner) ++ if (wc->high_wm_percent_set) + extra_args += 2; +- if (wc->low_wm_percent_set && !wc->cleaner) ++ if (wc->low_wm_percent_set) + extra_args += 2; + if (wc->max_writeback_jobs_set) + extra_args += 2; +@@ -2479,37 +2511,29 @@ static void writecache_status(struct dm_target *ti, status_type_t type, + extra_args += 2; + if (wc->autocommit_time_set) + extra_args += 2; +- if (wc->max_age != MAX_AGE_UNSPECIFIED) ++ if (wc->max_age_set) + extra_args += 2; +- if (wc->cleaner) ++ if (wc->cleaner_set) + extra_args++; + if (wc->writeback_fua_set) + extra_args++; + + DMEMIT("%u", extra_args); +- if (wc->start_sector) ++ if (wc->start_sector_set) + DMEMIT(" start_sector %llu", (unsigned long long)wc->start_sector); +- if (wc->high_wm_percent_set && !wc->cleaner) { +- x = (uint64_t)wc->freelist_high_watermark * 100; +- x += wc->n_blocks / 2; +- do_div(x, (size_t)wc->n_blocks); +- DMEMIT(" high_watermark %u", 100 - (unsigned)x); +- } +- if (wc->low_wm_percent_set && !wc->cleaner) { +- x = (uint64_t)wc->freelist_low_watermark * 100; +- x += wc->n_blocks / 2; +- do_div(x, (size_t)wc->n_blocks); +- DMEMIT(" low_watermark %u", 100 - (unsigned)x); +- } ++ if (wc->high_wm_percent_set) ++ DMEMIT(" high_watermark %u", wc->high_wm_percent_value); ++ if (wc->low_wm_percent_set) ++ DMEMIT(" low_watermark %u", wc->low_wm_percent_value); + if (wc->max_writeback_jobs_set) + DMEMIT(" writeback_jobs %u", wc->max_writeback_jobs); + if (wc->autocommit_blocks_set) + DMEMIT(" autocommit_blocks %u", wc->autocommit_blocks); + if (wc->autocommit_time_set) +- DMEMIT(" autocommit_time %u", jiffies_to_msecs(wc->autocommit_jiffies)); +- if (wc->max_age != MAX_AGE_UNSPECIFIED) +- DMEMIT(" max_age %u", jiffies_to_msecs(wc->max_age)); +- if (wc->cleaner) ++ DMEMIT(" autocommit_time %u", 
wc->autocommit_time_value); ++ if (wc->max_age_set) ++ DMEMIT(" max_age %u", wc->max_age_value); ++ if (wc->cleaner_set) + DMEMIT(" cleaner"); + if (wc->writeback_fua_set) + DMEMIT(" %sfua", wc->writeback_fua ? "" : "no"); +@@ -2519,7 +2543,7 @@ static void writecache_status(struct dm_target *ti, status_type_t type, + + static struct target_type writecache_target = { + .name = "writecache", +- .version = {1, 3, 0}, ++ .version = {1, 4, 0}, + .module = THIS_MODULE, + .ctr = writecache_ctr, + .dtr = writecache_dtr, +diff --git a/drivers/md/dm.c b/drivers/md/dm.c +index 1e99a4c1eca43..638c04f9e832c 100644 +--- a/drivers/md/dm.c ++++ b/drivers/md/dm.c +@@ -148,6 +148,16 @@ EXPORT_SYMBOL_GPL(dm_bio_get_target_bio_nr); + #define DM_NUMA_NODE NUMA_NO_NODE + static int dm_numa_node = DM_NUMA_NODE; + ++#define DEFAULT_SWAP_BIOS (8 * 1048576 / PAGE_SIZE) ++static int swap_bios = DEFAULT_SWAP_BIOS; ++static int get_swap_bios(void) ++{ ++ int latch = READ_ONCE(swap_bios); ++ if (unlikely(latch <= 0)) ++ latch = DEFAULT_SWAP_BIOS; ++ return latch; ++} ++ + /* + * For mempools pre-allocation at the table loading time. 
+ */ +@@ -966,6 +976,11 @@ void disable_write_zeroes(struct mapped_device *md) + limits->max_write_zeroes_sectors = 0; + } + ++static bool swap_bios_limit(struct dm_target *ti, struct bio *bio) ++{ ++ return unlikely((bio->bi_opf & REQ_SWAP) != 0) && unlikely(ti->limit_swap_bios); ++} ++ + static void clone_endio(struct bio *bio) + { + blk_status_t error = bio->bi_status; +@@ -1016,6 +1031,11 @@ static void clone_endio(struct bio *bio) + } + } + ++ if (unlikely(swap_bios_limit(tio->ti, bio))) { ++ struct mapped_device *md = io->md; ++ up(&md->swap_bios_semaphore); ++ } ++ + free_tio(tio); + dec_pending(io, error); + } +@@ -1125,7 +1145,7 @@ static bool dm_dax_supported(struct dax_device *dax_dev, struct block_device *bd + if (!map) + goto out; + +- ret = dm_table_supports_dax(map, device_supports_dax, &blocksize); ++ ret = dm_table_supports_dax(map, device_not_dax_capable, &blocksize); + + out: + dm_put_live_table(md, srcu_idx); +@@ -1249,6 +1269,22 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors) + } + EXPORT_SYMBOL_GPL(dm_accept_partial_bio); + ++static noinline void __set_swap_bios_limit(struct mapped_device *md, int latch) ++{ ++ mutex_lock(&md->swap_bios_lock); ++ while (latch < md->swap_bios) { ++ cond_resched(); ++ down(&md->swap_bios_semaphore); ++ md->swap_bios--; ++ } ++ while (latch > md->swap_bios) { ++ cond_resched(); ++ up(&md->swap_bios_semaphore); ++ md->swap_bios++; ++ } ++ mutex_unlock(&md->swap_bios_lock); ++} ++ + static blk_qc_t __map_bio(struct dm_target_io *tio) + { + int r; +@@ -1268,6 +1304,14 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) + atomic_inc(&io->io_count); + sector = clone->bi_iter.bi_sector; + ++ if (unlikely(swap_bios_limit(ti, clone))) { ++ struct mapped_device *md = io->md; ++ int latch = get_swap_bios(); ++ if (unlikely(latch != md->swap_bios)) ++ __set_swap_bios_limit(md, latch); ++ down(&md->swap_bios_semaphore); ++ } ++ + r = ti->type->map(ti, clone); + switch (r) { + case DM_MAPIO_SUBMITTED: 
+@@ -1279,10 +1323,18 @@ static blk_qc_t __map_bio(struct dm_target_io *tio) + ret = submit_bio_noacct(clone); + break; + case DM_MAPIO_KILL: ++ if (unlikely(swap_bios_limit(ti, clone))) { ++ struct mapped_device *md = io->md; ++ up(&md->swap_bios_semaphore); ++ } + free_tio(tio); + dec_pending(io, BLK_STS_IOERR); + break; + case DM_MAPIO_REQUEUE: ++ if (unlikely(swap_bios_limit(ti, clone))) { ++ struct mapped_device *md = io->md; ++ up(&md->swap_bios_semaphore); ++ } + free_tio(tio); + dec_pending(io, BLK_STS_DM_REQUEUE); + break; +@@ -1756,6 +1808,7 @@ static void cleanup_mapped_device(struct mapped_device *md) + mutex_destroy(&md->suspend_lock); + mutex_destroy(&md->type_lock); + mutex_destroy(&md->table_devices_lock); ++ mutex_destroy(&md->swap_bios_lock); + + dm_mq_cleanup_mapped_device(md); + } +@@ -1823,6 +1876,10 @@ static struct mapped_device *alloc_dev(int minor) + init_waitqueue_head(&md->eventq); + init_completion(&md->kobj_holder.completion); + ++ md->swap_bios = get_swap_bios(); ++ sema_init(&md->swap_bios_semaphore, md->swap_bios); ++ mutex_init(&md->swap_bios_lock); ++ + md->disk->major = _major; + md->disk->first_minor = minor; + md->disk->fops = &dm_blk_dops; +@@ -3119,6 +3176,9 @@ MODULE_PARM_DESC(reserved_bio_based_ios, "Reserved IOs in bio-based mempools"); + module_param(dm_numa_node, int, S_IRUGO | S_IWUSR); + MODULE_PARM_DESC(dm_numa_node, "NUMA node for DM device memory allocations"); + ++module_param(swap_bios, int, S_IRUGO | S_IWUSR); ++MODULE_PARM_DESC(swap_bios, "Maximum allowed inflight swap IOs"); ++ + MODULE_DESCRIPTION(DM_NAME " driver"); + MODULE_AUTHOR("Joe Thornber "); + MODULE_LICENSE("GPL"); +diff --git a/drivers/md/dm.h b/drivers/md/dm.h +index fffe1e289c533..b441ad772c188 100644 +--- a/drivers/md/dm.h ++++ b/drivers/md/dm.h +@@ -73,7 +73,7 @@ void dm_table_free_md_mempools(struct dm_table *t); + struct dm_md_mempools *dm_table_get_md_mempools(struct dm_table *t); + bool dm_table_supports_dax(struct dm_table *t, 
iterate_devices_callout_fn fn, + int *blocksize); +-int device_supports_dax(struct dm_target *ti, struct dm_dev *dev, ++int device_not_dax_capable(struct dm_target *ti, struct dm_dev *dev, + sector_t start, sector_t len, void *data); + + void dm_lock_md_type(struct mapped_device *md); +diff --git a/drivers/media/i2c/max9286.c b/drivers/media/i2c/max9286.c +index c82c1493e099d..b1e2476d3c9e6 100644 +--- a/drivers/media/i2c/max9286.c ++++ b/drivers/media/i2c/max9286.c +@@ -580,7 +580,7 @@ static int max9286_v4l2_notifier_register(struct max9286_priv *priv) + + asd = v4l2_async_notifier_add_fwnode_subdev(&priv->notifier, + source->fwnode, +- sizeof(*asd)); ++ sizeof(struct max9286_asd)); + if (IS_ERR(asd)) { + dev_err(dev, "Failed to add subdev for source %u: %ld", + i, PTR_ERR(asd)); +diff --git a/drivers/media/i2c/ov5670.c b/drivers/media/i2c/ov5670.c +index f26252e35e08d..04d3f14902017 100644 +--- a/drivers/media/i2c/ov5670.c ++++ b/drivers/media/i2c/ov5670.c +@@ -2084,7 +2084,8 @@ static int ov5670_init_controls(struct ov5670 *ov5670) + + /* By default, V4L2_CID_PIXEL_RATE is read only */ + ov5670->pixel_rate = v4l2_ctrl_new_std(ctrl_hdlr, &ov5670_ctrl_ops, +- V4L2_CID_PIXEL_RATE, 0, ++ V4L2_CID_PIXEL_RATE, ++ link_freq_configs[0].pixel_rate, + link_freq_configs[0].pixel_rate, + 1, + link_freq_configs[0].pixel_rate); +diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c +index 55018d9e439fb..285047b32c44a 100644 +--- a/drivers/media/pci/cx25821/cx25821-core.c ++++ b/drivers/media/pci/cx25821/cx25821-core.c +@@ -976,8 +976,10 @@ int cx25821_riscmem_alloc(struct pci_dev *pci, + __le32 *cpu; + dma_addr_t dma = 0; + +- if (NULL != risc->cpu && risc->size < size) ++ if (risc->cpu && risc->size < size) { + pci_free_consistent(pci, risc->size, risc->cpu, risc->dma); ++ risc->cpu = NULL; ++ } + if (NULL == risc->cpu) { + cpu = pci_zalloc_consistent(pci, size, &dma); + if (NULL == cpu) +diff --git 
a/drivers/media/pci/intel/ipu3/Kconfig b/drivers/media/pci/intel/ipu3/Kconfig +index 82d7f17e6a024..7a805201034b7 100644 +--- a/drivers/media/pci/intel/ipu3/Kconfig ++++ b/drivers/media/pci/intel/ipu3/Kconfig +@@ -2,7 +2,8 @@ + config VIDEO_IPU3_CIO2 + tristate "Intel ipu3-cio2 driver" + depends on VIDEO_V4L2 && PCI +- depends on (X86 && ACPI) || COMPILE_TEST ++ depends on ACPI || COMPILE_TEST ++ depends on X86 + select MEDIA_CONTROLLER + select VIDEO_V4L2_SUBDEV_API + select V4L2_FWNODE +diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c +index 1fcd131482e0e..dcbfe8c9abc72 100644 +--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c ++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c +@@ -1277,7 +1277,7 @@ static int cio2_subdev_set_fmt(struct v4l2_subdev *sd, + fmt->format.code = formats[0].mbus_code; + + for (i = 0; i < ARRAY_SIZE(formats); i++) { +- if (formats[i].mbus_code == fmt->format.code) { ++ if (formats[i].mbus_code == mbus_code) { + fmt->format.code = mbus_code; + break; + } +diff --git a/drivers/media/pci/saa7134/saa7134-empress.c b/drivers/media/pci/saa7134/saa7134-empress.c +index 39e3c7f8c5b46..76a37fbd84587 100644 +--- a/drivers/media/pci/saa7134/saa7134-empress.c ++++ b/drivers/media/pci/saa7134/saa7134-empress.c +@@ -282,8 +282,11 @@ static int empress_init(struct saa7134_dev *dev) + q->lock = &dev->lock; + q->dev = &dev->pci->dev; + err = vb2_queue_init(q); +- if (err) ++ if (err) { ++ video_device_release(dev->empress_dev); ++ dev->empress_dev = NULL; + return err; ++ } + dev->empress_dev->queue = q; + dev->empress_dev->device_caps = V4L2_CAP_READWRITE | V4L2_CAP_STREAMING | + V4L2_CAP_VIDEO_CAPTURE; +diff --git a/drivers/media/pci/smipcie/smipcie-ir.c b/drivers/media/pci/smipcie/smipcie-ir.c +index e6b74e161a055..c0604d9c70119 100644 +--- a/drivers/media/pci/smipcie/smipcie-ir.c ++++ b/drivers/media/pci/smipcie/smipcie-ir.c +@@ -60,38 +60,44 @@ static void smi_ir_decode(struct smi_rc *ir) + { + struct 
smi_dev *dev = ir->dev; + struct rc_dev *rc_dev = ir->rc_dev; +- u32 dwIRControl, dwIRData; +- u8 index, ucIRCount, readLoop; ++ u32 control, data; ++ u8 index, ir_count, read_loop; + +- dwIRControl = smi_read(IR_Init_Reg); ++ control = smi_read(IR_Init_Reg); + +- if (dwIRControl & rbIRVld) { +- ucIRCount = (u8) smi_read(IR_Data_Cnt); ++ dev_dbg(&rc_dev->dev, "ircontrol: 0x%08x\n", control); + +- readLoop = ucIRCount/4; +- if (ucIRCount % 4) +- readLoop += 1; +- for (index = 0; index < readLoop; index++) { +- dwIRData = smi_read(IR_DATA_BUFFER_BASE + (index * 4)); ++ if (control & rbIRVld) { ++ ir_count = (u8)smi_read(IR_Data_Cnt); + +- ir->irData[index*4 + 0] = (u8)(dwIRData); +- ir->irData[index*4 + 1] = (u8)(dwIRData >> 8); +- ir->irData[index*4 + 2] = (u8)(dwIRData >> 16); +- ir->irData[index*4 + 3] = (u8)(dwIRData >> 24); ++ dev_dbg(&rc_dev->dev, "ircount %d\n", ir_count); ++ ++ read_loop = ir_count / 4; ++ if (ir_count % 4) ++ read_loop += 1; ++ for (index = 0; index < read_loop; index++) { ++ data = smi_read(IR_DATA_BUFFER_BASE + (index * 4)); ++ dev_dbg(&rc_dev->dev, "IRData 0x%08x\n", data); ++ ++ ir->irData[index * 4 + 0] = (u8)(data); ++ ir->irData[index * 4 + 1] = (u8)(data >> 8); ++ ir->irData[index * 4 + 2] = (u8)(data >> 16); ++ ir->irData[index * 4 + 3] = (u8)(data >> 24); + } +- smi_raw_process(rc_dev, ir->irData, ucIRCount); +- smi_set(IR_Init_Reg, rbIRVld); ++ smi_raw_process(rc_dev, ir->irData, ir_count); + } + +- if (dwIRControl & rbIRhighidle) { ++ if (control & rbIRhighidle) { + struct ir_raw_event rawir = {}; + ++ dev_dbg(&rc_dev->dev, "high idle\n"); ++ + rawir.pulse = 0; + rawir.duration = SMI_SAMPLE_PERIOD * SMI_SAMPLE_IDLEMIN; + ir_raw_event_store_with_filter(rc_dev, &rawir); +- smi_set(IR_Init_Reg, rbIRhighidle); + } + ++ smi_set(IR_Init_Reg, rbIRVld); + ir_raw_event_handle(rc_dev); + } + +@@ -150,7 +156,7 @@ int smi_ir_init(struct smi_dev *dev) + rc_dev->dev.parent = &dev->pci_dev->dev; + + rc_dev->map_name = dev->info->rc_map; +- 
rc_dev->timeout = MS_TO_US(100); ++ rc_dev->timeout = SMI_SAMPLE_PERIOD * SMI_SAMPLE_IDLEMIN; + rc_dev->rx_resolution = SMI_SAMPLE_PERIOD; + + ir->rc_dev = rc_dev; +@@ -173,7 +179,7 @@ void smi_ir_exit(struct smi_dev *dev) + struct smi_rc *ir = &dev->ir; + struct rc_dev *rc_dev = ir->rc_dev; + +- smi_ir_stop(ir); + rc_unregister_device(rc_dev); ++ smi_ir_stop(ir); + ir->rc_dev = NULL; + } +diff --git a/drivers/media/platform/aspeed-video.c b/drivers/media/platform/aspeed-video.c +index c46a79eace98b..f2c4dadd6a0eb 100644 +--- a/drivers/media/platform/aspeed-video.c ++++ b/drivers/media/platform/aspeed-video.c +@@ -1551,12 +1551,12 @@ static int aspeed_video_setup_video(struct aspeed_video *video) + V4L2_JPEG_CHROMA_SUBSAMPLING_420, mask, + V4L2_JPEG_CHROMA_SUBSAMPLING_444); + +- if (video->ctrl_handler.error) { ++ rc = video->ctrl_handler.error; ++ if (rc) { + v4l2_ctrl_handler_free(&video->ctrl_handler); + v4l2_device_unregister(v4l2_dev); + +- dev_err(video->dev, "Failed to init controls: %d\n", +- video->ctrl_handler.error); ++ dev_err(video->dev, "Failed to init controls: %d\n", rc); + return rc; + } + +diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c +index c012fd2e1d291..34266fba824f2 100644 +--- a/drivers/media/platform/marvell-ccic/mcam-core.c ++++ b/drivers/media/platform/marvell-ccic/mcam-core.c +@@ -931,6 +931,7 @@ static int mclk_enable(struct clk_hw *hw) + mclk_div = 2; + } + ++ pm_runtime_get_sync(cam->dev); + clk_enable(cam->clk[0]); + mcam_reg_write(cam, REG_CLKCTRL, (mclk_src << 29) | mclk_div); + mcam_ctlr_power_up(cam); +@@ -944,6 +945,7 @@ static void mclk_disable(struct clk_hw *hw) + + mcam_ctlr_power_down(cam); + clk_disable(cam->clk[0]); ++ pm_runtime_put(cam->dev); + } + + static unsigned long mclk_recalc_rate(struct clk_hw *hw, +diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c +index 
3be8a04c4c679..219c2c5b78efc 100644 +--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c ++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_enc_drv.c +@@ -310,7 +310,7 @@ static int mtk_vcodec_probe(struct platform_device *pdev) + ret = PTR_ERR((__force void *)dev->reg_base[VENC_SYS]); + goto err_res; + } +- mtk_v4l2_debug(2, "reg[%d] base=0x%p", i, dev->reg_base[VENC_SYS]); ++ mtk_v4l2_debug(2, "reg[%d] base=0x%p", VENC_SYS, dev->reg_base[VENC_SYS]); + + res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); + if (res == NULL) { +@@ -339,7 +339,7 @@ static int mtk_vcodec_probe(struct platform_device *pdev) + ret = PTR_ERR((__force void *)dev->reg_base[VENC_LT_SYS]); + goto err_res; + } +- mtk_v4l2_debug(2, "reg[%d] base=0x%p", i, dev->reg_base[VENC_LT_SYS]); ++ mtk_v4l2_debug(2, "reg[%d] base=0x%p", VENC_LT_SYS, dev->reg_base[VENC_LT_SYS]); + + dev->enc_lt_irq = platform_get_irq(pdev, 1); + irq_set_status_flags(dev->enc_lt_irq, IRQ_NOAUTOEN); +diff --git a/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c b/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c +index 5ea153a685225..d9880210b2ab6 100644 +--- a/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c ++++ b/drivers/media/platform/mtk-vcodec/vdec/vdec_vp9_if.c +@@ -890,7 +890,8 @@ static int vdec_vp9_decode(void *h_vdec, struct mtk_vcodec_mem *bs, + memset(inst->seg_id_buf.va, 0, inst->seg_id_buf.size); + + if (vsi->show_frame & BIT(2)) { +- if (vpu_dec_start(&inst->vpu, NULL, 0)) { ++ ret = vpu_dec_start(&inst->vpu, NULL, 0); ++ if (ret) { + mtk_vcodec_err(inst, "vpu trig decoder failed"); + goto DECODE_ERROR; + } +diff --git a/drivers/media/platform/pxa_camera.c b/drivers/media/platform/pxa_camera.c +index e47520fcb93c0..4ee7d5327df05 100644 +--- a/drivers/media/platform/pxa_camera.c ++++ b/drivers/media/platform/pxa_camera.c +@@ -1386,6 +1386,9 @@ static int pxac_vb2_prepare(struct vb2_buffer *vb) + struct pxa_camera_dev *pcdev = vb2_get_drv_priv(vb->vb2_queue); + struct pxa_buffer *buf = 
vb2_to_pxa_buffer(vb); + int ret = 0; ++#ifdef DEBUG ++ int i; ++#endif + + switch (pcdev->channels) { + case 1: +diff --git a/drivers/media/platform/qcom/camss/camss-video.c b/drivers/media/platform/qcom/camss/camss-video.c +index 114c3ae4a4abb..15965e63cb619 100644 +--- a/drivers/media/platform/qcom/camss/camss-video.c ++++ b/drivers/media/platform/qcom/camss/camss-video.c +@@ -979,6 +979,7 @@ int msm_video_register(struct camss_video *video, struct v4l2_device *v4l2_dev, + video->nformats = ARRAY_SIZE(formats_rdi_8x96); + } + } else { ++ ret = -EINVAL; + goto error_video_register; + } + +diff --git a/drivers/media/platform/ti-vpe/cal.c b/drivers/media/platform/ti-vpe/cal.c +index 59a0266b1f399..2eef245c31a17 100644 +--- a/drivers/media/platform/ti-vpe/cal.c ++++ b/drivers/media/platform/ti-vpe/cal.c +@@ -406,7 +406,7 @@ static irqreturn_t cal_irq(int irq_cal, void *data) + */ + + struct cal_v4l2_async_subdev { +- struct v4l2_async_subdev asd; ++ struct v4l2_async_subdev asd; /* Must be first */ + struct cal_camerarx *phy; + }; + +@@ -472,7 +472,7 @@ static int cal_async_notifier_register(struct cal_dev *cal) + fwnode = of_fwnode_handle(phy->sensor_node); + asd = v4l2_async_notifier_add_fwnode_subdev(&cal->notifier, + fwnode, +- sizeof(*asd)); ++ sizeof(*casd)); + if (IS_ERR(asd)) { + phy_err(phy, "Failed to add subdev to notifier\n"); + ret = PTR_ERR(asd); +diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c +index dc62533cf32ce..aa66e4f5f3f34 100644 +--- a/drivers/media/platform/vsp1/vsp1_drv.c ++++ b/drivers/media/platform/vsp1/vsp1_drv.c +@@ -882,8 +882,10 @@ static int vsp1_probe(struct platform_device *pdev) + } + + done: +- if (ret) ++ if (ret) { + pm_runtime_disable(&pdev->dev); ++ rcar_fcp_put(vsp1->fcp); ++ } + + return ret; + } +diff --git a/drivers/media/rc/ir_toy.c b/drivers/media/rc/ir_toy.c +index e0242c9b6aeb1..3e729a17b35ff 100644 +--- a/drivers/media/rc/ir_toy.c ++++ b/drivers/media/rc/ir_toy.c +@@ -491,6 
+491,7 @@ static void irtoy_disconnect(struct usb_interface *intf) + + static const struct usb_device_id irtoy_table[] = { + { USB_DEVICE_INTERFACE_CLASS(0x04d8, 0xfd08, USB_CLASS_CDC_DATA) }, ++ { USB_DEVICE_INTERFACE_CLASS(0x04d8, 0xf58b, USB_CLASS_CDC_DATA) }, + { } + }; + +diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c +index f1dbd059ed087..c8d63673e131d 100644 +--- a/drivers/media/rc/mceusb.c ++++ b/drivers/media/rc/mceusb.c +@@ -1169,7 +1169,7 @@ static void mceusb_handle_command(struct mceusb_dev *ir, u8 *buf_in) + switch (subcmd) { + /* the one and only 5-byte return value command */ + case MCE_RSP_GETPORTSTATUS: +- if (buf_in[5] == 0) ++ if (buf_in[5] == 0 && *hi < 8) + ir->txports_cabled |= 1 << *hi; + break; + +diff --git a/drivers/media/test-drivers/vidtv/vidtv_psi.c b/drivers/media/test-drivers/vidtv/vidtv_psi.c +index 4511a2a98405d..1724bb485e670 100644 +--- a/drivers/media/test-drivers/vidtv/vidtv_psi.c ++++ b/drivers/media/test-drivers/vidtv/vidtv_psi.c +@@ -1164,6 +1164,8 @@ u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args) + struct vidtv_psi_desc *table_descriptor = args->pmt->descriptor; + struct vidtv_psi_table_pmt_stream *stream = args->pmt->stream; + struct vidtv_psi_desc *stream_descriptor; ++ u32 crc = INITIAL_CRC; ++ u32 nbytes = 0; + struct header_write_args h_args = { + .dest_buf = args->buf, + .dest_offset = args->offset, +@@ -1181,6 +1183,7 @@ u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args) + .new_psi_section = false, + .is_crc = false, + .dest_buf_sz = args->buf_sz, ++ .crc = &crc, + }; + struct desc_write_args d_args = { + .dest_buf = args->buf, +@@ -1193,8 +1196,6 @@ u32 vidtv_psi_pmt_write_into(struct vidtv_psi_pmt_write_args *args) + .pid = args->pid, + .dest_buf_sz = args->buf_sz, + }; +- u32 crc = INITIAL_CRC; +- u32 nbytes = 0; + + vidtv_psi_pmt_table_update_sec_len(args->pmt); + +diff --git a/drivers/media/tuners/qm1d1c0042.c b/drivers/media/tuners/qm1d1c0042.c +index 
0e26d22f0b268..53aa2558f71e1 100644 +--- a/drivers/media/tuners/qm1d1c0042.c ++++ b/drivers/media/tuners/qm1d1c0042.c +@@ -343,8 +343,10 @@ static int qm1d1c0042_init(struct dvb_frontend *fe) + if (val == reg_initval[reg_index][0x00]) + break; + } +- if (reg_index >= QM1D1C0042_NUM_REG_ROWS) ++ if (reg_index >= QM1D1C0042_NUM_REG_ROWS) { ++ ret = -EINVAL; + goto failed; ++ } + memcpy(state->regs, reg_initval[reg_index], QM1D1C0042_NUM_REGS); + usleep_range(2000, 3000); + +diff --git a/drivers/media/usb/dvb-usb-v2/lmedm04.c b/drivers/media/usb/dvb-usb-v2/lmedm04.c +index 5a7a9522d46da..9ddda8d68ee0f 100644 +--- a/drivers/media/usb/dvb-usb-v2/lmedm04.c ++++ b/drivers/media/usb/dvb-usb-v2/lmedm04.c +@@ -391,7 +391,7 @@ static int lme2510_int_read(struct dvb_usb_adapter *adap) + ep = usb_pipe_endpoint(d->udev, lme_int->lme_urb->pipe); + + if (usb_endpoint_type(&ep->desc) == USB_ENDPOINT_XFER_BULK) +- lme_int->lme_urb->pipe = usb_rcvbulkpipe(d->udev, 0xa), ++ lme_int->lme_urb->pipe = usb_rcvbulkpipe(d->udev, 0xa); + + usb_submit_urb(lme_int->lme_urb, GFP_ATOMIC); + info("INT Interrupt Service Started"); +diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c +index e6088b5d1b805..3daa64bb1e1d9 100644 +--- a/drivers/media/usb/em28xx/em28xx-core.c ++++ b/drivers/media/usb/em28xx/em28xx-core.c +@@ -956,14 +956,10 @@ int em28xx_alloc_urbs(struct em28xx *dev, enum em28xx_mode mode, int xfer_bulk, + + usb_bufs->buf[i] = kzalloc(sb_size, GFP_KERNEL); + if (!usb_bufs->buf[i]) { +- em28xx_uninit_usb_xfer(dev, mode); +- + for (i--; i >= 0; i--) + kfree(usb_bufs->buf[i]); + +- kfree(usb_bufs->buf); +- usb_bufs->buf = NULL; +- ++ em28xx_uninit_usb_xfer(dev, mode); + return -ENOMEM; + } + +diff --git a/drivers/media/usb/tm6000/tm6000-dvb.c b/drivers/media/usb/tm6000/tm6000-dvb.c +index 19c90fa9e443d..293a460f4616c 100644 +--- a/drivers/media/usb/tm6000/tm6000-dvb.c ++++ b/drivers/media/usb/tm6000/tm6000-dvb.c +@@ -141,6 +141,10 @@ static int 
tm6000_start_stream(struct tm6000_core *dev) + if (ret < 0) { + printk(KERN_ERR "tm6000: error %i in %s during pipe reset\n", + ret, __func__); ++ ++ kfree(dvb->bulk_urb->transfer_buffer); ++ usb_free_urb(dvb->bulk_urb); ++ dvb->bulk_urb = NULL; + return ret; + } else + printk(KERN_ERR "tm6000: pipe reset\n"); +diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c +index fa06bfa174ad3..c7172b8952a96 100644 +--- a/drivers/media/usb/uvc/uvc_v4l2.c ++++ b/drivers/media/usb/uvc/uvc_v4l2.c +@@ -248,7 +248,9 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream, + goto done; + + /* After the probe, update fmt with the values returned from +- * negotiation with the device. ++ * negotiation with the device. Some devices return invalid bFormatIndex ++ * and bFrameIndex values, in which case we can only assume they have ++ * accepted the requested format as-is. + */ + for (i = 0; i < stream->nformats; ++i) { + if (probe->bFormatIndex == stream->format[i].index) { +@@ -257,11 +259,10 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream, + } + } + +- if (i == stream->nformats) { +- uvc_trace(UVC_TRACE_FORMAT, "Unknown bFormatIndex %u\n", ++ if (i == stream->nformats) ++ uvc_trace(UVC_TRACE_FORMAT, ++ "Unknown bFormatIndex %u, using default\n", + probe->bFormatIndex); +- return -EINVAL; +- } + + for (i = 0; i < format->nframes; ++i) { + if (probe->bFrameIndex == format->frame[i].bFrameIndex) { +@@ -270,11 +271,10 @@ static int uvc_v4l2_try_format(struct uvc_streaming *stream, + } + } + +- if (i == format->nframes) { +- uvc_trace(UVC_TRACE_FORMAT, "Unknown bFrameIndex %u\n", ++ if (i == format->nframes) ++ uvc_trace(UVC_TRACE_FORMAT, ++ "Unknown bFrameIndex %u, using default\n", + probe->bFrameIndex); +- return -EINVAL; +- } + + fmt->fmt.pix.width = frame->wWidth; + fmt->fmt.pix.height = frame->wHeight; +diff --git a/drivers/memory/mtk-smi.c b/drivers/memory/mtk-smi.c +index 691e4c344cf84..75f8e0f60d81d 100644 +--- 
a/drivers/memory/mtk-smi.c ++++ b/drivers/memory/mtk-smi.c +@@ -130,7 +130,7 @@ static void mtk_smi_clk_disable(const struct mtk_smi *smi) + + int mtk_smi_larb_get(struct device *larbdev) + { +- int ret = pm_runtime_get_sync(larbdev); ++ int ret = pm_runtime_resume_and_get(larbdev); + + return (ret < 0) ? ret : 0; + } +@@ -366,7 +366,7 @@ static int __maybe_unused mtk_smi_larb_resume(struct device *dev) + int ret; + + /* Power on smi-common. */ +- ret = pm_runtime_get_sync(larb->smi_common_dev); ++ ret = pm_runtime_resume_and_get(larb->smi_common_dev); + if (ret < 0) { + dev_err(dev, "Failed to pm get for smi-common(%d).\n", ret); + return ret; +diff --git a/drivers/memory/ti-aemif.c b/drivers/memory/ti-aemif.c +index 159a16f5e7d67..51d20c2ccb755 100644 +--- a/drivers/memory/ti-aemif.c ++++ b/drivers/memory/ti-aemif.c +@@ -378,8 +378,10 @@ static int aemif_probe(struct platform_device *pdev) + */ + for_each_available_child_of_node(np, child_np) { + ret = of_aemif_parse_abus_config(pdev, child_np); +- if (ret < 0) ++ if (ret < 0) { ++ of_node_put(child_np); + goto error; ++ } + } + } else if (pdata && pdata->num_abus_data > 0) { + for (i = 0; i < pdata->num_abus_data; i++, aemif->num_cs++) { +@@ -405,8 +407,10 @@ static int aemif_probe(struct platform_device *pdev) + for_each_available_child_of_node(np, child_np) { + ret = of_platform_populate(child_np, NULL, + dev_lookup, dev); +- if (ret < 0) ++ if (ret < 0) { ++ of_node_put(child_np); + goto error; ++ } + } + } else if (pdata) { + for (i = 0; i < pdata->num_sub_devices; i++) { +diff --git a/drivers/mfd/altera-sysmgr.c b/drivers/mfd/altera-sysmgr.c +index 41076d121dd54..591b300d90953 100644 +--- a/drivers/mfd/altera-sysmgr.c ++++ b/drivers/mfd/altera-sysmgr.c +@@ -145,7 +145,8 @@ static int sysmgr_probe(struct platform_device *pdev) + sysmgr_config.reg_write = s10_protected_reg_write; + + /* Need physical address for SMCC call */ +- regmap = devm_regmap_init(dev, NULL, (void *)res->start, ++ regmap = 
devm_regmap_init(dev, NULL, ++ (void *)(uintptr_t)res->start, + &sysmgr_config); + } else { + base = devm_ioremap(dev, res->start, resource_size(res)); +diff --git a/drivers/mfd/bd9571mwv.c b/drivers/mfd/bd9571mwv.c +index fab3cdc27ed64..19d57a45134c6 100644 +--- a/drivers/mfd/bd9571mwv.c ++++ b/drivers/mfd/bd9571mwv.c +@@ -185,9 +185,9 @@ static int bd9571mwv_probe(struct i2c_client *client, + return ret; + } + +- ret = mfd_add_devices(bd->dev, PLATFORM_DEVID_AUTO, bd9571mwv_cells, +- ARRAY_SIZE(bd9571mwv_cells), NULL, 0, +- regmap_irq_get_domain(bd->irq_data)); ++ ret = devm_mfd_add_devices(bd->dev, PLATFORM_DEVID_AUTO, ++ bd9571mwv_cells, ARRAY_SIZE(bd9571mwv_cells), ++ NULL, 0, regmap_irq_get_domain(bd->irq_data)); + if (ret) { + regmap_del_irq_chip(bd->irq, bd->irq_data); + return ret; +diff --git a/drivers/mfd/gateworks-gsc.c b/drivers/mfd/gateworks-gsc.c +index 576da62fbb0ce..d87876747b913 100644 +--- a/drivers/mfd/gateworks-gsc.c ++++ b/drivers/mfd/gateworks-gsc.c +@@ -234,7 +234,7 @@ static int gsc_probe(struct i2c_client *client) + + ret = devm_regmap_add_irq_chip(dev, gsc->regmap, client->irq, + IRQF_ONESHOT | IRQF_SHARED | +- IRQF_TRIGGER_FALLING, 0, ++ IRQF_TRIGGER_LOW, 0, + &gsc_irq_chip, &irq_data); + if (ret) + return ret; +diff --git a/drivers/mfd/wm831x-auxadc.c b/drivers/mfd/wm831x-auxadc.c +index 8a7cc0f86958b..65b98f3fbd929 100644 +--- a/drivers/mfd/wm831x-auxadc.c ++++ b/drivers/mfd/wm831x-auxadc.c +@@ -93,11 +93,10 @@ static int wm831x_auxadc_read_irq(struct wm831x *wm831x, + wait_for_completion_timeout(&req->done, msecs_to_jiffies(500)); + + mutex_lock(&wm831x->auxadc_lock); +- +- list_del(&req->list); + ret = req->val; + + out: ++ list_del(&req->list); + mutex_unlock(&wm831x->auxadc_lock); + + kfree(req); +diff --git a/drivers/misc/cardreader/rts5227.c b/drivers/misc/cardreader/rts5227.c +index 8859011672cb9..8200af22b529e 100644 +--- a/drivers/misc/cardreader/rts5227.c ++++ b/drivers/misc/cardreader/rts5227.c +@@ -398,6 +398,11 @@ static 
int rts522a_extra_init_hw(struct rtsx_pcr *pcr) + { + rts5227_extra_init_hw(pcr); + ++ /* Power down OCP for power consumption */ ++ if (!pcr->card_exist) ++ rtsx_pci_write_register(pcr, FPDCTL, OC_POWER_DOWN, ++ OC_POWER_DOWN); ++ + rtsx_pci_write_register(pcr, FUNC_FORCE_CTL, FUNC_FORCE_UPME_XMT_DBG, + FUNC_FORCE_UPME_XMT_DBG); + rtsx_pci_write_register(pcr, PCLK_CTL, 0x04, 0x04); +diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c +index 7c45f82b43027..d92c4d2c521a3 100644 +--- a/drivers/misc/eeprom/eeprom_93xx46.c ++++ b/drivers/misc/eeprom/eeprom_93xx46.c +@@ -512,3 +512,4 @@ MODULE_LICENSE("GPL"); + MODULE_DESCRIPTION("Driver for 93xx46 EEPROMs"); + MODULE_AUTHOR("Anatolij Gustschin "); + MODULE_ALIAS("spi:93xx46"); ++MODULE_ALIAS("spi:eeprom-93xx46"); +diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c +index 994ab67bc2dce..815d01f785dff 100644 +--- a/drivers/misc/fastrpc.c ++++ b/drivers/misc/fastrpc.c +@@ -520,12 +520,13 @@ fastrpc_map_dma_buf(struct dma_buf_attachment *attachment, + { + struct fastrpc_dma_buf_attachment *a = attachment->priv; + struct sg_table *table; ++ int ret; + + table = &a->sgt; + +- if (!dma_map_sgtable(attachment->dev, table, dir, 0)) +- return ERR_PTR(-ENOMEM); +- ++ ret = dma_map_sgtable(attachment->dev, table, dir, 0); ++ if (ret) ++ table = ERR_PTR(ret); + return table; + } + +diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c +index a97eb5d47705d..33579d9795c32 100644 +--- a/drivers/misc/mei/hbm.c ++++ b/drivers/misc/mei/hbm.c +@@ -1373,7 +1373,7 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr) + return -EPROTO; + } + +- dev->dev_state = MEI_DEV_POWER_DOWN; ++ mei_set_devstate(dev, MEI_DEV_POWER_DOWN); + dev_info(dev->dev, "hbm: stop response: resetting.\n"); + /* force the reset */ + return -EPROTO; +diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h +index 9cf8d8f60cfef..14be76d4c2e61 100644 +--- 
a/drivers/misc/mei/hw-me-regs.h ++++ b/drivers/misc/mei/hw-me-regs.h +@@ -101,6 +101,11 @@ + #define MEI_DEV_ID_MCC 0x4B70 /* Mule Creek Canyon (EHL) */ + #define MEI_DEV_ID_MCC_4 0x4B75 /* Mule Creek Canyon 4 (EHL) */ + ++#define MEI_DEV_ID_EBG 0x1BE0 /* Emmitsburg WS */ ++ ++#define MEI_DEV_ID_ADP_S 0x7AE8 /* Alder Lake Point S */ ++#define MEI_DEV_ID_ADP_LP 0x7A60 /* Alder Lake Point LP */ ++ + /* + * MEI HW Section + */ +diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c +index 326955b04fda9..2161c1234ad72 100644 +--- a/drivers/misc/mei/interrupt.c ++++ b/drivers/misc/mei/interrupt.c +@@ -295,12 +295,17 @@ static inline bool hdr_is_fixed(struct mei_msg_hdr *mei_hdr) + static inline int hdr_is_valid(u32 msg_hdr) + { + struct mei_msg_hdr *mei_hdr; ++ u32 expected_len = 0; + + mei_hdr = (struct mei_msg_hdr *)&msg_hdr; + if (!msg_hdr || mei_hdr->reserved) + return -EBADMSG; + +- if (mei_hdr->dma_ring && mei_hdr->length != MEI_SLOT_SIZE) ++ if (mei_hdr->dma_ring) ++ expected_len += MEI_SLOT_SIZE; ++ if (mei_hdr->extended) ++ expected_len += MEI_SLOT_SIZE; ++ if (mei_hdr->length < expected_len) + return -EBADMSG; + + return 0; +@@ -324,6 +329,8 @@ int mei_irq_read_handler(struct mei_device *dev, + struct mei_cl *cl; + int ret; + u32 ext_meta_hdr_u32; ++ u32 hdr_size_left; ++ u32 hdr_size_ext; + int i; + int ext_hdr_end; + +@@ -353,6 +360,7 @@ int mei_irq_read_handler(struct mei_device *dev, + } + + ext_hdr_end = 1; ++ hdr_size_left = mei_hdr->length; + + if (mei_hdr->extended) { + if (!dev->rd_msg_hdr[1]) { +@@ -363,8 +371,21 @@ int mei_irq_read_handler(struct mei_device *dev, + dev_dbg(dev->dev, "extended header is %08x\n", + ext_meta_hdr_u32); + } +- meta_hdr = ((struct mei_ext_meta_hdr *) +- dev->rd_msg_hdr + 1); ++ meta_hdr = ((struct mei_ext_meta_hdr *)dev->rd_msg_hdr + 1); ++ if (check_add_overflow((u32)sizeof(*meta_hdr), ++ mei_slots2data(meta_hdr->size), ++ &hdr_size_ext)) { ++ dev_err(dev->dev, "extended message size too big %d\n", ++ 
meta_hdr->size); ++ return -EBADMSG; ++ } ++ if (hdr_size_left < hdr_size_ext) { ++ dev_err(dev->dev, "corrupted message header len %d\n", ++ mei_hdr->length); ++ return -EBADMSG; ++ } ++ hdr_size_left -= hdr_size_ext; ++ + ext_hdr_end = meta_hdr->size + 2; + for (i = dev->rd_msg_hdr_count; i < ext_hdr_end; i++) { + dev->rd_msg_hdr[i] = mei_read_hdr(dev); +@@ -376,6 +397,12 @@ int mei_irq_read_handler(struct mei_device *dev, + } + + if (mei_hdr->dma_ring) { ++ if (hdr_size_left != sizeof(dev->rd_msg_hdr[ext_hdr_end])) { ++ dev_err(dev->dev, "corrupted message header len %d\n", ++ mei_hdr->length); ++ return -EBADMSG; ++ } ++ + dev->rd_msg_hdr[ext_hdr_end] = mei_read_hdr(dev); + dev->rd_msg_hdr_count++; + (*slots)--; +diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c +index 1de9ef7a272ba..a7e179626b635 100644 +--- a/drivers/misc/mei/pci-me.c ++++ b/drivers/misc/mei/pci-me.c +@@ -107,6 +107,11 @@ static const struct pci_device_id mei_me_pci_tbl[] = { + + {MEI_PCI_DEVICE(MEI_DEV_ID_CDF, MEI_ME_PCH8_CFG)}, + ++ {MEI_PCI_DEVICE(MEI_DEV_ID_EBG, MEI_ME_PCH15_SPS_CFG)}, ++ ++ {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_S, MEI_ME_PCH15_CFG)}, ++ {MEI_PCI_DEVICE(MEI_DEV_ID_ADP_LP, MEI_ME_PCH15_CFG)}, ++ + /* required last entry */ + {0, } + }; +diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c +index c49065887e8f5..c2338750313c4 100644 +--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c ++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c +@@ -537,6 +537,9 @@ static struct vmci_queue *qp_host_alloc_queue(u64 size) + + queue_page_size = num_pages * sizeof(*queue->kernel_if->u.h.page); + ++ if (queue_size + queue_page_size > KMALLOC_MAX_SIZE) ++ return NULL; ++ + queue = kzalloc(queue_size + queue_page_size, GFP_KERNEL); + if (queue) { + queue->q_header = NULL; +@@ -630,7 +633,7 @@ static void qp_release_pages(struct page **pages, + + for (i = 0; i < num_pages; i++) { + if (dirty) +- set_page_dirty(pages[i]); ++ 
set_page_dirty_lock(pages[i]); + + put_page(pages[i]); + pages[i] = NULL; +diff --git a/drivers/mmc/host/owl-mmc.c b/drivers/mmc/host/owl-mmc.c +index ccf214a89eda9..3d4abf175b1d8 100644 +--- a/drivers/mmc/host/owl-mmc.c ++++ b/drivers/mmc/host/owl-mmc.c +@@ -641,7 +641,7 @@ static int owl_mmc_probe(struct platform_device *pdev) + owl_host->irq = platform_get_irq(pdev, 0); + if (owl_host->irq < 0) { + ret = -EINVAL; +- goto err_free_host; ++ goto err_release_channel; + } + + ret = devm_request_irq(&pdev->dev, owl_host->irq, owl_irq_handler, +@@ -649,19 +649,21 @@ static int owl_mmc_probe(struct platform_device *pdev) + if (ret) { + dev_err(&pdev->dev, "Failed to request irq %d\n", + owl_host->irq); +- goto err_free_host; ++ goto err_release_channel; + } + + ret = mmc_add_host(mmc); + if (ret) { + dev_err(&pdev->dev, "Failed to add host\n"); +- goto err_free_host; ++ goto err_release_channel; + } + + dev_dbg(&pdev->dev, "Owl MMC Controller Initialized\n"); + + return 0; + ++err_release_channel: ++ dma_release_channel(owl_host->dma); + err_free_host: + mmc_free_host(mmc); + +@@ -675,6 +677,7 @@ static int owl_mmc_remove(struct platform_device *pdev) + + mmc_remove_host(mmc); + disable_irq(owl_host->irq); ++ dma_release_channel(owl_host->dma); + mmc_free_host(mmc); + + return 0; +diff --git a/drivers/mmc/host/renesas_sdhi_internal_dmac.c b/drivers/mmc/host/renesas_sdhi_internal_dmac.c +index fe13e1ea22dcc..f3e76d6b3e3fe 100644 +--- a/drivers/mmc/host/renesas_sdhi_internal_dmac.c ++++ b/drivers/mmc/host/renesas_sdhi_internal_dmac.c +@@ -186,8 +186,8 @@ renesas_sdhi_internal_dmac_start_dma(struct tmio_mmc_host *host, + mmc_get_dma_dir(data))) + goto force_pio; + +- /* This DMAC cannot handle if buffer is not 8-bytes alignment */ +- if (!IS_ALIGNED(sg_dma_address(sg), 8)) ++ /* This DMAC cannot handle if buffer is not 128-bytes alignment */ ++ if (!IS_ALIGNED(sg_dma_address(sg), 128)) + goto force_pio_with_unmap; + + if (data->flags & MMC_DATA_READ) { +diff --git 
a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c +index fce8fa7e6b309..5d9b3106d2f70 100644 +--- a/drivers/mmc/host/sdhci-esdhc-imx.c ++++ b/drivers/mmc/host/sdhci-esdhc-imx.c +@@ -1752,9 +1752,10 @@ static int sdhci_esdhc_imx_remove(struct platform_device *pdev) + struct sdhci_host *host = platform_get_drvdata(pdev); + struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); + struct pltfm_imx_data *imx_data = sdhci_pltfm_priv(pltfm_host); +- int dead = (readl(host->ioaddr + SDHCI_INT_STATUS) == 0xffffffff); ++ int dead; + + pm_runtime_get_sync(&pdev->dev); ++ dead = (readl(host->ioaddr + SDHCI_INT_STATUS) == 0xffffffff); + pm_runtime_disable(&pdev->dev); + pm_runtime_put_noidle(&pdev->dev); + +diff --git a/drivers/mmc/host/sdhci-pci-o2micro.c b/drivers/mmc/host/sdhci-pci-o2micro.c +index fa76748d89293..94e3f72f6405d 100644 +--- a/drivers/mmc/host/sdhci-pci-o2micro.c ++++ b/drivers/mmc/host/sdhci-pci-o2micro.c +@@ -33,6 +33,8 @@ + #define O2_SD_ADMA2 0xE7 + #define O2_SD_INF_MOD 0xF1 + #define O2_SD_MISC_CTRL4 0xFC ++#define O2_SD_MISC_CTRL 0x1C0 ++#define O2_SD_PWR_FORCE_L0 0x0002 + #define O2_SD_TUNING_CTRL 0x300 + #define O2_SD_PLL_SETTING 0x304 + #define O2_SD_MISC_SETTING 0x308 +@@ -300,6 +302,8 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode) + { + struct sdhci_host *host = mmc_priv(mmc); + int current_bus_width = 0; ++ u32 scratch32 = 0; ++ u16 scratch = 0; + + /* + * This handler only implements the eMMC tuning that is specific to +@@ -312,6 +316,17 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode) + if (WARN_ON((opcode != MMC_SEND_TUNING_BLOCK_HS200) && + (opcode != MMC_SEND_TUNING_BLOCK))) + return -EINVAL; ++ ++ /* Force power mode enter L0 */ ++ scratch = sdhci_readw(host, O2_SD_MISC_CTRL); ++ scratch |= O2_SD_PWR_FORCE_L0; ++ sdhci_writew(host, scratch, O2_SD_MISC_CTRL); ++ ++ /* wait DLL lock, timeout value 5ms */ ++ if (readx_poll_timeout(sdhci_o2_pll_dll_wdt_control, host, ++ 
scratch32, (scratch32 & O2_DLL_LOCK_STATUS), 1, 5000)) ++ pr_warn("%s: DLL can't lock in 5ms after force L0 during tuning.\n", ++ mmc_hostname(host->mmc)); + /* + * Judge the tuning reason, whether caused by dll shift + * If cause by dll shift, should call sdhci_o2_dll_recovery +@@ -344,6 +359,11 @@ static int sdhci_o2_execute_tuning(struct mmc_host *mmc, u32 opcode) + sdhci_set_bus_width(host, current_bus_width); + } + ++ /* Cancel force power mode enter L0 */ ++ scratch = sdhci_readw(host, O2_SD_MISC_CTRL); ++ scratch &= ~(O2_SD_PWR_FORCE_L0); ++ sdhci_writew(host, scratch, O2_SD_MISC_CTRL); ++ + sdhci_reset(host, SDHCI_RESET_CMD); + sdhci_reset(host, SDHCI_RESET_DATA); + +diff --git a/drivers/mmc/host/sdhci-sprd.c b/drivers/mmc/host/sdhci-sprd.c +index 58109c5b53e2e..19cbb6171b358 100644 +--- a/drivers/mmc/host/sdhci-sprd.c ++++ b/drivers/mmc/host/sdhci-sprd.c +@@ -708,14 +708,14 @@ static int sdhci_sprd_remove(struct platform_device *pdev) + { + struct sdhci_host *host = platform_get_drvdata(pdev); + struct sdhci_sprd_host *sprd_host = TO_SPRD_HOST(host); +- struct mmc_host *mmc = host->mmc; + +- mmc_remove_host(mmc); ++ sdhci_remove_host(host, 0); ++ + clk_disable_unprepare(sprd_host->clk_sdio); + clk_disable_unprepare(sprd_host->clk_enable); + clk_disable_unprepare(sprd_host->clk_2x_enable); + +- mmc_free_host(mmc); ++ sdhci_pltfm_free(pdev); + + return 0; + } +diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c +index e2d5112d809dc..615f3d008af1e 100644 +--- a/drivers/mmc/host/usdhi6rol0.c ++++ b/drivers/mmc/host/usdhi6rol0.c +@@ -1858,10 +1858,12 @@ static int usdhi6_probe(struct platform_device *pdev) + + ret = mmc_add_host(mmc); + if (ret < 0) +- goto e_clk_off; ++ goto e_release_dma; + + return 0; + ++e_release_dma: ++ usdhi6_dma_release(host); + e_clk_off: + clk_disable_unprepare(host->clk); + e_free_mmc: +diff --git a/drivers/mtd/parsers/afs.c b/drivers/mtd/parsers/afs.c +index 980e332bdac48..26116694c821b 100644 +--- 
a/drivers/mtd/parsers/afs.c ++++ b/drivers/mtd/parsers/afs.c +@@ -370,10 +370,8 @@ static int parse_afs_partitions(struct mtd_info *mtd, + return i; + + out_free_parts: +- while (i >= 0) { ++ while (--i >= 0) + kfree(parts[i].name); +- i--; +- } + kfree(parts); + *pparts = NULL; + return ret; +diff --git a/drivers/mtd/parsers/parser_imagetag.c b/drivers/mtd/parsers/parser_imagetag.c +index d69607b482272..fab0949aabba1 100644 +--- a/drivers/mtd/parsers/parser_imagetag.c ++++ b/drivers/mtd/parsers/parser_imagetag.c +@@ -83,6 +83,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master, + pr_err("invalid rootfs address: %*ph\n", + (int)sizeof(buf->flash_image_start), + buf->flash_image_start); ++ ret = -EINVAL; + goto out; + } + +@@ -92,6 +93,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master, + pr_err("invalid kernel address: %*ph\n", + (int)sizeof(buf->kernel_address), + buf->kernel_address); ++ ret = -EINVAL; + goto out; + } + +@@ -100,6 +102,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master, + pr_err("invalid kernel length: %*ph\n", + (int)sizeof(buf->kernel_length), + buf->kernel_length); ++ ret = -EINVAL; + goto out; + } + +@@ -108,6 +111,7 @@ static int bcm963xx_parse_imagetag_partitions(struct mtd_info *master, + pr_err("invalid total length: %*ph\n", + (int)sizeof(buf->total_length), + buf->total_length); ++ ret = -EINVAL; + goto out; + } + +diff --git a/drivers/mtd/spi-nor/controllers/hisi-sfc.c b/drivers/mtd/spi-nor/controllers/hisi-sfc.c +index 95c502173cbda..440fc5ae7d34c 100644 +--- a/drivers/mtd/spi-nor/controllers/hisi-sfc.c ++++ b/drivers/mtd/spi-nor/controllers/hisi-sfc.c +@@ -399,8 +399,10 @@ static int hisi_spi_nor_register_all(struct hifmc_host *host) + + for_each_available_child_of_node(dev->of_node, np) { + ret = hisi_spi_nor_register(np, host); +- if (ret) ++ if (ret) { ++ of_node_put(np); + goto fail; ++ } + + if (host->num_chip == HIFMC_MAX_CHIP_NUM) { + dev_warn(dev, "Flash 
device number exceeds the maximum chipselect number\n"); +diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c +index ad6c79d9a7f86..06e1bf01fd920 100644 +--- a/drivers/mtd/spi-nor/core.c ++++ b/drivers/mtd/spi-nor/core.c +@@ -1212,14 +1212,15 @@ spi_nor_find_best_erase_type(const struct spi_nor_erase_map *map, + + erase = &map->erase_type[i]; + ++ /* Alignment is not mandatory for overlaid regions */ ++ if (region->offset & SNOR_OVERLAID_REGION && ++ region->size <= len) ++ return erase; ++ + /* Don't erase more than what the user has asked for. */ + if (erase->size > len) + continue; + +- /* Alignment is not mandatory for overlaid regions */ +- if (region->offset & SNOR_OVERLAID_REGION) +- return erase; +- + spi_nor_div_by_erase_size(erase, addr, &rem); + if (rem) + continue; +@@ -1363,6 +1364,7 @@ static int spi_nor_init_erase_cmd_list(struct spi_nor *nor, + goto destroy_erase_cmd_list; + + if (prev_erase != erase || ++ erase->size != cmd->size || + region->offset & SNOR_OVERLAID_REGION) { + cmd = spi_nor_init_erase_cmd(region, erase); + if (IS_ERR(cmd)) { +diff --git a/drivers/mtd/spi-nor/sfdp.c b/drivers/mtd/spi-nor/sfdp.c +index e2a43d39eb5f4..08de2a2b44520 100644 +--- a/drivers/mtd/spi-nor/sfdp.c ++++ b/drivers/mtd/spi-nor/sfdp.c +@@ -760,7 +760,7 @@ spi_nor_region_check_overlay(struct spi_nor_erase_region *region, + int i; + + for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) { +- if (!(erase_type & BIT(i))) ++ if (!(erase[i].size && erase_type & BIT(erase[i].idx))) + continue; + if (region->size & erase[i].size_mask) { + spi_nor_region_mark_overlay(region); +@@ -830,6 +830,7 @@ spi_nor_init_non_uniform_erase_map(struct spi_nor *nor, + offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) + + region[i].size; + } ++ spi_nor_region_mark_end(®ion[i - 1]); + + save_uniform_erase_type = map->uniform_erase_type; + map->uniform_erase_type = spi_nor_sort_erase_mask(map, +@@ -853,8 +854,6 @@ spi_nor_init_non_uniform_erase_map(struct spi_nor *nor, + if 
(!(regions_erase_type & BIT(erase[i].idx))) + spi_nor_set_erase_type(&erase[i], 0, 0xFF); + +- spi_nor_region_mark_end(®ion[i - 1]); +- + return 0; + } + +diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig +index c3dbe64e628ea..13e0a8caf3b6f 100644 +--- a/drivers/net/Kconfig ++++ b/drivers/net/Kconfig +@@ -87,7 +87,7 @@ config WIREGUARD + select CRYPTO_CURVE25519_X86 if X86 && 64BIT + select ARM_CRYPTO if ARM + select ARM64_CRYPTO if ARM64 +- select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON ++ select CRYPTO_CHACHA20_NEON if ARM || (ARM64 && KERNEL_MODE_NEON) + select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON + select CRYPTO_POLY1305_ARM if ARM + select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON +diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c +index 59de6b3b5f026..096d818c167e2 100644 +--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c ++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c +@@ -2824,7 +2824,7 @@ static int mcp251xfd_probe(struct spi_device *spi) + spi_get_device_id(spi)->driver_data; + + /* Errata Reference: +- * mcp2517fd: DS80000789B, mcp2518fd: DS80000792C 4. ++ * mcp2517fd: DS80000792C 5., mcp2518fd: DS80000789C 4. 
+ * + * The SPI can write corrupted data to the RAM at fast SPI + * speeds: +diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c +index 89d7c9b231863..4e53464411edf 100644 +--- a/drivers/net/dsa/ocelot/felix.c ++++ b/drivers/net/dsa/ocelot/felix.c +@@ -635,14 +635,18 @@ static void felix_teardown(struct dsa_switch *ds) + struct felix *felix = ocelot_to_felix(ocelot); + int port; + +- if (felix->info->mdio_bus_free) +- felix->info->mdio_bus_free(ocelot); +- +- for (port = 0; port < ocelot->num_phys_ports; port++) +- ocelot_deinit_port(ocelot, port); + ocelot_deinit_timestamp(ocelot); +- /* stop workqueue thread */ + ocelot_deinit(ocelot); ++ ++ for (port = 0; port < ocelot->num_phys_ports; port++) { ++ if (dsa_is_unused_port(ds, port)) ++ continue; ++ ++ ocelot_deinit_port(ocelot, port); ++ } ++ ++ if (felix->info->mdio_bus_free) ++ felix->info->mdio_bus_free(ocelot); + } + + static int felix_hwtstamp_get(struct dsa_switch *ds, int port, +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h +index b40d4377cc71d..b2cd3bdba9f89 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h +@@ -1279,10 +1279,18 @@ + #define MDIO_PMA_10GBR_FECCTRL 0x00ab + #endif + ++#ifndef MDIO_PMA_RX_CTRL1 ++#define MDIO_PMA_RX_CTRL1 0x8051 ++#endif ++ + #ifndef MDIO_PCS_DIG_CTRL + #define MDIO_PCS_DIG_CTRL 0x8000 + #endif + ++#ifndef MDIO_PCS_DIGITAL_STAT ++#define MDIO_PCS_DIGITAL_STAT 0x8010 ++#endif ++ + #ifndef MDIO_AN_XNP + #define MDIO_AN_XNP 0x0016 + #endif +@@ -1358,6 +1366,8 @@ + #define XGBE_KR_TRAINING_ENABLE BIT(1) + + #define XGBE_PCS_CL37_BP BIT(12) ++#define XGBE_PCS_PSEQ_STATE_MASK 0x1c ++#define XGBE_PCS_PSEQ_STATE_POWER_GOOD 0x10 + + #define XGBE_AN_CL37_INT_CMPLT BIT(0) + #define XGBE_AN_CL37_INT_MASK 0x01 +@@ -1375,6 +1385,10 @@ + #define XGBE_PMA_CDR_TRACK_EN_OFF 0x00 + #define XGBE_PMA_CDR_TRACK_EN_ON 0x01 + ++#define XGBE_PMA_RX_RST_0_MASK 
BIT(4) ++#define XGBE_PMA_RX_RST_0_RESET_ON 0x10 ++#define XGBE_PMA_RX_RST_0_RESET_OFF 0x00 ++ + /* Bit setting and getting macros + * The get macro will extract the current bit field value from within + * the variable +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c +index 2709a2db56577..395eb0b526802 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c +@@ -1368,6 +1368,7 @@ static void xgbe_stop(struct xgbe_prv_data *pdata) + return; + + netif_tx_stop_all_queues(netdev); ++ netif_carrier_off(pdata->netdev); + + xgbe_stop_timers(pdata); + flush_workqueue(pdata->dev_workqueue); +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +index 93ef5a30cb8d9..4e97b48695220 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +@@ -1345,7 +1345,7 @@ static void xgbe_phy_status(struct xgbe_prv_data *pdata) + &an_restart); + if (an_restart) { + xgbe_phy_config_aneg(pdata); +- return; ++ goto adjust_link; + } + + if (pdata->phy.link) { +@@ -1396,7 +1396,6 @@ static void xgbe_phy_stop(struct xgbe_prv_data *pdata) + pdata->phy_if.phy_impl.stop(pdata); + + pdata->phy.link = 0; +- netif_carrier_off(pdata->netdev); + + xgbe_phy_adjust_link(pdata); + } +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c +index 859ded0c06b05..18e48b3bc402b 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c +@@ -922,6 +922,9 @@ static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata) + if ((phy_id & 0xfffffff0) != 0x03625d10) + return false; + ++ /* Reset PHY - wait for self-clearing reset bit to clear */ ++ genphy_soft_reset(phy_data->phydev); ++ + /* Disable RGMII mode */ + phy_write(phy_data->phydev, 0x18, 0x7007); + reg = phy_read(phy_data->phydev, 0x18); +@@ -1953,6 +1956,27 @@ 
static void xgbe_phy_set_redrv_mode(struct xgbe_prv_data *pdata) + xgbe_phy_put_comm_ownership(pdata); + } + ++static void xgbe_phy_rx_reset(struct xgbe_prv_data *pdata) ++{ ++ int reg; ++ ++ reg = XMDIO_READ_BITS(pdata, MDIO_MMD_PCS, MDIO_PCS_DIGITAL_STAT, ++ XGBE_PCS_PSEQ_STATE_MASK); ++ if (reg == XGBE_PCS_PSEQ_STATE_POWER_GOOD) { ++ /* Mailbox command timed out, reset of RX block is required. ++ * This can be done by asserting the reset bit and waiting for ++ * its completion. ++ */ ++ XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_CTRL1, ++ XGBE_PMA_RX_RST_0_MASK, XGBE_PMA_RX_RST_0_RESET_ON); ++ ndelay(20); ++ XMDIO_WRITE_BITS(pdata, MDIO_MMD_PMAPMD, MDIO_PMA_RX_CTRL1, ++ XGBE_PMA_RX_RST_0_MASK, XGBE_PMA_RX_RST_0_RESET_OFF); ++ usleep_range(40, 50); ++ netif_err(pdata, link, pdata->netdev, "firmware mailbox reset performed\n"); ++ } ++} ++ + static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata, + unsigned int cmd, unsigned int sub_cmd) + { +@@ -1960,9 +1984,11 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata, + unsigned int wait; + + /* Log if a previous command did not complete */ +- if (XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS)) ++ if (XP_IOREAD_BITS(pdata, XP_DRIVER_INT_RO, STATUS)) { + netif_dbg(pdata, link, pdata->netdev, + "firmware mailbox not ready for command\n"); ++ xgbe_phy_rx_reset(pdata); ++ } + + /* Construct the command */ + XP_SET_BITS(s0, XP_DRIVER_SCRATCH_0, COMMAND, cmd); +@@ -1984,6 +2010,9 @@ static void xgbe_phy_perform_ratechange(struct xgbe_prv_data *pdata, + + netif_dbg(pdata, link, pdata->netdev, + "firmware mailbox command did not complete\n"); ++ ++ /* Reset on error */ ++ xgbe_phy_rx_reset(pdata); + } + + static void xgbe_phy_rrc(struct xgbe_prv_data *pdata) +@@ -2584,6 +2613,14 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart) + if (reg & MDIO_STAT1_LSTATUS) + return 1; + ++ if (pdata->phy.autoneg == AUTONEG_ENABLE && ++ phy_data->port_mode == 
XGBE_PORT_MODE_BACKPLANE) { ++ if (!test_bit(XGBE_LINK_INIT, &pdata->dev_state)) { ++ netif_carrier_off(pdata->netdev); ++ *an_restart = 1; ++ } ++ } ++ + /* No link, attempt a receiver reset cycle */ + if (phy_data->rrc_count++ > XGBE_RRC_FREQUENCY) { + phy_data->rrc_count = 0; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +index 033bfab24ef2f..c7c5c01a783a0 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c +@@ -8856,9 +8856,10 @@ void bnxt_tx_disable(struct bnxt *bp) + txr->dev_state = BNXT_DEV_STATE_CLOSING; + } + } ++ /* Drop carrier first to prevent TX timeout */ ++ netif_carrier_off(bp->dev); + /* Stop all TX queues */ + netif_tx_disable(bp->dev); +- netif_carrier_off(bp->dev); + } + + void bnxt_tx_enable(struct bnxt *bp) +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c +index 184b6d0513b2a..8b0e916afe6b1 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c +@@ -474,8 +474,8 @@ static int bnxt_dl_info_get(struct devlink *dl, struct devlink_info_req *req, + if (BNXT_PF(bp) && !bnxt_hwrm_get_nvm_cfg_ver(bp, &nvm_cfg_ver)) { + u32 ver = nvm_cfg_ver.vu32; + +- sprintf(buf, "%X.%X.%X", (ver >> 16) & 0xF, (ver >> 8) & 0xF, +- ver & 0xF); ++ sprintf(buf, "%d.%d.%d", (ver >> 16) & 0xf, (ver >> 8) & 0xf, ++ ver & 0xf); + rc = bnxt_dl_info_put(bp, req, BNXT_VERSION_STORED, + DEVLINK_INFO_VERSION_GENERIC_FW_PSID, + buf); +diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h +index 1b49f2fa9b185..34546f5312eee 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h ++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.h +@@ -46,6 +46,9 @@ + #define MAX_ULD_QSETS 16 + #define MAX_ULD_NPORTS 4 + ++/* ulp_mem_io + ulptx_idata + payload + padding */ ++#define 
MAX_IMM_ULPTX_WR_LEN (32 + 8 + 256 + 8) ++ + /* CPL message priority levels */ + enum { + CPL_PRIORITY_DATA = 0, /* data messages */ +diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c +index 196652a114c5f..3334c9e2152ab 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c +@@ -2842,17 +2842,22 @@ int t4_mgmt_tx(struct adapter *adap, struct sk_buff *skb) + * @skb: the packet + * + * Returns true if a packet can be sent as an offload WR with immediate +- * data. We currently use the same limit as for Ethernet packets. ++ * data. ++ * FW_OFLD_TX_DATA_WR limits the payload to 255 bytes due to 8-bit field. ++ * However, FW_ULPTX_WR commands have a 256 byte immediate only ++ * payload limit. + */ + static inline int is_ofld_imm(const struct sk_buff *skb) + { + struct work_request_hdr *req = (struct work_request_hdr *)skb->data; + unsigned long opcode = FW_WR_OP_G(ntohl(req->wr_hi)); + +- if (opcode == FW_CRYPTO_LOOKASIDE_WR) ++ if (unlikely(opcode == FW_ULPTX_WR)) ++ return skb->len <= MAX_IMM_ULPTX_WR_LEN; ++ else if (opcode == FW_CRYPTO_LOOKASIDE_WR) + return skb->len <= SGE_MAX_WR_LEN; + else +- return skb->len <= MAX_IMM_TX_PKT_LEN; ++ return skb->len <= MAX_IMM_OFLD_TX_DATA_WR_LEN; + } + + /** +diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h +index 47ba81e42f5d0..b1161bdeda4dc 100644 +--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h ++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.h +@@ -50,9 +50,6 @@ + #define MIN_RCV_WND (24 * 1024U) + #define LOOPBACK(x) (((x) & htonl(0xff000000)) == htonl(0x7f000000)) + +-/* ulp_mem_io + ulptx_idata + payload + padding */ +-#define MAX_IMM_ULPTX_WR_LEN (32 + 8 + 256 + 8) +- + /* for TX: a skb must have a headroom of at least TX_HEADER_LEN bytes */ + #define TX_HEADER_LEN \ + (sizeof(struct fw_ofld_tx_data_wr) + 
sizeof(struct sge_opaque_hdr)) +diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +index d880ab2a7d962..f91c67489e629 100644 +--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c ++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +@@ -399,10 +399,20 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv, + xdp.frame_sz = DPAA2_ETH_RX_BUF_RAW_SIZE; + + err = xdp_do_redirect(priv->net_dev, &xdp, xdp_prog); +- if (unlikely(err)) ++ if (unlikely(err)) { ++ addr = dma_map_page(priv->net_dev->dev.parent, ++ virt_to_page(vaddr), 0, ++ priv->rx_buf_size, DMA_BIDIRECTIONAL); ++ if (unlikely(dma_mapping_error(priv->net_dev->dev.parent, addr))) { ++ free_pages((unsigned long)vaddr, 0); ++ } else { ++ ch->buf_count++; ++ dpaa2_eth_xdp_release_buf(priv, ch, addr); ++ } + ch->stats.xdp_drop++; +- else ++ } else { + ch->stats.xdp_redirect++; ++ } + break; + } + +diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pf.c b/drivers/net/ethernet/freescale/enetc/enetc_pf.c +index 06514af0df106..796e3d6f23f09 100644 +--- a/drivers/net/ethernet/freescale/enetc/enetc_pf.c ++++ b/drivers/net/ethernet/freescale/enetc/enetc_pf.c +@@ -1164,14 +1164,15 @@ static void enetc_pf_remove(struct pci_dev *pdev) + struct enetc_ndev_priv *priv; + + priv = netdev_priv(si->ndev); +- enetc_phylink_destroy(priv); +- enetc_mdiobus_destroy(pf); + + if (pf->num_vfs) + enetc_sriov_configure(pdev, 0); + + unregister_netdev(si->ndev); + ++ enetc_phylink_destroy(priv); ++ enetc_mdiobus_destroy(pf); ++ + enetc_free_msix(priv); + + enetc_free_si_resources(priv); +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c +index ee16e0e4fa5fc..5e1f4e71af7bc 100644 +--- a/drivers/net/ethernet/ibm/ibmvnic.c ++++ b/drivers/net/ethernet/ibm/ibmvnic.c +@@ -249,8 +249,13 @@ static void free_long_term_buff(struct ibmvnic_adapter *adapter, + if (!ltb->buff) + return; + ++ /* VIOS automatically unmaps the long term buffer 
at remote ++ * end for the following resets: ++ * FAILOVER, MOBILITY, TIMEOUT. ++ */ + if (adapter->reset_reason != VNIC_RESET_FAILOVER && +- adapter->reset_reason != VNIC_RESET_MOBILITY) ++ adapter->reset_reason != VNIC_RESET_MOBILITY && ++ adapter->reset_reason != VNIC_RESET_TIMEOUT) + send_request_unmap(adapter, ltb->map_id); + dma_free_coherent(dev, ltb->size, ltb->buff, ltb->addr); + } +@@ -1329,10 +1334,8 @@ static int __ibmvnic_close(struct net_device *netdev) + + adapter->state = VNIC_CLOSING; + rc = set_link_state(adapter, IBMVNIC_LOGICAL_LNK_DN); +- if (rc) +- return rc; + adapter->state = VNIC_CLOSED; +- return 0; ++ return rc; + } + + static int ibmvnic_close(struct net_device *netdev) +@@ -1594,6 +1597,9 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev) + skb_copy_from_linear_data(skb, dst, skb->len); + } + ++ /* post changes to long_term_buff *dst before VIOS accessing it */ ++ dma_wmb(); ++ + tx_pool->consumer_index = + (tx_pool->consumer_index + 1) % tx_pool->num_buffers; + +@@ -2434,6 +2440,8 @@ restart_poll: + offset = be16_to_cpu(next->rx_comp.off_frame_data); + flags = next->rx_comp.flags; + skb = rx_buff->skb; ++ /* load long_term_buff before copying to skb */ ++ dma_rmb(); + skb_copy_to_linear_data(skb, rx_buff->data + offset, + length); + +diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +index 26ba1f3eb2d85..9e81f85ee2d8d 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c +@@ -4878,7 +4878,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags) + enum i40e_admin_queue_err adq_err; + struct i40e_vsi *vsi = np->vsi; + struct i40e_pf *pf = vsi->back; +- bool is_reset_needed; ++ u32 reset_needed = 0; + i40e_status status; + u32 i, j; + +@@ -4923,9 +4923,11 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags) + flags_complete: + changed_flags = orig_flags ^ 
new_flags; + +- is_reset_needed = !!(changed_flags & (I40E_FLAG_VEB_STATS_ENABLED | +- I40E_FLAG_LEGACY_RX | I40E_FLAG_SOURCE_PRUNING_DISABLED | +- I40E_FLAG_DISABLE_FW_LLDP)); ++ if (changed_flags & I40E_FLAG_DISABLE_FW_LLDP) ++ reset_needed = I40E_PF_RESET_AND_REBUILD_FLAG; ++ if (changed_flags & (I40E_FLAG_VEB_STATS_ENABLED | ++ I40E_FLAG_LEGACY_RX | I40E_FLAG_SOURCE_PRUNING_DISABLED)) ++ reset_needed = BIT(__I40E_PF_RESET_REQUESTED); + + /* Before we finalize any flag changes, we need to perform some + * checks to ensure that the changes are supported and safe. +@@ -5057,7 +5059,7 @@ flags_complete: + case I40E_AQ_RC_EEXIST: + dev_warn(&pf->pdev->dev, + "FW LLDP agent is already running\n"); +- is_reset_needed = false; ++ reset_needed = 0; + break; + case I40E_AQ_RC_EPERM: + dev_warn(&pf->pdev->dev, +@@ -5086,8 +5088,8 @@ flags_complete: + /* Issue reset to cause things to take effect, as additional bits + * are added we will need to create a mask of bits requiring reset + */ +- if (is_reset_needed) +- i40e_do_reset(pf, BIT(__I40E_PF_RESET_REQUESTED), true); ++ if (reset_needed) ++ i40e_do_reset(pf, reset_needed, true); + + return 0; + } +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c +index 1db482d310c2d..59971f62e6268 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c +@@ -2616,7 +2616,7 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf) + return; + if (!test_and_clear_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state)) + return; +- if (test_and_set_bit(__I40E_VF_DISABLE, pf->state)) { ++ if (test_bit(__I40E_VF_DISABLE, pf->state)) { + set_bit(__I40E_MACVLAN_SYNC_PENDING, pf->state); + return; + } +@@ -2634,7 +2634,6 @@ static void i40e_sync_filters_subtask(struct i40e_pf *pf) + } + } + } +- clear_bit(__I40E_VF_DISABLE, pf->state); + } + + /** +@@ -7667,6 +7666,8 @@ int i40e_add_del_cloud_filter(struct i40e_vsi *vsi, + if (filter->flags >= 
ARRAY_SIZE(flag_table)) + return I40E_ERR_CONFIG; + ++ memset(&cld_filter, 0, sizeof(cld_filter)); ++ + /* copy element needed to add cloud filter from filter */ + i40e_set_cld_element(filter, &cld_filter); + +@@ -7730,10 +7731,13 @@ int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi, + return -EOPNOTSUPP; + + /* adding filter using src_port/src_ip is not supported at this stage */ +- if (filter->src_port || filter->src_ipv4 || ++ if (filter->src_port || ++ (filter->src_ipv4 && filter->n_proto != ETH_P_IPV6) || + !ipv6_addr_any(&filter->ip.v6.src_ip6)) + return -EOPNOTSUPP; + ++ memset(&cld_filter, 0, sizeof(cld_filter)); ++ + /* copy element needed to add cloud filter from filter */ + i40e_set_cld_element(filter, &cld_filter.element); + +@@ -7757,7 +7761,7 @@ int i40e_add_del_cloud_filter_big_buf(struct i40e_vsi *vsi, + cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_MAC_VLAN_PORT); + } + +- } else if (filter->dst_ipv4 || ++ } else if ((filter->dst_ipv4 && filter->n_proto != ETH_P_IPV6) || + !ipv6_addr_any(&filter->ip.v6.dst_ip6)) { + cld_filter.element.flags = + cpu_to_le16(I40E_AQC_ADD_CLOUD_FILTER_IP_PORT); +@@ -8533,11 +8537,6 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags, bool lock_acquired) + dev_dbg(&pf->pdev->dev, "PFR requested\n"); + i40e_handle_reset_warning(pf, lock_acquired); + +- dev_info(&pf->pdev->dev, +- pf->flags & I40E_FLAG_DISABLE_FW_LLDP ? +- "FW LLDP is disabled\n" : +- "FW LLDP is enabled\n"); +- + } else if (reset_flags & I40E_PF_RESET_AND_REBUILD_FLAG) { + /* Request a PF Reset + * +@@ -8545,6 +8544,10 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags, bool lock_acquired) + */ + i40e_prep_for_reset(pf, lock_acquired); + i40e_reset_and_rebuild(pf, true, lock_acquired); ++ dev_info(&pf->pdev->dev, ++ pf->flags & I40E_FLAG_DISABLE_FW_LLDP ? 
++ "FW LLDP is disabled\n" : ++ "FW LLDP is enabled\n"); + + } else if (reset_flags & BIT_ULL(__I40E_REINIT_REQUESTED)) { + int v; +@@ -10001,7 +10004,6 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) + int old_recovery_mode_bit = test_bit(__I40E_RECOVERY_MODE, pf->state); + struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi]; + struct i40e_hw *hw = &pf->hw; +- u8 set_fc_aq_fail = 0; + i40e_status ret; + u32 val; + int v; +@@ -10127,13 +10129,6 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) + i40e_stat_str(&pf->hw, ret), + i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); + +- /* make sure our flow control settings are restored */ +- ret = i40e_set_fc(&pf->hw, &set_fc_aq_fail, true); +- if (ret) +- dev_dbg(&pf->pdev->dev, "setting flow control: ret = %s last_status = %s\n", +- i40e_stat_str(&pf->hw, ret), +- i40e_aq_str(&pf->hw, pf->hw.aq.asq_last_status)); +- + /* Rebuild the VSIs and VEBs that existed before reset. + * They are still in our local switch element arrays, so only + * need to rebuild the switch model in the HW. 
+@@ -11709,6 +11704,8 @@ i40e_status i40e_set_partition_bw_setting(struct i40e_pf *pf) + struct i40e_aqc_configure_partition_bw_data bw_data; + i40e_status status; + ++ memset(&bw_data, 0, sizeof(bw_data)); ++ + /* Set the valid bit for this PF */ + bw_data.pf_valid_bits = cpu_to_le16(BIT(pf->hw.pf_id)); + bw_data.max_bw[pf->hw.pf_id] = pf->max_bw & I40E_ALT_BW_VALUE_MASK; +@@ -14714,7 +14711,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + int err; + u32 val; + u32 i; +- u8 set_fc_aq_fail; + + err = pci_enable_device_mem(pdev); + if (err) +@@ -15048,24 +15044,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + } + INIT_LIST_HEAD(&pf->vsi[pf->lan_vsi]->ch_list); + +- /* Make sure flow control is set according to current settings */ +- err = i40e_set_fc(hw, &set_fc_aq_fail, true); +- if (set_fc_aq_fail & I40E_SET_FC_AQ_FAIL_GET) +- dev_dbg(&pf->pdev->dev, +- "Set fc with err %s aq_err %s on get_phy_cap\n", +- i40e_stat_str(hw, err), +- i40e_aq_str(hw, hw->aq.asq_last_status)); +- if (set_fc_aq_fail & I40E_SET_FC_AQ_FAIL_SET) +- dev_dbg(&pf->pdev->dev, +- "Set fc with err %s aq_err %s on set_phy_config\n", +- i40e_stat_str(hw, err), +- i40e_aq_str(hw, hw->aq.asq_last_status)); +- if (set_fc_aq_fail & I40E_SET_FC_AQ_FAIL_UPDATE) +- dev_dbg(&pf->pdev->dev, +- "Set fc with err %s aq_err %s on get_link_info\n", +- i40e_stat_str(hw, err), +- i40e_aq_str(hw, hw->aq.asq_last_status)); +- + /* if FDIR VSI was set up, start it now */ + for (i = 0; i < pf->num_alloc_vsi; i++) { + if (pf->vsi[i] && pf->vsi[i]->type == I40E_VSI_FDIR) { +diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c +index 3f5825fa67c99..38dec49ac64d2 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c +@@ -3102,13 +3102,16 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags, + + l4_proto = ip.v4->protocol; + } else if 
(*tx_flags & I40E_TX_FLAGS_IPV6) { ++ int ret; ++ + tunnel |= I40E_TX_CTX_EXT_IP_IPV6; + + exthdr = ip.hdr + sizeof(*ip.v6); + l4_proto = ip.v6->nexthdr; +- if (l4.hdr != exthdr) +- ipv6_skip_exthdr(skb, exthdr - skb->data, +- &l4_proto, &frag_off); ++ ret = ipv6_skip_exthdr(skb, exthdr - skb->data, ++ &l4_proto, &frag_off); ++ if (ret < 0) ++ return -1; + } + + /* define outer transport */ +diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h +index 54cf382fddaf9..5b3f2bb22eba7 100644 +--- a/drivers/net/ethernet/intel/ice/ice.h ++++ b/drivers/net/ethernet/intel/ice/ice.h +@@ -444,9 +444,7 @@ struct ice_pf { + struct ice_hw_port_stats stats_prev; + struct ice_hw hw; + u8 stat_prev_loaded:1; /* has previous stats been loaded */ +-#ifdef CONFIG_DCB + u16 dcbx_cap; +-#endif /* CONFIG_DCB */ + u32 tx_timeout_count; + unsigned long tx_timeout_last_recovery; + u32 tx_timeout_recovery_level; +diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c +index 87f91b750d59a..8c133a8be6add 100644 +--- a/drivers/net/ethernet/intel/ice/ice_dcb_nl.c ++++ b/drivers/net/ethernet/intel/ice/ice_dcb_nl.c +@@ -136,7 +136,7 @@ ice_dcbnl_getnumtcs(struct net_device *dev, int __always_unused tcid, u8 *num) + if (!test_bit(ICE_FLAG_DCB_CAPABLE, pf->flags)) + return -EINVAL; + +- *num = IEEE_8021QAZ_MAX_TCS; ++ *num = pf->hw.func_caps.common_cap.maxtc; + return 0; + } + +@@ -160,6 +160,10 @@ static u8 ice_dcbnl_setdcbx(struct net_device *netdev, u8 mode) + { + struct ice_pf *pf = ice_netdev_to_pf(netdev); + ++ /* if FW LLDP agent is running, DCBNL not allowed to change mode */ ++ if (test_bit(ICE_FLAG_FW_LLDP_AGENT, pf->flags)) ++ return ICE_DCB_NO_HW_CHG; ++ + /* No support for LLD_MANAGED modes or CEE+IEEE */ + if ((mode & DCB_CAP_DCBX_LLD_MANAGED) || + ((mode & DCB_CAP_DCBX_VER_IEEE) && (mode & DCB_CAP_DCBX_VER_CEE)) || +diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c 
b/drivers/net/ethernet/intel/ice/ice_ethtool.c +index 69c113a4de7e6..aebebd2102da0 100644 +--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c ++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c +@@ -8,6 +8,7 @@ + #include "ice_fltr.h" + #include "ice_lib.h" + #include "ice_dcb_lib.h" ++#include + + struct ice_stats { + char stat_string[ETH_GSTRING_LEN]; +@@ -1238,6 +1239,9 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags) + status = ice_init_pf_dcb(pf, true); + if (status) + dev_warn(dev, "Fail to init DCB\n"); ++ ++ pf->dcbx_cap &= ~DCB_CAP_DCBX_LLD_MANAGED; ++ pf->dcbx_cap |= DCB_CAP_DCBX_HOST; + } else { + enum ice_status status; + bool dcbx_agent_status; +@@ -1280,6 +1284,9 @@ static int ice_set_priv_flags(struct net_device *netdev, u32 flags) + if (status) + dev_dbg(dev, "Fail to enable MIB change events\n"); + ++ pf->dcbx_cap &= ~DCB_CAP_DCBX_HOST; ++ pf->dcbx_cap |= DCB_CAP_DCBX_LLD_MANAGED; ++ + ice_nway_reset(netdev); + } + } +@@ -3321,6 +3328,18 @@ ice_get_channels(struct net_device *dev, struct ethtool_channels *ch) + ch->max_other = ch->other_count; + } + ++/** ++ * ice_get_valid_rss_size - return valid number of RSS queues ++ * @hw: pointer to the HW structure ++ * @new_size: requested RSS queues ++ */ ++static int ice_get_valid_rss_size(struct ice_hw *hw, int new_size) ++{ ++ struct ice_hw_common_caps *caps = &hw->func_caps.common_cap; ++ ++ return min_t(int, new_size, BIT(caps->rss_table_entry_width)); ++} ++ + /** + * ice_vsi_set_dflt_rss_lut - set default RSS LUT with requested RSS size + * @vsi: VSI to reconfigure RSS LUT on +@@ -3348,14 +3367,10 @@ static int ice_vsi_set_dflt_rss_lut(struct ice_vsi *vsi, int req_rss_size) + return -ENOMEM; + + /* set RSS LUT parameters */ +- if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) { ++ if (!test_bit(ICE_FLAG_RSS_ENA, pf->flags)) + vsi->rss_size = 1; +- } else { +- struct ice_hw_common_caps *caps = &hw->func_caps.common_cap; +- +- vsi->rss_size = min_t(int, req_rss_size, +- 
BIT(caps->rss_table_entry_width)); +- } ++ else ++ vsi->rss_size = ice_get_valid_rss_size(hw, req_rss_size); + + /* create/set RSS LUT */ + ice_fill_rss_lut(lut, vsi->rss_table_size, vsi->rss_size); +@@ -3434,9 +3449,12 @@ static int ice_set_channels(struct net_device *dev, struct ethtool_channels *ch) + + ice_vsi_recfg_qs(vsi, new_rx, new_tx); + +- if (new_rx && !netif_is_rxfh_configured(dev)) ++ if (!netif_is_rxfh_configured(dev)) + return ice_vsi_set_dflt_rss_lut(vsi, new_rx); + ++ /* Update rss_size due to change in Rx queues */ ++ vsi->rss_size = ice_get_valid_rss_size(&pf->hw, new_rx); ++ + return 0; + } + +diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +index ec7f6c64132ee..b3161c5def465 100644 +--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c ++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c +@@ -1878,6 +1878,29 @@ static int ice_vc_get_ver_msg(struct ice_vf *vf, u8 *msg) + sizeof(struct virtchnl_version_info)); + } + ++/** ++ * ice_vc_get_max_frame_size - get max frame size allowed for VF ++ * @vf: VF used to determine max frame size ++ * ++ * Max frame size is determined based on the current port's max frame size and ++ * whether a port VLAN is configured on this VF. The VF is not aware whether ++ * it's in a port VLAN so the PF needs to account for this in max frame size ++ * checks and sending the max frame size to the VF. 
++ */ ++static u16 ice_vc_get_max_frame_size(struct ice_vf *vf) ++{ ++ struct ice_vsi *vsi = vf->pf->vsi[vf->lan_vsi_idx]; ++ struct ice_port_info *pi = vsi->port_info; ++ u16 max_frame_size; ++ ++ max_frame_size = pi->phy.link_info.max_frame_size; ++ ++ if (vf->port_vlan_info) ++ max_frame_size -= VLAN_HLEN; ++ ++ return max_frame_size; ++} ++ + /** + * ice_vc_get_vf_res_msg + * @vf: pointer to the VF info +@@ -1960,6 +1983,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg) + vfres->max_vectors = pf->num_msix_per_vf; + vfres->rss_key_size = ICE_VSIQF_HKEY_ARRAY_SIZE; + vfres->rss_lut_size = ICE_VSIQF_HLUT_ARRAY_SIZE; ++ vfres->max_mtu = ice_vc_get_max_frame_size(vf); + + vfres->vsi_res[0].vsi_id = vf->lan_vsi_num; + vfres->vsi_res[0].vsi_type = VIRTCHNL_VSI_SRIOV; +@@ -2952,6 +2976,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) + + /* copy Rx queue info from VF into VSI */ + if (qpi->rxq.ring_len > 0) { ++ u16 max_frame_size = ice_vc_get_max_frame_size(vf); ++ + num_rxq++; + vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr; + vsi->rx_rings[i]->count = qpi->rxq.ring_len; +@@ -2964,7 +2990,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) + } + vsi->rx_buf_len = qpi->rxq.databuffer_size; + vsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len; +- if (qpi->rxq.max_pkt_size >= (16 * 1024) || ++ if (qpi->rxq.max_pkt_size > max_frame_size || + qpi->rxq.max_pkt_size < 64) { + v_ret = VIRTCHNL_STATUS_ERR_PARAM; + goto error_param; +@@ -2972,6 +2998,11 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg) + } + + vsi->max_frame = qpi->rxq.max_pkt_size; ++ /* add space for the port VLAN since the VF driver is not ++ * expected to account for it in the MTU calculation ++ */ ++ if (vf->port_vlan_info) ++ vsi->max_frame += VLAN_HLEN; + } + + /* VF can request to configure less than allocated queues or default +diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c +index ceb4f27898002..c6b735b305156 
100644 +--- a/drivers/net/ethernet/marvell/mvneta.c ++++ b/drivers/net/ethernet/marvell/mvneta.c +@@ -3409,7 +3409,9 @@ static int mvneta_txq_sw_init(struct mvneta_port *pp, + return -ENOMEM; + + /* Setup XPS mapping */ +- if (txq_number > 1) ++ if (pp->neta_armada3700) ++ cpu = 0; ++ else if (txq_number > 1) + cpu = txq->id % num_present_cpus(); + else + cpu = pp->rxq_def % num_present_cpus(); +@@ -4187,6 +4189,11 @@ static int mvneta_cpu_online(unsigned int cpu, struct hlist_node *node) + node_online); + struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu); + ++ /* Armada 3700's per-cpu interrupt for mvneta is broken, all interrupts ++ * are routed to CPU 0, so we don't need all the cpu-hotplug support ++ */ ++ if (pp->neta_armada3700) ++ return 0; + + spin_lock(&pp->lock); + /* +diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +index 77adad4adb1bc..809f50ab0432e 100644 +--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c ++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c +@@ -332,7 +332,7 @@ static ssize_t rvu_dbg_qsize_write(struct file *filp, + u16 pcifunc; + int ret, lf; + +- cmd_buf = memdup_user(buffer, count); ++ cmd_buf = memdup_user(buffer, count + 1); + if (IS_ERR(cmd_buf)) + return -ENOMEM; + +diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +index 1187ef1375e29..cb341372d5a35 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c ++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c +@@ -4986,6 +4986,7 @@ static int mlx4_do_mirror_rule(struct mlx4_dev *dev, struct res_fs_rule *fs_rule + + if (!fs_rule->mirr_mbox) { + mlx4_err(dev, "rule mirroring mailbox is null\n"); ++ mlx4_free_cmd_mailbox(dev, mailbox); + return -EINVAL; + } + memcpy(mailbox->buf, fs_rule->mirr_mbox, fs_rule->mirr_mbox_size); +diff --git 
a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c +index a28f95df2901d..bf5cf022e279d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c +@@ -137,6 +137,11 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change, + { + struct mlx5_core_dev *dev = devlink_priv(devlink); + ++ if (mlx5_lag_is_active(dev)) { ++ NL_SET_ERR_MSG_MOD(extack, "reload is unsupported in Lag mode\n"); ++ return -EOPNOTSUPP; ++ } ++ + switch (action) { + case DEVLINK_RELOAD_ACTION_DRIVER_REINIT: + mlx5_unload_one(dev, false); +@@ -282,6 +287,10 @@ static int mlx5_devlink_enable_roce_validate(struct devlink *devlink, u32 id, + NL_SET_ERR_MSG_MOD(extack, "Device doesn't support RoCE"); + return -EOPNOTSUPP; + } ++ if (mlx5_core_is_mp_slave(dev) || mlx5_lag_is_active(dev)) { ++ NL_SET_ERR_MSG_MOD(extack, "Multi port slave/Lag device can't configure RoCE"); ++ return -EOPNOTSUPP; ++ } + + return 0; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +index 6bc6b48a56dc7..24e2c0d955b99 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_ct.c +@@ -12,6 +12,7 @@ + #include + #include + #include ++#include + #include + + #include "lib/fs_chains.h" +@@ -51,11 +52,11 @@ struct mlx5_tc_ct_priv { + struct mlx5_flow_table *ct_nat; + struct mlx5_flow_table *post_ct; + struct mutex control_lock; /* guards parallel adds/dels */ +- struct mutex shared_counter_lock; + struct mapping_ctx *zone_mapping; + struct mapping_ctx *labels_mapping; + enum mlx5_flow_namespace_type ns_type; + struct mlx5_fs_chains *chains; ++ spinlock_t ht_lock; /* protects ft entries */ + }; + + struct mlx5_ct_flow { +@@ -124,6 +125,10 @@ struct mlx5_ct_counter { + bool is_shared; + }; + ++enum { ++ MLX5_CT_ENTRY_FLAG_VALID, ++}; ++ + struct mlx5_ct_entry { + 
struct rhash_head node; + struct rhash_head tuple_node; +@@ -134,6 +139,12 @@ struct mlx5_ct_entry { + struct mlx5_ct_tuple tuple; + struct mlx5_ct_tuple tuple_nat; + struct mlx5_ct_zone_rule zone_rules[2]; ++ ++ struct mlx5_tc_ct_priv *ct_priv; ++ struct work_struct work; ++ ++ refcount_t refcnt; ++ unsigned long flags; + }; + + static const struct rhashtable_params cts_ht_params = { +@@ -740,6 +751,87 @@ err_attr: + return err; + } + ++static bool ++mlx5_tc_ct_entry_valid(struct mlx5_ct_entry *entry) ++{ ++ return test_bit(MLX5_CT_ENTRY_FLAG_VALID, &entry->flags); ++} ++ ++static struct mlx5_ct_entry * ++mlx5_tc_ct_entry_get(struct mlx5_tc_ct_priv *ct_priv, struct mlx5_ct_tuple *tuple) ++{ ++ struct mlx5_ct_entry *entry; ++ ++ entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, tuple, ++ tuples_ht_params); ++ if (entry && mlx5_tc_ct_entry_valid(entry) && ++ refcount_inc_not_zero(&entry->refcnt)) { ++ return entry; ++ } else if (!entry) { ++ entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_nat_ht, ++ tuple, tuples_nat_ht_params); ++ if (entry && mlx5_tc_ct_entry_valid(entry) && ++ refcount_inc_not_zero(&entry->refcnt)) ++ return entry; ++ } ++ ++ return entry ? 
ERR_PTR(-EINVAL) : NULL; ++} ++ ++static void mlx5_tc_ct_entry_remove_from_tuples(struct mlx5_ct_entry *entry) ++{ ++ struct mlx5_tc_ct_priv *ct_priv = entry->ct_priv; ++ ++ rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht, ++ &entry->tuple_nat_node, ++ tuples_nat_ht_params); ++ rhashtable_remove_fast(&ct_priv->ct_tuples_ht, &entry->tuple_node, ++ tuples_ht_params); ++} ++ ++static void mlx5_tc_ct_entry_del(struct mlx5_ct_entry *entry) ++{ ++ struct mlx5_tc_ct_priv *ct_priv = entry->ct_priv; ++ ++ mlx5_tc_ct_entry_del_rules(ct_priv, entry); ++ ++ spin_lock_bh(&ct_priv->ht_lock); ++ mlx5_tc_ct_entry_remove_from_tuples(entry); ++ spin_unlock_bh(&ct_priv->ht_lock); ++ ++ mlx5_tc_ct_counter_put(ct_priv, entry); ++ kfree(entry); ++} ++ ++static void ++mlx5_tc_ct_entry_put(struct mlx5_ct_entry *entry) ++{ ++ if (!refcount_dec_and_test(&entry->refcnt)) ++ return; ++ ++ mlx5_tc_ct_entry_del(entry); ++} ++ ++static void mlx5_tc_ct_entry_del_work(struct work_struct *work) ++{ ++ struct mlx5_ct_entry *entry = container_of(work, struct mlx5_ct_entry, work); ++ ++ mlx5_tc_ct_entry_del(entry); ++} ++ ++static void ++__mlx5_tc_ct_entry_put(struct mlx5_ct_entry *entry) ++{ ++ struct mlx5e_priv *priv; ++ ++ if (!refcount_dec_and_test(&entry->refcnt)) ++ return; ++ ++ priv = netdev_priv(entry->ct_priv->netdev); ++ INIT_WORK(&entry->work, mlx5_tc_ct_entry_del_work); ++ queue_work(priv->wq, &entry->work); ++} ++ + static struct mlx5_ct_counter * + mlx5_tc_ct_counter_create(struct mlx5_tc_ct_priv *ct_priv) + { +@@ -792,16 +884,26 @@ mlx5_tc_ct_shared_counter_get(struct mlx5_tc_ct_priv *ct_priv, + } + + /* Use the same counter as the reverse direction */ +- mutex_lock(&ct_priv->shared_counter_lock); +- rev_entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, &rev_tuple, +- tuples_ht_params); +- if (rev_entry) { +- if (refcount_inc_not_zero(&rev_entry->counter->refcount)) { +- mutex_unlock(&ct_priv->shared_counter_lock); +- return rev_entry->counter; +- } ++ 
spin_lock_bh(&ct_priv->ht_lock); ++ rev_entry = mlx5_tc_ct_entry_get(ct_priv, &rev_tuple); ++ ++ if (IS_ERR(rev_entry)) { ++ spin_unlock_bh(&ct_priv->ht_lock); ++ goto create_counter; ++ } ++ ++ if (rev_entry && refcount_inc_not_zero(&rev_entry->counter->refcount)) { ++ ct_dbg("Using shared counter entry=0x%p rev=0x%p\n", entry, rev_entry); ++ shared_counter = rev_entry->counter; ++ spin_unlock_bh(&ct_priv->ht_lock); ++ ++ mlx5_tc_ct_entry_put(rev_entry); ++ return shared_counter; + } +- mutex_unlock(&ct_priv->shared_counter_lock); ++ ++ spin_unlock_bh(&ct_priv->ht_lock); ++ ++create_counter: + + shared_counter = mlx5_tc_ct_counter_create(ct_priv); + if (IS_ERR(shared_counter)) { +@@ -866,10 +968,14 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft, + if (!meta_action) + return -EOPNOTSUPP; + +- entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, +- cts_ht_params); +- if (entry) +- return 0; ++ spin_lock_bh(&ct_priv->ht_lock); ++ entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params); ++ if (entry && refcount_inc_not_zero(&entry->refcnt)) { ++ spin_unlock_bh(&ct_priv->ht_lock); ++ mlx5_tc_ct_entry_put(entry); ++ return -EEXIST; ++ } ++ spin_unlock_bh(&ct_priv->ht_lock); + + entry = kzalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) +@@ -878,6 +984,8 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft, + entry->tuple.zone = ft->zone; + entry->cookie = flow->cookie; + entry->restore_cookie = meta_action->ct_metadata.cookie; ++ refcount_set(&entry->refcnt, 2); ++ entry->ct_priv = ct_priv; + + err = mlx5_tc_ct_rule_to_tuple(&entry->tuple, flow_rule); + if (err) +@@ -888,35 +996,40 @@ mlx5_tc_ct_block_flow_offload_add(struct mlx5_ct_ft *ft, + if (err) + goto err_set; + +- err = rhashtable_insert_fast(&ct_priv->ct_tuples_ht, +- &entry->tuple_node, +- tuples_ht_params); ++ spin_lock_bh(&ct_priv->ht_lock); ++ ++ err = rhashtable_lookup_insert_fast(&ft->ct_entries_ht, &entry->node, ++ cts_ht_params); ++ if (err) ++ goto 
err_entries; ++ ++ err = rhashtable_lookup_insert_fast(&ct_priv->ct_tuples_ht, ++ &entry->tuple_node, ++ tuples_ht_params); + if (err) + goto err_tuple; + + if (memcmp(&entry->tuple, &entry->tuple_nat, sizeof(entry->tuple))) { +- err = rhashtable_insert_fast(&ct_priv->ct_tuples_nat_ht, +- &entry->tuple_nat_node, +- tuples_nat_ht_params); ++ err = rhashtable_lookup_insert_fast(&ct_priv->ct_tuples_nat_ht, ++ &entry->tuple_nat_node, ++ tuples_nat_ht_params); + if (err) + goto err_tuple_nat; + } ++ spin_unlock_bh(&ct_priv->ht_lock); + + err = mlx5_tc_ct_entry_add_rules(ct_priv, flow_rule, entry, + ft->zone_restore_id); + if (err) + goto err_rules; + +- err = rhashtable_insert_fast(&ft->ct_entries_ht, &entry->node, +- cts_ht_params); +- if (err) +- goto err_insert; ++ set_bit(MLX5_CT_ENTRY_FLAG_VALID, &entry->flags); ++ mlx5_tc_ct_entry_put(entry); /* this function reference */ + + return 0; + +-err_insert: +- mlx5_tc_ct_entry_del_rules(ct_priv, entry); + err_rules: ++ spin_lock_bh(&ct_priv->ht_lock); + if (mlx5_tc_ct_entry_has_nat(entry)) + rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht, + &entry->tuple_nat_node, tuples_nat_ht_params); +@@ -925,47 +1038,43 @@ err_tuple_nat: + &entry->tuple_node, + tuples_ht_params); + err_tuple: ++ rhashtable_remove_fast(&ft->ct_entries_ht, ++ &entry->node, ++ cts_ht_params); ++err_entries: ++ spin_unlock_bh(&ct_priv->ht_lock); + err_set: + kfree(entry); +- netdev_warn(ct_priv->netdev, +- "Failed to offload ct entry, err: %d\n", err); ++ if (err != -EEXIST) ++ netdev_warn(ct_priv->netdev, "Failed to offload ct entry, err: %d\n", err); + return err; + } + +-static void +-mlx5_tc_ct_del_ft_entry(struct mlx5_tc_ct_priv *ct_priv, +- struct mlx5_ct_entry *entry) +-{ +- mlx5_tc_ct_entry_del_rules(ct_priv, entry); +- mutex_lock(&ct_priv->shared_counter_lock); +- if (mlx5_tc_ct_entry_has_nat(entry)) +- rhashtable_remove_fast(&ct_priv->ct_tuples_nat_ht, +- &entry->tuple_nat_node, +- tuples_nat_ht_params); +- 
rhashtable_remove_fast(&ct_priv->ct_tuples_ht, &entry->tuple_node, +- tuples_ht_params); +- mutex_unlock(&ct_priv->shared_counter_lock); +- mlx5_tc_ct_counter_put(ct_priv, entry); +- +-} +- + static int + mlx5_tc_ct_block_flow_offload_del(struct mlx5_ct_ft *ft, + struct flow_cls_offload *flow) + { ++ struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv; + unsigned long cookie = flow->cookie; + struct mlx5_ct_entry *entry; + +- entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, +- cts_ht_params); +- if (!entry) ++ spin_lock_bh(&ct_priv->ht_lock); ++ entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params); ++ if (!entry) { ++ spin_unlock_bh(&ct_priv->ht_lock); + return -ENOENT; ++ } + +- mlx5_tc_ct_del_ft_entry(ft->ct_priv, entry); +- WARN_ON(rhashtable_remove_fast(&ft->ct_entries_ht, +- &entry->node, +- cts_ht_params)); +- kfree(entry); ++ if (!mlx5_tc_ct_entry_valid(entry)) { ++ spin_unlock_bh(&ct_priv->ht_lock); ++ return -EINVAL; ++ } ++ ++ rhashtable_remove_fast(&ft->ct_entries_ht, &entry->node, cts_ht_params); ++ mlx5_tc_ct_entry_remove_from_tuples(entry); ++ spin_unlock_bh(&ct_priv->ht_lock); ++ ++ mlx5_tc_ct_entry_put(entry); + + return 0; + } +@@ -974,19 +1083,30 @@ static int + mlx5_tc_ct_block_flow_offload_stats(struct mlx5_ct_ft *ft, + struct flow_cls_offload *f) + { ++ struct mlx5_tc_ct_priv *ct_priv = ft->ct_priv; + unsigned long cookie = f->cookie; + struct mlx5_ct_entry *entry; + u64 lastuse, packets, bytes; + +- entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, +- cts_ht_params); +- if (!entry) ++ spin_lock_bh(&ct_priv->ht_lock); ++ entry = rhashtable_lookup_fast(&ft->ct_entries_ht, &cookie, cts_ht_params); ++ if (!entry) { ++ spin_unlock_bh(&ct_priv->ht_lock); + return -ENOENT; ++ } ++ ++ if (!mlx5_tc_ct_entry_valid(entry) || !refcount_inc_not_zero(&entry->refcnt)) { ++ spin_unlock_bh(&ct_priv->ht_lock); ++ return -EINVAL; ++ } ++ ++ spin_unlock_bh(&ct_priv->ht_lock); + + mlx5_fc_query_cached(entry->counter->counter, 
&bytes, &packets, &lastuse); + flow_stats_update(&f->stats, bytes, packets, 0, lastuse, + FLOW_ACTION_HW_STATS_DELAYED); + ++ mlx5_tc_ct_entry_put(entry); + return 0; + } + +@@ -1478,11 +1598,9 @@ err_mapping: + static void + mlx5_tc_ct_flush_ft_entry(void *ptr, void *arg) + { +- struct mlx5_tc_ct_priv *ct_priv = arg; + struct mlx5_ct_entry *entry = ptr; + +- mlx5_tc_ct_del_ft_entry(ct_priv, entry); +- kfree(entry); ++ mlx5_tc_ct_entry_put(entry); + } + + static void +@@ -1960,6 +2078,7 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains, + goto err_mapping_labels; + } + ++ spin_lock_init(&ct_priv->ht_lock); + ct_priv->ns_type = ns_type; + ct_priv->chains = chains; + ct_priv->netdev = priv->netdev; +@@ -1994,7 +2113,6 @@ mlx5_tc_ct_init(struct mlx5e_priv *priv, struct mlx5_fs_chains *chains, + + idr_init(&ct_priv->fte_ids); + mutex_init(&ct_priv->control_lock); +- mutex_init(&ct_priv->shared_counter_lock); + rhashtable_init(&ct_priv->zone_ht, &zone_params); + rhashtable_init(&ct_priv->ct_tuples_ht, &tuples_ht_params); + rhashtable_init(&ct_priv->ct_tuples_nat_ht, &tuples_nat_ht_params); +@@ -2037,7 +2155,6 @@ mlx5_tc_ct_clean(struct mlx5_tc_ct_priv *ct_priv) + rhashtable_destroy(&ct_priv->ct_tuples_nat_ht); + rhashtable_destroy(&ct_priv->zone_ht); + mutex_destroy(&ct_priv->control_lock); +- mutex_destroy(&ct_priv->shared_counter_lock); + idr_destroy(&ct_priv->fte_ids); + kfree(ct_priv); + } +@@ -2059,14 +2176,22 @@ mlx5e_tc_ct_restore_flow(struct mlx5_tc_ct_priv *ct_priv, + if (!mlx5_tc_ct_skb_to_tuple(skb, &tuple, zone)) + return false; + +- entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_ht, &tuple, +- tuples_ht_params); +- if (!entry) +- entry = rhashtable_lookup_fast(&ct_priv->ct_tuples_nat_ht, +- &tuple, tuples_nat_ht_params); +- if (!entry) ++ spin_lock(&ct_priv->ht_lock); ++ ++ entry = mlx5_tc_ct_entry_get(ct_priv, &tuple); ++ if (!entry) { ++ spin_unlock(&ct_priv->ht_lock); ++ return false; ++ } ++ ++ if (IS_ERR(entry)) { ++ 
spin_unlock(&ct_priv->ht_lock); + return false; ++ } ++ spin_unlock(&ct_priv->ht_lock); + + tcf_ct_flow_table_restore_skb(skb, entry->restore_cookie); ++ __mlx5_tc_ct_entry_put(entry); ++ + return true; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +index d487e5e371625..8d991c3b7a503 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h +@@ -83,7 +83,7 @@ static inline void mlx5e_xdp_tx_disable(struct mlx5e_priv *priv) + + clear_bit(MLX5E_STATE_XDP_TX_ENABLED, &priv->state); + /* Let other device's napi(s) and XSK wakeups see our new state. */ +- synchronize_rcu(); ++ synchronize_net(); + } + + static inline bool mlx5e_xdp_tx_is_enabled(struct mlx5e_priv *priv) +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c +index be3465ba38ca1..f95905fc4979e 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c +@@ -106,7 +106,7 @@ err_free_cparam: + void mlx5e_close_xsk(struct mlx5e_channel *c) + { + clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state); +- synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */ ++ synchronize_net(); /* Sync with the XSK wakeup and with NAPI. 
*/ + + mlx5e_close_rq(&c->xskrq); + mlx5e_close_cq(&c->xskrq.cq); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h +index 1fae7fab8297e..ff81b69a59a9b 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/en_accel.h +@@ -173,7 +173,7 @@ static inline bool mlx5e_accel_tx_eseg(struct mlx5e_priv *priv, + #endif + + #if IS_ENABLED(CONFIG_GENEVE) +- if (skb->encapsulation) ++ if (skb->encapsulation && skb->ip_summed == CHECKSUM_PARTIAL) + mlx5e_tx_tunnel_accel(skb, eseg, ihs); + #endif + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c +index 6a1d82503ef8f..d06532d0baa43 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls_rx.c +@@ -57,6 +57,20 @@ struct mlx5e_ktls_offload_context_rx { + struct mlx5e_ktls_rx_resync_ctx resync; + }; + ++static bool mlx5e_ktls_priv_rx_put(struct mlx5e_ktls_offload_context_rx *priv_rx) ++{ ++ if (!refcount_dec_and_test(&priv_rx->resync.refcnt)) ++ return false; ++ ++ kfree(priv_rx); ++ return true; ++} ++ ++static void mlx5e_ktls_priv_rx_get(struct mlx5e_ktls_offload_context_rx *priv_rx) ++{ ++ refcount_inc(&priv_rx->resync.refcnt); ++} ++ + static int mlx5e_ktls_create_tir(struct mlx5_core_dev *mdev, u32 *tirn, u32 rqtn) + { + int err, inlen; +@@ -326,7 +340,7 @@ static void resync_handle_work(struct work_struct *work) + priv_rx = container_of(resync, struct mlx5e_ktls_offload_context_rx, resync); + + if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags))) { +- refcount_dec(&resync->refcnt); ++ mlx5e_ktls_priv_rx_put(priv_rx); + return; + } + +@@ -334,7 +348,7 @@ static void resync_handle_work(struct work_struct *work) + sq = &c->async_icosq; + + if (resync_post_get_progress_params(sq, priv_rx)) +- 
refcount_dec(&resync->refcnt); ++ mlx5e_ktls_priv_rx_put(priv_rx); + } + + static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync, +@@ -377,7 +391,11 @@ unlock: + return err; + } + +-/* Function is called with elevated refcount, it decreases it. */ ++/* Function can be called with the refcount being either elevated or not. ++ * It decreases the refcount and may free the kTLS priv context. ++ * Refcount is not elevated only if tls_dev_del has been called, but GET_PSV was ++ * already in flight. ++ */ + void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi, + struct mlx5e_icosq *sq) + { +@@ -410,7 +428,7 @@ void mlx5e_ktls_handle_get_psv_completion(struct mlx5e_icosq_wqe_info *wi, + tls_offload_rx_resync_async_request_end(priv_rx->sk, cpu_to_be32(hw_seq)); + priv_rx->stats->tls_resync_req_end++; + out: +- refcount_dec(&resync->refcnt); ++ mlx5e_ktls_priv_rx_put(priv_rx); + dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE); + kfree(buf); + } +@@ -431,9 +449,9 @@ static bool resync_queue_get_psv(struct sock *sk) + return false; + + resync = &priv_rx->resync; +- refcount_inc(&resync->refcnt); ++ mlx5e_ktls_priv_rx_get(priv_rx); + if (unlikely(!queue_work(resync->priv->tls->rx_wq, &resync->work))) +- refcount_dec(&resync->refcnt); ++ mlx5e_ktls_priv_rx_put(priv_rx); + + return true; + } +@@ -625,31 +643,6 @@ err_create_key: + return err; + } + +-/* Elevated refcount on the resync object means there are +- * outstanding operations (uncompleted GET_PSV WQEs) that +- * will read the resync / priv_rx objects once completed. +- * Wait for them to avoid use-after-free. 
+- */ +-static void wait_for_resync(struct net_device *netdev, +- struct mlx5e_ktls_rx_resync_ctx *resync) +-{ +-#define MLX5E_KTLS_RX_RESYNC_TIMEOUT 20000 /* msecs */ +- unsigned long exp_time = jiffies + msecs_to_jiffies(MLX5E_KTLS_RX_RESYNC_TIMEOUT); +- unsigned int refcnt; +- +- do { +- refcnt = refcount_read(&resync->refcnt); +- if (refcnt == 1) +- return; +- +- msleep(20); +- } while (time_before(jiffies, exp_time)); +- +- netdev_warn(netdev, +- "Failed waiting for kTLS RX resync refcnt to be released (%u).\n", +- refcnt); +-} +- + void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx) + { + struct mlx5e_ktls_offload_context_rx *priv_rx; +@@ -663,7 +656,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx) + priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx); + set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags); + mlx5e_set_ktls_rx_priv_ctx(tls_ctx, NULL); +- synchronize_rcu(); /* Sync with NAPI */ ++ synchronize_net(); /* Sync with NAPI */ + if (!cancel_work_sync(&priv_rx->rule.work)) + /* completion is needed, as the priv_rx in the add flow + * is maintained on the wqe info (wi), not on the socket. +@@ -671,8 +664,7 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx) + wait_for_completion(&priv_rx->add_ctx); + resync = &priv_rx->resync; + if (cancel_work_sync(&resync->work)) +- refcount_dec(&resync->refcnt); +- wait_for_resync(netdev, resync); ++ mlx5e_ktls_priv_rx_put(priv_rx); + + priv_rx->stats->tls_del++; + if (priv_rx->rule.rule) +@@ -680,5 +672,9 @@ void mlx5e_ktls_del_rx(struct net_device *netdev, struct tls_context *tls_ctx) + + mlx5_core_destroy_tir(mdev, priv_rx->tirn); + mlx5_ktls_destroy_key(mdev, priv_rx->key_id); +- kfree(priv_rx); ++ /* priv_rx should normally be freed here, but if there is an outstanding ++ * GET_PSV, deallocation will be delayed until the CQE for GET_PSV is ++ * processed. 
++ */ ++ mlx5e_ktls_priv_rx_put(priv_rx); + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +index e596f050c4316..b8622440243b4 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c +@@ -522,7 +522,7 @@ static int mlx5e_get_coalesce(struct net_device *netdev, + #define MLX5E_MAX_COAL_FRAMES MLX5_MAX_CQ_COUNT + + static void +-mlx5e_set_priv_channels_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal) ++mlx5e_set_priv_channels_tx_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal) + { + struct mlx5_core_dev *mdev = priv->mdev; + int tc; +@@ -537,6 +537,17 @@ mlx5e_set_priv_channels_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesc + coal->tx_coalesce_usecs, + coal->tx_max_coalesced_frames); + } ++ } ++} ++ ++static void ++mlx5e_set_priv_channels_rx_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal) ++{ ++ struct mlx5_core_dev *mdev = priv->mdev; ++ int i; ++ ++ for (i = 0; i < priv->channels.num; ++i) { ++ struct mlx5e_channel *c = priv->channels.c[i]; + + mlx5_core_modify_cq_moderation(mdev, &c->rq.cq.mcq, + coal->rx_coalesce_usecs, +@@ -583,21 +594,9 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, + tx_moder->pkts = coal->tx_max_coalesced_frames; + new_channels.params.tx_dim_enabled = !!coal->use_adaptive_tx_coalesce; + +- if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { +- priv->channels.params = new_channels.params; +- goto out; +- } +- /* we are opened */ +- + reset_rx = !!coal->use_adaptive_rx_coalesce != priv->channels.params.rx_dim_enabled; + reset_tx = !!coal->use_adaptive_tx_coalesce != priv->channels.params.tx_dim_enabled; + +- if (!reset_rx && !reset_tx) { +- mlx5e_set_priv_channels_coalesce(priv, coal); +- priv->channels.params = new_channels.params; +- goto out; +- } +- + if (reset_rx) { + u8 mode = MLX5E_GET_PFLAG(&new_channels.params, + 
MLX5E_PFLAG_RX_CQE_BASED_MODER); +@@ -611,6 +610,20 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv, + mlx5e_reset_tx_moderation(&new_channels.params, mode); + } + ++ if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { ++ priv->channels.params = new_channels.params; ++ goto out; ++ } ++ ++ if (!reset_rx && !reset_tx) { ++ if (!coal->use_adaptive_rx_coalesce) ++ mlx5e_set_priv_channels_rx_coalesce(priv, coal); ++ if (!coal->use_adaptive_tx_coalesce) ++ mlx5e_set_priv_channels_tx_coalesce(priv, coal); ++ priv->channels.params = new_channels.params; ++ goto out; ++ } ++ + err = mlx5e_safe_switch_channels(priv, &new_channels, NULL, NULL); + + out: +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index 42848db8f8dd6..6394f9d8c6851 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -919,7 +919,7 @@ void mlx5e_activate_rq(struct mlx5e_rq *rq) + void mlx5e_deactivate_rq(struct mlx5e_rq *rq) + { + clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state); +- synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */ ++ synchronize_net(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */ + } + + void mlx5e_close_rq(struct mlx5e_rq *rq) +@@ -1380,7 +1380,7 @@ static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq) + struct mlx5_wq_cyc *wq = &sq->wq; + + clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); +- synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */ ++ synchronize_net(); /* Sync with NAPI to prevent netif_tx_wake_queue. */ + + mlx5e_tx_disable_queue(sq->txq); + +@@ -1456,7 +1456,7 @@ void mlx5e_activate_icosq(struct mlx5e_icosq *icosq) + void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq) + { + clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state); +- synchronize_rcu(); /* Sync with NAPI. */ ++ synchronize_net(); /* Sync with NAPI. 
*/ + } + + void mlx5e_close_icosq(struct mlx5e_icosq *sq) +@@ -1535,7 +1535,7 @@ void mlx5e_close_xdpsq(struct mlx5e_xdpsq *sq) + struct mlx5e_channel *c = sq->channel; + + clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state); +- synchronize_rcu(); /* Sync with NAPI. */ ++ synchronize_net(); /* Sync with NAPI. */ + + mlx5e_destroy_sq(c->mdev, sq->sqn); + mlx5e_free_xdpsq_descs(sq); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c +index 54523bed16cd3..0c32c485eb588 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c +@@ -190,6 +190,16 @@ static bool reset_fw_if_needed(struct mlx5_core_dev *dev) + return true; + } + ++static void enter_error_state(struct mlx5_core_dev *dev, bool force) ++{ ++ if (mlx5_health_check_fatal_sensors(dev) || force) { /* protected state setting */ ++ dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR; ++ mlx5_cmd_flush(dev); ++ } ++ ++ mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_SYS_ERROR, (void *)1); ++} ++ + void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force) + { + bool err_detected = false; +@@ -208,12 +218,7 @@ void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force) + goto unlock; + } + +- if (mlx5_health_check_fatal_sensors(dev) || force) { /* protected state setting */ +- dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR; +- mlx5_cmd_flush(dev); +- } +- +- mlx5_notifier_call_chain(dev->priv.events, MLX5_DEV_EVENT_SYS_ERROR, (void *)1); ++ enter_error_state(dev, force); + unlock: + mutex_unlock(&dev->intf_state_mutex); + } +@@ -613,7 +618,7 @@ static void mlx5_fw_fatal_reporter_err_work(struct work_struct *work) + priv = container_of(health, struct mlx5_priv, health); + dev = container_of(priv, struct mlx5_core_dev, priv); + +- mlx5_enter_error_state(dev, false); ++ enter_error_state(dev, false); + if (IS_ERR_OR_NULL(health->fw_fatal_reporter)) { + if (mlx5_health_try_recover(dev)) 
+ mlx5_core_err(dev, "health recovery failed\n"); +@@ -707,8 +712,9 @@ static void poll_health(struct timer_list *t) + mlx5_core_err(dev, "Fatal error %u detected\n", fatal_error); + dev->priv.health.fatal_error = fatal_error; + print_health_info(dev); ++ dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR; + mlx5_trigger_health_work(dev); +- goto out; ++ return; + } + + count = ioread32be(health->health_counter); +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c +index e455a2f31f070..8246b6285d5a4 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c +@@ -1380,7 +1380,8 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id) + dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err); + + pci_save_state(pdev); +- devlink_reload_enable(devlink); ++ if (!mlx5_core_is_mp_slave(dev)) ++ devlink_reload_enable(devlink); + return 0; + + err_load_one: +diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c +index 75f774347f6d1..cfcc3ac613189 100644 +--- a/drivers/net/ethernet/realtek/r8169_main.c ++++ b/drivers/net/ethernet/realtek/r8169_main.c +@@ -2351,14 +2351,14 @@ static void r8168dp_hw_jumbo_disable(struct rtl8169_private *tp) + + static void r8168e_hw_jumbo_enable(struct rtl8169_private *tp) + { +- RTL_W8(tp, MaxTxPacketSize, 0x3f); ++ RTL_W8(tp, MaxTxPacketSize, 0x24); + RTL_W8(tp, Config3, RTL_R8(tp, Config3) | Jumbo_En0); + RTL_W8(tp, Config4, RTL_R8(tp, Config4) | 0x01); + } + + static void r8168e_hw_jumbo_disable(struct rtl8169_private *tp) + { +- RTL_W8(tp, MaxTxPacketSize, 0x0c); ++ RTL_W8(tp, MaxTxPacketSize, 0x3f); + RTL_W8(tp, Config3, RTL_R8(tp, Config3) & ~Jumbo_En0); + RTL_W8(tp, Config4, RTL_R8(tp, Config4) & ~0x01); + } +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c +index 
9ddadae8e4c51..752658ec7beeb 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c +@@ -301,7 +301,7 @@ static int meson8b_init_prg_eth(struct meson8b_dwmac *dwmac) + return -EINVAL; + }; + +- if (rx_dly_config & PRG_ETH0_ADJ_ENABLE) { ++ if (delay_config & PRG_ETH0_ADJ_ENABLE) { + if (!dwmac->timing_adj_clk) { + dev_err(dwmac->dev, + "The timing-adjustment clock is mandatory for the RX delay re-timing\n"); +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c +index 6088071cb1923..40dc14d1415f3 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_tc.c +@@ -322,6 +322,32 @@ static int tc_setup_cbs(struct stmmac_priv *priv, + if (!priv->dma_cap.av) + return -EOPNOTSUPP; + ++ /* Port Transmit Rate and Speed Divider */ ++ switch (priv->speed) { ++ case SPEED_10000: ++ ptr = 32; ++ speed_div = 10000000; ++ break; ++ case SPEED_5000: ++ ptr = 32; ++ speed_div = 5000000; ++ break; ++ case SPEED_2500: ++ ptr = 8; ++ speed_div = 2500000; ++ break; ++ case SPEED_1000: ++ ptr = 8; ++ speed_div = 1000000; ++ break; ++ case SPEED_100: ++ ptr = 4; ++ speed_div = 100000; ++ break; ++ default: ++ return -EOPNOTSUPP; ++ } ++ + mode_to_use = priv->plat->tx_queues_cfg[queue].mode_to_use; + if (mode_to_use == MTL_QUEUE_DCB && qopt->enable) { + ret = stmmac_dma_qmode(priv, priv->ioaddr, queue, MTL_QUEUE_AVB); +@@ -338,10 +364,6 @@ static int tc_setup_cbs(struct stmmac_priv *priv, + priv->plat->tx_queues_cfg[queue].mode_to_use = MTL_QUEUE_DCB; + } + +- /* Port Transmit Rate and Speed Divider */ +- ptr = (priv->speed == SPEED_100) ? 4 : 8; +- speed_div = (priv->speed == SPEED_100) ? 
100000 : 1000000; +- + /* Final adjustments for HW */ + value = div_s64(qopt->idleslope * 1024ll * ptr, speed_div); + priv->plat->tx_queues_cfg[queue].idle_slope = value & GENMASK(31, 0); +diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +index 9aafd3ecdaa4d..eea0bb7c23ede 100644 +--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c ++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +@@ -1805,6 +1805,18 @@ static int axienet_probe(struct platform_device *pdev) + lp->options = XAE_OPTION_DEFAULTS; + lp->rx_bd_num = RX_BD_NUM_DEFAULT; + lp->tx_bd_num = TX_BD_NUM_DEFAULT; ++ ++ lp->clk = devm_clk_get_optional(&pdev->dev, NULL); ++ if (IS_ERR(lp->clk)) { ++ ret = PTR_ERR(lp->clk); ++ goto free_netdev; ++ } ++ ret = clk_prepare_enable(lp->clk); ++ if (ret) { ++ dev_err(&pdev->dev, "Unable to enable clock: %d\n", ret); ++ goto free_netdev; ++ } ++ + /* Map device registers */ + ethres = platform_get_resource(pdev, IORESOURCE_MEM, 0); + lp->regs = devm_ioremap_resource(&pdev->dev, ethres); +@@ -1980,20 +1992,6 @@ static int axienet_probe(struct platform_device *pdev) + + lp->phy_node = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0); + if (lp->phy_node) { +- lp->clk = devm_clk_get(&pdev->dev, NULL); +- if (IS_ERR(lp->clk)) { +- dev_warn(&pdev->dev, "Failed to get clock: %ld\n", +- PTR_ERR(lp->clk)); +- lp->clk = NULL; +- } else { +- ret = clk_prepare_enable(lp->clk); +- if (ret) { +- dev_err(&pdev->dev, "Unable to enable clock: %d\n", +- ret); +- goto free_netdev; +- } +- } +- + ret = axienet_mdio_setup(lp); + if (ret) + dev_warn(&pdev->dev, +diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c +index dc668ed280b9f..1c46bc4d27058 100644 +--- a/drivers/net/gtp.c ++++ b/drivers/net/gtp.c +@@ -539,7 +539,6 @@ static int gtp_build_skb_ip4(struct sk_buff *skb, struct net_device *dev, + if (!skb_is_gso(skb) && (iph->frag_off & htons(IP_DF)) && + mtu < ntohs(iph->tot_len)) { + netdev_dbg(dev, "packet 
too big, fragmentation needed\n"); +- memset(IPCB(skb), 0, sizeof(*IPCB(skb))); + icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, + htonl(mtu)); + goto err_rt; +diff --git a/drivers/net/phy/mscc/mscc.h b/drivers/net/phy/mscc/mscc.h +index 9481bce94c2ed..c2023f93c0b24 100644 +--- a/drivers/net/phy/mscc/mscc.h ++++ b/drivers/net/phy/mscc/mscc.h +@@ -102,6 +102,7 @@ enum rgmii_clock_delay { + #define PHY_MCB_S6G_READ BIT(30) + + #define PHY_S6G_PLL5G_CFG0 0x06 ++#define PHY_S6G_PLL5G_CFG2 0x08 + #define PHY_S6G_LCPLL_CFG 0x11 + #define PHY_S6G_PLL_CFG 0x2b + #define PHY_S6G_COMMON_CFG 0x2c +@@ -121,6 +122,9 @@ enum rgmii_clock_delay { + #define PHY_S6G_PLL_FSM_CTRL_DATA_POS 8 + #define PHY_S6G_PLL_FSM_ENA_POS 7 + ++#define PHY_S6G_CFG2_FSM_DIS 1 ++#define PHY_S6G_CFG2_FSM_CLK_BP 23 ++ + #define MSCC_EXT_PAGE_ACCESS 31 + #define MSCC_PHY_PAGE_STANDARD 0x0000 /* Standard registers */ + #define MSCC_PHY_PAGE_EXTENDED 0x0001 /* Extended registers */ +@@ -412,6 +416,10 @@ struct vsc8531_edge_rate_table { + }; + #endif /* CONFIG_OF_MDIO */ + ++enum csr_target { ++ MACRO_CTRL = 0x07, ++}; ++ + #if IS_ENABLED(CONFIG_MACSEC) + int vsc8584_macsec_init(struct phy_device *phydev); + void vsc8584_handle_macsec_interrupt(struct phy_device *phydev); +diff --git a/drivers/net/phy/mscc/mscc_main.c b/drivers/net/phy/mscc/mscc_main.c +index 6bc7406a1ce73..41a410124437d 100644 +--- a/drivers/net/phy/mscc/mscc_main.c ++++ b/drivers/net/phy/mscc/mscc_main.c +@@ -710,6 +710,113 @@ static int phy_base_read(struct phy_device *phydev, u32 regnum) + return __phy_package_read(phydev, regnum); + } + ++static u32 vsc85xx_csr_read(struct phy_device *phydev, ++ enum csr_target target, u32 reg) ++{ ++ unsigned long deadline; ++ u32 val, val_l, val_h; ++ ++ phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL); ++ ++ /* CSR registers are grouped under different Target IDs. 
++ * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and ++ * MSCC_EXT_PAGE_CSR_CNTL_19 registers. ++ * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20 ++ * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19. ++ */ ++ ++ /* Setup the Target ID */ ++ phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20, ++ MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2)); ++ ++ if ((target >> 2 == 0x1) || (target >> 2 == 0x3)) ++ /* non-MACsec access */ ++ target &= 0x3; ++ else ++ target = 0; ++ ++ /* Trigger CSR Action - Read into the CSR's */ ++ phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19, ++ MSCC_PHY_CSR_CNTL_19_CMD | MSCC_PHY_CSR_CNTL_19_READ | ++ MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) | ++ MSCC_PHY_CSR_CNTL_19_TARGET(target)); ++ ++ /* Wait for register access*/ ++ deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS); ++ do { ++ usleep_range(500, 1000); ++ val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19); ++ } while (time_before(jiffies, deadline) && ++ !(val & MSCC_PHY_CSR_CNTL_19_CMD)); ++ ++ if (!(val & MSCC_PHY_CSR_CNTL_19_CMD)) ++ return 0xffffffff; ++ ++ /* Read the Least Significant Word (LSW) (17) */ ++ val_l = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_17); ++ ++ /* Read the Most Significant Word (MSW) (18) */ ++ val_h = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_18); ++ ++ phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, ++ MSCC_PHY_PAGE_STANDARD); ++ ++ return (val_h << 16) | val_l; ++} ++ ++static int vsc85xx_csr_write(struct phy_device *phydev, ++ enum csr_target target, u32 reg, u32 val) ++{ ++ unsigned long deadline; ++ ++ phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL); ++ ++ /* CSR registers are grouped under different Target IDs. ++ * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and ++ * MSCC_EXT_PAGE_CSR_CNTL_19 registers. ++ * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20 ++ * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19. 
++ */ ++ ++ /* Setup the Target ID */ ++ phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20, ++ MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2)); ++ ++ /* Write the Least Significant Word (LSW) (17) */ ++ phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_17, (u16)val); ++ ++ /* Write the Most Significant Word (MSW) (18) */ ++ phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_18, (u16)(val >> 16)); ++ ++ if ((target >> 2 == 0x1) || (target >> 2 == 0x3)) ++ /* non-MACsec access */ ++ target &= 0x3; ++ else ++ target = 0; ++ ++ /* Trigger CSR Action - Write into the CSR's */ ++ phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19, ++ MSCC_PHY_CSR_CNTL_19_CMD | ++ MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) | ++ MSCC_PHY_CSR_CNTL_19_TARGET(target)); ++ ++ /* Wait for register access */ ++ deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS); ++ do { ++ usleep_range(500, 1000); ++ val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19); ++ } while (time_before(jiffies, deadline) && ++ !(val & MSCC_PHY_CSR_CNTL_19_CMD)); ++ ++ if (!(val & MSCC_PHY_CSR_CNTL_19_CMD)) ++ return -ETIMEDOUT; ++ ++ phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, ++ MSCC_PHY_PAGE_STANDARD); ++ ++ return 0; ++} ++ + /* bus->mdio_lock should be locked when using this function */ + static void vsc8584_csr_write(struct phy_device *phydev, u16 addr, u32 val) + { +@@ -1131,6 +1238,92 @@ out: + return ret; + } + ++/* Access LCPLL Cfg_2 */ ++static void vsc8584_pll5g_cfg2_wr(struct phy_device *phydev, ++ bool disable_fsm) ++{ ++ u32 rd_dat; ++ ++ rd_dat = vsc85xx_csr_read(phydev, MACRO_CTRL, PHY_S6G_PLL5G_CFG2); ++ rd_dat &= ~BIT(PHY_S6G_CFG2_FSM_DIS); ++ rd_dat |= (disable_fsm << PHY_S6G_CFG2_FSM_DIS); ++ vsc85xx_csr_write(phydev, MACRO_CTRL, PHY_S6G_PLL5G_CFG2, rd_dat); ++} ++ ++/* trigger a read to the spcified MCB */ ++static int vsc8584_mcb_rd_trig(struct phy_device *phydev, ++ u32 mcb_reg_addr, u8 mcb_slave_num) ++{ ++ u32 rd_dat = 0; ++ ++ /* read MCB */ ++ vsc85xx_csr_write(phydev, MACRO_CTRL, mcb_reg_addr, ++ 
(0x40000000 | (1L << mcb_slave_num))); ++ ++ return read_poll_timeout(vsc85xx_csr_read, rd_dat, ++ !(rd_dat & 0x40000000), ++ 4000, 200000, 0, ++ phydev, MACRO_CTRL, mcb_reg_addr); ++} ++ ++/* trigger a write to the spcified MCB */ ++static int vsc8584_mcb_wr_trig(struct phy_device *phydev, ++ u32 mcb_reg_addr, ++ u8 mcb_slave_num) ++{ ++ u32 rd_dat = 0; ++ ++ /* write back MCB */ ++ vsc85xx_csr_write(phydev, MACRO_CTRL, mcb_reg_addr, ++ (0x80000000 | (1L << mcb_slave_num))); ++ ++ return read_poll_timeout(vsc85xx_csr_read, rd_dat, ++ !(rd_dat & 0x80000000), ++ 4000, 200000, 0, ++ phydev, MACRO_CTRL, mcb_reg_addr); ++} ++ ++/* Sequence to Reset LCPLL for the VIPER and ELISE PHY */ ++static int vsc8584_pll5g_reset(struct phy_device *phydev) ++{ ++ bool dis_fsm; ++ int ret = 0; ++ ++ ret = vsc8584_mcb_rd_trig(phydev, 0x11, 0); ++ if (ret < 0) ++ goto done; ++ dis_fsm = 1; ++ ++ /* Reset LCPLL */ ++ vsc8584_pll5g_cfg2_wr(phydev, dis_fsm); ++ ++ /* write back LCPLL MCB */ ++ ret = vsc8584_mcb_wr_trig(phydev, 0x11, 0); ++ if (ret < 0) ++ goto done; ++ ++ /* 10 mSec sleep while LCPLL is hold in reset */ ++ usleep_range(10000, 20000); ++ ++ /* read LCPLL MCB into CSRs */ ++ ret = vsc8584_mcb_rd_trig(phydev, 0x11, 0); ++ if (ret < 0) ++ goto done; ++ dis_fsm = 0; ++ ++ /* Release the Reset of LCPLL */ ++ vsc8584_pll5g_cfg2_wr(phydev, dis_fsm); ++ ++ /* write back LCPLL MCB */ ++ ret = vsc8584_mcb_wr_trig(phydev, 0x11, 0); ++ if (ret < 0) ++ goto done; ++ ++ usleep_range(110000, 200000); ++done: ++ return ret; ++} ++ + /* bus->mdio_lock should be locked when using this function */ + static int vsc8584_config_pre_init(struct phy_device *phydev) + { +@@ -1579,8 +1772,16 @@ static int vsc8514_config_pre_init(struct phy_device *phydev) + {0x16b2, 0x00007000}, + {0x16b4, 0x00000814}, + }; ++ struct device *dev = &phydev->mdio.dev; + unsigned int i; + u16 reg; ++ int ret; ++ ++ ret = vsc8584_pll5g_reset(phydev); ++ if (ret < 0) { ++ dev_err(dev, "failed LCPLL reset, ret: %d\n", 
ret); ++ return ret; ++ } + + phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_STANDARD); + +@@ -1615,101 +1816,6 @@ static int vsc8514_config_pre_init(struct phy_device *phydev) + return 0; + } + +-static u32 vsc85xx_csr_ctrl_phy_read(struct phy_device *phydev, +- u32 target, u32 reg) +-{ +- unsigned long deadline; +- u32 val, val_l, val_h; +- +- phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL); +- +- /* CSR registers are grouped under different Target IDs. +- * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and +- * MSCC_EXT_PAGE_CSR_CNTL_19 registers. +- * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20 +- * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19. +- */ +- +- /* Setup the Target ID */ +- phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20, +- MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2)); +- +- /* Trigger CSR Action - Read into the CSR's */ +- phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19, +- MSCC_PHY_CSR_CNTL_19_CMD | MSCC_PHY_CSR_CNTL_19_READ | +- MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) | +- MSCC_PHY_CSR_CNTL_19_TARGET(target & 0x3)); +- +- /* Wait for register access*/ +- deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS); +- do { +- usleep_range(500, 1000); +- val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19); +- } while (time_before(jiffies, deadline) && +- !(val & MSCC_PHY_CSR_CNTL_19_CMD)); +- +- if (!(val & MSCC_PHY_CSR_CNTL_19_CMD)) +- return 0xffffffff; +- +- /* Read the Least Significant Word (LSW) (17) */ +- val_l = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_17); +- +- /* Read the Most Significant Word (MSW) (18) */ +- val_h = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_18); +- +- phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, +- MSCC_PHY_PAGE_STANDARD); +- +- return (val_h << 16) | val_l; +-} +- +-static int vsc85xx_csr_ctrl_phy_write(struct phy_device *phydev, +- u32 target, u32 reg, u32 val) +-{ +- unsigned long deadline; +- +- 
phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, MSCC_PHY_PAGE_CSR_CNTL); +- +- /* CSR registers are grouped under different Target IDs. +- * 6-bit Target_ID is split between MSCC_EXT_PAGE_CSR_CNTL_20 and +- * MSCC_EXT_PAGE_CSR_CNTL_19 registers. +- * Target_ID[5:2] maps to bits[3:0] of MSCC_EXT_PAGE_CSR_CNTL_20 +- * and Target_ID[1:0] maps to bits[13:12] of MSCC_EXT_PAGE_CSR_CNTL_19. +- */ +- +- /* Setup the Target ID */ +- phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_20, +- MSCC_PHY_CSR_CNTL_20_TARGET(target >> 2)); +- +- /* Write the Least Significant Word (LSW) (17) */ +- phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_17, (u16)val); +- +- /* Write the Most Significant Word (MSW) (18) */ +- phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_18, (u16)(val >> 16)); +- +- /* Trigger CSR Action - Write into the CSR's */ +- phy_base_write(phydev, MSCC_EXT_PAGE_CSR_CNTL_19, +- MSCC_PHY_CSR_CNTL_19_CMD | +- MSCC_PHY_CSR_CNTL_19_REG_ADDR(reg) | +- MSCC_PHY_CSR_CNTL_19_TARGET(target & 0x3)); +- +- /* Wait for register access */ +- deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS); +- do { +- usleep_range(500, 1000); +- val = phy_base_read(phydev, MSCC_EXT_PAGE_CSR_CNTL_19); +- } while (time_before(jiffies, deadline) && +- !(val & MSCC_PHY_CSR_CNTL_19_CMD)); +- +- if (!(val & MSCC_PHY_CSR_CNTL_19_CMD)) +- return -ETIMEDOUT; +- +- phy_base_write(phydev, MSCC_EXT_PAGE_ACCESS, +- MSCC_PHY_PAGE_STANDARD); +- +- return 0; +-} +- + static int __phy_write_mcb_s6g(struct phy_device *phydev, u32 reg, u8 mcb, + u32 op) + { +@@ -1717,15 +1823,15 @@ static int __phy_write_mcb_s6g(struct phy_device *phydev, u32 reg, u8 mcb, + u32 val; + int ret; + +- ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, reg, +- op | (1 << mcb)); ++ ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, reg, ++ op | (1 << mcb)); + if (ret) + return -EINVAL; + + deadline = jiffies + msecs_to_jiffies(PROC_CMD_NCOMPLETED_TIMEOUT_MS); + do { + usleep_range(500, 1000); +- val = 
vsc85xx_csr_ctrl_phy_read(phydev, PHY_MCB_TARGET, reg); ++ val = vsc85xx_csr_read(phydev, PHY_MCB_TARGET, reg); + + if (val == 0xffffffff) + return -EIO; +@@ -1806,41 +1912,41 @@ static int vsc8514_config_init(struct phy_device *phydev) + /* lcpll mcb */ + phy_update_mcb_s6g(phydev, PHY_S6G_LCPLL_CFG, 0); + /* pll5gcfg0 */ +- ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, +- PHY_S6G_PLL5G_CFG0, 0x7036f145); ++ ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, ++ PHY_S6G_PLL5G_CFG0, 0x7036f145); + if (ret) + goto err; + + phy_commit_mcb_s6g(phydev, PHY_S6G_LCPLL_CFG, 0); + /* pllcfg */ +- ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, +- PHY_S6G_PLL_CFG, +- (3 << PHY_S6G_PLL_ENA_OFFS_POS) | +- (120 << PHY_S6G_PLL_FSM_CTRL_DATA_POS) +- | (0 << PHY_S6G_PLL_FSM_ENA_POS)); ++ ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, ++ PHY_S6G_PLL_CFG, ++ (3 << PHY_S6G_PLL_ENA_OFFS_POS) | ++ (120 << PHY_S6G_PLL_FSM_CTRL_DATA_POS) ++ | (0 << PHY_S6G_PLL_FSM_ENA_POS)); + if (ret) + goto err; + + /* commoncfg */ +- ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, +- PHY_S6G_COMMON_CFG, +- (0 << PHY_S6G_SYS_RST_POS) | +- (0 << PHY_S6G_ENA_LANE_POS) | +- (0 << PHY_S6G_ENA_LOOP_POS) | +- (0 << PHY_S6G_QRATE_POS) | +- (3 << PHY_S6G_IF_MODE_POS)); ++ ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, ++ PHY_S6G_COMMON_CFG, ++ (0 << PHY_S6G_SYS_RST_POS) | ++ (0 << PHY_S6G_ENA_LANE_POS) | ++ (0 << PHY_S6G_ENA_LOOP_POS) | ++ (0 << PHY_S6G_QRATE_POS) | ++ (3 << PHY_S6G_IF_MODE_POS)); + if (ret) + goto err; + + /* misccfg */ +- ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, +- PHY_S6G_MISC_CFG, 1); ++ ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, ++ PHY_S6G_MISC_CFG, 1); + if (ret) + goto err; + + /* gpcfg */ +- ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, +- PHY_S6G_GPC_CFG, 768); ++ ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, ++ PHY_S6G_GPC_CFG, 768); + if (ret) + goto err; + +@@ -1851,8 +1957,8 @@ static int vsc8514_config_init(struct 
phy_device *phydev) + usleep_range(500, 1000); + phy_update_mcb_s6g(phydev, PHY_MCB_S6G_CFG, + 0); /* read 6G MCB into CSRs */ +- reg = vsc85xx_csr_ctrl_phy_read(phydev, PHY_MCB_TARGET, +- PHY_S6G_PLL_STATUS); ++ reg = vsc85xx_csr_read(phydev, PHY_MCB_TARGET, ++ PHY_S6G_PLL_STATUS); + if (reg == 0xffffffff) { + phy_unlock_mdio_bus(phydev); + return -EIO; +@@ -1866,8 +1972,8 @@ static int vsc8514_config_init(struct phy_device *phydev) + } + + /* misccfg */ +- ret = vsc85xx_csr_ctrl_phy_write(phydev, PHY_MCB_TARGET, +- PHY_S6G_MISC_CFG, 0); ++ ret = vsc85xx_csr_write(phydev, PHY_MCB_TARGET, ++ PHY_S6G_MISC_CFG, 0); + if (ret) + goto err; + +@@ -1878,8 +1984,8 @@ static int vsc8514_config_init(struct phy_device *phydev) + usleep_range(500, 1000); + phy_update_mcb_s6g(phydev, PHY_MCB_S6G_CFG, + 0); /* read 6G MCB into CSRs */ +- reg = vsc85xx_csr_ctrl_phy_read(phydev, PHY_MCB_TARGET, +- PHY_S6G_IB_STATUS0); ++ reg = vsc85xx_csr_read(phydev, PHY_MCB_TARGET, ++ PHY_S6G_IB_STATUS0); + if (reg == 0xffffffff) { + phy_unlock_mdio_bus(phydev); + return -EIO; +diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c +index 5dab6be6fc383..dd1f711140c3d 100644 +--- a/drivers/net/phy/phy_device.c ++++ b/drivers/net/phy/phy_device.c +@@ -300,50 +300,22 @@ static int mdio_bus_phy_resume(struct device *dev) + + phydev->suspended_by_mdio_bus = 0; + +- ret = phy_resume(phydev); ++ ret = phy_init_hw(phydev); + if (ret < 0) + return ret; + +-no_resume: +- if (phydev->attached_dev && phydev->adjust_link) +- phy_start_machine(phydev); +- +- return 0; +-} +- +-static int mdio_bus_phy_restore(struct device *dev) +-{ +- struct phy_device *phydev = to_phy_device(dev); +- struct net_device *netdev = phydev->attached_dev; +- int ret; +- +- if (!netdev) +- return 0; +- +- ret = phy_init_hw(phydev); ++ ret = phy_resume(phydev); + if (ret < 0) + return ret; +- ++no_resume: + if (phydev->attached_dev && phydev->adjust_link) + phy_start_machine(phydev); + + return 0; + } + +-static 
const struct dev_pm_ops mdio_bus_phy_pm_ops = { +- .suspend = mdio_bus_phy_suspend, +- .resume = mdio_bus_phy_resume, +- .freeze = mdio_bus_phy_suspend, +- .thaw = mdio_bus_phy_resume, +- .restore = mdio_bus_phy_restore, +-}; +- +-#define MDIO_BUS_PHY_PM_OPS (&mdio_bus_phy_pm_ops) +- +-#else +- +-#define MDIO_BUS_PHY_PM_OPS NULL +- ++static SIMPLE_DEV_PM_OPS(mdio_bus_phy_pm_ops, mdio_bus_phy_suspend, ++ mdio_bus_phy_resume); + #endif /* CONFIG_PM */ + + /** +@@ -554,7 +526,7 @@ static const struct device_type mdio_bus_phy_type = { + .name = "PHY", + .groups = phy_dev_groups, + .release = phy_device_release, +- .pm = MDIO_BUS_PHY_PM_OPS, ++ .pm = pm_ptr(&mdio_bus_phy_pm_ops), + }; + + static int phy_request_driver_module(struct phy_device *dev, u32 phy_id) +@@ -1143,10 +1115,19 @@ int phy_init_hw(struct phy_device *phydev) + if (ret < 0) + return ret; + +- if (phydev->drv->config_init) ++ if (phydev->drv->config_init) { + ret = phydev->drv->config_init(phydev); ++ if (ret < 0) ++ return ret; ++ } + +- return ret; ++ if (phydev->drv->config_intr) { ++ ret = phydev->drv->config_intr(phydev); ++ if (ret < 0) ++ return ret; ++ } ++ ++ return 0; + } + EXPORT_SYMBOL(phy_init_hw); + +diff --git a/drivers/net/ppp/ppp_async.c b/drivers/net/ppp/ppp_async.c +index 29a0917a81e60..f14a9d190de91 100644 +--- a/drivers/net/ppp/ppp_async.c ++++ b/drivers/net/ppp/ppp_async.c +@@ -259,7 +259,8 @@ static int ppp_asynctty_hangup(struct tty_struct *tty) + */ + static ssize_t + ppp_asynctty_read(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t count) ++ unsigned char *buf, size_t count, ++ void **cookie, unsigned long offset) + { + return -EAGAIN; + } +diff --git a/drivers/net/ppp/ppp_synctty.c b/drivers/net/ppp/ppp_synctty.c +index 0f338752c38b9..f774b7e52da44 100644 +--- a/drivers/net/ppp/ppp_synctty.c ++++ b/drivers/net/ppp/ppp_synctty.c +@@ -257,7 +257,8 @@ static int ppp_sync_hangup(struct tty_struct *tty) + */ + static ssize_t + ppp_sync_read(struct 
tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t count) ++ unsigned char *buf, size_t count, ++ void **cookie, unsigned long offset) + { + return -EAGAIN; + } +diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c +index 977f77e2c2ce6..50cb8f045a1e5 100644 +--- a/drivers/net/vxlan.c ++++ b/drivers/net/vxlan.c +@@ -4721,7 +4721,6 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head) + struct vxlan_net *vn = net_generic(net, vxlan_net_id); + struct vxlan_dev *vxlan, *next; + struct net_device *dev, *aux; +- unsigned int h; + + for_each_netdev_safe(net, dev, aux) + if (dev->rtnl_link_ops == &vxlan_link_ops) +@@ -4735,14 +4734,13 @@ static void vxlan_destroy_tunnels(struct net *net, struct list_head *head) + unregister_netdevice_queue(vxlan->dev, head); + } + +- for (h = 0; h < PORT_HASH_SIZE; ++h) +- WARN_ON_ONCE(!hlist_empty(&vn->sock_list[h])); + } + + static void __net_exit vxlan_exit_batch_net(struct list_head *net_list) + { + struct net *net; + LIST_HEAD(list); ++ unsigned int h; + + rtnl_lock(); + list_for_each_entry(net, net_list, exit_list) +@@ -4752,6 +4750,13 @@ static void __net_exit vxlan_exit_batch_net(struct list_head *net_list) + + unregister_netdevice_many(&list); + rtnl_unlock(); ++ ++ list_for_each_entry(net, net_list, exit_list) { ++ struct vxlan_net *vn = net_generic(net, vxlan_net_id); ++ ++ for (h = 0; h < PORT_HASH_SIZE; ++h) ++ WARN_ON_ONCE(!hlist_empty(&vn->sock_list[h])); ++ } + } + + static struct pernet_operations vxlan_net_ops = { +diff --git a/drivers/net/wireguard/device.c b/drivers/net/wireguard/device.c +index c9f65e96ccb04..e01ab0742738a 100644 +--- a/drivers/net/wireguard/device.c ++++ b/drivers/net/wireguard/device.c +@@ -138,7 +138,7 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev) + else if (skb->protocol == htons(ETH_P_IPV6)) + net_dbg_ratelimited("%s: No peer has allowed IPs matching %pI6\n", + dev->name, &ipv6_hdr(skb)->daddr); +- goto err; ++ goto 
err_icmp; + } + + family = READ_ONCE(peer->endpoint.addr.sa_family); +@@ -201,12 +201,13 @@ static netdev_tx_t wg_xmit(struct sk_buff *skb, struct net_device *dev) + + err_peer: + wg_peer_put(peer); +-err: +- ++dev->stats.tx_errors; ++err_icmp: + if (skb->protocol == htons(ETH_P_IP)) + icmp_ndo_send(skb, ICMP_DEST_UNREACH, ICMP_HOST_UNREACH, 0); + else if (skb->protocol == htons(ETH_P_IPV6)) + icmpv6_ndo_send(skb, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, 0); ++err: ++ ++dev->stats.tx_errors; + kfree_skb(skb); + return ret; + } +@@ -234,8 +235,8 @@ static void wg_destruct(struct net_device *dev) + destroy_workqueue(wg->handshake_receive_wq); + destroy_workqueue(wg->handshake_send_wq); + destroy_workqueue(wg->packet_crypt_wq); +- wg_packet_queue_free(&wg->decrypt_queue, true); +- wg_packet_queue_free(&wg->encrypt_queue, true); ++ wg_packet_queue_free(&wg->decrypt_queue); ++ wg_packet_queue_free(&wg->encrypt_queue); + rcu_barrier(); /* Wait for all the peers to be actually freed. */ + wg_ratelimiter_uninit(); + memzero_explicit(&wg->static_identity, sizeof(wg->static_identity)); +@@ -337,12 +338,12 @@ static int wg_newlink(struct net *src_net, struct net_device *dev, + goto err_destroy_handshake_send; + + ret = wg_packet_queue_init(&wg->encrypt_queue, wg_packet_encrypt_worker, +- true, MAX_QUEUED_PACKETS); ++ MAX_QUEUED_PACKETS); + if (ret < 0) + goto err_destroy_packet_crypt; + + ret = wg_packet_queue_init(&wg->decrypt_queue, wg_packet_decrypt_worker, +- true, MAX_QUEUED_PACKETS); ++ MAX_QUEUED_PACKETS); + if (ret < 0) + goto err_free_encrypt_queue; + +@@ -367,9 +368,9 @@ static int wg_newlink(struct net *src_net, struct net_device *dev, + err_uninit_ratelimiter: + wg_ratelimiter_uninit(); + err_free_decrypt_queue: +- wg_packet_queue_free(&wg->decrypt_queue, true); ++ wg_packet_queue_free(&wg->decrypt_queue); + err_free_encrypt_queue: +- wg_packet_queue_free(&wg->encrypt_queue, true); ++ wg_packet_queue_free(&wg->encrypt_queue); + err_destroy_packet_crypt: + 
destroy_workqueue(wg->packet_crypt_wq); + err_destroy_handshake_send: +diff --git a/drivers/net/wireguard/device.h b/drivers/net/wireguard/device.h +index 4d0144e169478..854bc3d97150e 100644 +--- a/drivers/net/wireguard/device.h ++++ b/drivers/net/wireguard/device.h +@@ -27,13 +27,14 @@ struct multicore_worker { + + struct crypt_queue { + struct ptr_ring ring; +- union { +- struct { +- struct multicore_worker __percpu *worker; +- int last_cpu; +- }; +- struct work_struct work; +- }; ++ struct multicore_worker __percpu *worker; ++ int last_cpu; ++}; ++ ++struct prev_queue { ++ struct sk_buff *head, *tail, *peeked; ++ struct { struct sk_buff *next, *prev; } empty; // Match first 2 members of struct sk_buff. ++ atomic_t count; + }; + + struct wg_device { +diff --git a/drivers/net/wireguard/peer.c b/drivers/net/wireguard/peer.c +index b3b6370e6b959..cd5cb0292cb67 100644 +--- a/drivers/net/wireguard/peer.c ++++ b/drivers/net/wireguard/peer.c +@@ -32,27 +32,22 @@ struct wg_peer *wg_peer_create(struct wg_device *wg, + peer = kzalloc(sizeof(*peer), GFP_KERNEL); + if (unlikely(!peer)) + return ERR_PTR(ret); +- peer->device = wg; ++ if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL)) ++ goto err; + ++ peer->device = wg; + wg_noise_handshake_init(&peer->handshake, &wg->static_identity, + public_key, preshared_key, peer); +- if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL)) +- goto err_1; +- if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false, +- MAX_QUEUED_PACKETS)) +- goto err_2; +- if (wg_packet_queue_init(&peer->rx_queue, NULL, false, +- MAX_QUEUED_PACKETS)) +- goto err_3; +- + peer->internal_id = atomic64_inc_return(&peer_counter); + peer->serial_work_cpu = nr_cpumask_bits; + wg_cookie_init(&peer->latest_cookie); + wg_timers_init(peer); + wg_cookie_checker_precompute_peer_keys(peer); + spin_lock_init(&peer->keypairs.keypair_update_lock); +- INIT_WORK(&peer->transmit_handshake_work, +- wg_packet_handshake_send_worker); ++ 
INIT_WORK(&peer->transmit_handshake_work, wg_packet_handshake_send_worker); ++ INIT_WORK(&peer->transmit_packet_work, wg_packet_tx_worker); ++ wg_prev_queue_init(&peer->tx_queue); ++ wg_prev_queue_init(&peer->rx_queue); + rwlock_init(&peer->endpoint_lock); + kref_init(&peer->refcount); + skb_queue_head_init(&peer->staged_packet_queue); +@@ -68,11 +63,7 @@ struct wg_peer *wg_peer_create(struct wg_device *wg, + pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id); + return peer; + +-err_3: +- wg_packet_queue_free(&peer->tx_queue, false); +-err_2: +- dst_cache_destroy(&peer->endpoint_cache); +-err_1: ++err: + kfree(peer); + return ERR_PTR(ret); + } +@@ -197,8 +188,7 @@ static void rcu_release(struct rcu_head *rcu) + struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu); + + dst_cache_destroy(&peer->endpoint_cache); +- wg_packet_queue_free(&peer->rx_queue, false); +- wg_packet_queue_free(&peer->tx_queue, false); ++ WARN_ON(wg_prev_queue_peek(&peer->tx_queue) || wg_prev_queue_peek(&peer->rx_queue)); + + /* The final zeroing takes care of clearing any remaining handshake key + * material and other potentially sensitive information. 
+diff --git a/drivers/net/wireguard/peer.h b/drivers/net/wireguard/peer.h +index 23af409229972..0809cda08bfa4 100644 +--- a/drivers/net/wireguard/peer.h ++++ b/drivers/net/wireguard/peer.h +@@ -36,7 +36,7 @@ struct endpoint { + + struct wg_peer { + struct wg_device *device; +- struct crypt_queue tx_queue, rx_queue; ++ struct prev_queue tx_queue, rx_queue; + struct sk_buff_head staged_packet_queue; + int serial_work_cpu; + struct noise_keypairs keypairs; +@@ -45,7 +45,7 @@ struct wg_peer { + rwlock_t endpoint_lock; + struct noise_handshake handshake; + atomic64_t last_sent_handshake; +- struct work_struct transmit_handshake_work, clear_peer_work; ++ struct work_struct transmit_handshake_work, clear_peer_work, transmit_packet_work; + struct cookie latest_cookie; + struct hlist_node pubkey_hash; + u64 rx_bytes, tx_bytes; +diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c +index 71b8e80b58e12..48e7b982a3073 100644 +--- a/drivers/net/wireguard/queueing.c ++++ b/drivers/net/wireguard/queueing.c +@@ -9,8 +9,7 @@ struct multicore_worker __percpu * + wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr) + { + int cpu; +- struct multicore_worker __percpu *worker = +- alloc_percpu(struct multicore_worker); ++ struct multicore_worker __percpu *worker = alloc_percpu(struct multicore_worker); + + if (!worker) + return NULL; +@@ -23,7 +22,7 @@ wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr) + } + + int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function, +- bool multicore, unsigned int len) ++ unsigned int len) + { + int ret; + +@@ -31,25 +30,78 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function, + ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL); + if (ret) + return ret; +- if (function) { +- if (multicore) { +- queue->worker = wg_packet_percpu_multicore_worker_alloc( +- function, queue); +- if (!queue->worker) { +- ptr_ring_cleanup(&queue->ring, NULL); +- 
return -ENOMEM; +- } +- } else { +- INIT_WORK(&queue->work, function); +- } ++ queue->worker = wg_packet_percpu_multicore_worker_alloc(function, queue); ++ if (!queue->worker) { ++ ptr_ring_cleanup(&queue->ring, NULL); ++ return -ENOMEM; + } + return 0; + } + +-void wg_packet_queue_free(struct crypt_queue *queue, bool multicore) ++void wg_packet_queue_free(struct crypt_queue *queue) + { +- if (multicore) +- free_percpu(queue->worker); ++ free_percpu(queue->worker); + WARN_ON(!__ptr_ring_empty(&queue->ring)); + ptr_ring_cleanup(&queue->ring, NULL); + } ++ ++#define NEXT(skb) ((skb)->prev) ++#define STUB(queue) ((struct sk_buff *)&queue->empty) ++ ++void wg_prev_queue_init(struct prev_queue *queue) ++{ ++ NEXT(STUB(queue)) = NULL; ++ queue->head = queue->tail = STUB(queue); ++ queue->peeked = NULL; ++ atomic_set(&queue->count, 0); ++ BUILD_BUG_ON( ++ offsetof(struct sk_buff, next) != offsetof(struct prev_queue, empty.next) - ++ offsetof(struct prev_queue, empty) || ++ offsetof(struct sk_buff, prev) != offsetof(struct prev_queue, empty.prev) - ++ offsetof(struct prev_queue, empty)); ++} ++ ++static void __wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb) ++{ ++ WRITE_ONCE(NEXT(skb), NULL); ++ WRITE_ONCE(NEXT(xchg_release(&queue->head, skb)), skb); ++} ++ ++bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb) ++{ ++ if (!atomic_add_unless(&queue->count, 1, MAX_QUEUED_PACKETS)) ++ return false; ++ __wg_prev_queue_enqueue(queue, skb); ++ return true; ++} ++ ++struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue) ++{ ++ struct sk_buff *tail = queue->tail, *next = smp_load_acquire(&NEXT(tail)); ++ ++ if (tail == STUB(queue)) { ++ if (!next) ++ return NULL; ++ queue->tail = next; ++ tail = next; ++ next = smp_load_acquire(&NEXT(next)); ++ } ++ if (next) { ++ queue->tail = next; ++ atomic_dec(&queue->count); ++ return tail; ++ } ++ if (tail != READ_ONCE(queue->head)) ++ return NULL; ++ __wg_prev_queue_enqueue(queue, 
STUB(queue)); ++ next = smp_load_acquire(&NEXT(tail)); ++ if (next) { ++ queue->tail = next; ++ atomic_dec(&queue->count); ++ return tail; ++ } ++ return NULL; ++} ++ ++#undef NEXT ++#undef STUB +diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h +index dfb674e030764..4ef2944a68bc9 100644 +--- a/drivers/net/wireguard/queueing.h ++++ b/drivers/net/wireguard/queueing.h +@@ -17,12 +17,13 @@ struct wg_device; + struct wg_peer; + struct multicore_worker; + struct crypt_queue; ++struct prev_queue; + struct sk_buff; + + /* queueing.c APIs: */ + int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function, +- bool multicore, unsigned int len); +-void wg_packet_queue_free(struct crypt_queue *queue, bool multicore); ++ unsigned int len); ++void wg_packet_queue_free(struct crypt_queue *queue); + struct multicore_worker __percpu * + wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr); + +@@ -135,8 +136,31 @@ static inline int wg_cpumask_next_online(int *next) + return cpu; + } + ++void wg_prev_queue_init(struct prev_queue *queue); ++ ++/* Multi producer */ ++bool wg_prev_queue_enqueue(struct prev_queue *queue, struct sk_buff *skb); ++ ++/* Single consumer */ ++struct sk_buff *wg_prev_queue_dequeue(struct prev_queue *queue); ++ ++/* Single consumer */ ++static inline struct sk_buff *wg_prev_queue_peek(struct prev_queue *queue) ++{ ++ if (queue->peeked) ++ return queue->peeked; ++ queue->peeked = wg_prev_queue_dequeue(queue); ++ return queue->peeked; ++} ++ ++/* Single consumer */ ++static inline void wg_prev_queue_drop_peeked(struct prev_queue *queue) ++{ ++ queue->peeked = NULL; ++} ++ + static inline int wg_queue_enqueue_per_device_and_peer( +- struct crypt_queue *device_queue, struct crypt_queue *peer_queue, ++ struct crypt_queue *device_queue, struct prev_queue *peer_queue, + struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu) + { + int cpu; +@@ -145,8 +169,9 @@ static inline int 
wg_queue_enqueue_per_device_and_peer( + /* We first queue this up for the peer ingestion, but the consumer + * will wait for the state to change to CRYPTED or DEAD before. + */ +- if (unlikely(ptr_ring_produce_bh(&peer_queue->ring, skb))) ++ if (unlikely(!wg_prev_queue_enqueue(peer_queue, skb))) + return -ENOSPC; ++ + /* Then we queue it up in the device queue, which consumes the + * packet as soon as it can. + */ +@@ -157,9 +182,7 @@ static inline int wg_queue_enqueue_per_device_and_peer( + return 0; + } + +-static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue, +- struct sk_buff *skb, +- enum packet_state state) ++static inline void wg_queue_enqueue_per_peer_tx(struct sk_buff *skb, enum packet_state state) + { + /* We take a reference, because as soon as we call atomic_set, the + * peer can be freed from below us. +@@ -167,14 +190,12 @@ static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue, + struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb)); + + atomic_set_release(&PACKET_CB(skb)->state, state); +- queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu, +- peer->internal_id), +- peer->device->packet_crypt_wq, &queue->work); ++ queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu, peer->internal_id), ++ peer->device->packet_crypt_wq, &peer->transmit_packet_work); + wg_peer_put(peer); + } + +-static inline void wg_queue_enqueue_per_peer_napi(struct sk_buff *skb, +- enum packet_state state) ++static inline void wg_queue_enqueue_per_peer_rx(struct sk_buff *skb, enum packet_state state) + { + /* We take a reference, because as soon as we call atomic_set, the + * peer can be freed from below us. 
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c +index 2c9551ea6dc73..7dc84bcca2613 100644 +--- a/drivers/net/wireguard/receive.c ++++ b/drivers/net/wireguard/receive.c +@@ -444,7 +444,6 @@ packet_processed: + int wg_packet_rx_poll(struct napi_struct *napi, int budget) + { + struct wg_peer *peer = container_of(napi, struct wg_peer, napi); +- struct crypt_queue *queue = &peer->rx_queue; + struct noise_keypair *keypair; + struct endpoint endpoint; + enum packet_state state; +@@ -455,11 +454,10 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget) + if (unlikely(budget <= 0)) + return 0; + +- while ((skb = __ptr_ring_peek(&queue->ring)) != NULL && ++ while ((skb = wg_prev_queue_peek(&peer->rx_queue)) != NULL && + (state = atomic_read_acquire(&PACKET_CB(skb)->state)) != + PACKET_STATE_UNCRYPTED) { +- __ptr_ring_discard_one(&queue->ring); +- peer = PACKET_PEER(skb); ++ wg_prev_queue_drop_peeked(&peer->rx_queue); + keypair = PACKET_CB(skb)->keypair; + free = true; + +@@ -508,7 +506,7 @@ void wg_packet_decrypt_worker(struct work_struct *work) + enum packet_state state = + likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ? 
+ PACKET_STATE_CRYPTED : PACKET_STATE_DEAD; +- wg_queue_enqueue_per_peer_napi(skb, state); ++ wg_queue_enqueue_per_peer_rx(skb, state); + if (need_resched()) + cond_resched(); + } +@@ -531,12 +529,10 @@ static void wg_packet_consume_data(struct wg_device *wg, struct sk_buff *skb) + if (unlikely(READ_ONCE(peer->is_dead))) + goto err; + +- ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, +- &peer->rx_queue, skb, +- wg->packet_crypt_wq, +- &wg->decrypt_queue.last_cpu); ++ ret = wg_queue_enqueue_per_device_and_peer(&wg->decrypt_queue, &peer->rx_queue, skb, ++ wg->packet_crypt_wq, &wg->decrypt_queue.last_cpu); + if (unlikely(ret == -EPIPE)) +- wg_queue_enqueue_per_peer_napi(skb, PACKET_STATE_DEAD); ++ wg_queue_enqueue_per_peer_rx(skb, PACKET_STATE_DEAD); + if (likely(!ret || ret == -EPIPE)) { + rcu_read_unlock_bh(); + return; +diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c +index f74b9341ab0fe..5368f7c35b4bf 100644 +--- a/drivers/net/wireguard/send.c ++++ b/drivers/net/wireguard/send.c +@@ -239,8 +239,7 @@ void wg_packet_send_keepalive(struct wg_peer *peer) + wg_packet_send_staged_packets(peer); + } + +-static void wg_packet_create_data_done(struct sk_buff *first, +- struct wg_peer *peer) ++static void wg_packet_create_data_done(struct wg_peer *peer, struct sk_buff *first) + { + struct sk_buff *skb, *next; + bool is_keepalive, data_sent = false; +@@ -262,22 +261,19 @@ static void wg_packet_create_data_done(struct sk_buff *first, + + void wg_packet_tx_worker(struct work_struct *work) + { +- struct crypt_queue *queue = container_of(work, struct crypt_queue, +- work); ++ struct wg_peer *peer = container_of(work, struct wg_peer, transmit_packet_work); + struct noise_keypair *keypair; + enum packet_state state; + struct sk_buff *first; +- struct wg_peer *peer; + +- while ((first = __ptr_ring_peek(&queue->ring)) != NULL && ++ while ((first = wg_prev_queue_peek(&peer->tx_queue)) != NULL && + (state = 
atomic_read_acquire(&PACKET_CB(first)->state)) != + PACKET_STATE_UNCRYPTED) { +- __ptr_ring_discard_one(&queue->ring); +- peer = PACKET_PEER(first); ++ wg_prev_queue_drop_peeked(&peer->tx_queue); + keypair = PACKET_CB(first)->keypair; + + if (likely(state == PACKET_STATE_CRYPTED)) +- wg_packet_create_data_done(first, peer); ++ wg_packet_create_data_done(peer, first); + else + kfree_skb_list(first); + +@@ -306,16 +302,14 @@ void wg_packet_encrypt_worker(struct work_struct *work) + break; + } + } +- wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first, +- state); ++ wg_queue_enqueue_per_peer_tx(first, state); + if (need_resched()) + cond_resched(); + } + } + +-static void wg_packet_create_data(struct sk_buff *first) ++static void wg_packet_create_data(struct wg_peer *peer, struct sk_buff *first) + { +- struct wg_peer *peer = PACKET_PEER(first); + struct wg_device *wg = peer->device; + int ret = -EINVAL; + +@@ -323,13 +317,10 @@ static void wg_packet_create_data(struct sk_buff *first) + if (unlikely(READ_ONCE(peer->is_dead))) + goto err; + +- ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, +- &peer->tx_queue, first, +- wg->packet_crypt_wq, +- &wg->encrypt_queue.last_cpu); ++ ret = wg_queue_enqueue_per_device_and_peer(&wg->encrypt_queue, &peer->tx_queue, first, ++ wg->packet_crypt_wq, &wg->encrypt_queue.last_cpu); + if (unlikely(ret == -EPIPE)) +- wg_queue_enqueue_per_peer(&peer->tx_queue, first, +- PACKET_STATE_DEAD); ++ wg_queue_enqueue_per_peer_tx(first, PACKET_STATE_DEAD); + err: + rcu_read_unlock_bh(); + if (likely(!ret || ret == -EPIPE)) +@@ -393,7 +384,7 @@ void wg_packet_send_staged_packets(struct wg_peer *peer) + packets.prev->next = NULL; + wg_peer_get(keypair->entry.peer); + PACKET_CB(packets.next)->keypair = keypair; +- wg_packet_create_data(packets.next); ++ wg_packet_create_data(peer, packets.next); + return; + + out_invalid: +diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c +index 
2e3eb5bbe49c8..4bc84cc5e824b 100644 +--- a/drivers/net/wireless/ath/ath10k/mac.c ++++ b/drivers/net/wireless/ath/ath10k/mac.c +@@ -9116,7 +9116,9 @@ static void ath10k_sta_statistics(struct ieee80211_hw *hw, + if (!ath10k_peer_stats_enabled(ar)) + return; + ++ mutex_lock(&ar->conf_mutex); + ath10k_debug_fw_stats_request(ar); ++ mutex_unlock(&ar->conf_mutex); + + sinfo->rx_duration = arsta->rx_duration; + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_DURATION); +diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c +index fd41f25456dc4..daae470ecf5aa 100644 +--- a/drivers/net/wireless/ath/ath10k/snoc.c ++++ b/drivers/net/wireless/ath/ath10k/snoc.c +@@ -1045,12 +1045,13 @@ static int ath10k_snoc_hif_power_up(struct ath10k *ar, + ret = ath10k_snoc_init_pipes(ar); + if (ret) { + ath10k_err(ar, "failed to initialize CE: %d\n", ret); +- goto err_wlan_enable; ++ goto err_free_rri; + } + + return 0; + +-err_wlan_enable: ++err_free_rri: ++ ath10k_ce_free_rri(ar); + ath10k_snoc_wlan_disable(ar); + + return ret; +diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c +index 7b5834157fe51..e6135795719a1 100644 +--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c ++++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c +@@ -240,8 +240,10 @@ static int ath10k_wmi_tlv_parse_peer_stats_info(struct ath10k *ar, u16 tag, u16 + __le32_to_cpu(stat->last_tx_rate_code), + __le32_to_cpu(stat->last_tx_bitrate_kbps)); + ++ rcu_read_lock(); + sta = ieee80211_find_sta_by_ifaddr(ar->hw, stat->peer_macaddr.addr, NULL); + if (!sta) { ++ rcu_read_unlock(); + ath10k_warn(ar, "not found station for peer stats\n"); + return -EINVAL; + } +@@ -251,6 +253,7 @@ static int ath10k_wmi_tlv_parse_peer_stats_info(struct ath10k *ar, u16 tag, u16 + arsta->rx_bitrate_kbps = __le32_to_cpu(stat->last_rx_bitrate_kbps); + arsta->tx_rate_code = __le32_to_cpu(stat->last_tx_rate_code); + arsta->tx_bitrate_kbps = 
__le32_to_cpu(stat->last_tx_bitrate_kbps); ++ rcu_read_unlock(); + + return 0; + } +diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c +index af427d9051a07..b5bd9b06da89e 100644 +--- a/drivers/net/wireless/ath/ath11k/mac.c ++++ b/drivers/net/wireless/ath/ath11k/mac.c +@@ -4213,11 +4213,6 @@ static int ath11k_mac_op_start(struct ieee80211_hw *hw) + /* Configure the hash seed for hash based reo dest ring selection */ + ath11k_wmi_pdev_lro_cfg(ar, ar->pdev->pdev_id); + +- mutex_unlock(&ar->conf_mutex); +- +- rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], +- &ab->pdevs[ar->pdev_idx]); +- + /* allow device to enter IMPS */ + if (ab->hw_params.idle_ps) { + ret = ath11k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_IDLE_PS_CONFIG, +@@ -4227,6 +4222,12 @@ static int ath11k_mac_op_start(struct ieee80211_hw *hw) + goto err; + } + } ++ ++ mutex_unlock(&ar->conf_mutex); ++ ++ rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], ++ &ab->pdevs[ar->pdev_idx]); ++ + return 0; + + err: +diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c +index 26ea51a721564..859a865c59950 100644 +--- a/drivers/net/wireless/ath/ath9k/debug.c ++++ b/drivers/net/wireless/ath/ath9k/debug.c +@@ -1223,8 +1223,11 @@ static ssize_t write_file_nf_override(struct file *file, + + ah->nf_override = val; + +- if (ah->curchan) ++ if (ah->curchan) { ++ ath9k_ps_wakeup(sc); + ath9k_hw_loadnf(ah, ah->curchan); ++ ath9k_ps_restore(sc); ++ } + + return count; + } +diff --git a/drivers/net/wireless/broadcom/b43/phy_n.c b/drivers/net/wireless/broadcom/b43/phy_n.c +index b669dff24b6e0..665b737fbb0d8 100644 +--- a/drivers/net/wireless/broadcom/b43/phy_n.c ++++ b/drivers/net/wireless/broadcom/b43/phy_n.c +@@ -5311,7 +5311,7 @@ static void b43_nphy_restore_cal(struct b43_wldev *dev) + + for (i = 0; i < 4; i++) { + if (dev->phy.rev >= 3) +- table[i] = coef[i]; ++ coef[i] = table[i]; + else + coef[i] = 0; + } +diff --git 
a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c +index 895a907acdf0f..37ce4fe136c5e 100644 +--- a/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c ++++ b/drivers/net/wireless/intel/iwlwifi/fw/pnvm.c +@@ -198,14 +198,14 @@ static int iwl_pnvm_parse(struct iwl_trans *trans, const u8 *data, + le32_to_cpu(sku_id->data[1]), + le32_to_cpu(sku_id->data[2])); + ++ data += sizeof(*tlv) + ALIGN(tlv_len, 4); ++ len -= ALIGN(tlv_len, 4); ++ + if (trans->sku_id[0] == le32_to_cpu(sku_id->data[0]) && + trans->sku_id[1] == le32_to_cpu(sku_id->data[1]) && + trans->sku_id[2] == le32_to_cpu(sku_id->data[2])) { + int ret; + +- data += sizeof(*tlv) + ALIGN(tlv_len, 4); +- len -= ALIGN(tlv_len, 4); +- + ret = iwl_pnvm_handle_section(trans, data, len); + if (!ret) + return 0; +@@ -227,6 +227,7 @@ int iwl_pnvm_load(struct iwl_trans *trans, + struct iwl_notification_wait pnvm_wait; + static const u16 ntf_cmds[] = { WIDE_ID(REGULATORY_AND_NVM_GROUP, + PNVM_INIT_COMPLETE_NTFY) }; ++ int ret; + + /* if the SKU_ID is empty, there's nothing to do */ + if (!trans->sku_id[0] && !trans->sku_id[1] && !trans->sku_id[2]) +@@ -236,7 +237,6 @@ int iwl_pnvm_load(struct iwl_trans *trans, + if (!trans->pnvm_loaded) { + const struct firmware *pnvm; + char pnvm_name[64]; +- int ret; + + /* + * The prefix unfortunately includes a hyphen at the end, so +@@ -264,6 +264,11 @@ int iwl_pnvm_load(struct iwl_trans *trans, + + release_firmware(pnvm); + } ++ } else { ++ /* if we already loaded, we need to set it again */ ++ ret = iwl_trans_set_pnvm(trans, NULL, 0); ++ if (ret) ++ return ret; + } + + iwl_init_notification_wait(notif_wait, &pnvm_wait, +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +index 6385b9641126b..ad374b25e2550 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c +@@ -896,12 +896,10 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm) + if 
(cmd_ver == 3) { + len = sizeof(cmd.v3); + n_bands = ARRAY_SIZE(cmd.v3.table[0]); +- cmd.v3.table_revision = cpu_to_le32(mvm->fwrt.geo_rev); + } else if (fw_has_api(&mvm->fwrt.fw->ucode_capa, + IWL_UCODE_TLV_API_SAR_TABLE_VER)) { + len = sizeof(cmd.v2); + n_bands = ARRAY_SIZE(cmd.v2.table[0]); +- cmd.v2.table_revision = cpu_to_le32(mvm->fwrt.geo_rev); + } else { + len = sizeof(cmd.v1); + n_bands = ARRAY_SIZE(cmd.v1.table[0]); +@@ -921,6 +919,16 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm) + if (ret) + return 0; + ++ /* ++ * Set the revision on versions that contain it. ++ * This must be done after calling iwl_sar_geo_init(). ++ */ ++ if (cmd_ver == 3) ++ cmd.v3.table_revision = cpu_to_le32(mvm->fwrt.geo_rev); ++ else if (fw_has_api(&mvm->fwrt.fw->ucode_capa, ++ IWL_UCODE_TLV_API_SAR_TABLE_VER)) ++ cmd.v2.table_revision = cpu_to_le32(mvm->fwrt.geo_rev); ++ + return iwl_mvm_send_cmd_pdu(mvm, + WIDE_ID(PHY_OPS_GROUP, GEO_TX_POWER_LIMIT), + 0, len, &cmd); +@@ -929,7 +937,6 @@ static int iwl_mvm_sar_geo_init(struct iwl_mvm *mvm) + static int iwl_mvm_get_ppag_table(struct iwl_mvm *mvm) + { + union acpi_object *wifi_pkg, *data, *enabled; +- union iwl_ppag_table_cmd ppag_table; + int i, j, ret, tbl_rev, num_sub_bands; + int idx = 2; + s8 *gain; +@@ -983,8 +990,8 @@ read_table: + goto out_free; + } + +- ppag_table.v1.enabled = cpu_to_le32(enabled->integer.value); +- if (!ppag_table.v1.enabled) { ++ mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(enabled->integer.value); ++ if (!mvm->fwrt.ppag_table.v1.enabled) { + ret = 0; + goto out_free; + } +@@ -999,16 +1006,23 @@ read_table: + union acpi_object *ent; + + ent = &wifi_pkg->package.elements[idx++]; +- if (ent->type != ACPI_TYPE_INTEGER || +- (j == 0 && ent->integer.value > ACPI_PPAG_MAX_LB) || +- (j == 0 && ent->integer.value < ACPI_PPAG_MIN_LB) || +- (j != 0 && ent->integer.value > ACPI_PPAG_MAX_HB) || +- (j != 0 && ent->integer.value < ACPI_PPAG_MIN_HB)) { +- ppag_table.v1.enabled = cpu_to_le32(0); ++ if 
(ent->type != ACPI_TYPE_INTEGER) { + ret = -EINVAL; + goto out_free; + } ++ + gain[i * num_sub_bands + j] = ent->integer.value; ++ ++ if ((j == 0 && ++ (gain[i * num_sub_bands + j] > ACPI_PPAG_MAX_LB || ++ gain[i * num_sub_bands + j] < ACPI_PPAG_MIN_LB)) || ++ (j != 0 && ++ (gain[i * num_sub_bands + j] > ACPI_PPAG_MAX_HB || ++ gain[i * num_sub_bands + j] < ACPI_PPAG_MIN_HB))) { ++ mvm->fwrt.ppag_table.v1.enabled = cpu_to_le32(0); ++ ret = -EINVAL; ++ goto out_free; ++ } + } + } + ret = 0; +@@ -1021,7 +1035,6 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm) + { + u8 cmd_ver; + int i, j, ret, num_sub_bands, cmd_size; +- union iwl_ppag_table_cmd ppag_table; + s8 *gain; + + if (!fw_has_capa(&mvm->fw->ucode_capa, IWL_UCODE_TLV_CAPA_SET_PPAG)) { +@@ -1040,7 +1053,7 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm) + if (cmd_ver == 1) { + num_sub_bands = IWL_NUM_SUB_BANDS; + gain = mvm->fwrt.ppag_table.v1.gain[0]; +- cmd_size = sizeof(ppag_table.v1); ++ cmd_size = sizeof(mvm->fwrt.ppag_table.v1); + if (mvm->fwrt.ppag_ver == 2) { + IWL_DEBUG_RADIO(mvm, + "PPAG table is v2 but FW supports v1, sending truncated table\n"); +@@ -1048,7 +1061,7 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm) + } else if (cmd_ver == 2) { + num_sub_bands = IWL_NUM_SUB_BANDS_V2; + gain = mvm->fwrt.ppag_table.v2.gain[0]; +- cmd_size = sizeof(ppag_table.v2); ++ cmd_size = sizeof(mvm->fwrt.ppag_table.v2); + if (mvm->fwrt.ppag_ver == 1) { + IWL_DEBUG_RADIO(mvm, + "PPAG table is v1 but FW supports v2, sending padded table\n"); +@@ -1068,7 +1081,7 @@ int iwl_mvm_ppag_send_cmd(struct iwl_mvm *mvm) + IWL_DEBUG_RADIO(mvm, "Sending PER_PLATFORM_ANT_GAIN_CMD\n"); + ret = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(PHY_OPS_GROUP, + PER_PLATFORM_ANT_GAIN_CMD), +- 0, cmd_size, &ppag_table); ++ 0, cmd_size, &mvm->fwrt.ppag_table); + if (ret < 0) + IWL_ERR(mvm, "failed to send PER_PLATFORM_ANT_GAIN_CMD (%d)\n", + ret); +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c 
b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c +index 1db6d8d38822a..3939eccd3d5ac 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c +@@ -1055,9 +1055,6 @@ void iwl_mvm_remove_csa_period(struct iwl_mvm *mvm, + + lockdep_assert_held(&mvm->mutex); + +- if (!te_data->running) +- return; +- + spin_lock_bh(&mvm->time_event_lock); + id = te_data->id; + spin_unlock_bh(&mvm->time_event_lock); +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c +index 2d43899fbdd7a..81ef4fc8d7831 100644 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c +@@ -345,17 +345,20 @@ int iwl_trans_pcie_ctx_info_gen3_set_pnvm(struct iwl_trans *trans, + if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210) + return 0; + +- ret = iwl_pcie_ctxt_info_alloc_dma(trans, data, len, +- &trans_pcie->pnvm_dram); +- if (ret < 0) { +- IWL_DEBUG_FW(trans, "Failed to allocate PNVM DMA %d.\n", +- ret); +- return ret; ++ /* only allocate the DRAM if not allocated yet */ ++ if (!trans->pnvm_loaded) { ++ if (WARN_ON(prph_sc_ctrl->pnvm_cfg.pnvm_size)) ++ return -EBUSY; ++ ++ ret = iwl_pcie_ctxt_info_alloc_dma(trans, data, len, ++ &trans_pcie->pnvm_dram); ++ if (ret < 0) { ++ IWL_DEBUG_FW(trans, "Failed to allocate PNVM DMA %d.\n", ++ ret); ++ return ret; ++ } + } + +- if (WARN_ON(prph_sc_ctrl->pnvm_cfg.pnvm_size)) +- return -EBUSY; +- + prph_sc_ctrl->pnvm_cfg.pnvm_base_addr = + cpu_to_le64(trans_pcie->pnvm_dram.physical); + prph_sc_ctrl->pnvm_cfg.pnvm_size = +diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c +index acb786d8b1d8f..e02a4fbb74de5 100644 +--- a/drivers/net/xen-netback/interface.c ++++ b/drivers/net/xen-netback/interface.c +@@ -162,13 +162,15 @@ irqreturn_t xenvif_interrupt(int irq, void *dev_id) + { + struct xenvif_queue *queue 
= dev_id; + int old; ++ bool has_rx, has_tx; + + old = atomic_fetch_or(NETBK_COMMON_EOI, &queue->eoi_pending); + WARN(old, "Interrupt while EOI pending\n"); + +- /* Use bitwise or as we need to call both functions. */ +- if ((!xenvif_handle_tx_interrupt(queue) | +- !xenvif_handle_rx_interrupt(queue))) { ++ has_tx = xenvif_handle_tx_interrupt(queue); ++ has_rx = xenvif_handle_rx_interrupt(queue); ++ ++ if (!has_rx && !has_tx) { + atomic_andnot(NETBK_COMMON_EOI, &queue->eoi_pending); + xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS); + } +diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c +index 292e535a385d4..e812a0d0fdb3d 100644 +--- a/drivers/nvme/host/multipath.c ++++ b/drivers/nvme/host/multipath.c +@@ -676,6 +676,10 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, struct nvme_id_ns *id) + if (blk_queue_stable_writes(ns->queue) && ns->head->disk) + blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, + ns->head->disk->queue); ++#ifdef CONFIG_BLK_DEV_ZONED ++ if (blk_queue_is_zoned(ns->queue) && ns->head->disk) ++ ns->head->disk->queue->nr_zones = ns->queue->nr_zones; ++#endif + } + + void nvme_mpath_remove_disk(struct nvme_ns_head *head) +diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c +index 92ca23bc8dbfc..e20dea5c44f7b 100644 +--- a/drivers/nvme/target/admin-cmd.c ++++ b/drivers/nvme/target/admin-cmd.c +@@ -469,7 +469,6 @@ out: + static void nvmet_execute_identify_ns(struct nvmet_req *req) + { + struct nvmet_ctrl *ctrl = req->sq->ctrl; +- struct nvmet_ns *ns; + struct nvme_id_ns *id; + u16 status = 0; + +@@ -486,20 +485,21 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) + } + + /* return an all zeroed buffer if we can't find an active namespace */ +- ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid); +- if (!ns) { +- status = NVME_SC_INVALID_NS; ++ req->ns = nvmet_find_namespace(ctrl, req->cmd->identify.nsid); ++ if (!req->ns) { ++ status = 0; + goto done; + } + +- nvmet_ns_revalidate(ns); ++ 
nvmet_ns_revalidate(req->ns); + + /* + * nuse = ncap = nsze isn't always true, but we have no way to find + * that out from the underlying device. + */ +- id->ncap = id->nsze = cpu_to_le64(ns->size >> ns->blksize_shift); +- switch (req->port->ana_state[ns->anagrpid]) { ++ id->ncap = id->nsze = ++ cpu_to_le64(req->ns->size >> req->ns->blksize_shift); ++ switch (req->port->ana_state[req->ns->anagrpid]) { + case NVME_ANA_INACCESSIBLE: + case NVME_ANA_PERSISTENT_LOSS: + break; +@@ -508,8 +508,8 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) + break; + } + +- if (ns->bdev) +- nvmet_bdev_set_limits(ns->bdev, id); ++ if (req->ns->bdev) ++ nvmet_bdev_set_limits(req->ns->bdev, id); + + /* + * We just provide a single LBA format that matches what the +@@ -523,25 +523,24 @@ static void nvmet_execute_identify_ns(struct nvmet_req *req) + * controllers, but also with any other user of the block device. + */ + id->nmic = (1 << 0); +- id->anagrpid = cpu_to_le32(ns->anagrpid); ++ id->anagrpid = cpu_to_le32(req->ns->anagrpid); + +- memcpy(&id->nguid, &ns->nguid, sizeof(id->nguid)); ++ memcpy(&id->nguid, &req->ns->nguid, sizeof(id->nguid)); + +- id->lbaf[0].ds = ns->blksize_shift; ++ id->lbaf[0].ds = req->ns->blksize_shift; + +- if (ctrl->pi_support && nvmet_ns_has_pi(ns)) { ++ if (ctrl->pi_support && nvmet_ns_has_pi(req->ns)) { + id->dpc = NVME_NS_DPC_PI_FIRST | NVME_NS_DPC_PI_LAST | + NVME_NS_DPC_PI_TYPE1 | NVME_NS_DPC_PI_TYPE2 | + NVME_NS_DPC_PI_TYPE3; + id->mc = NVME_MC_EXTENDED_LBA; +- id->dps = ns->pi_type; ++ id->dps = req->ns->pi_type; + id->flbas = NVME_NS_FLBAS_META_EXT; +- id->lbaf[0].ms = cpu_to_le16(ns->metadata_size); ++ id->lbaf[0].ms = cpu_to_le16(req->ns->metadata_size); + } + +- if (ns->readonly) ++ if (req->ns->readonly) + id->nsattr |= (1 << 0); +- nvmet_put_namespace(ns); + done: + if (!status) + status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); +diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c +index 
aacf06f0b4312..8b0485ada315b 100644 +--- a/drivers/nvme/target/tcp.c ++++ b/drivers/nvme/target/tcp.c +@@ -379,7 +379,7 @@ err: + return NVME_SC_INTERNAL; + } + +-static void nvmet_tcp_ddgst(struct ahash_request *hash, ++static void nvmet_tcp_send_ddgst(struct ahash_request *hash, + struct nvmet_tcp_cmd *cmd) + { + ahash_request_set_crypt(hash, cmd->req.sg, +@@ -387,6 +387,23 @@ static void nvmet_tcp_ddgst(struct ahash_request *hash, + crypto_ahash_digest(hash); + } + ++static void nvmet_tcp_recv_ddgst(struct ahash_request *hash, ++ struct nvmet_tcp_cmd *cmd) ++{ ++ struct scatterlist sg; ++ struct kvec *iov; ++ int i; ++ ++ crypto_ahash_init(hash); ++ for (i = 0, iov = cmd->iov; i < cmd->nr_mapped; i++, iov++) { ++ sg_init_one(&sg, iov->iov_base, iov->iov_len); ++ ahash_request_set_crypt(hash, &sg, NULL, iov->iov_len); ++ crypto_ahash_update(hash); ++ } ++ ahash_request_set_crypt(hash, NULL, (void *)&cmd->exp_ddgst, 0); ++ crypto_ahash_final(hash); ++} ++ + static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd) + { + struct nvme_tcp_data_pdu *pdu = cmd->data_pdu; +@@ -411,7 +428,7 @@ static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd) + + if (queue->data_digest) { + pdu->hdr.flags |= NVME_TCP_F_DDGST; +- nvmet_tcp_ddgst(queue->snd_hash, cmd); ++ nvmet_tcp_send_ddgst(queue->snd_hash, cmd); + } + + if (cmd->queue->hdr_digest) { +@@ -1060,7 +1077,7 @@ static void nvmet_tcp_prep_recv_ddgst(struct nvmet_tcp_cmd *cmd) + { + struct nvmet_tcp_queue *queue = cmd->queue; + +- nvmet_tcp_ddgst(queue->rcv_hash, cmd); ++ nvmet_tcp_recv_ddgst(queue->rcv_hash, cmd); + queue->offset = 0; + queue->left = NVME_TCP_DIGEST_LENGTH; + queue->rcv_state = NVMET_TCP_RECV_DDGST; +@@ -1081,14 +1098,14 @@ static int nvmet_tcp_try_recv_data(struct nvmet_tcp_queue *queue) + cmd->rbytes_done += ret; + } + ++ if (queue->data_digest) { ++ nvmet_tcp_prep_recv_ddgst(cmd); ++ return 0; ++ } + nvmet_tcp_unmap_pdu_iovec(cmd); + + if (!(cmd->flags & NVMET_TCP_F_INIT_FAILED) && + 
cmd->rbytes_done == cmd->req.transfer_len) { +- if (queue->data_digest) { +- nvmet_tcp_prep_recv_ddgst(cmd); +- return 0; +- } + cmd->req.execute(&cmd->req); + } + +@@ -1468,17 +1485,27 @@ static int nvmet_tcp_set_queue_sock(struct nvmet_tcp_queue *queue) + if (inet->rcv_tos > 0) + ip_sock_set_tos(sock->sk, inet->rcv_tos); + ++ ret = 0; + write_lock_bh(&sock->sk->sk_callback_lock); +- sock->sk->sk_user_data = queue; +- queue->data_ready = sock->sk->sk_data_ready; +- sock->sk->sk_data_ready = nvmet_tcp_data_ready; +- queue->state_change = sock->sk->sk_state_change; +- sock->sk->sk_state_change = nvmet_tcp_state_change; +- queue->write_space = sock->sk->sk_write_space; +- sock->sk->sk_write_space = nvmet_tcp_write_space; ++ if (sock->sk->sk_state != TCP_ESTABLISHED) { ++ /* ++ * If the socket is already closing, don't even start ++ * consuming it ++ */ ++ ret = -ENOTCONN; ++ } else { ++ sock->sk->sk_user_data = queue; ++ queue->data_ready = sock->sk->sk_data_ready; ++ sock->sk->sk_data_ready = nvmet_tcp_data_ready; ++ queue->state_change = sock->sk->sk_state_change; ++ sock->sk->sk_state_change = nvmet_tcp_state_change; ++ queue->write_space = sock->sk->sk_write_space; ++ sock->sk->sk_write_space = nvmet_tcp_write_space; ++ queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work); ++ } + write_unlock_bh(&sock->sk->sk_callback_lock); + +- return 0; ++ return ret; + } + + static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port, +@@ -1526,8 +1553,6 @@ static int nvmet_tcp_alloc_queue(struct nvmet_tcp_port *port, + if (ret) + goto out_destroy_sq; + +- queue_work_on(queue_cpu(queue), nvmet_tcp_wq, &queue->io_work); +- + return 0; + out_destroy_sq: + mutex_lock(&nvmet_tcp_queue_mutex); +diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c +index a09ff8409f600..9b6ab83956c3b 100644 +--- a/drivers/nvmem/core.c ++++ b/drivers/nvmem/core.c +@@ -545,7 +545,9 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem) + + for_each_child_of_node(parent, 
child) { + addr = of_get_property(child, "reg", &len); +- if (!addr || (len < 2 * sizeof(u32))) { ++ if (!addr) ++ continue; ++ if (len < 2 * sizeof(u32)) { + dev_err(dev, "nvmem: invalid reg on %pOF\n", child); + return -EINVAL; + } +@@ -576,6 +578,7 @@ static int nvmem_add_cells_from_of(struct nvmem_device *nvmem) + cell->name, nvmem->stride); + /* Cells already added will be freed later. */ + kfree_const(cell->name); ++ of_node_put(cell->np); + kfree(cell); + return -EINVAL; + } +diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c +index a72704cd04681..f6e9f96933ca2 100644 +--- a/drivers/nvmem/qcom-spmi-sdam.c ++++ b/drivers/nvmem/qcom-spmi-sdam.c +@@ -1,6 +1,6 @@ + // SPDX-License-Identifier: GPL-2.0-only + /* +- * Copyright (c) 2017, 2020 The Linux Foundation. All rights reserved. ++ * Copyright (c) 2017, 2020-2021, The Linux Foundation. All rights reserved. + */ + + #include +@@ -18,7 +18,6 @@ + #define SDAM_PBS_TRIG_CLR 0xE6 + + struct sdam_chip { +- struct platform_device *pdev; + struct regmap *regmap; + struct nvmem_config sdam_config; + unsigned int base; +@@ -65,7 +64,7 @@ static int sdam_read(void *priv, unsigned int offset, void *val, + size_t bytes) + { + struct sdam_chip *sdam = priv; +- struct device *dev = &sdam->pdev->dev; ++ struct device *dev = sdam->sdam_config.dev; + int rc; + + if (!sdam_is_valid(sdam, offset, bytes)) { +@@ -86,7 +85,7 @@ static int sdam_write(void *priv, unsigned int offset, void *val, + size_t bytes) + { + struct sdam_chip *sdam = priv; +- struct device *dev = &sdam->pdev->dev; ++ struct device *dev = sdam->sdam_config.dev; + int rc; + + if (!sdam_is_valid(sdam, offset, bytes)) { +diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c +index 4602e467ca8b9..f2e697000b96f 100644 +--- a/drivers/of/fdt.c ++++ b/drivers/of/fdt.c +@@ -1149,8 +1149,16 @@ int __init __weak early_init_dt_mark_hotplug_memory_arch(u64 base, u64 size) + int __init __weak early_init_dt_reserve_memory_arch(phys_addr_t base, + 
phys_addr_t size, bool nomap) + { +- if (nomap) +- return memblock_remove(base, size); ++ if (nomap) { ++ /* ++ * If the memory is already reserved (by another region), we ++ * should not allow it to be marked nomap. ++ */ ++ if (memblock_is_region_reserved(base, size)) ++ return -EBUSY; ++ ++ return memblock_mark_nomap(base, size); ++ } + return memblock_reserve(base, size); + } + +diff --git a/drivers/opp/of.c b/drivers/opp/of.c +index 9faeb83e4b326..363277b31ecbb 100644 +--- a/drivers/opp/of.c ++++ b/drivers/opp/of.c +@@ -751,7 +751,6 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table, + struct device *dev, struct device_node *np) + { + struct dev_pm_opp *new_opp; +- u64 rate = 0; + u32 val; + int ret; + bool rate_not_available = false; +@@ -768,7 +767,8 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table, + + /* Check if the OPP supports hardware's hierarchy of versions or not */ + if (!_opp_is_supported(dev, opp_table, np)) { +- dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate); ++ dev_dbg(dev, "OPP not supported by hardware: %lu\n", ++ new_opp->rate); + goto free_opp; + } + +diff --git a/drivers/pci/controller/cadence/pcie-cadence-host.c b/drivers/pci/controller/cadence/pcie-cadence-host.c +index 811c1cb2e8deb..1cb7cfc75d6e4 100644 +--- a/drivers/pci/controller/cadence/pcie-cadence-host.c ++++ b/drivers/pci/controller/cadence/pcie-cadence-host.c +@@ -321,9 +321,10 @@ static int cdns_pcie_host_map_dma_ranges(struct cdns_pcie_rc *rc) + + resource_list_for_each_entry(entry, &bridge->dma_ranges) { + err = cdns_pcie_host_bar_config(rc, entry); +- if (err) ++ if (err) { + dev_err(dev, "Fail to configure IB using dma-ranges\n"); +- return err; ++ return err; ++ } + } + + return 0; +diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c +index b4761640ffd99..557554f53ce9c 100644 +--- a/drivers/pci/controller/dwc/pcie-qcom.c ++++ b/drivers/pci/controller/dwc/pcie-qcom.c 
+@@ -395,7 +395,9 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie) + + /* enable external reference clock */ + val = readl(pcie->parf + PCIE20_PARF_PHY_REFCLK); +- val &= ~PHY_REFCLK_USE_PAD; ++ /* USE_PAD is required only for ipq806x */ ++ if (!of_device_is_compatible(node, "qcom,pcie-apq8064")) ++ val &= ~PHY_REFCLK_USE_PAD; + val |= PHY_REFCLK_SSP_EN; + writel(val, pcie->parf + PCIE20_PARF_PHY_REFCLK); + +diff --git a/drivers/pci/controller/pcie-rcar-host.c b/drivers/pci/controller/pcie-rcar-host.c +index cdc0963f154e3..2bee09b16255d 100644 +--- a/drivers/pci/controller/pcie-rcar-host.c ++++ b/drivers/pci/controller/pcie-rcar-host.c +@@ -737,7 +737,7 @@ static int rcar_pcie_enable_msi(struct rcar_pcie_host *host) + } + + /* setup MSI data target */ +- msi->pages = __get_free_pages(GFP_KERNEL, 0); ++ msi->pages = __get_free_pages(GFP_KERNEL | GFP_DMA32, 0); + rcar_pcie_hw_enable_msi(host); + + return 0; +diff --git a/drivers/pci/controller/pcie-rockchip.c b/drivers/pci/controller/pcie-rockchip.c +index 904dec0d3a88f..990a00e08bc5b 100644 +--- a/drivers/pci/controller/pcie-rockchip.c ++++ b/drivers/pci/controller/pcie-rockchip.c +@@ -82,7 +82,7 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip) + } + + rockchip->mgmt_sticky_rst = devm_reset_control_get_exclusive(dev, +- "mgmt-sticky"); ++ "mgmt-sticky"); + if (IS_ERR(rockchip->mgmt_sticky_rst)) { + if (PTR_ERR(rockchip->mgmt_sticky_rst) != -EPROBE_DEFER) + dev_err(dev, "missing mgmt-sticky reset property in node\n"); +@@ -118,11 +118,11 @@ int rockchip_pcie_parse_dt(struct rockchip_pcie *rockchip) + } + + if (rockchip->is_rc) { +- rockchip->ep_gpio = devm_gpiod_get(dev, "ep", GPIOD_OUT_HIGH); +- if (IS_ERR(rockchip->ep_gpio)) { +- dev_err(dev, "missing ep-gpios property in node\n"); +- return PTR_ERR(rockchip->ep_gpio); +- } ++ rockchip->ep_gpio = devm_gpiod_get_optional(dev, "ep", ++ GPIOD_OUT_HIGH); ++ if (IS_ERR(rockchip->ep_gpio)) ++ return dev_err_probe(dev, PTR_ERR(rockchip->ep_gpio), ++ 
"failed to get ep GPIO\n"); + } + + rockchip->aclk_pcie = devm_clk_get(dev, "aclk"); +diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c +index f92e0152e65e3..67937facd90cd 100644 +--- a/drivers/pci/controller/pcie-xilinx-cpm.c ++++ b/drivers/pci/controller/pcie-xilinx-cpm.c +@@ -404,6 +404,7 @@ static int xilinx_cpm_pcie_init_irq_domain(struct xilinx_cpm_pcie_port *port) + return 0; + out: + xilinx_cpm_free_irq_domains(port); ++ of_node_put(pcie_intc_node); + dev_err(dev, "Failed to allocate IRQ domains\n"); + + return -ENOMEM; +diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c +index 139869d50eb26..fdaf86a888b73 100644 +--- a/drivers/pci/pci-bridge-emul.c ++++ b/drivers/pci/pci-bridge-emul.c +@@ -21,8 +21,9 @@ + #include "pci-bridge-emul.h" + + #define PCI_BRIDGE_CONF_END PCI_STD_HEADER_SIZEOF ++#define PCI_CAP_PCIE_SIZEOF (PCI_EXP_SLTSTA2 + 2) + #define PCI_CAP_PCIE_START PCI_BRIDGE_CONF_END +-#define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2) ++#define PCI_CAP_PCIE_END (PCI_CAP_PCIE_START + PCI_CAP_PCIE_SIZEOF) + + /** + * struct pci_bridge_reg_behavior - register bits behaviors +@@ -46,7 +47,8 @@ struct pci_bridge_reg_behavior { + u32 w1c; + }; + +-static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { ++static const ++struct pci_bridge_reg_behavior pci_regs_behavior[PCI_STD_HEADER_SIZEOF / 4] = { + [PCI_VENDOR_ID / 4] = { .ro = ~0 }, + [PCI_COMMAND / 4] = { + .rw = (PCI_COMMAND_IO | PCI_COMMAND_MEMORY | +@@ -164,7 +166,8 @@ static const struct pci_bridge_reg_behavior pci_regs_behavior[] = { + }, + }; + +-static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { ++static const ++struct pci_bridge_reg_behavior pcie_cap_regs_behavior[PCI_CAP_PCIE_SIZEOF / 4] = { + [PCI_CAP_LIST_ID / 4] = { + /* + * Capability ID, Next Capability Pointer and +@@ -260,6 +263,8 @@ static const struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = { + int 
pci_bridge_emul_init(struct pci_bridge_emul *bridge, + unsigned int flags) + { ++ BUILD_BUG_ON(sizeof(bridge->conf) != PCI_BRIDGE_CONF_END); ++ + bridge->conf.class_revision |= cpu_to_le32(PCI_CLASS_BRIDGE_PCI << 16); + bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE; + bridge->conf.cache_line_size = 0x10; +diff --git a/drivers/pci/setup-res.c b/drivers/pci/setup-res.c +index 43eda101fcf40..7f1acb3918d0c 100644 +--- a/drivers/pci/setup-res.c ++++ b/drivers/pci/setup-res.c +@@ -410,10 +410,16 @@ EXPORT_SYMBOL(pci_release_resource); + int pci_resize_resource(struct pci_dev *dev, int resno, int size) + { + struct resource *res = dev->resource + resno; ++ struct pci_host_bridge *host; + int old, ret; + u32 sizes; + u16 cmd; + ++ /* Check if we must preserve the firmware's resource assignment */ ++ host = pci_find_host_bridge(dev->bus); ++ if (host->preserve_config) ++ return -ENOTSUPP; ++ + /* Make sure the resource isn't assigned before resizing it. */ + if (!(res->flags & IORESOURCE_UNSET)) + return -EBUSY; +diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c +index 31e39558d49d8..8b003c890b87b 100644 +--- a/drivers/pci/syscall.c ++++ b/drivers/pci/syscall.c +@@ -20,7 +20,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn, + u16 word; + u32 dword; + long err; +- long cfg_ret; ++ int cfg_ret; + + if (!capable(CAP_SYS_ADMIN)) + return -EPERM; +@@ -46,7 +46,7 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn, + } + + err = -EIO; +- if (cfg_ret != PCIBIOS_SUCCESSFUL) ++ if (cfg_ret) + goto error; + + switch (len) { +@@ -105,7 +105,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn, + if (err) + break; + err = pci_user_write_config_byte(dev, off, byte); +- if (err != PCIBIOS_SUCCESSFUL) ++ if (err) + err = -EIO; + break; + +@@ -114,7 +114,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn, + if (err) + break; + err = pci_user_write_config_word(dev, off, word); +- 
if (err != PCIBIOS_SUCCESSFUL) ++ if (err) + err = -EIO; + break; + +@@ -123,7 +123,7 @@ SYSCALL_DEFINE5(pciconfig_write, unsigned long, bus, unsigned long, dfn, + if (err) + break; + err = pci_user_write_config_dword(dev, off, dword); +- if (err != PCIBIOS_SUCCESSFUL) ++ if (err) + err = -EIO; + break; + +diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c +index a76ff594f3ca4..46defb1dcf867 100644 +--- a/drivers/perf/arm-cmn.c ++++ b/drivers/perf/arm-cmn.c +@@ -1150,7 +1150,7 @@ static int arm_cmn_commit_txn(struct pmu *pmu) + static int arm_cmn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) + { + struct arm_cmn *cmn; +- unsigned int target; ++ unsigned int i, target; + + cmn = hlist_entry_safe(node, struct arm_cmn, cpuhp_node); + if (cpu != cmn->cpu) +@@ -1161,6 +1161,8 @@ static int arm_cmn_pmu_offline_cpu(unsigned int cpu, struct hlist_node *node) + return 0; + + perf_pmu_migrate_context(&cmn->pmu, cpu, target); ++ for (i = 0; i < cmn->num_dtcs; i++) ++ irq_set_affinity_hint(cmn->dtc[i].irq, cpumask_of(target)); + cmn->cpu = target; + return 0; + } +@@ -1502,7 +1504,7 @@ static int arm_cmn_probe(struct platform_device *pdev) + struct arm_cmn *cmn; + const char *name; + static atomic_t id; +- int err, rootnode, this_id; ++ int err, rootnode; + + cmn = devm_kzalloc(&pdev->dev, sizeof(*cmn), GFP_KERNEL); + if (!cmn) +@@ -1549,14 +1551,9 @@ static int arm_cmn_probe(struct platform_device *pdev) + .cancel_txn = arm_cmn_end_txn, + }; + +- this_id = atomic_fetch_inc(&id); +- if (this_id == 0) { +- name = "arm_cmn"; +- } else { +- name = devm_kasprintf(cmn->dev, GFP_KERNEL, "arm_cmn_%d", this_id); +- if (!name) +- return -ENOMEM; +- } ++ name = devm_kasprintf(cmn->dev, GFP_KERNEL, "arm_cmn_%d", atomic_fetch_inc(&id)); ++ if (!name) ++ return -ENOMEM; + + err = cpuhp_state_add_instance(arm_cmn_hp_state, &cmn->cpuhp_node); + if (err) +diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig +index 01b53f86004cb..9ed5f167a9f3c 100644 +--- 
a/drivers/phy/Kconfig ++++ b/drivers/phy/Kconfig +@@ -52,6 +52,7 @@ config PHY_XGENE + config USB_LGM_PHY + tristate "INTEL Lightning Mountain USB PHY Driver" + depends on USB_SUPPORT ++ depends on X86 || COMPILE_TEST + select USB_PHY + select REGULATOR + select REGULATOR_FIXED_VOLTAGE +diff --git a/drivers/phy/cadence/phy-cadence-torrent.c b/drivers/phy/cadence/phy-cadence-torrent.c +index f310e15d94cbc..591a15834b48f 100644 +--- a/drivers/phy/cadence/phy-cadence-torrent.c ++++ b/drivers/phy/cadence/phy-cadence-torrent.c +@@ -2298,6 +2298,7 @@ static int cdns_torrent_phy_probe(struct platform_device *pdev) + + if (total_num_lanes > MAX_NUM_LANES) { + dev_err(dev, "Invalid lane configuration\n"); ++ ret = -EINVAL; + goto put_lnk_rst; + } + +diff --git a/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c b/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c +index a7d126192cf12..29d246ea24b47 100644 +--- a/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c ++++ b/drivers/phy/lantiq/phy-lantiq-rcu-usb2.c +@@ -124,8 +124,16 @@ static int ltq_rcu_usb2_phy_power_on(struct phy *phy) + reset_control_deassert(priv->phy_reset); + + ret = clk_prepare_enable(priv->phy_gate_clk); +- if (ret) ++ if (ret) { + dev_err(dev, "failed to enable PHY gate\n"); ++ return ret; ++ } ++ ++ /* ++ * at least the xrx200 usb2 phy requires some extra time to be ++ * operational after enabling the clock ++ */ ++ usleep_range(100, 200); + + return ret; + } +diff --git a/drivers/phy/rockchip/phy-rockchip-emmc.c b/drivers/phy/rockchip/phy-rockchip-emmc.c +index 2dc19ddd120f5..a005fc58bbf02 100644 +--- a/drivers/phy/rockchip/phy-rockchip-emmc.c ++++ b/drivers/phy/rockchip/phy-rockchip-emmc.c +@@ -240,15 +240,17 @@ static int rockchip_emmc_phy_init(struct phy *phy) + * - SDHCI driver to get the PHY + * - SDHCI driver to init the PHY + * +- * The clock is optional, so upon any error we just set to NULL. 
++ * The clock is optional, using clk_get_optional() to get the clock ++ * and do error processing if the return value != NULL + * + * NOTE: we don't do anything special for EPROBE_DEFER here. Given the + * above expected use case, EPROBE_DEFER isn't sensible to expect, so + * it's just like any other error. + */ +- rk_phy->emmcclk = clk_get(&phy->dev, "emmcclk"); ++ rk_phy->emmcclk = clk_get_optional(&phy->dev, "emmcclk"); + if (IS_ERR(rk_phy->emmcclk)) { +- dev_dbg(&phy->dev, "Error getting emmcclk: %d\n", ret); ++ ret = PTR_ERR(rk_phy->emmcclk); ++ dev_err(&phy->dev, "Error getting emmcclk: %d\n", ret); + rk_phy->emmcclk = NULL; + } + +diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c +index 0ecee8b8773d0..ea5149efcbeae 100644 +--- a/drivers/platform/chrome/cros_ec_proto.c ++++ b/drivers/platform/chrome/cros_ec_proto.c +@@ -526,11 +526,13 @@ int cros_ec_query_all(struct cros_ec_device *ec_dev) + * power), not wake up. + */ + ec_dev->host_event_wake_mask = U32_MAX & +- ~(BIT(EC_HOST_EVENT_AC_DISCONNECTED) | +- BIT(EC_HOST_EVENT_BATTERY_LOW) | +- BIT(EC_HOST_EVENT_BATTERY_CRITICAL) | +- BIT(EC_HOST_EVENT_PD_MCU) | +- BIT(EC_HOST_EVENT_BATTERY_STATUS)); ++ ~(EC_HOST_EVENT_MASK(EC_HOST_EVENT_LID_CLOSED) | ++ EC_HOST_EVENT_MASK(EC_HOST_EVENT_AC_DISCONNECTED) | ++ EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY_LOW) | ++ EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY_CRITICAL) | ++ EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY) | ++ EC_HOST_EVENT_MASK(EC_HOST_EVENT_PD_MCU) | ++ EC_HOST_EVENT_MASK(EC_HOST_EVENT_BATTERY_STATUS)); + /* + * Old ECs may not support this command. Complain about all + * other errors. 
+diff --git a/drivers/power/reset/at91-sama5d2_shdwc.c b/drivers/power/reset/at91-sama5d2_shdwc.c +index 2fe3a627cb535..d9cf91e5b06d0 100644 +--- a/drivers/power/reset/at91-sama5d2_shdwc.c ++++ b/drivers/power/reset/at91-sama5d2_shdwc.c +@@ -37,7 +37,7 @@ + + #define AT91_SHDW_MR 0x04 /* Shut Down Mode Register */ + #define AT91_SHDW_WKUPDBC_SHIFT 24 +-#define AT91_SHDW_WKUPDBC_MASK GENMASK(31, 16) ++#define AT91_SHDW_WKUPDBC_MASK GENMASK(26, 24) + #define AT91_SHDW_WKUPDBC(x) (((x) << AT91_SHDW_WKUPDBC_SHIFT) \ + & AT91_SHDW_WKUPDBC_MASK) + +diff --git a/drivers/power/supply/Kconfig b/drivers/power/supply/Kconfig +index eec646c568b7b..1699b9269a78e 100644 +--- a/drivers/power/supply/Kconfig ++++ b/drivers/power/supply/Kconfig +@@ -229,6 +229,7 @@ config BATTERY_SBS + config CHARGER_SBS + tristate "SBS Compliant charger" + depends on I2C ++ select REGMAP_I2C + help + Say Y to include support for SBS compliant battery chargers. + +diff --git a/drivers/power/supply/axp20x_usb_power.c b/drivers/power/supply/axp20x_usb_power.c +index 0eaa86c52874a..25e288388edad 100644 +--- a/drivers/power/supply/axp20x_usb_power.c ++++ b/drivers/power/supply/axp20x_usb_power.c +@@ -593,6 +593,7 @@ static int axp20x_usb_power_probe(struct platform_device *pdev) + power->axp20x_id = axp_data->axp20x_id; + power->regmap = axp20x->regmap; + power->num_irqs = axp_data->num_irq_names; ++ INIT_DELAYED_WORK(&power->vbus_detect, axp20x_usb_power_poll_vbus); + + if (power->axp20x_id == AXP202_ID) { + /* Enable vbus valid checking */ +@@ -645,7 +646,6 @@ static int axp20x_usb_power_probe(struct platform_device *pdev) + } + } + +- INIT_DELAYED_WORK(&power->vbus_detect, axp20x_usb_power_poll_vbus); + if (axp20x_usb_vbus_needs_polling(power)) + queue_delayed_work(system_wq, &power->vbus_detect, 0); + +diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c +index 295611b3b15e9..cebc5c8fda1b5 100644 +--- a/drivers/power/supply/cpcap-battery.c ++++ 
b/drivers/power/supply/cpcap-battery.c +@@ -561,17 +561,21 @@ static int cpcap_battery_update_charger(struct cpcap_battery_ddata *ddata, + POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE, + &prop); + if (error) +- return error; ++ goto out_put; + + /* Allow charger const voltage lower than battery const voltage */ + if (const_charge_voltage > prop.intval) +- return 0; ++ goto out_put; + + val.intval = const_charge_voltage; + +- return power_supply_set_property(charger, ++ error = power_supply_set_property(charger, + POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE, + &val); ++out_put: ++ power_supply_put(charger); ++ ++ return error; + } + + static int cpcap_battery_set_property(struct power_supply *psy, +@@ -666,7 +670,7 @@ static int cpcap_battery_init_irq(struct platform_device *pdev, + + error = devm_request_threaded_irq(ddata->dev, irq, NULL, + cpcap_battery_irq_thread, +- IRQF_SHARED, ++ IRQF_SHARED | IRQF_ONESHOT, + name, ddata); + if (error) { + dev_err(ddata->dev, "could not get irq %s: %i\n", +diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c +index c0d452e3dc8b0..22fff01425d63 100644 +--- a/drivers/power/supply/cpcap-charger.c ++++ b/drivers/power/supply/cpcap-charger.c +@@ -301,6 +301,8 @@ cpcap_charger_get_bat_const_charge_voltage(struct cpcap_charger_ddata *ddata) + &prop); + if (!error) + voltage = prop.intval; ++ ++ power_supply_put(battery); + } + + return voltage; +@@ -708,7 +710,7 @@ static int cpcap_usb_init_irq(struct platform_device *pdev, + + error = devm_request_threaded_irq(ddata->dev, irq, NULL, + cpcap_charger_irq_thread, +- IRQF_SHARED, ++ IRQF_SHARED | IRQF_ONESHOT, + name, ddata); + if (error) { + dev_err(ddata->dev, "could not get irq %s: %i\n", +diff --git a/drivers/power/supply/smb347-charger.c b/drivers/power/supply/smb347-charger.c +index d3bf35ed12cee..8cfbd8d6b4786 100644 +--- a/drivers/power/supply/smb347-charger.c ++++ b/drivers/power/supply/smb347-charger.c +@@ -137,6 +137,7 @@ + * @mains_online: is 
AC/DC input connected + * @usb_online: is USB input connected + * @charging_enabled: is charging enabled ++ * @irq_unsupported: is interrupt unsupported by SMB hardware + * @max_charge_current: maximum current (in uA) the battery can be charged + * @max_charge_voltage: maximum voltage (in uV) the battery can be charged + * @pre_charge_current: current (in uA) to use in pre-charging phase +@@ -193,6 +194,7 @@ struct smb347_charger { + bool mains_online; + bool usb_online; + bool charging_enabled; ++ bool irq_unsupported; + + unsigned int max_charge_current; + unsigned int max_charge_voltage; +@@ -862,6 +864,9 @@ static int smb347_irq_set(struct smb347_charger *smb, bool enable) + { + int ret; + ++ if (smb->irq_unsupported) ++ return 0; ++ + ret = smb347_set_writable(smb, true); + if (ret < 0) + return ret; +@@ -923,8 +928,6 @@ static int smb347_irq_init(struct smb347_charger *smb, + ret = regmap_update_bits(smb->regmap, CFG_STAT, + CFG_STAT_ACTIVE_HIGH | CFG_STAT_DISABLED, + CFG_STAT_DISABLED); +- if (ret < 0) +- client->irq = 0; + + smb347_set_writable(smb, false); + +@@ -1345,6 +1348,7 @@ static int smb347_probe(struct i2c_client *client, + if (ret < 0) { + dev_warn(dev, "failed to initialize IRQ: %d\n", ret); + dev_warn(dev, "disabling IRQ support\n"); ++ smb->irq_unsupported = true; + } else { + smb347_irq_enable(smb); + } +@@ -1357,8 +1361,8 @@ static int smb347_remove(struct i2c_client *client) + { + struct smb347_charger *smb = i2c_get_clientdata(client); + +- if (client->irq) +- smb347_irq_disable(smb); ++ smb347_irq_disable(smb); ++ + return 0; + } + +diff --git a/drivers/pwm/pwm-iqs620a.c b/drivers/pwm/pwm-iqs620a.c +index 7d33e36464360..3e967a12458c6 100644 +--- a/drivers/pwm/pwm-iqs620a.c ++++ b/drivers/pwm/pwm-iqs620a.c +@@ -46,7 +46,8 @@ static int iqs620_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, + { + struct iqs620_pwm_private *iqs620_pwm; + struct iqs62x_core *iqs62x; +- u64 duty_scale; ++ unsigned int duty_cycle; ++ unsigned int 
duty_scale; + int ret; + + if (state->polarity != PWM_POLARITY_NORMAL) +@@ -70,7 +71,8 @@ static int iqs620_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, + * For lower duty cycles (e.g. 0), the PWM output is simply disabled to + * allow an external pull-down resistor to hold the GPIO3/LTX pin low. + */ +- duty_scale = div_u64(state->duty_cycle * 256, IQS620_PWM_PERIOD_NS); ++ duty_cycle = min_t(u64, state->duty_cycle, IQS620_PWM_PERIOD_NS); ++ duty_scale = duty_cycle * 256 / IQS620_PWM_PERIOD_NS; + + mutex_lock(&iqs620_pwm->lock); + +@@ -82,7 +84,7 @@ static int iqs620_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm, + } + + if (duty_scale) { +- u8 duty_val = min_t(u64, duty_scale - 1, 0xff); ++ u8 duty_val = duty_scale - 1; + + ret = regmap_write(iqs62x->regmap, IQS620_PWM_DUTY_CYCLE, + duty_val); +diff --git a/drivers/pwm/pwm-rockchip.c b/drivers/pwm/pwm-rockchip.c +index 77c23a2c6d71e..3b8da7b0091b1 100644 +--- a/drivers/pwm/pwm-rockchip.c ++++ b/drivers/pwm/pwm-rockchip.c +@@ -289,6 +289,7 @@ static int rockchip_pwm_probe(struct platform_device *pdev) + struct rockchip_pwm_chip *pc; + struct resource *r; + u32 enable_conf, ctrl; ++ bool enabled; + int ret, count; + + id = of_match_device(rockchip_pwm_dt_ids, &pdev->dev); +@@ -332,9 +333,9 @@ static int rockchip_pwm_probe(struct platform_device *pdev) + return ret; + } + +- ret = clk_prepare(pc->pclk); ++ ret = clk_prepare_enable(pc->pclk); + if (ret) { +- dev_err(&pdev->dev, "Can't prepare APB clk: %d\n", ret); ++ dev_err(&pdev->dev, "Can't prepare enable APB clk: %d\n", ret); + goto err_clk; + } + +@@ -351,23 +352,26 @@ static int rockchip_pwm_probe(struct platform_device *pdev) + pc->chip.of_pwm_n_cells = 3; + } + ++ enable_conf = pc->data->enable_conf; ++ ctrl = readl_relaxed(pc->base + pc->data->regs.ctrl); ++ enabled = (ctrl & enable_conf) == enable_conf; ++ + ret = pwmchip_add(&pc->chip); + if (ret < 0) { +- clk_unprepare(pc->clk); + dev_err(&pdev->dev, "pwmchip_add() failed: %d\n", 
ret); + goto err_pclk; + } + + /* Keep the PWM clk enabled if the PWM appears to be up and running. */ +- enable_conf = pc->data->enable_conf; +- ctrl = readl_relaxed(pc->base + pc->data->regs.ctrl); +- if ((ctrl & enable_conf) != enable_conf) ++ if (!enabled) + clk_disable(pc->clk); + ++ clk_disable(pc->pclk); ++ + return 0; + + err_pclk: +- clk_unprepare(pc->pclk); ++ clk_disable_unprepare(pc->pclk); + err_clk: + clk_disable_unprepare(pc->clk); + +diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c +index 90cb8445f7216..d260c442b788d 100644 +--- a/drivers/regulator/axp20x-regulator.c ++++ b/drivers/regulator/axp20x-regulator.c +@@ -1070,7 +1070,7 @@ static int axp20x_set_dcdc_freq(struct platform_device *pdev, u32 dcdcfreq) + static int axp20x_regulator_parse_dt(struct platform_device *pdev) + { + struct device_node *np, *regulators; +- int ret; ++ int ret = 0; + u32 dcdcfreq = 0; + + np = of_node_get(pdev->dev.parent->of_node); +@@ -1085,13 +1085,12 @@ static int axp20x_regulator_parse_dt(struct platform_device *pdev) + ret = axp20x_set_dcdc_freq(pdev, dcdcfreq); + if (ret < 0) { + dev_err(&pdev->dev, "Error setting dcdc frequency: %d\n", ret); +- return ret; + } +- + of_node_put(regulators); + } + +- return 0; ++ of_node_put(np); ++ return ret; + } + + static int axp20x_set_dcdc_workmode(struct regulator_dev *rdev, int id, u32 workmode) +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c +index 35098dbd32a3c..7b3de8b0b1caf 100644 +--- a/drivers/regulator/core.c ++++ b/drivers/regulator/core.c +@@ -1617,7 +1617,7 @@ static struct regulator *create_regulator(struct regulator_dev *rdev, + const char *supply_name) + { + struct regulator *regulator; +- int err; ++ int err = 0; + + if (dev) { + char buf[REG_STR_SIZE]; +@@ -1663,8 +1663,8 @@ static struct regulator *create_regulator(struct regulator_dev *rdev, + } + } + +- regulator->debugfs = debugfs_create_dir(supply_name, +- rdev->debugfs); ++ if (err != -EEXIST) 
++ regulator->debugfs = debugfs_create_dir(supply_name, rdev->debugfs); + if (!regulator->debugfs) { + rdev_dbg(rdev, "Failed to create debugfs directory\n"); + } else { +diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c +index a22c4b5f64f7e..52e4396d40717 100644 +--- a/drivers/regulator/qcom-rpmh-regulator.c ++++ b/drivers/regulator/qcom-rpmh-regulator.c +@@ -732,6 +732,15 @@ static const struct rpmh_vreg_hw_data pmic5_hfsmps515 = { + .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode, + }; + ++static const struct rpmh_vreg_hw_data pmic5_hfsmps515_1 = { ++ .regulator_type = VRM, ++ .ops = &rpmh_regulator_vrm_ops, ++ .voltage_range = REGULATOR_LINEAR_RANGE(900000, 0, 4, 16000), ++ .n_voltages = 5, ++ .pmic_mode_map = pmic_mode_map_pmic5_smps, ++ .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode, ++}; ++ + static const struct rpmh_vreg_hw_data pmic5_bob = { + .regulator_type = VRM, + .ops = &rpmh_regulator_vrm_bypass_ops, +@@ -874,6 +883,19 @@ static const struct rpmh_vreg_init_data pm8009_vreg_data[] = { + RPMH_VREG("ldo4", "ldo%s4", &pmic5_nldo, "vdd-l4"), + RPMH_VREG("ldo5", "ldo%s5", &pmic5_pldo, "vdd-l5-l6"), + RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo, "vdd-l5-l6"), ++ RPMH_VREG("ldo7", "ldo%s7", &pmic5_pldo_lv, "vdd-l7"), ++ {}, ++}; ++ ++static const struct rpmh_vreg_init_data pm8009_1_vreg_data[] = { ++ RPMH_VREG("smps1", "smp%s1", &pmic5_hfsmps510, "vdd-s1"), ++ RPMH_VREG("smps2", "smp%s2", &pmic5_hfsmps515_1, "vdd-s2"), ++ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo, "vdd-l1"), ++ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo, "vdd-l2"), ++ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo, "vdd-l3"), ++ RPMH_VREG("ldo4", "ldo%s4", &pmic5_nldo, "vdd-l4"), ++ RPMH_VREG("ldo5", "ldo%s5", &pmic5_pldo, "vdd-l5-l6"), ++ RPMH_VREG("ldo6", "ldo%s6", &pmic5_pldo, "vdd-l5-l6"), + RPMH_VREG("ldo7", "ldo%s6", &pmic5_pldo_lv, "vdd-l7"), + {}, + }; +@@ -976,6 +998,10 @@ static const struct of_device_id __maybe_unused 
rpmh_regulator_match_table[] = { + .compatible = "qcom,pm8009-rpmh-regulators", + .data = pm8009_vreg_data, + }, ++ { ++ .compatible = "qcom,pm8009-1-rpmh-regulators", ++ .data = pm8009_1_vreg_data, ++ }, + { + .compatible = "qcom,pm8150-rpmh-regulators", + .data = pm8150_vreg_data, +diff --git a/drivers/regulator/rohm-regulator.c b/drivers/regulator/rohm-regulator.c +index 399002383b28b..5c558b153d55e 100644 +--- a/drivers/regulator/rohm-regulator.c ++++ b/drivers/regulator/rohm-regulator.c +@@ -52,9 +52,12 @@ int rohm_regulator_set_dvs_levels(const struct rohm_dvs_config *dvs, + char *prop; + unsigned int reg, mask, omask, oreg = desc->enable_reg; + +- for (i = 0; i < ROHM_DVS_LEVEL_MAX && !ret; i++) { +- if (dvs->level_map & (1 << i)) { +- switch (i + 1) { ++ for (i = 0; i < ROHM_DVS_LEVEL_VALID_AMOUNT && !ret; i++) { ++ int bit; ++ ++ bit = BIT(i); ++ if (dvs->level_map & bit) { ++ switch (bit) { + case ROHM_DVS_LEVEL_RUN: + prop = "rohm,dvs-run-voltage"; + reg = dvs->run_reg; +diff --git a/drivers/regulator/s5m8767.c b/drivers/regulator/s5m8767.c +index 3fa472127e9a1..7c111bbdc2afa 100644 +--- a/drivers/regulator/s5m8767.c ++++ b/drivers/regulator/s5m8767.c +@@ -544,14 +544,18 @@ static int s5m8767_pmic_dt_parse_pdata(struct platform_device *pdev, + rdata = devm_kcalloc(&pdev->dev, + pdata->num_regulators, sizeof(*rdata), + GFP_KERNEL); +- if (!rdata) ++ if (!rdata) { ++ of_node_put(regulators_np); + return -ENOMEM; ++ } + + rmode = devm_kcalloc(&pdev->dev, + pdata->num_regulators, sizeof(*rmode), + GFP_KERNEL); +- if (!rmode) ++ if (!rmode) { ++ of_node_put(regulators_np); + return -ENOMEM; ++ } + + pdata->regulators = rdata; + pdata->opmode = rmode; +@@ -573,10 +577,13 @@ static int s5m8767_pmic_dt_parse_pdata(struct platform_device *pdev, + "s5m8767,pmic-ext-control", + GPIOD_OUT_HIGH | GPIOD_FLAGS_BIT_NONEXCLUSIVE, + "s5m8767"); +- if (PTR_ERR(rdata->ext_control_gpiod) == -ENOENT) ++ if (PTR_ERR(rdata->ext_control_gpiod) == -ENOENT) { + 
rdata->ext_control_gpiod = NULL; +- else if (IS_ERR(rdata->ext_control_gpiod)) ++ } else if (IS_ERR(rdata->ext_control_gpiod)) { ++ of_node_put(reg_np); ++ of_node_put(regulators_np); + return PTR_ERR(rdata->ext_control_gpiod); ++ } + + rdata->id = i; + rdata->initdata = of_get_regulator_init_data( +diff --git a/drivers/remoteproc/mtk_common.h b/drivers/remoteproc/mtk_common.h +index f2bcc9d9fda65..58388057062a2 100644 +--- a/drivers/remoteproc/mtk_common.h ++++ b/drivers/remoteproc/mtk_common.h +@@ -47,6 +47,7 @@ + + #define MT8192_CORE0_SW_RSTN_CLR 0x10000 + #define MT8192_CORE0_SW_RSTN_SET 0x10004 ++#define MT8192_CORE0_WDT_IRQ 0x10030 + #define MT8192_CORE0_WDT_CFG 0x10034 + + #define SCP_FW_VER_LEN 32 +diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c +index 52fa01d67c18e..00a6e57dfa16b 100644 +--- a/drivers/remoteproc/mtk_scp.c ++++ b/drivers/remoteproc/mtk_scp.c +@@ -184,17 +184,19 @@ static void mt8192_scp_irq_handler(struct mtk_scp *scp) + + scp_to_host = readl(scp->reg_base + MT8192_SCP2APMCU_IPC_SET); + +- if (scp_to_host & MT8192_SCP_IPC_INT_BIT) ++ if (scp_to_host & MT8192_SCP_IPC_INT_BIT) { + scp_ipi_handler(scp); +- else +- scp_wdt_handler(scp, scp_to_host); + +- /* +- * SCP won't send another interrupt until we clear +- * MT8192_SCP2APMCU_IPC. +- */ +- writel(MT8192_SCP_IPC_INT_BIT, +- scp->reg_base + MT8192_SCP2APMCU_IPC_CLR); ++ /* ++ * SCP won't send another interrupt until we clear ++ * MT8192_SCP2APMCU_IPC. 
++ */ ++ writel(MT8192_SCP_IPC_INT_BIT, ++ scp->reg_base + MT8192_SCP2APMCU_IPC_CLR); ++ } else { ++ scp_wdt_handler(scp, scp_to_host); ++ writel(1, scp->reg_base + MT8192_CORE0_WDT_IRQ); ++ } + } + + static irqreturn_t scp_irq_handler(int irq, void *priv) +diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig +index 65ad9d0b47ab1..33e4ecd6c6659 100644 +--- a/drivers/rtc/Kconfig ++++ b/drivers/rtc/Kconfig +@@ -692,6 +692,7 @@ config RTC_DRV_S5M + tristate "Samsung S2M/S5M series" + depends on MFD_SEC_CORE || COMPILE_TEST + select REGMAP_IRQ ++ select REGMAP_I2C + help + If you say yes here you will get support for the + RTC of Samsung S2MPS14 and S5M PMIC series. +@@ -1296,7 +1297,7 @@ config RTC_DRV_OPAL + + config RTC_DRV_ZYNQMP + tristate "Xilinx Zynq Ultrascale+ MPSoC RTC" +- depends on OF ++ depends on OF && HAS_IOMEM + help + If you say yes here you get support for the RTC controller found on + Xilinx Zynq Ultrascale+ MPSoC. +diff --git a/drivers/s390/crypto/zcrypt_api.c b/drivers/s390/crypto/zcrypt_api.c +index f60f9fb252142..3b9eda311c273 100644 +--- a/drivers/s390/crypto/zcrypt_api.c ++++ b/drivers/s390/crypto/zcrypt_api.c +@@ -1438,6 +1438,8 @@ static int icarsamodexpo_ioctl(struct ap_perms *perms, unsigned long arg) + if (rc == -EAGAIN) + tr.again_counter++; + } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX); ++ if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX) ++ rc = -EIO; + if (rc) { + ZCRYPT_DBF(DBF_DEBUG, "ioctl ICARSAMODEXPO rc=%d\n", rc); + return rc; +@@ -1481,6 +1483,8 @@ static int icarsacrt_ioctl(struct ap_perms *perms, unsigned long arg) + if (rc == -EAGAIN) + tr.again_counter++; + } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX); ++ if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX) ++ rc = -EIO; + if (rc) { + ZCRYPT_DBF(DBF_DEBUG, "ioctl ICARSACRT rc=%d\n", rc); + return rc; +@@ -1524,6 +1528,8 @@ static int zsecsendcprb_ioctl(struct ap_perms *perms, unsigned long arg) + if (rc == -EAGAIN) + 
tr.again_counter++; + } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX); ++ if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX) ++ rc = -EIO; + if (rc) + ZCRYPT_DBF(DBF_DEBUG, "ioctl ZSENDCPRB rc=%d status=0x%x\n", + rc, xcRB.status); +@@ -1568,6 +1574,8 @@ static int zsendep11cprb_ioctl(struct ap_perms *perms, unsigned long arg) + if (rc == -EAGAIN) + tr.again_counter++; + } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX); ++ if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX) ++ rc = -EIO; + if (rc) + ZCRYPT_DBF(DBF_DEBUG, "ioctl ZSENDEP11CPRB rc=%d\n", rc); + if (copy_to_user(uxcrb, &xcrb, sizeof(xcrb))) +@@ -1744,6 +1752,8 @@ static long trans_modexpo32(struct ap_perms *perms, struct file *filp, + if (rc == -EAGAIN) + tr.again_counter++; + } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX); ++ if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX) ++ rc = -EIO; + if (rc) + return rc; + return put_user(mex64.outputdatalength, +@@ -1795,6 +1805,8 @@ static long trans_modexpo_crt32(struct ap_perms *perms, struct file *filp, + if (rc == -EAGAIN) + tr.again_counter++; + } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX); ++ if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX) ++ rc = -EIO; + if (rc) + return rc; + return put_user(crt64.outputdatalength, +@@ -1865,6 +1877,8 @@ static long trans_xcRB32(struct ap_perms *perms, struct file *filp, + if (rc == -EAGAIN) + tr.again_counter++; + } while (rc == -EAGAIN && tr.again_counter < TRACK_AGAIN_MAX); ++ if (rc == -EAGAIN && tr.again_counter >= TRACK_AGAIN_MAX) ++ rc = -EIO; + xcRB32.reply_control_blk_length = xcRB64.reply_control_blk_length; + xcRB32.reply_data_length = xcRB64.reply_data_length; + xcRB32.status = xcRB64.status; +diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c +index 5730572b52cd5..54e686dca6dea 100644 +--- a/drivers/s390/virtio/virtio_ccw.c ++++ b/drivers/s390/virtio/virtio_ccw.c +@@ -117,7 +117,7 @@ 
struct virtio_rev_info { + }; + + /* the highest virtio-ccw revision we support */ +-#define VIRTIO_CCW_REV_MAX 1 ++#define VIRTIO_CCW_REV_MAX 2 + + struct virtio_ccw_vq_info { + struct virtqueue *vq; +@@ -952,7 +952,7 @@ static u8 virtio_ccw_get_status(struct virtio_device *vdev) + u8 old_status = vcdev->dma_area->status; + struct ccw1 *ccw; + +- if (vcdev->revision < 1) ++ if (vcdev->revision < 2) + return vcdev->dma_area->status; + + ccw = ccw_device_dma_zalloc(vcdev->cdev, sizeof(*ccw)); +diff --git a/drivers/scsi/bnx2fc/Kconfig b/drivers/scsi/bnx2fc/Kconfig +index 3cf7e08df8093..ecdc0f0f4f4e6 100644 +--- a/drivers/scsi/bnx2fc/Kconfig ++++ b/drivers/scsi/bnx2fc/Kconfig +@@ -5,6 +5,7 @@ config SCSI_BNX2X_FCOE + depends on (IPV6 || IPV6=n) + depends on LIBFC + depends on LIBFCOE ++ depends on MMU + select NETDEVICES + select ETHERNET + select NET_VENDOR_BROADCOM +diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c +index 9746d2f4fcfad..f4a672e549716 100644 +--- a/drivers/scsi/lpfc/lpfc_hbadisc.c ++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c +@@ -1154,13 +1154,14 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + struct lpfc_vport *vport = pmb->vport; + LPFC_MBOXQ_t *sparam_mb; + struct lpfc_dmabuf *sparam_mp; ++ u16 status = pmb->u.mb.mbxStatus; + int rc; + +- if (pmb->u.mb.mbxStatus) +- goto out; +- + mempool_free(pmb, phba->mbox_mem_pool); + ++ if (status) ++ goto out; ++ + /* don't perform discovery for SLI4 loopback diagnostic test */ + if ((phba->sli_rev == LPFC_SLI_REV4) && + !(phba->hba_flag & HBA_FCOE_MODE) && +@@ -1223,12 +1224,10 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + + out: + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, +- "0306 CONFIG_LINK mbxStatus error x%x " +- "HBA state x%x\n", +- pmb->u.mb.mbxStatus, vport->port_state); +-sparam_out: +- mempool_free(pmb, phba->mbox_mem_pool); ++ "0306 CONFIG_LINK mbxStatus error x%x HBA state x%x\n", ++ status, 
vport->port_state); + ++sparam_out: + lpfc_linkdown(phba); + + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, +diff --git a/drivers/scsi/qla2xxx/qla_dbg.c b/drivers/scsi/qla2xxx/qla_dbg.c +index bb7431912d410..144a893e7335b 100644 +--- a/drivers/scsi/qla2xxx/qla_dbg.c ++++ b/drivers/scsi/qla2xxx/qla_dbg.c +@@ -202,6 +202,7 @@ qla24xx_dump_ram(struct qla_hw_data *ha, uint32_t addr, __be32 *ram, + wrt_reg_word(®->mailbox0, MBC_DUMP_RISC_RAM_EXTENDED); + wrt_reg_word(®->mailbox1, LSW(addr)); + wrt_reg_word(®->mailbox8, MSW(addr)); ++ wrt_reg_word(®->mailbox10, 0); + + wrt_reg_word(®->mailbox2, MSW(LSD(dump_dma))); + wrt_reg_word(®->mailbox3, LSW(LSD(dump_dma))); +diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c +index d6325fb2ef73b..4ebd8851a0c9f 100644 +--- a/drivers/scsi/qla2xxx/qla_mbx.c ++++ b/drivers/scsi/qla2xxx/qla_mbx.c +@@ -4277,7 +4277,8 @@ qla2x00_dump_ram(scsi_qla_host_t *vha, dma_addr_t req_dma, uint32_t addr, + if (MSW(addr) || IS_FWI2_CAPABLE(vha->hw)) { + mcp->mb[0] = MBC_DUMP_RISC_RAM_EXTENDED; + mcp->mb[8] = MSW(addr); +- mcp->out_mb = MBX_8|MBX_0; ++ mcp->mb[10] = 0; ++ mcp->out_mb = MBX_10|MBX_8|MBX_0; + } else { + mcp->mb[0] = MBC_DUMP_RISC_RAM; + mcp->out_mb = MBX_0; +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c +index fedb89d4ac3f0..20a6564f87d9f 100644 +--- a/drivers/scsi/sd.c ++++ b/drivers/scsi/sd.c +@@ -709,9 +709,9 @@ static int sd_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, + put_unaligned_be16(spsp, &cdb[2]); + put_unaligned_be32(len, &cdb[6]); + +- ret = scsi_execute_req(sdev, cdb, +- send ? DMA_TO_DEVICE : DMA_FROM_DEVICE, +- buffer, len, NULL, SD_TIMEOUT, sdkp->max_retries, NULL); ++ ret = scsi_execute(sdev, cdb, send ? DMA_TO_DEVICE : DMA_FROM_DEVICE, ++ buffer, len, NULL, NULL, SD_TIMEOUT, sdkp->max_retries, 0, ++ RQF_PM, NULL); + return ret <= 0 ? 
ret : -EIO; + } + #endif /* CONFIG_BLK_SED_OPAL */ +diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c +index cf07b7f935790..87a7274e4632b 100644 +--- a/drivers/scsi/sd_zbc.c ++++ b/drivers/scsi/sd_zbc.c +@@ -688,6 +688,7 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp) + unsigned int nr_zones = sdkp->rev_nr_zones; + u32 max_append; + int ret = 0; ++ unsigned int flags; + + /* + * For all zoned disks, initialize zone append emulation data if not +@@ -720,16 +721,19 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp) + disk->queue->nr_zones == nr_zones) + goto unlock; + ++ flags = memalloc_noio_save(); + sdkp->zone_blocks = zone_blocks; + sdkp->nr_zones = nr_zones; +- sdkp->rev_wp_offset = kvcalloc(nr_zones, sizeof(u32), GFP_NOIO); ++ sdkp->rev_wp_offset = kvcalloc(nr_zones, sizeof(u32), GFP_KERNEL); + if (!sdkp->rev_wp_offset) { + ret = -ENOMEM; ++ memalloc_noio_restore(flags); + goto unlock; + } + + ret = blk_revalidate_disk_zones(disk, sd_zbc_revalidate_zones_cb); + ++ memalloc_noio_restore(flags); + kvfree(sdkp->rev_wp_offset); + sdkp->rev_wp_offset = NULL; + +diff --git a/drivers/soc/aspeed/aspeed-lpc-snoop.c b/drivers/soc/aspeed/aspeed-lpc-snoop.c +index f3d8d53ab84de..dbe5325a324d5 100644 +--- a/drivers/soc/aspeed/aspeed-lpc-snoop.c ++++ b/drivers/soc/aspeed/aspeed-lpc-snoop.c +@@ -11,6 +11,7 @@ + */ + + #include ++#include + #include + #include + #include +@@ -67,6 +68,7 @@ struct aspeed_lpc_snoop_channel { + struct aspeed_lpc_snoop { + struct regmap *regmap; + int irq; ++ struct clk *clk; + struct aspeed_lpc_snoop_channel chan[NUM_SNOOP_CHANNELS]; + }; + +@@ -282,22 +284,42 @@ static int aspeed_lpc_snoop_probe(struct platform_device *pdev) + return -ENODEV; + } + ++ lpc_snoop->clk = devm_clk_get(dev, NULL); ++ if (IS_ERR(lpc_snoop->clk)) { ++ rc = PTR_ERR(lpc_snoop->clk); ++ if (rc != -EPROBE_DEFER) ++ dev_err(dev, "couldn't get clock\n"); ++ return rc; ++ } ++ rc = clk_prepare_enable(lpc_snoop->clk); ++ if (rc) { ++ dev_err(dev, "couldn't 
enable clock\n"); ++ return rc; ++ } ++ + rc = aspeed_lpc_snoop_config_irq(lpc_snoop, pdev); + if (rc) +- return rc; ++ goto err; + + rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 0, port); + if (rc) +- return rc; ++ goto err; + + /* Configuration of 2nd snoop channel port is optional */ + if (of_property_read_u32_index(dev->of_node, "snoop-ports", + 1, &port) == 0) { + rc = aspeed_lpc_enable_snoop(lpc_snoop, dev, 1, port); +- if (rc) ++ if (rc) { + aspeed_lpc_disable_snoop(lpc_snoop, 0); ++ goto err; ++ } + } + ++ return 0; ++ ++err: ++ clk_disable_unprepare(lpc_snoop->clk); ++ + return rc; + } + +@@ -309,6 +331,8 @@ static int aspeed_lpc_snoop_remove(struct platform_device *pdev) + aspeed_lpc_disable_snoop(lpc_snoop, 0); + aspeed_lpc_disable_snoop(lpc_snoop, 1); + ++ clk_disable_unprepare(lpc_snoop->clk); ++ + return 0; + } + +diff --git a/drivers/soc/qcom/ocmem.c b/drivers/soc/qcom/ocmem.c +index 7f9e9944d1eae..f1875dc31ae2c 100644 +--- a/drivers/soc/qcom/ocmem.c ++++ b/drivers/soc/qcom/ocmem.c +@@ -189,6 +189,7 @@ struct ocmem *of_get_ocmem(struct device *dev) + { + struct platform_device *pdev; + struct device_node *devnode; ++ struct ocmem *ocmem; + + devnode = of_parse_phandle(dev->of_node, "sram", 0); + if (!devnode || !devnode->parent) { +@@ -202,7 +203,12 @@ struct ocmem *of_get_ocmem(struct device *dev) + return ERR_PTR(-EPROBE_DEFER); + } + +- return platform_get_drvdata(pdev); ++ ocmem = platform_get_drvdata(pdev); ++ if (!ocmem) { ++ dev_err(dev, "Cannot get ocmem\n"); ++ return ERR_PTR(-ENODEV); ++ } ++ return ocmem; + } + EXPORT_SYMBOL(of_get_ocmem); + +diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c +index b44ede48decc0..e0620416e5743 100644 +--- a/drivers/soc/qcom/socinfo.c ++++ b/drivers/soc/qcom/socinfo.c +@@ -280,7 +280,7 @@ static int qcom_show_pmic_model(struct seq_file *seq, void *p) + if (model < 0) + return -EINVAL; + +- if (model <= ARRAY_SIZE(pmic_models) && pmic_models[model]) ++ if (model < ARRAY_SIZE(pmic_models) 
&& pmic_models[model]) + seq_printf(seq, "%s\n", pmic_models[model]); + else + seq_printf(seq, "unknown (%d)\n", model); +diff --git a/drivers/soc/samsung/exynos-asv.c b/drivers/soc/samsung/exynos-asv.c +index 8abf4dfaa5c59..5daeadc363829 100644 +--- a/drivers/soc/samsung/exynos-asv.c ++++ b/drivers/soc/samsung/exynos-asv.c +@@ -119,11 +119,6 @@ static int exynos_asv_probe(struct platform_device *pdev) + u32 product_id = 0; + int ret, i; + +- cpu_dev = get_cpu_device(0); +- ret = dev_pm_opp_get_opp_count(cpu_dev); +- if (ret < 0) +- return -EPROBE_DEFER; +- + asv = devm_kzalloc(&pdev->dev, sizeof(*asv), GFP_KERNEL); + if (!asv) + return -ENOMEM; +@@ -134,7 +129,13 @@ static int exynos_asv_probe(struct platform_device *pdev) + return PTR_ERR(asv->chipid_regmap); + } + +- regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_PRO_ID, &product_id); ++ ret = regmap_read(asv->chipid_regmap, EXYNOS_CHIPID_REG_PRO_ID, ++ &product_id); ++ if (ret < 0) { ++ dev_err(&pdev->dev, "Cannot read revision from ChipID: %d\n", ++ ret); ++ return -ENODEV; ++ } + + switch (product_id & EXYNOS_MASK) { + case 0xE5422000: +@@ -144,6 +145,11 @@ static int exynos_asv_probe(struct platform_device *pdev) + return -ENODEV; + } + ++ cpu_dev = get_cpu_device(0); ++ ret = dev_pm_opp_get_opp_count(cpu_dev); ++ if (ret < 0) ++ return -EPROBE_DEFER; ++ + ret = of_property_read_u32(pdev->dev.of_node, "samsung,asv-bin", + &asv->of_bin); + if (ret < 0) +diff --git a/drivers/soc/ti/pm33xx.c b/drivers/soc/ti/pm33xx.c +index d2f5e7001a93c..dc21aa855a458 100644 +--- a/drivers/soc/ti/pm33xx.c ++++ b/drivers/soc/ti/pm33xx.c +@@ -536,7 +536,7 @@ static int am33xx_pm_probe(struct platform_device *pdev) + + ret = am33xx_push_sram_idle(); + if (ret) +- goto err_free_sram; ++ goto err_unsetup_rtc; + + am33xx_pm_set_ipc_ops(); + +@@ -566,6 +566,9 @@ static int am33xx_pm_probe(struct platform_device *pdev) + + err_put_wkup_m3_ipc: + wkup_m3_ipc_put(m3_ipc); ++err_unsetup_rtc: ++ iounmap(rtc_base_virt); ++ 
clk_put(rtc_fck); + err_free_sram: + am33xx_pm_free_sram(); + pm33xx_dev = NULL; +diff --git a/drivers/soundwire/bus.c b/drivers/soundwire/bus.c +index 8eaf31e766773..1fe786855095a 100644 +--- a/drivers/soundwire/bus.c ++++ b/drivers/soundwire/bus.c +@@ -405,10 +405,11 @@ sdw_nwrite_no_pm(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) + return sdw_transfer(slave->bus, &msg); + } + +-static int sdw_write_no_pm(struct sdw_slave *slave, u32 addr, u8 value) ++int sdw_write_no_pm(struct sdw_slave *slave, u32 addr, u8 value) + { + return sdw_nwrite_no_pm(slave, addr, 1, &value); + } ++EXPORT_SYMBOL(sdw_write_no_pm); + + static int + sdw_bread_no_pm(struct sdw_bus *bus, u16 dev_num, u32 addr) +@@ -476,8 +477,7 @@ int sdw_bwrite_no_pm_unlocked(struct sdw_bus *bus, u16 dev_num, u32 addr, u8 val + } + EXPORT_SYMBOL(sdw_bwrite_no_pm_unlocked); + +-static int +-sdw_read_no_pm(struct sdw_slave *slave, u32 addr) ++int sdw_read_no_pm(struct sdw_slave *slave, u32 addr) + { + u8 buf; + int ret; +@@ -488,6 +488,19 @@ sdw_read_no_pm(struct sdw_slave *slave, u32 addr) + else + return buf; + } ++EXPORT_SYMBOL(sdw_read_no_pm); ++ ++static int sdw_update_no_pm(struct sdw_slave *slave, u32 addr, u8 mask, u8 val) ++{ ++ int tmp; ++ ++ tmp = sdw_read_no_pm(slave, addr); ++ if (tmp < 0) ++ return tmp; ++ ++ tmp = (tmp & ~mask) | val; ++ return sdw_write_no_pm(slave, addr, tmp); ++} + + /** + * sdw_nread() - Read "n" contiguous SDW Slave registers +@@ -500,16 +513,16 @@ int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) + { + int ret; + +- ret = pm_runtime_get_sync(slave->bus->dev); ++ ret = pm_runtime_get_sync(&slave->dev); + if (ret < 0 && ret != -EACCES) { +- pm_runtime_put_noidle(slave->bus->dev); ++ pm_runtime_put_noidle(&slave->dev); + return ret; + } + + ret = sdw_nread_no_pm(slave, addr, count, val); + +- pm_runtime_mark_last_busy(slave->bus->dev); +- pm_runtime_put(slave->bus->dev); ++ pm_runtime_mark_last_busy(&slave->dev); ++ 
pm_runtime_put(&slave->dev); + + return ret; + } +@@ -526,16 +539,16 @@ int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val) + { + int ret; + +- ret = pm_runtime_get_sync(slave->bus->dev); ++ ret = pm_runtime_get_sync(&slave->dev); + if (ret < 0 && ret != -EACCES) { +- pm_runtime_put_noidle(slave->bus->dev); ++ pm_runtime_put_noidle(&slave->dev); + return ret; + } + + ret = sdw_nwrite_no_pm(slave, addr, count, val); + +- pm_runtime_mark_last_busy(slave->bus->dev); +- pm_runtime_put(slave->bus->dev); ++ pm_runtime_mark_last_busy(&slave->dev); ++ pm_runtime_put(&slave->dev); + + return ret; + } +@@ -1210,7 +1223,7 @@ static int sdw_slave_set_frequency(struct sdw_slave *slave) + } + scale_index++; + +- ret = sdw_write(slave, SDW_SCP_BUS_CLOCK_BASE, base); ++ ret = sdw_write_no_pm(slave, SDW_SCP_BUS_CLOCK_BASE, base); + if (ret < 0) { + dev_err(&slave->dev, + "SDW_SCP_BUS_CLOCK_BASE write failed:%d\n", ret); +@@ -1218,13 +1231,13 @@ static int sdw_slave_set_frequency(struct sdw_slave *slave) + } + + /* initialize scale for both banks */ +- ret = sdw_write(slave, SDW_SCP_BUSCLOCK_SCALE_B0, scale_index); ++ ret = sdw_write_no_pm(slave, SDW_SCP_BUSCLOCK_SCALE_B0, scale_index); + if (ret < 0) { + dev_err(&slave->dev, + "SDW_SCP_BUSCLOCK_SCALE_B0 write failed:%d\n", ret); + return ret; + } +- ret = sdw_write(slave, SDW_SCP_BUSCLOCK_SCALE_B1, scale_index); ++ ret = sdw_write_no_pm(slave, SDW_SCP_BUSCLOCK_SCALE_B1, scale_index); + if (ret < 0) + dev_err(&slave->dev, + "SDW_SCP_BUSCLOCK_SCALE_B1 write failed:%d\n", ret); +@@ -1256,7 +1269,7 @@ static int sdw_initialize_slave(struct sdw_slave *slave) + val = slave->prop.scp_int1_mask; + + /* Enable SCP interrupts */ +- ret = sdw_update(slave, SDW_SCP_INTMASK1, val, val); ++ ret = sdw_update_no_pm(slave, SDW_SCP_INTMASK1, val, val); + if (ret < 0) { + dev_err(slave->bus->dev, + "SDW_SCP_INTMASK1 write failed:%d\n", ret); +@@ -1271,7 +1284,7 @@ static int sdw_initialize_slave(struct sdw_slave *slave) + val = 
prop->dp0_prop->imp_def_interrupts; + val |= SDW_DP0_INT_PORT_READY | SDW_DP0_INT_BRA_FAILURE; + +- ret = sdw_update(slave, SDW_DP0_INTMASK, val, val); ++ ret = sdw_update_no_pm(slave, SDW_DP0_INTMASK, val, val); + if (ret < 0) + dev_err(slave->bus->dev, + "SDW_DP0_INTMASK read failed:%d\n", ret); +@@ -1433,7 +1446,7 @@ static int sdw_handle_slave_alerts(struct sdw_slave *slave) + ret = pm_runtime_get_sync(&slave->dev); + if (ret < 0 && ret != -EACCES) { + dev_err(&slave->dev, "Failed to resume device: %d\n", ret); +- pm_runtime_put_noidle(slave->bus->dev); ++ pm_runtime_put_noidle(&slave->dev); + return ret; + } + +diff --git a/drivers/soundwire/cadence_master.c b/drivers/soundwire/cadence_master.c +index 9fa55164354a2..580660599f461 100644 +--- a/drivers/soundwire/cadence_master.c ++++ b/drivers/soundwire/cadence_master.c +@@ -484,10 +484,10 @@ cdns_fill_msg_resp(struct sdw_cdns *cdns, + if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) { + no_ack = 1; + dev_dbg_ratelimited(cdns->dev, "Msg Ack not received\n"); +- if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) { +- nack = 1; +- dev_err_ratelimited(cdns->dev, "Msg NACK received\n"); +- } ++ } ++ if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) { ++ nack = 1; ++ dev_err_ratelimited(cdns->dev, "Msg NACK received\n"); + } + } + +diff --git a/drivers/soundwire/intel_init.c b/drivers/soundwire/intel_init.c +index cabdadb09a1bb..bc8520eb385ec 100644 +--- a/drivers/soundwire/intel_init.c ++++ b/drivers/soundwire/intel_init.c +@@ -405,11 +405,12 @@ int sdw_intel_acpi_scan(acpi_handle *parent_handle, + { + acpi_status status; + ++ info->handle = NULL; + status = acpi_walk_namespace(ACPI_TYPE_DEVICE, + parent_handle, 1, + sdw_intel_acpi_cb, + NULL, info, NULL); +- if (ACPI_FAILURE(status)) ++ if (ACPI_FAILURE(status) || info->handle == NULL) + return -ENODEV; + + return sdw_intel_scan_controller(info); +diff --git a/drivers/spi/spi-atmel.c b/drivers/spi/spi-atmel.c +index 0e5e64a80848d..1db43cbead575 100644 +--- 
a/drivers/spi/spi-atmel.c ++++ b/drivers/spi/spi-atmel.c +@@ -1590,7 +1590,7 @@ static int atmel_spi_probe(struct platform_device *pdev) + if (ret == 0) { + as->use_dma = true; + } else if (ret == -EPROBE_DEFER) { +- return ret; ++ goto out_unmap_regs; + } + } else if (as->caps.has_pdc_support) { + as->use_pdc = true; +diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c +index ba7d40c2922f7..826b01f346246 100644 +--- a/drivers/spi/spi-cadence-quadspi.c ++++ b/drivers/spi/spi-cadence-quadspi.c +@@ -461,7 +461,7 @@ static int cqspi_read_setup(struct cqspi_flash_pdata *f_pdata, + /* Setup dummy clock cycles */ + dummy_clk = op->dummy.nbytes * 8; + if (dummy_clk > CQSPI_DUMMY_CLKS_MAX) +- dummy_clk = CQSPI_DUMMY_CLKS_MAX; ++ return -EOPNOTSUPP; + + if (dummy_clk) + reg |= (dummy_clk & CQSPI_REG_RD_INSTR_DUMMY_MASK) +diff --git a/drivers/spi/spi-dw-bt1.c b/drivers/spi/spi-dw-bt1.c +index c279b7891e3ac..bc9d5eab3c589 100644 +--- a/drivers/spi/spi-dw-bt1.c ++++ b/drivers/spi/spi-dw-bt1.c +@@ -84,7 +84,7 @@ static void dw_spi_bt1_dirmap_copy_from_map(void *to, void __iomem *from, size_t + if (shift) { + chunk = min_t(size_t, 4 - shift, len); + data = readl_relaxed(from - shift); +- memcpy(to, &data + shift, chunk); ++ memcpy(to, (char *)&data + shift, chunk); + from += chunk; + to += chunk; + len -= chunk; +diff --git a/drivers/spi/spi-fsl-spi.c b/drivers/spi/spi-fsl-spi.c +index 6d8e0a05a5355..e4a8d203f9408 100644 +--- a/drivers/spi/spi-fsl-spi.c ++++ b/drivers/spi/spi-fsl-spi.c +@@ -695,7 +695,7 @@ static void fsl_spi_cs_control(struct spi_device *spi, bool on) + + if (WARN_ON_ONCE(!pinfo->immr_spi_cs)) + return; +- iowrite32be(on ? SPI_BOOT_SEL_BIT : 0, pinfo->immr_spi_cs); ++ iowrite32be(on ? 
0 : SPI_BOOT_SEL_BIT, pinfo->immr_spi_cs); + } + } + +diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c +index 8df5e973404f0..831a38920fa98 100644 +--- a/drivers/spi/spi-imx.c ++++ b/drivers/spi/spi-imx.c +@@ -1713,7 +1713,7 @@ static int spi_imx_probe(struct platform_device *pdev) + master->dev.of_node = pdev->dev.of_node; + ret = spi_bitbang_start(&spi_imx->bitbang); + if (ret) { +- dev_err(&pdev->dev, "bitbang start failed with %d\n", ret); ++ dev_err_probe(&pdev->dev, ret, "bitbang start failed\n"); + goto out_bitbang_start; + } + +diff --git a/drivers/spi/spi-pxa2xx-pci.c b/drivers/spi/spi-pxa2xx-pci.c +index f236e3034cf85..aafac128bb5f1 100644 +--- a/drivers/spi/spi-pxa2xx-pci.c ++++ b/drivers/spi/spi-pxa2xx-pci.c +@@ -21,7 +21,8 @@ enum { + PORT_BSW1, + PORT_BSW2, + PORT_CE4100, +- PORT_LPT, ++ PORT_LPT0, ++ PORT_LPT1, + }; + + struct pxa_spi_info { +@@ -57,8 +58,10 @@ static struct dw_dma_slave bsw1_rx_param = { .src_id = 7 }; + static struct dw_dma_slave bsw2_tx_param = { .dst_id = 8 }; + static struct dw_dma_slave bsw2_rx_param = { .src_id = 9 }; + +-static struct dw_dma_slave lpt_tx_param = { .dst_id = 0 }; +-static struct dw_dma_slave lpt_rx_param = { .src_id = 1 }; ++static struct dw_dma_slave lpt1_tx_param = { .dst_id = 0 }; ++static struct dw_dma_slave lpt1_rx_param = { .src_id = 1 }; ++static struct dw_dma_slave lpt0_tx_param = { .dst_id = 2 }; ++static struct dw_dma_slave lpt0_rx_param = { .src_id = 3 }; + + static bool lpss_dma_filter(struct dma_chan *chan, void *param) + { +@@ -185,12 +188,19 @@ static struct pxa_spi_info spi_info_configs[] = { + .num_chipselect = 1, + .max_clk_rate = 50000000, + }, +- [PORT_LPT] = { ++ [PORT_LPT0] = { + .type = LPSS_LPT_SSP, + .port_id = 0, + .setup = lpss_spi_setup, +- .tx_param = &lpt_tx_param, +- .rx_param = &lpt_rx_param, ++ .tx_param = &lpt0_tx_param, ++ .rx_param = &lpt0_rx_param, ++ }, ++ [PORT_LPT1] = { ++ .type = LPSS_LPT_SSP, ++ .port_id = 1, ++ .setup = lpss_spi_setup, ++ .tx_param = 
&lpt1_tx_param, ++ .rx_param = &lpt1_rx_param, + }, + }; + +@@ -285,8 +295,9 @@ static const struct pci_device_id pxa2xx_spi_pci_devices[] = { + { PCI_VDEVICE(INTEL, 0x2290), PORT_BSW1 }, + { PCI_VDEVICE(INTEL, 0x22ac), PORT_BSW2 }, + { PCI_VDEVICE(INTEL, 0x2e6a), PORT_CE4100 }, +- { PCI_VDEVICE(INTEL, 0x9ce6), PORT_LPT }, +- { }, ++ { PCI_VDEVICE(INTEL, 0x9ce5), PORT_LPT0 }, ++ { PCI_VDEVICE(INTEL, 0x9ce6), PORT_LPT1 }, ++ { } + }; + MODULE_DEVICE_TABLE(pci, pxa2xx_spi_pci_devices); + +diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c +index 6017209c6d2f7..6eeb39669a866 100644 +--- a/drivers/spi/spi-stm32.c ++++ b/drivers/spi/spi-stm32.c +@@ -1677,6 +1677,10 @@ static int stm32_spi_transfer_one(struct spi_master *master, + struct stm32_spi *spi = spi_master_get_devdata(master); + int ret; + ++ /* Don't do anything on 0 bytes transfers */ ++ if (transfer->len == 0) ++ return 0; ++ + spi->tx_buf = transfer->tx_buf; + spi->rx_buf = transfer->rx_buf; + spi->tx_len = spi->tx_buf ? 
transfer->len : 0; +diff --git a/drivers/spi/spi-synquacer.c b/drivers/spi/spi-synquacer.c +index 8cdca6ab80989..ea706d9629cb1 100644 +--- a/drivers/spi/spi-synquacer.c ++++ b/drivers/spi/spi-synquacer.c +@@ -490,6 +490,10 @@ static void synquacer_spi_set_cs(struct spi_device *spi, bool enable) + val &= ~(SYNQUACER_HSSPI_DMPSEL_CS_MASK << + SYNQUACER_HSSPI_DMPSEL_CS_SHIFT); + val |= spi->chip_select << SYNQUACER_HSSPI_DMPSEL_CS_SHIFT; ++ ++ if (!enable) ++ val |= SYNQUACER_HSSPI_DMSTOP_STOP; ++ + writel(val, sspi->regs + SYNQUACER_HSSPI_REG_DMSTART); + } + +diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c +index 7694e1ae5b0b2..4257a2d368f71 100644 +--- a/drivers/spi/spi.c ++++ b/drivers/spi/spi.c +@@ -1259,7 +1259,7 @@ static int spi_transfer_one_message(struct spi_controller *ctlr, + ptp_read_system_prets(xfer->ptp_sts); + } + +- if (xfer->tx_buf || xfer->rx_buf) { ++ if ((xfer->tx_buf || xfer->rx_buf) && xfer->len) { + reinit_completion(&ctlr->xfer_completion); + + fallback_pio: +diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c +index de844b4121107..bbbd311eda030 100644 +--- a/drivers/spmi/spmi-pmic-arb.c ++++ b/drivers/spmi/spmi-pmic-arb.c +@@ -1,6 +1,6 @@ + // SPDX-License-Identifier: GPL-2.0-only + /* +- * Copyright (c) 2012-2015, 2017, The Linux Foundation. All rights reserved. ++ * Copyright (c) 2012-2015, 2017, 2021, The Linux Foundation. All rights reserved. 
+ */ + #include + #include +@@ -505,8 +505,7 @@ static void cleanup_irq(struct spmi_pmic_arb *pmic_arb, u16 apid, int id) + static void periph_interrupt(struct spmi_pmic_arb *pmic_arb, u16 apid) + { + unsigned int irq; +- u32 status; +- int id; ++ u32 status, id; + u8 sid = (pmic_arb->apid_data[apid].ppid >> 8) & 0xF; + u8 per = pmic_arb->apid_data[apid].ppid & 0xFF; + +diff --git a/drivers/staging/gdm724x/gdm_usb.c b/drivers/staging/gdm724x/gdm_usb.c +index dc4da66c3695b..54bdb64f52e88 100644 +--- a/drivers/staging/gdm724x/gdm_usb.c ++++ b/drivers/staging/gdm724x/gdm_usb.c +@@ -56,20 +56,24 @@ static int gdm_usb_recv(void *priv_dev, + + static int request_mac_address(struct lte_udev *udev) + { +- u8 buf[16] = {0,}; +- struct hci_packet *hci = (struct hci_packet *)buf; ++ struct hci_packet *hci; + struct usb_device *usbdev = udev->usbdev; + int actual; + int ret = -1; + ++ hci = kmalloc(struct_size(hci, data, 1), GFP_KERNEL); ++ if (!hci) ++ return -ENOMEM; ++ + hci->cmd_evt = gdm_cpu_to_dev16(udev->gdm_ed, LTE_GET_INFORMATION); + hci->len = gdm_cpu_to_dev16(udev->gdm_ed, 1); + hci->data[0] = MAC_ADDRESS; + +- ret = usb_bulk_msg(usbdev, usb_sndbulkpipe(usbdev, 2), buf, 5, ++ ret = usb_bulk_msg(usbdev, usb_sndbulkpipe(usbdev, 2), hci, 5, + &actual, 1000); + + udev->request_mac_addr = 1; ++ kfree(hci); + + return ret; + } +diff --git a/drivers/staging/media/allegro-dvt/allegro-core.c b/drivers/staging/media/allegro-dvt/allegro-core.c +index 9f718f43282bc..640451134072b 100644 +--- a/drivers/staging/media/allegro-dvt/allegro-core.c ++++ b/drivers/staging/media/allegro-dvt/allegro-core.c +@@ -2483,8 +2483,6 @@ static int allegro_open(struct file *file) + INIT_LIST_HEAD(&channel->buffers_reference); + INIT_LIST_HEAD(&channel->buffers_intermediate); + +- list_add(&channel->list, &dev->channels); +- + channel->fh.m2m_ctx = v4l2_m2m_ctx_init(dev->m2m_dev, channel, + allegro_queue_init); + +@@ -2493,6 +2491,7 @@ static int allegro_open(struct file *file) + goto error; + } + 
++ list_add(&channel->list, &dev->channels); + file->private_data = &channel->fh; + v4l2_fh_add(&channel->fh); + +diff --git a/drivers/staging/media/atomisp/pci/atomisp_subdev.c b/drivers/staging/media/atomisp/pci/atomisp_subdev.c +index 52b9fb18c87f0..dcc2dd981ca60 100644 +--- a/drivers/staging/media/atomisp/pci/atomisp_subdev.c ++++ b/drivers/staging/media/atomisp/pci/atomisp_subdev.c +@@ -349,12 +349,20 @@ static int isp_subdev_get_selection(struct v4l2_subdev *sd, + return 0; + } + +-static char *atomisp_pad_str[] = { "ATOMISP_SUBDEV_PAD_SINK", +- "ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE", +- "ATOMISP_SUBDEV_PAD_SOURCE_VF", +- "ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW", +- "ATOMISP_SUBDEV_PAD_SOURCE_VIDEO" +- }; ++static const char *atomisp_pad_str(unsigned int pad) ++{ ++ static const char *const pad_str[] = { ++ "ATOMISP_SUBDEV_PAD_SINK", ++ "ATOMISP_SUBDEV_PAD_SOURCE_CAPTURE", ++ "ATOMISP_SUBDEV_PAD_SOURCE_VF", ++ "ATOMISP_SUBDEV_PAD_SOURCE_PREVIEW", ++ "ATOMISP_SUBDEV_PAD_SOURCE_VIDEO", ++ }; ++ ++ if (pad >= ARRAY_SIZE(pad_str)) ++ return "ATOMISP_INVALID_PAD"; ++ return pad_str[pad]; ++} + + int atomisp_subdev_set_selection(struct v4l2_subdev *sd, + struct v4l2_subdev_pad_config *cfg, +@@ -378,7 +386,7 @@ int atomisp_subdev_set_selection(struct v4l2_subdev *sd, + + dev_dbg(isp->dev, + "sel: pad %s tgt %s l %d t %d w %d h %d which %s f 0x%8.8x\n", +- atomisp_pad_str[pad], target == V4L2_SEL_TGT_CROP ++ atomisp_pad_str(pad), target == V4L2_SEL_TGT_CROP + ? "V4L2_SEL_TGT_CROP" : "V4L2_SEL_TGT_COMPOSE", + r->left, r->top, r->width, r->height, + which == V4L2_SUBDEV_FORMAT_TRY ? "V4L2_SUBDEV_FORMAT_TRY" +@@ -612,7 +620,7 @@ void atomisp_subdev_set_ffmt(struct v4l2_subdev *sd, + enum atomisp_input_stream_id stream_id; + + dev_dbg(isp->dev, "ffmt: pad %s w %d h %d code 0x%8.8x which %s\n", +- atomisp_pad_str[pad], ffmt->width, ffmt->height, ffmt->code, ++ atomisp_pad_str(pad), ffmt->width, ffmt->height, ffmt->code, + which == V4L2_SUBDEV_FORMAT_TRY ? 
"V4L2_SUBDEV_FORMAT_TRY" + : "V4L2_SUBDEV_FORMAT_ACTIVE"); + +diff --git a/drivers/staging/media/atomisp/pci/hmm/hmm.c b/drivers/staging/media/atomisp/pci/hmm/hmm.c +index e0eaff0f8a228..6a5ee46070898 100644 +--- a/drivers/staging/media/atomisp/pci/hmm/hmm.c ++++ b/drivers/staging/media/atomisp/pci/hmm/hmm.c +@@ -269,7 +269,7 @@ ia_css_ptr hmm_alloc(size_t bytes, enum hmm_bo_type type, + hmm_set(bo->start, 0, bytes); + + dev_dbg(atomisp_dev, +- "%s: pages: 0x%08x (%ld bytes), type: %d from highmem %d, user ptr %p, cached %d\n", ++ "%s: pages: 0x%08x (%zu bytes), type: %d from highmem %d, user ptr %p, cached %d\n", + __func__, bo->start, bytes, type, from_highmem, userptr, cached); + + return bo->start; +diff --git a/drivers/staging/media/imx/imx-media-csc-scaler.c b/drivers/staging/media/imx/imx-media-csc-scaler.c +index fab1155a5958c..63a0204502a8b 100644 +--- a/drivers/staging/media/imx/imx-media-csc-scaler.c ++++ b/drivers/staging/media/imx/imx-media-csc-scaler.c +@@ -869,11 +869,7 @@ void imx_media_csc_scaler_device_unregister(struct imx_media_video_dev *vdev) + struct ipu_csc_scaler_priv *priv = vdev_to_priv(vdev); + struct video_device *vfd = priv->vdev.vfd; + +- mutex_lock(&priv->mutex); +- + video_unregister_device(vfd); +- +- mutex_unlock(&priv->mutex); + } + + struct imx_media_video_dev * +diff --git a/drivers/staging/media/imx/imx-media-dev.c b/drivers/staging/media/imx/imx-media-dev.c +index 6d2205461e565..338b8bd0bb076 100644 +--- a/drivers/staging/media/imx/imx-media-dev.c ++++ b/drivers/staging/media/imx/imx-media-dev.c +@@ -53,6 +53,7 @@ static int imx6_media_probe_complete(struct v4l2_async_notifier *notifier) + imxmd->m2m_vdev = imx_media_csc_scaler_device_init(imxmd); + if (IS_ERR(imxmd->m2m_vdev)) { + ret = PTR_ERR(imxmd->m2m_vdev); ++ imxmd->m2m_vdev = NULL; + goto unlock; + } + +@@ -107,10 +108,14 @@ static int imx_media_remove(struct platform_device *pdev) + + v4l2_info(&imxmd->v4l2_dev, "Removing imx-media\n"); + ++ if (imxmd->m2m_vdev) { ++ 
imx_media_csc_scaler_device_unregister(imxmd->m2m_vdev); ++ imxmd->m2m_vdev = NULL; ++ } ++ + v4l2_async_notifier_unregister(&imxmd->notifier); + imx_media_unregister_ipu_internal_subdevs(imxmd); + v4l2_async_notifier_cleanup(&imxmd->notifier); +- imx_media_csc_scaler_device_unregister(imxmd->m2m_vdev); + media_device_unregister(&imxmd->md); + v4l2_device_unregister(&imxmd->v4l2_dev); + media_device_cleanup(&imxmd->md); +diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c +index a3f3df9017046..ac52b1daf9914 100644 +--- a/drivers/staging/media/imx/imx7-media-csi.c ++++ b/drivers/staging/media/imx/imx7-media-csi.c +@@ -499,6 +499,7 @@ static int imx7_csi_pad_link_validate(struct v4l2_subdev *sd, + struct v4l2_subdev_format *sink_fmt) + { + struct imx7_csi *csi = v4l2_get_subdevdata(sd); ++ struct media_entity *src; + struct media_pad *pad; + int ret; + +@@ -509,11 +510,21 @@ static int imx7_csi_pad_link_validate(struct v4l2_subdev *sd, + if (!csi->src_sd) + return -EPIPE; + ++ src = &csi->src_sd->entity; ++ ++ /* ++ * if the source is neither a CSI MUX or CSI-2 get the one directly ++ * upstream from this CSI ++ */ ++ if (src->function != MEDIA_ENT_F_VID_IF_BRIDGE && ++ src->function != MEDIA_ENT_F_VID_MUX) ++ src = &csi->sd.entity; ++ + /* +- * find the entity that is selected by the CSI mux. This is needed ++ * find the entity that is selected by the source. This is needed + * to distinguish between a parallel or CSI-2 pipeline. 
+ */ +- pad = imx_media_pipeline_pad(&csi->src_sd->entity, 0, 0, true); ++ pad = imx_media_pipeline_pad(src, 0, 0, true); + if (!pad) + return -ENODEV; + +@@ -1164,12 +1175,12 @@ static int imx7_csi_notify_bound(struct v4l2_async_notifier *notifier, + struct imx7_csi *csi = imx7_csi_notifier_to_dev(notifier); + struct media_pad *sink = &csi->sd.entity.pads[IMX7_CSI_PAD_SINK]; + +- /* The bound subdev must always be the CSI mux */ +- if (WARN_ON(sd->entity.function != MEDIA_ENT_F_VID_MUX)) +- return -ENXIO; +- +- /* Mark it as such via its group id */ +- sd->grp_id = IMX_MEDIA_GRP_ID_CSI_MUX; ++ /* ++ * If the subdev is a video mux, it must be one of the CSI ++ * muxes. Mark it as such via its group id. ++ */ ++ if (sd->entity.function == MEDIA_ENT_F_VID_MUX) ++ sd->grp_id = IMX_MEDIA_GRP_ID_CSI_MUX; + + return v4l2_create_fwnode_links_to_pad(sd, sink); + } +diff --git a/drivers/staging/mt7621-dma/Makefile b/drivers/staging/mt7621-dma/Makefile +index 66da1bf10c32e..23256d1286f3e 100644 +--- a/drivers/staging/mt7621-dma/Makefile ++++ b/drivers/staging/mt7621-dma/Makefile +@@ -1,4 +1,4 @@ + # SPDX-License-Identifier: GPL-2.0 +-obj-$(CONFIG_MTK_HSDMA) += mtk-hsdma.o ++obj-$(CONFIG_MTK_HSDMA) += hsdma-mt7621.o + + ccflags-y += -I$(srctree)/drivers/dma +diff --git a/drivers/staging/mt7621-dma/hsdma-mt7621.c b/drivers/staging/mt7621-dma/hsdma-mt7621.c +new file mode 100644 +index 0000000000000..28f1c2446be16 +--- /dev/null ++++ b/drivers/staging/mt7621-dma/hsdma-mt7621.c +@@ -0,0 +1,760 @@ ++// SPDX-License-Identifier: GPL-2.0+ ++/* ++ * Copyright (C) 2015, Michael Lee ++ * MTK HSDMA support ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++ ++#include "virt-dma.h" ++ ++#define HSDMA_BASE_OFFSET 0x800 ++ ++#define HSDMA_REG_TX_BASE 0x00 ++#define HSDMA_REG_TX_CNT 0x04 ++#define HSDMA_REG_TX_CTX 0x08 ++#define HSDMA_REG_TX_DTX 0x0c ++#define HSDMA_REG_RX_BASE 0x100 
++#define HSDMA_REG_RX_CNT 0x104 ++#define HSDMA_REG_RX_CRX 0x108 ++#define HSDMA_REG_RX_DRX 0x10c ++#define HSDMA_REG_INFO 0x200 ++#define HSDMA_REG_GLO_CFG 0x204 ++#define HSDMA_REG_RST_CFG 0x208 ++#define HSDMA_REG_DELAY_INT 0x20c ++#define HSDMA_REG_FREEQ_THRES 0x210 ++#define HSDMA_REG_INT_STATUS 0x220 ++#define HSDMA_REG_INT_MASK 0x228 ++#define HSDMA_REG_SCH_Q01 0x280 ++#define HSDMA_REG_SCH_Q23 0x284 ++ ++#define HSDMA_DESCS_MAX 0xfff ++#define HSDMA_DESCS_NUM 8 ++#define HSDMA_DESCS_MASK (HSDMA_DESCS_NUM - 1) ++#define HSDMA_NEXT_DESC(x) (((x) + 1) & HSDMA_DESCS_MASK) ++ ++/* HSDMA_REG_INFO */ ++#define HSDMA_INFO_INDEX_MASK 0xf ++#define HSDMA_INFO_INDEX_SHIFT 24 ++#define HSDMA_INFO_BASE_MASK 0xff ++#define HSDMA_INFO_BASE_SHIFT 16 ++#define HSDMA_INFO_RX_MASK 0xff ++#define HSDMA_INFO_RX_SHIFT 8 ++#define HSDMA_INFO_TX_MASK 0xff ++#define HSDMA_INFO_TX_SHIFT 0 ++ ++/* HSDMA_REG_GLO_CFG */ ++#define HSDMA_GLO_TX_2B_OFFSET BIT(31) ++#define HSDMA_GLO_CLK_GATE BIT(30) ++#define HSDMA_GLO_BYTE_SWAP BIT(29) ++#define HSDMA_GLO_MULTI_DMA BIT(10) ++#define HSDMA_GLO_TWO_BUF BIT(9) ++#define HSDMA_GLO_32B_DESC BIT(8) ++#define HSDMA_GLO_BIG_ENDIAN BIT(7) ++#define HSDMA_GLO_TX_DONE BIT(6) ++#define HSDMA_GLO_BT_MASK 0x3 ++#define HSDMA_GLO_BT_SHIFT 4 ++#define HSDMA_GLO_RX_BUSY BIT(3) ++#define HSDMA_GLO_RX_DMA BIT(2) ++#define HSDMA_GLO_TX_BUSY BIT(1) ++#define HSDMA_GLO_TX_DMA BIT(0) ++ ++#define HSDMA_BT_SIZE_16BYTES (0 << HSDMA_GLO_BT_SHIFT) ++#define HSDMA_BT_SIZE_32BYTES (1 << HSDMA_GLO_BT_SHIFT) ++#define HSDMA_BT_SIZE_64BYTES (2 << HSDMA_GLO_BT_SHIFT) ++#define HSDMA_BT_SIZE_128BYTES (3 << HSDMA_GLO_BT_SHIFT) ++ ++#define HSDMA_GLO_DEFAULT (HSDMA_GLO_MULTI_DMA | \ ++ HSDMA_GLO_RX_DMA | HSDMA_GLO_TX_DMA | HSDMA_BT_SIZE_32BYTES) ++ ++/* HSDMA_REG_RST_CFG */ ++#define HSDMA_RST_RX_SHIFT 16 ++#define HSDMA_RST_TX_SHIFT 0 ++ ++/* HSDMA_REG_DELAY_INT */ ++#define HSDMA_DELAY_INT_EN BIT(15) ++#define HSDMA_DELAY_PEND_OFFSET 8 ++#define HSDMA_DELAY_TIME_OFFSET 
0 ++#define HSDMA_DELAY_TX_OFFSET 16 ++#define HSDMA_DELAY_RX_OFFSET 0 ++ ++#define HSDMA_DELAY_INIT(x) (HSDMA_DELAY_INT_EN | \ ++ ((x) << HSDMA_DELAY_PEND_OFFSET)) ++#define HSDMA_DELAY(x) ((HSDMA_DELAY_INIT(x) << \ ++ HSDMA_DELAY_TX_OFFSET) | HSDMA_DELAY_INIT(x)) ++ ++/* HSDMA_REG_INT_STATUS */ ++#define HSDMA_INT_DELAY_RX_COH BIT(31) ++#define HSDMA_INT_DELAY_RX_INT BIT(30) ++#define HSDMA_INT_DELAY_TX_COH BIT(29) ++#define HSDMA_INT_DELAY_TX_INT BIT(28) ++#define HSDMA_INT_RX_MASK 0x3 ++#define HSDMA_INT_RX_SHIFT 16 ++#define HSDMA_INT_RX_Q0 BIT(16) ++#define HSDMA_INT_TX_MASK 0xf ++#define HSDMA_INT_TX_SHIFT 0 ++#define HSDMA_INT_TX_Q0 BIT(0) ++ ++/* tx/rx dma desc flags */ ++#define HSDMA_PLEN_MASK 0x3fff ++#define HSDMA_DESC_DONE BIT(31) ++#define HSDMA_DESC_LS0 BIT(30) ++#define HSDMA_DESC_PLEN0(_x) (((_x) & HSDMA_PLEN_MASK) << 16) ++#define HSDMA_DESC_TAG BIT(15) ++#define HSDMA_DESC_LS1 BIT(14) ++#define HSDMA_DESC_PLEN1(_x) ((_x) & HSDMA_PLEN_MASK) ++ ++/* align 4 bytes */ ++#define HSDMA_ALIGN_SIZE 3 ++/* align size 128bytes */ ++#define HSDMA_MAX_PLEN 0x3f80 ++ ++struct hsdma_desc { ++ u32 addr0; ++ u32 flags; ++ u32 addr1; ++ u32 unused; ++}; ++ ++struct mtk_hsdma_sg { ++ dma_addr_t src_addr; ++ dma_addr_t dst_addr; ++ u32 len; ++}; ++ ++struct mtk_hsdma_desc { ++ struct virt_dma_desc vdesc; ++ unsigned int num_sgs; ++ struct mtk_hsdma_sg sg[1]; ++}; ++ ++struct mtk_hsdma_chan { ++ struct virt_dma_chan vchan; ++ unsigned int id; ++ dma_addr_t desc_addr; ++ int tx_idx; ++ int rx_idx; ++ struct hsdma_desc *tx_ring; ++ struct hsdma_desc *rx_ring; ++ struct mtk_hsdma_desc *desc; ++ unsigned int next_sg; ++}; ++ ++struct mtk_hsdam_engine { ++ struct dma_device ddev; ++ struct device_dma_parameters dma_parms; ++ void __iomem *base; ++ struct tasklet_struct task; ++ volatile unsigned long chan_issued; ++ ++ struct mtk_hsdma_chan chan[1]; ++}; ++ ++static inline struct mtk_hsdam_engine *mtk_hsdma_chan_get_dev( ++ struct mtk_hsdma_chan *chan) ++{ ++ return 
container_of(chan->vchan.chan.device, struct mtk_hsdam_engine, ++ ddev); ++} ++ ++static inline struct mtk_hsdma_chan *to_mtk_hsdma_chan(struct dma_chan *c) ++{ ++ return container_of(c, struct mtk_hsdma_chan, vchan.chan); ++} ++ ++static inline struct mtk_hsdma_desc *to_mtk_hsdma_desc( ++ struct virt_dma_desc *vdesc) ++{ ++ return container_of(vdesc, struct mtk_hsdma_desc, vdesc); ++} ++ ++static inline u32 mtk_hsdma_read(struct mtk_hsdam_engine *hsdma, u32 reg) ++{ ++ return readl(hsdma->base + reg); ++} ++ ++static inline void mtk_hsdma_write(struct mtk_hsdam_engine *hsdma, ++ unsigned int reg, u32 val) ++{ ++ writel(val, hsdma->base + reg); ++} ++ ++static void mtk_hsdma_reset_chan(struct mtk_hsdam_engine *hsdma, ++ struct mtk_hsdma_chan *chan) ++{ ++ chan->tx_idx = 0; ++ chan->rx_idx = HSDMA_DESCS_NUM - 1; ++ ++ mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx); ++ mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx); ++ ++ mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG, ++ 0x1 << (chan->id + HSDMA_RST_TX_SHIFT)); ++ mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG, ++ 0x1 << (chan->id + HSDMA_RST_RX_SHIFT)); ++} ++ ++static void hsdma_dump_reg(struct mtk_hsdam_engine *hsdma) ++{ ++ dev_dbg(hsdma->ddev.dev, "tbase %08x, tcnt %08x, " ++ "tctx %08x, tdtx: %08x, rbase %08x, " ++ "rcnt %08x, rctx %08x, rdtx %08x\n", ++ mtk_hsdma_read(hsdma, HSDMA_REG_TX_BASE), ++ mtk_hsdma_read(hsdma, HSDMA_REG_TX_CNT), ++ mtk_hsdma_read(hsdma, HSDMA_REG_TX_CTX), ++ mtk_hsdma_read(hsdma, HSDMA_REG_TX_DTX), ++ mtk_hsdma_read(hsdma, HSDMA_REG_RX_BASE), ++ mtk_hsdma_read(hsdma, HSDMA_REG_RX_CNT), ++ mtk_hsdma_read(hsdma, HSDMA_REG_RX_CRX), ++ mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX)); ++ ++ dev_dbg(hsdma->ddev.dev, "info %08x, glo %08x, delay %08x, intr_stat %08x, intr_mask %08x\n", ++ mtk_hsdma_read(hsdma, HSDMA_REG_INFO), ++ mtk_hsdma_read(hsdma, HSDMA_REG_GLO_CFG), ++ mtk_hsdma_read(hsdma, HSDMA_REG_DELAY_INT), ++ mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS), ++ mtk_hsdma_read(hsdma, 
HSDMA_REG_INT_MASK)); ++} ++ ++static void hsdma_dump_desc(struct mtk_hsdam_engine *hsdma, ++ struct mtk_hsdma_chan *chan) ++{ ++ struct hsdma_desc *tx_desc; ++ struct hsdma_desc *rx_desc; ++ int i; ++ ++ dev_dbg(hsdma->ddev.dev, "tx idx: %d, rx idx: %d\n", ++ chan->tx_idx, chan->rx_idx); ++ ++ for (i = 0; i < HSDMA_DESCS_NUM; i++) { ++ tx_desc = &chan->tx_ring[i]; ++ rx_desc = &chan->rx_ring[i]; ++ ++ dev_dbg(hsdma->ddev.dev, "%d tx addr0: %08x, flags %08x, " ++ "tx addr1: %08x, rx addr0 %08x, flags %08x\n", ++ i, tx_desc->addr0, tx_desc->flags, ++ tx_desc->addr1, rx_desc->addr0, rx_desc->flags); ++ } ++} ++ ++static void mtk_hsdma_reset(struct mtk_hsdam_engine *hsdma, ++ struct mtk_hsdma_chan *chan) ++{ ++ int i; ++ ++ /* disable dma */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0); ++ ++ /* disable intr */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, 0); ++ ++ /* init desc value */ ++ for (i = 0; i < HSDMA_DESCS_NUM; i++) { ++ chan->tx_ring[i].addr0 = 0; ++ chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE; ++ } ++ for (i = 0; i < HSDMA_DESCS_NUM; i++) { ++ chan->rx_ring[i].addr0 = 0; ++ chan->rx_ring[i].flags = 0; ++ } ++ ++ /* reset */ ++ mtk_hsdma_reset_chan(hsdma, chan); ++ ++ /* enable intr */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0); ++ ++ /* enable dma */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT); ++} ++ ++static int mtk_hsdma_terminate_all(struct dma_chan *c) ++{ ++ struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c); ++ struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan); ++ unsigned long timeout; ++ LIST_HEAD(head); ++ ++ spin_lock_bh(&chan->vchan.lock); ++ chan->desc = NULL; ++ clear_bit(chan->id, &hsdma->chan_issued); ++ vchan_get_all_descriptors(&chan->vchan, &head); ++ spin_unlock_bh(&chan->vchan.lock); ++ ++ vchan_dma_desc_free_list(&chan->vchan, &head); ++ ++ /* wait dma transfer complete */ ++ timeout = jiffies + msecs_to_jiffies(2000); ++ while (mtk_hsdma_read(hsdma, 
HSDMA_REG_GLO_CFG) & ++ (HSDMA_GLO_RX_BUSY | HSDMA_GLO_TX_BUSY)) { ++ if (time_after_eq(jiffies, timeout)) { ++ hsdma_dump_desc(hsdma, chan); ++ mtk_hsdma_reset(hsdma, chan); ++ dev_err(hsdma->ddev.dev, "timeout, reset it\n"); ++ break; ++ } ++ cpu_relax(); ++ } ++ ++ return 0; ++} ++ ++static int mtk_hsdma_start_transfer(struct mtk_hsdam_engine *hsdma, ++ struct mtk_hsdma_chan *chan) ++{ ++ dma_addr_t src, dst; ++ size_t len, tlen; ++ struct hsdma_desc *tx_desc, *rx_desc; ++ struct mtk_hsdma_sg *sg; ++ unsigned int i; ++ int rx_idx; ++ ++ sg = &chan->desc->sg[0]; ++ len = sg->len; ++ chan->desc->num_sgs = DIV_ROUND_UP(len, HSDMA_MAX_PLEN); ++ ++ /* tx desc */ ++ src = sg->src_addr; ++ for (i = 0; i < chan->desc->num_sgs; i++) { ++ tx_desc = &chan->tx_ring[chan->tx_idx]; ++ ++ if (len > HSDMA_MAX_PLEN) ++ tlen = HSDMA_MAX_PLEN; ++ else ++ tlen = len; ++ ++ if (i & 0x1) { ++ tx_desc->addr1 = src; ++ tx_desc->flags |= HSDMA_DESC_PLEN1(tlen); ++ } else { ++ tx_desc->addr0 = src; ++ tx_desc->flags = HSDMA_DESC_PLEN0(tlen); ++ ++ /* update index */ ++ chan->tx_idx = HSDMA_NEXT_DESC(chan->tx_idx); ++ } ++ ++ src += tlen; ++ len -= tlen; ++ } ++ if (i & 0x1) ++ tx_desc->flags |= HSDMA_DESC_LS0; ++ else ++ tx_desc->flags |= HSDMA_DESC_LS1; ++ ++ /* rx desc */ ++ rx_idx = HSDMA_NEXT_DESC(chan->rx_idx); ++ len = sg->len; ++ dst = sg->dst_addr; ++ for (i = 0; i < chan->desc->num_sgs; i++) { ++ rx_desc = &chan->rx_ring[rx_idx]; ++ if (len > HSDMA_MAX_PLEN) ++ tlen = HSDMA_MAX_PLEN; ++ else ++ tlen = len; ++ ++ rx_desc->addr0 = dst; ++ rx_desc->flags = HSDMA_DESC_PLEN0(tlen); ++ ++ dst += tlen; ++ len -= tlen; ++ ++ /* update index */ ++ rx_idx = HSDMA_NEXT_DESC(rx_idx); ++ } ++ ++ /* make sure desc and index all up to date */ ++ wmb(); ++ mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx); ++ ++ return 0; ++} ++ ++static int gdma_next_desc(struct mtk_hsdma_chan *chan) ++{ ++ struct virt_dma_desc *vdesc; ++ ++ vdesc = vchan_next_desc(&chan->vchan); ++ if (!vdesc) { ++ 
chan->desc = NULL; ++ return 0; ++ } ++ chan->desc = to_mtk_hsdma_desc(vdesc); ++ chan->next_sg = 0; ++ ++ return 1; ++} ++ ++static void mtk_hsdma_chan_done(struct mtk_hsdam_engine *hsdma, ++ struct mtk_hsdma_chan *chan) ++{ ++ struct mtk_hsdma_desc *desc; ++ int chan_issued; ++ ++ chan_issued = 0; ++ spin_lock_bh(&chan->vchan.lock); ++ desc = chan->desc; ++ if (likely(desc)) { ++ if (chan->next_sg == desc->num_sgs) { ++ list_del(&desc->vdesc.node); ++ vchan_cookie_complete(&desc->vdesc); ++ chan_issued = gdma_next_desc(chan); ++ } ++ } else { ++ dev_dbg(hsdma->ddev.dev, "no desc to complete\n"); ++ } ++ ++ if (chan_issued) ++ set_bit(chan->id, &hsdma->chan_issued); ++ spin_unlock_bh(&chan->vchan.lock); ++} ++ ++static irqreturn_t mtk_hsdma_irq(int irq, void *devid) ++{ ++ struct mtk_hsdam_engine *hsdma = devid; ++ u32 status; ++ ++ status = mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS); ++ if (unlikely(!status)) ++ return IRQ_NONE; ++ ++ if (likely(status & HSDMA_INT_RX_Q0)) ++ tasklet_schedule(&hsdma->task); ++ else ++ dev_dbg(hsdma->ddev.dev, "unhandle irq status %08x\n", status); ++ /* clean intr bits */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_INT_STATUS, status); ++ ++ return IRQ_HANDLED; ++} ++ ++static void mtk_hsdma_issue_pending(struct dma_chan *c) ++{ ++ struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c); ++ struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan); ++ ++ spin_lock_bh(&chan->vchan.lock); ++ if (vchan_issue_pending(&chan->vchan) && !chan->desc) { ++ if (gdma_next_desc(chan)) { ++ set_bit(chan->id, &hsdma->chan_issued); ++ tasklet_schedule(&hsdma->task); ++ } else { ++ dev_dbg(hsdma->ddev.dev, "no desc to issue\n"); ++ } ++ } ++ spin_unlock_bh(&chan->vchan.lock); ++} ++ ++static struct dma_async_tx_descriptor *mtk_hsdma_prep_dma_memcpy( ++ struct dma_chan *c, dma_addr_t dest, dma_addr_t src, ++ size_t len, unsigned long flags) ++{ ++ struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c); ++ struct mtk_hsdma_desc *desc; ++ ++ if (len <= 0) ++ 
return NULL; ++ ++ desc = kzalloc(sizeof(*desc), GFP_ATOMIC); ++ if (!desc) { ++ dev_err(c->device->dev, "alloc memcpy decs error\n"); ++ return NULL; ++ } ++ ++ desc->sg[0].src_addr = src; ++ desc->sg[0].dst_addr = dest; ++ desc->sg[0].len = len; ++ ++ return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags); ++} ++ ++static enum dma_status mtk_hsdma_tx_status(struct dma_chan *c, ++ dma_cookie_t cookie, ++ struct dma_tx_state *state) ++{ ++ return dma_cookie_status(c, cookie, state); ++} ++ ++static void mtk_hsdma_free_chan_resources(struct dma_chan *c) ++{ ++ vchan_free_chan_resources(to_virt_chan(c)); ++} ++ ++static void mtk_hsdma_desc_free(struct virt_dma_desc *vdesc) ++{ ++ kfree(container_of(vdesc, struct mtk_hsdma_desc, vdesc)); ++} ++ ++static void mtk_hsdma_tx(struct mtk_hsdam_engine *hsdma) ++{ ++ struct mtk_hsdma_chan *chan; ++ ++ if (test_and_clear_bit(0, &hsdma->chan_issued)) { ++ chan = &hsdma->chan[0]; ++ if (chan->desc) ++ mtk_hsdma_start_transfer(hsdma, chan); ++ else ++ dev_dbg(hsdma->ddev.dev, "chan 0 no desc to issue\n"); ++ } ++} ++ ++static void mtk_hsdma_rx(struct mtk_hsdam_engine *hsdma) ++{ ++ struct mtk_hsdma_chan *chan; ++ int next_idx, drx_idx, cnt; ++ ++ chan = &hsdma->chan[0]; ++ next_idx = HSDMA_NEXT_DESC(chan->rx_idx); ++ drx_idx = mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX); ++ ++ cnt = (drx_idx - next_idx) & HSDMA_DESCS_MASK; ++ if (!cnt) ++ return; ++ ++ chan->next_sg += cnt; ++ chan->rx_idx = (chan->rx_idx + cnt) & HSDMA_DESCS_MASK; ++ ++ /* update rx crx */ ++ wmb(); ++ mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx); ++ ++ mtk_hsdma_chan_done(hsdma, chan); ++} ++ ++static void mtk_hsdma_tasklet(struct tasklet_struct *t) ++{ ++ struct mtk_hsdam_engine *hsdma = from_tasklet(hsdma, t, task); ++ ++ mtk_hsdma_rx(hsdma); ++ mtk_hsdma_tx(hsdma); ++} ++ ++static int mtk_hsdam_alloc_desc(struct mtk_hsdam_engine *hsdma, ++ struct mtk_hsdma_chan *chan) ++{ ++ int i; ++ ++ chan->tx_ring = dma_alloc_coherent(hsdma->ddev.dev, ++ 2 * 
HSDMA_DESCS_NUM * ++ sizeof(*chan->tx_ring), ++ &chan->desc_addr, GFP_ATOMIC | __GFP_ZERO); ++ if (!chan->tx_ring) ++ goto no_mem; ++ ++ chan->rx_ring = &chan->tx_ring[HSDMA_DESCS_NUM]; ++ ++ /* init tx ring value */ ++ for (i = 0; i < HSDMA_DESCS_NUM; i++) ++ chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE; ++ ++ return 0; ++no_mem: ++ return -ENOMEM; ++} ++ ++static void mtk_hsdam_free_desc(struct mtk_hsdam_engine *hsdma, ++ struct mtk_hsdma_chan *chan) ++{ ++ if (chan->tx_ring) { ++ dma_free_coherent(hsdma->ddev.dev, ++ 2 * HSDMA_DESCS_NUM * sizeof(*chan->tx_ring), ++ chan->tx_ring, chan->desc_addr); ++ chan->tx_ring = NULL; ++ chan->rx_ring = NULL; ++ } ++} ++ ++static int mtk_hsdma_init(struct mtk_hsdam_engine *hsdma) ++{ ++ struct mtk_hsdma_chan *chan; ++ int ret; ++ u32 reg; ++ ++ /* init desc */ ++ chan = &hsdma->chan[0]; ++ ret = mtk_hsdam_alloc_desc(hsdma, chan); ++ if (ret) ++ return ret; ++ ++ /* tx */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, chan->desc_addr); ++ mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, HSDMA_DESCS_NUM); ++ /* rx */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, chan->desc_addr + ++ (sizeof(struct hsdma_desc) * HSDMA_DESCS_NUM)); ++ mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, HSDMA_DESCS_NUM); ++ /* reset */ ++ mtk_hsdma_reset_chan(hsdma, chan); ++ ++ /* enable rx intr */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0); ++ ++ /* enable dma */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT); ++ ++ /* hardware info */ ++ reg = mtk_hsdma_read(hsdma, HSDMA_REG_INFO); ++ dev_info(hsdma->ddev.dev, "rx: %d, tx: %d\n", ++ (reg >> HSDMA_INFO_RX_SHIFT) & HSDMA_INFO_RX_MASK, ++ (reg >> HSDMA_INFO_TX_SHIFT) & HSDMA_INFO_TX_MASK); ++ ++ hsdma_dump_reg(hsdma); ++ ++ return ret; ++} ++ ++static void mtk_hsdma_uninit(struct mtk_hsdam_engine *hsdma) ++{ ++ struct mtk_hsdma_chan *chan; ++ ++ /* disable dma */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0); ++ ++ /* disable intr */ ++ mtk_hsdma_write(hsdma, 
HSDMA_REG_INT_MASK, 0); ++ ++ /* free desc */ ++ chan = &hsdma->chan[0]; ++ mtk_hsdam_free_desc(hsdma, chan); ++ ++ /* tx */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, 0); ++ mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, 0); ++ /* rx */ ++ mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, 0); ++ mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, 0); ++ /* reset */ ++ mtk_hsdma_reset_chan(hsdma, chan); ++} ++ ++static const struct of_device_id mtk_hsdma_of_match[] = { ++ { .compatible = "mediatek,mt7621-hsdma" }, ++ { }, ++}; ++ ++static int mtk_hsdma_probe(struct platform_device *pdev) ++{ ++ const struct of_device_id *match; ++ struct mtk_hsdma_chan *chan; ++ struct mtk_hsdam_engine *hsdma; ++ struct dma_device *dd; ++ int ret; ++ int irq; ++ void __iomem *base; ++ ++ ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); ++ if (ret) ++ return ret; ++ ++ match = of_match_device(mtk_hsdma_of_match, &pdev->dev); ++ if (!match) ++ return -EINVAL; ++ ++ hsdma = devm_kzalloc(&pdev->dev, sizeof(*hsdma), GFP_KERNEL); ++ if (!hsdma) ++ return -EINVAL; ++ ++ base = devm_platform_ioremap_resource(pdev, 0); ++ if (IS_ERR(base)) ++ return PTR_ERR(base); ++ hsdma->base = base + HSDMA_BASE_OFFSET; ++ tasklet_setup(&hsdma->task, mtk_hsdma_tasklet); ++ ++ irq = platform_get_irq(pdev, 0); ++ if (irq < 0) ++ return -EINVAL; ++ ret = devm_request_irq(&pdev->dev, irq, mtk_hsdma_irq, ++ 0, dev_name(&pdev->dev), hsdma); ++ if (ret) { ++ dev_err(&pdev->dev, "failed to request irq\n"); ++ return ret; ++ } ++ ++ device_reset(&pdev->dev); ++ ++ dd = &hsdma->ddev; ++ dma_cap_set(DMA_MEMCPY, dd->cap_mask); ++ dd->copy_align = HSDMA_ALIGN_SIZE; ++ dd->device_free_chan_resources = mtk_hsdma_free_chan_resources; ++ dd->device_prep_dma_memcpy = mtk_hsdma_prep_dma_memcpy; ++ dd->device_terminate_all = mtk_hsdma_terminate_all; ++ dd->device_tx_status = mtk_hsdma_tx_status; ++ dd->device_issue_pending = mtk_hsdma_issue_pending; ++ dd->dev = &pdev->dev; ++ dd->dev->dma_parms = &hsdma->dma_parms; ++ 
dma_set_max_seg_size(dd->dev, HSDMA_MAX_PLEN); ++ INIT_LIST_HEAD(&dd->channels); ++ ++ chan = &hsdma->chan[0]; ++ chan->id = 0; ++ chan->vchan.desc_free = mtk_hsdma_desc_free; ++ vchan_init(&chan->vchan, dd); ++ ++ /* init hardware */ ++ ret = mtk_hsdma_init(hsdma); ++ if (ret) { ++ dev_err(&pdev->dev, "failed to alloc ring descs\n"); ++ return ret; ++ } ++ ++ ret = dma_async_device_register(dd); ++ if (ret) { ++ dev_err(&pdev->dev, "failed to register dma device\n"); ++ goto err_uninit_hsdma; ++ } ++ ++ ret = of_dma_controller_register(pdev->dev.of_node, ++ of_dma_xlate_by_chan_id, hsdma); ++ if (ret) { ++ dev_err(&pdev->dev, "failed to register of dma controller\n"); ++ goto err_unregister; ++ } ++ ++ platform_set_drvdata(pdev, hsdma); ++ ++ return 0; ++ ++err_unregister: ++ dma_async_device_unregister(dd); ++err_uninit_hsdma: ++ mtk_hsdma_uninit(hsdma); ++ return ret; ++} ++ ++static int mtk_hsdma_remove(struct platform_device *pdev) ++{ ++ struct mtk_hsdam_engine *hsdma = platform_get_drvdata(pdev); ++ ++ mtk_hsdma_uninit(hsdma); ++ ++ of_dma_controller_free(pdev->dev.of_node); ++ dma_async_device_unregister(&hsdma->ddev); ++ ++ return 0; ++} ++ ++static struct platform_driver mtk_hsdma_driver = { ++ .probe = mtk_hsdma_probe, ++ .remove = mtk_hsdma_remove, ++ .driver = { ++ .name = KBUILD_MODNAME, ++ .of_match_table = mtk_hsdma_of_match, ++ }, ++}; ++module_platform_driver(mtk_hsdma_driver); ++ ++MODULE_AUTHOR("Michael Lee "); ++MODULE_DESCRIPTION("MTK HSDMA driver"); ++MODULE_LICENSE("GPL v2"); +diff --git a/drivers/staging/mt7621-dma/mtk-hsdma.c b/drivers/staging/mt7621-dma/mtk-hsdma.c +deleted file mode 100644 +index 5ad55ca620229..0000000000000 +--- a/drivers/staging/mt7621-dma/mtk-hsdma.c ++++ /dev/null +@@ -1,760 +0,0 @@ +-// SPDX-License-Identifier: GPL-2.0+ +-/* +- * Copyright (C) 2015, Michael Lee +- * MTK HSDMA support +- */ +- +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include 
+-#include +-#include +- +-#include "virt-dma.h" +- +-#define HSDMA_BASE_OFFSET 0x800 +- +-#define HSDMA_REG_TX_BASE 0x00 +-#define HSDMA_REG_TX_CNT 0x04 +-#define HSDMA_REG_TX_CTX 0x08 +-#define HSDMA_REG_TX_DTX 0x0c +-#define HSDMA_REG_RX_BASE 0x100 +-#define HSDMA_REG_RX_CNT 0x104 +-#define HSDMA_REG_RX_CRX 0x108 +-#define HSDMA_REG_RX_DRX 0x10c +-#define HSDMA_REG_INFO 0x200 +-#define HSDMA_REG_GLO_CFG 0x204 +-#define HSDMA_REG_RST_CFG 0x208 +-#define HSDMA_REG_DELAY_INT 0x20c +-#define HSDMA_REG_FREEQ_THRES 0x210 +-#define HSDMA_REG_INT_STATUS 0x220 +-#define HSDMA_REG_INT_MASK 0x228 +-#define HSDMA_REG_SCH_Q01 0x280 +-#define HSDMA_REG_SCH_Q23 0x284 +- +-#define HSDMA_DESCS_MAX 0xfff +-#define HSDMA_DESCS_NUM 8 +-#define HSDMA_DESCS_MASK (HSDMA_DESCS_NUM - 1) +-#define HSDMA_NEXT_DESC(x) (((x) + 1) & HSDMA_DESCS_MASK) +- +-/* HSDMA_REG_INFO */ +-#define HSDMA_INFO_INDEX_MASK 0xf +-#define HSDMA_INFO_INDEX_SHIFT 24 +-#define HSDMA_INFO_BASE_MASK 0xff +-#define HSDMA_INFO_BASE_SHIFT 16 +-#define HSDMA_INFO_RX_MASK 0xff +-#define HSDMA_INFO_RX_SHIFT 8 +-#define HSDMA_INFO_TX_MASK 0xff +-#define HSDMA_INFO_TX_SHIFT 0 +- +-/* HSDMA_REG_GLO_CFG */ +-#define HSDMA_GLO_TX_2B_OFFSET BIT(31) +-#define HSDMA_GLO_CLK_GATE BIT(30) +-#define HSDMA_GLO_BYTE_SWAP BIT(29) +-#define HSDMA_GLO_MULTI_DMA BIT(10) +-#define HSDMA_GLO_TWO_BUF BIT(9) +-#define HSDMA_GLO_32B_DESC BIT(8) +-#define HSDMA_GLO_BIG_ENDIAN BIT(7) +-#define HSDMA_GLO_TX_DONE BIT(6) +-#define HSDMA_GLO_BT_MASK 0x3 +-#define HSDMA_GLO_BT_SHIFT 4 +-#define HSDMA_GLO_RX_BUSY BIT(3) +-#define HSDMA_GLO_RX_DMA BIT(2) +-#define HSDMA_GLO_TX_BUSY BIT(1) +-#define HSDMA_GLO_TX_DMA BIT(0) +- +-#define HSDMA_BT_SIZE_16BYTES (0 << HSDMA_GLO_BT_SHIFT) +-#define HSDMA_BT_SIZE_32BYTES (1 << HSDMA_GLO_BT_SHIFT) +-#define HSDMA_BT_SIZE_64BYTES (2 << HSDMA_GLO_BT_SHIFT) +-#define HSDMA_BT_SIZE_128BYTES (3 << HSDMA_GLO_BT_SHIFT) +- +-#define HSDMA_GLO_DEFAULT (HSDMA_GLO_MULTI_DMA | \ +- HSDMA_GLO_RX_DMA | HSDMA_GLO_TX_DMA | 
HSDMA_BT_SIZE_32BYTES) +- +-/* HSDMA_REG_RST_CFG */ +-#define HSDMA_RST_RX_SHIFT 16 +-#define HSDMA_RST_TX_SHIFT 0 +- +-/* HSDMA_REG_DELAY_INT */ +-#define HSDMA_DELAY_INT_EN BIT(15) +-#define HSDMA_DELAY_PEND_OFFSET 8 +-#define HSDMA_DELAY_TIME_OFFSET 0 +-#define HSDMA_DELAY_TX_OFFSET 16 +-#define HSDMA_DELAY_RX_OFFSET 0 +- +-#define HSDMA_DELAY_INIT(x) (HSDMA_DELAY_INT_EN | \ +- ((x) << HSDMA_DELAY_PEND_OFFSET)) +-#define HSDMA_DELAY(x) ((HSDMA_DELAY_INIT(x) << \ +- HSDMA_DELAY_TX_OFFSET) | HSDMA_DELAY_INIT(x)) +- +-/* HSDMA_REG_INT_STATUS */ +-#define HSDMA_INT_DELAY_RX_COH BIT(31) +-#define HSDMA_INT_DELAY_RX_INT BIT(30) +-#define HSDMA_INT_DELAY_TX_COH BIT(29) +-#define HSDMA_INT_DELAY_TX_INT BIT(28) +-#define HSDMA_INT_RX_MASK 0x3 +-#define HSDMA_INT_RX_SHIFT 16 +-#define HSDMA_INT_RX_Q0 BIT(16) +-#define HSDMA_INT_TX_MASK 0xf +-#define HSDMA_INT_TX_SHIFT 0 +-#define HSDMA_INT_TX_Q0 BIT(0) +- +-/* tx/rx dma desc flags */ +-#define HSDMA_PLEN_MASK 0x3fff +-#define HSDMA_DESC_DONE BIT(31) +-#define HSDMA_DESC_LS0 BIT(30) +-#define HSDMA_DESC_PLEN0(_x) (((_x) & HSDMA_PLEN_MASK) << 16) +-#define HSDMA_DESC_TAG BIT(15) +-#define HSDMA_DESC_LS1 BIT(14) +-#define HSDMA_DESC_PLEN1(_x) ((_x) & HSDMA_PLEN_MASK) +- +-/* align 4 bytes */ +-#define HSDMA_ALIGN_SIZE 3 +-/* align size 128bytes */ +-#define HSDMA_MAX_PLEN 0x3f80 +- +-struct hsdma_desc { +- u32 addr0; +- u32 flags; +- u32 addr1; +- u32 unused; +-}; +- +-struct mtk_hsdma_sg { +- dma_addr_t src_addr; +- dma_addr_t dst_addr; +- u32 len; +-}; +- +-struct mtk_hsdma_desc { +- struct virt_dma_desc vdesc; +- unsigned int num_sgs; +- struct mtk_hsdma_sg sg[1]; +-}; +- +-struct mtk_hsdma_chan { +- struct virt_dma_chan vchan; +- unsigned int id; +- dma_addr_t desc_addr; +- int tx_idx; +- int rx_idx; +- struct hsdma_desc *tx_ring; +- struct hsdma_desc *rx_ring; +- struct mtk_hsdma_desc *desc; +- unsigned int next_sg; +-}; +- +-struct mtk_hsdam_engine { +- struct dma_device ddev; +- struct device_dma_parameters dma_parms; 
+- void __iomem *base; +- struct tasklet_struct task; +- volatile unsigned long chan_issued; +- +- struct mtk_hsdma_chan chan[1]; +-}; +- +-static inline struct mtk_hsdam_engine *mtk_hsdma_chan_get_dev( +- struct mtk_hsdma_chan *chan) +-{ +- return container_of(chan->vchan.chan.device, struct mtk_hsdam_engine, +- ddev); +-} +- +-static inline struct mtk_hsdma_chan *to_mtk_hsdma_chan(struct dma_chan *c) +-{ +- return container_of(c, struct mtk_hsdma_chan, vchan.chan); +-} +- +-static inline struct mtk_hsdma_desc *to_mtk_hsdma_desc( +- struct virt_dma_desc *vdesc) +-{ +- return container_of(vdesc, struct mtk_hsdma_desc, vdesc); +-} +- +-static inline u32 mtk_hsdma_read(struct mtk_hsdam_engine *hsdma, u32 reg) +-{ +- return readl(hsdma->base + reg); +-} +- +-static inline void mtk_hsdma_write(struct mtk_hsdam_engine *hsdma, +- unsigned int reg, u32 val) +-{ +- writel(val, hsdma->base + reg); +-} +- +-static void mtk_hsdma_reset_chan(struct mtk_hsdam_engine *hsdma, +- struct mtk_hsdma_chan *chan) +-{ +- chan->tx_idx = 0; +- chan->rx_idx = HSDMA_DESCS_NUM - 1; +- +- mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx); +- mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx); +- +- mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG, +- 0x1 << (chan->id + HSDMA_RST_TX_SHIFT)); +- mtk_hsdma_write(hsdma, HSDMA_REG_RST_CFG, +- 0x1 << (chan->id + HSDMA_RST_RX_SHIFT)); +-} +- +-static void hsdma_dump_reg(struct mtk_hsdam_engine *hsdma) +-{ +- dev_dbg(hsdma->ddev.dev, "tbase %08x, tcnt %08x, " +- "tctx %08x, tdtx: %08x, rbase %08x, " +- "rcnt %08x, rctx %08x, rdtx %08x\n", +- mtk_hsdma_read(hsdma, HSDMA_REG_TX_BASE), +- mtk_hsdma_read(hsdma, HSDMA_REG_TX_CNT), +- mtk_hsdma_read(hsdma, HSDMA_REG_TX_CTX), +- mtk_hsdma_read(hsdma, HSDMA_REG_TX_DTX), +- mtk_hsdma_read(hsdma, HSDMA_REG_RX_BASE), +- mtk_hsdma_read(hsdma, HSDMA_REG_RX_CNT), +- mtk_hsdma_read(hsdma, HSDMA_REG_RX_CRX), +- mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX)); +- +- dev_dbg(hsdma->ddev.dev, "info %08x, glo %08x, delay 
%08x, intr_stat %08x, intr_mask %08x\n", +- mtk_hsdma_read(hsdma, HSDMA_REG_INFO), +- mtk_hsdma_read(hsdma, HSDMA_REG_GLO_CFG), +- mtk_hsdma_read(hsdma, HSDMA_REG_DELAY_INT), +- mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS), +- mtk_hsdma_read(hsdma, HSDMA_REG_INT_MASK)); +-} +- +-static void hsdma_dump_desc(struct mtk_hsdam_engine *hsdma, +- struct mtk_hsdma_chan *chan) +-{ +- struct hsdma_desc *tx_desc; +- struct hsdma_desc *rx_desc; +- int i; +- +- dev_dbg(hsdma->ddev.dev, "tx idx: %d, rx idx: %d\n", +- chan->tx_idx, chan->rx_idx); +- +- for (i = 0; i < HSDMA_DESCS_NUM; i++) { +- tx_desc = &chan->tx_ring[i]; +- rx_desc = &chan->rx_ring[i]; +- +- dev_dbg(hsdma->ddev.dev, "%d tx addr0: %08x, flags %08x, " +- "tx addr1: %08x, rx addr0 %08x, flags %08x\n", +- i, tx_desc->addr0, tx_desc->flags, +- tx_desc->addr1, rx_desc->addr0, rx_desc->flags); +- } +-} +- +-static void mtk_hsdma_reset(struct mtk_hsdam_engine *hsdma, +- struct mtk_hsdma_chan *chan) +-{ +- int i; +- +- /* disable dma */ +- mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0); +- +- /* disable intr */ +- mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, 0); +- +- /* init desc value */ +- for (i = 0; i < HSDMA_DESCS_NUM; i++) { +- chan->tx_ring[i].addr0 = 0; +- chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE; +- } +- for (i = 0; i < HSDMA_DESCS_NUM; i++) { +- chan->rx_ring[i].addr0 = 0; +- chan->rx_ring[i].flags = 0; +- } +- +- /* reset */ +- mtk_hsdma_reset_chan(hsdma, chan); +- +- /* enable intr */ +- mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0); +- +- /* enable dma */ +- mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT); +-} +- +-static int mtk_hsdma_terminate_all(struct dma_chan *c) +-{ +- struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c); +- struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan); +- unsigned long timeout; +- LIST_HEAD(head); +- +- spin_lock_bh(&chan->vchan.lock); +- chan->desc = NULL; +- clear_bit(chan->id, &hsdma->chan_issued); +- 
vchan_get_all_descriptors(&chan->vchan, &head); +- spin_unlock_bh(&chan->vchan.lock); +- +- vchan_dma_desc_free_list(&chan->vchan, &head); +- +- /* wait dma transfer complete */ +- timeout = jiffies + msecs_to_jiffies(2000); +- while (mtk_hsdma_read(hsdma, HSDMA_REG_GLO_CFG) & +- (HSDMA_GLO_RX_BUSY | HSDMA_GLO_TX_BUSY)) { +- if (time_after_eq(jiffies, timeout)) { +- hsdma_dump_desc(hsdma, chan); +- mtk_hsdma_reset(hsdma, chan); +- dev_err(hsdma->ddev.dev, "timeout, reset it\n"); +- break; +- } +- cpu_relax(); +- } +- +- return 0; +-} +- +-static int mtk_hsdma_start_transfer(struct mtk_hsdam_engine *hsdma, +- struct mtk_hsdma_chan *chan) +-{ +- dma_addr_t src, dst; +- size_t len, tlen; +- struct hsdma_desc *tx_desc, *rx_desc; +- struct mtk_hsdma_sg *sg; +- unsigned int i; +- int rx_idx; +- +- sg = &chan->desc->sg[0]; +- len = sg->len; +- chan->desc->num_sgs = DIV_ROUND_UP(len, HSDMA_MAX_PLEN); +- +- /* tx desc */ +- src = sg->src_addr; +- for (i = 0; i < chan->desc->num_sgs; i++) { +- tx_desc = &chan->tx_ring[chan->tx_idx]; +- +- if (len > HSDMA_MAX_PLEN) +- tlen = HSDMA_MAX_PLEN; +- else +- tlen = len; +- +- if (i & 0x1) { +- tx_desc->addr1 = src; +- tx_desc->flags |= HSDMA_DESC_PLEN1(tlen); +- } else { +- tx_desc->addr0 = src; +- tx_desc->flags = HSDMA_DESC_PLEN0(tlen); +- +- /* update index */ +- chan->tx_idx = HSDMA_NEXT_DESC(chan->tx_idx); +- } +- +- src += tlen; +- len -= tlen; +- } +- if (i & 0x1) +- tx_desc->flags |= HSDMA_DESC_LS0; +- else +- tx_desc->flags |= HSDMA_DESC_LS1; +- +- /* rx desc */ +- rx_idx = HSDMA_NEXT_DESC(chan->rx_idx); +- len = sg->len; +- dst = sg->dst_addr; +- for (i = 0; i < chan->desc->num_sgs; i++) { +- rx_desc = &chan->rx_ring[rx_idx]; +- if (len > HSDMA_MAX_PLEN) +- tlen = HSDMA_MAX_PLEN; +- else +- tlen = len; +- +- rx_desc->addr0 = dst; +- rx_desc->flags = HSDMA_DESC_PLEN0(tlen); +- +- dst += tlen; +- len -= tlen; +- +- /* update index */ +- rx_idx = HSDMA_NEXT_DESC(rx_idx); +- } +- +- /* make sure desc and index all up to date 
*/ +- wmb(); +- mtk_hsdma_write(hsdma, HSDMA_REG_TX_CTX, chan->tx_idx); +- +- return 0; +-} +- +-static int gdma_next_desc(struct mtk_hsdma_chan *chan) +-{ +- struct virt_dma_desc *vdesc; +- +- vdesc = vchan_next_desc(&chan->vchan); +- if (!vdesc) { +- chan->desc = NULL; +- return 0; +- } +- chan->desc = to_mtk_hsdma_desc(vdesc); +- chan->next_sg = 0; +- +- return 1; +-} +- +-static void mtk_hsdma_chan_done(struct mtk_hsdam_engine *hsdma, +- struct mtk_hsdma_chan *chan) +-{ +- struct mtk_hsdma_desc *desc; +- int chan_issued; +- +- chan_issued = 0; +- spin_lock_bh(&chan->vchan.lock); +- desc = chan->desc; +- if (likely(desc)) { +- if (chan->next_sg == desc->num_sgs) { +- list_del(&desc->vdesc.node); +- vchan_cookie_complete(&desc->vdesc); +- chan_issued = gdma_next_desc(chan); +- } +- } else { +- dev_dbg(hsdma->ddev.dev, "no desc to complete\n"); +- } +- +- if (chan_issued) +- set_bit(chan->id, &hsdma->chan_issued); +- spin_unlock_bh(&chan->vchan.lock); +-} +- +-static irqreturn_t mtk_hsdma_irq(int irq, void *devid) +-{ +- struct mtk_hsdam_engine *hsdma = devid; +- u32 status; +- +- status = mtk_hsdma_read(hsdma, HSDMA_REG_INT_STATUS); +- if (unlikely(!status)) +- return IRQ_NONE; +- +- if (likely(status & HSDMA_INT_RX_Q0)) +- tasklet_schedule(&hsdma->task); +- else +- dev_dbg(hsdma->ddev.dev, "unhandle irq status %08x\n", status); +- /* clean intr bits */ +- mtk_hsdma_write(hsdma, HSDMA_REG_INT_STATUS, status); +- +- return IRQ_HANDLED; +-} +- +-static void mtk_hsdma_issue_pending(struct dma_chan *c) +-{ +- struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c); +- struct mtk_hsdam_engine *hsdma = mtk_hsdma_chan_get_dev(chan); +- +- spin_lock_bh(&chan->vchan.lock); +- if (vchan_issue_pending(&chan->vchan) && !chan->desc) { +- if (gdma_next_desc(chan)) { +- set_bit(chan->id, &hsdma->chan_issued); +- tasklet_schedule(&hsdma->task); +- } else { +- dev_dbg(hsdma->ddev.dev, "no desc to issue\n"); +- } +- } +- spin_unlock_bh(&chan->vchan.lock); +-} +- +-static struct 
dma_async_tx_descriptor *mtk_hsdma_prep_dma_memcpy( +- struct dma_chan *c, dma_addr_t dest, dma_addr_t src, +- size_t len, unsigned long flags) +-{ +- struct mtk_hsdma_chan *chan = to_mtk_hsdma_chan(c); +- struct mtk_hsdma_desc *desc; +- +- if (len <= 0) +- return NULL; +- +- desc = kzalloc(sizeof(*desc), GFP_ATOMIC); +- if (!desc) { +- dev_err(c->device->dev, "alloc memcpy decs error\n"); +- return NULL; +- } +- +- desc->sg[0].src_addr = src; +- desc->sg[0].dst_addr = dest; +- desc->sg[0].len = len; +- +- return vchan_tx_prep(&chan->vchan, &desc->vdesc, flags); +-} +- +-static enum dma_status mtk_hsdma_tx_status(struct dma_chan *c, +- dma_cookie_t cookie, +- struct dma_tx_state *state) +-{ +- return dma_cookie_status(c, cookie, state); +-} +- +-static void mtk_hsdma_free_chan_resources(struct dma_chan *c) +-{ +- vchan_free_chan_resources(to_virt_chan(c)); +-} +- +-static void mtk_hsdma_desc_free(struct virt_dma_desc *vdesc) +-{ +- kfree(container_of(vdesc, struct mtk_hsdma_desc, vdesc)); +-} +- +-static void mtk_hsdma_tx(struct mtk_hsdam_engine *hsdma) +-{ +- struct mtk_hsdma_chan *chan; +- +- if (test_and_clear_bit(0, &hsdma->chan_issued)) { +- chan = &hsdma->chan[0]; +- if (chan->desc) +- mtk_hsdma_start_transfer(hsdma, chan); +- else +- dev_dbg(hsdma->ddev.dev, "chan 0 no desc to issue\n"); +- } +-} +- +-static void mtk_hsdma_rx(struct mtk_hsdam_engine *hsdma) +-{ +- struct mtk_hsdma_chan *chan; +- int next_idx, drx_idx, cnt; +- +- chan = &hsdma->chan[0]; +- next_idx = HSDMA_NEXT_DESC(chan->rx_idx); +- drx_idx = mtk_hsdma_read(hsdma, HSDMA_REG_RX_DRX); +- +- cnt = (drx_idx - next_idx) & HSDMA_DESCS_MASK; +- if (!cnt) +- return; +- +- chan->next_sg += cnt; +- chan->rx_idx = (chan->rx_idx + cnt) & HSDMA_DESCS_MASK; +- +- /* update rx crx */ +- wmb(); +- mtk_hsdma_write(hsdma, HSDMA_REG_RX_CRX, chan->rx_idx); +- +- mtk_hsdma_chan_done(hsdma, chan); +-} +- +-static void mtk_hsdma_tasklet(struct tasklet_struct *t) +-{ +- struct mtk_hsdam_engine *hsdma = 
from_tasklet(hsdma, t, task); +- +- mtk_hsdma_rx(hsdma); +- mtk_hsdma_tx(hsdma); +-} +- +-static int mtk_hsdam_alloc_desc(struct mtk_hsdam_engine *hsdma, +- struct mtk_hsdma_chan *chan) +-{ +- int i; +- +- chan->tx_ring = dma_alloc_coherent(hsdma->ddev.dev, +- 2 * HSDMA_DESCS_NUM * +- sizeof(*chan->tx_ring), +- &chan->desc_addr, GFP_ATOMIC | __GFP_ZERO); +- if (!chan->tx_ring) +- goto no_mem; +- +- chan->rx_ring = &chan->tx_ring[HSDMA_DESCS_NUM]; +- +- /* init tx ring value */ +- for (i = 0; i < HSDMA_DESCS_NUM; i++) +- chan->tx_ring[i].flags = HSDMA_DESC_LS0 | HSDMA_DESC_DONE; +- +- return 0; +-no_mem: +- return -ENOMEM; +-} +- +-static void mtk_hsdam_free_desc(struct mtk_hsdam_engine *hsdma, +- struct mtk_hsdma_chan *chan) +-{ +- if (chan->tx_ring) { +- dma_free_coherent(hsdma->ddev.dev, +- 2 * HSDMA_DESCS_NUM * sizeof(*chan->tx_ring), +- chan->tx_ring, chan->desc_addr); +- chan->tx_ring = NULL; +- chan->rx_ring = NULL; +- } +-} +- +-static int mtk_hsdma_init(struct mtk_hsdam_engine *hsdma) +-{ +- struct mtk_hsdma_chan *chan; +- int ret; +- u32 reg; +- +- /* init desc */ +- chan = &hsdma->chan[0]; +- ret = mtk_hsdam_alloc_desc(hsdma, chan); +- if (ret) +- return ret; +- +- /* tx */ +- mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, chan->desc_addr); +- mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, HSDMA_DESCS_NUM); +- /* rx */ +- mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, chan->desc_addr + +- (sizeof(struct hsdma_desc) * HSDMA_DESCS_NUM)); +- mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, HSDMA_DESCS_NUM); +- /* reset */ +- mtk_hsdma_reset_chan(hsdma, chan); +- +- /* enable rx intr */ +- mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, HSDMA_INT_RX_Q0); +- +- /* enable dma */ +- mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, HSDMA_GLO_DEFAULT); +- +- /* hardware info */ +- reg = mtk_hsdma_read(hsdma, HSDMA_REG_INFO); +- dev_info(hsdma->ddev.dev, "rx: %d, tx: %d\n", +- (reg >> HSDMA_INFO_RX_SHIFT) & HSDMA_INFO_RX_MASK, +- (reg >> HSDMA_INFO_TX_SHIFT) & HSDMA_INFO_TX_MASK); +- +- 
hsdma_dump_reg(hsdma); +- +- return ret; +-} +- +-static void mtk_hsdma_uninit(struct mtk_hsdam_engine *hsdma) +-{ +- struct mtk_hsdma_chan *chan; +- +- /* disable dma */ +- mtk_hsdma_write(hsdma, HSDMA_REG_GLO_CFG, 0); +- +- /* disable intr */ +- mtk_hsdma_write(hsdma, HSDMA_REG_INT_MASK, 0); +- +- /* free desc */ +- chan = &hsdma->chan[0]; +- mtk_hsdam_free_desc(hsdma, chan); +- +- /* tx */ +- mtk_hsdma_write(hsdma, HSDMA_REG_TX_BASE, 0); +- mtk_hsdma_write(hsdma, HSDMA_REG_TX_CNT, 0); +- /* rx */ +- mtk_hsdma_write(hsdma, HSDMA_REG_RX_BASE, 0); +- mtk_hsdma_write(hsdma, HSDMA_REG_RX_CNT, 0); +- /* reset */ +- mtk_hsdma_reset_chan(hsdma, chan); +-} +- +-static const struct of_device_id mtk_hsdma_of_match[] = { +- { .compatible = "mediatek,mt7621-hsdma" }, +- { }, +-}; +- +-static int mtk_hsdma_probe(struct platform_device *pdev) +-{ +- const struct of_device_id *match; +- struct mtk_hsdma_chan *chan; +- struct mtk_hsdam_engine *hsdma; +- struct dma_device *dd; +- int ret; +- int irq; +- void __iomem *base; +- +- ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); +- if (ret) +- return ret; +- +- match = of_match_device(mtk_hsdma_of_match, &pdev->dev); +- if (!match) +- return -EINVAL; +- +- hsdma = devm_kzalloc(&pdev->dev, sizeof(*hsdma), GFP_KERNEL); +- if (!hsdma) +- return -EINVAL; +- +- base = devm_platform_ioremap_resource(pdev, 0); +- if (IS_ERR(base)) +- return PTR_ERR(base); +- hsdma->base = base + HSDMA_BASE_OFFSET; +- tasklet_setup(&hsdma->task, mtk_hsdma_tasklet); +- +- irq = platform_get_irq(pdev, 0); +- if (irq < 0) +- return -EINVAL; +- ret = devm_request_irq(&pdev->dev, irq, mtk_hsdma_irq, +- 0, dev_name(&pdev->dev), hsdma); +- if (ret) { +- dev_err(&pdev->dev, "failed to request irq\n"); +- return ret; +- } +- +- device_reset(&pdev->dev); +- +- dd = &hsdma->ddev; +- dma_cap_set(DMA_MEMCPY, dd->cap_mask); +- dd->copy_align = HSDMA_ALIGN_SIZE; +- dd->device_free_chan_resources = mtk_hsdma_free_chan_resources; +- dd->device_prep_dma_memcpy 
= mtk_hsdma_prep_dma_memcpy; +- dd->device_terminate_all = mtk_hsdma_terminate_all; +- dd->device_tx_status = mtk_hsdma_tx_status; +- dd->device_issue_pending = mtk_hsdma_issue_pending; +- dd->dev = &pdev->dev; +- dd->dev->dma_parms = &hsdma->dma_parms; +- dma_set_max_seg_size(dd->dev, HSDMA_MAX_PLEN); +- INIT_LIST_HEAD(&dd->channels); +- +- chan = &hsdma->chan[0]; +- chan->id = 0; +- chan->vchan.desc_free = mtk_hsdma_desc_free; +- vchan_init(&chan->vchan, dd); +- +- /* init hardware */ +- ret = mtk_hsdma_init(hsdma); +- if (ret) { +- dev_err(&pdev->dev, "failed to alloc ring descs\n"); +- return ret; +- } +- +- ret = dma_async_device_register(dd); +- if (ret) { +- dev_err(&pdev->dev, "failed to register dma device\n"); +- goto err_uninit_hsdma; +- } +- +- ret = of_dma_controller_register(pdev->dev.of_node, +- of_dma_xlate_by_chan_id, hsdma); +- if (ret) { +- dev_err(&pdev->dev, "failed to register of dma controller\n"); +- goto err_unregister; +- } +- +- platform_set_drvdata(pdev, hsdma); +- +- return 0; +- +-err_unregister: +- dma_async_device_unregister(dd); +-err_uninit_hsdma: +- mtk_hsdma_uninit(hsdma); +- return ret; +-} +- +-static int mtk_hsdma_remove(struct platform_device *pdev) +-{ +- struct mtk_hsdam_engine *hsdma = platform_get_drvdata(pdev); +- +- mtk_hsdma_uninit(hsdma); +- +- of_dma_controller_free(pdev->dev.of_node); +- dma_async_device_unregister(&hsdma->ddev); +- +- return 0; +-} +- +-static struct platform_driver mtk_hsdma_driver = { +- .probe = mtk_hsdma_probe, +- .remove = mtk_hsdma_remove, +- .driver = { +- .name = "hsdma-mt7621", +- .of_match_table = mtk_hsdma_of_match, +- }, +-}; +-module_platform_driver(mtk_hsdma_driver); +- +-MODULE_AUTHOR("Michael Lee "); +-MODULE_DESCRIPTION("MTK HSDMA driver"); +-MODULE_LICENSE("GPL v2"); +diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +index 99bfc828672c2..497b4e0c358cc 100644 +--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c ++++ 
b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +@@ -41,6 +41,7 @@ static const struct usb_device_id rtw_usb_id_tbl[] = { + {USB_DEVICE(0x2357, 0x0111)}, /* TP-Link TL-WN727N v5.21 */ + {USB_DEVICE(0x2C4E, 0x0102)}, /* MERCUSYS MW150US v2 */ + {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ ++ {USB_DEVICE(0x7392, 0xb811)}, /* Edimax EW-7811UN V2 */ + {USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill RNX-N150NUB */ + {} /* Terminating entry */ + }; +diff --git a/drivers/staging/rtl8723bs/os_dep/wifi_regd.c b/drivers/staging/rtl8723bs/os_dep/wifi_regd.c +index 578b9f734231e..65592bf84f380 100644 +--- a/drivers/staging/rtl8723bs/os_dep/wifi_regd.c ++++ b/drivers/staging/rtl8723bs/os_dep/wifi_regd.c +@@ -34,7 +34,7 @@ + NL80211_RRF_PASSIVE_SCAN) + + static const struct ieee80211_regdomain rtw_regdom_rd = { +- .n_reg_rules = 3, ++ .n_reg_rules = 2, + .alpha2 = "99", + .reg_rules = { + RTW_2GHZ_CH01_11, +diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +index 01125d9f991bb..3d378da119e7a 100644 +--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c ++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +@@ -953,7 +953,7 @@ static int vchiq_irq_queue_bulk_tx_rx(struct vchiq_instance *instance, + struct vchiq_service *service; + struct bulk_waiter_node *waiter = NULL; + bool found = false; +- void *userdata = NULL; ++ void *userdata; + int status = 0; + int ret; + +@@ -992,6 +992,8 @@ static int vchiq_irq_queue_bulk_tx_rx(struct vchiq_instance *instance, + "found bulk_waiter %pK for pid %d", waiter, + current->pid); + userdata = &waiter->bulk_waiter; ++ } else { ++ userdata = args->userdata; + } + + /* +@@ -1712,7 +1714,7 @@ vchiq_compat_ioctl_queue_bulk(struct file *file, + { + struct vchiq_queue_bulk_transfer32 args32; + struct vchiq_queue_bulk_transfer args; +- enum vchiq_bulk_dir dir = (cmd == VCHIQ_IOC_QUEUE_BULK_TRANSMIT) ? 
++ enum vchiq_bulk_dir dir = (cmd == VCHIQ_IOC_QUEUE_BULK_TRANSMIT32) ? + VCHIQ_BULK_TRANSMIT : VCHIQ_BULK_RECEIVE; + + if (copy_from_user(&args32, argp, sizeof(args32))) +diff --git a/drivers/staging/wfx/data_tx.c b/drivers/staging/wfx/data_tx.c +index 36b36ef39d053..77fb104efdec1 100644 +--- a/drivers/staging/wfx/data_tx.c ++++ b/drivers/staging/wfx/data_tx.c +@@ -331,6 +331,7 @@ static int wfx_tx_inner(struct wfx_vif *wvif, struct ieee80211_sta *sta, + { + struct hif_msg *hif_msg; + struct hif_req_tx *req; ++ struct wfx_tx_priv *tx_priv; + struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb); + struct ieee80211_key_conf *hw_key = tx_info->control.hw_key; + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; +@@ -344,11 +345,14 @@ static int wfx_tx_inner(struct wfx_vif *wvif, struct ieee80211_sta *sta, + + // From now tx_info->control is unusable + memset(tx_info->rate_driver_data, 0, sizeof(struct wfx_tx_priv)); ++ // Fill tx_priv ++ tx_priv = (struct wfx_tx_priv *)tx_info->rate_driver_data; ++ tx_priv->icv_size = wfx_tx_get_icv_len(hw_key); + + // Fill hif_msg + WARN(skb_headroom(skb) < wmsg_len, "not enough space in skb"); + WARN(offset & 1, "attempt to transmit an unaligned frame"); +- skb_put(skb, wfx_tx_get_icv_len(hw_key)); ++ skb_put(skb, tx_priv->icv_size); + skb_push(skb, wmsg_len); + memset(skb->data, 0, wmsg_len); + hif_msg = (struct hif_msg *)skb->data; +@@ -484,6 +488,7 @@ static void wfx_tx_fill_rates(struct wfx_dev *wdev, + + void wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct hif_cnf_tx *arg) + { ++ const struct wfx_tx_priv *tx_priv; + struct ieee80211_tx_info *tx_info; + struct wfx_vif *wvif; + struct sk_buff *skb; +@@ -495,6 +500,7 @@ void wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct hif_cnf_tx *arg) + return; + } + tx_info = IEEE80211_SKB_CB(skb); ++ tx_priv = wfx_skb_tx_priv(skb); + wvif = wdev_to_wvif(wdev, ((struct hif_msg *)skb->data)->interface); + WARN_ON(!wvif); + if (!wvif) +@@ -503,6 +509,8 @@ void 
wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct hif_cnf_tx *arg) + // Note that wfx_pending_get_pkt_us_delay() get data from tx_info + _trace_tx_stats(arg, skb, wfx_pending_get_pkt_us_delay(wdev, skb)); + wfx_tx_fill_rates(wdev, tx_info, arg); ++ skb_trim(skb, skb->len - tx_priv->icv_size); ++ + // From now, you can touch to tx_info->status, but do not touch to + // tx_priv anymore + // FIXME: use ieee80211_tx_info_clear_status() +diff --git a/drivers/staging/wfx/data_tx.h b/drivers/staging/wfx/data_tx.h +index 46c9fff7a870e..401363d6b563a 100644 +--- a/drivers/staging/wfx/data_tx.h ++++ b/drivers/staging/wfx/data_tx.h +@@ -35,6 +35,7 @@ struct tx_policy_cache { + + struct wfx_tx_priv { + ktime_t xmit_timestamp; ++ unsigned char icv_size; + }; + + void wfx_tx_policy_init(struct wfx_vif *wvif); +diff --git a/drivers/target/iscsi/cxgbit/cxgbit_target.c b/drivers/target/iscsi/cxgbit/cxgbit_target.c +index 9b3eb2e8c92ad..b926e1d6c7b8e 100644 +--- a/drivers/target/iscsi/cxgbit/cxgbit_target.c ++++ b/drivers/target/iscsi/cxgbit/cxgbit_target.c +@@ -86,8 +86,7 @@ static int cxgbit_is_ofld_imm(const struct sk_buff *skb) + if (likely(cxgbit_skcb_flags(skb) & SKCBF_TX_ISO)) + length += sizeof(struct cpl_tx_data_iso); + +-#define MAX_IMM_TX_PKT_LEN 256 +- return length <= MAX_IMM_TX_PKT_LEN; ++ return length <= MAX_IMM_OFLD_TX_DATA_WR_LEN; + } + + /* +diff --git a/drivers/tee/optee/rpc.c b/drivers/tee/optee/rpc.c +index 1e3614e4798f0..6cbb3643c6c48 100644 +--- a/drivers/tee/optee/rpc.c ++++ b/drivers/tee/optee/rpc.c +@@ -54,8 +54,9 @@ bad: + static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, + struct optee_msg_arg *arg) + { +- struct i2c_client client = { 0 }; + struct tee_param *params; ++ struct i2c_adapter *adapter; ++ struct i2c_msg msg = { }; + size_t i; + int ret = -EOPNOTSUPP; + u8 attr[] = { +@@ -85,48 +86,48 @@ static void handle_rpc_func_cmd_i2c_transfer(struct tee_context *ctx, + goto bad; + } + +- client.adapter = 
i2c_get_adapter(params[0].u.value.b); +- if (!client.adapter) ++ adapter = i2c_get_adapter(params[0].u.value.b); ++ if (!adapter) + goto bad; + + if (params[1].u.value.a & OPTEE_MSG_RPC_CMD_I2C_FLAGS_TEN_BIT) { +- if (!i2c_check_functionality(client.adapter, ++ if (!i2c_check_functionality(adapter, + I2C_FUNC_10BIT_ADDR)) { +- i2c_put_adapter(client.adapter); ++ i2c_put_adapter(adapter); + goto bad; + } + +- client.flags = I2C_CLIENT_TEN; ++ msg.flags = I2C_M_TEN; + } + +- client.addr = params[0].u.value.c; +- snprintf(client.name, I2C_NAME_SIZE, "i2c%d", client.adapter->nr); ++ msg.addr = params[0].u.value.c; ++ msg.buf = params[2].u.memref.shm->kaddr; ++ msg.len = params[2].u.memref.size; + + switch (params[0].u.value.a) { + case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_RD: +- ret = i2c_master_recv(&client, params[2].u.memref.shm->kaddr, +- params[2].u.memref.size); ++ msg.flags |= I2C_M_RD; + break; + case OPTEE_MSG_RPC_CMD_I2C_TRANSFER_WR: +- ret = i2c_master_send(&client, params[2].u.memref.shm->kaddr, +- params[2].u.memref.size); + break; + default: +- i2c_put_adapter(client.adapter); ++ i2c_put_adapter(adapter); + goto bad; + } + ++ ret = i2c_transfer(adapter, &msg, 1); ++ + if (ret < 0) { + arg->ret = TEEC_ERROR_COMMUNICATION; + } else { +- params[3].u.value.a = ret; ++ params[3].u.value.a = msg.len; + if (optee_to_msg_param(arg->params, arg->num_params, params)) + arg->ret = TEEC_ERROR_BAD_PARAMETERS; + else + arg->ret = TEEC_SUCCESS; + } + +- i2c_put_adapter(client.adapter); ++ i2c_put_adapter(adapter); + kfree(params); + return; + bad: +diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c +index 612f063c1cfcd..ddc166e3a93eb 100644 +--- a/drivers/thermal/cpufreq_cooling.c ++++ b/drivers/thermal/cpufreq_cooling.c +@@ -441,7 +441,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev, + frequency = get_state_freq(cpufreq_cdev, state); + + ret = freq_qos_update_request(&cpufreq_cdev->qos_req, frequency); +- if (ret > 0) 
{ ++ if (ret >= 0) { + cpufreq_cdev->cpufreq_state = state; + cpus = cpufreq_cdev->policy->cpus; + max_capacity = arch_scale_cpu_capacity(cpumask_first(cpus)); +diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c +index 25f3152089c2a..fea1eeac5b907 100644 +--- a/drivers/tty/n_gsm.c ++++ b/drivers/tty/n_gsm.c +@@ -2557,7 +2557,8 @@ static void gsmld_write_wakeup(struct tty_struct *tty) + */ + + static ssize_t gsmld_read(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t nr) ++ unsigned char *buf, size_t nr, ++ void **cookie, unsigned long offset) + { + return -EOPNOTSUPP; + } +diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c +index 12557ee1edb68..1363e659dc1db 100644 +--- a/drivers/tty/n_hdlc.c ++++ b/drivers/tty/n_hdlc.c +@@ -416,13 +416,19 @@ static void n_hdlc_tty_receive(struct tty_struct *tty, const __u8 *data, + * Returns the number of bytes returned or error code. + */ + static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file, +- __u8 __user *buf, size_t nr) ++ __u8 *kbuf, size_t nr, ++ void **cookie, unsigned long offset) + { + struct n_hdlc *n_hdlc = tty->disc_data; + int ret = 0; + struct n_hdlc_buf *rbuf; + DECLARE_WAITQUEUE(wait, current); + ++ /* Is this a repeated call for an rbuf we already found earlier? 
*/ ++ rbuf = *cookie; ++ if (rbuf) ++ goto have_rbuf; ++ + add_wait_queue(&tty->read_wait, &wait); + + for (;;) { +@@ -436,25 +442,8 @@ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file, + set_current_state(TASK_INTERRUPTIBLE); + + rbuf = n_hdlc_buf_get(&n_hdlc->rx_buf_list); +- if (rbuf) { +- if (rbuf->count > nr) { +- /* too large for caller's buffer */ +- ret = -EOVERFLOW; +- } else { +- __set_current_state(TASK_RUNNING); +- if (copy_to_user(buf, rbuf->buf, rbuf->count)) +- ret = -EFAULT; +- else +- ret = rbuf->count; +- } +- +- if (n_hdlc->rx_free_buf_list.count > +- DEFAULT_RX_BUF_COUNT) +- kfree(rbuf); +- else +- n_hdlc_buf_put(&n_hdlc->rx_free_buf_list, rbuf); ++ if (rbuf) + break; +- } + + /* no data */ + if (tty_io_nonblock(tty, file)) { +@@ -473,6 +462,39 @@ static ssize_t n_hdlc_tty_read(struct tty_struct *tty, struct file *file, + remove_wait_queue(&tty->read_wait, &wait); + __set_current_state(TASK_RUNNING); + ++ if (!rbuf) ++ return ret; ++ *cookie = rbuf; ++ ++have_rbuf: ++ /* Have we used it up entirely? */ ++ if (offset >= rbuf->count) ++ goto done_with_rbuf; ++ ++ /* More data to go, but can't copy any more? 
EOVERFLOW */ ++ ret = -EOVERFLOW; ++ if (!nr) ++ goto done_with_rbuf; ++ ++ /* Copy as much data as possible */ ++ ret = rbuf->count - offset; ++ if (ret > nr) ++ ret = nr; ++ memcpy(kbuf, rbuf->buf+offset, ret); ++ offset += ret; ++ ++ /* If we still have data left, we leave the rbuf in the cookie */ ++ if (offset < rbuf->count) ++ return ret; ++ ++done_with_rbuf: ++ *cookie = NULL; ++ ++ if (n_hdlc->rx_free_buf_list.count > DEFAULT_RX_BUF_COUNT) ++ kfree(rbuf); ++ else ++ n_hdlc_buf_put(&n_hdlc->rx_free_buf_list, rbuf); ++ + return ret; + + } /* end of n_hdlc_tty_read() */ +diff --git a/drivers/tty/n_null.c b/drivers/tty/n_null.c +index 96feabae47407..ce03ae78f5c6a 100644 +--- a/drivers/tty/n_null.c ++++ b/drivers/tty/n_null.c +@@ -20,7 +20,8 @@ static void n_null_close(struct tty_struct *tty) + } + + static ssize_t n_null_read(struct tty_struct *tty, struct file *file, +- unsigned char __user * buf, size_t nr) ++ unsigned char *buf, size_t nr, ++ void **cookie, unsigned long offset) + { + return -EOPNOTSUPP; + } +diff --git a/drivers/tty/n_r3964.c b/drivers/tty/n_r3964.c +index 934dd2fb2ec80..3161f0a535e37 100644 +--- a/drivers/tty/n_r3964.c ++++ b/drivers/tty/n_r3964.c +@@ -129,7 +129,7 @@ static void remove_client_block(struct r3964_info *pInfo, + static int r3964_open(struct tty_struct *tty); + static void r3964_close(struct tty_struct *tty); + static ssize_t r3964_read(struct tty_struct *tty, struct file *file, +- unsigned char __user * buf, size_t nr); ++ void *cookie, unsigned char *buf, size_t nr); + static ssize_t r3964_write(struct tty_struct *tty, struct file *file, + const unsigned char *buf, size_t nr); + static int r3964_ioctl(struct tty_struct *tty, struct file *file, +@@ -1058,7 +1058,8 @@ static void r3964_close(struct tty_struct *tty) + } + + static ssize_t r3964_read(struct tty_struct *tty, struct file *file, +- unsigned char __user * buf, size_t nr) ++ unsigned char *kbuf, size_t nr, ++ void **cookie, unsigned long offset) + { + struct 
r3964_info *pInfo = tty->disc_data; + struct r3964_client_info *pClient; +@@ -1109,10 +1110,7 @@ static ssize_t r3964_read(struct tty_struct *tty, struct file *file, + kfree(pMsg); + TRACE_M("r3964_read - msg kfree %p", pMsg); + +- if (copy_to_user(buf, &theMsg, ret)) { +- ret = -EFAULT; +- goto unlock; +- } ++ memcpy(kbuf, &theMsg, ret); + + TRACE_PS("read - return %d", ret); + goto unlock; +diff --git a/drivers/tty/n_tracerouter.c b/drivers/tty/n_tracerouter.c +index 4479af4d2fa5c..3490ed51b1a3c 100644 +--- a/drivers/tty/n_tracerouter.c ++++ b/drivers/tty/n_tracerouter.c +@@ -118,7 +118,9 @@ static void n_tracerouter_close(struct tty_struct *tty) + * -EINVAL + */ + static ssize_t n_tracerouter_read(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t nr) { ++ unsigned char *buf, size_t nr, ++ void **cookie, unsigned long offset) ++{ + return -EINVAL; + } + +diff --git a/drivers/tty/n_tracesink.c b/drivers/tty/n_tracesink.c +index d96ba82cc3569..1d9931041fd8b 100644 +--- a/drivers/tty/n_tracesink.c ++++ b/drivers/tty/n_tracesink.c +@@ -115,7 +115,9 @@ static void n_tracesink_close(struct tty_struct *tty) + * -EINVAL + */ + static ssize_t n_tracesink_read(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t nr) { ++ unsigned char *buf, size_t nr, ++ void **cookie, unsigned long offset) ++{ + return -EINVAL; + } + +diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c +index c2869489ba681..e8963165082ee 100644 +--- a/drivers/tty/n_tty.c ++++ b/drivers/tty/n_tty.c +@@ -164,29 +164,24 @@ static void zero_buffer(struct tty_struct *tty, u8 *buffer, int size) + memset(buffer, 0x00, size); + } + +-static int tty_copy_to_user(struct tty_struct *tty, void __user *to, +- size_t tail, size_t n) ++static void tty_copy(struct tty_struct *tty, void *to, size_t tail, size_t n) + { + struct n_tty_data *ldata = tty->disc_data; + size_t size = N_TTY_BUF_SIZE - tail; + void *from = read_buf_addr(ldata, tail); +- int uncopied; + + 
if (n > size) { + tty_audit_add_data(tty, from, size); +- uncopied = copy_to_user(to, from, size); +- zero_buffer(tty, from, size - uncopied); +- if (uncopied) +- return uncopied; ++ memcpy(to, from, size); ++ zero_buffer(tty, from, size); + to += size; + n -= size; + from = ldata->read_buf; + } + + tty_audit_add_data(tty, from, n); +- uncopied = copy_to_user(to, from, n); +- zero_buffer(tty, from, n - uncopied); +- return uncopied; ++ memcpy(to, from, n); ++ zero_buffer(tty, from, n); + } + + /** +@@ -1942,15 +1937,16 @@ static inline int input_available_p(struct tty_struct *tty, int poll) + /** + * copy_from_read_buf - copy read data directly + * @tty: terminal device +- * @b: user data ++ * @kbp: data + * @nr: size of data + * + * Helper function to speed up n_tty_read. It is only called when +- * ICANON is off; it copies characters straight from the tty queue to +- * user space directly. It can be profitably called twice; once to +- * drain the space from the tail pointer to the (physical) end of the +- * buffer, and once to drain the space from the (physical) beginning of +- * the buffer to head pointer. ++ * ICANON is off; it copies characters straight from the tty queue. ++ * ++ * It can be profitably called twice; once to drain the space from ++ * the tail pointer to the (physical) end of the buffer, and once ++ * to drain the space from the (physical) beginning of the buffer ++ * to head pointer. 
+ * + * Called under the ldata->atomic_read_lock sem + * +@@ -1960,7 +1956,7 @@ static inline int input_available_p(struct tty_struct *tty, int poll) + */ + + static int copy_from_read_buf(struct tty_struct *tty, +- unsigned char __user **b, ++ unsigned char **kbp, + size_t *nr) + + { +@@ -1976,8 +1972,7 @@ static int copy_from_read_buf(struct tty_struct *tty, + n = min(*nr, n); + if (n) { + unsigned char *from = read_buf_addr(ldata, tail); +- retval = copy_to_user(*b, from, n); +- n -= retval; ++ memcpy(*kbp, from, n); + is_eof = n == 1 && *from == EOF_CHAR(tty); + tty_audit_add_data(tty, from, n); + zero_buffer(tty, from, n); +@@ -1986,7 +1981,7 @@ static int copy_from_read_buf(struct tty_struct *tty, + if (L_EXTPROC(tty) && ldata->icanon && is_eof && + (head == ldata->read_tail)) + n = 0; +- *b += n; ++ *kbp += n; + *nr -= n; + } + return retval; +@@ -1995,12 +1990,12 @@ static int copy_from_read_buf(struct tty_struct *tty, + /** + * canon_copy_from_read_buf - copy read data in canonical mode + * @tty: terminal device +- * @b: user data ++ * @kbp: data + * @nr: size of data + * + * Helper function for n_tty_read. It is only called when ICANON is on; + * it copies one line of input up to and including the line-delimiting +- * character into the user-space buffer. ++ * character into the result buffer. + * + * NB: When termios is changed from non-canonical to canonical mode and + * the read buffer contains data, n_tty_set_termios() simulates an EOF +@@ -2016,14 +2011,14 @@ static int copy_from_read_buf(struct tty_struct *tty, + */ + + static int canon_copy_from_read_buf(struct tty_struct *tty, +- unsigned char __user **b, ++ unsigned char **kbp, + size_t *nr) + { + struct n_tty_data *ldata = tty->disc_data; + size_t n, size, more, c; + size_t eol; + size_t tail; +- int ret, found = 0; ++ int found = 0; + + /* N.B. 
avoid overrun if nr == 0 */ + if (!*nr) +@@ -2059,10 +2054,8 @@ static int canon_copy_from_read_buf(struct tty_struct *tty, + n_tty_trace("%s: eol:%zu found:%d n:%zu c:%zu tail:%zu more:%zu\n", + __func__, eol, found, n, c, tail, more); + +- ret = tty_copy_to_user(tty, *b, tail, n); +- if (ret) +- return -EFAULT; +- *b += n; ++ tty_copy(tty, *kbp, tail, n); ++ *kbp += n; + *nr -= n; + + if (found) +@@ -2127,10 +2120,11 @@ static int job_control(struct tty_struct *tty, struct file *file) + */ + + static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t nr) ++ unsigned char *kbuf, size_t nr, ++ void **cookie, unsigned long offset) + { + struct n_tty_data *ldata = tty->disc_data; +- unsigned char __user *b = buf; ++ unsigned char *kb = kbuf; + DEFINE_WAIT_FUNC(wait, woken_wake_function); + int c; + int minimum, time; +@@ -2176,17 +2170,13 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, + /* First test for status change. */ + if (packet && tty->link->ctrl_status) { + unsigned char cs; +- if (b != buf) ++ if (kb != kbuf) + break; + spin_lock_irq(&tty->link->ctrl_lock); + cs = tty->link->ctrl_status; + tty->link->ctrl_status = 0; + spin_unlock_irq(&tty->link->ctrl_lock); +- if (put_user(cs, b)) { +- retval = -EFAULT; +- break; +- } +- b++; ++ *kb++ = cs; + nr--; + break; + } +@@ -2229,24 +2219,20 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, + } + + if (ldata->icanon && !L_EXTPROC(tty)) { +- retval = canon_copy_from_read_buf(tty, &b, &nr); ++ retval = canon_copy_from_read_buf(tty, &kb, &nr); + if (retval) + break; + } else { + int uncopied; + + /* Deal with packet mode. 
*/ +- if (packet && b == buf) { +- if (put_user(TIOCPKT_DATA, b)) { +- retval = -EFAULT; +- break; +- } +- b++; ++ if (packet && kb == kbuf) { ++ *kb++ = TIOCPKT_DATA; + nr--; + } + +- uncopied = copy_from_read_buf(tty, &b, &nr); +- uncopied += copy_from_read_buf(tty, &b, &nr); ++ uncopied = copy_from_read_buf(tty, &kb, &nr); ++ uncopied += copy_from_read_buf(tty, &kb, &nr); + if (uncopied) { + retval = -EFAULT; + break; +@@ -2255,7 +2241,7 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, + + n_tty_check_unthrottle(tty); + +- if (b - buf >= minimum) ++ if (kb - kbuf >= minimum) + break; + if (time) + timeout = time; +@@ -2267,8 +2253,8 @@ static ssize_t n_tty_read(struct tty_struct *tty, struct file *file, + remove_wait_queue(&tty->read_wait, &wait); + mutex_unlock(&ldata->atomic_read_lock); + +- if (b - buf) +- retval = b - buf; ++ if (kb - kbuf) ++ retval = kb - kbuf; + + return retval; + } +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c +index 21cd5ac6ca8b5..3f55fe7293f31 100644 +--- a/drivers/tty/tty_io.c ++++ b/drivers/tty/tty_io.c +@@ -142,7 +142,7 @@ LIST_HEAD(tty_drivers); /* linked list of tty drivers */ + /* Mutex to protect creating and releasing a tty */ + DEFINE_MUTEX(tty_mutex); + +-static ssize_t tty_read(struct file *, char __user *, size_t, loff_t *); ++static ssize_t tty_read(struct kiocb *, struct iov_iter *); + static ssize_t tty_write(struct kiocb *, struct iov_iter *); + static __poll_t tty_poll(struct file *, poll_table *); + static int tty_open(struct inode *, struct file *); +@@ -473,8 +473,9 @@ static void tty_show_fdinfo(struct seq_file *m, struct file *file) + + static const struct file_operations tty_fops = { + .llseek = no_llseek, +- .read = tty_read, ++ .read_iter = tty_read, + .write_iter = tty_write, ++ .splice_read = generic_file_splice_read, + .splice_write = iter_file_splice_write, + .poll = tty_poll, + .unlocked_ioctl = tty_ioctl, +@@ -487,8 +488,9 @@ static const struct file_operations tty_fops 
= { + + static const struct file_operations console_fops = { + .llseek = no_llseek, +- .read = tty_read, ++ .read_iter = tty_read, + .write_iter = redirected_tty_write, ++ .splice_read = generic_file_splice_read, + .splice_write = iter_file_splice_write, + .poll = tty_poll, + .unlocked_ioctl = tty_ioctl, +@@ -830,6 +832,65 @@ static void tty_update_time(struct timespec64 *time) + time->tv_sec = sec; + } + ++/* ++ * Iterate on the ldisc ->read() function until we've gotten all ++ * the data the ldisc has for us. ++ * ++ * The "cookie" is something that the ldisc read function can fill ++ * in to let us know that there is more data to be had. ++ * ++ * We promise to continue to call the ldisc until it stops returning ++ * data or clears the cookie. The cookie may be something that the ++ * ldisc maintains state for and needs to free. ++ */ ++static int iterate_tty_read(struct tty_ldisc *ld, struct tty_struct *tty, ++ struct file *file, struct iov_iter *to) ++{ ++ int retval = 0; ++ void *cookie = NULL; ++ unsigned long offset = 0; ++ char kernel_buf[64]; ++ size_t count = iov_iter_count(to); ++ ++ do { ++ int size, copied; ++ ++ size = count > sizeof(kernel_buf) ? sizeof(kernel_buf) : count; ++ size = ld->ops->read(tty, file, kernel_buf, size, &cookie, offset); ++ if (!size) ++ break; ++ ++ /* ++ * A ldisc read error return will override any previously copied ++ * data (eg -EOVERFLOW from HDLC) ++ */ ++ if (size < 0) { ++ memzero_explicit(kernel_buf, sizeof(kernel_buf)); ++ return size; ++ } ++ ++ copied = copy_to_iter(kernel_buf, size, to); ++ offset += copied; ++ count -= copied; ++ ++ /* ++ * If the user copy failed, we still need to do another ->read() ++ * call if we had a cookie to let the ldisc clear up. ++ * ++ * But make sure size is zeroed. 
++ */ ++ if (unlikely(copied != size)) { ++ count = 0; ++ retval = -EFAULT; ++ } ++ } while (cookie); ++ ++ /* We always clear tty buffer in case they contained passwords */ ++ memzero_explicit(kernel_buf, sizeof(kernel_buf)); ++ return offset ? offset : retval; ++} ++ ++ + /** + * tty_read - read method for tty device files + * @file: pointer to tty file +@@ -845,10 +906,10 @@ static void tty_update_time(struct timespec64 *time) + * read calls may be outstanding in parallel. + */ + +-static ssize_t tty_read(struct file *file, char __user *buf, size_t count, +- loff_t *ppos) ++static ssize_t tty_read(struct kiocb *iocb, struct iov_iter *to) + { + int i; ++ struct file *file = iocb->ki_filp; + struct inode *inode = file_inode(file); + struct tty_struct *tty = file_tty(file); + struct tty_ldisc *ld; +@@ -861,12 +922,9 @@ static ssize_t tty_read(struct file *file, char __user *buf, size_t count, + /* We want to wait for the line discipline to sort out in this + situation */ + ld = tty_ldisc_ref_wait(tty); +- if (!ld) +- return hung_up_tty_read(file, buf, count, ppos); +- if (ld->ops->read) +- i = ld->ops->read(tty, file, buf, count); +- else +- i = -EIO; ++ i = -EIO; ++ if (ld && ld->ops->read) ++ i = iterate_tty_read(ld, tty, file, to); + tty_ldisc_deref(ld); + + if (i > 0) +@@ -2885,7 +2943,7 @@ static long tty_compat_ioctl(struct file *file, unsigned int cmd, + + static int this_tty(const void *t, struct file *file, unsigned fd) + { +- if (likely(file->f_op->read != tty_read)) ++ if (likely(file->f_op->read_iter != tty_read)) + return 0; + return file_tty(file) != t ? 
0 : fd + 1; + } +diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c +index e9ac215b96633..fc3269f5faf19 100644 +--- a/drivers/usb/dwc2/hcd.c ++++ b/drivers/usb/dwc2/hcd.c +@@ -1313,19 +1313,20 @@ static void dwc2_hc_start_transfer(struct dwc2_hsotg *hsotg, + if (num_packets > max_hc_pkt_count) { + num_packets = max_hc_pkt_count; + chan->xfer_len = num_packets * chan->max_packet; ++ } else if (chan->ep_is_in) { ++ /* ++ * Always program an integral # of max packets ++ * for IN transfers. ++ * Note: This assumes that the input buffer is ++ * aligned and sized accordingly. ++ */ ++ chan->xfer_len = num_packets * chan->max_packet; + } + } else { + /* Need 1 packet for transfer length of 0 */ + num_packets = 1; + } + +- if (chan->ep_is_in) +- /* +- * Always program an integral # of max packets for IN +- * transfers +- */ +- chan->xfer_len = num_packets * chan->max_packet; +- + if (chan->ep_type == USB_ENDPOINT_XFER_INT || + chan->ep_type == USB_ENDPOINT_XFER_ISOC) + /* +diff --git a/drivers/usb/dwc2/hcd_intr.c b/drivers/usb/dwc2/hcd_intr.c +index a052d39b4375e..d5f4ec1b73b15 100644 +--- a/drivers/usb/dwc2/hcd_intr.c ++++ b/drivers/usb/dwc2/hcd_intr.c +@@ -500,7 +500,7 @@ static int dwc2_update_urb_state(struct dwc2_hsotg *hsotg, + &short_read); + + if (urb->actual_length + xfer_length > urb->length) { +- dev_warn(hsotg->dev, "%s(): trimming xfer length\n", __func__); ++ dev_dbg(hsotg->dev, "%s(): trimming xfer length\n", __func__); + xfer_length = urb->length - urb->actual_length; + } + +@@ -1977,6 +1977,18 @@ error: + qtd->error_count++; + dwc2_update_urb_state_abn(hsotg, chan, chnum, qtd->urb, + qtd, DWC2_HC_XFER_XACT_ERR); ++ /* ++ * We can get here after a completed transaction ++ * (urb->actual_length >= urb->length) which was not reported ++ * as completed. If that is the case, and we do not abort ++ * the transfer, a transfer of size 0 will be enqueued ++ * subsequently. 
If urb->actual_length is not DMA-aligned, ++ * the buffer will then point to an unaligned address, and ++ * the resulting behavior is undefined. Bail out in that ++ * situation. ++ */ ++ if (qtd->urb->actual_length >= qtd->urb->length) ++ qtd->error_count = 3; + dwc2_hcd_save_data_toggle(hsotg, chan, chnum, qtd); + dwc2_halt_channel(hsotg, chan, qtd, DWC2_HC_XFER_XACT_ERR); + } +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index ee44321fee386..56f7235bc068c 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -605,8 +605,23 @@ static int dwc3_gadget_set_ep_config(struct dwc3_ep *dep, unsigned int action) + params.param0 |= DWC3_DEPCFG_FIFO_NUMBER(dep->number >> 1); + + if (desc->bInterval) { +- params.param1 |= DWC3_DEPCFG_BINTERVAL_M1(desc->bInterval - 1); +- dep->interval = 1 << (desc->bInterval - 1); ++ u8 bInterval_m1; ++ ++ /* ++ * Valid range for DEPCFG.bInterval_m1 is from 0 to 13, and it ++ * must be set to 0 when the controller operates in full-speed. 
++ */ ++ bInterval_m1 = min_t(u8, desc->bInterval - 1, 13); ++ if (dwc->gadget->speed == USB_SPEED_FULL) ++ bInterval_m1 = 0; ++ ++ if (usb_endpoint_type(desc) == USB_ENDPOINT_XFER_INT && ++ dwc->gadget->speed == USB_SPEED_FULL) ++ dep->interval = desc->bInterval; ++ else ++ dep->interval = 1 << (desc->bInterval - 1); ++ ++ params.param1 |= DWC3_DEPCFG_BINTERVAL_M1(bInterval_m1); + } + + return dwc3_send_gadget_ep_cmd(dep, DWC3_DEPCMD_SETEPCONFIG, ¶ms); +diff --git a/drivers/usb/gadget/function/u_audio.c b/drivers/usb/gadget/function/u_audio.c +index e6d32c5367812..908e49dafd620 100644 +--- a/drivers/usb/gadget/function/u_audio.c ++++ b/drivers/usb/gadget/function/u_audio.c +@@ -89,7 +89,12 @@ static void u_audio_iso_complete(struct usb_ep *ep, struct usb_request *req) + struct snd_uac_chip *uac = prm->uac; + + /* i/f shutting down */ +- if (!prm->ep_enabled || req->status == -ESHUTDOWN) ++ if (!prm->ep_enabled) { ++ usb_ep_free_request(ep, req); ++ return; ++ } ++ ++ if (req->status == -ESHUTDOWN) + return; + + /* +@@ -336,8 +341,14 @@ static inline void free_ep(struct uac_rtd_params *prm, struct usb_ep *ep) + + for (i = 0; i < params->req_number; i++) { + if (prm->ureq[i].req) { +- usb_ep_dequeue(ep, prm->ureq[i].req); +- usb_ep_free_request(ep, prm->ureq[i].req); ++ if (usb_ep_dequeue(ep, prm->ureq[i].req)) ++ usb_ep_free_request(ep, prm->ureq[i].req); ++ /* ++ * If usb_ep_dequeue() cannot successfully dequeue the ++ * request, the request will be freed by the completion ++ * callback. 
++ */ ++ + prm->ureq[i].req = NULL; + } + } +diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c +index 849e0b770130a..1cd87729ba604 100644 +--- a/drivers/usb/musb/musb_core.c ++++ b/drivers/usb/musb/musb_core.c +@@ -2240,32 +2240,35 @@ int musb_queue_resume_work(struct musb *musb, + { + struct musb_pending_work *w; + unsigned long flags; ++ bool is_suspended; + int error; + + if (WARN_ON(!callback)) + return -EINVAL; + +- if (pm_runtime_active(musb->controller)) +- return callback(musb, data); ++ spin_lock_irqsave(&musb->list_lock, flags); ++ is_suspended = musb->is_runtime_suspended; ++ ++ if (is_suspended) { ++ w = devm_kzalloc(musb->controller, sizeof(*w), GFP_ATOMIC); ++ if (!w) { ++ error = -ENOMEM; ++ goto out_unlock; ++ } + +- w = devm_kzalloc(musb->controller, sizeof(*w), GFP_ATOMIC); +- if (!w) +- return -ENOMEM; ++ w->callback = callback; ++ w->data = data; + +- w->callback = callback; +- w->data = data; +- spin_lock_irqsave(&musb->list_lock, flags); +- if (musb->is_runtime_suspended) { + list_add_tail(&w->node, &musb->pending_list); + error = 0; +- } else { +- dev_err(musb->controller, "could not add resume work %p\n", +- callback); +- devm_kfree(musb->controller, w); +- error = -EINPROGRESS; + } ++ ++out_unlock: + spin_unlock_irqrestore(&musb->list_lock, flags); + ++ if (!is_suspended) ++ error = callback(musb, data); ++ + return error; + } + EXPORT_SYMBOL_GPL(musb_queue_resume_work); +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index e0f4c3d9649cd..56cd70ba201c7 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -1386,8 +1386,9 @@ static int change_speed(struct tty_struct *tty, struct usb_serial_port *port) + index_value = get_ftdi_divisor(tty, port); + value = (u16)index_value; + index = (u16)(index_value >> 16); +- if ((priv->chip_type == FT2232C) || (priv->chip_type == FT2232H) || +- (priv->chip_type == FT4232H) || (priv->chip_type == FT232H)) { ++ if 
(priv->chip_type == FT2232C || priv->chip_type == FT2232H || ++ priv->chip_type == FT4232H || priv->chip_type == FT232H || ++ priv->chip_type == FTX) { + /* Probably the BM type needs the MSB of the encoded fractional + * divider also moved like for the chips above. Any infos? */ + index = (u16)((index << 8) | priv->interface); +diff --git a/drivers/usb/serial/mos7720.c b/drivers/usb/serial/mos7720.c +index 5a5d2a95070ed..b418a0d4adb89 100644 +--- a/drivers/usb/serial/mos7720.c ++++ b/drivers/usb/serial/mos7720.c +@@ -1250,8 +1250,10 @@ static int mos7720_write(struct tty_struct *tty, struct usb_serial_port *port, + if (urb->transfer_buffer == NULL) { + urb->transfer_buffer = kmalloc(URB_TRANSFER_BUFFER_SIZE, + GFP_ATOMIC); +- if (!urb->transfer_buffer) ++ if (!urb->transfer_buffer) { ++ bytes_sent = -ENOMEM; + goto exit; ++ } + } + transfer_size = min(count, URB_TRANSFER_BUFFER_SIZE); + +diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c +index 23f91d658cb46..30c25ef0dacd2 100644 +--- a/drivers/usb/serial/mos7840.c ++++ b/drivers/usb/serial/mos7840.c +@@ -883,8 +883,10 @@ static int mos7840_write(struct tty_struct *tty, struct usb_serial_port *port, + if (urb->transfer_buffer == NULL) { + urb->transfer_buffer = kmalloc(URB_TRANSFER_BUFFER_SIZE, + GFP_ATOMIC); +- if (!urb->transfer_buffer) ++ if (!urb->transfer_buffer) { ++ bytes_sent = -ENOMEM; + goto exit; ++ } + } + transfer_size = min(count, URB_TRANSFER_BUFFER_SIZE); + +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 2049e66f34a3f..c6969ca728390 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -1569,7 +1569,8 @@ static const struct usb_device_id option_ids[] = { + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1272, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1273, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1274, 0xff, 0xff, 0xff) }, +- { 
USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1275, 0xff, 0xff, 0xff) }, ++ { USB_DEVICE(ZTE_VENDOR_ID, 0x1275), /* ZTE P685M */ ++ .driver_info = RSVD(3) | RSVD(4) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1276, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1277, 0xff, 0xff, 0xff) }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1278, 0xff, 0xff, 0xff) }, +diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c +index be8067017eaa5..29dda60e3bcde 100644 +--- a/drivers/usb/serial/pl2303.c ++++ b/drivers/usb/serial/pl2303.c +@@ -183,6 +183,7 @@ struct pl2303_type_data { + speed_t max_baud_rate; + unsigned long quirks; + unsigned int no_autoxonxoff:1; ++ unsigned int no_divisors:1; + }; + + struct pl2303_serial_private { +@@ -209,6 +210,7 @@ static const struct pl2303_type_data pl2303_type_data[TYPE_COUNT] = { + }, + [TYPE_HXN] = { + .max_baud_rate = 12000000, ++ .no_divisors = true, + }, + }; + +@@ -571,8 +573,12 @@ static void pl2303_encode_baud_rate(struct tty_struct *tty, + baud = min_t(speed_t, baud, spriv->type->max_baud_rate); + /* + * Use direct method for supported baud rates, otherwise use divisors. ++ * Newer chip types do not support divisor encoding. 
+ */ +- baud_sup = pl2303_get_supported_baud_rate(baud); ++ if (spriv->type->no_divisors) ++ baud_sup = baud; ++ else ++ baud_sup = pl2303_get_supported_baud_rate(baud); + + if (baud == baud_sup) + baud = pl2303_encode_baud_rate_direct(buf, baud); +diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c +index c6529f7c3034a..5a86ede36c1ca 100644 +--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c ++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c +@@ -1805,7 +1805,7 @@ static void mlx5_vdpa_get_config(struct vdpa_device *vdev, unsigned int offset, + struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); + struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); + +- if (offset + len < sizeof(struct virtio_net_config)) ++ if (offset + len <= sizeof(struct virtio_net_config)) + memcpy(buf, (u8 *)&ndev->config + offset, len); + } + +diff --git a/drivers/vfio/pci/vfio_pci_zdev.c b/drivers/vfio/pci/vfio_pci_zdev.c +index 2296856340311..1bb7edac56899 100644 +--- a/drivers/vfio/pci/vfio_pci_zdev.c ++++ b/drivers/vfio/pci/vfio_pci_zdev.c +@@ -74,6 +74,8 @@ static int zpci_util_cap(struct zpci_dev *zdev, struct vfio_pci_device *vdev, + int ret; + + cap = kmalloc(cap_size, GFP_KERNEL); ++ if (!cap) ++ return -ENOMEM; + + cap->header.id = VFIO_DEVICE_INFO_CAP_ZPCI_UTIL; + cap->header.version = 1; +@@ -98,6 +100,8 @@ static int zpci_pfip_cap(struct zpci_dev *zdev, struct vfio_pci_device *vdev, + int ret; + + cap = kmalloc(cap_size, GFP_KERNEL); ++ if (!cap) ++ return -ENOMEM; + + cap->header.id = VFIO_DEVICE_INFO_CAP_ZPCI_PFIP; + cap->header.version = 1; +diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c +index 67e8276389951..fbd438e9b9b03 100644 +--- a/drivers/vfio/vfio_iommu_type1.c ++++ b/drivers/vfio/vfio_iommu_type1.c +@@ -24,6 +24,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -236,6 +237,18 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize) + } + } + ++static void 
vfio_iommu_populate_bitmap_full(struct vfio_iommu *iommu) ++{ ++ struct rb_node *n; ++ unsigned long pgshift = __ffs(iommu->pgsize_bitmap); ++ ++ for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) { ++ struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node); ++ ++ bitmap_set(dma->bitmap, 0, dma->size >> pgshift); ++ } ++} ++ + static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize) + { + struct rb_node *n; +@@ -419,9 +432,11 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm, + unsigned long vaddr, unsigned long *pfn, + bool write_fault) + { ++ pte_t *ptep; ++ spinlock_t *ptl; + int ret; + +- ret = follow_pfn(vma, vaddr, pfn); ++ ret = follow_pte(vma->vm_mm, vaddr, &ptep, &ptl); + if (ret) { + bool unlocked = false; + +@@ -435,9 +450,17 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm, + if (ret) + return ret; + +- ret = follow_pfn(vma, vaddr, pfn); ++ ret = follow_pte(vma->vm_mm, vaddr, &ptep, &ptl); ++ if (ret) ++ return ret; + } + ++ if (write_fault && !pte_write(*ptep)) ++ ret = -EFAULT; ++ else ++ *pfn = pte_pfn(*ptep); ++ ++ pte_unmap_unlock(ptep, ptl); + return ret; + } + +@@ -945,6 +968,7 @@ static long vfio_unmap_unpin(struct vfio_iommu *iommu, struct vfio_dma *dma, + + static void vfio_remove_dma(struct vfio_iommu *iommu, struct vfio_dma *dma) + { ++ WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list)); + vfio_unmap_unpin(iommu, dma, true); + vfio_unlink_dma(iommu, dma); + put_task_struct(dma->task); +@@ -2238,23 +2262,6 @@ static void vfio_iommu_unmap_unpin_reaccount(struct vfio_iommu *iommu) + } + } + +-static void vfio_sanity_check_pfn_list(struct vfio_iommu *iommu) +-{ +- struct rb_node *n; +- +- n = rb_first(&iommu->dma_list); +- for (; n; n = rb_next(n)) { +- struct vfio_dma *dma; +- +- dma = rb_entry(n, struct vfio_dma, node); +- +- if (WARN_ON(!RB_EMPTY_ROOT(&dma->pfn_list))) +- break; +- } +- /* mdev vendor driver must unregister notifier */ +- 
WARN_ON(iommu->notifier.head); +-} +- + /* + * Called when a domain is removed in detach. It is possible that + * the removed domain decided the iova aperture window. Modify the +@@ -2354,10 +2361,10 @@ static void vfio_iommu_type1_detach_group(void *iommu_data, + kfree(group); + + if (list_empty(&iommu->external_domain->group_list)) { +- vfio_sanity_check_pfn_list(iommu); +- +- if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) ++ if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) { ++ WARN_ON(iommu->notifier.head); + vfio_iommu_unmap_unpin_all(iommu); ++ } + + kfree(iommu->external_domain); + iommu->external_domain = NULL; +@@ -2391,10 +2398,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data, + */ + if (list_empty(&domain->group_list)) { + if (list_is_singular(&iommu->domain_list)) { +- if (!iommu->external_domain) ++ if (!iommu->external_domain) { ++ WARN_ON(iommu->notifier.head); + vfio_iommu_unmap_unpin_all(iommu); +- else ++ } else { + vfio_iommu_unmap_unpin_reaccount(iommu); ++ } + } + iommu_domain_free(domain->domain); + list_del(&domain->next); +@@ -2415,8 +2424,11 @@ detach_group_done: + * Removal of a group without dirty tracking may allow the iommu scope + * to be promoted. 
+ */ +- if (update_dirty_scope) ++ if (update_dirty_scope) { + update_pinned_page_dirty_scope(iommu); ++ if (iommu->dirty_page_tracking) ++ vfio_iommu_populate_bitmap_full(iommu); ++ } + mutex_unlock(&iommu->lock); + } + +@@ -2475,7 +2487,6 @@ static void vfio_iommu_type1_release(void *iommu_data) + + if (iommu->external_domain) { + vfio_release_domain(iommu->external_domain, true); +- vfio_sanity_check_pfn_list(iommu); + kfree(iommu->external_domain); + } + +diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig +index cfb7f5612ef0f..4f02db65dedec 100644 +--- a/drivers/video/fbdev/Kconfig ++++ b/drivers/video/fbdev/Kconfig +@@ -1269,6 +1269,7 @@ config FB_ATY + select FB_CFB_IMAGEBLIT + select FB_BACKLIGHT if FB_ATY_BACKLIGHT + select FB_MACMODES if PPC ++ select FB_ATY_CT if SPARC64 && PCI + help + This driver supports graphics boards with the ATI Mach64 chips. + Say Y if you have such a graphics board. +@@ -1279,7 +1280,6 @@ config FB_ATY + config FB_ATY_CT + bool "Mach64 CT/VT/GT/LT (incl. 3D RAGE) support" + depends on PCI && FB_ATY +- default y if SPARC64 && PCI + help + Say Y here to support use of ATI's 64-bit Rage boards (or other + boards based on the Mach64 CT, VT, GT, and LT chipsets) as a +diff --git a/drivers/virt/vboxguest/vboxguest_utils.c b/drivers/virt/vboxguest/vboxguest_utils.c +index ea05af41ec69e..8d195e3f83012 100644 +--- a/drivers/virt/vboxguest/vboxguest_utils.c ++++ b/drivers/virt/vboxguest/vboxguest_utils.c +@@ -468,7 +468,7 @@ static int hgcm_cancel_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call) + * Cancellation fun. 
+ */ + static int vbg_hgcm_do_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call, +- u32 timeout_ms, bool *leak_it) ++ u32 timeout_ms, bool interruptible, bool *leak_it) + { + int rc, cancel_rc, ret; + long timeout; +@@ -495,10 +495,15 @@ static int vbg_hgcm_do_call(struct vbg_dev *gdev, struct vmmdev_hgcm_call *call, + else + timeout = msecs_to_jiffies(timeout_ms); + +- timeout = wait_event_interruptible_timeout( +- gdev->hgcm_wq, +- hgcm_req_done(gdev, &call->header), +- timeout); ++ if (interruptible) { ++ timeout = wait_event_interruptible_timeout(gdev->hgcm_wq, ++ hgcm_req_done(gdev, &call->header), ++ timeout); ++ } else { ++ timeout = wait_event_timeout(gdev->hgcm_wq, ++ hgcm_req_done(gdev, &call->header), ++ timeout); ++ } + + /* timeout > 0 means hgcm_req_done has returned true, so success */ + if (timeout > 0) +@@ -631,7 +636,8 @@ int vbg_hgcm_call(struct vbg_dev *gdev, u32 requestor, u32 client_id, + hgcm_call_init_call(call, client_id, function, parms, parm_count, + bounce_bufs); + +- ret = vbg_hgcm_do_call(gdev, call, timeout_ms, &leak_it); ++ ret = vbg_hgcm_do_call(gdev, call, timeout_ms, ++ requestor & VMMDEV_REQUESTOR_USERMODE, &leak_it); + if (ret == 0) { + *vbox_status = call->header.result; + ret = hgcm_call_copy_back_result(call, parms, parm_count, +diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c +index cddf60b7309ca..974d02bb3a45c 100644 +--- a/drivers/w1/slaves/w1_therm.c ++++ b/drivers/w1/slaves/w1_therm.c +@@ -667,28 +667,24 @@ static inline int w1_DS18B20_get_resolution(struct w1_slave *sl) + */ + static inline int w1_DS18B20_convert_temp(u8 rom[9]) + { +- int t; +- u32 bv; ++ u16 bv; ++ s16 t; ++ ++ /* Signed 16-bit value to unsigned, cpu order */ ++ bv = le16_to_cpup((__le16 *)rom); + + /* Config register bit R2 = 1 - GX20MH01 in 13 or 14 bit resolution mode */ + if (rom[4] & 0x80) { +- /* Signed 16-bit value to unsigned, cpu order */ +- bv = le16_to_cpup((__le16 *)rom); +- + /* Insert two temperature bits 
from config register */ + /* Avoid arithmetic shift of signed value */ + bv = (bv << 2) | (rom[4] & 3); +- +- t = (int) sign_extend32(bv, 17); /* Degrees, lowest bit is 2^-6 */ +- return (t*1000)/64; /* Millidegrees */ ++ t = (s16) bv; /* Degrees, lowest bit is 2^-6 */ ++ return (int)t * 1000 / 64; /* Sign-extend to int; millidegrees */ + } +- +- t = (int)le16_to_cpup((__le16 *)rom); +- return t*1000/16; ++ t = (s16)bv; /* Degrees, lowest bit is 2^-4 */ ++ return (int)t * 1000 / 16; /* Sign-extend to int; millidegrees */ + } + +- +- + /** + * w1_DS18S20_convert_temp() - temperature computation for DS18S20 + * @rom: data read from device RAM (8 data bytes + 1 CRC byte) +diff --git a/drivers/watchdog/intel-mid_wdt.c b/drivers/watchdog/intel-mid_wdt.c +index 1ae03b64ef8bf..9b2173f765c8c 100644 +--- a/drivers/watchdog/intel-mid_wdt.c ++++ b/drivers/watchdog/intel-mid_wdt.c +@@ -154,6 +154,10 @@ static int mid_wdt_probe(struct platform_device *pdev) + watchdog_set_nowayout(wdt_dev, WATCHDOG_NOWAYOUT); + watchdog_set_drvdata(wdt_dev, mid); + ++ mid->scu = devm_intel_scu_ipc_dev_get(dev); ++ if (!mid->scu) ++ return -EPROBE_DEFER; ++ + ret = devm_request_irq(dev, pdata->irq, mid_wdt_irq, + IRQF_SHARED | IRQF_NO_SUSPEND, "watchdog", + wdt_dev); +@@ -162,10 +166,6 @@ static int mid_wdt_probe(struct platform_device *pdev) + return ret; + } + +- mid->scu = devm_intel_scu_ipc_dev_get(dev); +- if (!mid->scu) +- return -EPROBE_DEFER; +- + /* + * The firmware followed by U-Boot leaves the watchdog running + * with the default threshold which may vary. 
When we get here +diff --git a/drivers/watchdog/mei_wdt.c b/drivers/watchdog/mei_wdt.c +index 5391bf3e6b11d..c5967d8b4256a 100644 +--- a/drivers/watchdog/mei_wdt.c ++++ b/drivers/watchdog/mei_wdt.c +@@ -382,6 +382,7 @@ static int mei_wdt_register(struct mei_wdt *wdt) + + watchdog_set_drvdata(&wdt->wdd, wdt); + watchdog_stop_on_reboot(&wdt->wdd); ++ watchdog_stop_on_unregister(&wdt->wdd); + + ret = watchdog_register_device(&wdt->wdd); + if (ret) +diff --git a/drivers/watchdog/qcom-wdt.c b/drivers/watchdog/qcom-wdt.c +index cdf754233e53d..bdab184215d27 100644 +--- a/drivers/watchdog/qcom-wdt.c ++++ b/drivers/watchdog/qcom-wdt.c +@@ -22,7 +22,6 @@ enum wdt_reg { + }; + + #define QCOM_WDT_ENABLE BIT(0) +-#define QCOM_WDT_ENABLE_IRQ BIT(1) + + static const u32 reg_offset_data_apcs_tmr[] = { + [WDT_RST] = 0x38, +@@ -63,16 +62,6 @@ struct qcom_wdt *to_qcom_wdt(struct watchdog_device *wdd) + return container_of(wdd, struct qcom_wdt, wdd); + } + +-static inline int qcom_get_enable(struct watchdog_device *wdd) +-{ +- int enable = QCOM_WDT_ENABLE; +- +- if (wdd->pretimeout) +- enable |= QCOM_WDT_ENABLE_IRQ; +- +- return enable; +-} +- + static irqreturn_t qcom_wdt_isr(int irq, void *arg) + { + struct watchdog_device *wdd = arg; +@@ -91,7 +80,7 @@ static int qcom_wdt_start(struct watchdog_device *wdd) + writel(1, wdt_addr(wdt, WDT_RST)); + writel(bark * wdt->rate, wdt_addr(wdt, WDT_BARK_TIME)); + writel(wdd->timeout * wdt->rate, wdt_addr(wdt, WDT_BITE_TIME)); +- writel(qcom_get_enable(wdd), wdt_addr(wdt, WDT_EN)); ++ writel(QCOM_WDT_ENABLE, wdt_addr(wdt, WDT_EN)); + return 0; + } + +diff --git a/fs/affs/namei.c b/fs/affs/namei.c +index 41c5749f4db78..5400a876d73fb 100644 +--- a/fs/affs/namei.c ++++ b/fs/affs/namei.c +@@ -460,8 +460,10 @@ affs_xrename(struct inode *old_dir, struct dentry *old_dentry, + return -EIO; + + bh_new = affs_bread(sb, d_inode(new_dentry)->i_ino); +- if (!bh_new) ++ if (!bh_new) { ++ affs_brelse(bh_old); + return -EIO; ++ } + + /* Remove old header from 
its parent directory. */ + affs_lock_dir(old_dir); +diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c +index 553b4f6ec8639..6e447bdaf9ec8 100644 +--- a/fs/btrfs/backref.c ++++ b/fs/btrfs/backref.c +@@ -2548,13 +2548,6 @@ void btrfs_backref_cleanup_node(struct btrfs_backref_cache *cache, + list_del(&edge->list[UPPER]); + btrfs_backref_free_edge(cache, edge); + +- if (RB_EMPTY_NODE(&upper->rb_node)) { +- BUG_ON(!list_empty(&node->upper)); +- btrfs_backref_drop_node(cache, node); +- node = upper; +- node->lowest = 1; +- continue; +- } + /* + * Add the node to leaf node list if no other child block + * cached. +@@ -2631,7 +2624,7 @@ static int handle_direct_tree_backref(struct btrfs_backref_cache *cache, + /* Only reloc backref cache cares about a specific root */ + if (cache->is_reloc) { + root = find_reloc_root(cache->fs_info, cur->bytenr); +- if (WARN_ON(!root)) ++ if (!root) + return -ENOENT; + cur->root = root; + } else { +diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h +index ff705cc564a9a..17abde7f794ce 100644 +--- a/fs/btrfs/backref.h ++++ b/fs/btrfs/backref.h +@@ -296,6 +296,9 @@ static inline void btrfs_backref_free_node(struct btrfs_backref_cache *cache, + struct btrfs_backref_node *node) + { + if (node) { ++ ASSERT(list_empty(&node->list)); ++ ASSERT(list_empty(&node->lower)); ++ ASSERT(node->eb == NULL); + cache->nr_nodes--; + btrfs_put_root(node->root); + kfree(node); +@@ -340,11 +343,11 @@ static inline void btrfs_backref_drop_node_buffer( + static inline void btrfs_backref_drop_node(struct btrfs_backref_cache *tree, + struct btrfs_backref_node *node) + { +- BUG_ON(!list_empty(&node->upper)); ++ ASSERT(list_empty(&node->upper)); + + btrfs_backref_drop_node_buffer(node); +- list_del(&node->list); +- list_del(&node->lower); ++ list_del_init(&node->list); ++ list_del_init(&node->lower); + if (!RB_EMPTY_NODE(&node->rb_node)) + rb_erase(&node->rb_node, &tree->rb_root); + btrfs_backref_free_node(tree, node); +diff --git a/fs/btrfs/block-group.c 
b/fs/btrfs/block-group.c +index a2111eab614f2..9a5d652c1672e 100644 +--- a/fs/btrfs/block-group.c ++++ b/fs/btrfs/block-group.c +@@ -1450,9 +1450,7 @@ void btrfs_delete_unused_bgs(struct btrfs_fs_info *fs_info) + btrfs_space_info_update_bytes_pinned(fs_info, space_info, + -block_group->pinned); + space_info->bytes_readonly += block_group->pinned; +- percpu_counter_add_batch(&space_info->total_bytes_pinned, +- -block_group->pinned, +- BTRFS_TOTAL_BYTES_PINNED_BATCH); ++ __btrfs_mod_total_bytes_pinned(space_info, -block_group->pinned); + block_group->pinned = 0; + + spin_unlock(&block_group->lock); +@@ -2582,8 +2580,10 @@ again: + + if (!path) { + path = btrfs_alloc_path(); +- if (!path) +- return -ENOMEM; ++ if (!path) { ++ ret = -ENOMEM; ++ goto out; ++ } + } + + /* +@@ -2677,16 +2677,14 @@ again: + btrfs_put_block_group(cache); + if (drop_reserve) + btrfs_delayed_refs_rsv_release(fs_info, 1); +- +- if (ret) +- break; +- + /* + * Avoid blocking other tasks for too long. It might even save + * us from writing caches for block groups that are going to be + * removed. 
+ */ + mutex_unlock(&trans->transaction->cache_write_mutex); ++ if (ret) ++ goto out; + mutex_lock(&trans->transaction->cache_write_mutex); + } + mutex_unlock(&trans->transaction->cache_write_mutex); +@@ -2710,7 +2708,12 @@ again: + goto again; + } + spin_unlock(&cur_trans->dirty_bgs_lock); +- } else if (ret < 0) { ++ } ++out: ++ if (ret < 0) { ++ spin_lock(&cur_trans->dirty_bgs_lock); ++ list_splice_init(&dirty, &cur_trans->dirty_bgs); ++ spin_unlock(&cur_trans->dirty_bgs_lock); + btrfs_cleanup_dirty_bgs(cur_trans, fs_info); + } + +@@ -2914,10 +2917,8 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans, + spin_unlock(&cache->lock); + spin_unlock(&cache->space_info->lock); + +- percpu_counter_add_batch( +- &cache->space_info->total_bytes_pinned, +- num_bytes, +- BTRFS_TOTAL_BYTES_PINNED_BATCH); ++ __btrfs_mod_total_bytes_pinned(cache->space_info, ++ num_bytes); + set_extent_dirty(&trans->transaction->pinned_extents, + bytenr, bytenr + num_bytes - 1, + GFP_NOFS | __GFP_NOFAIL); +diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c +index 113da62dc17f6..f2f6f65038923 100644 +--- a/fs/btrfs/ctree.c ++++ b/fs/btrfs/ctree.c +@@ -221,9 +221,12 @@ int btrfs_copy_root(struct btrfs_trans_handle *trans, + ret = btrfs_inc_ref(trans, root, cow, 1); + else + ret = btrfs_inc_ref(trans, root, cow, 0); +- +- if (ret) ++ if (ret) { ++ btrfs_tree_unlock(cow); ++ free_extent_buffer(cow); ++ btrfs_abort_transaction(trans, ret); + return ret; ++ } + + btrfs_mark_buffer_dirty(cow); + *cow_ret = cow; +diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c +index 353cc2994d106..30883b9a26d84 100644 +--- a/fs/btrfs/delayed-ref.c ++++ b/fs/btrfs/delayed-ref.c +@@ -648,12 +648,12 @@ inserted: + */ + static noinline void update_existing_head_ref(struct btrfs_trans_handle *trans, + struct btrfs_delayed_ref_head *existing, +- struct btrfs_delayed_ref_head *update, +- int *old_ref_mod_ret) ++ struct btrfs_delayed_ref_head *update) + { + struct btrfs_delayed_ref_root *delayed_refs 
= + &trans->transaction->delayed_refs; + struct btrfs_fs_info *fs_info = trans->fs_info; ++ u64 flags = btrfs_ref_head_to_space_flags(existing); + int old_ref_mod; + + BUG_ON(existing->is_data != update->is_data); +@@ -701,8 +701,6 @@ static noinline void update_existing_head_ref(struct btrfs_trans_handle *trans, + * currently, for refs we just added we know we're a-ok. + */ + old_ref_mod = existing->total_ref_mod; +- if (old_ref_mod_ret) +- *old_ref_mod_ret = old_ref_mod; + existing->ref_mod += update->ref_mod; + existing->total_ref_mod += update->ref_mod; + +@@ -724,6 +722,27 @@ static noinline void update_existing_head_ref(struct btrfs_trans_handle *trans, + trans->delayed_ref_updates += csum_leaves; + } + } ++ ++ /* ++ * This handles the following conditions: ++ * ++ * 1. We had a ref mod of 0 or more and went negative, indicating that ++ * we may be freeing space, so add our space to the ++ * total_bytes_pinned counter. ++ * 2. We were negative and went to 0 or positive, so no longer can say ++ * that the space would be pinned, decrement our counter from the ++ * total_bytes_pinned counter. ++ * 3. We are now at 0 and have ->must_insert_reserved set, which means ++ * this was a new allocation and then we dropped it, and thus must ++ * add our space to the total_bytes_pinned counter. 
++ */ ++ if (existing->total_ref_mod < 0 && old_ref_mod >= 0) ++ btrfs_mod_total_bytes_pinned(fs_info, flags, existing->num_bytes); ++ else if (existing->total_ref_mod >= 0 && old_ref_mod < 0) ++ btrfs_mod_total_bytes_pinned(fs_info, flags, -existing->num_bytes); ++ else if (existing->total_ref_mod == 0 && existing->must_insert_reserved) ++ btrfs_mod_total_bytes_pinned(fs_info, flags, existing->num_bytes); ++ + spin_unlock(&existing->lock); + } + +@@ -798,8 +817,7 @@ static noinline struct btrfs_delayed_ref_head * + add_delayed_ref_head(struct btrfs_trans_handle *trans, + struct btrfs_delayed_ref_head *head_ref, + struct btrfs_qgroup_extent_record *qrecord, +- int action, int *qrecord_inserted_ret, +- int *old_ref_mod, int *new_ref_mod) ++ int action, int *qrecord_inserted_ret) + { + struct btrfs_delayed_ref_head *existing; + struct btrfs_delayed_ref_root *delayed_refs; +@@ -821,8 +839,7 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans, + existing = htree_insert(&delayed_refs->href_root, + &head_ref->href_node); + if (existing) { +- update_existing_head_ref(trans, existing, head_ref, +- old_ref_mod); ++ update_existing_head_ref(trans, existing, head_ref); + /* + * we've updated the existing ref, free the newly + * allocated ref +@@ -830,14 +847,17 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans, + kmem_cache_free(btrfs_delayed_ref_head_cachep, head_ref); + head_ref = existing; + } else { +- if (old_ref_mod) +- *old_ref_mod = 0; ++ u64 flags = btrfs_ref_head_to_space_flags(head_ref); ++ + if (head_ref->is_data && head_ref->ref_mod < 0) { + delayed_refs->pending_csums += head_ref->num_bytes; + trans->delayed_ref_updates += + btrfs_csum_bytes_to_leaves(trans->fs_info, + head_ref->num_bytes); + } ++ if (head_ref->ref_mod < 0) ++ btrfs_mod_total_bytes_pinned(trans->fs_info, flags, ++ head_ref->num_bytes); + delayed_refs->num_heads++; + delayed_refs->num_heads_ready++; + atomic_inc(&delayed_refs->num_entries); +@@ -845,8 +865,6 @@ 
add_delayed_ref_head(struct btrfs_trans_handle *trans, + } + if (qrecord_inserted_ret) + *qrecord_inserted_ret = qrecord_inserted; +- if (new_ref_mod) +- *new_ref_mod = head_ref->total_ref_mod; + + return head_ref; + } +@@ -909,8 +927,7 @@ static void init_delayed_ref_common(struct btrfs_fs_info *fs_info, + */ + int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans, + struct btrfs_ref *generic_ref, +- struct btrfs_delayed_extent_op *extent_op, +- int *old_ref_mod, int *new_ref_mod) ++ struct btrfs_delayed_extent_op *extent_op) + { + struct btrfs_fs_info *fs_info = trans->fs_info; + struct btrfs_delayed_tree_ref *ref; +@@ -977,8 +994,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans, + * the spin lock + */ + head_ref = add_delayed_ref_head(trans, head_ref, record, +- action, &qrecord_inserted, +- old_ref_mod, new_ref_mod); ++ action, &qrecord_inserted); + + ret = insert_delayed_ref(trans, delayed_refs, head_ref, &ref->node); + spin_unlock(&delayed_refs->lock); +@@ -1006,8 +1022,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans, + */ + int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans, + struct btrfs_ref *generic_ref, +- u64 reserved, int *old_ref_mod, +- int *new_ref_mod) ++ u64 reserved) + { + struct btrfs_fs_info *fs_info = trans->fs_info; + struct btrfs_delayed_data_ref *ref; +@@ -1073,8 +1088,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans, + * the spin lock + */ + head_ref = add_delayed_ref_head(trans, head_ref, record, +- action, &qrecord_inserted, +- old_ref_mod, new_ref_mod); ++ action, &qrecord_inserted); + + ret = insert_delayed_ref(trans, delayed_refs, head_ref, &ref->node); + spin_unlock(&delayed_refs->lock); +@@ -1117,7 +1131,7 @@ int btrfs_add_delayed_extent_op(struct btrfs_trans_handle *trans, + spin_lock(&delayed_refs->lock); + + add_delayed_ref_head(trans, head_ref, NULL, BTRFS_UPDATE_DELAYED_HEAD, +- NULL, NULL, NULL); ++ NULL); + + spin_unlock(&delayed_refs->lock); 
+ +diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h +index 1c977e6d45dc3..3ba140468f126 100644 +--- a/fs/btrfs/delayed-ref.h ++++ b/fs/btrfs/delayed-ref.h +@@ -326,6 +326,16 @@ static inline void btrfs_put_delayed_ref(struct btrfs_delayed_ref_node *ref) + } + } + ++static inline u64 btrfs_ref_head_to_space_flags( ++ struct btrfs_delayed_ref_head *head_ref) ++{ ++ if (head_ref->is_data) ++ return BTRFS_BLOCK_GROUP_DATA; ++ else if (head_ref->is_system) ++ return BTRFS_BLOCK_GROUP_SYSTEM; ++ return BTRFS_BLOCK_GROUP_METADATA; ++} ++ + static inline void btrfs_put_delayed_ref_head(struct btrfs_delayed_ref_head *head) + { + if (refcount_dec_and_test(&head->refs)) +@@ -334,12 +344,10 @@ static inline void btrfs_put_delayed_ref_head(struct btrfs_delayed_ref_head *hea + + int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans, + struct btrfs_ref *generic_ref, +- struct btrfs_delayed_extent_op *extent_op, +- int *old_ref_mod, int *new_ref_mod); ++ struct btrfs_delayed_extent_op *extent_op); + int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans, + struct btrfs_ref *generic_ref, +- u64 reserved, int *old_ref_mod, +- int *new_ref_mod); ++ u64 reserved); + int btrfs_add_delayed_extent_op(struct btrfs_trans_handle *trans, + u64 bytenr, u64 num_bytes, + struct btrfs_delayed_extent_op *extent_op); +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 8fba1c219b190..51c18da4792ec 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -82,41 +82,6 @@ void btrfs_free_excluded_extents(struct btrfs_block_group *cache) + EXTENT_UPTODATE); + } + +-static u64 generic_ref_to_space_flags(struct btrfs_ref *ref) +-{ +- if (ref->type == BTRFS_REF_METADATA) { +- if (ref->tree_ref.root == BTRFS_CHUNK_TREE_OBJECTID) +- return BTRFS_BLOCK_GROUP_SYSTEM; +- else +- return BTRFS_BLOCK_GROUP_METADATA; +- } +- return BTRFS_BLOCK_GROUP_DATA; +-} +- +-static void add_pinned_bytes(struct btrfs_fs_info *fs_info, +- struct btrfs_ref *ref) +-{ 
+- struct btrfs_space_info *space_info; +- u64 flags = generic_ref_to_space_flags(ref); +- +- space_info = btrfs_find_space_info(fs_info, flags); +- ASSERT(space_info); +- percpu_counter_add_batch(&space_info->total_bytes_pinned, ref->len, +- BTRFS_TOTAL_BYTES_PINNED_BATCH); +-} +- +-static void sub_pinned_bytes(struct btrfs_fs_info *fs_info, +- struct btrfs_ref *ref) +-{ +- struct btrfs_space_info *space_info; +- u64 flags = generic_ref_to_space_flags(ref); +- +- space_info = btrfs_find_space_info(fs_info, flags); +- ASSERT(space_info); +- percpu_counter_add_batch(&space_info->total_bytes_pinned, -ref->len, +- BTRFS_TOTAL_BYTES_PINNED_BATCH); +-} +- + /* simple helper to search for an existing data extent at a given offset */ + int btrfs_lookup_data_extent(struct btrfs_fs_info *fs_info, u64 start, u64 len) + { +@@ -1386,7 +1351,6 @@ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans, + struct btrfs_ref *generic_ref) + { + struct btrfs_fs_info *fs_info = trans->fs_info; +- int old_ref_mod, new_ref_mod; + int ret; + + ASSERT(generic_ref->type != BTRFS_REF_NOT_SET && +@@ -1395,17 +1359,12 @@ int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans, + generic_ref->tree_ref.root == BTRFS_TREE_LOG_OBJECTID); + + if (generic_ref->type == BTRFS_REF_METADATA) +- ret = btrfs_add_delayed_tree_ref(trans, generic_ref, +- NULL, &old_ref_mod, &new_ref_mod); ++ ret = btrfs_add_delayed_tree_ref(trans, generic_ref, NULL); + else +- ret = btrfs_add_delayed_data_ref(trans, generic_ref, 0, +- &old_ref_mod, &new_ref_mod); ++ ret = btrfs_add_delayed_data_ref(trans, generic_ref, 0); + + btrfs_ref_tree_mod(fs_info, generic_ref); + +- if (ret == 0 && old_ref_mod < 0 && new_ref_mod >= 0) +- sub_pinned_bytes(fs_info, generic_ref); +- + return ret; + } + +@@ -1796,34 +1755,28 @@ void btrfs_cleanup_ref_head_accounting(struct btrfs_fs_info *fs_info, + { + int nr_items = 1; /* Dropping this ref head update. 
*/ + +- if (head->total_ref_mod < 0) { +- struct btrfs_space_info *space_info; +- u64 flags; ++ /* ++ * We had csum deletions accounted for in our delayed refs rsv, we need ++ * to drop the csum leaves for this update from our delayed_refs_rsv. ++ */ ++ if (head->total_ref_mod < 0 && head->is_data) { ++ spin_lock(&delayed_refs->lock); ++ delayed_refs->pending_csums -= head->num_bytes; ++ spin_unlock(&delayed_refs->lock); ++ nr_items += btrfs_csum_bytes_to_leaves(fs_info, head->num_bytes); ++ } + +- if (head->is_data) +- flags = BTRFS_BLOCK_GROUP_DATA; +- else if (head->is_system) +- flags = BTRFS_BLOCK_GROUP_SYSTEM; +- else +- flags = BTRFS_BLOCK_GROUP_METADATA; +- space_info = btrfs_find_space_info(fs_info, flags); +- ASSERT(space_info); +- percpu_counter_add_batch(&space_info->total_bytes_pinned, +- -head->num_bytes, +- BTRFS_TOTAL_BYTES_PINNED_BATCH); ++ /* ++ * We were dropping refs, or had a new ref and dropped it, and thus must ++ * adjust down our total_bytes_pinned, the space may or may not have ++ * been pinned and so is accounted for properly in the pinned space by ++ * now. ++ */ ++ if (head->total_ref_mod < 0 || ++ (head->total_ref_mod == 0 && head->must_insert_reserved)) { ++ u64 flags = btrfs_ref_head_to_space_flags(head); + +- /* +- * We had csum deletions accounted for in our delayed refs rsv, +- * we need to drop the csum leaves for this update from our +- * delayed_refs_rsv. 
+- */ +- if (head->is_data) { +- spin_lock(&delayed_refs->lock); +- delayed_refs->pending_csums -= head->num_bytes; +- spin_unlock(&delayed_refs->lock); +- nr_items += btrfs_csum_bytes_to_leaves(fs_info, +- head->num_bytes); +- } ++ btrfs_mod_total_bytes_pinned(fs_info, flags, -head->num_bytes); + } + + btrfs_delayed_refs_rsv_release(fs_info, nr_items); +@@ -2592,8 +2545,7 @@ static int pin_down_extent(struct btrfs_trans_handle *trans, + spin_unlock(&cache->lock); + spin_unlock(&cache->space_info->lock); + +- percpu_counter_add_batch(&cache->space_info->total_bytes_pinned, +- num_bytes, BTRFS_TOTAL_BYTES_PINNED_BATCH); ++ __btrfs_mod_total_bytes_pinned(cache->space_info, num_bytes); + set_extent_dirty(&trans->transaction->pinned_extents, bytenr, + bytenr + num_bytes - 1, GFP_NOFS | __GFP_NOFAIL); + return 0; +@@ -2819,8 +2771,7 @@ static int unpin_extent_range(struct btrfs_fs_info *fs_info, + cache->pinned -= len; + btrfs_space_info_update_bytes_pinned(fs_info, space_info, -len); + space_info->max_extent_size = 0; +- percpu_counter_add_batch(&space_info->total_bytes_pinned, +- -len, BTRFS_TOTAL_BYTES_PINNED_BATCH); ++ __btrfs_mod_total_bytes_pinned(space_info, -len); + if (cache->ro) { + space_info->bytes_readonly += len; + readonly = true; +@@ -3359,7 +3310,6 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans, + { + struct btrfs_fs_info *fs_info = root->fs_info; + struct btrfs_ref generic_ref = { 0 }; +- int pin = 1; + int ret; + + btrfs_init_generic_ref(&generic_ref, BTRFS_DROP_DELAYED_REF, +@@ -3368,13 +3318,9 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans, + root->root_key.objectid); + + if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) { +- int old_ref_mod, new_ref_mod; +- + btrfs_ref_tree_mod(fs_info, &generic_ref); +- ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, NULL, +- &old_ref_mod, &new_ref_mod); ++ ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, NULL); + BUG_ON(ret); /* -ENOMEM */ +- pin = old_ref_mod >= 0 
&& new_ref_mod < 0; + } + + if (last_ref && btrfs_header_generation(buf) == trans->transid) { +@@ -3386,7 +3332,6 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans, + goto out; + } + +- pin = 0; + cache = btrfs_lookup_block_group(fs_info, buf->start); + + if (btrfs_header_flag(buf, BTRFS_HEADER_FLAG_WRITTEN)) { +@@ -3403,9 +3348,6 @@ void btrfs_free_tree_block(struct btrfs_trans_handle *trans, + trace_btrfs_reserved_extent_free(fs_info, buf->start, buf->len); + } + out: +- if (pin) +- add_pinned_bytes(fs_info, &generic_ref); +- + if (last_ref) { + /* + * Deleting the buffer, clear the corrupt flag since it doesn't +@@ -3419,7 +3361,6 @@ out: + int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_ref *ref) + { + struct btrfs_fs_info *fs_info = trans->fs_info; +- int old_ref_mod, new_ref_mod; + int ret; + + if (btrfs_is_testing(fs_info)) +@@ -3435,14 +3376,11 @@ int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_ref *ref) + ref->data_ref.ref_root == BTRFS_TREE_LOG_OBJECTID)) { + /* unlocks the pinned mutex */ + btrfs_pin_extent(trans, ref->bytenr, ref->len, 1); +- old_ref_mod = new_ref_mod = 0; + ret = 0; + } else if (ref->type == BTRFS_REF_METADATA) { +- ret = btrfs_add_delayed_tree_ref(trans, ref, NULL, +- &old_ref_mod, &new_ref_mod); ++ ret = btrfs_add_delayed_tree_ref(trans, ref, NULL); + } else { +- ret = btrfs_add_delayed_data_ref(trans, ref, 0, +- &old_ref_mod, &new_ref_mod); ++ ret = btrfs_add_delayed_data_ref(trans, ref, 0); + } + + if (!((ref->type == BTRFS_REF_METADATA && +@@ -3451,9 +3389,6 @@ int btrfs_free_extent(struct btrfs_trans_handle *trans, struct btrfs_ref *ref) + ref->data_ref.ref_root == BTRFS_TREE_LOG_OBJECTID))) + btrfs_ref_tree_mod(fs_info, ref); + +- if (ret == 0 && old_ref_mod >= 0 && new_ref_mod < 0) +- add_pinned_bytes(fs_info, ref); +- + return ret; + } + +@@ -4571,7 +4506,6 @@ int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans, + struct btrfs_key *ins) + { + struct 
btrfs_ref generic_ref = { 0 }; +- int ret; + + BUG_ON(root->root_key.objectid == BTRFS_TREE_LOG_OBJECTID); + +@@ -4579,9 +4513,8 @@ int btrfs_alloc_reserved_file_extent(struct btrfs_trans_handle *trans, + ins->objectid, ins->offset, 0); + btrfs_init_data_ref(&generic_ref, root->root_key.objectid, owner, offset); + btrfs_ref_tree_mod(root->fs_info, &generic_ref); +- ret = btrfs_add_delayed_data_ref(trans, &generic_ref, +- ram_bytes, NULL, NULL); +- return ret; ++ ++ return btrfs_add_delayed_data_ref(trans, &generic_ref, ram_bytes); + } + + /* +@@ -4769,8 +4702,7 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans, + generic_ref.real_root = root->root_key.objectid; + btrfs_init_tree_ref(&generic_ref, level, root_objectid); + btrfs_ref_tree_mod(fs_info, &generic_ref); +- ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, +- extent_op, NULL, NULL); ++ ret = btrfs_add_delayed_tree_ref(trans, &generic_ref, extent_op); + if (ret) + goto out_free_delayed; + } +diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c +index af0013d3df63f..ae4059ce2f84c 100644 +--- a/fs/btrfs/free-space-cache.c ++++ b/fs/btrfs/free-space-cache.c +@@ -744,8 +744,10 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode, + while (num_entries) { + e = kmem_cache_zalloc(btrfs_free_space_cachep, + GFP_NOFS); +- if (!e) ++ if (!e) { ++ ret = -ENOMEM; + goto free_cache; ++ } + + ret = io_ctl_read_entry(&io_ctl, e, &type); + if (ret) { +@@ -764,6 +766,7 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode, + e->trim_state = BTRFS_TRIM_STATE_TRIMMED; + + if (!e->bytes) { ++ ret = -1; + kmem_cache_free(btrfs_free_space_cachep, e); + goto free_cache; + } +@@ -784,6 +787,7 @@ static int __load_free_space_cache(struct btrfs_root *root, struct inode *inode, + e->bitmap = kmem_cache_zalloc( + btrfs_free_space_bitmap_cachep, GFP_NOFS); + if (!e->bitmap) { ++ ret = -ENOMEM; + kmem_cache_free( + 
btrfs_free_space_cachep, e); + goto free_cache; +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index b536d21541a9f..4d85f3a6695d1 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -8207,8 +8207,9 @@ static void btrfs_invalidatepage(struct page *page, unsigned int offset, + + if (!inode_evicting) + lock_extent_bits(tree, page_start, page_end, &cached_state); +-again: ++ + start = page_start; ++again: + ordered = btrfs_lookup_ordered_range(inode, start, page_end - start + 1); + if (ordered) { + end = min(page_end, +diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c +index 108e93ff6cb6f..6a44d8f5e12e2 100644 +--- a/fs/btrfs/relocation.c ++++ b/fs/btrfs/relocation.c +@@ -669,9 +669,7 @@ static void __del_reloc_root(struct btrfs_root *root) + RB_CLEAR_NODE(&node->rb_node); + } + spin_unlock(&rc->reloc_root_tree.lock); +- if (!node) +- return; +- BUG_ON((struct btrfs_root *)node->data != root); ++ ASSERT(!node || (struct btrfs_root *)node->data == root); + } + + /* +diff --git a/fs/btrfs/space-info.h b/fs/btrfs/space-info.h +index 5646393b928c9..74706f604bce1 100644 +--- a/fs/btrfs/space-info.h ++++ b/fs/btrfs/space-info.h +@@ -152,4 +152,21 @@ static inline void btrfs_space_info_free_bytes_may_use( + int btrfs_reserve_data_bytes(struct btrfs_fs_info *fs_info, u64 bytes, + enum btrfs_reserve_flush_enum flush); + ++static inline void __btrfs_mod_total_bytes_pinned( ++ struct btrfs_space_info *space_info, ++ s64 mod) ++{ ++ percpu_counter_add_batch(&space_info->total_bytes_pinned, mod, ++ BTRFS_TOTAL_BYTES_PINNED_BATCH); ++} ++ ++static inline void btrfs_mod_total_bytes_pinned(struct btrfs_fs_info *fs_info, ++ u64 flags, s64 mod) ++{ ++ struct btrfs_space_info *space_info = btrfs_find_space_info(fs_info, flags); ++ ++ ASSERT(space_info); ++ __btrfs_mod_total_bytes_pinned(space_info, mod); ++} ++ + #endif /* BTRFS_SPACE_INFO_H */ +diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c +index 2b200b5a44c3a..576d01275bbd7 100644 +--- a/fs/ceph/caps.c ++++ 
b/fs/ceph/caps.c +@@ -3092,10 +3092,12 @@ static void __ceph_put_cap_refs(struct ceph_inode_info *ci, int had, + dout("put_cap_refs %p had %s%s%s\n", inode, ceph_cap_string(had), + last ? " last" : "", put ? " put" : ""); + +- if (last && !skip_checking_caps) +- ceph_check_caps(ci, 0, NULL); +- else if (flushsnaps) +- ceph_flush_snaps(ci, NULL); ++ if (!skip_checking_caps) { ++ if (last) ++ ceph_check_caps(ci, 0, NULL); ++ else if (flushsnaps) ++ ceph_flush_snaps(ci, NULL); ++ } + if (wake) + wake_up_all(&ci->i_cap_wq); + while (put-- > 0) +diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c +index 2fcf66473436b..86c7f04896207 100644 +--- a/fs/debugfs/inode.c ++++ b/fs/debugfs/inode.c +@@ -297,7 +297,7 @@ struct dentry *debugfs_lookup(const char *name, struct dentry *parent) + { + struct dentry *dentry; + +- if (IS_ERR(parent)) ++ if (!debugfs_initialized() || IS_ERR_OR_NULL(name) || IS_ERR(parent)) + return NULL; + + if (!parent) +@@ -318,6 +318,9 @@ static struct dentry *start_creating(const char *name, struct dentry *parent) + if (!(debugfs_allow & DEBUGFS_ALLOW_API)) + return ERR_PTR(-EPERM); + ++ if (!debugfs_initialized()) ++ return ERR_PTR(-ENOENT); ++ + pr_debug("creating file '%s'\n", name); + + if (IS_ERR(parent)) +diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c +index 5bde77d708524..47314a26767a8 100644 +--- a/fs/erofs/xattr.c ++++ b/fs/erofs/xattr.c +@@ -48,8 +48,14 @@ static int init_inode_xattrs(struct inode *inode) + int ret = 0; + + /* the most case is that xattrs of this inode are initialized. */ +- if (test_bit(EROFS_I_EA_INITED_BIT, &vi->flags)) ++ if (test_bit(EROFS_I_EA_INITED_BIT, &vi->flags)) { ++ /* ++ * paired with smp_mb() at the end of the function to ensure ++ * fields will only be observed after the bit is set. 
++ */ ++ smp_mb(); + return 0; ++ } + + if (wait_on_bit_lock(&vi->flags, EROFS_I_BL_XATTR_BIT, TASK_KILLABLE)) + return -ERESTARTSYS; +@@ -137,6 +143,8 @@ static int init_inode_xattrs(struct inode *inode) + } + xattr_iter_end(&it, atomic_map); + ++ /* paired with smp_mb() at the beginning of the function. */ ++ smp_mb(); + set_bit(EROFS_I_EA_INITED_BIT, &vi->flags); + + out_unlock: +diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c +index ae325541884e3..14d2de35110cc 100644 +--- a/fs/erofs/zmap.c ++++ b/fs/erofs/zmap.c +@@ -36,8 +36,14 @@ static int z_erofs_fill_inode_lazy(struct inode *inode) + void *kaddr; + struct z_erofs_map_header *h; + +- if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags)) ++ if (test_bit(EROFS_I_Z_INITED_BIT, &vi->flags)) { ++ /* ++ * paired with smp_mb() at the end of the function to ensure ++ * fields will only be observed after the bit is set. ++ */ ++ smp_mb(); + return 0; ++ } + + if (wait_on_bit_lock(&vi->flags, EROFS_I_BL_Z_BIT, TASK_KILLABLE)) + return -ERESTARTSYS; +@@ -83,6 +89,8 @@ static int z_erofs_fill_inode_lazy(struct inode *inode) + + vi->z_physical_clusterbits[1] = vi->z_logical_clusterbits + + ((h->h_clusterbits >> 5) & 7); ++ /* paired with smp_mb() at the beginning of the function */ ++ smp_mb(); + set_bit(EROFS_I_Z_INITED_BIT, &vi->flags); + unmap_done: + kunmap_atomic(kaddr); +diff --git a/fs/eventpoll.c b/fs/eventpoll.c +index 117b1c395ae4a..2d5b172f490e0 100644 +--- a/fs/eventpoll.c ++++ b/fs/eventpoll.c +@@ -1062,7 +1062,7 @@ static struct epitem *ep_find(struct eventpoll *ep, struct file *file, int fd) + return epir; + } + +-#ifdef CONFIG_CHECKPOINT_RESTORE ++#ifdef CONFIG_KCMP + static struct epitem *ep_find_tfd(struct eventpoll *ep, int tfd, unsigned long toff) + { + struct rb_node *rbp; +@@ -1104,7 +1104,7 @@ struct file *get_epoll_tfile_raw_ptr(struct file *file, int tfd, + + return file_raw; + } +-#endif /* CONFIG_CHECKPOINT_RESTORE */ ++#endif /* CONFIG_KCMP */ + + /** + * Adds a new entry to the tail of the list in 
a lockless way, i.e. +diff --git a/fs/exfat/exfat_raw.h b/fs/exfat/exfat_raw.h +index 6aec6288e1f21..7f39b1c6469c4 100644 +--- a/fs/exfat/exfat_raw.h ++++ b/fs/exfat/exfat_raw.h +@@ -77,6 +77,10 @@ + + #define EXFAT_FILE_NAME_LEN 15 + ++#define EXFAT_MIN_SECT_SIZE_BITS 9 ++#define EXFAT_MAX_SECT_SIZE_BITS 12 ++#define EXFAT_MAX_SECT_PER_CLUS_BITS(x) (25 - (x)->sect_size_bits) ++ + /* EXFAT: Main and Backup Boot Sector (512 bytes) */ + struct boot_sector { + __u8 jmp_boot[BOOTSEC_JUMP_BOOT_LEN]; +diff --git a/fs/exfat/super.c b/fs/exfat/super.c +index 87be5bfc31eb4..c6d8d2e534865 100644 +--- a/fs/exfat/super.c ++++ b/fs/exfat/super.c +@@ -381,8 +381,7 @@ static int exfat_calibrate_blocksize(struct super_block *sb, int logical_sect) + { + struct exfat_sb_info *sbi = EXFAT_SB(sb); + +- if (!is_power_of_2(logical_sect) || +- logical_sect < 512 || logical_sect > 4096) { ++ if (!is_power_of_2(logical_sect)) { + exfat_err(sb, "bogus logical sector size %u", logical_sect); + return -EIO; + } +@@ -451,6 +450,25 @@ static int exfat_read_boot_sector(struct super_block *sb) + return -EINVAL; + } + ++ /* ++ * sect_size_bits could be at least 9 and at most 12. ++ */ ++ if (p_boot->sect_size_bits < EXFAT_MIN_SECT_SIZE_BITS || ++ p_boot->sect_size_bits > EXFAT_MAX_SECT_SIZE_BITS) { ++ exfat_err(sb, "bogus sector size bits : %u\n", ++ p_boot->sect_size_bits); ++ return -EINVAL; ++ } ++ ++ /* ++ * sect_per_clus_bits could be at least 0 and at most 25 - sect_size_bits. 
++ */ ++ if (p_boot->sect_per_clus_bits > EXFAT_MAX_SECT_PER_CLUS_BITS(p_boot)) { ++ exfat_err(sb, "bogus sectors bits per cluster : %u\n", ++ p_boot->sect_per_clus_bits); ++ return -EINVAL; ++ } ++ + sbi->sect_per_clus = 1 << p_boot->sect_per_clus_bits; + sbi->sect_per_clus_bits = p_boot->sect_per_clus_bits; + sbi->cluster_size_bits = p_boot->sect_per_clus_bits + +@@ -477,16 +495,19 @@ static int exfat_read_boot_sector(struct super_block *sb) + sbi->used_clusters = EXFAT_CLUSTERS_UNTRACKED; + + /* check consistencies */ +- if (sbi->num_FAT_sectors << p_boot->sect_size_bits < +- sbi->num_clusters * 4) { ++ if ((u64)sbi->num_FAT_sectors << p_boot->sect_size_bits < ++ (u64)sbi->num_clusters * 4) { + exfat_err(sb, "bogus fat length"); + return -EINVAL; + } ++ + if (sbi->data_start_sector < +- sbi->FAT1_start_sector + sbi->num_FAT_sectors * p_boot->num_fats) { ++ (u64)sbi->FAT1_start_sector + ++ (u64)sbi->num_FAT_sectors * p_boot->num_fats) { + exfat_err(sb, "bogus data start sector"); + return -EINVAL; + } ++ + if (sbi->vol_flags & VOLUME_DIRTY) + exfat_warn(sb, "Volume was not properly unmounted. Some data may be corrupt. Please run fsck."); + if (sbi->vol_flags & MEDIA_FAILURE) +diff --git a/fs/ext4/Kconfig b/fs/ext4/Kconfig +index 619dd35ddd48a..86699c8cab281 100644 +--- a/fs/ext4/Kconfig ++++ b/fs/ext4/Kconfig +@@ -103,8 +103,7 @@ config EXT4_DEBUG + + config EXT4_KUNIT_TESTS + tristate "KUnit tests for ext4" if !KUNIT_ALL_TESTS +- select EXT4_FS +- depends on KUNIT ++ depends on EXT4_FS && KUNIT + default KUNIT_ALL_TESTS + help + This builds the ext4 KUnit tests. 
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index df0886e08a772..14783f7dcbe98 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -2410,11 +2410,10 @@ again: + (frame - 1)->bh); + if (err) + goto journal_error; +- if (restart) { +- err = ext4_handle_dirty_dx_node(handle, dir, +- frame->bh); ++ err = ext4_handle_dirty_dx_node(handle, dir, ++ frame->bh); ++ if (err) + goto journal_error; +- } + } else { + struct dx_root *dxroot; + memcpy((char *) entries2, (char *) entries, +diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c +index c5fee4d7ea72f..d3f407ba64c9e 100644 +--- a/fs/f2fs/compress.c ++++ b/fs/f2fs/compress.c +@@ -1393,7 +1393,7 @@ retry_write: + + ret = f2fs_write_single_data_page(cc->rpages[i], &_submitted, + NULL, NULL, wbc, io_type, +- compr_blocks); ++ compr_blocks, false); + if (ret) { + if (ret == AOP_WRITEPAGE_ACTIVATE) { + unlock_page(cc->rpages[i]); +@@ -1428,6 +1428,9 @@ retry_write: + + *submitted += _submitted; + } ++ ++ f2fs_balance_fs(F2FS_M_SB(mapping), true); ++ + return 0; + out_err: + for (++i; i < cc->cluster_size; i++) { +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index b29243ee1c3e5..901bd1d963ee8 100644 +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -499,7 +499,7 @@ static inline void __submit_bio(struct f2fs_sb_info *sbi, + if (f2fs_lfs_mode(sbi) && current->plug) + blk_finish_plug(current->plug); + +- if (F2FS_IO_ALIGNED(sbi)) ++ if (!F2FS_IO_ALIGNED(sbi)) + goto submit_io; + + start = bio->bi_iter.bi_size >> F2FS_BLKSIZE_BITS; +@@ -2757,7 +2757,8 @@ int f2fs_write_single_data_page(struct page *page, int *submitted, + sector_t *last_block, + struct writeback_control *wbc, + enum iostat_type io_type, +- int compr_blocks) ++ int compr_blocks, ++ bool allow_balance) + { + struct inode *inode = page->mapping->host; + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); +@@ -2895,7 +2896,7 @@ out: + } + unlock_page(page); + if (!S_ISDIR(inode->i_mode) && !IS_NOQUOTA(inode) && +- !F2FS_I(inode)->cp_task) ++ !F2FS_I(inode)->cp_task 
&& allow_balance) + f2fs_balance_fs(sbi, need_balance_fs); + + if (unlikely(f2fs_cp_error(sbi))) { +@@ -2942,7 +2943,7 @@ out: + #endif + + return f2fs_write_single_data_page(page, NULL, NULL, NULL, +- wbc, FS_DATA_IO, 0); ++ wbc, FS_DATA_IO, 0, true); + } + + /* +@@ -3110,7 +3111,8 @@ continue_unlock: + } + #endif + ret = f2fs_write_single_data_page(page, &submitted, +- &bio, &last_block, wbc, io_type, 0); ++ &bio, &last_block, wbc, io_type, ++ 0, true); + if (ret == AOP_WRITEPAGE_ACTIVATE) + unlock_page(page); + #ifdef CONFIG_F2FS_FS_COMPRESSION +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 06e5a6053f3f9..699815e94bd30 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -3507,7 +3507,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted, + struct bio **bio, sector_t *last_block, + struct writeback_control *wbc, + enum iostat_type io_type, +- int compr_blocks); ++ int compr_blocks, bool allow_balance); + void f2fs_invalidate_page(struct page *page, unsigned int offset, + unsigned int length); + int f2fs_release_page(struct page *page, gfp_t wait); +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c +index fe39e591e5b4c..498e3aac79340 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -59,6 +59,9 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf) + bool need_alloc = true; + int err = 0; + ++ if (unlikely(IS_IMMUTABLE(inode))) ++ return VM_FAULT_SIGBUS; ++ + if (unlikely(f2fs_cp_error(sbi))) { + err = -EIO; + goto err; +@@ -766,6 +769,10 @@ int f2fs_truncate(struct inode *inode) + return -EIO; + } + ++ err = dquot_initialize(inode); ++ if (err) ++ return err; ++ + /* we should check inline_data size */ + if (!f2fs_may_inline_data(inode)) { + err = f2fs_convert_inline_inode(inode); +@@ -847,7 +854,8 @@ static void __setattr_copy(struct inode *inode, const struct iattr *attr) + if (ia_valid & ATTR_MODE) { + umode_t mode = attr->ia_mode; + +- if (!in_group_p(inode->i_gid) && !capable(CAP_FSETID)) ++ if (!in_group_p(inode->i_gid) && 
++ !capable_wrt_inode_uidgid(inode, CAP_FSETID)) + mode &= ~S_ISGID; + set_acl_inode(inode, mode); + } +@@ -864,6 +872,14 @@ int f2fs_setattr(struct dentry *dentry, struct iattr *attr) + if (unlikely(f2fs_cp_error(F2FS_I_SB(inode)))) + return -EIO; + ++ if (unlikely(IS_IMMUTABLE(inode))) ++ return -EPERM; ++ ++ if (unlikely(IS_APPEND(inode) && ++ (attr->ia_valid & (ATTR_MODE | ATTR_UID | ++ ATTR_GID | ATTR_TIMES_SET)))) ++ return -EPERM; ++ + if ((attr->ia_valid & ATTR_SIZE) && + !f2fs_is_compress_backend_ready(inode)) + return -EOPNOTSUPP; +@@ -4079,6 +4095,11 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) + inode_lock(inode); + } + ++ if (unlikely(IS_IMMUTABLE(inode))) { ++ ret = -EPERM; ++ goto unlock; ++ } ++ + ret = generic_write_checks(iocb, from); + if (ret > 0) { + bool preallocated = false; +@@ -4143,6 +4164,7 @@ write: + if (ret > 0) + f2fs_update_iostat(F2FS_I_SB(inode), APP_WRITE_IO, ret); + } ++unlock: + inode_unlock(inode); + out: + trace_f2fs_file_write_iter(inode, iocb->ki_pos, +diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c +index 70384e31788db..b9e37f0b3e093 100644 +--- a/fs/f2fs/inline.c ++++ b/fs/f2fs/inline.c +@@ -191,6 +191,10 @@ int f2fs_convert_inline_inode(struct inode *inode) + if (!f2fs_has_inline_data(inode)) + return 0; + ++ err = dquot_initialize(inode); ++ if (err) ++ return err; ++ + page = f2fs_grab_cache_page(inode->i_mapping, 0, false); + if (!page) + return -ENOMEM; +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index aa284ce7ec00d..4fffbef216af8 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -1764,6 +1764,9 @@ restore_flag: + + static void f2fs_enable_checkpoint(struct f2fs_sb_info *sbi) + { ++ /* we should flush all the data to keep data consistency */ ++ sync_inodes_sb(sbi->sb); ++ + down_write(&sbi->gc_lock); + f2fs_dirty_to_prefree(sbi); + +diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c +index 62d9081d1e26e..a1f9dde33058f 100644 +--- a/fs/gfs2/bmap.c ++++ b/fs/gfs2/bmap.c +@@ 
-1230,6 +1230,9 @@ static int gfs2_iomap_end(struct inode *inode, loff_t pos, loff_t length, + + gfs2_inplace_release(ip); + ++ if (ip->i_qadata && ip->i_qadata->qa_qd_num) ++ gfs2_quota_unlock(ip); ++ + if (length != written && (iomap->flags & IOMAP_F_NEW)) { + /* Deallocate blocks that were just allocated. */ + loff_t blockmask = i_blocksize(inode) - 1; +@@ -1242,9 +1245,6 @@ static int gfs2_iomap_end(struct inode *inode, loff_t pos, loff_t length, + } + } + +- if (ip->i_qadata && ip->i_qadata->qa_qd_num) +- gfs2_quota_unlock(ip); +- + if (unlikely(!written)) + goto out_unlock; + +diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c +index 9f2b5609f225d..153272f82984b 100644 +--- a/fs/gfs2/lock_dlm.c ++++ b/fs/gfs2/lock_dlm.c +@@ -284,7 +284,6 @@ static void gdlm_put_lock(struct gfs2_glock *gl) + { + struct gfs2_sbd *sdp = gl->gl_name.ln_sbd; + struct lm_lockstruct *ls = &sdp->sd_lockstruct; +- int lvb_needs_unlock = 0; + int error; + + if (gl->gl_lksb.sb_lkid == 0) { +@@ -297,13 +296,10 @@ static void gdlm_put_lock(struct gfs2_glock *gl) + gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT); + gfs2_update_request_times(gl); + +- /* don't want to skip dlm_unlock writing the lvb when lock is ex */ +- +- if (gl->gl_lksb.sb_lvbptr && (gl->gl_state == LM_ST_EXCLUSIVE)) +- lvb_needs_unlock = 1; ++ /* don't want to skip dlm_unlock writing the lvb when lock has one */ + + if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) && +- !lvb_needs_unlock) { ++ !gl->gl_lksb.sb_lvbptr) { + gfs2_glock_free(gl); + return; + } +diff --git a/fs/gfs2/recovery.c b/fs/gfs2/recovery.c +index c26c68ebd29d4..a3c1911862f01 100644 +--- a/fs/gfs2/recovery.c ++++ b/fs/gfs2/recovery.c +@@ -514,8 +514,10 @@ void gfs2_recover_func(struct work_struct *work) + error = foreach_descriptor(jd, head.lh_tail, + head.lh_blkno, pass); + lops_after_scan(jd, error, pass); +- if (error) ++ if (error) { ++ up_read(&sdp->sd_log_flush_lock); + goto fail_gunlock_thaw; ++ } + } + + recover_local_statfs(jd, &head); +diff --git 
a/fs/gfs2/util.c b/fs/gfs2/util.c +index 0fba3bf641890..b7d4e4550880d 100644 +--- a/fs/gfs2/util.c ++++ b/fs/gfs2/util.c +@@ -93,9 +93,10 @@ out_unlock: + + static void signal_our_withdraw(struct gfs2_sbd *sdp) + { +- struct gfs2_glock *gl = sdp->sd_live_gh.gh_gl; ++ struct gfs2_glock *live_gl = sdp->sd_live_gh.gh_gl; + struct inode *inode = sdp->sd_jdesc->jd_inode; + struct gfs2_inode *ip = GFS2_I(inode); ++ struct gfs2_glock *i_gl = ip->i_gl; + u64 no_formal_ino = ip->i_no_formal_ino; + int ret = 0; + int tries; +@@ -141,7 +142,8 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp) + atomic_set(&sdp->sd_freeze_state, SFS_FROZEN); + thaw_super(sdp->sd_vfs); + } else { +- wait_on_bit(&gl->gl_flags, GLF_DEMOTE, TASK_UNINTERRUPTIBLE); ++ wait_on_bit(&i_gl->gl_flags, GLF_DEMOTE, ++ TASK_UNINTERRUPTIBLE); + } + + /* +@@ -161,15 +163,15 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp) + * on other nodes to be successful, otherwise we remain the owner of + * the glock as far as dlm is concerned. + */ +- if (gl->gl_ops->go_free) { +- set_bit(GLF_FREEING, &gl->gl_flags); +- wait_on_bit(&gl->gl_flags, GLF_FREEING, TASK_UNINTERRUPTIBLE); ++ if (i_gl->gl_ops->go_free) { ++ set_bit(GLF_FREEING, &i_gl->gl_flags); ++ wait_on_bit(&i_gl->gl_flags, GLF_FREEING, TASK_UNINTERRUPTIBLE); + } + + /* + * Dequeue the "live" glock, but keep a reference so it's never freed. 
+ */ +- gfs2_glock_hold(gl); ++ gfs2_glock_hold(live_gl); + gfs2_glock_dq_wait(&sdp->sd_live_gh); + /* + * We enqueue the "live" glock in EX so that all other nodes +@@ -208,7 +210,7 @@ static void signal_our_withdraw(struct gfs2_sbd *sdp) + gfs2_glock_nq(&sdp->sd_live_gh); + } + +- gfs2_glock_queue_put(gl); /* drop the extra reference we acquired */ ++ gfs2_glock_queue_put(live_gl); /* drop extra reference we acquired */ + clear_bit(SDF_WITHDRAW_RECOVERY, &sdp->sd_flags); + + /* +diff --git a/fs/io_uring.c b/fs/io_uring.c +index d0b7332ca7033..d0172cc4f6427 100644 +--- a/fs/io_uring.c ++++ b/fs/io_uring.c +@@ -8440,8 +8440,21 @@ static __poll_t io_uring_poll(struct file *file, poll_table *wait) + smp_rmb(); + if (!io_sqring_full(ctx)) + mask |= EPOLLOUT | EPOLLWRNORM; +- io_cqring_overflow_flush(ctx, false, NULL, NULL); +- if (io_cqring_events(ctx)) ++ ++ /* ++ * Don't flush cqring overflow list here, just do a simple check. ++ * Otherwise there could possible be ABBA deadlock: ++ * CPU0 CPU1 ++ * ---- ---- ++ * lock(&ctx->uring_lock); ++ * lock(&ep->mtx); ++ * lock(&ctx->uring_lock); ++ * lock(&ep->mtx); ++ * ++ * Users may get EPOLLIN meanwhile seeing nothing in cqring, this ++ * pushs them to do the flush. 
++ */ ++ if (io_cqring_events(ctx) || test_bit(0, &ctx->cq_check_overflow)) + mask |= EPOLLIN | EPOLLRDNORM; + + return mask; +diff --git a/fs/isofs/dir.c b/fs/isofs/dir.c +index f0fe641893a5e..b9e6a7ec78be4 100644 +--- a/fs/isofs/dir.c ++++ b/fs/isofs/dir.c +@@ -152,6 +152,7 @@ static int do_isofs_readdir(struct inode *inode, struct file *file, + printk(KERN_NOTICE "iso9660: Corrupted directory entry" + " in block %lu of inode %lu\n", block, + inode->i_ino); ++ brelse(bh); + return -EIO; + } + +diff --git a/fs/isofs/namei.c b/fs/isofs/namei.c +index 402769881c32b..58f80e1b3ac0d 100644 +--- a/fs/isofs/namei.c ++++ b/fs/isofs/namei.c +@@ -102,6 +102,7 @@ isofs_find_entry(struct inode *dir, struct dentry *dentry, + printk(KERN_NOTICE "iso9660: Corrupted directory entry" + " in block %lu of inode %lu\n", block, + dir->i_ino); ++ brelse(bh); + return 0; + } + +diff --git a/fs/jffs2/summary.c b/fs/jffs2/summary.c +index be7c8a6a57480..4fe64519870f1 100644 +--- a/fs/jffs2/summary.c ++++ b/fs/jffs2/summary.c +@@ -783,6 +783,8 @@ static int jffs2_sum_write_data(struct jffs2_sb_info *c, struct jffs2_eraseblock + dbg_summary("Writing unknown RWCOMPAT_COPY node type %x\n", + je16_to_cpu(temp->u.nodetype)); + jffs2_sum_disable_collecting(c->summary); ++ /* The above call removes the list, nothing more to do */ ++ goto bail_rwcompat; + } else { + BUG(); /* unknown node in summary information */ + } +@@ -794,6 +796,7 @@ static int jffs2_sum_write_data(struct jffs2_sb_info *c, struct jffs2_eraseblock + + c->summary->sum_num--; + } ++ bail_rwcompat: + + jffs2_sum_reset_collected(c->summary); + +diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c +index 7dfcab2a2da68..aedad59f8a458 100644 +--- a/fs/jfs/jfs_dmap.c ++++ b/fs/jfs/jfs_dmap.c +@@ -1656,7 +1656,7 @@ s64 dbDiscardAG(struct inode *ip, int agno, s64 minlen) + } else if (rc == -ENOSPC) { + /* search for next smaller log2 block */ + l2nb = BLKSTOL2(nblocks) - 1; +- nblocks = 1 << l2nb; ++ nblocks = 1LL << l2nb; + } else { + /* 
Trim any already allocated blocks */ + jfs_error(bmp->db_ipbmap->i_sb, "-EIO\n"); +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 0cd5b127f3bb9..a811d42ffbd11 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -5433,15 +5433,16 @@ static void nfs4_bitmask_adjust(__u32 *bitmask, struct inode *inode, + + if (cache_validity & NFS_INO_INVALID_ATIME) + bitmask[1] |= FATTR4_WORD1_TIME_ACCESS; +- if (cache_validity & NFS_INO_INVALID_ACCESS) +- bitmask[0] |= FATTR4_WORD1_MODE | FATTR4_WORD1_OWNER | +- FATTR4_WORD1_OWNER_GROUP; +- if (cache_validity & NFS_INO_INVALID_ACL) +- bitmask[0] |= FATTR4_WORD0_ACL; +- if (cache_validity & NFS_INO_INVALID_LABEL) ++ if (cache_validity & NFS_INO_INVALID_OTHER) ++ bitmask[1] |= FATTR4_WORD1_MODE | FATTR4_WORD1_OWNER | ++ FATTR4_WORD1_OWNER_GROUP | ++ FATTR4_WORD1_NUMLINKS; ++ if (label && label->len && cache_validity & NFS_INO_INVALID_LABEL) + bitmask[2] |= FATTR4_WORD2_SECURITY_LABEL; +- if (cache_validity & NFS_INO_INVALID_CTIME) ++ if (cache_validity & NFS_INO_INVALID_CHANGE) + bitmask[0] |= FATTR4_WORD0_CHANGE; ++ if (cache_validity & NFS_INO_INVALID_CTIME) ++ bitmask[1] |= FATTR4_WORD1_TIME_METADATA; + if (cache_validity & NFS_INO_INVALID_MTIME) + bitmask[1] |= FATTR4_WORD1_TIME_MODIFY; + if (cache_validity & NFS_INO_INVALID_SIZE) +diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c +index f6d5d783f4a45..0759e589ab52b 100644 +--- a/fs/nfsd/nfsctl.c ++++ b/fs/nfsd/nfsctl.c +@@ -1522,12 +1522,9 @@ static int __init init_nfsd(void) + int retval; + printk(KERN_INFO "Installing knfsd (copyright (C) 1996 okir@monad.swb.de).\n"); + +- retval = register_pernet_subsys(&nfsd_net_ops); +- if (retval < 0) +- return retval; + retval = register_cld_notifier(); + if (retval) +- goto out_unregister_pernet; ++ return retval; + retval = nfsd4_init_slabs(); + if (retval) + goto out_unregister_notifier; +@@ -1544,9 +1541,14 @@ static int __init init_nfsd(void) + goto out_free_lockd; + retval = register_filesystem(&nfsd_fs_type); + if 
(retval) ++ goto out_free_exports; ++ retval = register_pernet_subsys(&nfsd_net_ops); ++ if (retval < 0) + goto out_free_all; + return 0; + out_free_all: ++ unregister_pernet_subsys(&nfsd_net_ops); ++out_free_exports: + remove_proc_entry("fs/nfs/exports", NULL); + remove_proc_entry("fs/nfs", NULL); + out_free_lockd: +@@ -1559,13 +1561,12 @@ out_free_slabs: + nfsd4_free_slabs(); + out_unregister_notifier: + unregister_cld_notifier(); +-out_unregister_pernet: +- unregister_pernet_subsys(&nfsd_net_ops); + return retval; + } + + static void __exit exit_nfsd(void) + { ++ unregister_pernet_subsys(&nfsd_net_ops); + nfsd_drc_slab_free(); + remove_proc_entry("fs/nfs/exports", NULL); + remove_proc_entry("fs/nfs", NULL); +@@ -1575,7 +1576,6 @@ static void __exit exit_nfsd(void) + nfsd4_exit_pnfs(); + unregister_filesystem(&nfsd_fs_type); + unregister_cld_notifier(); +- unregister_pernet_subsys(&nfsd_net_ops); + } + + MODULE_AUTHOR("Olaf Kirch "); +diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c +index 0179a73a3fa2c..12a7590601ddb 100644 +--- a/fs/ocfs2/cluster/heartbeat.c ++++ b/fs/ocfs2/cluster/heartbeat.c +@@ -2042,7 +2042,7 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g + o2hb_nego_timeout_handler, + reg, NULL, ®->hr_handler_list); + if (ret) +- goto free; ++ goto remove_item; + + ret = o2net_register_handler(O2HB_NEGO_APPROVE_MSG, reg->hr_key, + sizeof(struct o2hb_nego_msg), +@@ -2057,6 +2057,12 @@ static struct config_item *o2hb_heartbeat_group_make_item(struct config_group *g + + unregister_handler: + o2net_unregister_handler_list(®->hr_handler_list); ++remove_item: ++ spin_lock(&o2hb_live_lock); ++ list_del(®->hr_all_item); ++ if (o2hb_global_heartbeat_active()) ++ clear_bit(reg->hr_region_num, o2hb_region_bitmap); ++ spin_unlock(&o2hb_live_lock); + free: + kfree(reg); + return ERR_PTR(ret); +diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c +index d2018f70d1fae..070d2df8ab9cf 100644 +--- 
a/fs/proc/proc_sysctl.c ++++ b/fs/proc/proc_sysctl.c +@@ -571,7 +571,7 @@ static ssize_t proc_sys_call_handler(struct kiocb *iocb, struct iov_iter *iter, + error = -ENOMEM; + if (count >= KMALLOC_MAX_SIZE) + goto out; +- kbuf = kzalloc(count + 1, GFP_KERNEL); ++ kbuf = kvzalloc(count + 1, GFP_KERNEL); + if (!kbuf) + goto out; + +@@ -600,7 +600,7 @@ static ssize_t proc_sys_call_handler(struct kiocb *iocb, struct iov_iter *iter, + + error = count; + out_free_buf: +- kfree(kbuf); ++ kvfree(kbuf); + out: + sysctl_head_finish(head); + +diff --git a/fs/proc/self.c b/fs/proc/self.c +index cc71ce3466dc0..a4012154e1096 100644 +--- a/fs/proc/self.c ++++ b/fs/proc/self.c +@@ -20,7 +20,7 @@ static const char *proc_self_get_link(struct dentry *dentry, + * Not currently supported. Once we can inherit all of struct pid, + * we can allow this. + */ +- if (current->flags & PF_KTHREAD) ++ if (current->flags & PF_IO_WORKER) + return ERR_PTR(-EOPNOTSUPP); + + if (!tgid) +diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c +index 602e3a52884d8..3cec6fbef725e 100644 +--- a/fs/proc/task_mmu.c ++++ b/fs/proc/task_mmu.c +@@ -1210,7 +1210,6 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, + struct mm_struct *mm; + struct vm_area_struct *vma; + enum clear_refs_types type; +- struct mmu_gather tlb; + int itype; + int rv; + +@@ -1249,7 +1248,6 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, + goto out_unlock; + } + +- tlb_gather_mmu(&tlb, mm, 0, -1); + if (type == CLEAR_REFS_SOFT_DIRTY) { + for (vma = mm->mmap; vma; vma = vma->vm_next) { + if (!(vma->vm_flags & VM_SOFTDIRTY)) +@@ -1258,15 +1256,18 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf, + vma_set_page_prot(vma); + } + ++ inc_tlb_flush_pending(mm); + mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY, + 0, NULL, mm, 0, -1UL); + mmu_notifier_invalidate_range_start(&range); + } + walk_page_range(mm, 0, mm->highest_vm_end, &clear_refs_walk_ops, + 
&cp); +- if (type == CLEAR_REFS_SOFT_DIRTY) ++ if (type == CLEAR_REFS_SOFT_DIRTY) { + mmu_notifier_invalidate_range_end(&range); +- tlb_finish_mmu(&tlb, 0, -1); ++ flush_tlb_mm(mm); ++ dec_tlb_flush_pending(mm); ++ } + out_unlock: + mmap_write_unlock(mm); + out_mm: +diff --git a/fs/proc/thread_self.c b/fs/proc/thread_self.c +index a553273fbd417..d56681d86d28a 100644 +--- a/fs/proc/thread_self.c ++++ b/fs/proc/thread_self.c +@@ -17,6 +17,13 @@ static const char *proc_thread_self_get_link(struct dentry *dentry, + pid_t pid = task_pid_nr_ns(current, ns); + char *name; + ++ /* ++ * Not currently supported. Once we can inherit all of struct pid, ++ * we can allow this. ++ */ ++ if (current->flags & PF_IO_WORKER) ++ return ERR_PTR(-EOPNOTSUPP); ++ + if (!pid) + return ERR_PTR(-ENOENT); + name = kmalloc(10 + 6 + 10 + 1, dentry ? GFP_KERNEL : GFP_ATOMIC); +diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c +index 36714df37d5d8..b1ebf7b61732c 100644 +--- a/fs/pstore/platform.c ++++ b/fs/pstore/platform.c +@@ -269,7 +269,7 @@ static int pstore_compress(const void *in, void *out, + { + int ret; + +- if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION)) ++ if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS)) + return -EINVAL; + + ret = crypto_comp_compress(tfm, in, inlen, out, &outlen); +@@ -671,7 +671,7 @@ static void decompress_record(struct pstore_record *record) + int unzipped_len; + char *unzipped, *workspace; + +- if (!IS_ENABLED(CONFIG_PSTORE_COMPRESSION) || !record->compressed) ++ if (!IS_ENABLED(CONFIG_PSTORE_COMPRESS) || !record->compressed) + return; + + /* Only PSTORE_TYPE_DMESG support compression. 
*/ +diff --git a/fs/quota/quota_v2.c b/fs/quota/quota_v2.c +index c21106557a37e..b1467f3921c28 100644 +--- a/fs/quota/quota_v2.c ++++ b/fs/quota/quota_v2.c +@@ -164,19 +164,24 @@ static int v2_read_file_info(struct super_block *sb, int type) + quota_error(sb, "Number of blocks too big for quota file size (%llu > %llu).", + (loff_t)qinfo->dqi_blocks << qinfo->dqi_blocksize_bits, + i_size_read(sb_dqopt(sb)->files[type])); +- goto out; ++ goto out_free; + } + if (qinfo->dqi_free_blk >= qinfo->dqi_blocks) { + quota_error(sb, "Free block number too big (%u >= %u).", + qinfo->dqi_free_blk, qinfo->dqi_blocks); +- goto out; ++ goto out_free; + } + if (qinfo->dqi_free_entry >= qinfo->dqi_blocks) { + quota_error(sb, "Block with free entry too big (%u >= %u).", + qinfo->dqi_free_entry, qinfo->dqi_blocks); +- goto out; ++ goto out_free; + } + ret = 0; ++out_free: ++ if (ret) { ++ kfree(info->dqi_priv); ++ info->dqi_priv = NULL; ++ } + out: + up_read(&dqopt->dqio_sem); + return ret; +diff --git a/fs/ubifs/auth.c b/fs/ubifs/auth.c +index 8c50de693e1d4..50e88a2ab88ff 100644 +--- a/fs/ubifs/auth.c ++++ b/fs/ubifs/auth.c +@@ -328,7 +328,7 @@ int ubifs_init_authentication(struct ubifs_info *c) + ubifs_err(c, "hmac %s is bigger than maximum allowed hmac size (%d > %d)", + hmac_name, c->hmac_desc_len, UBIFS_HMAC_ARR_SZ); + err = -EINVAL; +- goto out_free_hash; ++ goto out_free_hmac; + } + + err = crypto_shash_setkey(c->hmac_tfm, ukp->data, ukp->datalen); +diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c +index 2f8d8f4f411ab..9a151a1f5e260 100644 +--- a/fs/ubifs/replay.c ++++ b/fs/ubifs/replay.c +@@ -559,7 +559,9 @@ static int is_last_bud(struct ubifs_info *c, struct ubifs_bud *bud) + } + + /* authenticate_sleb_hash is split out for stack usage */ +-static int authenticate_sleb_hash(struct ubifs_info *c, struct shash_desc *log_hash, u8 *hash) ++static int noinline_for_stack ++authenticate_sleb_hash(struct ubifs_info *c, ++ struct shash_desc *log_hash, u8 *hash) + { + 
SHASH_DESC_ON_STACK(hash_desc, c->hash_tfm); + +diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c +index cb3acfb7dd1fc..dacbb999ae34d 100644 +--- a/fs/ubifs/super.c ++++ b/fs/ubifs/super.c +@@ -838,8 +838,10 @@ static int alloc_wbufs(struct ubifs_info *c) + c->jheads[i].wbuf.jhead = i; + c->jheads[i].grouped = 1; + c->jheads[i].log_hash = ubifs_hash_get_desc(c); +- if (IS_ERR(c->jheads[i].log_hash)) ++ if (IS_ERR(c->jheads[i].log_hash)) { ++ err = PTR_ERR(c->jheads[i].log_hash); + goto out; ++ } + } + + /* +diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c +index bec47f2d074be..3fe933b1010c3 100644 +--- a/fs/zonefs/super.c ++++ b/fs/zonefs/super.c +@@ -250,6 +250,9 @@ static loff_t zonefs_check_zone_condition(struct inode *inode, + } + inode->i_mode &= ~0222; + return i_size_read(inode); ++ case BLK_ZONE_COND_FULL: ++ /* The write pointer of full zones is invalid. */ ++ return zi->i_max_size; + default: + if (zi->i_ztype == ZONEFS_ZTYPE_CNV) + return zi->i_max_size; +diff --git a/include/acpi/acexcep.h b/include/acpi/acexcep.h +index 2fc624a617690..f8a4afb0279a3 100644 +--- a/include/acpi/acexcep.h ++++ b/include/acpi/acexcep.h +@@ -59,11 +59,11 @@ struct acpi_exception_info { + + #define AE_OK (acpi_status) 0x0000 + +-#define ACPI_ENV_EXCEPTION(status) (status & AE_CODE_ENVIRONMENTAL) +-#define ACPI_AML_EXCEPTION(status) (status & AE_CODE_AML) +-#define ACPI_PROG_EXCEPTION(status) (status & AE_CODE_PROGRAMMER) +-#define ACPI_TABLE_EXCEPTION(status) (status & AE_CODE_ACPI_TABLES) +-#define ACPI_CNTL_EXCEPTION(status) (status & AE_CODE_CONTROL) ++#define ACPI_ENV_EXCEPTION(status) (((status) & AE_CODE_MASK) == AE_CODE_ENVIRONMENTAL) ++#define ACPI_AML_EXCEPTION(status) (((status) & AE_CODE_MASK) == AE_CODE_AML) ++#define ACPI_PROG_EXCEPTION(status) (((status) & AE_CODE_MASK) == AE_CODE_PROGRAMMER) ++#define ACPI_TABLE_EXCEPTION(status) (((status) & AE_CODE_MASK) == AE_CODE_ACPI_TABLES) ++#define ACPI_CNTL_EXCEPTION(status) (((status) & AE_CODE_MASK) == 
AE_CODE_CONTROL) + + /* + * Environmental exceptions +diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h +index b97c628ad91ff..34d8287cd7749 100644 +--- a/include/asm-generic/vmlinux.lds.h ++++ b/include/asm-generic/vmlinux.lds.h +@@ -828,8 +828,13 @@ + /* DWARF 4 */ \ + .debug_types 0 : { *(.debug_types) } \ + /* DWARF 5 */ \ ++ .debug_addr 0 : { *(.debug_addr) } \ ++ .debug_line_str 0 : { *(.debug_line_str) } \ ++ .debug_loclists 0 : { *(.debug_loclists) } \ + .debug_macro 0 : { *(.debug_macro) } \ +- .debug_addr 0 : { *(.debug_addr) } ++ .debug_names 0 : { *(.debug_names) } \ ++ .debug_rnglists 0 : { *(.debug_rnglists) } \ ++ .debug_str_offsets 0 : { *(.debug_str_offsets) } + + /* Stabs debugging sections. */ + #define STABS_DEBUG \ +@@ -988,12 +993,13 @@ + #endif + + /* +- * Clang's -fsanitize=kernel-address and -fsanitize=thread produce +- * unwanted sections (.eh_frame and .init_array.*), but +- * CONFIG_CONSTRUCTORS wants to keep any .init_array.* sections. ++ * Clang's -fprofile-arcs, -fsanitize=kernel-address, and ++ * -fsanitize=thread produce unwanted sections (.eh_frame ++ * and .init_array.*), but CONFIG_CONSTRUCTORS wants to ++ * keep any .init_array.* sections. 
+ * https://bugs.llvm.org/show_bug.cgi?id=46478 + */ +-#if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN) ++#if defined(CONFIG_GCOV_KERNEL) || defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KCSAN) + # ifdef CONFIG_CONSTRUCTORS + # define SANITIZER_DISCARDS \ + *(.eh_frame) +diff --git a/include/linux/bpf.h b/include/linux/bpf.h +index 2b16bf48aab61..642ce03f19c4c 100644 +--- a/include/linux/bpf.h ++++ b/include/linux/bpf.h +@@ -1371,7 +1371,10 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size) + /* verify correctness of eBPF program */ + int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, + union bpf_attr __user *uattr); ++ ++#ifndef CONFIG_BPF_JIT_ALWAYS_ON + void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth); ++#endif + + struct btf *bpf_get_btf_vmlinux(void); + +diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h +index 61a66fb8ebb34..d2d7f9b6a2761 100644 +--- a/include/linux/device-mapper.h ++++ b/include/linux/device-mapper.h +@@ -325,6 +325,11 @@ struct dm_target { + * whether or not its underlying devices have support. + */ + bool discards_supported:1; ++ ++ /* ++ * Set if we need to limit the number of in-flight bios when swapping. 
++ */ ++ bool limit_swap_bios:1; + }; + + void *dm_per_bio_data(struct bio *bio, size_t data_size); +diff --git a/include/linux/eventpoll.h b/include/linux/eventpoll.h +index 8f000fada5a46..0df0de0cf45e3 100644 +--- a/include/linux/eventpoll.h ++++ b/include/linux/eventpoll.h +@@ -18,7 +18,7 @@ struct file; + + #ifdef CONFIG_EPOLL + +-#ifdef CONFIG_CHECKPOINT_RESTORE ++#ifdef CONFIG_KCMP + struct file *get_epoll_tfile_raw_ptr(struct file *file, int tfd, unsigned long toff); + #endif + +diff --git a/include/linux/filter.h b/include/linux/filter.h +index 1b62397bd1247..e2ffa02f9067a 100644 +--- a/include/linux/filter.h ++++ b/include/linux/filter.h +@@ -886,7 +886,7 @@ void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp); + u64 __bpf_call_base(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5); + #define __bpf_call_base_args \ + ((u64 (*)(u64, u64, u64, u64, u64, const struct bpf_insn *)) \ +- __bpf_call_base) ++ (void *)__bpf_call_base) + + struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog); + void bpf_jit_compile(struct bpf_prog *prog); +diff --git a/include/linux/icmpv6.h b/include/linux/icmpv6.h +index 1b3371ae81936..9055cb380ee24 100644 +--- a/include/linux/icmpv6.h ++++ b/include/linux/icmpv6.h +@@ -3,6 +3,7 @@ + #define _LINUX_ICMPV6_H + + #include ++#include + #include + + static inline struct icmp6hdr *icmp6_hdr(const struct sk_buff *skb) +@@ -15,13 +16,16 @@ static inline struct icmp6hdr *icmp6_hdr(const struct sk_buff *skb) + #if IS_ENABLED(CONFIG_IPV6) + + typedef void ip6_icmp_send_t(struct sk_buff *skb, u8 type, u8 code, __u32 info, +- const struct in6_addr *force_saddr); +-#if IS_BUILTIN(CONFIG_IPV6) ++ const struct in6_addr *force_saddr, ++ const struct inet6_skb_parm *parm); + void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, +- const struct in6_addr *force_saddr); +-static inline void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info) ++ const struct in6_addr *force_saddr, ++ const struct inet6_skb_parm *parm); 
++#if IS_BUILTIN(CONFIG_IPV6) ++static inline void __icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, ++ const struct inet6_skb_parm *parm) + { +- icmp6_send(skb, type, code, info, NULL); ++ icmp6_send(skb, type, code, info, NULL, parm); + } + static inline int inet6_register_icmp_sender(ip6_icmp_send_t *fn) + { +@@ -34,18 +38,28 @@ static inline int inet6_unregister_icmp_sender(ip6_icmp_send_t *fn) + return 0; + } + #else +-extern void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info); ++extern void __icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, ++ const struct inet6_skb_parm *parm); + extern int inet6_register_icmp_sender(ip6_icmp_send_t *fn); + extern int inet6_unregister_icmp_sender(ip6_icmp_send_t *fn); + #endif + ++static inline void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info) ++{ ++ __icmpv6_send(skb, type, code, info, IP6CB(skb)); ++} ++ + int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type, + unsigned int data_len); + + #if IS_ENABLED(CONFIG_NF_NAT) + void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info); + #else +-#define icmpv6_ndo_send icmpv6_send ++static inline void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info) ++{ ++ struct inet6_skb_parm parm = { 0 }; ++ __icmpv6_send(skb_in, type, code, info, &parm); ++} + #endif + + #else +diff --git a/include/linux/iommu.h b/include/linux/iommu.h +index 9bbcfe3b0bb12..f11f5072af5dc 100644 +--- a/include/linux/iommu.h ++++ b/include/linux/iommu.h +@@ -169,7 +169,7 @@ enum iommu_dev_features { + * struct iommu_iotlb_gather - Range information for a pending IOTLB flush + * + * @start: IOVA representing the start of the range to be flushed +- * @end: IOVA representing the end of the range to be flushed (exclusive) ++ * @end: IOVA representing the end of the range to be flushed (inclusive) + * @pgsize: The interval at which to perform the flush + * + * This structure is intended to be updated 
by multiple calls to the +@@ -536,7 +536,7 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain, + struct iommu_iotlb_gather *gather, + unsigned long iova, size_t size) + { +- unsigned long start = iova, end = start + size; ++ unsigned long start = iova, end = start + size - 1; + + /* + * If the new page is disjoint from the current range or is mapped at +diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h +index dda61d150a138..f514a7dd8c9cf 100644 +--- a/include/linux/ipv6.h ++++ b/include/linux/ipv6.h +@@ -84,7 +84,6 @@ struct ipv6_params { + __s32 autoconf; + }; + extern struct ipv6_params ipv6_defaults; +-#include + #include + #include + +diff --git a/include/linux/kexec.h b/include/linux/kexec.h +index 9e93bef529680..5f61389f5f361 100644 +--- a/include/linux/kexec.h ++++ b/include/linux/kexec.h +@@ -300,6 +300,11 @@ struct kimage { + /* Information for loading purgatory */ + struct purgatory_info purgatory_info; + #endif ++ ++#ifdef CONFIG_IMA_KEXEC ++ /* Virtual address of IMA measurement buffer for kexec syscall */ ++ void *ima_buffer; ++#endif + }; + + /* kexec interface functions */ +diff --git a/include/linux/key.h b/include/linux/key.h +index 0f2e24f13c2bd..eed3ce139a32e 100644 +--- a/include/linux/key.h ++++ b/include/linux/key.h +@@ -289,6 +289,7 @@ extern struct key *key_alloc(struct key_type *type, + #define KEY_ALLOC_BUILT_IN 0x0004 /* Key is built into kernel */ + #define KEY_ALLOC_BYPASS_RESTRICTION 0x0008 /* Override the check on restricted keyrings */ + #define KEY_ALLOC_UID_KEYRING 0x0010 /* allocating a user or user session keyring */ ++#define KEY_ALLOC_SET_KEEP 0x0020 /* Set the KEEP flag on the key/keyring */ + + extern void key_revoke(struct key *key); + extern void key_invalidate(struct key *key); +diff --git a/include/linux/kgdb.h b/include/linux/kgdb.h +index 0d6cf64c8bb12..3c755f6eaefd8 100644 +--- a/include/linux/kgdb.h ++++ b/include/linux/kgdb.h +@@ -360,9 +360,11 @@ extern atomic_t kgdb_active; + 
extern bool dbg_is_early; + extern void __init dbg_late_init(void); + extern void kgdb_panic(const char *msg); ++extern void kgdb_free_init_mem(void); + #else /* ! CONFIG_KGDB */ + #define in_dbg_master() (0) + #define dbg_late_init() + static inline void kgdb_panic(const char *msg) {} ++static inline void kgdb_free_init_mem(void) { } + #endif /* ! CONFIG_KGDB */ + #endif /* _KGDB_H_ */ +diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h +index c941b73773216..2fcc01891b474 100644 +--- a/include/linux/khugepaged.h ++++ b/include/linux/khugepaged.h +@@ -3,6 +3,7 @@ + #define _LINUX_KHUGEPAGED_H + + #include /* MMF_VM_HUGEPAGE */ ++#include + + + #ifdef CONFIG_TRANSPARENT_HUGEPAGE +@@ -57,6 +58,7 @@ static inline int khugepaged_enter(struct vm_area_struct *vma, + { + if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags)) + if ((khugepaged_always() || ++ (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) || + (khugepaged_req_madv() && (vm_flags & VM_HUGEPAGE))) && + !(vm_flags & VM_NOHUGEPAGE) && + !test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)) +diff --git a/include/linux/memremap.h b/include/linux/memremap.h +index 79c49e7f5c304..f5b464daeeca5 100644 +--- a/include/linux/memremap.h ++++ b/include/linux/memremap.h +@@ -137,6 +137,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap); + void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap); + struct dev_pagemap *get_dev_pagemap(unsigned long pfn, + struct dev_pagemap *pgmap); ++bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn); + + unsigned long vmem_altmap_offset(struct vmem_altmap *altmap); + void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns); +@@ -165,6 +166,11 @@ static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn, + return NULL; + } + ++static inline bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn) ++{ ++ return false; ++} ++ + static inline unsigned long vmem_altmap_offset(struct 
vmem_altmap *altmap) + { + return 0; +diff --git a/include/linux/mfd/rohm-generic.h b/include/linux/mfd/rohm-generic.h +index 4283b5b33e040..2b85b9deb03ae 100644 +--- a/include/linux/mfd/rohm-generic.h ++++ b/include/linux/mfd/rohm-generic.h +@@ -20,14 +20,12 @@ struct rohm_regmap_dev { + struct regmap *regmap; + }; + +-enum { +- ROHM_DVS_LEVEL_UNKNOWN, +- ROHM_DVS_LEVEL_RUN, +- ROHM_DVS_LEVEL_IDLE, +- ROHM_DVS_LEVEL_SUSPEND, +- ROHM_DVS_LEVEL_LPSR, +- ROHM_DVS_LEVEL_MAX = ROHM_DVS_LEVEL_LPSR, +-}; ++#define ROHM_DVS_LEVEL_RUN BIT(0) ++#define ROHM_DVS_LEVEL_IDLE BIT(1) ++#define ROHM_DVS_LEVEL_SUSPEND BIT(2) ++#define ROHM_DVS_LEVEL_LPSR BIT(3) ++#define ROHM_DVS_LEVEL_VALID_AMOUNT 4 ++#define ROHM_DVS_LEVEL_UNKNOWN 0 + + /** + * struct rohm_dvs_config - dynamic voltage scaling register descriptions +diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h +index 5c119d6cecf14..c5adba5e79e7e 100644 +--- a/include/linux/rcupdate.h ++++ b/include/linux/rcupdate.h +@@ -110,8 +110,10 @@ static inline void rcu_user_exit(void) { } + + #ifdef CONFIG_RCU_NOCB_CPU + void rcu_init_nohz(void); ++void rcu_nocb_flush_deferred_wakeup(void); + #else /* #ifdef CONFIG_RCU_NOCB_CPU */ + static inline void rcu_init_nohz(void) { } ++static inline void rcu_nocb_flush_deferred_wakeup(void) { } + #endif /* #else #ifdef CONFIG_RCU_NOCB_CPU */ + + /** +diff --git a/include/linux/rmap.h b/include/linux/rmap.h +index 70085ca1a3fc9..def5c62c93b3b 100644 +--- a/include/linux/rmap.h ++++ b/include/linux/rmap.h +@@ -213,7 +213,8 @@ struct page_vma_mapped_walk { + + static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw) + { +- if (pvmw->pte) ++ /* HugeTLB pte is set to the relevant page table entry without pte_mapped. 
*/ ++ if (pvmw->pte && !PageHuge(pvmw->page)) + pte_unmap(pvmw->pte); + if (pvmw->ptl) + spin_unlock(pvmw->ptl); +diff --git a/include/linux/soundwire/sdw.h b/include/linux/soundwire/sdw.h +index 41cc1192f9aab..57cda3a3a9d95 100644 +--- a/include/linux/soundwire/sdw.h ++++ b/include/linux/soundwire/sdw.h +@@ -1001,6 +1001,8 @@ int sdw_bus_exit_clk_stop(struct sdw_bus *bus); + + int sdw_read(struct sdw_slave *slave, u32 addr); + int sdw_write(struct sdw_slave *slave, u32 addr, u8 value); ++int sdw_write_no_pm(struct sdw_slave *slave, u32 addr, u8 value); ++int sdw_read_no_pm(struct sdw_slave *slave, u32 addr); + int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val); + int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val); + +diff --git a/include/linux/tpm.h b/include/linux/tpm.h +index 8f4ff39f51e7d..804a3f69bbd93 100644 +--- a/include/linux/tpm.h ++++ b/include/linux/tpm.h +@@ -397,6 +397,10 @@ static inline u32 tpm2_rc_value(u32 rc) + #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE) + + extern int tpm_is_tpm2(struct tpm_chip *chip); ++extern __must_check int tpm_try_get_ops(struct tpm_chip *chip); ++extern void tpm_put_ops(struct tpm_chip *chip); ++extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf, ++ size_t min_rsp_body_length, const char *desc); + extern int tpm_pcr_read(struct tpm_chip *chip, u32 pcr_idx, + struct tpm_digest *digest); + extern int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx, +@@ -410,7 +414,6 @@ static inline int tpm_is_tpm2(struct tpm_chip *chip) + { + return -ENODEV; + } +- + static inline int tpm_pcr_read(struct tpm_chip *chip, int pcr_idx, + struct tpm_digest *digest) + { +diff --git a/include/linux/tty_ldisc.h b/include/linux/tty_ldisc.h +index b1e6043e99175..572a079761165 100644 +--- a/include/linux/tty_ldisc.h ++++ b/include/linux/tty_ldisc.h +@@ -185,7 +185,8 @@ struct tty_ldisc_ops { + void (*close)(struct tty_struct *); + void (*flush_buffer)(struct 
tty_struct *tty); + ssize_t (*read)(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t nr); ++ unsigned char *buf, size_t nr, ++ void **cookie, unsigned long offset); + ssize_t (*write)(struct tty_struct *tty, struct file *file, + const unsigned char *buf, size_t nr); + int (*ioctl)(struct tty_struct *tty, struct file *file, +diff --git a/include/net/act_api.h b/include/net/act_api.h +index 87214927314a1..89b42a1e4f88e 100644 +--- a/include/net/act_api.h ++++ b/include/net/act_api.h +@@ -166,6 +166,7 @@ int tcf_idr_create_from_flags(struct tc_action_net *tn, u32 index, + struct nlattr *est, struct tc_action **a, + const struct tc_action_ops *ops, int bind, + u32 flags); ++void tcf_idr_insert_many(struct tc_action *actions[]); + void tcf_idr_cleanup(struct tc_action_net *tn, u32 index); + int tcf_idr_check_alloc(struct tc_action_net *tn, u32 *index, + struct tc_action **a, int bind); +@@ -186,10 +187,13 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, + struct nlattr *est, char *name, int ovr, int bind, + struct tc_action *actions[], size_t *attr_size, + bool rtnl_held, struct netlink_ext_ack *extack); ++struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla, ++ bool rtnl_held, ++ struct netlink_ext_ack *extack); + struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, + struct nlattr *nla, struct nlattr *est, + char *name, int ovr, int bind, +- bool rtnl_held, ++ struct tc_action_ops *ops, bool rtnl_held, + struct netlink_ext_ack *extack); + int tcf_action_dump(struct sk_buff *skb, struct tc_action *actions[], int bind, + int ref, bool terse); +diff --git a/include/net/icmp.h b/include/net/icmp.h +index 9ac2d2672a938..fd84adc479633 100644 +--- a/include/net/icmp.h ++++ b/include/net/icmp.h +@@ -46,7 +46,11 @@ static inline void icmp_send(struct sk_buff *skb_in, int type, int code, __be32 + #if IS_ENABLED(CONFIG_NF_NAT) + void icmp_ndo_send(struct sk_buff *skb_in, int 
type, int code, __be32 info); + #else +-#define icmp_ndo_send icmp_send ++static inline void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info) ++{ ++ struct ip_options opts = { 0 }; ++ __icmp_send(skb_in, type, code, info, &opts); ++} + #endif + + int icmp_rcv(struct sk_buff *skb); +diff --git a/include/net/tcp.h b/include/net/tcp.h +index fe9747ee70a6f..7d66c61d22c7d 100644 +--- a/include/net/tcp.h ++++ b/include/net/tcp.h +@@ -1424,8 +1424,13 @@ void tcp_cleanup_rbuf(struct sock *sk, int copied); + */ + static inline bool tcp_rmem_pressure(const struct sock *sk) + { +- int rcvbuf = READ_ONCE(sk->sk_rcvbuf); +- int threshold = rcvbuf - (rcvbuf >> 3); ++ int rcvbuf, threshold; ++ ++ if (tcp_under_memory_pressure(sk)) ++ return true; ++ ++ rcvbuf = READ_ONCE(sk->sk_rcvbuf); ++ threshold = rcvbuf - (rcvbuf >> 3); + + return atomic_read(&sk->sk_rmem_alloc) > threshold; + } +diff --git a/include/uapi/drm/drm_fourcc.h b/include/uapi/drm/drm_fourcc.h +index 82f3278012677..5498d7a6556a7 100644 +--- a/include/uapi/drm/drm_fourcc.h ++++ b/include/uapi/drm/drm_fourcc.h +@@ -997,9 +997,9 @@ drm_fourcc_canonicalize_nvidia_format_mod(__u64 modifier) + * Not all combinations are valid, and different SoCs may support different + * combinations of layout and options. + */ +-#define __fourcc_mod_amlogic_layout_mask 0xf ++#define __fourcc_mod_amlogic_layout_mask 0xff + #define __fourcc_mod_amlogic_options_shift 8 +-#define __fourcc_mod_amlogic_options_mask 0xf ++#define __fourcc_mod_amlogic_options_mask 0xff + + #define DRM_FORMAT_MOD_AMLOGIC_FBC(__layout, __options) \ + fourcc_mod_code(AMLOGIC, \ +diff --git a/init/Kconfig b/init/Kconfig +index 0872a5a2e7590..d559abf38c905 100644 +--- a/init/Kconfig ++++ b/init/Kconfig +@@ -1194,6 +1194,7 @@ endif # NAMESPACES + config CHECKPOINT_RESTORE + bool "Checkpoint/restore support" + select PROC_CHILDREN ++ select KCMP + default n + help + Enables additional kernel features in a sake of checkpoint/restore. 
+@@ -1737,6 +1738,16 @@ config ARCH_HAS_MEMBARRIER_CALLBACKS + config ARCH_HAS_MEMBARRIER_SYNC_CORE + bool + ++config KCMP ++ bool "Enable kcmp() system call" if EXPERT ++ help ++ Enable the kernel resource comparison system call. It provides ++ user-space with the ability to compare two processes to see if they ++ share a common resource, such as a file descriptor or even virtual ++ memory space. ++ ++ If unsure, say N. ++ + config RSEQ + bool "Enable rseq() system call" if EXPERT + default y +diff --git a/init/main.c b/init/main.c +index 9d964511fe0c2..d9d9141112511 100644 +--- a/init/main.c ++++ b/init/main.c +@@ -1417,6 +1417,7 @@ static int __ref kernel_init(void *unused) + async_synchronize_full(); + kprobe_free_init_mem(); + ftrace_free_init_mem(); ++ kgdb_free_init_mem(); + free_initmem(); + mark_readonly(); + +diff --git a/kernel/Makefile b/kernel/Makefile +index 6c9f19911be03..88b60a6e5dd0a 100644 +--- a/kernel/Makefile ++++ b/kernel/Makefile +@@ -48,7 +48,7 @@ obj-y += livepatch/ + obj-y += dma/ + obj-y += entry/ + +-obj-$(CONFIG_CHECKPOINT_RESTORE) += kcmp.o ++obj-$(CONFIG_KCMP) += kcmp.o + obj-$(CONFIG_FREEZER) += freezer.o + obj-$(CONFIG_PROFILING) += profile.o + obj-$(CONFIG_STACKTRACE) += stacktrace.o +diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c +index 8f10e30ea0b08..e8957e911de31 100644 +--- a/kernel/bpf/bpf_iter.c ++++ b/kernel/bpf/bpf_iter.c +@@ -273,7 +273,7 @@ int bpf_iter_reg_target(const struct bpf_iter_reg *reg_info) + { + struct bpf_iter_target_info *tinfo; + +- tinfo = kmalloc(sizeof(*tinfo), GFP_KERNEL); ++ tinfo = kzalloc(sizeof(*tinfo), GFP_KERNEL); + if (!tinfo) + return -ENOMEM; + +diff --git a/kernel/bpf/bpf_lru_list.c b/kernel/bpf/bpf_lru_list.c +index 1b6b9349cb857..d99e89f113c43 100644 +--- a/kernel/bpf/bpf_lru_list.c ++++ b/kernel/bpf/bpf_lru_list.c +@@ -502,13 +502,14 @@ struct bpf_lru_node *bpf_lru_pop_free(struct bpf_lru *lru, u32 hash) + static void bpf_common_lru_push_free(struct bpf_lru *lru, + struct 
bpf_lru_node *node) + { ++ u8 node_type = READ_ONCE(node->type); + unsigned long flags; + +- if (WARN_ON_ONCE(node->type == BPF_LRU_LIST_T_FREE) || +- WARN_ON_ONCE(node->type == BPF_LRU_LOCAL_LIST_T_FREE)) ++ if (WARN_ON_ONCE(node_type == BPF_LRU_LIST_T_FREE) || ++ WARN_ON_ONCE(node_type == BPF_LRU_LOCAL_LIST_T_FREE)) + return; + +- if (node->type == BPF_LRU_LOCAL_LIST_T_PENDING) { ++ if (node_type == BPF_LRU_LOCAL_LIST_T_PENDING) { + struct bpf_lru_locallist *loc_l; + + loc_l = per_cpu_ptr(lru->common_lru.local_list, node->cpu); +diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c +index 2b5ca93c17dec..b5be9659ab590 100644 +--- a/kernel/bpf/devmap.c ++++ b/kernel/bpf/devmap.c +@@ -815,9 +815,7 @@ static int dev_map_notification(struct notifier_block *notifier, + break; + + /* will be freed in free_netdev() */ +- netdev->xdp_bulkq = +- __alloc_percpu_gfp(sizeof(struct xdp_dev_bulk_queue), +- sizeof(void *), GFP_ATOMIC); ++ netdev->xdp_bulkq = alloc_percpu(struct xdp_dev_bulk_queue); + if (!netdev->xdp_bulkq) + return NOTIFY_BAD; + +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index c09594e70f90a..6c2e4947beaeb 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -4786,8 +4786,9 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn, + subprog); + clear_caller_saved_regs(env, caller->regs); + +- /* All global functions return SCALAR_VALUE */ ++ /* All global functions return a 64-bit SCALAR_VALUE */ + mark_reg_unknown(env, caller->regs, BPF_REG_0); ++ caller->regs[BPF_REG_0].subreg_def = DEF_NOT_SUBREG; + + /* continue with next insn after call */ + return 0; +diff --git a/kernel/debug/debug_core.c b/kernel/debug/debug_core.c +index 1e75a8923a8d1..8661eb2b17711 100644 +--- a/kernel/debug/debug_core.c ++++ b/kernel/debug/debug_core.c +@@ -456,6 +456,17 @@ setundefined: + return 0; + } + ++void kgdb_free_init_mem(void) ++{ ++ int i; ++ ++ /* Clear init memory breakpoints. 
*/ ++ for (i = 0; i < KGDB_MAX_BREAKPOINTS; i++) { ++ if (init_section_contains((void *)kgdb_break[i].bpt_addr, 0)) ++ kgdb_break[i].state = BP_UNDEFINED; ++ } ++} ++ + #ifdef CONFIG_KGDB_KDB + void kdb_dump_stack_on_cpu(int cpu) + { +diff --git a/kernel/debug/kdb/kdb_private.h b/kernel/debug/kdb/kdb_private.h +index a4281fb99299e..81874213b0fe9 100644 +--- a/kernel/debug/kdb/kdb_private.h ++++ b/kernel/debug/kdb/kdb_private.h +@@ -230,7 +230,7 @@ extern struct task_struct *kdb_curr_task(int); + + #define kdb_task_has_cpu(p) (task_curr(p)) + +-#define GFP_KDB (in_interrupt() ? GFP_ATOMIC : GFP_KERNEL) ++#define GFP_KDB (in_dbg_master() ? GFP_ATOMIC : GFP_KERNEL) + + extern void *debug_kmalloc(size_t size, gfp_t flags); + extern void debug_kfree(void *); +diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c +index 3994a217bde76..3bf98db9c702d 100644 +--- a/kernel/kcsan/core.c ++++ b/kernel/kcsan/core.c +@@ -12,7 +12,6 @@ + #include + #include + #include +-#include + #include + #include + +@@ -101,7 +100,7 @@ static atomic_long_t watchpoints[CONFIG_KCSAN_NUM_WATCHPOINTS + NUM_SLOTS-1]; + static DEFINE_PER_CPU(long, kcsan_skip); + + /* For kcsan_prandom_u32_max(). */ +-static DEFINE_PER_CPU(struct rnd_state, kcsan_rand_state); ++static DEFINE_PER_CPU(u32, kcsan_rand_state); + + static __always_inline atomic_long_t *find_watchpoint(unsigned long addr, + size_t size, +@@ -275,20 +274,17 @@ should_watch(const volatile void *ptr, size_t size, int type, struct kcsan_ctx * + } + + /* +- * Returns a pseudo-random number in interval [0, ep_ro). See prandom_u32_max() +- * for more details. +- * +- * The open-coded version here is using only safe primitives for all contexts +- * where we can have KCSAN instrumentation. In particular, we cannot use +- * prandom_u32() directly, as its tracepoint could cause recursion. ++ * Returns a pseudo-random number in interval [0, ep_ro). Simple linear ++ * congruential generator, using constants from "Numerical Recipes". 
+ */ + static u32 kcsan_prandom_u32_max(u32 ep_ro) + { +- struct rnd_state *state = &get_cpu_var(kcsan_rand_state); +- const u32 res = prandom_u32_state(state); ++ u32 state = this_cpu_read(kcsan_rand_state); ++ ++ state = 1664525 * state + 1013904223; ++ this_cpu_write(kcsan_rand_state, state); + +- put_cpu_var(kcsan_rand_state); +- return (u32)(((u64) res * ep_ro) >> 32); ++ return state % ep_ro; + } + + static inline void reset_kcsan_skip(void) +@@ -639,10 +635,14 @@ static __always_inline void check_access(const volatile void *ptr, size_t size, + + void __init kcsan_init(void) + { ++ int cpu; ++ + BUG_ON(!in_task()); + + kcsan_debugfs_init(); +- prandom_seed_full_state(&kcsan_rand_state); ++ ++ for_each_possible_cpu(cpu) ++ per_cpu(kcsan_rand_state, cpu) = (u32)get_cycles(); + + /* + * We are in the init task, and no other tasks should be running; +diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c +index e21f6b9234f7a..7825adcc5efc3 100644 +--- a/kernel/kexec_file.c ++++ b/kernel/kexec_file.c +@@ -166,6 +166,11 @@ void kimage_file_post_load_cleanup(struct kimage *image) + vfree(pi->sechdrs); + pi->sechdrs = NULL; + ++#ifdef CONFIG_IMA_KEXEC ++ vfree(image->ima_buffer); ++ image->ima_buffer = NULL; ++#endif /* CONFIG_IMA_KEXEC */ ++ + /* See if architecture has anything to cleanup post load */ + arch_kimage_file_post_load_cleanup(image); + +diff --git a/kernel/kprobes.c b/kernel/kprobes.c +index 911c77ef5bbcd..f590e9ff37062 100644 +--- a/kernel/kprobes.c ++++ b/kernel/kprobes.c +@@ -871,7 +871,6 @@ out: + cpus_read_unlock(); + } + +-#ifdef CONFIG_SYSCTL + static void optimize_all_kprobes(void) + { + struct hlist_head *head; +@@ -897,6 +896,7 @@ out: + mutex_unlock(&kprobe_mutex); + } + ++#ifdef CONFIG_SYSCTL + static void unoptimize_all_kprobes(void) + { + struct hlist_head *head; +@@ -2627,18 +2627,14 @@ static int __init init_kprobes(void) + } + } + +-#if defined(CONFIG_OPTPROBES) +-#if defined(__ARCH_WANT_KPROBES_INSN_SLOT) +- /* Init 
kprobe_optinsn_slots */ +- kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE; +-#endif +- /* By default, kprobes can be optimized */ +- kprobes_allow_optimization = true; +-#endif +- + /* By default, kprobes are armed */ + kprobes_all_disarmed = false; + ++#if defined(CONFIG_OPTPROBES) && defined(__ARCH_WANT_KPROBES_INSN_SLOT) ++ /* Init kprobe_optinsn_slots for allocation */ ++ kprobe_optinsn_slots.insn_size = MAX_OPTINSN_SIZE; ++#endif ++ + err = arch_init_kprobes(); + if (!err) + err = register_die_notifier(&kprobe_exceptions_nb); +@@ -2653,6 +2649,21 @@ static int __init init_kprobes(void) + } + early_initcall(init_kprobes); + ++#if defined(CONFIG_OPTPROBES) ++static int __init init_optprobes(void) ++{ ++ /* ++ * Enable kprobe optimization - this kicks the optimizer which ++ * depends on synchronize_rcu_tasks() and ksoftirqd, that is ++ * not spawned in early initcall. So delay the optimization. ++ */ ++ optimize_all_kprobes(); ++ ++ return 0; ++} ++subsys_initcall(init_optprobes); ++#endif ++ + #ifdef CONFIG_DEBUG_FS + static void report_probe(struct seq_file *pi, struct kprobe *p, + const char *sym, int offset, char *modname, struct kprobe *pp) +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c +index bdaf4829098c0..780012eb2f3fe 100644 +--- a/kernel/locking/lockdep.c ++++ b/kernel/locking/lockdep.c +@@ -3707,7 +3707,7 @@ static void + print_usage_bug(struct task_struct *curr, struct held_lock *this, + enum lock_usage_bit prev_bit, enum lock_usage_bit new_bit) + { +- if (!debug_locks_off_graph_unlock() || debug_locks_silent) ++ if (!debug_locks_off() || debug_locks_silent) + return; + + pr_warn("\n"); +@@ -3748,6 +3748,7 @@ valid_state(struct task_struct *curr, struct held_lock *this, + enum lock_usage_bit new_bit, enum lock_usage_bit bad_bit) + { + if (unlikely(hlock_class(this)->usage_mask & (1 << bad_bit))) { ++ graph_unlock(); + print_usage_bug(curr, this, bad_bit, new_bit); + return 0; + } +diff --git a/kernel/module.c b/kernel/module.c 
+index e20499309b2af..94f926473e350 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -2315,6 +2315,21 @@ static int verify_exported_symbols(struct module *mod) + return 0; + } + ++static bool ignore_undef_symbol(Elf_Half emachine, const char *name) ++{ ++ /* ++ * On x86, PIC code and Clang non-PIC code may have call foo@PLT. GNU as ++ * before 2.37 produces an unreferenced _GLOBAL_OFFSET_TABLE_ on x86-64. ++ * i386 has a similar problem but may not deserve a fix. ++ * ++ * If we ever have to ignore many symbols, consider refactoring the code to ++ * only warn if referenced by a relocation. ++ */ ++ if (emachine == EM_386 || emachine == EM_X86_64) ++ return !strcmp(name, "_GLOBAL_OFFSET_TABLE_"); ++ return false; ++} ++ + /* Change all symbols so that st_value encodes the pointer directly. */ + static int simplify_symbols(struct module *mod, const struct load_info *info) + { +@@ -2360,8 +2375,10 @@ static int simplify_symbols(struct module *mod, const struct load_info *info) + break; + } + +- /* Ok if weak. */ +- if (!ksym && ELF_ST_BIND(sym[i].st_info) == STB_WEAK) ++ /* Ok if weak or ignored. 
*/ ++ if (!ksym && ++ (ELF_ST_BIND(sym[i].st_info) == STB_WEAK || ++ ignore_undef_symbol(info->hdr->e_machine, name))) + break; + + ret = PTR_ERR(ksym) ?: -ENOENT; +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c +index aafec8cb8637d..d0df95346ab3f 100644 +--- a/kernel/printk/printk.c ++++ b/kernel/printk/printk.c +@@ -782,9 +782,9 @@ static ssize_t devkmsg_read(struct file *file, char __user *buf, + logbuf_lock_irq(); + } + +- if (user->seq < prb_first_valid_seq(prb)) { ++ if (r->info->seq != user->seq) { + /* our last seen message is gone, return error and reset */ +- user->seq = prb_first_valid_seq(prb); ++ user->seq = r->info->seq; + ret = -EPIPE; + logbuf_unlock_irq(); + goto out; +@@ -859,6 +859,7 @@ static loff_t devkmsg_llseek(struct file *file, loff_t offset, int whence) + static __poll_t devkmsg_poll(struct file *file, poll_table *wait) + { + struct devkmsg_user *user = file->private_data; ++ struct printk_info info; + __poll_t ret = 0; + + if (!user) +@@ -867,9 +868,9 @@ static __poll_t devkmsg_poll(struct file *file, poll_table *wait) + poll_wait(file, &log_wait, wait); + + logbuf_lock_irq(); +- if (prb_read_valid(prb, user->seq, NULL)) { ++ if (prb_read_valid_info(prb, user->seq, &info, NULL)) { + /* return error when data has vanished underneath us */ +- if (user->seq < prb_first_valid_seq(prb)) ++ if (info.seq != user->seq) + ret = EPOLLIN|EPOLLRDNORM|EPOLLERR|EPOLLPRI; + else + ret = EPOLLIN|EPOLLRDNORM; +@@ -1606,6 +1607,7 @@ static void syslog_clear(void) + + int do_syslog(int type, char __user *buf, int len, int source) + { ++ struct printk_info info; + bool clear = false; + static int saved_console_loglevel = LOGLEVEL_DEFAULT; + int error; +@@ -1676,9 +1678,14 @@ int do_syslog(int type, char __user *buf, int len, int source) + /* Number of chars in the log buffer */ + case SYSLOG_ACTION_SIZE_UNREAD: + logbuf_lock_irq(); +- if (syslog_seq < prb_first_valid_seq(prb)) { ++ if (!prb_read_valid_info(prb, syslog_seq, &info, NULL)) { ++ /* 
No unread messages. */ ++ logbuf_unlock_irq(); ++ return 0; ++ } ++ if (info.seq != syslog_seq) { + /* messages are gone, move to first one */ +- syslog_seq = prb_first_valid_seq(prb); ++ syslog_seq = info.seq; + syslog_partial = 0; + } + if (source == SYSLOG_FROM_PROC) { +@@ -1690,7 +1697,6 @@ int do_syslog(int type, char __user *buf, int len, int source) + error = prb_next_seq(prb) - syslog_seq; + } else { + bool time = syslog_partial ? syslog_time : printk_time; +- struct printk_info info; + unsigned int line_count; + u64 seq; + +@@ -3378,9 +3384,11 @@ bool kmsg_dump_get_buffer(struct kmsg_dumper *dumper, bool syslog, + goto out; + + logbuf_lock_irqsave(flags); +- if (dumper->cur_seq < prb_first_valid_seq(prb)) { +- /* messages are gone, move to first available one */ +- dumper->cur_seq = prb_first_valid_seq(prb); ++ if (prb_read_valid_info(prb, dumper->cur_seq, &info, NULL)) { ++ if (info.seq != dumper->cur_seq) { ++ /* messages are gone, move to first available one */ ++ dumper->cur_seq = info.seq; ++ } + } + + /* last entry */ +diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c +index a0e6f746de6c4..2e9e3ed7d63ef 100644 +--- a/kernel/printk/printk_safe.c ++++ b/kernel/printk/printk_safe.c +@@ -45,6 +45,8 @@ struct printk_safe_seq_buf { + static DEFINE_PER_CPU(struct printk_safe_seq_buf, safe_print_seq); + static DEFINE_PER_CPU(int, printk_context); + ++static DEFINE_RAW_SPINLOCK(safe_read_lock); ++ + #ifdef CONFIG_PRINTK_NMI + static DEFINE_PER_CPU(struct printk_safe_seq_buf, nmi_print_seq); + #endif +@@ -180,8 +182,6 @@ static void report_message_lost(struct printk_safe_seq_buf *s) + */ + static void __printk_safe_flush(struct irq_work *work) + { +- static raw_spinlock_t read_lock = +- __RAW_SPIN_LOCK_INITIALIZER(read_lock); + struct printk_safe_seq_buf *s = + container_of(work, struct printk_safe_seq_buf, work); + unsigned long flags; +@@ -195,7 +195,7 @@ static void __printk_safe_flush(struct irq_work *work) + * different CPUs. 
This is especially important when printing + * a backtrace. + */ +- raw_spin_lock_irqsave(&read_lock, flags); ++ raw_spin_lock_irqsave(&safe_read_lock, flags); + + i = 0; + more: +@@ -232,7 +232,7 @@ more: + + out: + report_message_lost(s); +- raw_spin_unlock_irqrestore(&read_lock, flags); ++ raw_spin_unlock_irqrestore(&safe_read_lock, flags); + } + + /** +@@ -278,6 +278,14 @@ void printk_safe_flush_on_panic(void) + raw_spin_lock_init(&logbuf_lock); + } + ++ if (raw_spin_is_locked(&safe_read_lock)) { ++ if (num_online_cpus() > 1) ++ return; ++ ++ debug_locks_off(); ++ raw_spin_lock_init(&safe_read_lock); ++ } ++ + printk_safe_flush(); + } + +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index 593df7edfe97f..5dc36c6e80fdc 100644 +--- a/kernel/rcu/tree.c ++++ b/kernel/rcu/tree.c +@@ -636,7 +636,6 @@ static noinstr void rcu_eqs_enter(bool user) + trace_rcu_dyntick(TPS("Start"), rdp->dynticks_nesting, 0, atomic_read(&rdp->dynticks)); + WARN_ON_ONCE(IS_ENABLED(CONFIG_RCU_EQS_DEBUG) && !user && !is_idle_task(current)); + rdp = this_cpu_ptr(&rcu_data); +- do_nocb_deferred_wakeup(rdp); + rcu_prepare_for_idle(); + rcu_preempt_deferred_qs(current); + +@@ -683,7 +682,14 @@ EXPORT_SYMBOL_GPL(rcu_idle_enter); + */ + noinstr void rcu_user_enter(void) + { ++ struct rcu_data *rdp = this_cpu_ptr(&rcu_data); ++ + lockdep_assert_irqs_disabled(); ++ ++ instrumentation_begin(); ++ do_nocb_deferred_wakeup(rdp); ++ instrumentation_end(); ++ + rcu_eqs_enter(true); + } + #endif /* CONFIG_NO_HZ_FULL */ +diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h +index fd8a52e9a8874..7d4f78bf40577 100644 +--- a/kernel/rcu/tree_plugin.h ++++ b/kernel/rcu/tree_plugin.h +@@ -2187,6 +2187,11 @@ static void do_nocb_deferred_wakeup(struct rcu_data *rdp) + do_nocb_deferred_wakeup_common(rdp); + } + ++void rcu_nocb_flush_deferred_wakeup(void) ++{ ++ do_nocb_deferred_wakeup(this_cpu_ptr(&rcu_data)); ++} ++ + void __init rcu_init_nohz(void) + { + int cpu; +diff --git a/kernel/sched/fair.c 
b/kernel/sched/fair.c +index ae7ceba8fd4f2..3486053060276 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -3932,6 +3932,22 @@ static inline void util_est_enqueue(struct cfs_rq *cfs_rq, + trace_sched_util_est_cfs_tp(cfs_rq); + } + ++static inline void util_est_dequeue(struct cfs_rq *cfs_rq, ++ struct task_struct *p) ++{ ++ unsigned int enqueued; ++ ++ if (!sched_feat(UTIL_EST)) ++ return; ++ ++ /* Update root cfs_rq's estimated utilization */ ++ enqueued = cfs_rq->avg.util_est.enqueued; ++ enqueued -= min_t(unsigned int, enqueued, _task_util_est(p)); ++ WRITE_ONCE(cfs_rq->avg.util_est.enqueued, enqueued); ++ ++ trace_sched_util_est_cfs_tp(cfs_rq); ++} ++ + /* + * Check if a (signed) value is within a specified (unsigned) margin, + * based on the observation that: +@@ -3945,23 +3961,16 @@ static inline bool within_margin(int value, int margin) + return ((unsigned int)(value + margin - 1) < (2 * margin - 1)); + } + +-static void +-util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep) ++static inline void util_est_update(struct cfs_rq *cfs_rq, ++ struct task_struct *p, ++ bool task_sleep) + { + long last_ewma_diff; + struct util_est ue; +- int cpu; + + if (!sched_feat(UTIL_EST)) + return; + +- /* Update root cfs_rq's estimated utilization */ +- ue.enqueued = cfs_rq->avg.util_est.enqueued; +- ue.enqueued -= min_t(unsigned int, ue.enqueued, _task_util_est(p)); +- WRITE_ONCE(cfs_rq->avg.util_est.enqueued, ue.enqueued); +- +- trace_sched_util_est_cfs_tp(cfs_rq); +- + /* + * Skip update of task's estimated utilization when the task has not + * yet completed an activation, e.g. being migrated. +@@ -4001,8 +4010,7 @@ util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, bool task_sleep) + * To avoid overestimation of actual task utilization, skip updates if + * we cannot grant there is idle time in this CPU. 
+ */ +- cpu = cpu_of(rq_of(cfs_rq)); +- if (task_util(p) > capacity_orig_of(cpu)) ++ if (task_util(p) > capacity_orig_of(cpu_of(rq_of(cfs_rq)))) + return; + + /* +@@ -4041,7 +4049,7 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq) + if (!static_branch_unlikely(&sched_asym_cpucapacity)) + return; + +- if (!p) { ++ if (!p || p->nr_cpus_allowed == 1) { + rq->misfit_task_load = 0; + return; + } +@@ -4085,8 +4093,11 @@ static inline void + util_est_enqueue(struct cfs_rq *cfs_rq, struct task_struct *p) {} + + static inline void +-util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p, +- bool task_sleep) {} ++util_est_dequeue(struct cfs_rq *cfs_rq, struct task_struct *p) {} ++ ++static inline void ++util_est_update(struct cfs_rq *cfs_rq, struct task_struct *p, ++ bool task_sleep) {} + static inline void update_misfit_status(struct task_struct *p, struct rq *rq) {} + + #endif /* CONFIG_SMP */ +@@ -5589,6 +5600,8 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags) + int idle_h_nr_running = task_has_idle_policy(p); + bool was_sched_idle = sched_idle_rq(rq); + ++ util_est_dequeue(&rq->cfs, p); ++ + for_each_sched_entity(se) { + cfs_rq = cfs_rq_of(se); + dequeue_entity(cfs_rq, se, flags); +@@ -5639,7 +5652,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags) + rq->next_balance = jiffies; + + dequeue_throttle: +- util_est_dequeue(&rq->cfs, p, task_sleep); ++ util_est_update(&rq->cfs, p, task_sleep); + hrtick_update(rq); + } + +diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c +index c6932b8f4467a..36b545f17206f 100644 +--- a/kernel/sched/idle.c ++++ b/kernel/sched/idle.c +@@ -285,6 +285,7 @@ static void do_idle(void) + } + + arch_cpu_idle_enter(); ++ rcu_nocb_flush_deferred_wakeup(); + + /* + * In poll mode we reenable interrupts and spin. 
Also if we +diff --git a/kernel/seccomp.c b/kernel/seccomp.c +index 53a7d1512dd73..0ceaaba36c2e1 100644 +--- a/kernel/seccomp.c ++++ b/kernel/seccomp.c +@@ -1050,6 +1050,8 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd, + const bool recheck_after_trace) + { + BUG(); ++ ++ return -1; + } + #endif + +diff --git a/kernel/smp.c b/kernel/smp.c +index 4d17501433be7..25240fb2df949 100644 +--- a/kernel/smp.c ++++ b/kernel/smp.c +@@ -14,6 +14,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -449,6 +450,9 @@ void flush_smp_call_function_from_idle(void) + + local_irq_save(flags); + flush_smp_call_function_queue(true); ++ if (local_softirq_pending()) ++ do_softirq(); ++ + local_irq_restore(flags); + } + +diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c +index 3f659f8550741..3e261482296cf 100644 +--- a/kernel/tracepoint.c ++++ b/kernel/tracepoint.c +@@ -53,6 +53,12 @@ struct tp_probes { + struct tracepoint_func probes[]; + }; + ++/* Called in removal of a func but failed to allocate a new tp_funcs */ ++static void tp_stub_func(void) ++{ ++ return; ++} ++ + static inline void *allocate_probes(int count) + { + struct tp_probes *p = kmalloc(struct_size(p, probes, count), +@@ -131,6 +137,7 @@ func_add(struct tracepoint_func **funcs, struct tracepoint_func *tp_func, + { + struct tracepoint_func *old, *new; + int nr_probes = 0; ++ int stub_funcs = 0; + int pos = -1; + + if (WARN_ON(!tp_func->func)) +@@ -147,14 +154,34 @@ func_add(struct tracepoint_func **funcs, struct tracepoint_func *tp_func, + if (old[nr_probes].func == tp_func->func && + old[nr_probes].data == tp_func->data) + return ERR_PTR(-EEXIST); ++ if (old[nr_probes].func == tp_stub_func) ++ stub_funcs++; + } + } +- /* + 2 : one for new probe, one for NULL func */ +- new = allocate_probes(nr_probes + 2); ++ /* + 2 : one for new probe, one for NULL func - stub functions */ ++ new = allocate_probes(nr_probes + 2 - stub_funcs); + if (new == NULL) + 
return ERR_PTR(-ENOMEM); + if (old) { +- if (pos < 0) { ++ if (stub_funcs) { ++ /* Need to copy one at a time to remove stubs */ ++ int probes = 0; ++ ++ pos = -1; ++ for (nr_probes = 0; old[nr_probes].func; nr_probes++) { ++ if (old[nr_probes].func == tp_stub_func) ++ continue; ++ if (pos < 0 && old[nr_probes].prio < prio) ++ pos = probes++; ++ new[probes++] = old[nr_probes]; ++ } ++ nr_probes = probes; ++ if (pos < 0) ++ pos = probes; ++ else ++ nr_probes--; /* Account for insertion */ ++ ++ } else if (pos < 0) { + pos = nr_probes; + memcpy(new, old, nr_probes * sizeof(struct tracepoint_func)); + } else { +@@ -188,8 +215,9 @@ static void *func_remove(struct tracepoint_func **funcs, + /* (N -> M), (N > 1, M >= 0) probes */ + if (tp_func->func) { + for (nr_probes = 0; old[nr_probes].func; nr_probes++) { +- if (old[nr_probes].func == tp_func->func && +- old[nr_probes].data == tp_func->data) ++ if ((old[nr_probes].func == tp_func->func && ++ old[nr_probes].data == tp_func->data) || ++ old[nr_probes].func == tp_stub_func) + nr_del++; + } + } +@@ -208,14 +236,32 @@ static void *func_remove(struct tracepoint_func **funcs, + /* N -> M, (N > 1, M > 0) */ + /* + 1 for NULL */ + new = allocate_probes(nr_probes - nr_del + 1); +- if (new == NULL) +- return ERR_PTR(-ENOMEM); +- for (i = 0; old[i].func; i++) +- if (old[i].func != tp_func->func +- || old[i].data != tp_func->data) +- new[j++] = old[i]; +- new[nr_probes - nr_del].func = NULL; +- *funcs = new; ++ if (new) { ++ for (i = 0; old[i].func; i++) ++ if ((old[i].func != tp_func->func ++ || old[i].data != tp_func->data) ++ && old[i].func != tp_stub_func) ++ new[j++] = old[i]; ++ new[nr_probes - nr_del].func = NULL; ++ *funcs = new; ++ } else { ++ /* ++ * Failed to allocate, replace the old function ++ * with calls to tp_stub_func. ++ */ ++ for (i = 0; old[i].func; i++) ++ if (old[i].func == tp_func->func && ++ old[i].data == tp_func->data) { ++ old[i].func = tp_stub_func; ++ /* Set the prio to the next event. 
*/ ++ if (old[i + 1].func) ++ old[i].prio = ++ old[i + 1].prio; ++ else ++ old[i].prio = -1; ++ } ++ *funcs = old; ++ } + } + debug_print_probes(*funcs); + return old; +@@ -295,10 +341,12 @@ static int tracepoint_remove_func(struct tracepoint *tp, + tp_funcs = rcu_dereference_protected(tp->funcs, + lockdep_is_held(&tracepoints_mutex)); + old = func_remove(&tp_funcs, func); +- if (IS_ERR(old)) { +- WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM); ++ if (WARN_ON_ONCE(IS_ERR(old))) + return PTR_ERR(old); +- } ++ ++ if (tp_funcs == old) ++ /* Failed allocating new tp_funcs, replaced func with stub */ ++ return 0; + + if (!tp_funcs) { + /* Removed last function */ +diff --git a/mm/compaction.c b/mm/compaction.c +index 0846d4ffa3387..dba424447473d 100644 +--- a/mm/compaction.c ++++ b/mm/compaction.c +@@ -1248,7 +1248,7 @@ static void + fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long nr_isolated) + { + unsigned long start_pfn, end_pfn; +- struct page *page = pfn_to_page(pfn); ++ struct page *page; + + /* Do not search around if there are enough pages already */ + if (cc->nr_freepages >= cc->nr_migratepages) +@@ -1259,8 +1259,12 @@ fast_isolate_around(struct compact_control *cc, unsigned long pfn, unsigned long + return; + + /* Pageblock boundaries */ +- start_pfn = pageblock_start_pfn(pfn); +- end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone)) - 1; ++ start_pfn = max(pageblock_start_pfn(pfn), cc->zone->zone_start_pfn); ++ end_pfn = min(pageblock_end_pfn(pfn), zone_end_pfn(cc->zone)); ++ ++ page = pageblock_pfn_to_page(start_pfn, end_pfn, cc->zone); ++ if (!page) ++ return; + + /* Scan before */ + if (start_pfn != pfn) { +@@ -1362,7 +1366,8 @@ fast_isolate_freepages(struct compact_control *cc) + pfn = page_to_pfn(freepage); + + if (pfn >= highest) +- highest = pageblock_start_pfn(pfn); ++ highest = max(pageblock_start_pfn(pfn), ++ cc->zone->zone_start_pfn); + + if (pfn >= low_pfn) { + cc->fast_search_fail = 0; +@@ -1432,7 +1437,8 @@ 
fast_isolate_freepages(struct compact_control *cc) + } else { + if (cc->direct_compaction && pfn_valid(min_pfn)) { + page = pageblock_pfn_to_page(min_pfn, +- pageblock_end_pfn(min_pfn), ++ min(pageblock_end_pfn(min_pfn), ++ zone_end_pfn(cc->zone)), + cc->zone); + cc->free_pfn = min_pfn; + } +@@ -1662,6 +1668,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) + unsigned long pfn = cc->migrate_pfn; + unsigned long high_pfn; + int order; ++ bool found_block = false; + + /* Skip hints are relied on to avoid repeats on the fast search */ + if (cc->ignore_skip_hint) +@@ -1704,7 +1711,7 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) + high_pfn = pageblock_start_pfn(cc->migrate_pfn + distance); + + for (order = cc->order - 1; +- order >= PAGE_ALLOC_COSTLY_ORDER && pfn == cc->migrate_pfn && nr_scanned < limit; ++ order >= PAGE_ALLOC_COSTLY_ORDER && !found_block && nr_scanned < limit; + order--) { + struct free_area *area = &cc->zone->free_area[order]; + struct list_head *freelist; +@@ -1719,7 +1726,11 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) + list_for_each_entry(freepage, freelist, lru) { + unsigned long free_pfn; + +- nr_scanned++; ++ if (nr_scanned++ >= limit) { ++ move_freelist_tail(freelist, freepage); ++ break; ++ } ++ + free_pfn = page_to_pfn(freepage); + if (free_pfn < high_pfn) { + /* +@@ -1728,12 +1739,8 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) + * the list assumes an entry is deleted, not + * reordered. 
+ */ +- if (get_pageblock_skip(freepage)) { +- if (list_is_last(freelist, &freepage->lru)) +- break; +- ++ if (get_pageblock_skip(freepage)) + continue; +- } + + /* Reorder to so a future search skips recent pages */ + move_freelist_tail(freelist, freepage); +@@ -1741,15 +1748,10 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) + update_fast_start_pfn(cc, free_pfn); + pfn = pageblock_start_pfn(free_pfn); + cc->fast_search_fail = 0; ++ found_block = true; + set_pageblock_skip(freepage); + break; + } +- +- if (nr_scanned >= limit) { +- cc->fast_search_fail++; +- move_freelist_tail(freelist, freepage); +- break; +- } + } + spin_unlock_irqrestore(&cc->zone->lock, flags); + } +@@ -1760,9 +1762,10 @@ static unsigned long fast_find_migrateblock(struct compact_control *cc) + * If fast scanning failed then use a cached entry for a page block + * that had free pages as the basis for starting a linear scan. + */ +- if (pfn == cc->migrate_pfn) ++ if (!found_block) { ++ cc->fast_search_fail++; + pfn = reinit_migrate_pfn(cc); +- ++ } + return pfn; + } + +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 26909396898b6..2e3b7075e4329 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -1312,14 +1312,16 @@ static inline void destroy_compound_gigantic_page(struct page *page, + static void update_and_free_page(struct hstate *h, struct page *page) + { + int i; ++ struct page *subpage = page; + + if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported()) + return; + + h->nr_huge_pages--; + h->nr_huge_pages_node[page_to_nid(page)]--; +- for (i = 0; i < pages_per_huge_page(h); i++) { +- page[i].flags &= ~(1 << PG_locked | 1 << PG_error | ++ for (i = 0; i < pages_per_huge_page(h); ++ i++, subpage = mem_map_next(subpage, page, i)) { ++ subpage->flags &= ~(1 << PG_locked | 1 << PG_error | + 1 << PG_referenced | 1 << PG_dirty | + 1 << PG_active | 1 << PG_private | + 1 << PG_writeback); +@@ -2517,7 +2519,7 @@ static void __init hugetlb_hstate_alloc_pages(struct 
hstate *h) + if (hstate_is_gigantic(h)) { + if (hugetlb_cma_size) { + pr_warn_once("HugeTLB: hugetlb_cma is enabled, skip boot time allocation\n"); +- break; ++ goto free; + } + if (!alloc_bootmem_huge_page(h)) + break; +@@ -2535,7 +2537,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h) + h->max_huge_pages, buf, i); + h->max_huge_pages = i; + } +- ++free: + kfree(node_alloc_noretry); + } + +@@ -2984,8 +2986,10 @@ static int hugetlb_sysfs_add_hstate(struct hstate *h, struct kobject *parent, + return -ENOMEM; + + retval = sysfs_create_group(hstate_kobjs[hi], hstate_attr_group); +- if (retval) ++ if (retval) { + kobject_put(hstate_kobjs[hi]); ++ hstate_kobjs[hi] = NULL; ++ } + + return retval; + } +diff --git a/mm/khugepaged.c b/mm/khugepaged.c +index 4e3dff13eb70c..abab394c42062 100644 +--- a/mm/khugepaged.c ++++ b/mm/khugepaged.c +@@ -440,18 +440,28 @@ static inline int khugepaged_test_exit(struct mm_struct *mm) + static bool hugepage_vma_check(struct vm_area_struct *vma, + unsigned long vm_flags) + { +- if ((!(vm_flags & VM_HUGEPAGE) && !khugepaged_always()) || +- (vm_flags & VM_NOHUGEPAGE) || ++ /* Explicitly disabled through madvise. */ ++ if ((vm_flags & VM_NOHUGEPAGE) || + test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)) + return false; + +- if (shmem_file(vma->vm_file) || +- (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && +- vma->vm_file && +- (vm_flags & VM_DENYWRITE))) { ++ /* Enabled via shmem mount options or sysfs settings. */ ++ if (shmem_file(vma->vm_file) && shmem_huge_enabled(vma)) { + return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff, + HPAGE_PMD_NR); + } ++ ++ /* THP settings require madvise. */ ++ if (!(vm_flags & VM_HUGEPAGE) && !khugepaged_always()) ++ return false; ++ ++ /* Read-only file mappings need to be aligned for THP to work. 
*/ ++ if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file && ++ (vm_flags & VM_DENYWRITE)) { ++ return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff, ++ HPAGE_PMD_NR); ++ } ++ + if (!vma->anon_vma || vma->vm_ops) + return false; + if (vma_is_temporary_stack(vma)) +diff --git a/mm/memcontrol.c b/mm/memcontrol.c +index a604e69ecfa57..d6966f1ebc7af 100644 +--- a/mm/memcontrol.c ++++ b/mm/memcontrol.c +@@ -1083,13 +1083,9 @@ static __always_inline struct mem_cgroup *get_active_memcg(void) + + rcu_read_lock(); + memcg = active_memcg(); +- if (memcg) { +- /* current->active_memcg must hold a ref. */ +- if (WARN_ON_ONCE(!css_tryget(&memcg->css))) +- memcg = root_mem_cgroup; +- else +- memcg = current->active_memcg; +- } ++ /* remote memcg must hold a ref. */ ++ if (memcg && WARN_ON_ONCE(!css_tryget(&memcg->css))) ++ memcg = root_mem_cgroup; + rcu_read_unlock(); + + return memcg; +@@ -5668,10 +5664,8 @@ static int mem_cgroup_move_account(struct page *page, + __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages); + __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages); + if (PageTransHuge(page)) { +- __mod_lruvec_state(from_vec, NR_ANON_THPS, +- -nr_pages); +- __mod_lruvec_state(to_vec, NR_ANON_THPS, +- nr_pages); ++ __dec_lruvec_state(from_vec, NR_ANON_THPS); ++ __inc_lruvec_state(to_vec, NR_ANON_THPS); + } + + } +@@ -6810,7 +6804,19 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask) + memcg_check_events(memcg, page); + local_irq_enable(); + +- if (PageSwapCache(page)) { ++ /* ++ * Cgroup1's unified memory+swap counter has been charged with the ++ * new swapcache page, finish the transfer by uncharging the swap ++ * slot. The swap slot would also get uncharged when it dies, but ++ * it can stick around indefinitely and we'd count the page twice ++ * the entire time. ++ * ++ * Cgroup2 has separate resource counters for memory and swap, ++ * so this is a non-issue here. 
Memory and swap charge lifetimes ++ * correspond 1:1 to page and swap slot lifetimes: we charge the ++ * page to memory here, and uncharge swap when the slot is freed. ++ */ ++ if (do_memsw_account() && PageSwapCache(page)) { + swp_entry_t entry = { .val = page_private(page) }; + /* + * The swap entry might not get freed for a long time, +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index fd653c9953cfd..570a20b425613 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1237,6 +1237,12 @@ static int memory_failure_dev_pagemap(unsigned long pfn, int flags, + */ + put_page(page); + ++ /* device metadata space is not recoverable */ ++ if (!pgmap_pfn_valid(pgmap, pfn)) { ++ rc = -ENXIO; ++ goto out; ++ } ++ + /* + * Prevent the inode from being freed while we are interrogating + * the address_space, typically this would be handled by +diff --git a/mm/memory.c b/mm/memory.c +index eb5722027160a..827d42f9ebf7c 100644 +--- a/mm/memory.c ++++ b/mm/memory.c +@@ -2165,11 +2165,11 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd, + unsigned long addr, unsigned long end, + unsigned long pfn, pgprot_t prot) + { +- pte_t *pte; ++ pte_t *pte, *mapped_pte; + spinlock_t *ptl; + int err = 0; + +- pte = pte_alloc_map_lock(mm, pmd, addr, &ptl); ++ mapped_pte = pte = pte_alloc_map_lock(mm, pmd, addr, &ptl); + if (!pte) + return -ENOMEM; + arch_enter_lazy_mmu_mode(); +@@ -2183,7 +2183,7 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd, + pfn++; + } while (pte++, addr += PAGE_SIZE, addr != end); + arch_leave_lazy_mmu_mode(); +- pte_unmap_unlock(pte - 1, ptl); ++ pte_unmap_unlock(mapped_pte, ptl); + return err; + } + +@@ -5203,17 +5203,19 @@ long copy_huge_page_from_user(struct page *dst_page, + void *page_kaddr; + unsigned long i, rc = 0; + unsigned long ret_val = pages_per_huge_page * PAGE_SIZE; ++ struct page *subpage = dst_page; + +- for (i = 0; i < pages_per_huge_page; i++) { ++ for (i = 0; i < pages_per_huge_page; ++ i++, subpage = 
mem_map_next(subpage, dst_page, i)) { + if (allow_pagefault) +- page_kaddr = kmap(dst_page + i); ++ page_kaddr = kmap(subpage); + else +- page_kaddr = kmap_atomic(dst_page + i); ++ page_kaddr = kmap_atomic(subpage); + rc = copy_from_user(page_kaddr, + (const void __user *)(src + i * PAGE_SIZE), + PAGE_SIZE); + if (allow_pagefault) +- kunmap(dst_page + i); ++ kunmap(subpage); + else + kunmap_atomic(page_kaddr); + +diff --git a/mm/memremap.c b/mm/memremap.c +index 16b2fb482da11..2455bac895066 100644 +--- a/mm/memremap.c ++++ b/mm/memremap.c +@@ -80,6 +80,21 @@ static unsigned long pfn_first(struct dev_pagemap *pgmap, int range_id) + return pfn + vmem_altmap_offset(pgmap_altmap(pgmap)); + } + ++bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn) ++{ ++ int i; ++ ++ for (i = 0; i < pgmap->nr_range; i++) { ++ struct range *range = &pgmap->ranges[i]; ++ ++ if (pfn >= PHYS_PFN(range->start) && ++ pfn <= PHYS_PFN(range->end)) ++ return pfn >= pfn_first(pgmap, i); ++ } ++ ++ return false; ++} ++ + static unsigned long pfn_end(struct dev_pagemap *pgmap, int range_id) + { + const struct range *range = &pgmap->ranges[range_id]; +diff --git a/mm/slab_common.c b/mm/slab_common.c +index f9ccd5dc13f32..8d96679668b4e 100644 +--- a/mm/slab_common.c ++++ b/mm/slab_common.c +@@ -836,8 +836,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order) + page = alloc_pages(flags, order); + if (likely(page)) { + ret = page_address(page); +- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B, +- PAGE_SIZE << order); ++ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, ++ PAGE_SIZE << order); + } + ret = kasan_kmalloc_large(ret, size, flags); + /* As ret might get tagged, call kmemleak hook after KASAN. 
*/ +diff --git a/mm/slub.c b/mm/slub.c +index 071e41067ea67..7b378e2ce270d 100644 +--- a/mm/slub.c ++++ b/mm/slub.c +@@ -3984,8 +3984,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node) + page = alloc_pages_node(node, flags, order); + if (page) { + ptr = page_address(page); +- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B, +- PAGE_SIZE << order); ++ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, ++ PAGE_SIZE << order); + } + + return kmalloc_large_node_hook(ptr, size, flags); +@@ -4116,8 +4116,8 @@ void kfree(const void *x) + + BUG_ON(!PageCompound(page)); + kfree_hook(object); +- mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B, +- -(PAGE_SIZE << order)); ++ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, ++ -(PAGE_SIZE << order)); + __free_pages(page, order); + return; + } +diff --git a/mm/vmscan.c b/mm/vmscan.c +index 4c5a9b2286bf5..67d38334052ef 100644 +--- a/mm/vmscan.c ++++ b/mm/vmscan.c +@@ -4084,8 +4084,13 @@ module_init(kswapd_init) + */ + int node_reclaim_mode __read_mostly; + +-#define RECLAIM_WRITE (1<<0) /* Writeout pages during reclaim */ +-#define RECLAIM_UNMAP (1<<1) /* Unmap pages during reclaim */ ++/* ++ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl ++ * ABI. New bits are OK, but existing bits can never change. ++ */ ++#define RECLAIM_ZONE (1<<0) /* Run shrink_inactive_list on the zone */ ++#define RECLAIM_WRITE (1<<1) /* Writeout pages during reclaim */ ++#define RECLAIM_UNMAP (1<<2) /* Unmap pages during reclaim */ + + /* + * Priority for NODE_RECLAIM. 
This determines the fraction of pages +diff --git a/net/bluetooth/a2mp.c b/net/bluetooth/a2mp.c +index da7fd7c8c2dc0..463bad58478b2 100644 +--- a/net/bluetooth/a2mp.c ++++ b/net/bluetooth/a2mp.c +@@ -381,9 +381,9 @@ static int a2mp_getampassoc_req(struct amp_mgr *mgr, struct sk_buff *skb, + hdev = hci_dev_get(req->id); + if (!hdev || hdev->amp_type == AMP_TYPE_BREDR || tmp) { + struct a2mp_amp_assoc_rsp rsp; +- rsp.id = req->id; + + memset(&rsp, 0, sizeof(rsp)); ++ rsp.id = req->id; + + if (tmp) { + rsp.status = A2MP_STATUS_COLLISION_OCCURED; +@@ -512,6 +512,7 @@ static int a2mp_createphyslink_req(struct amp_mgr *mgr, struct sk_buff *skb, + assoc = kmemdup(req->amp_assoc, assoc_len, GFP_KERNEL); + if (!assoc) { + amp_ctrl_put(ctrl); ++ hci_dev_put(hdev); + return -ENOMEM; + } + +diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c +index c4aa2cbb92697..555058270f112 100644 +--- a/net/bluetooth/hci_core.c ++++ b/net/bluetooth/hci_core.c +@@ -1356,8 +1356,10 @@ int hci_inquiry(void __user *arg) + * cleared). If it is interrupted by a signal, return -EINTR. 
+ */ + if (wait_on_bit(&hdev->flags, HCI_INQUIRY, +- TASK_INTERRUPTIBLE)) +- return -EINTR; ++ TASK_INTERRUPTIBLE)) { ++ err = -EINTR; ++ goto done; ++ } + } + + /* for unlimited number of responses we will use buffer with +diff --git a/net/core/filter.c b/net/core/filter.c +index 2ca5eecebacfa..f0a19a48c0481 100644 +--- a/net/core/filter.c ++++ b/net/core/filter.c +@@ -5549,6 +5549,7 @@ BPF_CALL_4(bpf_skb_fib_lookup, struct sk_buff *, skb, + { + struct net *net = dev_net(skb->dev); + int rc = -EAFNOSUPPORT; ++ bool check_mtu = false; + + if (plen < sizeof(*params)) + return -EINVAL; +@@ -5556,22 +5557,28 @@ BPF_CALL_4(bpf_skb_fib_lookup, struct sk_buff *, skb, + if (flags & ~(BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_OUTPUT)) + return -EINVAL; + ++ if (params->tot_len) ++ check_mtu = true; ++ + switch (params->family) { + #if IS_ENABLED(CONFIG_INET) + case AF_INET: +- rc = bpf_ipv4_fib_lookup(net, params, flags, false); ++ rc = bpf_ipv4_fib_lookup(net, params, flags, check_mtu); + break; + #endif + #if IS_ENABLED(CONFIG_IPV6) + case AF_INET6: +- rc = bpf_ipv6_fib_lookup(net, params, flags, false); ++ rc = bpf_ipv6_fib_lookup(net, params, flags, check_mtu); + break; + #endif + } + +- if (!rc) { ++ if (rc == BPF_FIB_LKUP_RET_SUCCESS && !check_mtu) { + struct net_device *dev; + ++ /* When tot_len isn't provided by user, check skb ++ * against MTU of FIB lookup resulting net_device ++ */ + dev = dev_get_by_index_rcu(net, params->ifindex); + if (!is_skb_forwardable(dev, skb)) + rc = BPF_FIB_LKUP_RET_FRAG_NEEDED; +diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c +index 005faea415a48..ff3818333fcfb 100644 +--- a/net/ipv4/icmp.c ++++ b/net/ipv4/icmp.c +@@ -775,13 +775,14 @@ EXPORT_SYMBOL(__icmp_send); + void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info) + { + struct sk_buff *cloned_skb = NULL; ++ struct ip_options opts = { 0 }; + enum ip_conntrack_info ctinfo; + struct nf_conn *ct; + __be32 orig_ip; + + ct = nf_ct_get(skb_in, &ctinfo); + if (!ct || 
!(ct->status & IPS_SRC_NAT)) { +- icmp_send(skb_in, type, code, info); ++ __icmp_send(skb_in, type, code, info, &opts); + return; + } + +@@ -796,7 +797,7 @@ void icmp_ndo_send(struct sk_buff *skb_in, int type, int code, __be32 info) + + orig_ip = ip_hdr(skb_in)->saddr; + ip_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.ip; +- icmp_send(skb_in, type, code, info); ++ __icmp_send(skb_in, type, code, info, &opts); + ip_hdr(skb_in)->saddr = orig_ip; + out: + consume_skb(cloned_skb); +diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c +index 8956144ea65e8..cbab41d557b20 100644 +--- a/net/ipv6/icmp.c ++++ b/net/ipv6/icmp.c +@@ -331,10 +331,9 @@ static int icmpv6_getfrag(void *from, char *to, int offset, int len, int odd, st + } + + #if IS_ENABLED(CONFIG_IPV6_MIP6) +-static void mip6_addr_swap(struct sk_buff *skb) ++static void mip6_addr_swap(struct sk_buff *skb, const struct inet6_skb_parm *opt) + { + struct ipv6hdr *iph = ipv6_hdr(skb); +- struct inet6_skb_parm *opt = IP6CB(skb); + struct ipv6_destopt_hao *hao; + struct in6_addr tmp; + int off; +@@ -351,7 +350,7 @@ static void mip6_addr_swap(struct sk_buff *skb) + } + } + #else +-static inline void mip6_addr_swap(struct sk_buff *skb) {} ++static inline void mip6_addr_swap(struct sk_buff *skb, const struct inet6_skb_parm *opt) {} + #endif + + static struct dst_entry *icmpv6_route_lookup(struct net *net, +@@ -446,7 +445,8 @@ static int icmp6_iif(const struct sk_buff *skb) + * Send an ICMP message in response to a packet in error + */ + void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, +- const struct in6_addr *force_saddr) ++ const struct in6_addr *force_saddr, ++ const struct inet6_skb_parm *parm) + { + struct inet6_dev *idev = NULL; + struct ipv6hdr *hdr = ipv6_hdr(skb); +@@ -542,7 +542,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, + if (!(skb->dev->flags & IFF_LOOPBACK) && !icmpv6_global_allow(net, type)) + goto out_bh_enable; + +- mip6_addr_swap(skb); ++ mip6_addr_swap(skb, 
parm); + + sk = icmpv6_xmit_lock(net); + if (!sk) +@@ -559,7 +559,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, + /* select a more meaningful saddr from input if */ + struct net_device *in_netdev; + +- in_netdev = dev_get_by_index(net, IP6CB(skb)->iif); ++ in_netdev = dev_get_by_index(net, parm->iif); + if (in_netdev) { + ipv6_dev_get_saddr(net, in_netdev, &fl6.daddr, + inet6_sk(sk)->srcprefs, +@@ -640,7 +640,7 @@ EXPORT_SYMBOL(icmp6_send); + */ + void icmpv6_param_prob(struct sk_buff *skb, u8 code, int pos) + { +- icmp6_send(skb, ICMPV6_PARAMPROB, code, pos, NULL); ++ icmp6_send(skb, ICMPV6_PARAMPROB, code, pos, NULL, IP6CB(skb)); + kfree_skb(skb); + } + +@@ -697,10 +697,10 @@ int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type, + } + if (type == ICMP_TIME_EXCEEDED) + icmp6_send(skb2, ICMPV6_TIME_EXCEED, ICMPV6_EXC_HOPLIMIT, +- info, &temp_saddr); ++ info, &temp_saddr, IP6CB(skb2)); + else + icmp6_send(skb2, ICMPV6_DEST_UNREACH, ICMPV6_ADDR_UNREACH, +- info, &temp_saddr); ++ info, &temp_saddr, IP6CB(skb2)); + if (rt) + ip6_rt_put(rt); + +diff --git a/net/ipv6/ip6_icmp.c b/net/ipv6/ip6_icmp.c +index 70c8c2f36c980..9e3574880cb03 100644 +--- a/net/ipv6/ip6_icmp.c ++++ b/net/ipv6/ip6_icmp.c +@@ -33,23 +33,25 @@ int inet6_unregister_icmp_sender(ip6_icmp_send_t *fn) + } + EXPORT_SYMBOL(inet6_unregister_icmp_sender); + +-void icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info) ++void __icmpv6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info, ++ const struct inet6_skb_parm *parm) + { + ip6_icmp_send_t *send; + + rcu_read_lock(); + send = rcu_dereference(ip6_icmp_send); + if (send) +- send(skb, type, code, info, NULL); ++ send(skb, type, code, info, NULL, parm); + rcu_read_unlock(); + } +-EXPORT_SYMBOL(icmpv6_send); ++EXPORT_SYMBOL(__icmpv6_send); + #endif + + #if IS_ENABLED(CONFIG_NF_NAT) + #include + void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info) + { ++ struct inet6_skb_parm parm = { 
0 }; + struct sk_buff *cloned_skb = NULL; + enum ip_conntrack_info ctinfo; + struct in6_addr orig_ip; +@@ -57,7 +59,7 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info) + + ct = nf_ct_get(skb_in, &ctinfo); + if (!ct || !(ct->status & IPS_SRC_NAT)) { +- icmpv6_send(skb_in, type, code, info); ++ __icmpv6_send(skb_in, type, code, info, &parm); + return; + } + +@@ -72,7 +74,7 @@ void icmpv6_ndo_send(struct sk_buff *skb_in, u8 type, u8 code, __u32 info) + + orig_ip = ipv6_hdr(skb_in)->saddr; + ipv6_hdr(skb_in)->saddr = ct->tuplehash[0].tuple.src.u3.in6; +- icmpv6_send(skb_in, type, code, info); ++ __icmpv6_send(skb_in, type, code, info, &parm); + ipv6_hdr(skb_in)->saddr = orig_ip; + out: + consume_skb(cloned_skb); +diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c +index 313eee12410ec..3db514c4c63ab 100644 +--- a/net/mac80211/mesh_hwmp.c ++++ b/net/mac80211/mesh_hwmp.c +@@ -356,7 +356,7 @@ u32 airtime_link_metric_get(struct ieee80211_local *local, + */ + tx_time = (device_constant + 10 * test_frame_len / rate); + estimated_retx = ((1 << (2 * ARITH_SHIFT)) / (s_unit - err)); +- result = (tx_time * estimated_retx) >> (2 * ARITH_SHIFT); ++ result = ((u64)tx_time * estimated_retx) >> (2 * ARITH_SHIFT); + return (u32)result; + } + +diff --git a/net/nfc/nci/uart.c b/net/nfc/nci/uart.c +index 11b554ce07ffc..1204c438e87dc 100644 +--- a/net/nfc/nci/uart.c ++++ b/net/nfc/nci/uart.c +@@ -292,7 +292,8 @@ static int nci_uart_tty_ioctl(struct tty_struct *tty, struct file *file, + + /* We don't provide read/write/poll interface for user space. 
*/ + static ssize_t nci_uart_tty_read(struct tty_struct *tty, struct file *file, +- unsigned char __user *buf, size_t nr) ++ unsigned char *buf, size_t nr, ++ void **cookie, unsigned long offset) + { + return 0; + } +diff --git a/net/qrtr/tun.c b/net/qrtr/tun.c +index b238c40a99842..304b41fea5ab0 100644 +--- a/net/qrtr/tun.c ++++ b/net/qrtr/tun.c +@@ -31,6 +31,7 @@ static int qrtr_tun_send(struct qrtr_endpoint *ep, struct sk_buff *skb) + static int qrtr_tun_open(struct inode *inode, struct file *filp) + { + struct qrtr_tun *tun; ++ int ret; + + tun = kzalloc(sizeof(*tun), GFP_KERNEL); + if (!tun) +@@ -43,7 +44,16 @@ static int qrtr_tun_open(struct inode *inode, struct file *filp) + + filp->private_data = tun; + +- return qrtr_endpoint_register(&tun->ep, QRTR_EP_NID_AUTO); ++ ret = qrtr_endpoint_register(&tun->ep, QRTR_EP_NID_AUTO); ++ if (ret) ++ goto out; ++ ++ return 0; ++ ++out: ++ filp->private_data = NULL; ++ kfree(tun); ++ return ret; + } + + static ssize_t qrtr_tun_read_iter(struct kiocb *iocb, struct iov_iter *to) +diff --git a/net/sched/act_api.c b/net/sched/act_api.c +index f66417d5d2c31..181c4b501225f 100644 +--- a/net/sched/act_api.c ++++ b/net/sched/act_api.c +@@ -888,7 +888,7 @@ static const struct nla_policy tcf_action_policy[TCA_ACT_MAX + 1] = { + [TCA_ACT_HW_STATS] = NLA_POLICY_BITFIELD32(TCA_ACT_HW_STATS_ANY), + }; + +-static void tcf_idr_insert_many(struct tc_action *actions[]) ++void tcf_idr_insert_many(struct tc_action *actions[]) + { + int i; + +@@ -908,19 +908,13 @@ static void tcf_idr_insert_many(struct tc_action *actions[]) + } + } + +-struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, +- struct nlattr *nla, struct nlattr *est, +- char *name, int ovr, int bind, +- bool rtnl_held, +- struct netlink_ext_ack *extack) ++struct tc_action_ops *tc_action_load_ops(char *name, struct nlattr *nla, ++ bool rtnl_held, ++ struct netlink_ext_ack *extack) + { +- struct nla_bitfield32 flags = { 0, 0 }; +- u8 hw_stats = 
TCA_ACT_HW_STATS_ANY; +- struct tc_action *a; ++ struct nlattr *tb[TCA_ACT_MAX + 1]; + struct tc_action_ops *a_o; +- struct tc_cookie *cookie = NULL; + char act_name[IFNAMSIZ]; +- struct nlattr *tb[TCA_ACT_MAX + 1]; + struct nlattr *kind; + int err; + +@@ -928,33 +922,21 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, + err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla, + tcf_action_policy, extack); + if (err < 0) +- goto err_out; ++ return ERR_PTR(err); + err = -EINVAL; + kind = tb[TCA_ACT_KIND]; + if (!kind) { + NL_SET_ERR_MSG(extack, "TC action kind must be specified"); +- goto err_out; ++ return ERR_PTR(err); + } + if (nla_strlcpy(act_name, kind, IFNAMSIZ) >= IFNAMSIZ) { + NL_SET_ERR_MSG(extack, "TC action name too long"); +- goto err_out; +- } +- if (tb[TCA_ACT_COOKIE]) { +- cookie = nla_memdup_cookie(tb); +- if (!cookie) { +- NL_SET_ERR_MSG(extack, "No memory to generate TC cookie"); +- err = -ENOMEM; +- goto err_out; +- } ++ return ERR_PTR(err); + } +- hw_stats = tcf_action_hw_stats_get(tb[TCA_ACT_HW_STATS]); +- if (tb[TCA_ACT_FLAGS]) +- flags = nla_get_bitfield32(tb[TCA_ACT_FLAGS]); + } else { + if (strlcpy(act_name, name, IFNAMSIZ) >= IFNAMSIZ) { + NL_SET_ERR_MSG(extack, "TC action name too long"); +- err = -EINVAL; +- goto err_out; ++ return ERR_PTR(-EINVAL); + } + } + +@@ -976,24 +958,56 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, + * indicate this using -EAGAIN. 
+ */ + if (a_o != NULL) { +- err = -EAGAIN; +- goto err_mod; ++ module_put(a_o->owner); ++ return ERR_PTR(-EAGAIN); + } + #endif + NL_SET_ERR_MSG(extack, "Failed to load TC action module"); +- err = -ENOENT; +- goto err_free; ++ return ERR_PTR(-ENOENT); + } + ++ return a_o; ++} ++ ++struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, ++ struct nlattr *nla, struct nlattr *est, ++ char *name, int ovr, int bind, ++ struct tc_action_ops *a_o, bool rtnl_held, ++ struct netlink_ext_ack *extack) ++{ ++ struct nla_bitfield32 flags = { 0, 0 }; ++ u8 hw_stats = TCA_ACT_HW_STATS_ANY; ++ struct nlattr *tb[TCA_ACT_MAX + 1]; ++ struct tc_cookie *cookie = NULL; ++ struct tc_action *a; ++ int err; ++ + /* backward compatibility for policer */ +- if (name == NULL) ++ if (name == NULL) { ++ err = nla_parse_nested_deprecated(tb, TCA_ACT_MAX, nla, ++ tcf_action_policy, extack); ++ if (err < 0) ++ return ERR_PTR(err); ++ if (tb[TCA_ACT_COOKIE]) { ++ cookie = nla_memdup_cookie(tb); ++ if (!cookie) { ++ NL_SET_ERR_MSG(extack, "No memory to generate TC cookie"); ++ err = -ENOMEM; ++ goto err_out; ++ } ++ } ++ hw_stats = tcf_action_hw_stats_get(tb[TCA_ACT_HW_STATS]); ++ if (tb[TCA_ACT_FLAGS]) ++ flags = nla_get_bitfield32(tb[TCA_ACT_FLAGS]); ++ + err = a_o->init(net, tb[TCA_ACT_OPTIONS], est, &a, ovr, bind, + rtnl_held, tp, flags.value, extack); +- else ++ } else { + err = a_o->init(net, nla, est, &a, ovr, bind, rtnl_held, + tp, flags.value, extack); ++ } + if (err < 0) +- goto err_mod; ++ goto err_out; + + if (!name && tb[TCA_ACT_COOKIE]) + tcf_set_action_cookie(&a->act_cookie, cookie); +@@ -1010,14 +1024,11 @@ struct tc_action *tcf_action_init_1(struct net *net, struct tcf_proto *tp, + + return a; + +-err_mod: +- module_put(a_o->owner); +-err_free: ++err_out: + if (cookie) { + kfree(cookie->data); + kfree(cookie); + } +-err_out: + return ERR_PTR(err); + } + +@@ -1028,6 +1039,7 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, + 
struct tc_action *actions[], size_t *attr_size, + bool rtnl_held, struct netlink_ext_ack *extack) + { ++ struct tc_action_ops *ops[TCA_ACT_MAX_PRIO] = {}; + struct nlattr *tb[TCA_ACT_MAX_PRIO + 1]; + struct tc_action *act; + size_t sz = 0; +@@ -1039,9 +1051,20 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, + if (err < 0) + return err; + ++ for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) { ++ struct tc_action_ops *a_o; ++ ++ a_o = tc_action_load_ops(name, tb[i], rtnl_held, extack); ++ if (IS_ERR(a_o)) { ++ err = PTR_ERR(a_o); ++ goto err_mod; ++ } ++ ops[i - 1] = a_o; ++ } ++ + for (i = 1; i <= TCA_ACT_MAX_PRIO && tb[i]; i++) { + act = tcf_action_init_1(net, tp, tb[i], est, name, ovr, bind, +- rtnl_held, extack); ++ ops[i - 1], rtnl_held, extack); + if (IS_ERR(act)) { + err = PTR_ERR(act); + goto err; +@@ -1061,6 +1084,11 @@ int tcf_action_init(struct net *net, struct tcf_proto *tp, struct nlattr *nla, + + err: + tcf_action_destroy(actions, bind); ++err_mod: ++ for (i = 0; i < TCA_ACT_MAX_PRIO; i++) { ++ if (ops[i]) ++ module_put(ops[i]->owner); ++ } + return err; + } + +diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c +index 838b3fd94d776..b2b7834c6cf8a 100644 +--- a/net/sched/cls_api.c ++++ b/net/sched/cls_api.c +@@ -3055,16 +3055,24 @@ int tcf_exts_validate(struct net *net, struct tcf_proto *tp, struct nlattr **tb, + size_t attr_size = 0; + + if (exts->police && tb[exts->police]) { ++ struct tc_action_ops *a_o; ++ ++ a_o = tc_action_load_ops("police", tb[exts->police], rtnl_held, extack); ++ if (IS_ERR(a_o)) ++ return PTR_ERR(a_o); + act = tcf_action_init_1(net, tp, tb[exts->police], + rate_tlv, "police", ovr, +- TCA_ACT_BIND, rtnl_held, ++ TCA_ACT_BIND, a_o, rtnl_held, + extack); +- if (IS_ERR(act)) ++ if (IS_ERR(act)) { ++ module_put(a_o->owner); + return PTR_ERR(act); ++ } + + act->type = exts->type = TCA_OLD_COMPAT; + exts->actions[0] = act; + exts->nr_actions = 1; ++ tcf_idr_insert_many(exts->actions); + } else if 
(exts->action && tb[exts->action]) { + int err; + +diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c +index fb044792b571c..5f7e3d12523fe 100644 +--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c ++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c +@@ -475,9 +475,6 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt) + if (!svc_rdma_post_recvs(newxprt)) + goto errout; + +- /* Swap out the handler */ +- newxprt->sc_cm_id->event_handler = svc_rdma_cma_handler; +- + /* Construct RDMA-CM private message */ + pmsg.cp_magic = rpcrdma_cmp_magic; + pmsg.cp_version = RPCRDMA_CMP_VERSION; +@@ -498,7 +495,10 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt) + } + conn_param.private_data = &pmsg; + conn_param.private_data_len = sizeof(pmsg); ++ rdma_lock_handler(newxprt->sc_cm_id); ++ newxprt->sc_cm_id->event_handler = svc_rdma_cma_handler; + ret = rdma_accept(newxprt->sc_cm_id, &conn_param); ++ rdma_unlock_handler(newxprt->sc_cm_id); + if (ret) { + trace_svcrdma_accept_err(newxprt, ret); + goto errout; +diff --git a/samples/Kconfig b/samples/Kconfig +index 0ed6e4d71d87b..e76cdfc50e257 100644 +--- a/samples/Kconfig ++++ b/samples/Kconfig +@@ -210,7 +210,7 @@ config SAMPLE_WATCHDOG + depends on CC_CAN_LINK + + config SAMPLE_WATCH_QUEUE +- bool "Build example /dev/watch_queue notification consumer" ++ bool "Build example watch_queue notification API consumer" + depends on CC_CAN_LINK && HEADERS_INSTALL + help + Build example userspace program to use the new mount_notify(), +diff --git a/samples/watch_queue/watch_test.c b/samples/watch_queue/watch_test.c +index 46e618a897fef..8c6cb57d5cfc5 100644 +--- a/samples/watch_queue/watch_test.c ++++ b/samples/watch_queue/watch_test.c +@@ -1,5 +1,5 @@ + // SPDX-License-Identifier: GPL-2.0 +-/* Use /dev/watch_queue to watch for notifications. ++/* Use watch_queue API to watch for notifications. + * + * Copyright (C) 2020 Red Hat, Inc. All Rights Reserved. 
+ * Written by David Howells (dhowells@redhat.com) +diff --git a/security/commoncap.c b/security/commoncap.c +index a6c9bb4441d54..b2a656947504d 100644 +--- a/security/commoncap.c ++++ b/security/commoncap.c +@@ -500,7 +500,8 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size) + __u32 magic, nsmagic; + struct inode *inode = d_backing_inode(dentry); + struct user_namespace *task_ns = current_user_ns(), +- *fs_ns = inode->i_sb->s_user_ns; ++ *fs_ns = inode->i_sb->s_user_ns, ++ *ancestor; + kuid_t rootid; + size_t newsize; + +@@ -523,6 +524,15 @@ int cap_convert_nscap(struct dentry *dentry, void **ivalue, size_t size) + if (nsrootid == -1) + return -EINVAL; + ++ /* ++ * Do not allow allow adding a v3 filesystem capability xattr ++ * if the rootid field is ambiguous. ++ */ ++ for (ancestor = task_ns->parent; ancestor; ancestor = ancestor->parent) { ++ if (from_kuid(ancestor, rootid) == 0) ++ return -EINVAL; ++ } ++ + newsize = sizeof(struct vfs_ns_cap_data); + nscap = kmalloc(newsize, GFP_ATOMIC); + if (!nscap) +diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c +index 168c3b78ac47b..a6dd47eb086da 100644 +--- a/security/integrity/evm/evm_crypto.c ++++ b/security/integrity/evm/evm_crypto.c +@@ -73,7 +73,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo) + { + long rc; + const char *algo; +- struct crypto_shash **tfm, *tmp_tfm; ++ struct crypto_shash **tfm, *tmp_tfm = NULL; + struct shash_desc *desc; + + if (type == EVM_XATTR_HMAC) { +@@ -118,13 +118,16 @@ unlock: + alloc: + desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(*tfm), + GFP_KERNEL); +- if (!desc) ++ if (!desc) { ++ crypto_free_shash(tmp_tfm); + return ERR_PTR(-ENOMEM); ++ } + + desc->tfm = *tfm; + + rc = crypto_shash_init(desc); + if (rc) { ++ crypto_free_shash(tmp_tfm); + kfree(desc); + return ERR_PTR(rc); + } +diff --git a/security/integrity/ima/ima_kexec.c b/security/integrity/ima/ima_kexec.c +index 
121de3e04af23..e29bea3dd4ccd 100644 +--- a/security/integrity/ima/ima_kexec.c ++++ b/security/integrity/ima/ima_kexec.c +@@ -119,6 +119,7 @@ void ima_add_kexec_buffer(struct kimage *image) + ret = kexec_add_buffer(&kbuf); + if (ret) { + pr_err("Error passing over kexec measurement buffer.\n"); ++ vfree(kexec_buffer); + return; + } + +@@ -128,6 +129,8 @@ void ima_add_kexec_buffer(struct kimage *image) + return; + } + ++ image->ima_buffer = kexec_buffer; ++ + pr_debug("kexec measurement buffer for the loaded kernel at 0x%lx.\n", + kbuf.mem); + } +diff --git a/security/integrity/ima/ima_mok.c b/security/integrity/ima/ima_mok.c +index 36cadadbfba47..1e5c019161738 100644 +--- a/security/integrity/ima/ima_mok.c ++++ b/security/integrity/ima/ima_mok.c +@@ -38,13 +38,12 @@ __init int ima_mok_init(void) + (KEY_POS_ALL & ~KEY_POS_SETATTR) | + KEY_USR_VIEW | KEY_USR_READ | + KEY_USR_WRITE | KEY_USR_SEARCH, +- KEY_ALLOC_NOT_IN_QUOTA, ++ KEY_ALLOC_NOT_IN_QUOTA | ++ KEY_ALLOC_SET_KEEP, + restriction, NULL); + + if (IS_ERR(ima_blacklist_keyring)) + panic("Can't allocate IMA blacklist keyring."); +- +- set_bit(KEY_FLAG_KEEP, &ima_blacklist_keyring->flags); + return 0; + } + device_initcall(ima_mok_init); +diff --git a/security/keys/Kconfig b/security/keys/Kconfig +index 83bc23409164a..c161642a84841 100644 +--- a/security/keys/Kconfig ++++ b/security/keys/Kconfig +@@ -119,7 +119,7 @@ config KEY_NOTIFICATIONS + bool "Provide key/keyring change notifications" + depends on KEYS && WATCH_QUEUE + help +- This option provides support for getting change notifications on keys +- and keyrings on which the caller has View permission. This makes use +- of the /dev/watch_queue misc device to handle the notification +- buffer and provides KEYCTL_WATCH_KEY to enable/disable watches. ++ This option provides support for getting change notifications ++ on keys and keyrings on which the caller has View permission. 
++ This makes use of pipes to handle the notification buffer and ++ provides KEYCTL_WATCH_KEY to enable/disable watches. +diff --git a/security/keys/key.c b/security/keys/key.c +index e282c6179b21d..151ff39b68030 100644 +--- a/security/keys/key.c ++++ b/security/keys/key.c +@@ -303,6 +303,8 @@ struct key *key_alloc(struct key_type *type, const char *desc, + key->flags |= 1 << KEY_FLAG_BUILTIN; + if (flags & KEY_ALLOC_UID_KEYRING) + key->flags |= 1 << KEY_FLAG_UID_KEYRING; ++ if (flags & KEY_ALLOC_SET_KEEP) ++ key->flags |= 1 << KEY_FLAG_KEEP; + + #ifdef KEY_DEBUGGING + key->magic = KEY_DEBUG_MAGIC; +diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c +index b9fe02e5f84f0..7a937c3c52834 100644 +--- a/security/keys/trusted-keys/trusted_tpm1.c ++++ b/security/keys/trusted-keys/trusted_tpm1.c +@@ -403,9 +403,12 @@ static int osap(struct tpm_buf *tb, struct osapsess *s, + int ret; + + ret = tpm_get_random(chip, ononce, TPM_NONCE_SIZE); +- if (ret != TPM_NONCE_SIZE) ++ if (ret < 0) + return ret; + ++ if (ret != TPM_NONCE_SIZE) ++ return -EIO; ++ + tpm_buf_reset(tb, TPM_TAG_RQU_COMMAND, TPM_ORD_OSAP); + tpm_buf_append_u16(tb, type); + tpm_buf_append_u32(tb, handle); +@@ -496,8 +499,12 @@ static int tpm_seal(struct tpm_buf *tb, uint16_t keytype, + goto out; + + ret = tpm_get_random(chip, td->nonceodd, TPM_NONCE_SIZE); ++ if (ret < 0) ++ return ret; ++ + if (ret != TPM_NONCE_SIZE) +- goto out; ++ return -EIO; ++ + ordinal = htonl(TPM_ORD_SEAL); + datsize = htonl(datalen); + pcrsize = htonl(pcrinfosize); +@@ -601,9 +608,12 @@ static int tpm_unseal(struct tpm_buf *tb, + + ordinal = htonl(TPM_ORD_UNSEAL); + ret = tpm_get_random(chip, nonceodd, TPM_NONCE_SIZE); ++ if (ret < 0) ++ return ret; ++ + if (ret != TPM_NONCE_SIZE) { + pr_info("trusted_key: tpm_get_random failed (%d)\n", ret); +- return ret; ++ return -EIO; + } + ret = TSS_authhmac(authdata1, keyauth, TPM_NONCE_SIZE, + enonce1, nonceodd, cont, sizeof(uint32_t), +@@ -791,7 
+801,7 @@ static int getoptions(char *c, struct trusted_key_payload *pay, + case Opt_migratable: + if (*args[0].from == '0') + pay->migratable = 0; +- else ++ else if (*args[0].from != '1') + return -EINVAL; + break; + case Opt_pcrlock: +@@ -1013,8 +1023,12 @@ static int trusted_instantiate(struct key *key, + case Opt_new: + key_len = payload->key_len; + ret = tpm_get_random(chip, payload->key, key_len); ++ if (ret < 0) ++ goto out; ++ + if (ret != key_len) { + pr_info("trusted_key: key_create failed (%d)\n", ret); ++ ret = -EIO; + goto out; + } + if (tpm2) +diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c +index 08ec7f48f01d0..e2a0ed5d02f01 100644 +--- a/security/keys/trusted-keys/trusted_tpm2.c ++++ b/security/keys/trusted-keys/trusted_tpm2.c +@@ -83,6 +83,12 @@ int tpm2_seal_trusted(struct tpm_chip *chip, + if (rc) + return rc; + ++ rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_CREATE); ++ if (rc) { ++ tpm_put_ops(chip); ++ return rc; ++ } ++ + tpm_buf_append_u32(&buf, options->keyhandle); + tpm2_buf_append_auth(&buf, TPM2_RS_PW, + NULL /* nonce */, 0, +@@ -130,7 +136,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip, + goto out; + } + +- rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); ++ rc = tpm_transmit_cmd(chip, &buf, 4, "sealing data"); + if (rc) + goto out; + +@@ -157,6 +163,7 @@ out: + rc = -EPERM; + } + ++ tpm_put_ops(chip); + return rc; + } + +@@ -211,7 +218,7 @@ static int tpm2_load_cmd(struct tpm_chip *chip, + goto out; + } + +- rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); ++ rc = tpm_transmit_cmd(chip, &buf, 4, "loading blob"); + if (!rc) + *blob_handle = be32_to_cpup( + (__be32 *) &buf.data[TPM_HEADER_SIZE]); +@@ -260,7 +267,7 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip, + options->blobauth /* hmac */, + TPM_DIGEST_SIZE); + +- rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); ++ rc = tpm_transmit_cmd(chip, &buf, 6, "unsealing"); + if (rc > 0) + rc = -EPERM; + +@@ -304,12 
+311,19 @@ int tpm2_unseal_trusted(struct tpm_chip *chip, + u32 blob_handle; + int rc; + +- rc = tpm2_load_cmd(chip, payload, options, &blob_handle); ++ rc = tpm_try_get_ops(chip); + if (rc) + return rc; + ++ rc = tpm2_load_cmd(chip, payload, options, &blob_handle); ++ if (rc) ++ goto out; ++ + rc = tpm2_unseal_cmd(chip, payload, options, blob_handle); + tpm2_flush_context(chip, blob_handle); + ++out: ++ tpm_put_ops(chip); ++ + return rc; + } +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c +index c46312710e73e..227eb89679637 100644 +--- a/security/selinux/hooks.c ++++ b/security/selinux/hooks.c +@@ -3414,6 +3414,10 @@ static int selinux_inode_setsecurity(struct inode *inode, const char *name, + static int selinux_inode_listsecurity(struct inode *inode, char *buffer, size_t buffer_size) + { + const int len = sizeof(XATTR_NAME_SELINUX); ++ ++ if (!selinux_initialized(&selinux_state)) ++ return 0; ++ + if (buffer && len <= buffer_size) + memcpy(buffer, XATTR_NAME_SELINUX, len); + return len; +diff --git a/sound/core/init.c b/sound/core/init.c +index 764dbe673d488..018ce4ef12ec8 100644 +--- a/sound/core/init.c ++++ b/sound/core/init.c +@@ -14,6 +14,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -418,6 +419,9 @@ int snd_card_disconnect(struct snd_card *card) + /* notify all devices that we are disconnected */ + snd_device_disconnect_all(card); + ++ if (card->sync_irq > 0) ++ synchronize_irq(card->sync_irq); ++ + snd_info_card_disconnect(card); + if (card->registered) { + device_del(&card->card_dev); +diff --git a/sound/core/pcm.c b/sound/core/pcm.c +index be5714f1bb58c..41cbdac5b1cfa 100644 +--- a/sound/core/pcm.c ++++ b/sound/core/pcm.c +@@ -1111,6 +1111,10 @@ static int snd_pcm_dev_disconnect(struct snd_device *device) + } + } + ++ for (cidx = 0; cidx < 2; cidx++) ++ for (substream = pcm->streams[cidx].substream; substream; substream = substream->next) ++ snd_pcm_sync_stop(substream, false); ++ + pcm_call_notify(pcm, 
n_disconnect); + for (cidx = 0; cidx < 2; cidx++) { + snd_unregister_device(&pcm->streams[cidx].dev); +diff --git a/sound/core/pcm_local.h b/sound/core/pcm_local.h +index 17a1a5d870980..b3e8be5aeafb3 100644 +--- a/sound/core/pcm_local.h ++++ b/sound/core/pcm_local.h +@@ -63,6 +63,7 @@ static inline void snd_pcm_timer_done(struct snd_pcm_substream *substream) {} + + void __snd_pcm_xrun(struct snd_pcm_substream *substream); + void snd_pcm_group_init(struct snd_pcm_group *group); ++void snd_pcm_sync_stop(struct snd_pcm_substream *substream, bool sync_irq); + + #ifdef CONFIG_SND_DMA_SGBUF + struct page *snd_pcm_sgbuf_ops_page(struct snd_pcm_substream *substream, +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c +index 9f3f8e953ff04..50c14cc861e69 100644 +--- a/sound/core/pcm_native.c ++++ b/sound/core/pcm_native.c +@@ -583,13 +583,13 @@ static inline void snd_pcm_timer_notify(struct snd_pcm_substream *substream, + #endif + } + +-static void snd_pcm_sync_stop(struct snd_pcm_substream *substream) ++void snd_pcm_sync_stop(struct snd_pcm_substream *substream, bool sync_irq) + { +- if (substream->runtime->stop_operating) { ++ if (substream->runtime && substream->runtime->stop_operating) { + substream->runtime->stop_operating = false; +- if (substream->ops->sync_stop) ++ if (substream->ops && substream->ops->sync_stop) + substream->ops->sync_stop(substream); +- else if (substream->pcm->card->sync_irq > 0) ++ else if (sync_irq && substream->pcm->card->sync_irq > 0) + synchronize_irq(substream->pcm->card->sync_irq); + } + } +@@ -686,7 +686,7 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream, + if (atomic_read(&substream->mmap_count)) + return -EBADFD; + +- snd_pcm_sync_stop(substream); ++ snd_pcm_sync_stop(substream, true); + + params->rmask = ~0U; + err = snd_pcm_hw_refine(substream, params); +@@ -809,7 +809,7 @@ static int do_hw_free(struct snd_pcm_substream *substream) + { + int result = 0; + +- snd_pcm_sync_stop(substream); ++ 
snd_pcm_sync_stop(substream, true); + if (substream->ops->hw_free) + result = substream->ops->hw_free(substream); + if (substream->managed_buffer_alloc) +@@ -1421,8 +1421,10 @@ static int snd_pcm_do_stop(struct snd_pcm_substream *substream, + snd_pcm_state_t state) + { + if (substream->runtime->trigger_master == substream && +- snd_pcm_running(substream)) ++ snd_pcm_running(substream)) { + substream->ops->trigger(substream, SNDRV_PCM_TRIGGER_STOP); ++ substream->runtime->stop_operating = true; ++ } + return 0; /* unconditonally stop all substreams */ + } + +@@ -1435,7 +1437,6 @@ static void snd_pcm_post_stop(struct snd_pcm_substream *substream, + runtime->status->state = state; + snd_pcm_timer_notify(substream, SNDRV_TIMER_EVENT_MSTOP); + } +- runtime->stop_operating = true; + wake_up(&runtime->sleep); + wake_up(&runtime->tsleep); + } +@@ -1615,6 +1616,7 @@ static int snd_pcm_do_suspend(struct snd_pcm_substream *substream, + if (! snd_pcm_running(substream)) + return 0; + substream->ops->trigger(substream, SNDRV_PCM_TRIGGER_SUSPEND); ++ runtime->stop_operating = true; + return 0; /* suspend unconditionally */ + } + +@@ -1691,6 +1693,12 @@ int snd_pcm_suspend_all(struct snd_pcm *pcm) + return err; + } + } ++ ++ for (stream = 0; stream < 2; stream++) ++ for (substream = pcm->streams[stream].substream; ++ substream; substream = substream->next) ++ snd_pcm_sync_stop(substream, false); ++ + return 0; + } + EXPORT_SYMBOL(snd_pcm_suspend_all); +@@ -1736,7 +1744,6 @@ static void snd_pcm_post_resume(struct snd_pcm_substream *substream, + snd_pcm_trigger_tstamp(substream); + runtime->status->state = runtime->status->suspended_state; + snd_pcm_timer_notify(substream, SNDRV_TIMER_EVENT_MRESUME); +- snd_pcm_sync_stop(substream); + } + + static const struct action_ops snd_pcm_action_resume = { +@@ -1866,7 +1873,7 @@ static int snd_pcm_do_prepare(struct snd_pcm_substream *substream, + snd_pcm_state_t state) + { + int err; +- snd_pcm_sync_stop(substream); ++ 
snd_pcm_sync_stop(substream, true); + err = substream->ops->prepare(substream); + if (err < 0) + return err; +diff --git a/sound/firewire/fireface/ff-protocol-latter.c b/sound/firewire/fireface/ff-protocol-latter.c +index 8d3b23778eb26..7ddb7b97f02db 100644 +--- a/sound/firewire/fireface/ff-protocol-latter.c ++++ b/sound/firewire/fireface/ff-protocol-latter.c +@@ -15,6 +15,61 @@ + #define LATTER_FETCH_MODE 0xffff00000010ULL + #define LATTER_SYNC_STATUS 0x0000801c0000ULL + ++// The content of sync status register differs between models. ++// ++// Fireface UCX: ++// 0xf0000000: (unidentified) ++// 0x0f000000: effective rate of sampling clock ++// 0x00f00000: detected rate of word clock on BNC interface ++// 0x000f0000: detected rate of ADAT or S/PDIF on optical interface ++// 0x0000f000: detected rate of S/PDIF on coaxial interface ++// 0x00000e00: effective source of sampling clock ++// 0x00000e00: Internal ++// 0x00000800: (unidentified) ++// 0x00000600: Word clock on BNC interface ++// 0x00000400: ADAT on optical interface ++// 0x00000200: S/PDIF on coaxial or optical interface ++// 0x00000100: Optical interface is used for ADAT signal ++// 0x00000080: (unidentified) ++// 0x00000040: Synchronized to word clock on BNC interface ++// 0x00000020: Synchronized to ADAT or S/PDIF on optical interface ++// 0x00000010: Synchronized to S/PDIF on coaxial interface ++// 0x00000008: (unidentified) ++// 0x00000004: Lock word clock on BNC interface ++// 0x00000002: Lock ADAT or S/PDIF on optical interface ++// 0x00000001: Lock S/PDIF on coaxial interface ++// ++// Fireface 802 (and perhaps UFX): ++// 0xf0000000: effective rate of sampling clock ++// 0x0f000000: detected rate of ADAT-B on 2nd optical interface ++// 0x00f00000: detected rate of ADAT-A on 1st optical interface ++// 0x000f0000: detected rate of AES/EBU on XLR or coaxial interface ++// 0x0000f000: detected rate of word clock on BNC interface ++// 0x00000e00: effective source of sampling clock ++// 0x00000e00: 
internal ++// 0x00000800: ADAT-B ++// 0x00000600: ADAT-A ++// 0x00000400: AES/EBU ++// 0x00000200: Word clock ++// 0x00000080: Synchronized to ADAT-B on 2nd optical interface ++// 0x00000040: Synchronized to ADAT-A on 1st optical interface ++// 0x00000020: Synchronized to AES/EBU on XLR or 2nd optical interface ++// 0x00000010: Synchronized to word clock on BNC interface ++// 0x00000008: Lock ADAT-B on 2nd optical interface ++// 0x00000004: Lock ADAT-A on 1st optical interface ++// 0x00000002: Lock AES/EBU on XLR or 2nd optical interface ++// 0x00000001: Lock word clock on BNC interface ++// ++// The pattern for rate bits: ++// 0x00: 32.0 kHz ++// 0x01: 44.1 kHz ++// 0x02: 48.0 kHz ++// 0x04: 64.0 kHz ++// 0x05: 88.2 kHz ++// 0x06: 96.0 kHz ++// 0x08: 128.0 kHz ++// 0x09: 176.4 kHz ++// 0x0a: 192.0 kHz + static int parse_clock_bits(u32 data, unsigned int *rate, + enum snd_ff_clock_src *src, + enum snd_ff_unit_version unit_version) +@@ -23,35 +78,48 @@ static int parse_clock_bits(u32 data, unsigned int *rate, + unsigned int rate; + u32 flag; + } *rate_entry, rate_entries[] = { +- { 32000, 0x00000000, }, +- { 44100, 0x01000000, }, +- { 48000, 0x02000000, }, +- { 64000, 0x04000000, }, +- { 88200, 0x05000000, }, +- { 96000, 0x06000000, }, +- { 128000, 0x08000000, }, +- { 176400, 0x09000000, }, +- { 192000, 0x0a000000, }, ++ { 32000, 0x00, }, ++ { 44100, 0x01, }, ++ { 48000, 0x02, }, ++ { 64000, 0x04, }, ++ { 88200, 0x05, }, ++ { 96000, 0x06, }, ++ { 128000, 0x08, }, ++ { 176400, 0x09, }, ++ { 192000, 0x0a, }, + }; + static const struct { + enum snd_ff_clock_src src; + u32 flag; +- } *clk_entry, clk_entries[] = { ++ } *clk_entry, *clk_entries, ucx_clk_entries[] = { + { SND_FF_CLOCK_SRC_SPDIF, 0x00000200, }, + { SND_FF_CLOCK_SRC_ADAT1, 0x00000400, }, + { SND_FF_CLOCK_SRC_WORD, 0x00000600, }, + { SND_FF_CLOCK_SRC_INTERNAL, 0x00000e00, }, ++ }, ufx_ff802_clk_entries[] = { ++ { SND_FF_CLOCK_SRC_WORD, 0x00000200, }, ++ { SND_FF_CLOCK_SRC_SPDIF, 0x00000400, }, ++ { 
SND_FF_CLOCK_SRC_ADAT1, 0x00000600, }, ++ { SND_FF_CLOCK_SRC_ADAT2, 0x00000800, }, ++ { SND_FF_CLOCK_SRC_INTERNAL, 0x00000e00, }, + }; ++ u32 rate_bits; ++ unsigned int clk_entry_count; + int i; + +- if (unit_version != SND_FF_UNIT_VERSION_UCX) { +- // e.g. 0x00fe0f20 but expected 0x00eff002. +- data = ((data & 0xf0f0f0f0) >> 4) | ((data & 0x0f0f0f0f) << 4); ++ if (unit_version == SND_FF_UNIT_VERSION_UCX) { ++ rate_bits = (data & 0x0f000000) >> 24; ++ clk_entries = ucx_clk_entries; ++ clk_entry_count = ARRAY_SIZE(ucx_clk_entries); ++ } else { ++ rate_bits = (data & 0xf0000000) >> 28; ++ clk_entries = ufx_ff802_clk_entries; ++ clk_entry_count = ARRAY_SIZE(ufx_ff802_clk_entries); + } + + for (i = 0; i < ARRAY_SIZE(rate_entries); ++i) { + rate_entry = rate_entries + i; +- if ((data & 0x0f000000) == rate_entry->flag) { ++ if (rate_bits == rate_entry->flag) { + *rate = rate_entry->rate; + break; + } +@@ -59,14 +127,14 @@ static int parse_clock_bits(u32 data, unsigned int *rate, + if (i == ARRAY_SIZE(rate_entries)) + return -EIO; + +- for (i = 0; i < ARRAY_SIZE(clk_entries); ++i) { ++ for (i = 0; i < clk_entry_count; ++i) { + clk_entry = clk_entries + i; + if ((data & 0x000e00) == clk_entry->flag) { + *src = clk_entry->src; + break; + } + } +- if (i == ARRAY_SIZE(clk_entries)) ++ if (i == clk_entry_count) + return -EIO; + + return 0; +@@ -249,16 +317,22 @@ static void latter_dump_status(struct snd_ff *ff, struct snd_info_buffer *buffer + char *const label; + u32 locked_mask; + u32 synced_mask; +- } *clk_entry, clk_entries[] = { ++ } *clk_entry, *clk_entries, ucx_clk_entries[] = { + { "S/PDIF", 0x00000001, 0x00000010, }, + { "ADAT", 0x00000002, 0x00000020, }, + { "WDClk", 0x00000004, 0x00000040, }, ++ }, ufx_ff802_clk_entries[] = { ++ { "WDClk", 0x00000001, 0x00000010, }, ++ { "AES/EBU", 0x00000002, 0x00000020, }, ++ { "ADAT-A", 0x00000004, 0x00000040, }, ++ { "ADAT-B", 0x00000008, 0x00000080, }, + }; + __le32 reg; + u32 data; + unsigned int rate; + enum snd_ff_clock_src 
src; + const char *label; ++ unsigned int clk_entry_count; + int i; + int err; + +@@ -270,7 +344,15 @@ static void latter_dump_status(struct snd_ff *ff, struct snd_info_buffer *buffer + + snd_iprintf(buffer, "External source detection:\n"); + +- for (i = 0; i < ARRAY_SIZE(clk_entries); ++i) { ++ if (ff->unit_version == SND_FF_UNIT_VERSION_UCX) { ++ clk_entries = ucx_clk_entries; ++ clk_entry_count = ARRAY_SIZE(ucx_clk_entries); ++ } else { ++ clk_entries = ufx_ff802_clk_entries; ++ clk_entry_count = ARRAY_SIZE(ufx_ff802_clk_entries); ++ } ++ ++ for (i = 0; i < clk_entry_count; ++i) { + clk_entry = clk_entries + i; + snd_iprintf(buffer, "%s: ", clk_entry->label); + if (data & clk_entry->locked_mask) { +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index d393401db1ec5..145f4ff47d54f 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -2481,6 +2481,8 @@ static const struct pci_device_id azx_ids[] = { + /* CometLake-H */ + { PCI_DEVICE(0x8086, 0x06C8), + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, ++ { PCI_DEVICE(0x8086, 0xf1c8), ++ .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, + /* CometLake-S */ + { PCI_DEVICE(0x8086, 0xa3f0), + .driver_data = AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE}, +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c +index dc1ab4fc93a5b..c67d5915ce243 100644 +--- a/sound/pci/hda/patch_hdmi.c ++++ b/sound/pci/hda/patch_hdmi.c +@@ -2133,7 +2133,6 @@ static int hdmi_pcm_close(struct hda_pcm_stream *hinfo, + goto unlock; + } + per_cvt = get_cvt(spec, cvt_idx); +- snd_BUG_ON(!per_cvt->assigned); + per_cvt->assigned = 0; + hinfo->nid = 0; + +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 290645516313c..1927605f0f7ed 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -1905,6 +1905,7 @@ enum { + ALC889_FIXUP_FRONT_HP_NO_PRESENCE, + ALC889_FIXUP_VAIO_TT, + ALC888_FIXUP_EEE1601, ++ 
ALC886_FIXUP_EAPD, + ALC882_FIXUP_EAPD, + ALC883_FIXUP_EAPD, + ALC883_FIXUP_ACER_EAPD, +@@ -2238,6 +2239,15 @@ static const struct hda_fixup alc882_fixups[] = { + { } + } + }, ++ [ALC886_FIXUP_EAPD] = { ++ .type = HDA_FIXUP_VERBS, ++ .v.verbs = (const struct hda_verb[]) { ++ /* change to EAPD mode */ ++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x07 }, ++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0068 }, ++ { } ++ } ++ }, + [ALC882_FIXUP_EAPD] = { + .type = HDA_FIXUP_VERBS, + .v.verbs = (const struct hda_verb[]) { +@@ -2510,6 +2520,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = { + SND_PCI_QUIRK(0x106b, 0x4a00, "Macbook 5,2", ALC889_FIXUP_MBA11_VREF), + + SND_PCI_QUIRK(0x1071, 0x8258, "Evesham Voyaeger", ALC882_FIXUP_EAPD), ++ SND_PCI_QUIRK(0x13fe, 0x1009, "Advantech MIT-W101", ALC886_FIXUP_EAPD), + SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE), + SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS), + SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950), +@@ -4280,6 +4291,28 @@ static void alc280_fixup_hp_gpio4(struct hda_codec *codec, + } + } + ++/* HP Spectre x360 14 model needs a unique workaround for enabling the amp; ++ * it needs to toggle the GPIO0 once on and off at each time (bko#210633) ++ */ ++static void alc245_fixup_hp_x360_amp(struct hda_codec *codec, ++ const struct hda_fixup *fix, int action) ++{ ++ struct alc_spec *spec = codec->spec; ++ ++ switch (action) { ++ case HDA_FIXUP_ACT_PRE_PROBE: ++ spec->gpio_mask |= 0x01; ++ spec->gpio_dir |= 0x01; ++ break; ++ case HDA_FIXUP_ACT_INIT: ++ /* need to toggle GPIO to enable the amp */ ++ alc_update_gpio_data(codec, 0x01, true); ++ msleep(100); ++ alc_update_gpio_data(codec, 0x01, false); ++ break; ++ } ++} ++ + static void alc_update_coef_led(struct hda_codec *codec, + struct alc_coef_led *led, + bool polarity, bool on) +@@ -6266,6 +6299,7 @@ enum { + ALC280_FIXUP_HP_DOCK_PINS, + 
ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED, + ALC280_FIXUP_HP_9480M, ++ ALC245_FIXUP_HP_X360_AMP, + ALC288_FIXUP_DELL_HEADSET_MODE, + ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, + ALC288_FIXUP_DELL_XPS_13, +@@ -6971,6 +7005,10 @@ static const struct hda_fixup alc269_fixups[] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc280_fixup_hp_9480m, + }, ++ [ALC245_FIXUP_HP_X360_AMP] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc245_fixup_hp_x360_amp, ++ }, + [ALC288_FIXUP_DELL_HEADSET_MODE] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc_fixup_headset_mode_dell_alc288, +@@ -7985,6 +8023,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x87f4, "HP", ALC287_FIXUP_HP_GPIO_LED), + SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED), ++ SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP), + SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC), + SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300), + SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), +@@ -8357,6 +8396,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = { + {.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"}, + {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"}, + {.id = ALC274_FIXUP_HP_MIC, .name = "alc274-hp-mic-detect"}, ++ {.id = ALC245_FIXUP_HP_X360_AMP, .name = "alc245-hp-x360-amp"}, + {} + }; + #define ALC225_STANDARD_PINS \ +diff --git a/sound/soc/codecs/cpcap.c b/sound/soc/codecs/cpcap.c +index f046987ee4cdb..c0425e3707d9c 100644 +--- a/sound/soc/codecs/cpcap.c ++++ b/sound/soc/codecs/cpcap.c +@@ -1264,12 +1264,12 @@ static int cpcap_voice_hw_params(struct snd_pcm_substream *substream, + + if (direction == SNDRV_PCM_STREAM_CAPTURE) { + mask = 0x0000; +- mask |= CPCAP_BIT_MIC1_RX_TIMESLOT0; +- mask |= CPCAP_BIT_MIC1_RX_TIMESLOT1; +- mask |= 
CPCAP_BIT_MIC1_RX_TIMESLOT2; +- mask |= CPCAP_BIT_MIC2_TIMESLOT0; +- mask |= CPCAP_BIT_MIC2_TIMESLOT1; +- mask |= CPCAP_BIT_MIC2_TIMESLOT2; ++ mask |= BIT(CPCAP_BIT_MIC1_RX_TIMESLOT0); ++ mask |= BIT(CPCAP_BIT_MIC1_RX_TIMESLOT1); ++ mask |= BIT(CPCAP_BIT_MIC1_RX_TIMESLOT2); ++ mask |= BIT(CPCAP_BIT_MIC2_TIMESLOT0); ++ mask |= BIT(CPCAP_BIT_MIC2_TIMESLOT1); ++ mask |= BIT(CPCAP_BIT_MIC2_TIMESLOT2); + val = 0x0000; + if (channels >= 2) + val = BIT(CPCAP_BIT_MIC1_RX_TIMESLOT0); +diff --git a/sound/soc/codecs/cs42l56.c b/sound/soc/codecs/cs42l56.c +index 97024a6ac96d7..06dcfae9dfe71 100644 +--- a/sound/soc/codecs/cs42l56.c ++++ b/sound/soc/codecs/cs42l56.c +@@ -1249,6 +1249,7 @@ static int cs42l56_i2c_probe(struct i2c_client *i2c_client, + dev_err(&i2c_client->dev, + "CS42L56 Device ID (%X). Expected %X\n", + devid, CS42L56_DEVID); ++ ret = -EINVAL; + goto err_enable; + } + alpha_rev = reg & CS42L56_AREV_MASK; +@@ -1306,7 +1307,7 @@ static int cs42l56_i2c_probe(struct i2c_client *i2c_client, + ret = devm_snd_soc_register_component(&i2c_client->dev, + &soc_component_dev_cs42l56, &cs42l56_dai, 1); + if (ret < 0) +- return ret; ++ goto err_enable; + + return 0; + +diff --git a/sound/soc/codecs/rt5682-i2c.c b/sound/soc/codecs/rt5682-i2c.c +index 6b4e0eb30c89a..7e652843c57d9 100644 +--- a/sound/soc/codecs/rt5682-i2c.c ++++ b/sound/soc/codecs/rt5682-i2c.c +@@ -268,6 +268,9 @@ static void rt5682_i2c_shutdown(struct i2c_client *client) + { + struct rt5682_priv *rt5682 = i2c_get_clientdata(client); + ++ cancel_delayed_work_sync(&rt5682->jack_detect_work); ++ cancel_delayed_work_sync(&rt5682->jd_check_work); ++ + rt5682_reset(rt5682); + } + +diff --git a/sound/soc/codecs/wsa881x.c b/sound/soc/codecs/wsa881x.c +index 4530b74f5921b..db87e07b11c94 100644 +--- a/sound/soc/codecs/wsa881x.c ++++ b/sound/soc/codecs/wsa881x.c +@@ -640,6 +640,7 @@ static struct regmap_config wsa881x_regmap_config = { + .val_bits = 8, + .cache_type = REGCACHE_RBTREE, + .reg_defaults = wsa881x_defaults, ++ 
.max_register = WSA881X_SPKR_STATUS3, + .num_reg_defaults = ARRAY_SIZE(wsa881x_defaults), + .volatile_reg = wsa881x_volatile_register, + .readable_reg = wsa881x_readable_register, +diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c +index 6cada4c1e283b..ab31045cfc952 100644 +--- a/sound/soc/generic/simple-card-utils.c ++++ b/sound/soc/generic/simple-card-utils.c +@@ -172,16 +172,15 @@ int asoc_simple_parse_clk(struct device *dev, + * or device's module clock. + */ + clk = devm_get_clk_from_child(dev, node, NULL); +- if (!IS_ERR(clk)) { +- simple_dai->sysclk = clk_get_rate(clk); ++ if (IS_ERR(clk)) ++ clk = devm_get_clk_from_child(dev, dlc->of_node, NULL); + ++ if (!IS_ERR(clk)) { + simple_dai->clk = clk; +- } else if (!of_property_read_u32(node, "system-clock-frequency", &val)) { ++ simple_dai->sysclk = clk_get_rate(clk); ++ } else if (!of_property_read_u32(node, "system-clock-frequency", ++ &val)) { + simple_dai->sysclk = val; +- } else { +- clk = devm_get_clk_from_child(dev, dlc->of_node, NULL); +- if (!IS_ERR(clk)) +- simple_dai->sysclk = clk_get_rate(clk); + } + + if (of_property_read_bool(node, "system-clock-direction-out")) +diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c +index a8d43c87cb5a2..07e72ca1dfbc9 100644 +--- a/sound/soc/intel/boards/sof_sdw.c ++++ b/sound/soc/intel/boards/sof_sdw.c +@@ -54,7 +54,8 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = { + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"), + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A3E") + }, +- .driver_data = (void *)(SOF_RT711_JD_SRC_JD2 | ++ .driver_data = (void *)(SOF_SDW_TGL_HDMI | ++ SOF_RT711_JD_SRC_JD2 | + SOF_RT715_DAI_ID_FIX), + }, + { +@@ -63,7 +64,8 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = { + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc"), + DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "0A5E") + }, +- .driver_data = (void *)(SOF_RT711_JD_SRC_JD2 | ++ .driver_data = (void *)(SOF_SDW_TGL_HDMI | ++ 
SOF_RT711_JD_SRC_JD2 | + SOF_RT715_DAI_ID_FIX | + SOF_SDW_FOUR_SPK), + }, +diff --git a/sound/soc/qcom/lpass-apq8016.c b/sound/soc/qcom/lpass-apq8016.c +index 0aedb3a0a798a..7c0e774ad0625 100644 +--- a/sound/soc/qcom/lpass-apq8016.c ++++ b/sound/soc/qcom/lpass-apq8016.c +@@ -250,7 +250,7 @@ static struct lpass_variant apq8016_data = { + .micmode = REG_FIELD_ID(0x1000, 4, 7, 4, 0x1000), + .micmono = REG_FIELD_ID(0x1000, 3, 3, 4, 0x1000), + .wssrc = REG_FIELD_ID(0x1000, 2, 2, 4, 0x1000), +- .bitwidth = REG_FIELD_ID(0x1000, 0, 0, 4, 0x1000), ++ .bitwidth = REG_FIELD_ID(0x1000, 0, 1, 4, 0x1000), + + .rdma_dyncclk = REG_FIELD_ID(0x8400, 12, 12, 2, 0x1000), + .rdma_bursten = REG_FIELD_ID(0x8400, 11, 11, 2, 0x1000), +diff --git a/sound/soc/qcom/lpass-cpu.c b/sound/soc/qcom/lpass-cpu.c +index 46bb24afeacf0..a33dbd6de8a06 100644 +--- a/sound/soc/qcom/lpass-cpu.c ++++ b/sound/soc/qcom/lpass-cpu.c +@@ -286,16 +286,12 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream, + dev_err(dai->dev, "error writing to i2sctl reg: %d\n", + ret); + +- if (drvdata->bit_clk_state[id] == LPAIF_BIT_CLK_DISABLE) { +- ret = clk_enable(drvdata->mi2s_bit_clk[id]); +- if (ret) { +- dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret); +- clk_disable(drvdata->mi2s_osr_clk[id]); +- return ret; +- } +- drvdata->bit_clk_state[id] = LPAIF_BIT_CLK_ENABLE; ++ ret = clk_enable(drvdata->mi2s_bit_clk[id]); ++ if (ret) { ++ dev_err(dai->dev, "error in enabling mi2s bit clk: %d\n", ret); ++ clk_disable(drvdata->mi2s_osr_clk[id]); ++ return ret; + } +- + break; + case SNDRV_PCM_TRIGGER_STOP: + case SNDRV_PCM_TRIGGER_SUSPEND: +@@ -310,10 +306,9 @@ static int lpass_cpu_daiops_trigger(struct snd_pcm_substream *substream, + if (ret) + dev_err(dai->dev, "error writing to i2sctl reg: %d\n", + ret); +- if (drvdata->bit_clk_state[id] == LPAIF_BIT_CLK_ENABLE) { +- clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]); +- drvdata->bit_clk_state[id] = LPAIF_BIT_CLK_DISABLE; +- } ++ ++ 
clk_disable(drvdata->mi2s_bit_clk[dai->driver->id]); ++ + break; + } + +@@ -599,7 +594,7 @@ static bool lpass_hdmi_regmap_writeable(struct device *dev, unsigned int reg) + return true; + } + +- for (i = 0; i < v->rdma_channels; ++i) { ++ for (i = 0; i < v->hdmi_rdma_channels; ++i) { + if (reg == LPAIF_HDMI_RDMACTL_REG(v, i)) + return true; + if (reg == LPAIF_HDMI_RDMABASE_REG(v, i)) +@@ -645,7 +640,7 @@ static bool lpass_hdmi_regmap_readable(struct device *dev, unsigned int reg) + if (reg == LPASS_HDMITX_APP_IRQSTAT_REG(v)) + return true; + +- for (i = 0; i < v->rdma_channels; ++i) { ++ for (i = 0; i < v->hdmi_rdma_channels; ++i) { + if (reg == LPAIF_HDMI_RDMACTL_REG(v, i)) + return true; + if (reg == LPAIF_HDMI_RDMABASE_REG(v, i)) +@@ -672,7 +667,7 @@ static bool lpass_hdmi_regmap_volatile(struct device *dev, unsigned int reg) + if (reg == LPASS_HDMI_TX_LEGACY_ADDR(v)) + return true; + +- for (i = 0; i < v->rdma_channels; ++i) { ++ for (i = 0; i < v->hdmi_rdma_channels; ++i) { + if (reg == LPAIF_HDMI_RDMACURR_REG(v, i)) + return true; + } +@@ -822,7 +817,7 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev) + } + + lpass_hdmi_regmap_config.max_register = LPAIF_HDMI_RDMAPER_REG(variant, +- variant->hdmi_rdma_channels); ++ variant->hdmi_rdma_channels - 1); + drvdata->hdmiif_map = devm_regmap_init_mmio(dev, drvdata->hdmiif, + &lpass_hdmi_regmap_config); + if (IS_ERR(drvdata->hdmiif_map)) { +@@ -866,7 +861,6 @@ int asoc_qcom_lpass_cpu_platform_probe(struct platform_device *pdev) + PTR_ERR(drvdata->mi2s_bit_clk[dai_id])); + return PTR_ERR(drvdata->mi2s_bit_clk[dai_id]); + } +- drvdata->bit_clk_state[dai_id] = LPAIF_BIT_CLK_DISABLE; + } + + /* Allocation for i2sctl regmap fields */ +diff --git a/sound/soc/qcom/lpass-lpaif-reg.h b/sound/soc/qcom/lpass-lpaif-reg.h +index baf72f124ea9b..2eb03ad9b7c74 100644 +--- a/sound/soc/qcom/lpass-lpaif-reg.h ++++ b/sound/soc/qcom/lpass-lpaif-reg.h +@@ -60,9 +60,6 @@ + #define LPAIF_I2SCTL_BITWIDTH_24 1 + #define 
LPAIF_I2SCTL_BITWIDTH_32 2 + +-#define LPAIF_BIT_CLK_DISABLE 0 +-#define LPAIF_BIT_CLK_ENABLE 1 +- + #define LPAIF_I2SCTL_RESET_STATE 0x003C0004 + #define LPAIF_DMACTL_RESET_STATE 0x00200000 + +diff --git a/sound/soc/qcom/lpass.h b/sound/soc/qcom/lpass.h +index 868c1c8dbd455..1d926dd5f5900 100644 +--- a/sound/soc/qcom/lpass.h ++++ b/sound/soc/qcom/lpass.h +@@ -68,7 +68,6 @@ struct lpass_data { + unsigned int mi2s_playback_sd_mode[LPASS_MAX_MI2S_PORTS]; + unsigned int mi2s_capture_sd_mode[LPASS_MAX_MI2S_PORTS]; + int hdmi_port_enable; +- int bit_clk_state[LPASS_MAX_MI2S_PORTS]; + + /* low-power audio interface (LPAIF) registers */ + void __iomem *lpaif; +diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c +index c9ac9c1d26c47..9766725c29166 100644 +--- a/sound/soc/qcom/qdsp6/q6asm-dai.c ++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c +@@ -1233,6 +1233,25 @@ static void q6asm_dai_pcm_free(struct snd_soc_component *component, + } + } + ++static const struct snd_soc_dapm_widget q6asm_dapm_widgets[] = { ++ SND_SOC_DAPM_AIF_IN("MM_DL1", "MultiMedia1 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("MM_DL2", "MultiMedia2 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("MM_DL3", "MultiMedia3 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("MM_DL4", "MultiMedia4 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("MM_DL5", "MultiMedia5 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("MM_DL6", "MultiMedia6 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("MM_DL7", "MultiMedia7 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_IN("MM_DL8", "MultiMedia8 Playback", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL1", "MultiMedia1 Capture", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL2", "MultiMedia2 Capture", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL3", "MultiMedia3 Capture", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL4", "MultiMedia4 Capture", 0, SND_SOC_NOPM, 
0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL5", "MultiMedia5 Capture", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL6", "MultiMedia6 Capture", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL7", "MultiMedia7 Capture", 0, SND_SOC_NOPM, 0, 0), ++ SND_SOC_DAPM_AIF_OUT("MM_UL8", "MultiMedia8 Capture", 0, SND_SOC_NOPM, 0, 0), ++}; ++ + static const struct snd_soc_component_driver q6asm_fe_dai_component = { + .name = DRV_NAME, + .open = q6asm_dai_open, +@@ -1245,6 +1264,8 @@ static const struct snd_soc_component_driver q6asm_fe_dai_component = { + .pcm_construct = q6asm_dai_pcm_new, + .pcm_destruct = q6asm_dai_pcm_free, + .compress_ops = &q6asm_dai_compress_ops, ++ .dapm_widgets = q6asm_dapm_widgets, ++ .num_dapm_widgets = ARRAY_SIZE(q6asm_dapm_widgets), + }; + + static struct snd_soc_dai_driver q6asm_fe_dais_template[] = { +diff --git a/sound/soc/qcom/qdsp6/q6routing.c b/sound/soc/qcom/qdsp6/q6routing.c +index 53185e26fea17..0a6b9433f6acf 100644 +--- a/sound/soc/qcom/qdsp6/q6routing.c ++++ b/sound/soc/qcom/qdsp6/q6routing.c +@@ -713,24 +713,6 @@ static const struct snd_kcontrol_new mmul8_mixer_controls[] = { + Q6ROUTING_TX_MIXERS(MSM_FRONTEND_DAI_MULTIMEDIA8) }; + + static const struct snd_soc_dapm_widget msm_qdsp6_widgets[] = { +- /* Frontend AIF */ +- SND_SOC_DAPM_AIF_IN("MM_DL1", "MultiMedia1 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("MM_DL2", "MultiMedia2 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("MM_DL3", "MultiMedia3 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("MM_DL4", "MultiMedia4 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("MM_DL5", "MultiMedia5 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("MM_DL6", "MultiMedia6 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("MM_DL7", "MultiMedia7 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_IN("MM_DL8", "MultiMedia8 Playback", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("MM_UL1", "MultiMedia1 Capture", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("MM_UL2", "MultiMedia2 Capture", 0, 0, 0, 0), +- 
SND_SOC_DAPM_AIF_OUT("MM_UL3", "MultiMedia3 Capture", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("MM_UL4", "MultiMedia4 Capture", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("MM_UL5", "MultiMedia5 Capture", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("MM_UL6", "MultiMedia6 Capture", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("MM_UL7", "MultiMedia7 Capture", 0, 0, 0, 0), +- SND_SOC_DAPM_AIF_OUT("MM_UL8", "MultiMedia8 Capture", 0, 0, 0, 0), +- + /* Mixer definitions */ + SND_SOC_DAPM_MIXER("HDMI Mixer", SND_SOC_NOPM, 0, 0, + hdmi_mixer_controls, +diff --git a/sound/soc/sh/siu.h b/sound/soc/sh/siu.h +index 6201840f1bc05..a675c36fc9d95 100644 +--- a/sound/soc/sh/siu.h ++++ b/sound/soc/sh/siu.h +@@ -169,7 +169,7 @@ static inline u32 siu_read32(u32 __iomem *addr) + #define SIU_BRGBSEL (0x108 / sizeof(u32)) + #define SIU_BRRB (0x10c / sizeof(u32)) + +-extern struct snd_soc_component_driver siu_component; ++extern const struct snd_soc_component_driver siu_component; + extern struct siu_info *siu_i2s_data; + + int siu_init_port(int port, struct siu_port **port_info, struct snd_card *card); +diff --git a/sound/soc/sh/siu_pcm.c b/sound/soc/sh/siu_pcm.c +index 45c4320976ab9..4785886df4f03 100644 +--- a/sound/soc/sh/siu_pcm.c ++++ b/sound/soc/sh/siu_pcm.c +@@ -543,7 +543,7 @@ static void siu_pcm_free(struct snd_soc_component *component, + dev_dbg(pcm->card->dev, "%s\n", __func__); + } + +-struct const snd_soc_component_driver siu_component = { ++const struct snd_soc_component_driver siu_component = { + .name = DRV_NAME, + .open = siu_pcm_open, + .close = siu_pcm_close, +diff --git a/sound/soc/sof/debug.c b/sound/soc/sof/debug.c +index 9419a99bab536..3ef51b2210237 100644 +--- a/sound/soc/sof/debug.c ++++ b/sound/soc/sof/debug.c +@@ -350,7 +350,7 @@ static ssize_t sof_dfsentry_write(struct file *file, const char __user *buffer, + char *string; + int ret; + +- string = kzalloc(count, GFP_KERNEL); ++ string = kzalloc(count+1, GFP_KERNEL); + if (!string) + return -ENOMEM; + +diff --git 
a/sound/soc/sof/intel/hda-dsp.c b/sound/soc/sof/intel/hda-dsp.c +index 2dbc1273e56bd..cd324f3d11d17 100644 +--- a/sound/soc/sof/intel/hda-dsp.c ++++ b/sound/soc/sof/intel/hda-dsp.c +@@ -801,11 +801,15 @@ int hda_dsp_runtime_idle(struct snd_sof_dev *sdev) + + int hda_dsp_runtime_suspend(struct snd_sof_dev *sdev) + { ++ struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata; + const struct sof_dsp_power_state target_state = { + .state = SOF_DSP_PM_D3, + }; + int ret; + ++ /* cancel any attempt for DSP D0I3 */ ++ cancel_delayed_work_sync(&hda->d0i3_work); ++ + /* stop hda controller and power dsp off */ + ret = hda_suspend(sdev, true); + if (ret < 0) +diff --git a/sound/soc/sof/sof-pci-dev.c b/sound/soc/sof/sof-pci-dev.c +index 8f62e3487dc18..75657a25dbc05 100644 +--- a/sound/soc/sof/sof-pci-dev.c ++++ b/sound/soc/sof/sof-pci-dev.c +@@ -65,6 +65,13 @@ static const struct dmi_system_id community_key_platforms[] = { + DMI_MATCH(DMI_BOARD_NAME, "UP-APL01"), + } + }, ++ { ++ .ident = "Up Extreme", ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "AAEON"), ++ DMI_MATCH(DMI_BOARD_NAME, "UP-WHL01"), ++ } ++ }, + { + .ident = "Google Chromebooks", + .matches = { +diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c +index a860303cc5222..1b08f52ef86f6 100644 +--- a/sound/usb/pcm.c ++++ b/sound/usb/pcm.c +@@ -1861,7 +1861,7 @@ void snd_usb_preallocate_buffer(struct snd_usb_substream *subs) + { + struct snd_pcm *pcm = subs->stream->pcm; + struct snd_pcm_substream *s = pcm->streams[subs->direction].substream; +- struct device *dev = subs->dev->bus->controller; ++ struct device *dev = subs->dev->bus->sysdev; + + if (snd_usb_use_vmalloc) + snd_pcm_set_managed_buffer(s, SNDRV_DMA_TYPE_VMALLOC, +diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c +index ad165e6e74bc0..b954db52bb807 100644 +--- a/tools/lib/bpf/libbpf.c ++++ b/tools/lib/bpf/libbpf.c +@@ -865,24 +865,24 @@ static int bpf_map__init_kern_struct_ops(struct bpf_map *map, + if (btf_is_ptr(mtype)) { + struct bpf_program *prog; 
+ +- mtype = skip_mods_and_typedefs(btf, mtype->type, &mtype_id); ++ prog = st_ops->progs[i]; ++ if (!prog) ++ continue; ++ + kern_mtype = skip_mods_and_typedefs(kern_btf, + kern_mtype->type, + &kern_mtype_id); +- if (!btf_is_func_proto(mtype) || +- !btf_is_func_proto(kern_mtype)) { +- pr_warn("struct_ops init_kern %s: non func ptr %s is not supported\n", ++ ++ /* mtype->type must be a func_proto which was ++ * guaranteed in bpf_object__collect_st_ops_relos(), ++ * so only check kern_mtype for func_proto here. ++ */ ++ if (!btf_is_func_proto(kern_mtype)) { ++ pr_warn("struct_ops init_kern %s: kernel member %s is not a func ptr\n", + map->name, mname); + return -ENOTSUP; + } + +- prog = st_ops->progs[i]; +- if (!prog) { +- pr_debug("struct_ops init_kern %s: func ptr %s is not set\n", +- map->name, mname); +- continue; +- } +- + prog->attach_btf_id = kern_type_id; + prog->expected_attach_type = kern_member_idx; + +diff --git a/tools/objtool/arch/x86/special.c b/tools/objtool/arch/x86/special.c +index fd4af88c0ea52..151b13d0a2676 100644 +--- a/tools/objtool/arch/x86/special.c ++++ b/tools/objtool/arch/x86/special.c +@@ -48,7 +48,7 @@ bool arch_support_alt_relocation(struct special_alt *special_alt, + * replacement group. 
+ */ + return insn->offset == special_alt->new_off && +- (insn->type == INSN_CALL || is_static_jump(insn)); ++ (insn->type == INSN_CALL || is_jump(insn)); + } + + /* +diff --git a/tools/objtool/check.c b/tools/objtool/check.c +index 4bd30315eb62b..dc24aac08edd6 100644 +--- a/tools/objtool/check.c ++++ b/tools/objtool/check.c +@@ -789,7 +789,8 @@ static int add_jump_destinations(struct objtool_file *file) + dest_sec = reloc->sym->sec; + dest_off = reloc->sym->sym.st_value + + arch_dest_reloc_offset(reloc->addend); +- } else if (strstr(reloc->sym->name, "_indirect_thunk_")) { ++ } else if (!strncmp(reloc->sym->name, "__x86_indirect_thunk_", 21) || ++ !strncmp(reloc->sym->name, "__x86_retpoline_", 16)) { + /* + * Retpoline jumps are really dynamic jumps in + * disguise, so convert them accordingly. +@@ -849,8 +850,8 @@ static int add_jump_destinations(struct objtool_file *file) + * case where the parent function's only reference to a + * subfunction is through a jump table. + */ +- if (!strstr(insn->func->name, ".cold.") && +- strstr(insn->jump_dest->func->name, ".cold.")) { ++ if (!strstr(insn->func->name, ".cold") && ++ strstr(insn->jump_dest->func->name, ".cold")) { + insn->func->cfunc = insn->jump_dest->func; + insn->jump_dest->func->pfunc = insn->func; + +@@ -2592,15 +2593,19 @@ static int validate_branch(struct objtool_file *file, struct symbol *func, + break; + + case INSN_STD: +- if (state.df) ++ if (state.df) { + WARN_FUNC("recursive STD", sec, insn->offset); ++ return 1; ++ } + + state.df = true; + break; + + case INSN_CLD: +- if (!state.df && func) ++ if (!state.df && func) { + WARN_FUNC("redundant CLD", sec, insn->offset); ++ return 1; ++ } + + state.df = false; + break; +diff --git a/tools/objtool/check.h b/tools/objtool/check.h +index 5ec00a4b891b6..2804848e628e3 100644 +--- a/tools/objtool/check.h ++++ b/tools/objtool/check.h +@@ -54,6 +54,17 @@ static inline bool is_static_jump(struct instruction *insn) + insn->type == INSN_JUMP_UNCONDITIONAL; + } + 
++static inline bool is_dynamic_jump(struct instruction *insn) ++{ ++ return insn->type == INSN_JUMP_DYNAMIC || ++ insn->type == INSN_JUMP_DYNAMIC_CONDITIONAL; ++} ++ ++static inline bool is_jump(struct instruction *insn) ++{ ++ return is_static_jump(insn) || is_dynamic_jump(insn); ++} ++ + struct instruction *find_insn(struct objtool_file *file, + struct section *sec, unsigned long offset); + +diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c +index adf311d15d3d2..e5c938d538ee5 100644 +--- a/tools/perf/builtin-record.c ++++ b/tools/perf/builtin-record.c +@@ -1666,7 +1666,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv) + status = -1; + goto out_delete_session; + } +- err = evlist__add_pollfd(rec->evlist, done_fd); ++ err = evlist__add_wakeup_eventfd(rec->evlist, done_fd); + if (err < 0) { + pr_err("Failed to add wakeup eventfd to poll list\n"); + status = err; +diff --git a/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json b/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json +index 40010a8724b3a..ce6e7e7960579 100644 +--- a/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json ++++ b/tools/perf/pmu-events/arch/arm64/ampere/emag/cache.json +@@ -114,7 +114,7 @@ + "PublicDescription": "Level 2 access to instruciton TLB that caused a page table walk. 
This event counts on any instruciton access which causes L2I_TLB_REFILL to count", + "EventCode": "0x35", + "EventName": "L2I_TLB_ACCESS", +- "BriefDescription": "L2D TLB access" ++ "BriefDescription": "L2I TLB access" + }, + { + "PublicDescription": "Branch target buffer misprediction", +diff --git a/tools/perf/tests/sample-parsing.c b/tools/perf/tests/sample-parsing.c +index a0bdaf390ac8e..33a58976222d3 100644 +--- a/tools/perf/tests/sample-parsing.c ++++ b/tools/perf/tests/sample-parsing.c +@@ -193,7 +193,7 @@ static int do_test(u64 sample_type, u64 sample_regs, u64 read_format) + .data = {1, -1ULL, 211, 212, 213}, + }; + u64 regs[64]; +- const u64 raw_data[] = {0x123456780a0b0c0dULL, 0x1102030405060708ULL}; ++ const u32 raw_data[] = {0x12345678, 0x0a0b0c0d, 0x11020304, 0x05060708, 0 }; + const u64 data[] = {0x2211443366558877ULL, 0, 0xaabbccddeeff4321ULL}; + const u64 aux_data[] = {0xa55a, 0, 0xeeddee, 0x0282028202820282}; + struct perf_sample sample = { +diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c +index 05616d4138a96..7e440fa90c938 100644 +--- a/tools/perf/util/event.c ++++ b/tools/perf/util/event.c +@@ -673,6 +673,8 @@ int machine__resolve(struct machine *machine, struct addr_location *al, + } + + al->sym = map__find_symbol(al->map, al->addr); ++ } else if (symbol_conf.dso_list) { ++ al->filtered |= (1 << HIST_FILTER__DSO); + } + + if (symbol_conf.sym_list) { +diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c +index 8bdf3d2c907cb..98ae432470cdd 100644 +--- a/tools/perf/util/evlist.c ++++ b/tools/perf/util/evlist.c +@@ -508,6 +508,14 @@ int evlist__filter_pollfd(struct evlist *evlist, short revents_and_mask) + return perf_evlist__filter_pollfd(&evlist->core, revents_and_mask); + } + ++#ifdef HAVE_EVENTFD_SUPPORT ++int evlist__add_wakeup_eventfd(struct evlist *evlist, int fd) ++{ ++ return perf_evlist__add_pollfd(&evlist->core, fd, NULL, POLLIN, ++ fdarray_flag__nonfilterable); ++} ++#endif ++ + int evlist__poll(struct evlist 
*evlist, int timeout) + { + return perf_evlist__poll(&evlist->core, timeout); +diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h +index e1a450322bc5b..9298fce53ea31 100644 +--- a/tools/perf/util/evlist.h ++++ b/tools/perf/util/evlist.h +@@ -160,6 +160,10 @@ perf_evlist__find_tracepoint_by_name(struct evlist *evlist, + int evlist__add_pollfd(struct evlist *evlist, int fd); + int evlist__filter_pollfd(struct evlist *evlist, short revents_and_mask); + ++#ifdef HAVE_EVENTFD_SUPPORT ++int evlist__add_wakeup_eventfd(struct evlist *evlist, int fd); ++#endif ++ + int evlist__poll(struct evlist *evlist, int timeout); + + struct evsel *perf_evlist__id2evsel(struct evlist *evlist, u64 id); +diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +index 697513f351549..197eb58a39cb7 100644 +--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c ++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +@@ -24,6 +24,13 @@ + #include "intel-pt-decoder.h" + #include "intel-pt-log.h" + ++#define BITULL(x) (1ULL << (x)) ++ ++/* IA32_RTIT_CTL MSR bits */ ++#define INTEL_PT_CYC_ENABLE BITULL(1) ++#define INTEL_PT_CYC_THRESHOLD (BITULL(22) | BITULL(21) | BITULL(20) | BITULL(19)) ++#define INTEL_PT_CYC_THRESHOLD_SHIFT 19 ++ + #define INTEL_PT_BLK_SIZE 1024 + + #define BIT63 (((uint64_t)1 << 63)) +@@ -167,6 +174,8 @@ struct intel_pt_decoder { + uint64_t sample_tot_cyc_cnt; + uint64_t base_cyc_cnt; + uint64_t cyc_cnt_timestamp; ++ uint64_t ctl; ++ uint64_t cyc_threshold; + double tsc_to_cyc; + bool continuous_period; + bool overflow; +@@ -204,6 +213,14 @@ static uint64_t intel_pt_lower_power_of_2(uint64_t x) + return x << i; + } + ++static uint64_t intel_pt_cyc_threshold(uint64_t ctl) ++{ ++ if (!(ctl & INTEL_PT_CYC_ENABLE)) ++ return 0; ++ ++ return (ctl & INTEL_PT_CYC_THRESHOLD) >> INTEL_PT_CYC_THRESHOLD_SHIFT; ++} ++ + static void intel_pt_setup_period(struct intel_pt_decoder *decoder) + { + if 
(decoder->period_type == INTEL_PT_PERIOD_TICKS) { +@@ -245,12 +262,15 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params) + + decoder->flags = params->flags; + ++ decoder->ctl = params->ctl; + decoder->period = params->period; + decoder->period_type = params->period_type; + + decoder->max_non_turbo_ratio = params->max_non_turbo_ratio; + decoder->max_non_turbo_ratio_fp = params->max_non_turbo_ratio; + ++ decoder->cyc_threshold = intel_pt_cyc_threshold(decoder->ctl); ++ + intel_pt_setup_period(decoder); + + decoder->mtc_shift = params->mtc_period; +@@ -1761,6 +1781,9 @@ static int intel_pt_walk_psbend(struct intel_pt_decoder *decoder) + break; + + case INTEL_PT_CYC: ++ intel_pt_calc_cyc_timestamp(decoder); ++ break; ++ + case INTEL_PT_VMCS: + case INTEL_PT_MNT: + case INTEL_PT_PAD: +@@ -2014,6 +2037,7 @@ static int intel_pt_hop_trace(struct intel_pt_decoder *decoder, bool *no_tip, in + + static int intel_pt_walk_trace(struct intel_pt_decoder *decoder) + { ++ int last_packet_type = INTEL_PT_PAD; + bool no_tip = false; + int err; + +@@ -2022,6 +2046,12 @@ static int intel_pt_walk_trace(struct intel_pt_decoder *decoder) + if (err) + return err; + next: ++ if (decoder->cyc_threshold) { ++ if (decoder->sample_cyc && last_packet_type != INTEL_PT_CYC) ++ decoder->sample_cyc = false; ++ last_packet_type = decoder->packet.type; ++ } ++ + if (decoder->hop) { + switch (intel_pt_hop_trace(decoder, &no_tip, &err)) { + case HOP_IGNORE: +@@ -2811,9 +2841,18 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder) + } + if (intel_pt_sample_time(decoder->pkt_state)) { + intel_pt_update_sample_time(decoder); +- if (decoder->sample_cyc) ++ if (decoder->sample_cyc) { + decoder->sample_tot_cyc_cnt = decoder->tot_cyc_cnt; ++ decoder->state.flags |= INTEL_PT_SAMPLE_IPC; ++ decoder->sample_cyc = false; ++ } + } ++ /* ++ * When using only TSC/MTC to compute cycles, IPC can be ++ * sampled as soon as the cycle count changes. 
++ */ ++ if (!decoder->have_cyc) ++ decoder->state.flags |= INTEL_PT_SAMPLE_IPC; + } + + decoder->state.timestamp = decoder->sample_timestamp; +diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h +index 8645fc2654811..48adaa78acfc2 100644 +--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h ++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h +@@ -17,6 +17,7 @@ + #define INTEL_PT_ABORT_TX (1 << 1) + #define INTEL_PT_ASYNC (1 << 2) + #define INTEL_PT_FUP_IP (1 << 3) ++#define INTEL_PT_SAMPLE_IPC (1 << 4) + + enum intel_pt_sample_type { + INTEL_PT_BRANCH = 1 << 0, +@@ -243,6 +244,7 @@ struct intel_pt_params { + void *data; + bool return_compression; + bool branch_enable; ++ uint64_t ctl; + uint64_t period; + enum intel_pt_period_type period_type; + unsigned max_non_turbo_ratio; +diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c +index 3a0348caec7d6..dc023b8c6003a 100644 +--- a/tools/perf/util/intel-pt.c ++++ b/tools/perf/util/intel-pt.c +@@ -893,6 +893,18 @@ static bool intel_pt_sampling_mode(struct intel_pt *pt) + return false; + } + ++static u64 intel_pt_ctl(struct intel_pt *pt) ++{ ++ struct evsel *evsel; ++ u64 config; ++ ++ evlist__for_each_entry(pt->session->evlist, evsel) { ++ if (intel_pt_get_config(pt, &evsel->core.attr, &config)) ++ return config; ++ } ++ return 0; ++} ++ + static u64 intel_pt_ns_to_ticks(const struct intel_pt *pt, u64 ns) + { + u64 quot, rem; +@@ -1026,6 +1038,7 @@ static struct intel_pt_queue *intel_pt_alloc_queue(struct intel_pt *pt, + params.data = ptq; + params.return_compression = intel_pt_return_compression(pt); + params.branch_enable = intel_pt_branch_enable(pt); ++ params.ctl = intel_pt_ctl(pt); + params.max_non_turbo_ratio = pt->max_non_turbo_ratio; + params.mtc_period = intel_pt_mtc_period(pt); + params.tsc_ctc_ratio_n = pt->tsc_ctc_ratio_n; +@@ -1381,7 +1394,8 @@ static int intel_pt_synth_branch_sample(struct intel_pt_queue *ptq) + 
sample.branch_stack = (struct branch_stack *)&dummy_bs; + } + +- sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_br_cyc_cnt; ++ if (ptq->state->flags & INTEL_PT_SAMPLE_IPC) ++ sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_br_cyc_cnt; + if (sample.cyc_cnt) { + sample.insn_cnt = ptq->ipc_insn_cnt - ptq->last_br_insn_cnt; + ptq->last_br_insn_cnt = ptq->ipc_insn_cnt; +@@ -1431,7 +1445,8 @@ static int intel_pt_synth_instruction_sample(struct intel_pt_queue *ptq) + else + sample.period = ptq->state->tot_insn_cnt - ptq->last_insn_cnt; + +- sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_in_cyc_cnt; ++ if (ptq->state->flags & INTEL_PT_SAMPLE_IPC) ++ sample.cyc_cnt = ptq->ipc_cyc_cnt - ptq->last_in_cyc_cnt; + if (sample.cyc_cnt) { + sample.insn_cnt = ptq->ipc_insn_cnt - ptq->last_in_insn_cnt; + ptq->last_in_insn_cnt = ptq->ipc_insn_cnt; +@@ -1966,14 +1981,8 @@ static int intel_pt_sample(struct intel_pt_queue *ptq) + + ptq->have_sample = false; + +- if (ptq->state->tot_cyc_cnt > ptq->ipc_cyc_cnt) { +- /* +- * Cycle count and instruction count only go together to create +- * a valid IPC ratio when the cycle count changes. 
+- */ +- ptq->ipc_insn_cnt = ptq->state->tot_insn_cnt; +- ptq->ipc_cyc_cnt = ptq->state->tot_cyc_cnt; +- } ++ ptq->ipc_insn_cnt = ptq->state->tot_insn_cnt; ++ ptq->ipc_cyc_cnt = ptq->state->tot_cyc_cnt; + + /* + * Do PEBS first to allow for the possibility that the PEBS timestamp +diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c +index 0d14abdf3d722..4d569ad7db02d 100644 +--- a/tools/perf/util/symbol.c ++++ b/tools/perf/util/symbol.c +@@ -1561,12 +1561,11 @@ static int bfd2elf_binding(asymbol *symbol) + int dso__load_bfd_symbols(struct dso *dso, const char *debugfile) + { + int err = -1; +- long symbols_size, symbols_count; ++ long symbols_size, symbols_count, i; + asection *section; + asymbol **symbols, *sym; + struct symbol *symbol; + bfd *abfd; +- u_int i; + u64 start, len; + + abfd = bfd_openr(dso->long_name, NULL); +@@ -1867,8 +1866,10 @@ int dso__load(struct dso *dso, struct map *map) + if (nsexit) + nsinfo__mountns_enter(dso->nsinfo, &nsc); + +- if (bfdrc == 0) ++ if (bfdrc == 0) { ++ ret = 0; + break; ++ } + + if (!is_reg || sirc < 0) + continue; +diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py +index 497ab51bc1702..3fbe1acd531ae 100755 +--- a/tools/testing/kunit/kunit_tool_test.py ++++ b/tools/testing/kunit/kunit_tool_test.py +@@ -288,19 +288,17 @@ class StrContains(str): + class KUnitMainTest(unittest.TestCase): + def setUp(self): + path = get_absolute_path('test_data/test_is_test_passed-all_passed.log') +- file = open(path) +- all_passed_log = file.readlines() +- self.print_patch = mock.patch('builtins.print') +- self.print_mock = self.print_patch.start() ++ with open(path) as file: ++ all_passed_log = file.readlines() ++ ++ self.print_mock = mock.patch('builtins.print').start() ++ self.addCleanup(mock.patch.stopall) ++ + self.linux_source_mock = mock.Mock() + self.linux_source_mock.build_reconfig = mock.Mock(return_value=True) + self.linux_source_mock.build_um_kernel = 
mock.Mock(return_value=True) + self.linux_source_mock.run_kernel = mock.Mock(return_value=all_passed_log) + +- def tearDown(self): +- self.print_patch.stop() +- pass +- + def test_config_passes_args_pass(self): + kunit.main(['config', '--build_dir=.kunit'], self.linux_source_mock) + assert self.linux_source_mock.build_reconfig.call_count == 1 +diff --git a/tools/testing/selftests/bpf/test_xdp_redirect.sh b/tools/testing/selftests/bpf/test_xdp_redirect.sh +index dd80f0c84afb4..c033850886f44 100755 +--- a/tools/testing/selftests/bpf/test_xdp_redirect.sh ++++ b/tools/testing/selftests/bpf/test_xdp_redirect.sh +@@ -1,4 +1,4 @@ +-#!/bin/sh ++#!/bin/bash + # Create 2 namespaces with two veth peers, and + # forward packets in-between using generic XDP + # +@@ -57,12 +57,8 @@ test_xdp_redirect() + ip link set dev veth1 $xdpmode obj test_xdp_redirect.o sec redirect_to_222 &> /dev/null + ip link set dev veth2 $xdpmode obj test_xdp_redirect.o sec redirect_to_111 &> /dev/null + +- ip netns exec ns1 ping -c 1 10.1.1.22 &> /dev/null +- local ret1=$? +- ip netns exec ns2 ping -c 1 10.1.1.11 &> /dev/null +- local ret2=$? 
+- +- if [ $ret1 -eq 0 -a $ret2 -eq 0 ]; then ++ if ip netns exec ns1 ping -c 1 10.1.1.22 &> /dev/null && ++ ip netns exec ns2 ping -c 1 10.1.1.11 &> /dev/null; then + echo "selftests: test_xdp_redirect $xdpmode [PASS]"; + else + ret=1 +diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile +index 607c2acd20829..604b43ece15f5 100644 +--- a/tools/testing/selftests/dmabuf-heaps/Makefile ++++ b/tools/testing/selftests/dmabuf-heaps/Makefile +@@ -1,5 +1,5 @@ + # SPDX-License-Identifier: GPL-2.0 +-CFLAGS += -static -O3 -Wl,-no-as-needed -Wall -I../../../../usr/include ++CFLAGS += -static -O3 -Wl,-no-as-needed -Wall + + TEST_GEN_PROGS = dmabuf-heap + +diff --git a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc +index ada594fe16cb3..955e3ceea44b5 100644 +--- a/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc ++++ b/tools/testing/selftests/ftrace/test.d/trigger/inter-event/trigger-synthetic_event_syntax_errors.tc +@@ -1,19 +1,38 @@ + #!/bin/sh + # SPDX-License-Identifier: GPL-2.0 + # description: event trigger - test synthetic_events syntax parser errors +-# requires: synthetic_events error_log ++# requires: synthetic_events error_log "char name[]' >> synthetic_events":README + + check_error() { # command-with-error-pos-by-^ + ftrace_errlog_check 'synthetic_events' "$1" 'synthetic_events' + } + ++check_dyn_error() { # command-with-error-pos-by-^ ++ ftrace_errlog_check 'synthetic_events' "$1" 'dynamic_events' ++} ++ + check_error 'myevent ^chr arg' # INVALID_TYPE +-check_error 'myevent ^char str[];; int v' # INVALID_TYPE +-check_error 'myevent char ^str]; int v' # INVALID_NAME +-check_error 'myevent char ^str;[]' # INVALID_NAME +-check_error 'myevent ^char str[; int v' # INVALID_TYPE +-check_error '^mye;vent char str[]' # 
BAD_NAME +-check_error 'myevent char str[]; ^int' # INVALID_FIELD +-check_error '^myevent' # INCOMPLETE_CMD ++check_error 'myevent ^unsigned arg' # INCOMPLETE_TYPE ++ ++check_error 'myevent char ^str]; int v' # BAD_NAME ++check_error '^mye-vent char str[]' # BAD_NAME ++check_error 'myevent char ^st-r[]' # BAD_NAME ++ ++check_error 'myevent char str;^[]' # INVALID_FIELD ++check_error 'myevent char str; ^int' # INVALID_FIELD ++ ++check_error 'myevent char ^str[; int v' # INVALID_ARRAY_SPEC ++check_error 'myevent char ^str[kdjdk]' # INVALID_ARRAY_SPEC ++check_error 'myevent char ^str[257]' # INVALID_ARRAY_SPEC ++ ++check_error '^mye;vent char str[]' # INVALID_CMD ++check_error '^myevent ; char str[]' # INVALID_CMD ++check_error '^myevent; char str[]' # INVALID_CMD ++check_error '^myevent ;char str[]' # INVALID_CMD ++check_error '^; char str[]' # INVALID_CMD ++check_error '^;myevent char str[]' # INVALID_CMD ++check_error '^myevent' # INVALID_CMD ++ ++check_dyn_error '^s:junk/myevent char str[' # INVALID_DYN_CMD + + exit 0 +diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.sh b/tools/testing/selftests/net/mptcp/mptcp_connect.sh +index 2cfd87d94db89..e927df83efb91 100755 +--- a/tools/testing/selftests/net/mptcp/mptcp_connect.sh ++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.sh +@@ -493,7 +493,7 @@ do_transfer() + echo "${listener_ns} SYNRX: ${cl_proto} -> ${srv_proto}: expect ${expect_synrx}, got ${stat_synrx_now_l}" + fi + if [ $expect_ackrx -ne $stat_ackrx_now_l ] ;then +- echo "${listener_ns} ACKRX: ${cl_proto} -> ${srv_proto}: expect ${expect_synrx}, got ${stat_synrx_now_l}" ++ echo "${listener_ns} ACKRX: ${cl_proto} -> ${srv_proto}: expect ${expect_ackrx}, got ${stat_ackrx_now_l} " + fi + + if [ $retc -eq 0 ] && [ $rets -eq 0 ];then +diff --git a/tools/testing/selftests/powerpc/eeh/eeh-basic.sh b/tools/testing/selftests/powerpc/eeh/eeh-basic.sh +index 0d783e1065c86..64779f073e177 100755 +--- a/tools/testing/selftests/powerpc/eeh/eeh-basic.sh ++++ 
b/tools/testing/selftests/powerpc/eeh/eeh-basic.sh +@@ -86,5 +86,5 @@ echo "$failed devices failed to recover ($dev_count tested)" + lspci | diff -u $pre_lspci - + rm -f $pre_lspci + +-test "$failed" == 0 ++test "$failed" -eq 0 + exit $? +diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c +index 26c72f2b61b1b..1b6c7d33c4ff2 100644 +--- a/tools/testing/selftests/seccomp/seccomp_bpf.c ++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c +@@ -315,7 +315,7 @@ TEST(kcmp) + ret = __filecmp(getpid(), getpid(), 1, 1); + EXPECT_EQ(ret, 0); + if (ret != 0 && errno == ENOSYS) +- SKIP(return, "Kernel does not support kcmp() (missing CONFIG_CHECKPOINT_RESTORE?)"); ++ SKIP(return, "Kernel does not support kcmp() (missing CONFIG_KCMP?)"); + } + + TEST(mode_strict_support) +diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh +index 74c69b75f6f5a..7ed7cd95e58fe 100755 +--- a/tools/testing/selftests/wireguard/netns.sh ++++ b/tools/testing/selftests/wireguard/netns.sh +@@ -39,7 +39,7 @@ ip0() { pretty 0 "ip $*"; ip -n $netns0 "$@"; } + ip1() { pretty 1 "ip $*"; ip -n $netns1 "$@"; } + ip2() { pretty 2 "ip $*"; ip -n $netns2 "$@"; } + sleep() { read -t "$1" -N 1 || true; } +-waitiperf() { pretty "${1//*-}" "wait for iperf:5201 pid $2"; while [[ $(ss -N "$1" -tlpH 'sport = 5201') != *\"iperf3\",pid=$2,fd=* ]]; do sleep 0.1; done; } ++waitiperf() { pretty "${1//*-}" "wait for iperf:${3:-5201} pid $2"; while [[ $(ss -N "$1" -tlpH "sport = ${3:-5201}") != *\"iperf3\",pid=$2,fd=* ]]; do sleep 0.1; done; } + waitncatudp() { pretty "${1//*-}" "wait for udp:1111 pid $2"; while [[ $(ss -N "$1" -ulpH 'sport = 1111') != *\"ncat\",pid=$2,fd=* ]]; do sleep 0.1; done; } + waitiface() { pretty "${1//*-}" "wait for $2 to come up"; ip netns exec "$1" bash -c "while [[ \$(< \"/sys/class/net/$2/operstate\") != up ]]; do read -t .1 -N 0 || true; done;"; } + +@@ -141,6 +141,19 @@ tests() { + n2 iperf3 
-s -1 -B fd00::2 & + waitiperf $netns2 $! + n1 iperf3 -Z -t 3 -b 0 -u -c fd00::2 ++ ++ # TCP over IPv4, in parallel ++ for max in 4 5 50; do ++ local pids=( ) ++ for ((i=0; i < max; ++i)) do ++ n2 iperf3 -p $(( 5200 + i )) -s -1 -B 192.168.241.2 & ++ pids+=( $! ); waitiperf $netns2 $! $(( 5200 + i )) ++ done ++ for ((i=0; i < max; ++i)) do ++ n1 iperf3 -Z -t 3 -p $(( 5200 + i )) -c 192.168.241.2 & ++ done ++ wait "${pids[@]}" ++ done + } + + [[ $(ip1 link show dev wg0) =~ mtu\ ([0-9]+) ]] && orig_mtu="${BASH_REMATCH[1]}"