From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1632310758.f1ad5dc0b5f6809f86a86e23b3fe3b3592722da7.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.4 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1147_linux-5.4.148.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: f1ad5dc0b5f6809f86a86e23b3fe3b3592722da7
X-VCS-Branch: 5.4
Date: Wed, 22 Sep 2021 11:39:30 +0000 (UTC)
List-Id: Gentoo Linux mail
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 80448320-af7c-4511-8273-99f9e528c55d X-Archives-Hash: 77bdbf8386544ca79bc464e5ae6e82d8 commit: f1ad5dc0b5f6809f86a86e23b3fe3b3592722da7 Author: Mike Pagano gentoo org> AuthorDate: Wed Sep 22 11:39:18 2021 +0000 Commit: Mike Pagano gentoo org> CommitDate: Wed Sep 22 11:39:18 2021 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f1ad5dc0 Linux patch 5.4.148 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1147_linux-5.4.148.patch | 8519 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 8523 insertions(+) diff --git a/0000_README b/0000_README index 4d2be88..b620b8f 100644 --- a/0000_README +++ b/0000_README @@ -631,6 +631,10 @@ Patch: 1146_linux-5.4.147.patch From: http://www.kernel.org Desc: Linux 5.4.147 +Patch: 1147_linux-5.4.148.patch +From: http://www.kernel.org +Desc: Linux 5.4.148 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1147_linux-5.4.148.patch b/1147_linux-5.4.148.patch new file mode 100644 index 0000000..e4f197b --- /dev/null +++ b/1147_linux-5.4.148.patch @@ -0,0 +1,8519 @@ +diff --git a/Documentation/admin-guide/devices.txt b/Documentation/admin-guide/devices.txt +index 1c5d2281efc97..771d9e7ae082b 100644 +--- a/Documentation/admin-guide/devices.txt ++++ b/Documentation/admin-guide/devices.txt +@@ -3002,10 +3002,10 @@ + 65 = /dev/infiniband/issm1 Second InfiniBand IsSM device + ... + 127 = /dev/infiniband/issm63 63rd InfiniBand IsSM device +- 128 = /dev/infiniband/uverbs0 First InfiniBand verbs device +- 129 = /dev/infiniband/uverbs1 Second InfiniBand verbs device ++ 192 = /dev/infiniband/uverbs0 First InfiniBand verbs device ++ 193 = /dev/infiniband/uverbs1 Second InfiniBand verbs device + ... 
+- 159 = /dev/infiniband/uverbs31 31st InfiniBand verbs device ++ 223 = /dev/infiniband/uverbs31 31st InfiniBand verbs device + + 232 char Biometric Devices + 0 = /dev/biometric/sensor0/fingerprint first fingerprint sensor on first device +diff --git a/Documentation/devicetree/bindings/arm/tegra.yaml b/Documentation/devicetree/bindings/arm/tegra.yaml +index 60b38eb5c61ab..56e1945911f1e 100644 +--- a/Documentation/devicetree/bindings/arm/tegra.yaml ++++ b/Documentation/devicetree/bindings/arm/tegra.yaml +@@ -49,7 +49,7 @@ properties: + - const: toradex,apalis_t30 + - const: nvidia,tegra30 + - items: +- - const: toradex,apalis_t30-eval-v1.1 ++ - const: toradex,apalis_t30-v1.1-eval + - const: toradex,apalis_t30-eval + - const: toradex,apalis_t30-v1.1 + - const: toradex,apalis_t30 +diff --git a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt +index 44919d48d2415..c459f169a9044 100644 +--- a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt ++++ b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt +@@ -122,7 +122,7 @@ on various other factors also like; + so the device should have enough free bytes available its OOB/Spare + area to accommodate ECC for entire page. 
In general following expression + helps in determining if given device can accommodate ECC syndrome: +- "2 + (PAGESIZE / 512) * ECC_BYTES" >= OOBSIZE" ++ "2 + (PAGESIZE / 512) * ECC_BYTES" <= OOBSIZE" + where + OOBSIZE number of bytes in OOB/spare area + PAGESIZE number of bytes in main-area of device page +diff --git a/Makefile b/Makefile +index 98227dae34947..b84706c6d6248 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 4 +-SUBLEVEL = 147 ++SUBLEVEL = 148 + EXTRAVERSION = + NAME = Kleptomaniac Octopus + +diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c +index a2fbea3ee07c7..102418ac5ff4a 100644 +--- a/arch/arc/mm/cache.c ++++ b/arch/arc/mm/cache.c +@@ -1123,7 +1123,7 @@ void clear_user_page(void *to, unsigned long u_vaddr, struct page *page) + clear_page(to); + clear_bit(PG_dc_clean, &page->flags); + } +- ++EXPORT_SYMBOL(clear_user_page); + + /********************************************************************** + * Explicit Cache flush request from user space via syscall +diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile +index f0b3a9281d69b..fb6cb24bde5c9 100644 +--- a/arch/arm/boot/compressed/Makefile ++++ b/arch/arm/boot/compressed/Makefile +@@ -90,6 +90,8 @@ $(addprefix $(obj)/,$(libfdt_objs) atags_to_fdt.o): \ + $(addprefix $(obj)/,$(libfdt_hdrs)) + + ifeq ($(CONFIG_ARM_ATAG_DTB_COMPAT),y) ++CFLAGS_REMOVE_atags_to_fdt.o += -Wframe-larger-than=${CONFIG_FRAME_WARN} ++CFLAGS_atags_to_fdt.o += -Wframe-larger-than=1280 + OBJS += $(libfdt_objs) atags_to_fdt.o + endif + +diff --git a/arch/arm/boot/dts/imx53-ppd.dts b/arch/arm/boot/dts/imx53-ppd.dts +index 5ff9a179c83c3..c80d1700e0949 100644 +--- a/arch/arm/boot/dts/imx53-ppd.dts ++++ b/arch/arm/boot/dts/imx53-ppd.dts +@@ -70,6 +70,12 @@ + clock-frequency = <11289600>; + }; + ++ achc_24M: achc-clock { ++ compatible = "fixed-clock"; ++ #clock-cells = <0>; ++ clock-frequency = <24000000>; ++ }; ++ + 
sgtlsound: sound { + compatible = "fsl,imx53-cpuvo-sgtl5000", + "fsl,imx-audio-sgtl5000"; +@@ -287,16 +293,13 @@ + &gpio4 12 GPIO_ACTIVE_LOW>; + status = "okay"; + +- spidev0: spi@0 { +- compatible = "ge,achc"; +- reg = <0>; +- spi-max-frequency = <1000000>; +- }; +- +- spidev1: spi@1 { +- compatible = "ge,achc"; +- reg = <1>; +- spi-max-frequency = <1000000>; ++ spidev0: spi@1 { ++ compatible = "ge,achc", "nxp,kinetis-k20"; ++ reg = <1>, <0>; ++ vdd-supply = <®_3v3>; ++ vdda-supply = <®_3v3>; ++ clocks = <&achc_24M>; ++ reset-gpios = <&gpio3 6 GPIO_ACTIVE_LOW>; + }; + + gpioxra0: gpio@2 { +diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi +index 8b79b4112ee1a..2b075e287610f 100644 +--- a/arch/arm/boot/dts/qcom-apq8064.dtsi ++++ b/arch/arm/boot/dts/qcom-apq8064.dtsi +@@ -1261,9 +1261,9 @@ + <&mmcc DSI1_BYTE_CLK>, + <&mmcc DSI_PIXEL_CLK>, + <&mmcc DSI1_ESC_CLK>; +- clock-names = "iface_clk", "bus_clk", "core_mmss_clk", +- "src_clk", "byte_clk", "pixel_clk", +- "core_clk"; ++ clock-names = "iface", "bus", "core_mmss", ++ "src", "byte", "pixel", ++ "core"; + + assigned-clocks = <&mmcc DSI1_BYTE_SRC>, + <&mmcc DSI1_ESC_SRC>, +diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi +index 20137fc578b1b..394a6b4dc69d5 100644 +--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi ++++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi +@@ -185,8 +185,9 @@ + nvidia,pins = "ata", "atb", "atc", "atd", "ate", + "cdev1", "cdev2", "dap1", "dtb", "gma", + "gmb", "gmc", "gmd", "gme", "gpu7", +- "gpv", "i2cp", "pta", "rm", "slxa", +- "slxk", "spia", "spib", "uac"; ++ "gpv", "i2cp", "irrx", "irtx", "pta", ++ "rm", "slxa", "slxk", "spia", "spib", ++ "uac"; + nvidia,pull = ; + nvidia,tristate = ; + }; +@@ -211,7 +212,7 @@ + conf_ddc { + nvidia,pins = "ddc", "dta", "dtd", "kbca", + "kbcb", "kbcc", "kbcd", "kbce", "kbcf", +- "sdc"; ++ "sdc", "uad", "uca"; + nvidia,pull = ; + nvidia,tristate = ; + }; +@@ -221,10 +222,9 @@ + 
"lvp0", "owc", "sdb"; + nvidia,tristate = ; + }; +- conf_irrx { +- nvidia,pins = "irrx", "irtx", "sdd", "spic", +- "spie", "spih", "uaa", "uab", "uad", +- "uca", "ucb"; ++ conf_sdd { ++ nvidia,pins = "sdd", "spic", "spie", "spih", ++ "uaa", "uab", "ucb"; + nvidia,pull = ; + nvidia,tristate = ; + }; +diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts +index 3595be0f25277..2d6c73d7d397c 100644 +--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts ++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-frwy.dts +@@ -83,15 +83,9 @@ + }; + + eeprom@52 { +- compatible = "atmel,24c512"; ++ compatible = "onnn,cat24c04", "atmel,24c04"; + reg = <0x52>; + }; +- +- eeprom@53 { +- compatible = "atmel,24c512"; +- reg = <0x53>; +- }; +- + }; + }; + }; +diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts +index 2743397591141..8858c1e92f23c 100644 +--- a/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts ++++ b/arch/arm64/boot/dts/freescale/fsl-ls1046a-rdb.dts +@@ -58,14 +58,9 @@ + }; + + eeprom@52 { +- compatible = "atmel,24c512"; ++ compatible = "onnn,cat24c05", "atmel,24c04"; + reg = <0x52>; + }; +- +- eeprom@53 { +- compatible = "atmel,24c512"; +- reg = <0x53>; +- }; + }; + + &i2c3 { +diff --git a/arch/arm64/boot/dts/nvidia/tegra132.dtsi b/arch/arm64/boot/dts/nvidia/tegra132.dtsi +index 631a7f77c3869..0b3eb8c0b8df0 100644 +--- a/arch/arm64/boot/dts/nvidia/tegra132.dtsi ++++ b/arch/arm64/boot/dts/nvidia/tegra132.dtsi +@@ -1082,13 +1082,13 @@ + + cpu@0 { + device_type = "cpu"; +- compatible = "nvidia,denver"; ++ compatible = "nvidia,tegra132-denver"; + reg = <0>; + }; + + cpu@1 { + device_type = "cpu"; +- compatible = "nvidia,denver"; ++ compatible = "nvidia,tegra132-denver"; + reg = <1>; + }; + }; +diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi +index 0821754f0fd6d..90adff8aa9baf 100644 +--- 
a/arch/arm64/boot/dts/nvidia/tegra194.dtsi ++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi +@@ -1434,7 +1434,7 @@ + }; + + pcie_ep@14160000 { +- compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep"; ++ compatible = "nvidia,tegra194-pcie-ep"; + power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX4A>; + reg = <0x00 0x14160000 0x0 0x00020000 /* appl registers (128K) */ + 0x00 0x36040000 0x0 0x00040000 /* iATU_DMA reg space (256K) */ +@@ -1466,7 +1466,7 @@ + }; + + pcie_ep@14180000 { +- compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep"; ++ compatible = "nvidia,tegra194-pcie-ep"; + power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>; + reg = <0x00 0x14180000 0x0 0x00020000 /* appl registers (128K) */ + 0x00 0x38040000 0x0 0x00040000 /* iATU_DMA reg space (256K) */ +@@ -1498,7 +1498,7 @@ + }; + + pcie_ep@141a0000 { +- compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep"; ++ compatible = "nvidia,tegra194-pcie-ep"; + power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>; + reg = <0x00 0x141a0000 0x0 0x00020000 /* appl registers (128K) */ + 0x00 0x3a040000 0x0 0x00040000 /* iATU_DMA reg space (256K) */ +diff --git a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts +index 70be3f95209bc..830d9f2c1e5f2 100644 +--- a/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts ++++ b/arch/arm64/boot/dts/qcom/ipq8074-hk01.dts +@@ -20,7 +20,7 @@ + stdout-path = "serial0"; + }; + +- memory { ++ memory@40000000 { + device_type = "memory"; + reg = <0x0 0x40000000 0x0 0x20000000>; + }; +diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h +index 817efd95d539f..9679b74a20817 100644 +--- a/arch/arm64/include/asm/kernel-pgtable.h ++++ b/arch/arm64/include/asm/kernel-pgtable.h +@@ -65,8 +65,8 @@ + #define EARLY_KASLR (0) + #endif + +-#define EARLY_ENTRIES(vstart, vend, shift) (((vend) >> (shift)) \ +- - ((vstart) >> (shift)) + 1 + EARLY_KASLR) ++#define EARLY_ENTRIES(vstart, vend, shift) \ ++ ((((vend) - 1) >> 
(shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR) + + #define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT)) + +diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c +index 04b982a2799eb..e62c9cbf99f46 100644 +--- a/arch/arm64/kernel/fpsimd.c ++++ b/arch/arm64/kernel/fpsimd.c +@@ -498,7 +498,7 @@ size_t sve_state_size(struct task_struct const *task) + void sve_alloc(struct task_struct *task) + { + if (task->thread.sve_state) { +- memset(task->thread.sve_state, 0, sve_state_size(current)); ++ memset(task->thread.sve_state, 0, sve_state_size(task)); + return; + } + +diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S +index a2e0b37549433..2f784d3b4b390 100644 +--- a/arch/arm64/kernel/head.S ++++ b/arch/arm64/kernel/head.S +@@ -194,7 +194,7 @@ ENDPROC(preserve_boot_args) + * to be composed of multiple pages. (This effectively scales the end index). + * + * vstart: virtual address of start of range +- * vend: virtual address of end of range ++ * vend: virtual address of end of range - we map [vstart, vend] + * shift: shift used to transform virtual address into index + * ptrs: number of entries in page table + * istart: index in table corresponding to vstart +@@ -231,17 +231,18 @@ ENDPROC(preserve_boot_args) + * + * tbl: location of page table + * rtbl: address to be used for first level page table entry (typically tbl + PAGE_SIZE) +- * vstart: start address to map +- * vend: end address to map - we map [vstart, vend] ++ * vstart: virtual address of start of range ++ * vend: virtual address of end of range - we map [vstart, vend - 1] + * flags: flags to use to map last level entries + * phys: physical address corresponding to vstart - physical memory is contiguous + * pgds: the number of pgd entries + * + * Temporaries: istart, iend, tmp, count, sv - these need to be different registers +- * Preserves: vstart, vend, flags +- * Corrupts: tbl, rtbl, istart, iend, tmp, count, sv ++ * Preserves: vstart, flags ++ * 
Corrupts: tbl, rtbl, vend, istart, iend, tmp, count, sv + */ + .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv ++ sub \vend, \vend, #1 + add \rtbl, \tbl, #PAGE_SIZE + mov \sv, \rtbl + mov \count, #0 +diff --git a/arch/m68k/Kconfig.bus b/arch/m68k/Kconfig.bus +index 9d0a3a23d50e5..355c51309ed85 100644 +--- a/arch/m68k/Kconfig.bus ++++ b/arch/m68k/Kconfig.bus +@@ -63,7 +63,7 @@ source "drivers/zorro/Kconfig" + + endif + +-if !MMU ++if COLDFIRE + + config ISA_DMA_API + def_bool !M5272 +diff --git a/arch/mips/mti-malta/malta-dtshim.c b/arch/mips/mti-malta/malta-dtshim.c +index 98a063093b69a..0be28adff5572 100644 +--- a/arch/mips/mti-malta/malta-dtshim.c ++++ b/arch/mips/mti-malta/malta-dtshim.c +@@ -22,7 +22,7 @@ + #define ROCIT_CONFIG_GEN1_MEMMAP_SHIFT 8 + #define ROCIT_CONFIG_GEN1_MEMMAP_MASK (0xf << 8) + +-static unsigned char fdt_buf[16 << 10] __initdata; ++static unsigned char fdt_buf[16 << 10] __initdata __aligned(8); + + /* determined physical memory size, not overridden by command line args */ + extern unsigned long physical_memsize; +diff --git a/arch/openrisc/kernel/entry.S b/arch/openrisc/kernel/entry.S +index c6481cfc5220f..6b27cf4a0d786 100644 +--- a/arch/openrisc/kernel/entry.S ++++ b/arch/openrisc/kernel/entry.S +@@ -547,6 +547,7 @@ EXCEPTION_ENTRY(_external_irq_handler) + l.bnf 1f // ext irq enabled, all ok. 
+ l.nop + ++#ifdef CONFIG_PRINTK + l.addi r1,r1,-0x8 + l.movhi r3,hi(42f) + l.ori r3,r3,lo(42f) +@@ -560,6 +561,7 @@ EXCEPTION_ENTRY(_external_irq_handler) + .string "\n\rESR interrupt bug: in _external_irq_handler (ESR %x)\n\r" + .align 4 + .previous ++#endif + + l.ori r4,r4,SPR_SR_IEE // fix the bug + // l.sw PT_SR(r1),r4 +diff --git a/arch/parisc/kernel/signal.c b/arch/parisc/kernel/signal.c +index 02895a8f2c551..92223f9ff05c7 100644 +--- a/arch/parisc/kernel/signal.c ++++ b/arch/parisc/kernel/signal.c +@@ -238,6 +238,12 @@ setup_rt_frame(struct ksignal *ksig, sigset_t *set, struct pt_regs *regs, + #endif + + usp = (regs->gr[30] & ~(0x01UL)); ++#ifdef CONFIG_64BIT ++ if (is_compat_task()) { ++ /* The gcc alloca implementation leaves garbage in the upper 32 bits of sp */ ++ usp = (compat_uint_t)usp; ++ } ++#endif + /*FIXME: frame_size parameter is unused, remove it. */ + frame = get_sigframe(&ksig->ka, usp, sizeof(*frame)); + +diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig +index 285d506c5a769..2f5e06309f096 100644 +--- a/arch/powerpc/configs/mpc885_ads_defconfig ++++ b/arch/powerpc/configs/mpc885_ads_defconfig +@@ -39,6 +39,7 @@ CONFIG_MTD_CFI_GEOMETRY=y + # CONFIG_MTD_CFI_I2 is not set + CONFIG_MTD_CFI_I4=y + CONFIG_MTD_CFI_AMDSTD=y ++CONFIG_MTD_PHYSMAP=y + CONFIG_MTD_PHYSMAP_OF=y + # CONFIG_BLK_DEV is not set + CONFIG_NETDEVICES=y +diff --git a/arch/powerpc/include/asm/pmc.h b/arch/powerpc/include/asm/pmc.h +index c6bbe9778d3cd..3c09109e708ef 100644 +--- a/arch/powerpc/include/asm/pmc.h ++++ b/arch/powerpc/include/asm/pmc.h +@@ -34,6 +34,13 @@ static inline void ppc_set_pmu_inuse(int inuse) + #endif + } + ++#ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE ++static inline int ppc_get_pmu_inuse(void) ++{ ++ return get_paca()->pmcregs_in_use; ++} ++#endif ++ + extern void power4_enable_pmcs(void); + + #else /* CONFIG_PPC64 */ +diff --git a/arch/powerpc/kernel/stacktrace.c b/arch/powerpc/kernel/stacktrace.c +index 
b13c6213b0d9b..890f95151fb44 100644 +--- a/arch/powerpc/kernel/stacktrace.c ++++ b/arch/powerpc/kernel/stacktrace.c +@@ -8,6 +8,7 @@ + * Copyright 2018 Nick Piggin, Michael Ellerman, IBM Corp. + */ + ++#include + #include + #include + #include +diff --git a/arch/powerpc/kvm/book3s_64_vio_hv.c b/arch/powerpc/kvm/book3s_64_vio_hv.c +index ab6eeb8e753e5..35fd67b4ceb41 100644 +--- a/arch/powerpc/kvm/book3s_64_vio_hv.c ++++ b/arch/powerpc/kvm/book3s_64_vio_hv.c +@@ -177,10 +177,13 @@ static void kvmppc_rm_tce_put(struct kvmppc_spapr_tce_table *stt, + idx -= stt->offset; + page = stt->pages[idx / TCES_PER_PAGE]; + /* +- * page must not be NULL in real mode, +- * kvmppc_rm_ioba_validate() must have taken care of this. ++ * kvmppc_rm_ioba_validate() allows pages not be allocated if TCE is ++ * being cleared, otherwise it returns H_TOO_HARD and we skip this. + */ +- WARN_ON_ONCE_RM(!page); ++ if (!page) { ++ WARN_ON_ONCE_RM(tce != 0); ++ return; ++ } + tbl = kvmppc_page_address(page); + + tbl[idx % TCES_PER_PAGE] = tce; +diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c +index bba358f134718..6c99ccc3bfcb0 100644 +--- a/arch/powerpc/kvm/book3s_hv.c ++++ b/arch/powerpc/kvm/book3s_hv.c +@@ -58,6 +58,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -3559,6 +3560,18 @@ int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, + cpu_has_feature(CPU_FTR_P9_TM_HV_ASSIST)) + kvmppc_restore_tm_hv(vcpu, vcpu->arch.shregs.msr, true); + ++#ifdef CONFIG_PPC_PSERIES ++ if (kvmhv_on_pseries()) { ++ barrier(); ++ if (vcpu->arch.vpa.pinned_addr) { ++ struct lppaca *lp = vcpu->arch.vpa.pinned_addr; ++ get_lppaca()->pmcregs_in_use = lp->pmcregs_in_use; ++ } else { ++ get_lppaca()->pmcregs_in_use = 1; ++ } ++ barrier(); ++ } ++#endif + kvmhv_load_guest_pmu(vcpu); + + msr_check_and_set(MSR_FP | MSR_VEC | MSR_VSX); +@@ -3693,6 +3706,13 @@ int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit, + save_pmu |= 
nesting_enabled(vcpu->kvm); + + kvmhv_save_guest_pmu(vcpu, save_pmu); ++#ifdef CONFIG_PPC_PSERIES ++ if (kvmhv_on_pseries()) { ++ barrier(); ++ get_lppaca()->pmcregs_in_use = ppc_get_pmu_inuse(); ++ barrier(); ++ } ++#endif + + vc->entry_exit_map = 0x101; + vc->in_guest = 0; +diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +index c6fbbd29bd871..feaf6ca2e76c1 100644 +--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S ++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +@@ -3137,7 +3137,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_P9_TM_HV_ASSIST) + /* The following code handles the fake_suspend = 1 case */ + mflr r0 + std r0, PPC_LR_STKOFF(r1) +- stdu r1, -PPC_MIN_STKFRM(r1) ++ stdu r1, -TM_FRAME_SIZE(r1) + + /* Turn on TM. */ + mfmsr r8 +@@ -3152,10 +3152,42 @@ BEGIN_FTR_SECTION + END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG) + nop + ++ /* ++ * It's possible that treclaim. may modify registers, if we have lost ++ * track of fake-suspend state in the guest due to it using rfscv. ++ * Save and restore registers in case this occurs. 
++ */ ++ mfspr r3, SPRN_DSCR ++ mfspr r4, SPRN_XER ++ mfspr r5, SPRN_AMR ++ /* SPRN_TAR would need to be saved here if the kernel ever used it */ ++ mfcr r12 ++ SAVE_NVGPRS(r1) ++ SAVE_GPR(2, r1) ++ SAVE_GPR(3, r1) ++ SAVE_GPR(4, r1) ++ SAVE_GPR(5, r1) ++ stw r12, 8(r1) ++ std r1, HSTATE_HOST_R1(r13) ++ + /* We have to treclaim here because that's the only way to do S->N */ + li r3, TM_CAUSE_KVM_RESCHED + TRECLAIM(R3) + ++ GET_PACA(r13) ++ ld r1, HSTATE_HOST_R1(r13) ++ REST_GPR(2, r1) ++ REST_GPR(3, r1) ++ REST_GPR(4, r1) ++ REST_GPR(5, r1) ++ lwz r12, 8(r1) ++ REST_NVGPRS(r1) ++ mtspr SPRN_DSCR, r3 ++ mtspr SPRN_XER, r4 ++ mtspr SPRN_AMR, r5 ++ mtcr r12 ++ HMT_MEDIUM ++ + /* + * We were in fake suspend, so we are not going to save the + * register state as the guest checkpointed state (since +@@ -3183,7 +3215,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_P9_TM_XER_SO_BUG) + std r5, VCPU_TFHAR(r9) + std r6, VCPU_TFIAR(r9) + +- addi r1, r1, PPC_MIN_STKFRM ++ addi r1, r1, TM_FRAME_SIZE + ld r0, PPC_LR_STKOFF(r1) + mtlr r0 + blr +diff --git a/arch/powerpc/perf/hv-gpci.c b/arch/powerpc/perf/hv-gpci.c +index 6884d16ec19b9..732cfc53e260d 100644 +--- a/arch/powerpc/perf/hv-gpci.c ++++ b/arch/powerpc/perf/hv-gpci.c +@@ -164,7 +164,7 @@ static unsigned long single_gpci_request(u32 req, u32 starting_index, + */ + count = 0; + for (i = offset; i < offset + length; i++) +- count |= arg->bytes[i] << (i - offset); ++ count |= (u64)(arg->bytes[i]) << ((length - 1 - (i - offset)) * 8); + + *value = count; + out: +diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h +index 1932088686a68..e6a5007f017d8 100644 +--- a/arch/s390/include/asm/setup.h ++++ b/arch/s390/include/asm/setup.h +@@ -39,6 +39,7 @@ + #define MACHINE_FLAG_NX BIT(15) + #define MACHINE_FLAG_GS BIT(16) + #define MACHINE_FLAG_SCC BIT(17) ++#define MACHINE_FLAG_PCI_MIO BIT(18) + + #define LPP_MAGIC BIT(31) + #define LPP_PID_MASK _AC(0xffffffff, UL) +@@ -106,6 +107,7 @@ extern unsigned long __swsusp_reset_dma; 
+ #define MACHINE_HAS_NX (S390_lowcore.machine_flags & MACHINE_FLAG_NX) + #define MACHINE_HAS_GS (S390_lowcore.machine_flags & MACHINE_FLAG_GS) + #define MACHINE_HAS_SCC (S390_lowcore.machine_flags & MACHINE_FLAG_SCC) ++#define MACHINE_HAS_PCI_MIO (S390_lowcore.machine_flags & MACHINE_FLAG_PCI_MIO) + + /* + * Console mode. Override with conmode= +diff --git a/arch/s390/kernel/early.c b/arch/s390/kernel/early.c +index 2531776cf6cf9..eb89cb0aa60b4 100644 +--- a/arch/s390/kernel/early.c ++++ b/arch/s390/kernel/early.c +@@ -252,6 +252,10 @@ static __init void detect_machine_facilities(void) + clock_comparator_max = -1ULL >> 1; + __ctl_set_bit(0, 53); + } ++ if (IS_ENABLED(CONFIG_PCI) && test_facility(153)) { ++ S390_lowcore.machine_flags |= MACHINE_FLAG_PCI_MIO; ++ /* the control bit is set during PCI initialization */ ++ } + } + + static inline void save_vector_registers(void) +diff --git a/arch/s390/kernel/jump_label.c b/arch/s390/kernel/jump_label.c +index ab584e8e35275..9156653b56f69 100644 +--- a/arch/s390/kernel/jump_label.c ++++ b/arch/s390/kernel/jump_label.c +@@ -36,7 +36,7 @@ static void jump_label_bug(struct jump_entry *entry, struct insn *expected, + unsigned char *ipe = (unsigned char *)expected; + unsigned char *ipn = (unsigned char *)new; + +- pr_emerg("Jump label code mismatch at %pS [%p]\n", ipc, ipc); ++ pr_emerg("Jump label code mismatch at %pS [%px]\n", ipc, ipc); + pr_emerg("Found: %6ph\n", ipc); + pr_emerg("Expected: %6ph\n", ipe); + pr_emerg("New: %6ph\n", ipn); +diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c +index c1d96e588152b..5521f593cd20a 100644 +--- a/arch/s390/mm/init.c ++++ b/arch/s390/mm/init.c +@@ -168,9 +168,9 @@ static void pv_init(void) + return; + + /* make sure bounce buffers are shared */ ++ swiotlb_force = SWIOTLB_FORCE; + swiotlb_init(1); + swiotlb_update_mem_attributes(); +- swiotlb_force = SWIOTLB_FORCE; + } + + void __init mem_init(void) +diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c +index 
3e6612d8b921c..2d29966276296 100644 +--- a/arch/s390/net/bpf_jit_comp.c ++++ b/arch/s390/net/bpf_jit_comp.c +@@ -569,10 +569,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + EMIT4(0xb9080000, dst_reg, src_reg); + break; + case BPF_ALU | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */ +- if (!imm) +- break; +- /* alfi %dst,imm */ +- EMIT6_IMM(0xc20b0000, dst_reg, imm); ++ if (imm != 0) { ++ /* alfi %dst,imm */ ++ EMIT6_IMM(0xc20b0000, dst_reg, imm); ++ } + EMIT_ZERO(dst_reg); + break; + case BPF_ALU64 | BPF_ADD | BPF_K: /* dst = dst + imm */ +@@ -594,17 +594,22 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + EMIT4(0xb9090000, dst_reg, src_reg); + break; + case BPF_ALU | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */ +- if (!imm) +- break; +- /* alfi %dst,-imm */ +- EMIT6_IMM(0xc20b0000, dst_reg, -imm); ++ if (imm != 0) { ++ /* alfi %dst,-imm */ ++ EMIT6_IMM(0xc20b0000, dst_reg, -imm); ++ } + EMIT_ZERO(dst_reg); + break; + case BPF_ALU64 | BPF_SUB | BPF_K: /* dst = dst - imm */ + if (!imm) + break; +- /* agfi %dst,-imm */ +- EMIT6_IMM(0xc2080000, dst_reg, -imm); ++ if (imm == -0x80000000) { ++ /* algfi %dst,0x80000000 */ ++ EMIT6_IMM(0xc20a0000, dst_reg, 0x80000000); ++ } else { ++ /* agfi %dst,-imm */ ++ EMIT6_IMM(0xc2080000, dst_reg, -imm); ++ } + break; + /* + * BPF_MUL +@@ -619,10 +624,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + EMIT4(0xb90c0000, dst_reg, src_reg); + break; + case BPF_ALU | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */ +- if (imm == 1) +- break; +- /* msfi %r5,imm */ +- EMIT6_IMM(0xc2010000, dst_reg, imm); ++ if (imm != 1) { ++ /* msfi %r5,imm */ ++ EMIT6_IMM(0xc2010000, dst_reg, imm); ++ } + EMIT_ZERO(dst_reg); + break; + case BPF_ALU64 | BPF_MUL | BPF_K: /* dst = dst * imm */ +@@ -675,6 +680,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + if (BPF_OP(insn->code) == BPF_MOD) + /* lhgi %dst,0 */ + 
EMIT4_IMM(0xa7090000, dst_reg, 0); ++ else ++ EMIT_ZERO(dst_reg); + break; + } + /* lhi %w0,0 */ +@@ -769,10 +776,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + EMIT4(0xb9820000, dst_reg, src_reg); + break; + case BPF_ALU | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */ +- if (!imm) +- break; +- /* xilf %dst,imm */ +- EMIT6_IMM(0xc0070000, dst_reg, imm); ++ if (imm != 0) { ++ /* xilf %dst,imm */ ++ EMIT6_IMM(0xc0070000, dst_reg, imm); ++ } + EMIT_ZERO(dst_reg); + break; + case BPF_ALU64 | BPF_XOR | BPF_K: /* dst = dst ^ imm */ +@@ -793,10 +800,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, src_reg, 0); + break; + case BPF_ALU | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */ +- if (imm == 0) +- break; +- /* sll %dst,imm(%r0) */ +- EMIT4_DISP(0x89000000, dst_reg, REG_0, imm); ++ if (imm != 0) { ++ /* sll %dst,imm(%r0) */ ++ EMIT4_DISP(0x89000000, dst_reg, REG_0, imm); ++ } + EMIT_ZERO(dst_reg); + break; + case BPF_ALU64 | BPF_LSH | BPF_K: /* dst = dst << imm */ +@@ -818,10 +825,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, src_reg, 0); + break; + case BPF_ALU | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */ +- if (imm == 0) +- break; +- /* srl %dst,imm(%r0) */ +- EMIT4_DISP(0x88000000, dst_reg, REG_0, imm); ++ if (imm != 0) { ++ /* srl %dst,imm(%r0) */ ++ EMIT4_DISP(0x88000000, dst_reg, REG_0, imm); ++ } + EMIT_ZERO(dst_reg); + break; + case BPF_ALU64 | BPF_RSH | BPF_K: /* dst = dst >> imm */ +@@ -843,10 +850,10 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, + EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0); + break; + case BPF_ALU | BPF_ARSH | BPF_K: /* ((s32) dst >> imm */ +- if (imm == 0) +- break; +- /* sra %dst,imm(%r0) */ +- EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm); ++ if (imm != 0) { ++ 
/* sra %dst,imm(%r0) */ ++ EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm); ++ } + EMIT_ZERO(dst_reg); + break; + case BPF_ALU64 | BPF_ARSH | BPF_K: /* ((s64) dst) >>= imm */ +diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c +index 6105b1b6e49b7..b8ddacf1efe11 100644 +--- a/arch/s390/pci/pci.c ++++ b/arch/s390/pci/pci.c +@@ -854,7 +854,6 @@ static void zpci_mem_exit(void) + } + + static unsigned int s390_pci_probe __initdata = 1; +-static unsigned int s390_pci_no_mio __initdata; + unsigned int s390_pci_force_floating __initdata; + static unsigned int s390_pci_initialized; + +@@ -865,7 +864,7 @@ char * __init pcibios_setup(char *str) + return NULL; + } + if (!strcmp(str, "nomio")) { +- s390_pci_no_mio = 1; ++ S390_lowcore.machine_flags &= ~MACHINE_FLAG_PCI_MIO; + return NULL; + } + if (!strcmp(str, "force_floating")) { +@@ -890,7 +889,7 @@ static int __init pci_base_init(void) + if (!test_facility(69) || !test_facility(71)) + return 0; + +- if (test_facility(153) && !s390_pci_no_mio) { ++ if (MACHINE_HAS_PCI_MIO) { + static_branch_enable(&have_mio); + ctl_set_bit(2, 5); + } +diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c +index b8541d77452c1..a37ccafe065ba 100644 +--- a/arch/x86/mm/init_64.c ++++ b/arch/x86/mm/init_64.c +@@ -1355,18 +1355,18 @@ int kern_addr_valid(unsigned long addr) + return 0; + + p4d = p4d_offset(pgd, addr); +- if (p4d_none(*p4d)) ++ if (!p4d_present(*p4d)) + return 0; + + pud = pud_offset(p4d, addr); +- if (pud_none(*pud)) ++ if (!pud_present(*pud)) + return 0; + + if (pud_large(*pud)) + return pfn_valid(pud_pfn(*pud)); + + pmd = pmd_offset(pud, addr); +- if (pmd_none(*pmd)) ++ if (!pmd_present(*pmd)) + return 0; + + if (pmd_large(*pmd)) +diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c +index 3e66feff524a8..b99074ca5e686 100644 +--- a/arch/x86/xen/enlighten_pv.c ++++ b/arch/x86/xen/enlighten_pv.c +@@ -1183,6 +1183,11 @@ static void __init xen_dom0_set_legacy_features(void) + x86_platform.legacy.rtc = 1; + } + 
++static void __init xen_domu_set_legacy_features(void) ++{ ++ x86_platform.legacy.rtc = 0; ++} ++ + /* First C function to be called on Xen boot */ + asmlinkage __visible void __init xen_start_kernel(void) + { +@@ -1353,6 +1358,8 @@ asmlinkage __visible void __init xen_start_kernel(void) + add_preferred_console("xenboot", 0, NULL); + if (pci_xen) + x86_init.pci.arch_init = pci_xen_init; ++ x86_platform.set_legacy_features = ++ xen_domu_set_legacy_features; + } else { + const struct dom0_vga_console_info *info = + (void *)((char *)xen_start_info + +diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c +index 12fcb3858303a..8b1e40ec58f65 100644 +--- a/arch/x86/xen/p2m.c ++++ b/arch/x86/xen/p2m.c +@@ -622,8 +622,8 @@ int xen_alloc_p2m_entry(unsigned long pfn) + } + + /* Expanded the p2m? */ +- if (pfn > xen_p2m_last_pfn) { +- xen_p2m_last_pfn = pfn; ++ if (pfn >= xen_p2m_last_pfn) { ++ xen_p2m_last_pfn = ALIGN(pfn + 1, P2M_PER_PAGE); + HYPERVISOR_shared_info->arch.max_pfn = xen_p2m_last_pfn; + } + +diff --git a/arch/xtensa/platforms/iss/console.c b/arch/xtensa/platforms/iss/console.c +index af81a62faba64..e7faea3d73d3b 100644 +--- a/arch/xtensa/platforms/iss/console.c ++++ b/arch/xtensa/platforms/iss/console.c +@@ -168,9 +168,13 @@ static const struct tty_operations serial_ops = { + + int __init rs_init(void) + { +- tty_port_init(&serial_port); ++ int ret; + + serial_driver = alloc_tty_driver(SERIAL_MAX_NUM_LINES); ++ if (!serial_driver) ++ return -ENOMEM; ++ ++ tty_port_init(&serial_port); + + pr_info("%s %s\n", serial_name, serial_version); + +@@ -190,8 +194,15 @@ int __init rs_init(void) + tty_set_operations(serial_driver, &serial_ops); + tty_port_link_device(&serial_port, serial_driver, 0); + +- if (tty_register_driver(serial_driver)) +- panic("Couldn't register serial driver\n"); ++ ret = tty_register_driver(serial_driver); ++ if (ret) { ++ pr_err("Couldn't register serial driver\n"); ++ tty_driver_kref_put(serial_driver); ++ tty_port_destroy(&serial_port); ++ ++ 
return ret; ++ } ++ + return 0; + } + +diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c +index 136232a01f715..8dee243e639f0 100644 +--- a/block/bfq-iosched.c ++++ b/block/bfq-iosched.c +@@ -2523,6 +2523,15 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) + * are likely to increase the throughput. + */ + bfqq->new_bfqq = new_bfqq; ++ /* ++ * The above assignment schedules the following redirections: ++ * each time some I/O for bfqq arrives, the process that ++ * generated that I/O is disassociated from bfqq and ++ * associated with new_bfqq. Here we increases new_bfqq->ref ++ * in advance, adding the number of processes that are ++ * expected to be associated with new_bfqq as they happen to ++ * issue I/O. ++ */ + new_bfqq->ref += process_refs; + return new_bfqq; + } +@@ -2582,6 +2591,10 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq, + { + struct bfq_queue *in_service_bfqq, *new_bfqq; + ++ /* if a merge has already been setup, then proceed with that first */ ++ if (bfqq->new_bfqq) ++ return bfqq->new_bfqq; ++ + /* + * Do not perform queue merging if the device is non + * rotational and performs internal queueing. 
In fact, such a +@@ -2636,9 +2649,6 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq, + if (bfq_too_late_for_merging(bfqq)) + return NULL; + +- if (bfqq->new_bfqq) +- return bfqq->new_bfqq; +- + if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq)) + return NULL; + +@@ -5004,7 +5014,7 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic) + if (bfqq->new_ioprio >= IOPRIO_BE_NR) { + pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n", + bfqq->new_ioprio); +- bfqq->new_ioprio = IOPRIO_BE_NR; ++ bfqq->new_ioprio = IOPRIO_BE_NR - 1; + } + + bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio); +diff --git a/block/blk-zoned.c b/block/blk-zoned.c +index b17c094cb977c..a85d0a06a6ff2 100644 +--- a/block/blk-zoned.c ++++ b/block/blk-zoned.c +@@ -316,9 +316,6 @@ int blkdev_report_zones_ioctl(struct block_device *bdev, fmode_t mode, + if (!blk_queue_is_zoned(q)) + return -ENOTTY; + +- if (!capable(CAP_SYS_ADMIN)) +- return -EACCES; +- + if (copy_from_user(&rep, argp, sizeof(struct blk_zone_report))) + return -EFAULT; + +@@ -374,9 +371,6 @@ int blkdev_reset_zones_ioctl(struct block_device *bdev, fmode_t mode, + if (!blk_queue_is_zoned(q)) + return -ENOTTY; + +- if (!capable(CAP_SYS_ADMIN)) +- return -EACCES; +- + if (!(mode & FMODE_WRITE)) + return -EBADF; + +diff --git a/block/bsg.c b/block/bsg.c +index 0d012efef5274..c8b9714e69232 100644 +--- a/block/bsg.c ++++ b/block/bsg.c +@@ -371,10 +371,13 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + case SG_GET_RESERVED_SIZE: + case SG_SET_RESERVED_SIZE: + case SG_EMULATED_HOST: +- case SCSI_IOCTL_SEND_COMMAND: + return scsi_cmd_ioctl(bd->queue, NULL, file->f_mode, cmd, uarg); + case SG_IO: + return bsg_sg_io(bd->queue, file->f_mode, uarg); ++ case SCSI_IOCTL_SEND_COMMAND: ++ pr_warn_ratelimited("%s: calling unsupported SCSI_IOCTL_SEND_COMMAND\n", ++ current->comm); ++ return -EINVAL; + default: + return -ENOTTY; + } +diff --git 
a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c +index 7788af0ca1090..5c354c7aff946 100644 +--- a/drivers/ata/libata-core.c ++++ b/drivers/ata/libata-core.c +@@ -4556,6 +4556,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = { + ATA_HORKAGE_ZERO_AFTER_TRIM, }, + { "Samsung SSD 850*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | + ATA_HORKAGE_ZERO_AFTER_TRIM, }, ++ { "Samsung SSD 860*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | ++ ATA_HORKAGE_ZERO_AFTER_TRIM, }, ++ { "Samsung SSD 870*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | ++ ATA_HORKAGE_ZERO_AFTER_TRIM, }, + { "FCCT*M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | + ATA_HORKAGE_ZERO_AFTER_TRIM, }, + +diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c +index 9dcef6ac643b9..982fe91125322 100644 +--- a/drivers/ata/sata_dwc_460ex.c ++++ b/drivers/ata/sata_dwc_460ex.c +@@ -1249,24 +1249,20 @@ static int sata_dwc_probe(struct platform_device *ofdev) + irq = irq_of_parse_and_map(np, 0); + if (irq == NO_IRQ) { + dev_err(&ofdev->dev, "no SATA DMA irq\n"); +- err = -ENODEV; +- goto error_out; ++ return -ENODEV; + } + + #ifdef CONFIG_SATA_DWC_OLD_DMA + if (!of_find_property(np, "dmas", NULL)) { + err = sata_dwc_dma_init_old(ofdev, hsdev); + if (err) +- goto error_out; ++ return err; + } + #endif + + hsdev->phy = devm_phy_optional_get(hsdev->dev, "sata-phy"); +- if (IS_ERR(hsdev->phy)) { +- err = PTR_ERR(hsdev->phy); +- hsdev->phy = NULL; +- goto error_out; +- } ++ if (IS_ERR(hsdev->phy)) ++ return PTR_ERR(hsdev->phy); + + err = phy_init(hsdev->phy); + if (err) +diff --git a/drivers/base/power/trace.c b/drivers/base/power/trace.c +index 977d27bd1a220..9a9dec657c166 100644 +--- a/drivers/base/power/trace.c ++++ b/drivers/base/power/trace.c +@@ -13,6 +13,7 @@ + #include + #include + #include ++#include + + #include + +@@ -165,6 +166,9 @@ void generate_pm_trace(const void *tracedata, unsigned int user) + const char *file = *(const char **)(tracedata + 2); + unsigned int user_hash_value, file_hash_value; + ++ if 
(!x86_platform.legacy.rtc) ++ return; ++ + user_hash_value = user % USERHASH; + file_hash_value = hash_string(lineno, file, FILEHASH); + set_magic_time(user_hash_value, file_hash_value, dev_hash_value); +@@ -267,6 +271,9 @@ static struct notifier_block pm_trace_nb = { + + static int early_resume_init(void) + { ++ if (!x86_platform.legacy.rtc) ++ return 0; ++ + hash_value_early_read = read_magic_time(); + register_pm_notifier(&pm_trace_nb); + return 0; +@@ -277,6 +284,9 @@ static int late_resume_init(void) + unsigned int val = hash_value_early_read; + unsigned int user, file, dev; + ++ if (!x86_platform.legacy.rtc) ++ return 0; ++ + user = val % USERHASH; + val = val / USERHASH; + file = val % FILEHASH; +diff --git a/drivers/clk/at91/clk-generated.c b/drivers/clk/at91/clk-generated.c +index 44a46dcc0518b..d7fe1303f79dc 100644 +--- a/drivers/clk/at91/clk-generated.c ++++ b/drivers/clk/at91/clk-generated.c +@@ -18,8 +18,6 @@ + + #define GENERATED_MAX_DIV 255 + +-#define GCK_INDEX_DT_AUDIO_PLL 5 +- + struct clk_generated { + struct clk_hw hw; + struct regmap *regmap; +@@ -29,7 +27,7 @@ struct clk_generated { + u32 gckdiv; + const struct clk_pcr_layout *layout; + u8 parent_id; +- bool audio_pll_allowed; ++ int chg_pid; + }; + + #define to_clk_generated(hw) \ +@@ -109,7 +107,7 @@ static void clk_generated_best_diff(struct clk_rate_request *req, + tmp_rate = parent_rate / div; + tmp_diff = abs(req->rate - tmp_rate); + +- if (*best_diff < 0 || *best_diff > tmp_diff) { ++ if (*best_diff < 0 || *best_diff >= tmp_diff) { + *best_rate = tmp_rate; + *best_diff = tmp_diff; + req->best_parent_rate = parent_rate; +@@ -129,7 +127,16 @@ static int clk_generated_determine_rate(struct clk_hw *hw, + int i; + u32 div; + +- for (i = 0; i < clk_hw_get_num_parents(hw) - 1; i++) { ++ /* do not look for a rate that is outside of our range */ ++ if (gck->range.max && req->rate > gck->range.max) ++ req->rate = gck->range.max; ++ if (gck->range.min && req->rate < gck->range.min) ++ req->rate = 
gck->range.min; ++ ++ for (i = 0; i < clk_hw_get_num_parents(hw); i++) { ++ if (gck->chg_pid == i) ++ continue; ++ + parent = clk_hw_get_parent_by_index(hw, i); + if (!parent) + continue; +@@ -161,10 +168,10 @@ static int clk_generated_determine_rate(struct clk_hw *hw, + * that the only clks able to modify gck rate are those of audio IPs. + */ + +- if (!gck->audio_pll_allowed) ++ if (gck->chg_pid < 0) + goto end; + +- parent = clk_hw_get_parent_by_index(hw, GCK_INDEX_DT_AUDIO_PLL); ++ parent = clk_hw_get_parent_by_index(hw, gck->chg_pid); + if (!parent) + goto end; + +@@ -271,8 +278,8 @@ struct clk_hw * __init + at91_clk_register_generated(struct regmap *regmap, spinlock_t *lock, + const struct clk_pcr_layout *layout, + const char *name, const char **parent_names, +- u8 num_parents, u8 id, bool pll_audio, +- const struct clk_range *range) ++ u8 num_parents, u8 id, ++ const struct clk_range *range, int chg_pid) + { + struct clk_generated *gck; + struct clk_init_data init; +@@ -287,15 +294,16 @@ at91_clk_register_generated(struct regmap *regmap, spinlock_t *lock, + init.ops = &generated_ops; + init.parent_names = parent_names; + init.num_parents = num_parents; +- init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE | +- CLK_SET_RATE_PARENT; ++ init.flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE; ++ if (chg_pid >= 0) ++ init.flags |= CLK_SET_RATE_PARENT; + + gck->id = id; + gck->hw.init = &init; + gck->regmap = regmap; + gck->lock = lock; + gck->range = *range; +- gck->audio_pll_allowed = pll_audio; ++ gck->chg_pid = chg_pid; + gck->layout = layout; + + clk_generated_startup(gck); +diff --git a/drivers/clk/at91/dt-compat.c b/drivers/clk/at91/dt-compat.c +index aa1754eac59ff..8a652c44c25ab 100644 +--- a/drivers/clk/at91/dt-compat.c ++++ b/drivers/clk/at91/dt-compat.c +@@ -22,6 +22,8 @@ + + #define SYSTEM_MAX_ID 31 + ++#define GCK_INDEX_DT_AUDIO_PLL 5 ++ + #ifdef CONFIG_HAVE_AT91_AUDIO_PLL + static void __init of_sama5d2_clk_audio_pll_frac_setup(struct device_node *np) 
+ { +@@ -135,7 +137,7 @@ static void __init of_sama5d2_clk_generated_setup(struct device_node *np) + return; + + for_each_child_of_node(np, gcknp) { +- bool pll_audio = false; ++ int chg_pid = INT_MIN; + + if (of_property_read_u32(gcknp, "reg", &id)) + continue; +@@ -152,12 +154,12 @@ static void __init of_sama5d2_clk_generated_setup(struct device_node *np) + if (of_device_is_compatible(np, "atmel,sama5d2-clk-generated") && + (id == GCK_ID_I2S0 || id == GCK_ID_I2S1 || + id == GCK_ID_CLASSD)) +- pll_audio = true; ++ chg_pid = GCK_INDEX_DT_AUDIO_PLL; + + hw = at91_clk_register_generated(regmap, &pmc_pcr_lock, + &dt_pcr_layout, name, + parent_names, num_parents, +- id, pll_audio, &range); ++ id, &range, chg_pid); + if (IS_ERR(hw)) + continue; + +diff --git a/drivers/clk/at91/pmc.h b/drivers/clk/at91/pmc.h +index 9b8db9cdcda53..8a88ad2360742 100644 +--- a/drivers/clk/at91/pmc.h ++++ b/drivers/clk/at91/pmc.h +@@ -118,8 +118,8 @@ struct clk_hw * __init + at91_clk_register_generated(struct regmap *regmap, spinlock_t *lock, + const struct clk_pcr_layout *layout, + const char *name, const char **parent_names, +- u8 num_parents, u8 id, bool pll_audio, +- const struct clk_range *range); ++ u8 num_parents, u8 id, ++ const struct clk_range *range, int chg_pid); + + struct clk_hw * __init + at91_clk_register_h32mx(struct regmap *regmap, const char *name, +diff --git a/drivers/clk/at91/sam9x60.c b/drivers/clk/at91/sam9x60.c +index e3f4c8f20223a..39923899478f9 100644 +--- a/drivers/clk/at91/sam9x60.c ++++ b/drivers/clk/at91/sam9x60.c +@@ -124,7 +124,6 @@ static const struct { + char *n; + u8 id; + struct clk_range r; +- bool pll; + } sam9x60_gck[] = { + { .n = "flex0_gclk", .id = 5, }, + { .n = "flex1_gclk", .id = 6, }, +@@ -144,11 +143,9 @@ static const struct { + { .n = "sdmmc1_gclk", .id = 26, .r = { .min = 0, .max = 105000000 }, }, + { .n = "flex11_gclk", .id = 32, }, + { .n = "flex12_gclk", .id = 33, }, +- { .n = "i2s_gclk", .id = 34, .r = { .min = 0, .max = 105000000 }, +- 
.pll = true, }, ++ { .n = "i2s_gclk", .id = 34, .r = { .min = 0, .max = 105000000 }, }, + { .n = "pit64b_gclk", .id = 37, }, +- { .n = "classd_gclk", .id = 42, .r = { .min = 0, .max = 100000000 }, +- .pll = true, }, ++ { .n = "classd_gclk", .id = 42, .r = { .min = 0, .max = 100000000 }, }, + { .n = "tcb1_gclk", .id = 45, }, + { .n = "dbgu_gclk", .id = 47, }, + }; +@@ -285,8 +282,7 @@ static void __init sam9x60_pmc_setup(struct device_node *np) + sam9x60_gck[i].n, + parent_names, 6, + sam9x60_gck[i].id, +- sam9x60_gck[i].pll, +- &sam9x60_gck[i].r); ++ &sam9x60_gck[i].r, INT_MIN); + if (IS_ERR(hw)) + goto err_free; + +diff --git a/drivers/clk/at91/sama5d2.c b/drivers/clk/at91/sama5d2.c +index ff7e3f727082e..d3c4bceb032d1 100644 +--- a/drivers/clk/at91/sama5d2.c ++++ b/drivers/clk/at91/sama5d2.c +@@ -115,21 +115,20 @@ static const struct { + char *n; + u8 id; + struct clk_range r; +- bool pll; ++ int chg_pid; + } sama5d2_gck[] = { +- { .n = "sdmmc0_gclk", .id = 31, }, +- { .n = "sdmmc1_gclk", .id = 32, }, +- { .n = "tcb0_gclk", .id = 35, .r = { .min = 0, .max = 83000000 }, }, +- { .n = "tcb1_gclk", .id = 36, .r = { .min = 0, .max = 83000000 }, }, +- { .n = "pwm_gclk", .id = 38, .r = { .min = 0, .max = 83000000 }, }, +- { .n = "isc_gclk", .id = 46, }, +- { .n = "pdmic_gclk", .id = 48, }, +- { .n = "i2s0_gclk", .id = 54, .pll = true }, +- { .n = "i2s1_gclk", .id = 55, .pll = true }, +- { .n = "can0_gclk", .id = 56, .r = { .min = 0, .max = 80000000 }, }, +- { .n = "can1_gclk", .id = 57, .r = { .min = 0, .max = 80000000 }, }, +- { .n = "classd_gclk", .id = 59, .r = { .min = 0, .max = 100000000 }, +- .pll = true }, ++ { .n = "sdmmc0_gclk", .id = 31, .chg_pid = INT_MIN, }, ++ { .n = "sdmmc1_gclk", .id = 32, .chg_pid = INT_MIN, }, ++ { .n = "tcb0_gclk", .id = 35, .chg_pid = INT_MIN, .r = { .min = 0, .max = 83000000 }, }, ++ { .n = "tcb1_gclk", .id = 36, .chg_pid = INT_MIN, .r = { .min = 0, .max = 83000000 }, }, ++ { .n = "pwm_gclk", .id = 38, .chg_pid = INT_MIN, .r = { .min 
= 0, .max = 83000000 }, }, ++ { .n = "isc_gclk", .id = 46, .chg_pid = INT_MIN, }, ++ { .n = "pdmic_gclk", .id = 48, .chg_pid = INT_MIN, }, ++ { .n = "i2s0_gclk", .id = 54, .chg_pid = 5, }, ++ { .n = "i2s1_gclk", .id = 55, .chg_pid = 5, }, ++ { .n = "can0_gclk", .id = 56, .chg_pid = INT_MIN, .r = { .min = 0, .max = 80000000 }, }, ++ { .n = "can1_gclk", .id = 57, .chg_pid = INT_MIN, .r = { .min = 0, .max = 80000000 }, }, ++ { .n = "classd_gclk", .id = 59, .chg_pid = 5, .r = { .min = 0, .max = 100000000 }, }, + }; + + static const struct clk_programmable_layout sama5d2_programmable_layout = { +@@ -317,8 +316,8 @@ static void __init sama5d2_pmc_setup(struct device_node *np) + sama5d2_gck[i].n, + parent_names, 6, + sama5d2_gck[i].id, +- sama5d2_gck[i].pll, +- &sama5d2_gck[i].r); ++ &sama5d2_gck[i].r, ++ sama5d2_gck[i].chg_pid); + if (IS_ERR(hw)) + goto err_free; + +diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c +index bc6ccf2c7aae0..c636c9ba01008 100644 +--- a/drivers/cpufreq/powernv-cpufreq.c ++++ b/drivers/cpufreq/powernv-cpufreq.c +@@ -36,6 +36,7 @@ + #define MAX_PSTATE_SHIFT 32 + #define LPSTATE_SHIFT 48 + #define GPSTATE_SHIFT 56 ++#define MAX_NR_CHIPS 32 + + #define MAX_RAMP_DOWN_TIME 5120 + /* +@@ -1050,12 +1051,20 @@ static int init_chip_info(void) + unsigned int *chip; + unsigned int cpu, i; + unsigned int prev_chip_id = UINT_MAX; ++ cpumask_t *chip_cpu_mask; + int ret = 0; + + chip = kcalloc(num_possible_cpus(), sizeof(*chip), GFP_KERNEL); + if (!chip) + return -ENOMEM; + ++ /* Allocate a chip cpu mask large enough to fit mask for all chips */ ++ chip_cpu_mask = kcalloc(MAX_NR_CHIPS, sizeof(cpumask_t), GFP_KERNEL); ++ if (!chip_cpu_mask) { ++ ret = -ENOMEM; ++ goto free_and_return; ++ } ++ + for_each_possible_cpu(cpu) { + unsigned int id = cpu_to_chip_id(cpu); + +@@ -1063,22 +1072,25 @@ static int init_chip_info(void) + prev_chip_id = id; + chip[nr_chips++] = id; + } ++ cpumask_set_cpu(cpu, &chip_cpu_mask[nr_chips-1]); + } + 
+ chips = kcalloc(nr_chips, sizeof(struct chip), GFP_KERNEL); + if (!chips) { + ret = -ENOMEM; +- goto free_and_return; ++ goto out_free_chip_cpu_mask; + } + + for (i = 0; i < nr_chips; i++) { + chips[i].id = chip[i]; +- cpumask_copy(&chips[i].mask, cpumask_of_node(chip[i])); ++ cpumask_copy(&chips[i].mask, &chip_cpu_mask[i]); + INIT_WORK(&chips[i].throttle, powernv_cpufreq_work_fn); + for_each_cpu(cpu, &chips[i].mask) + per_cpu(chip_info, cpu) = &chips[i]; + } + ++out_free_chip_cpu_mask: ++ kfree(chip_cpu_mask); + free_and_return: + kfree(chip); + return ret; +diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c +index 66fa524b6261e..5471110792071 100644 +--- a/drivers/crypto/mxs-dcp.c ++++ b/drivers/crypto/mxs-dcp.c +@@ -298,21 +298,20 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq) + + struct scatterlist *dst = req->dst; + struct scatterlist *src = req->src; +- const int nents = sg_nents(req->src); ++ int dst_nents = sg_nents(dst); + + const int out_off = DCP_BUF_SZ; + uint8_t *in_buf = sdcp->coh->aes_in_buf; + uint8_t *out_buf = sdcp->coh->aes_out_buf; + +- uint8_t *out_tmp, *src_buf, *dst_buf = NULL; + uint32_t dst_off = 0; ++ uint8_t *src_buf = NULL; + uint32_t last_out_len = 0; + + uint8_t *key = sdcp->coh->aes_key; + + int ret = 0; +- int split = 0; +- unsigned int i, len, clen, rem = 0, tlen = 0; ++ unsigned int i, len, clen, tlen = 0; + int init = 0; + bool limit_hit = false; + +@@ -330,7 +329,7 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq) + memset(key + AES_KEYSIZE_128, 0, AES_KEYSIZE_128); + } + +- for_each_sg(req->src, src, nents, i) { ++ for_each_sg(req->src, src, sg_nents(src), i) { + src_buf = sg_virt(src); + len = sg_dma_len(src); + tlen += len; +@@ -355,34 +354,17 @@ static int mxs_dcp_aes_block_crypt(struct crypto_async_request *arq) + * submit the buffer. 
+ */ + if (actx->fill == out_off || sg_is_last(src) || +- limit_hit) { ++ limit_hit) { + ret = mxs_dcp_run_aes(actx, req, init); + if (ret) + return ret; + init = 0; + +- out_tmp = out_buf; ++ sg_pcopy_from_buffer(dst, dst_nents, out_buf, ++ actx->fill, dst_off); ++ dst_off += actx->fill; + last_out_len = actx->fill; +- while (dst && actx->fill) { +- if (!split) { +- dst_buf = sg_virt(dst); +- dst_off = 0; +- } +- rem = min(sg_dma_len(dst) - dst_off, +- actx->fill); +- +- memcpy(dst_buf + dst_off, out_tmp, rem); +- out_tmp += rem; +- dst_off += rem; +- actx->fill -= rem; +- +- if (dst_off == sg_dma_len(dst)) { +- dst = sg_next(dst); +- split = 0; +- } else { +- split = 1; +- } +- } ++ actx->fill = 0; + } + } while (len); + +diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c +index 67736c801f3ca..cc70da05db4b5 100644 +--- a/drivers/dma/imx-sdma.c ++++ b/drivers/dma/imx-sdma.c +@@ -377,7 +377,6 @@ struct sdma_channel { + unsigned long watermark_level; + u32 shp_addr, per_addr; + enum dma_status status; +- bool context_loaded; + struct imx_dma_data data; + struct work_struct terminate_worker; + }; +@@ -988,9 +987,6 @@ static int sdma_load_context(struct sdma_channel *sdmac) + int ret; + unsigned long flags; + +- if (sdmac->context_loaded) +- return 0; +- + if (sdmac->direction == DMA_DEV_TO_MEM) + load_address = sdmac->pc_from_device; + else if (sdmac->direction == DMA_DEV_TO_DEV) +@@ -1033,8 +1029,6 @@ static int sdma_load_context(struct sdma_channel *sdmac) + + spin_unlock_irqrestore(&sdma->channel_0_lock, flags); + +- sdmac->context_loaded = true; +- + return ret; + } + +@@ -1074,7 +1068,6 @@ static void sdma_channel_terminate_work(struct work_struct *work) + sdmac->desc = NULL; + spin_unlock_irqrestore(&sdmac->vc.lock, flags); + vchan_dma_desc_free_list(&sdmac->vc, &head); +- sdmac->context_loaded = false; + } + + static int sdma_disable_channel_async(struct dma_chan *chan) +@@ -1141,7 +1134,6 @@ static void sdma_set_watermarklevel_for_p2p(struct 
sdma_channel *sdmac) + static int sdma_config_channel(struct dma_chan *chan) + { + struct sdma_channel *sdmac = to_sdma_chan(chan); +- int ret; + + sdma_disable_channel(chan); + +@@ -1181,9 +1173,7 @@ static int sdma_config_channel(struct dma_chan *chan) + sdmac->watermark_level = 0; /* FIXME: M3_BASE_ADDRESS */ + } + +- ret = sdma_load_context(sdmac); +- +- return ret; ++ return 0; + } + + static int sdma_set_channel_priority(struct sdma_channel *sdmac, +@@ -1335,7 +1325,6 @@ static void sdma_free_chan_resources(struct dma_chan *chan) + + sdmac->event_id0 = 0; + sdmac->event_id1 = 0; +- sdmac->context_loaded = false; + + sdma_set_channel_priority(sdmac, 0); + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h +index d1e278e999eeb..1b2fa83798304 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h +@@ -762,7 +762,7 @@ enum amd_hw_ip_block_type { + MAX_HWIP + }; + +-#define HWIP_MAX_INSTANCE 8 ++#define HWIP_MAX_INSTANCE 10 + + struct amd_powerplay { + void *pp_handle; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c +index 70dbe343f51df..89cecdba81ace 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_i2c.c +@@ -339,7 +339,7 @@ static void amdgpu_i2c_put_byte(struct amdgpu_i2c_chan *i2c_bus, + void + amdgpu_i2c_router_select_ddc_port(const struct amdgpu_connector *amdgpu_connector) + { +- u8 val; ++ u8 val = 0; + + if (!amdgpu_connector->router.ddc_valid) + return; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +index 28361a9c5addc..532d1842f6a30 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c +@@ -200,7 +200,7 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain) + c++; + } + +- BUG_ON(c >= AMDGPU_BO_MAX_PLACEMENTS); ++ BUG_ON(c > AMDGPU_BO_MAX_PLACEMENTS); + + 
placement->num_placement = c; + placement->placement = places; +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c +index 8a32b5c93778b..bd7ae3e130b6f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras_eeprom.c +@@ -138,7 +138,7 @@ int amdgpu_ras_eeprom_init(struct amdgpu_ras_eeprom_control *control) + return ret; + } + +- __decode_table_header_from_buff(hdr, &buff[2]); ++ __decode_table_header_from_buff(hdr, buff); + + if (hdr->header == EEPROM_TABLE_HDR_VAL) { + control->num_recs = (hdr->tbl_size - EEPROM_TABLE_HEADER_SIZE) / +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c +index 88813dad731fa..c021519af8106 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c +@@ -98,36 +98,78 @@ void mqd_symmetrically_map_cu_mask(struct mqd_manager *mm, + uint32_t *se_mask) + { + struct kfd_cu_info cu_info; +- uint32_t cu_per_se[KFD_MAX_NUM_SE] = {0}; +- int i, se, sh, cu = 0; +- ++ uint32_t cu_per_sh[KFD_MAX_NUM_SE][KFD_MAX_NUM_SH_PER_SE] = {0}; ++ int i, se, sh, cu; + amdgpu_amdkfd_get_cu_info(mm->dev->kgd, &cu_info); + + if (cu_mask_count > cu_info.cu_active_number) + cu_mask_count = cu_info.cu_active_number; + ++ /* Exceeding these bounds corrupts the stack and indicates a coding error. ++ * Returning with no CU's enabled will hang the queue, which should be ++ * attention grabbing. ++ */ ++ if (cu_info.num_shader_engines > KFD_MAX_NUM_SE) { ++ pr_err("Exceeded KFD_MAX_NUM_SE, chip reports %d\n", cu_info.num_shader_engines); ++ return; ++ } ++ if (cu_info.num_shader_arrays_per_engine > KFD_MAX_NUM_SH_PER_SE) { ++ pr_err("Exceeded KFD_MAX_NUM_SH, chip reports %d\n", ++ cu_info.num_shader_arrays_per_engine * cu_info.num_shader_engines); ++ return; ++ } ++ /* Count active CUs per SH. ++ * ++ * Some CUs in an SH may be disabled. 
HW expects disabled CUs to be ++ * represented in the high bits of each SH's enable mask (the upper and lower ++ * 16 bits of se_mask) and will take care of the actual distribution of ++ * disabled CUs within each SH automatically. ++ * Each half of se_mask must be filled only on bits 0-cu_per_sh[se][sh]-1. ++ * ++ * See note on Arcturus cu_bitmap layout in gfx_v9_0_get_cu_info. ++ */ + for (se = 0; se < cu_info.num_shader_engines; se++) + for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++) +- cu_per_se[se] += hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]); +- +- /* Symmetrically map cu_mask to all SEs: +- * cu_mask[0] bit0 -> se_mask[0] bit0; +- * cu_mask[0] bit1 -> se_mask[1] bit0; +- * ... (if # SE is 4) +- * cu_mask[0] bit4 -> se_mask[0] bit1; ++ cu_per_sh[se][sh] = hweight32(cu_info.cu_bitmap[se % 4][sh + (se / 4)]); ++ ++ /* Symmetrically map cu_mask to all SEs & SHs: ++ * se_mask programs up to 2 SH in the upper and lower 16 bits. ++ * ++ * Examples ++ * Assuming 1 SH/SE, 4 SEs: ++ * cu_mask[0] bit0 -> se_mask[0] bit0 ++ * cu_mask[0] bit1 -> se_mask[1] bit0 ++ * ... ++ * cu_mask[0] bit4 -> se_mask[0] bit1 ++ * ... ++ * ++ * Assuming 2 SH/SE, 4 SEs ++ * cu_mask[0] bit0 -> se_mask[0] bit0 (SE0,SH0,CU0) ++ * cu_mask[0] bit1 -> se_mask[1] bit0 (SE1,SH0,CU0) ++ * ... ++ * cu_mask[0] bit4 -> se_mask[0] bit16 (SE0,SH1,CU0) ++ * cu_mask[0] bit5 -> se_mask[1] bit16 (SE1,SH1,CU0) ++ * ... ++ * cu_mask[0] bit8 -> se_mask[0] bit1 (SE0,SH0,CU1) + * ... ++ * ++ * First ensure all CUs are disabled, then enable user specified CUs. 
+ */ +- se = 0; +- for (i = 0; i < cu_mask_count; i++) { +- if (cu_mask[i / 32] & (1 << (i % 32))) +- se_mask[se] |= 1 << cu; +- +- do { +- se++; +- if (se == cu_info.num_shader_engines) { +- se = 0; +- cu++; ++ for (i = 0; i < cu_info.num_shader_engines; i++) ++ se_mask[i] = 0; ++ ++ i = 0; ++ for (cu = 0; cu < 16; cu++) { ++ for (sh = 0; sh < cu_info.num_shader_arrays_per_engine; sh++) { ++ for (se = 0; se < cu_info.num_shader_engines; se++) { ++ if (cu_per_sh[se][sh] > cu) { ++ if (cu_mask[i / 32] & (1 << (i % 32))) ++ se_mask[se] |= 1 << (cu + sh * 16); ++ i++; ++ if (i == cu_mask_count) ++ return; ++ } + } +- } while (cu >= cu_per_se[se] && cu < 32); ++ } + } + } +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h +index fbdb16418847c..4edc012e31387 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.h +@@ -27,6 +27,7 @@ + #include "kfd_priv.h" + + #define KFD_MAX_NUM_SE 8 ++#define KFD_MAX_NUM_SH_PER_SE 2 + + /** + * struct mqd_manager +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c +index f3dfb2887ae0b..2cdcefab2d7d4 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c +@@ -95,29 +95,29 @@ static ssize_t dp_link_settings_read(struct file *f, char __user *buf, + + rd_buf_ptr = rd_buf; + +- str_len = strlen("Current: %d %d %d "); +- snprintf(rd_buf_ptr, str_len, "Current: %d %d %d ", ++ str_len = strlen("Current: %d 0x%x %d "); ++ snprintf(rd_buf_ptr, str_len, "Current: %d 0x%x %d ", + link->cur_link_settings.lane_count, + link->cur_link_settings.link_rate, + link->cur_link_settings.link_spread); + rd_buf_ptr += str_len; + +- str_len = strlen("Verified: %d %d %d "); +- snprintf(rd_buf_ptr, str_len, "Verified: %d %d %d ", ++ str_len = strlen("Verified: %d 0x%x %d "); ++ 
snprintf(rd_buf_ptr, str_len, "Verified: %d 0x%x %d ", + link->verified_link_cap.lane_count, + link->verified_link_cap.link_rate, + link->verified_link_cap.link_spread); + rd_buf_ptr += str_len; + +- str_len = strlen("Reported: %d %d %d "); +- snprintf(rd_buf_ptr, str_len, "Reported: %d %d %d ", ++ str_len = strlen("Reported: %d 0x%x %d "); ++ snprintf(rd_buf_ptr, str_len, "Reported: %d 0x%x %d ", + link->reported_link_cap.lane_count, + link->reported_link_cap.link_rate, + link->reported_link_cap.link_spread); + rd_buf_ptr += str_len; + +- str_len = strlen("Preferred: %d %d %d "); +- snprintf(rd_buf_ptr, str_len, "Preferred: %d %d %d\n", ++ str_len = strlen("Preferred: %d 0x%x %d "); ++ snprintf(rd_buf_ptr, str_len, "Preferred: %d 0x%x %d\n", + link->preferred_link_setting.lane_count, + link->preferred_link_setting.link_rate, + link->preferred_link_setting.link_spread); +diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +index 60123db7ba02f..bc5ebea1abede 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c +@@ -3264,13 +3264,12 @@ static enum dc_status dcn10_set_clock(struct dc *dc, + struct dc_clock_config clock_cfg = {0}; + struct dc_clocks *current_clocks = &context->bw_ctx.bw.dcn.clk; + +- if (dc->clk_mgr && dc->clk_mgr->funcs->get_clock) +- dc->clk_mgr->funcs->get_clock(dc->clk_mgr, +- context, clock_type, &clock_cfg); +- +- if (!dc->clk_mgr->funcs->get_clock) ++ if (!dc->clk_mgr || !dc->clk_mgr->funcs->get_clock) + return DC_FAIL_UNSUPPORTED_1; + ++ dc->clk_mgr->funcs->get_clock(dc->clk_mgr, ++ context, clock_type, &clock_cfg); ++ + if (clk_khz > clock_cfg.max_clock_khz) + return DC_FAIL_CLK_EXCEED_MAX; + +@@ -3288,7 +3287,7 @@ static enum dc_status dcn10_set_clock(struct dc *dc, + else + return DC_ERROR_UNEXPECTED; + +- if (dc->clk_mgr && dc->clk_mgr->funcs->update_clocks) ++ if 
(dc->clk_mgr->funcs->update_clocks) + dc->clk_mgr->funcs->update_clocks(dc->clk_mgr, + context, true); + return DC_OK; +diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c +index 2b1175bb2daee..d2ea4c003d442 100644 +--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c +@@ -2232,7 +2232,7 @@ void dcn20_set_mcif_arb_params( + wb_arb_params->cli_watermark[k] = get_wm_writeback_urgent(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000; + wb_arb_params->pstate_watermark[k] = get_wm_writeback_dram_clock_change(&context->bw_ctx.dml, pipes, pipe_cnt) * 1000; + } +- wb_arb_params->time_per_pixel = 16.0 / context->res_ctx.pipe_ctx[i].stream->phy_pix_clk; /* 4 bit fraction, ms */ ++ wb_arb_params->time_per_pixel = 16.0 * 1000 / (context->res_ctx.pipe_ctx[i].stream->phy_pix_clk / 1000); /* 4 bit fraction, ms */ + wb_arb_params->slice_lines = 32; + wb_arb_params->arbitration_slice = 2; + wb_arb_params->max_scaled_time = dcn20_calc_max_scaled_time(wb_arb_params->time_per_pixel, +diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c +index 00debd02c3220..0ba92428ef560 100644 +--- a/drivers/gpu/drm/drm_debugfs.c ++++ b/drivers/gpu/drm/drm_debugfs.c +@@ -91,6 +91,7 @@ static int drm_clients_info(struct seq_file *m, void *data) + mutex_lock(&dev->filelist_mutex); + list_for_each_entry_reverse(priv, &dev->filelist, lhead) { + struct task_struct *task; ++ bool is_current_master = drm_is_current_master(priv); + + rcu_read_lock(); /* locks pid_task()->comm */ + task = pid_task(priv->pid, PIDTYPE_PID); +@@ -99,7 +100,7 @@ static int drm_clients_info(struct seq_file *m, void *data) + task ? task->comm : "", + pid_vnr(priv->pid), + priv->minor->index, +- drm_is_current_master(priv) ? 'y' : 'n', ++ is_current_master ? 'y' : 'n', + priv->authenticated ? 
'y' : 'n', + from_kuid_munged(seq_user_ns(m), uid), + priv->magic); +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c +index 0c9c40720ca9a..35225ff8792dd 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c +@@ -397,8 +397,7 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state, + if (switch_mmu_context) { + struct etnaviv_iommu_context *old_context = gpu->mmu_context; + +- etnaviv_iommu_context_get(mmu_context); +- gpu->mmu_context = mmu_context; ++ gpu->mmu_context = etnaviv_iommu_context_get(mmu_context); + etnaviv_iommu_context_put(old_context); + } + +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c +index cb1faaac380a3..519948637186e 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c +@@ -304,8 +304,7 @@ struct etnaviv_vram_mapping *etnaviv_gem_mapping_get( + list_del(&mapping->obj_node); + } + +- etnaviv_iommu_context_get(mmu_context); +- mapping->context = mmu_context; ++ mapping->context = etnaviv_iommu_context_get(mmu_context); + mapping->use = 1; + + ret = etnaviv_iommu_map_gem(mmu_context, etnaviv_obj, +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +index 1ba83a90cdef6..7085b08b1db42 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c +@@ -534,8 +534,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, + goto err_submit_objects; + + submit->ctx = file->driver_priv; +- etnaviv_iommu_context_get(submit->ctx->mmu); +- submit->mmu_context = submit->ctx->mmu; ++ submit->mmu_context = etnaviv_iommu_context_get(submit->ctx->mmu); + submit->exec_state = args->exec_state; + submit->flags = args->flags; + +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +index 85de8551ce866..db35736d47af2 
100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c +@@ -545,6 +545,12 @@ static int etnaviv_hw_reset(struct etnaviv_gpu *gpu) + /* We rely on the GPU running, so program the clock */ + etnaviv_gpu_update_clock(gpu); + ++ gpu->fe_running = false; ++ gpu->exec_state = -1; ++ if (gpu->mmu_context) ++ etnaviv_iommu_context_put(gpu->mmu_context); ++ gpu->mmu_context = NULL; ++ + return 0; + } + +@@ -607,19 +613,23 @@ void etnaviv_gpu_start_fe(struct etnaviv_gpu *gpu, u32 address, u16 prefetch) + VIVS_MMUv2_SEC_COMMAND_CONTROL_ENABLE | + VIVS_MMUv2_SEC_COMMAND_CONTROL_PREFETCH(prefetch)); + } ++ ++ gpu->fe_running = true; + } + +-static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu) ++static void etnaviv_gpu_start_fe_idleloop(struct etnaviv_gpu *gpu, ++ struct etnaviv_iommu_context *context) + { +- u32 address = etnaviv_cmdbuf_get_va(&gpu->buffer, +- &gpu->mmu_context->cmdbuf_mapping); + u16 prefetch; ++ u32 address; + + /* setup the MMU */ +- etnaviv_iommu_restore(gpu, gpu->mmu_context); ++ etnaviv_iommu_restore(gpu, context); + + /* Start command processor */ + prefetch = etnaviv_buffer_init(gpu); ++ address = etnaviv_cmdbuf_get_va(&gpu->buffer, ++ &gpu->mmu_context->cmdbuf_mapping); + + etnaviv_gpu_start_fe(gpu, address, prefetch); + } +@@ -790,7 +800,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu) + /* Now program the hardware */ + mutex_lock(&gpu->lock); + etnaviv_gpu_hw_init(gpu); +- gpu->exec_state = -1; + mutex_unlock(&gpu->lock); + + pm_runtime_mark_last_busy(gpu->dev); +@@ -994,8 +1003,6 @@ void etnaviv_gpu_recover_hang(struct etnaviv_gpu *gpu) + spin_unlock(&gpu->event_spinlock); + + etnaviv_gpu_hw_init(gpu); +- gpu->exec_state = -1; +- gpu->mmu_context = NULL; + + mutex_unlock(&gpu->lock); + pm_runtime_mark_last_busy(gpu->dev); +@@ -1306,14 +1313,12 @@ struct dma_fence *etnaviv_gpu_submit(struct etnaviv_gem_submit *submit) + goto out_unlock; + } + +- if (!gpu->mmu_context) { +- 
etnaviv_iommu_context_get(submit->mmu_context); +- gpu->mmu_context = submit->mmu_context; +- etnaviv_gpu_start_fe_idleloop(gpu); +- } else { +- etnaviv_iommu_context_get(gpu->mmu_context); +- submit->prev_mmu_context = gpu->mmu_context; +- } ++ if (!gpu->fe_running) ++ etnaviv_gpu_start_fe_idleloop(gpu, submit->mmu_context); ++ ++ if (submit->prev_mmu_context) ++ etnaviv_iommu_context_put(submit->prev_mmu_context); ++ submit->prev_mmu_context = etnaviv_iommu_context_get(gpu->mmu_context); + + if (submit->nr_pmrs) { + gpu->event[event[1]].sync_point = &sync_point_perfmon_sample_pre; +@@ -1530,7 +1535,7 @@ int etnaviv_gpu_wait_idle(struct etnaviv_gpu *gpu, unsigned int timeout_ms) + + static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu) + { +- if (gpu->initialized && gpu->mmu_context) { ++ if (gpu->initialized && gpu->fe_running) { + /* Replace the last WAIT with END */ + mutex_lock(&gpu->lock); + etnaviv_buffer_end(gpu); +@@ -1543,8 +1548,7 @@ static int etnaviv_gpu_hw_suspend(struct etnaviv_gpu *gpu) + */ + etnaviv_gpu_wait_idle(gpu, 100); + +- etnaviv_iommu_context_put(gpu->mmu_context); +- gpu->mmu_context = NULL; ++ gpu->fe_running = false; + } + + gpu->exec_state = -1; +@@ -1692,6 +1696,9 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master, + etnaviv_gpu_hw_suspend(gpu); + #endif + ++ if (gpu->mmu_context) ++ etnaviv_iommu_context_put(gpu->mmu_context); ++ + if (gpu->initialized) { + etnaviv_cmdbuf_free(&gpu->buffer); + etnaviv_iommu_global_fini(gpu); +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +index 8f9bd4edc96a5..02478c75f8968 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.h ++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.h +@@ -101,6 +101,7 @@ struct etnaviv_gpu { + struct workqueue_struct *wq; + struct drm_gpu_scheduler sched; + bool initialized; ++ bool fe_running; + + /* 'ring'-buffer: */ + struct etnaviv_cmdbuf buffer; +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c 
b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c +index 1a7c89a67bea3..afe5dd6a9925b 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_iommu.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu.c +@@ -92,6 +92,10 @@ static void etnaviv_iommuv1_restore(struct etnaviv_gpu *gpu, + struct etnaviv_iommuv1_context *v1_context = to_v1_context(context); + u32 pgtable; + ++ if (gpu->mmu_context) ++ etnaviv_iommu_context_put(gpu->mmu_context); ++ gpu->mmu_context = etnaviv_iommu_context_get(context); ++ + /* set base addresses */ + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_RA, context->global->memory_base); + gpu_write(gpu, VIVS_MC_MEMORY_BASE_ADDR_FE, context->global->memory_base); +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c +index f8bf488e9d717..d664ae29ae209 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_iommu_v2.c +@@ -172,6 +172,10 @@ static void etnaviv_iommuv2_restore_nonsec(struct etnaviv_gpu *gpu, + if (gpu_read(gpu, VIVS_MMUv2_CONTROL) & VIVS_MMUv2_CONTROL_ENABLE) + return; + ++ if (gpu->mmu_context) ++ etnaviv_iommu_context_put(gpu->mmu_context); ++ gpu->mmu_context = etnaviv_iommu_context_get(context); ++ + prefetch = etnaviv_buffer_config_mmuv2(gpu, + (u32)v2_context->mtlb_dma, + (u32)context->global->bad_page_dma); +@@ -192,6 +196,10 @@ static void etnaviv_iommuv2_restore_sec(struct etnaviv_gpu *gpu, + if (gpu_read(gpu, VIVS_MMUv2_SEC_CONTROL) & VIVS_MMUv2_SEC_CONTROL_ENABLE) + return; + ++ if (gpu->mmu_context) ++ etnaviv_iommu_context_put(gpu->mmu_context); ++ gpu->mmu_context = etnaviv_iommu_context_get(context); ++ + gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_LOW, + lower_32_bits(context->global->v2.pta_dma)); + gpu_write(gpu, VIVS_MMUv2_PTA_ADDRESS_HIGH, +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c +index 3607d348c2980..707f5c1a58740 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c ++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c 
+@@ -204,6 +204,7 @@ static int etnaviv_iommu_find_iova(struct etnaviv_iommu_context *context, + */ + list_for_each_entry_safe(m, n, &list, scan_node) { + etnaviv_iommu_remove_mapping(context, m); ++ etnaviv_iommu_context_put(m->context); + m->context = NULL; + list_del_init(&m->mmu_node); + list_del_init(&m->scan_node); +diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h +index d1d6902fd13be..e4a0b7d09c2ea 100644 +--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.h ++++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.h +@@ -105,9 +105,11 @@ void etnaviv_iommu_dump(struct etnaviv_iommu_context *ctx, void *buf); + struct etnaviv_iommu_context * + etnaviv_iommu_context_init(struct etnaviv_iommu_global *global, + struct etnaviv_cmdbuf_suballoc *suballoc); +-static inline void etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx) ++static inline struct etnaviv_iommu_context * ++etnaviv_iommu_context_get(struct etnaviv_iommu_context *ctx) + { + kref_get(&ctx->refcount); ++ return ctx; + } + void etnaviv_iommu_context_put(struct etnaviv_iommu_context *ctx); + void etnaviv_iommu_restore(struct etnaviv_gpu *gpu, +diff --git a/drivers/gpu/drm/exynos/exynos_drm_dma.c b/drivers/gpu/drm/exynos/exynos_drm_dma.c +index 58b89ec11b0eb..a3c9d8b9e1a18 100644 +--- a/drivers/gpu/drm/exynos/exynos_drm_dma.c ++++ b/drivers/gpu/drm/exynos/exynos_drm_dma.c +@@ -140,6 +140,8 @@ int exynos_drm_register_dma(struct drm_device *drm, struct device *dev, + EXYNOS_DEV_ADDR_START, EXYNOS_DEV_ADDR_SIZE); + else if (IS_ENABLED(CONFIG_IOMMU_DMA)) + mapping = iommu_get_domain_for_dev(priv->dma_dev); ++ else ++ mapping = ERR_PTR(-ENODEV); + + if (IS_ERR(mapping)) + return PTR_ERR(mapping); +diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +index 20194d86d0339..4f0c6d58e06fa 100644 +--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c ++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +@@ -108,13 +108,6 @@ static void 
mdp4_disable_commit(struct msm_kms *kms) + + static void mdp4_prepare_commit(struct msm_kms *kms, struct drm_atomic_state *state) + { +- int i; +- struct drm_crtc *crtc; +- struct drm_crtc_state *crtc_state; +- +- /* see 119ecb7fd */ +- for_each_new_crtc_in_state(state, crtc, crtc_state, i) +- drm_crtc_vblank_get(crtc); + } + + static void mdp4_flush_commit(struct msm_kms *kms, unsigned crtc_mask) +@@ -133,12 +126,6 @@ static void mdp4_wait_flush(struct msm_kms *kms, unsigned crtc_mask) + + static void mdp4_complete_commit(struct msm_kms *kms, unsigned crtc_mask) + { +- struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); +- struct drm_crtc *crtc; +- +- /* see 119ecb7fd */ +- for_each_crtc_mask(mdp4_kms->dev, crtc, crtc_mask) +- drm_crtc_vblank_put(crtc); + } + + static long mdp4_round_pixclk(struct msm_kms *kms, unsigned long rate, +@@ -418,6 +405,7 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev) + { + struct platform_device *pdev = to_platform_device(dev->dev); + struct mdp4_platform_config *config = mdp4_get_config(pdev); ++ struct msm_drm_private *priv = dev->dev_private; + struct mdp4_kms *mdp4_kms; + struct msm_kms *kms = NULL; + struct msm_gem_address_space *aspace; +@@ -432,7 +420,8 @@ struct msm_kms *mdp4_kms_init(struct drm_device *dev) + + mdp_kms_init(&mdp4_kms->base, &kms_funcs); + +- kms = &mdp4_kms->base.base; ++ priv->kms = &mdp4_kms->base.base; ++ kms = priv->kms; + + mdp4_kms->dev = dev; + +diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c b/drivers/gpu/drm/panfrost/panfrost_mmu.c +index bfd503d220881..8a014dc115712 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_mmu.c ++++ b/drivers/gpu/drm/panfrost/panfrost_mmu.c +@@ -52,25 +52,16 @@ static int write_cmd(struct panfrost_device *pfdev, u32 as_nr, u32 cmd) + } + + static void lock_region(struct panfrost_device *pfdev, u32 as_nr, +- u64 iova, size_t size) ++ u64 iova, u64 size) + { + u8 region_width; + u64 region = iova & PAGE_MASK; +- /* +- * fls returns: +- * 1 .. 
32 +- * +- * 10 + fls(num_pages) +- * results in the range (11 .. 42) +- */ +- +- size = round_up(size, PAGE_SIZE); + +- region_width = 10 + fls(size >> PAGE_SHIFT); +- if ((size >> PAGE_SHIFT) != (1ul << (region_width - 11))) { +- /* not pow2, so must go up to the next pow2 */ +- region_width += 1; +- } ++ /* The size is encoded as ceil(log2) minus(1), which may be calculated ++ * with fls. The size must be clamped to hardware bounds. ++ */ ++ size = max_t(u64, size, AS_LOCK_REGION_MIN_SIZE); ++ region_width = fls64(size - 1) - 1; + region |= region_width; + + /* Lock the region that needs to be updated */ +@@ -81,7 +72,7 @@ static void lock_region(struct panfrost_device *pfdev, u32 as_nr, + + + static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr, +- u64 iova, size_t size, u32 op) ++ u64 iova, u64 size, u32 op) + { + if (as_nr < 0) + return 0; +@@ -98,7 +89,7 @@ static int mmu_hw_do_operation_locked(struct panfrost_device *pfdev, int as_nr, + + static int mmu_hw_do_operation(struct panfrost_device *pfdev, + struct panfrost_mmu *mmu, +- u64 iova, size_t size, u32 op) ++ u64 iova, u64 size, u32 op) + { + int ret; + +@@ -115,7 +106,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m + u64 transtab = cfg->arm_mali_lpae_cfg.transtab; + u64 memattr = cfg->arm_mali_lpae_cfg.memattr; + +- mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM); ++ mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM); + + mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), transtab & 0xffffffffUL); + mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), transtab >> 32); +@@ -131,7 +122,7 @@ static void panfrost_mmu_enable(struct panfrost_device *pfdev, struct panfrost_m + + static void panfrost_mmu_disable(struct panfrost_device *pfdev, u32 as_nr) + { +- mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0UL, AS_COMMAND_FLUSH_MEM); ++ mmu_hw_do_operation_locked(pfdev, as_nr, 0, ~0ULL, AS_COMMAND_FLUSH_MEM); + + 
mmu_write(pfdev, AS_TRANSTAB_LO(as_nr), 0); + mmu_write(pfdev, AS_TRANSTAB_HI(as_nr), 0); +@@ -231,7 +222,7 @@ static size_t get_pgsize(u64 addr, size_t size) + + static void panfrost_mmu_flush_range(struct panfrost_device *pfdev, + struct panfrost_mmu *mmu, +- u64 iova, size_t size) ++ u64 iova, u64 size) + { + if (mmu->as < 0) + return; +diff --git a/drivers/gpu/drm/panfrost/panfrost_regs.h b/drivers/gpu/drm/panfrost/panfrost_regs.h +index eddaa62ad8b0e..2ae3a4d301d39 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_regs.h ++++ b/drivers/gpu/drm/panfrost/panfrost_regs.h +@@ -318,6 +318,8 @@ + #define AS_FAULTSTATUS_ACCESS_TYPE_READ (0x2 << 8) + #define AS_FAULTSTATUS_ACCESS_TYPE_WRITE (0x3 << 8) + ++#define AS_LOCK_REGION_MIN_SIZE (1ULL << 15) ++ + #define gpu_write(dev, reg, data) writel(data, dev->iomem + reg) + #define gpu_read(dev, reg) readl(dev->iomem + reg) + +diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c +index 6d551ae251c0a..ea4c97f5b0736 100644 +--- a/drivers/hid/hid-input.c ++++ b/drivers/hid/hid-input.c +@@ -415,8 +415,6 @@ static int hidinput_get_battery_property(struct power_supply *psy, + + if (dev->battery_status == HID_BATTERY_UNKNOWN) + val->intval = POWER_SUPPLY_STATUS_UNKNOWN; +- else if (dev->battery_capacity == 100) +- val->intval = POWER_SUPPLY_STATUS_FULL; + else + val->intval = POWER_SUPPLY_STATUS_DISCHARGING; + break; +diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c +index 6f7a3702b5fba..ac076ac73de5d 100644 +--- a/drivers/hid/i2c-hid/i2c-hid-core.c ++++ b/drivers/hid/i2c-hid/i2c-hid-core.c +@@ -178,8 +178,6 @@ static const struct i2c_hid_quirks { + I2C_HID_QUIRK_NO_IRQ_AFTER_RESET }, + { I2C_VENDOR_ID_RAYDIUM, I2C_PRODUCT_ID_RAYDIUM_3118, + I2C_HID_QUIRK_NO_IRQ_AFTER_RESET }, +- { USB_VENDOR_ID_ELAN, HID_ANY_ID, +- I2C_HID_QUIRK_BOGUS_IRQ }, + { USB_VENDOR_ID_ALPS_JP, HID_ANY_ID, + I2C_HID_QUIRK_RESET_ON_RESUME }, + { I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393, +@@ 
-190,7 +188,8 @@ static const struct i2c_hid_quirks { + * Sending the wakeup after reset actually break ELAN touchscreen controller + */ + { USB_VENDOR_ID_ELAN, HID_ANY_ID, +- I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET }, ++ I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET | ++ I2C_HID_QUIRK_BOGUS_IRQ }, + { 0, 0 } + }; + +diff --git a/drivers/iio/dac/ad5624r_spi.c b/drivers/iio/dac/ad5624r_spi.c +index e6c022e1dc1cf..17cc8b3fc5d82 100644 +--- a/drivers/iio/dac/ad5624r_spi.c ++++ b/drivers/iio/dac/ad5624r_spi.c +@@ -229,7 +229,7 @@ static int ad5624r_probe(struct spi_device *spi) + if (!indio_dev) + return -ENOMEM; + st = iio_priv(indio_dev); +- st->reg = devm_regulator_get(&spi->dev, "vcc"); ++ st->reg = devm_regulator_get_optional(&spi->dev, "vref"); + if (!IS_ERR(st->reg)) { + ret = regulator_enable(st->reg); + if (ret) +@@ -240,6 +240,22 @@ static int ad5624r_probe(struct spi_device *spi) + goto error_disable_reg; + + voltage_uv = ret; ++ } else { ++ if (PTR_ERR(st->reg) != -ENODEV) ++ return PTR_ERR(st->reg); ++ /* Backwards compatibility. 
This naming is not correct */ ++ st->reg = devm_regulator_get_optional(&spi->dev, "vcc"); ++ if (!IS_ERR(st->reg)) { ++ ret = regulator_enable(st->reg); ++ if (ret) ++ return ret; ++ ++ ret = regulator_get_voltage(st->reg); ++ if (ret < 0) ++ goto error_disable_reg; ++ ++ voltage_uv = ret; ++ } + } + + spi_set_drvdata(spi, indio_dev); +diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c +index da8adadf47559..75b6da00065a3 100644 +--- a/drivers/infiniband/core/iwcm.c ++++ b/drivers/infiniband/core/iwcm.c +@@ -1187,29 +1187,34 @@ static int __init iw_cm_init(void) + + ret = iwpm_init(RDMA_NL_IWCM); + if (ret) +- pr_err("iw_cm: couldn't init iwpm\n"); +- else +- rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table); ++ return ret; ++ + iwcm_wq = alloc_ordered_workqueue("iw_cm_wq", 0); + if (!iwcm_wq) +- return -ENOMEM; ++ goto err_alloc; + + iwcm_ctl_table_hdr = register_net_sysctl(&init_net, "net/iw_cm", + iwcm_ctl_table); + if (!iwcm_ctl_table_hdr) { + pr_err("iw_cm: couldn't register sysctl paths\n"); +- destroy_workqueue(iwcm_wq); +- return -ENOMEM; ++ goto err_sysctl; + } + ++ rdma_nl_register(RDMA_NL_IWCM, iwcm_nl_cb_table); + return 0; ++ ++err_sysctl: ++ destroy_workqueue(iwcm_wq); ++err_alloc: ++ iwpm_exit(RDMA_NL_IWCM); ++ return -ENOMEM; + } + + static void __exit iw_cm_cleanup(void) + { ++ rdma_nl_unregister(RDMA_NL_IWCM); + unregister_net_sysctl_table(iwcm_ctl_table_hdr); + destroy_workqueue(iwcm_wq); +- rdma_nl_unregister(RDMA_NL_IWCM); + iwpm_exit(RDMA_NL_IWCM); + } + +diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c +index 4edae89e8e3ca..17f1e59ab12ee 100644 +--- a/drivers/infiniband/hw/efa/efa_verbs.c ++++ b/drivers/infiniband/hw/efa/efa_verbs.c +@@ -745,7 +745,6 @@ struct ib_qp *efa_create_qp(struct ib_pd *ibpd, + rq_entry_inserted = true; + qp->qp_handle = create_qp_resp.qp_handle; + qp->ibqp.qp_num = create_qp_resp.qp_num; +- qp->ibqp.qp_type = init_attr->qp_type; + qp->max_send_wr = 
init_attr->cap.max_send_wr; + qp->max_recv_wr = init_attr->cap.max_recv_wr; + qp->max_send_sge = init_attr->cap.max_send_sge; +diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c +index fbff6b2f00e71..1256dbd5b2ef0 100644 +--- a/drivers/infiniband/hw/hfi1/init.c ++++ b/drivers/infiniband/hw/hfi1/init.c +@@ -664,12 +664,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd, + + ppd->pkeys[default_pkey_idx] = DEFAULT_P_KEY; + ppd->part_enforce |= HFI1_PART_ENFORCE_IN; +- +- if (loopback) { +- dd_dev_err(dd, "Faking data partition 0x8001 in idx %u\n", +- !default_pkey_idx); +- ppd->pkeys[!default_pkey_idx] = 0x8001; +- } ++ ppd->pkeys[0] = 0x8001; + + INIT_WORK(&ppd->link_vc_work, handle_verify_cap); + INIT_WORK(&ppd->link_up_work, handle_link_up); +diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c +index d85648b9c247a..571c04e70343a 100644 +--- a/drivers/md/dm-crypt.c ++++ b/drivers/md/dm-crypt.c +@@ -2092,7 +2092,12 @@ static void *crypt_page_alloc(gfp_t gfp_mask, void *pool_data) + struct crypt_config *cc = pool_data; + struct page *page; + +- if (unlikely(percpu_counter_compare(&cc->n_allocated_pages, dm_crypt_pages_per_client) >= 0) && ++ /* ++ * Note, percpu_counter_read_positive() may over (and under) estimate ++ * the current usage by at most (batch - 1) * num_online_cpus() pages, ++ * but avoids potential spinlock contention of an exact result. 
++ */ ++ if (unlikely(percpu_counter_read_positive(&cc->n_allocated_pages) >= dm_crypt_pages_per_client) && + likely(gfp_mask & __GFP_NORETRY)) + return NULL; + +diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c +index 082796534b0ae..bb02354a48b81 100644 +--- a/drivers/media/dvb-frontends/dib8000.c ++++ b/drivers/media/dvb-frontends/dib8000.c +@@ -2107,32 +2107,55 @@ static void dib8000_load_ana_fe_coefs(struct dib8000_state *state, const s16 *an + dib8000_write_word(state, 117 + mode, ana_fe[mode]); + } + +-static const u16 lut_prbs_2k[14] = { +- 0, 0x423, 0x009, 0x5C7, 0x7A6, 0x3D8, 0x527, 0x7FF, 0x79B, 0x3D6, 0x3A2, 0x53B, 0x2F4, 0x213 ++static const u16 lut_prbs_2k[13] = { ++ 0x423, 0x009, 0x5C7, ++ 0x7A6, 0x3D8, 0x527, ++ 0x7FF, 0x79B, 0x3D6, ++ 0x3A2, 0x53B, 0x2F4, ++ 0x213 + }; +-static const u16 lut_prbs_4k[14] = { +- 0, 0x208, 0x0C3, 0x7B9, 0x423, 0x5C7, 0x3D8, 0x7FF, 0x3D6, 0x53B, 0x213, 0x029, 0x0D0, 0x48E ++ ++static const u16 lut_prbs_4k[13] = { ++ 0x208, 0x0C3, 0x7B9, ++ 0x423, 0x5C7, 0x3D8, ++ 0x7FF, 0x3D6, 0x53B, ++ 0x213, 0x029, 0x0D0, ++ 0x48E + }; +-static const u16 lut_prbs_8k[14] = { +- 0, 0x740, 0x069, 0x7DD, 0x208, 0x7B9, 0x5C7, 0x7FF, 0x53B, 0x029, 0x48E, 0x4C4, 0x367, 0x684 ++ ++static const u16 lut_prbs_8k[13] = { ++ 0x740, 0x069, 0x7DD, ++ 0x208, 0x7B9, 0x5C7, ++ 0x7FF, 0x53B, 0x029, ++ 0x48E, 0x4C4, 0x367, ++ 0x684 + }; + + static u16 dib8000_get_init_prbs(struct dib8000_state *state, u16 subchannel) + { + int sub_channel_prbs_group = 0; ++ int prbs_group; + +- sub_channel_prbs_group = (subchannel / 3) + 1; +- dprintk("sub_channel_prbs_group = %d , subchannel =%d prbs = 0x%04x\n", sub_channel_prbs_group, subchannel, lut_prbs_8k[sub_channel_prbs_group]); ++ sub_channel_prbs_group = subchannel / 3; ++ if (sub_channel_prbs_group >= ARRAY_SIZE(lut_prbs_2k)) ++ return 0; + + switch (state->fe[0]->dtv_property_cache.transmission_mode) { + case TRANSMISSION_MODE_2K: +- return 
lut_prbs_2k[sub_channel_prbs_group]; ++ prbs_group = lut_prbs_2k[sub_channel_prbs_group]; ++ break; + case TRANSMISSION_MODE_4K: +- return lut_prbs_4k[sub_channel_prbs_group]; ++ prbs_group = lut_prbs_4k[sub_channel_prbs_group]; ++ break; + default: + case TRANSMISSION_MODE_8K: +- return lut_prbs_8k[sub_channel_prbs_group]; ++ prbs_group = lut_prbs_8k[sub_channel_prbs_group]; + } ++ ++ dprintk("sub_channel_prbs_group = %d , subchannel =%d prbs = 0x%04x\n", ++ sub_channel_prbs_group, subchannel, prbs_group); ++ ++ return prbs_group; + } + + static void dib8000_set_13seg_channel(struct dib8000_state *state) +@@ -2409,10 +2432,8 @@ static void dib8000_set_isdbt_common_channel(struct dib8000_state *state, u8 seq + /* TSB or ISDBT ? apply it now */ + if (c->isdbt_sb_mode) { + dib8000_set_sb_channel(state); +- if (c->isdbt_sb_subchannel < 14) +- init_prbs = dib8000_get_init_prbs(state, c->isdbt_sb_subchannel); +- else +- init_prbs = 0; ++ init_prbs = dib8000_get_init_prbs(state, ++ c->isdbt_sb_subchannel); + } else { + dib8000_set_13seg_channel(state); + init_prbs = 0xfff; +@@ -3004,6 +3025,7 @@ static int dib8000_tune(struct dvb_frontend *fe) + + unsigned long *timeout = &state->timeout; + unsigned long now = jiffies; ++ u16 init_prbs; + #ifdef DIB8000_AGC_FREEZE + u16 agc1, agc2; + #endif +@@ -3302,8 +3324,10 @@ static int dib8000_tune(struct dvb_frontend *fe) + break; + + case CT_DEMOD_STEP_11: /* 41 : init prbs autosearch */ +- if (state->subchannel <= 41) { +- dib8000_set_subchannel_prbs(state, dib8000_get_init_prbs(state, state->subchannel)); ++ init_prbs = dib8000_get_init_prbs(state, state->subchannel); ++ ++ if (init_prbs) { ++ dib8000_set_subchannel_prbs(state, init_prbs); + *tune_state = CT_DEMOD_STEP_9; + } else { + *tune_state = CT_DEMOD_STOP; +diff --git a/drivers/media/i2c/imx258.c b/drivers/media/i2c/imx258.c +index f86ae18bc104b..ffaa4a91e5713 100644 +--- a/drivers/media/i2c/imx258.c ++++ b/drivers/media/i2c/imx258.c +@@ -22,7 +22,7 @@ + #define 
IMX258_CHIP_ID 0x0258 + + /* V_TIMING internal */ +-#define IMX258_VTS_30FPS 0x0c98 ++#define IMX258_VTS_30FPS 0x0c50 + #define IMX258_VTS_30FPS_2K 0x0638 + #define IMX258_VTS_30FPS_VGA 0x034c + #define IMX258_VTS_MAX 0xffff +@@ -46,7 +46,7 @@ + /* Analog gain control */ + #define IMX258_REG_ANALOG_GAIN 0x0204 + #define IMX258_ANA_GAIN_MIN 0 +-#define IMX258_ANA_GAIN_MAX 0x1fff ++#define IMX258_ANA_GAIN_MAX 480 + #define IMX258_ANA_GAIN_STEP 1 + #define IMX258_ANA_GAIN_DEFAULT 0x0 + +diff --git a/drivers/media/i2c/tda1997x.c b/drivers/media/i2c/tda1997x.c +index 1088161498df0..18a2027ba1450 100644 +--- a/drivers/media/i2c/tda1997x.c ++++ b/drivers/media/i2c/tda1997x.c +@@ -1695,14 +1695,15 @@ static int tda1997x_query_dv_timings(struct v4l2_subdev *sd, + struct v4l2_dv_timings *timings) + { + struct tda1997x_state *state = to_state(sd); ++ int ret; + + v4l_dbg(1, debug, state->client, "%s\n", __func__); + memset(timings, 0, sizeof(struct v4l2_dv_timings)); + mutex_lock(&state->lock); +- tda1997x_detect_std(state, timings); ++ ret = tda1997x_detect_std(state, timings); + mutex_unlock(&state->lock); + +- return 0; ++ return ret; + } + + static const struct v4l2_subdev_video_ops tda1997x_video_ops = { +diff --git a/drivers/media/platform/tegra-cec/tegra_cec.c b/drivers/media/platform/tegra-cec/tegra_cec.c +index a632602131f21..efb80a78d2fa2 100644 +--- a/drivers/media/platform/tegra-cec/tegra_cec.c ++++ b/drivers/media/platform/tegra-cec/tegra_cec.c +@@ -366,7 +366,11 @@ static int tegra_cec_probe(struct platform_device *pdev) + return -ENOENT; + } + +- clk_prepare_enable(cec->clk); ++ ret = clk_prepare_enable(cec->clk); ++ if (ret) { ++ dev_err(&pdev->dev, "Unable to prepare clock for CEC\n"); ++ return ret; ++ } + + /* set context info. 
*/ + cec->dev = &pdev->dev; +@@ -446,9 +450,7 @@ static int tegra_cec_resume(struct platform_device *pdev) + + dev_notice(&pdev->dev, "Resuming\n"); + +- clk_prepare_enable(cec->clk); +- +- return 0; ++ return clk_prepare_enable(cec->clk); + } + #endif + +diff --git a/drivers/media/rc/rc-loopback.c b/drivers/media/rc/rc-loopback.c +index ef8b83b707df0..13ab7312fa3b5 100644 +--- a/drivers/media/rc/rc-loopback.c ++++ b/drivers/media/rc/rc-loopback.c +@@ -42,7 +42,7 @@ static int loop_set_tx_mask(struct rc_dev *dev, u32 mask) + + if ((mask & (RXMASK_REGULAR | RXMASK_LEARNING)) != mask) { + dprintk("invalid tx mask: %u\n", mask); +- return -EINVAL; ++ return 2; + } + + dprintk("setting tx mask: %u\n", mask); +diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c +index 7d60dd3b0bd85..db7f8f8ee2f9f 100644 +--- a/drivers/media/usb/uvc/uvc_v4l2.c ++++ b/drivers/media/usb/uvc/uvc_v4l2.c +@@ -894,8 +894,8 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input) + { + struct uvc_fh *handle = fh; + struct uvc_video_chain *chain = handle->chain; ++ u8 *buf; + int ret; +- u8 i; + + if (chain->selector == NULL || + (chain->dev->quirks & UVC_QUIRK_IGNORE_SELECTOR_UNIT)) { +@@ -903,22 +903,27 @@ static int uvc_ioctl_g_input(struct file *file, void *fh, unsigned int *input) + return 0; + } + ++ buf = kmalloc(1, GFP_KERNEL); ++ if (!buf) ++ return -ENOMEM; ++ + ret = uvc_query_ctrl(chain->dev, UVC_GET_CUR, chain->selector->id, + chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL, +- &i, 1); +- if (ret < 0) +- return ret; ++ buf, 1); ++ if (!ret) ++ *input = *buf - 1; + +- *input = i - 1; +- return 0; ++ kfree(buf); ++ ++ return ret; + } + + static int uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input) + { + struct uvc_fh *handle = fh; + struct uvc_video_chain *chain = handle->chain; ++ u8 *buf; + int ret; +- u32 i; + + ret = uvc_acquire_privileges(handle); + if (ret < 0) +@@ -934,10 +939,17 @@ static int 
uvc_ioctl_s_input(struct file *file, void *fh, unsigned int input) + if (input >= chain->selector->bNrInPins) + return -EINVAL; + +- i = input + 1; +- return uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id, +- chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL, +- &i, 1); ++ buf = kmalloc(1, GFP_KERNEL); ++ if (!buf) ++ return -ENOMEM; ++ ++ *buf = input + 1; ++ ret = uvc_query_ctrl(chain->dev, UVC_SET_CUR, chain->selector->id, ++ chain->dev->intfnum, UVC_SU_INPUT_SELECT_CONTROL, ++ buf, 1); ++ kfree(buf); ++ ++ return ret; + } + + static int uvc_ioctl_queryctrl(struct file *file, void *fh, +diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c +index 4f23e939ead0b..60454e1b727e9 100644 +--- a/drivers/media/v4l2-core/v4l2-dv-timings.c ++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c +@@ -196,7 +196,7 @@ bool v4l2_find_dv_timings_cap(struct v4l2_dv_timings *t, + if (!v4l2_valid_dv_timings(t, cap, fnc, fnc_handle)) + return false; + +- for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) { ++ for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) { + if (v4l2_valid_dv_timings(v4l2_dv_timings_presets + i, cap, + fnc, fnc_handle) && + v4l2_match_dv_timings(t, v4l2_dv_timings_presets + i, +@@ -218,7 +218,7 @@ bool v4l2_find_dv_timings_cea861_vic(struct v4l2_dv_timings *t, u8 vic) + { + unsigned int i; + +- for (i = 0; i < v4l2_dv_timings_presets[i].bt.width; i++) { ++ for (i = 0; v4l2_dv_timings_presets[i].bt.width; i++) { + const struct v4l2_bt_timings *bt = + &v4l2_dv_timings_presets[i].bt; + +diff --git a/drivers/mfd/ab8500-core.c b/drivers/mfd/ab8500-core.c +index 3e9dc92cb467b..842de1f352dfc 100644 +--- a/drivers/mfd/ab8500-core.c ++++ b/drivers/mfd/ab8500-core.c +@@ -493,7 +493,7 @@ static int ab8500_handle_hierarchical_line(struct ab8500 *ab8500, + if (line == AB8540_INT_GPIO43F || line == AB8540_INT_GPIO44F) + line += 1; + +- handle_nested_irq(irq_create_mapping(ab8500->domain, line)); ++ 
handle_nested_irq(irq_find_mapping(ab8500->domain, line)); + } + + return 0; +diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c +index aa59496e43768..9db1000944c34 100644 +--- a/drivers/mfd/axp20x.c ++++ b/drivers/mfd/axp20x.c +@@ -125,12 +125,13 @@ static const struct regmap_range axp288_writeable_ranges[] = { + + static const struct regmap_range axp288_volatile_ranges[] = { + regmap_reg_range(AXP20X_PWR_INPUT_STATUS, AXP288_POWER_REASON), ++ regmap_reg_range(AXP22X_PWR_OUT_CTRL1, AXP22X_ALDO3_V_OUT), + regmap_reg_range(AXP288_BC_GLOBAL, AXP288_BC_GLOBAL), + regmap_reg_range(AXP288_BC_DET_STAT, AXP20X_VBUS_IPSOUT_MGMT), + regmap_reg_range(AXP20X_CHRG_BAK_CTRL, AXP20X_CHRG_BAK_CTRL), + regmap_reg_range(AXP20X_IRQ1_EN, AXP20X_IPSOUT_V_HIGH_L), + regmap_reg_range(AXP20X_TIMER_CTRL, AXP20X_TIMER_CTRL), +- regmap_reg_range(AXP22X_GPIO_STATE, AXP22X_GPIO_STATE), ++ regmap_reg_range(AXP20X_GPIO1_CTRL, AXP22X_GPIO_STATE), + regmap_reg_range(AXP288_RT_BATT_V_H, AXP288_RT_BATT_V_L), + regmap_reg_range(AXP20X_FG_RES, AXP288_FG_CC_CAP_REG), + }; +diff --git a/drivers/mfd/db8500-prcmu.c b/drivers/mfd/db8500-prcmu.c +index dfac6afa82ca5..f1f2ad9ff0b34 100644 +--- a/drivers/mfd/db8500-prcmu.c ++++ b/drivers/mfd/db8500-prcmu.c +@@ -1695,22 +1695,20 @@ static long round_clock_rate(u8 clock, unsigned long rate) + } + + static const unsigned long db8500_armss_freqs[] = { +- 200000000, +- 400000000, +- 800000000, ++ 199680000, ++ 399360000, ++ 798720000, + 998400000 + }; + + /* The DB8520 has slightly higher ARMSS max frequency */ + static const unsigned long db8520_armss_freqs[] = { +- 200000000, +- 400000000, +- 800000000, ++ 199680000, ++ 399360000, ++ 798720000, + 1152000000 + }; + +- +- + static long round_armss_rate(unsigned long rate) + { + unsigned long freq = 0; +diff --git a/drivers/mfd/stmpe.c b/drivers/mfd/stmpe.c +index 1aee3b3253fc9..508349399f8af 100644 +--- a/drivers/mfd/stmpe.c ++++ b/drivers/mfd/stmpe.c +@@ -1091,7 +1091,7 @@ static irqreturn_t stmpe_irq(int 
irq, void *data) + + if (variant->id_val == STMPE801_ID || + variant->id_val == STMPE1600_ID) { +- int base = irq_create_mapping(stmpe->domain, 0); ++ int base = irq_find_mapping(stmpe->domain, 0); + + handle_nested_irq(base); + return IRQ_HANDLED; +@@ -1119,7 +1119,7 @@ static irqreturn_t stmpe_irq(int irq, void *data) + while (status) { + int bit = __ffs(status); + int line = bank * 8 + bit; +- int nestedirq = irq_create_mapping(stmpe->domain, line); ++ int nestedirq = irq_find_mapping(stmpe->domain, line); + + handle_nested_irq(nestedirq); + status &= ~(1 << bit); +diff --git a/drivers/mfd/tc3589x.c b/drivers/mfd/tc3589x.c +index 67c9995bb1aa6..23cfbd050120d 100644 +--- a/drivers/mfd/tc3589x.c ++++ b/drivers/mfd/tc3589x.c +@@ -187,7 +187,7 @@ again: + + while (status) { + int bit = __ffs(status); +- int virq = irq_create_mapping(tc3589x->domain, bit); ++ int virq = irq_find_mapping(tc3589x->domain, bit); + + handle_nested_irq(virq); + status &= ~(1 << bit); +diff --git a/drivers/mfd/tqmx86.c b/drivers/mfd/tqmx86.c +index 22d2f02d855c2..ccc5a9ac788c1 100644 +--- a/drivers/mfd/tqmx86.c ++++ b/drivers/mfd/tqmx86.c +@@ -210,6 +210,8 @@ static int tqmx86_probe(struct platform_device *pdev) + + /* Assumes the IRQ resource is first. 
*/ + tqmx_gpio_resources[0].start = gpio_irq; ++ } else { ++ tqmx_gpio_resources[0].flags = 0; + } + + ocores_platfom_data.clock_khz = tqmx86_board_id_to_clk_rate(board_id); +diff --git a/drivers/mfd/wm8994-irq.c b/drivers/mfd/wm8994-irq.c +index 6c3a619e26286..651a028bc519a 100644 +--- a/drivers/mfd/wm8994-irq.c ++++ b/drivers/mfd/wm8994-irq.c +@@ -154,7 +154,7 @@ static irqreturn_t wm8994_edge_irq(int irq, void *data) + struct wm8994 *wm8994 = data; + + while (gpio_get_value_cansleep(wm8994->pdata.irq_gpio)) +- handle_nested_irq(irq_create_mapping(wm8994->edge_irq, 0)); ++ handle_nested_irq(irq_find_mapping(wm8994->edge_irq, 0)); + + return IRQ_HANDLED; + } +diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c +index c2338750313c4..a49782dd903cd 100644 +--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c ++++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c +@@ -2238,7 +2238,8 @@ int vmci_qp_broker_map(struct vmci_handle handle, + + result = VMCI_SUCCESS; + +- if (context_id != VMCI_HOST_CONTEXT_ID) { ++ if (context_id != VMCI_HOST_CONTEXT_ID && ++ !QPBROKERSTATE_HAS_MEM(entry)) { + struct vmci_qp_page_store page_store; + + page_store.pages = guest_mem; +@@ -2345,7 +2346,8 @@ int vmci_qp_broker_unmap(struct vmci_handle handle, + goto out; + } + +- if (context_id != VMCI_HOST_CONTEXT_ID) { ++ if (context_id != VMCI_HOST_CONTEXT_ID && ++ QPBROKERSTATE_HAS_MEM(entry)) { + qp_acquire_queue_mutex(entry->produce_q); + result = qp_save_headers(entry); + if (result < VMCI_SUCCESS) +diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c +index 8322d22a59c45..e92f9373e2274 100644 +--- a/drivers/mmc/core/block.c ++++ b/drivers/mmc/core/block.c +@@ -591,6 +591,7 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md, + } + + mmc_wait_for_req(card->host, &mrq); ++ memcpy(&idata->ic.response, cmd.resp, sizeof(cmd.resp)); + + if (cmd.error) { + dev_err(mmc_dev(card->host), "%s: cmd error %d\n", +@@ -640,8 +641,6 
@@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md, + if (idata->ic.postsleep_min_us) + usleep_range(idata->ic.postsleep_min_us, idata->ic.postsleep_max_us); + +- memcpy(&(idata->ic.response), cmd.resp, sizeof(cmd.resp)); +- + if (idata->rpmb || (cmd.flags & MMC_RSP_R1B) == MMC_RSP_R1B) { + /* + * Ensure RPMB/R1B command has completed by polling CMD13 +diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c +index 11087976ab19c..9ff718b61c72e 100644 +--- a/drivers/mmc/host/rtsx_pci_sdmmc.c ++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c +@@ -539,9 +539,22 @@ static int sd_write_long_data(struct realtek_pci_sdmmc *host, + return 0; + } + ++static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host) ++{ ++ rtsx_pci_write_register(host->pcr, SD_CFG1, ++ SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128); ++} ++ ++static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host) ++{ ++ rtsx_pci_write_register(host->pcr, SD_CFG1, ++ SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0); ++} ++ + static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq) + { + struct mmc_data *data = mrq->data; ++ int err; + + if (host->sg_count < 0) { + data->error = host->sg_count; +@@ -550,22 +563,19 @@ static int sd_rw_multi(struct realtek_pci_sdmmc *host, struct mmc_request *mrq) + return data->error; + } + +- if (data->flags & MMC_DATA_READ) +- return sd_read_long_data(host, mrq); ++ if (data->flags & MMC_DATA_READ) { ++ if (host->initial_mode) ++ sd_disable_initial_mode(host); + +- return sd_write_long_data(host, mrq); +-} ++ err = sd_read_long_data(host, mrq); + +-static inline void sd_enable_initial_mode(struct realtek_pci_sdmmc *host) +-{ +- rtsx_pci_write_register(host->pcr, SD_CFG1, +- SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_128); +-} ++ if (host->initial_mode) ++ sd_enable_initial_mode(host); + +-static inline void sd_disable_initial_mode(struct realtek_pci_sdmmc *host) +-{ +- rtsx_pci_write_register(host->pcr, 
SD_CFG1, +- SD_CLK_DIVIDE_MASK, SD_CLK_DIVIDE_0); ++ return err; ++ } ++ ++ return sd_write_long_data(host, mrq); + } + + static void sd_normal_rw(struct realtek_pci_sdmmc *host, +diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c +index 7023cbec4017b..dd10f7abf5a71 100644 +--- a/drivers/mmc/host/sdhci-of-arasan.c ++++ b/drivers/mmc/host/sdhci-of-arasan.c +@@ -192,7 +192,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock) + * through low speeds without power cycling. + */ + sdhci_set_clock(host, host->max_clk); +- phy_power_on(sdhci_arasan->phy); ++ if (phy_power_on(sdhci_arasan->phy)) { ++ pr_err("%s: Cannot power on phy.\n", ++ mmc_hostname(host->mmc)); ++ return; ++ } ++ + sdhci_arasan->is_phy_on = true; + + /* +@@ -228,7 +233,12 @@ static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock) + msleep(20); + + if (ctrl_phy) { +- phy_power_on(sdhci_arasan->phy); ++ if (phy_power_on(sdhci_arasan->phy)) { ++ pr_err("%s: Cannot power on phy.\n", ++ mmc_hostname(host->mmc)); ++ return; ++ } ++ + sdhci_arasan->is_phy_on = true; + } + } +@@ -416,7 +426,9 @@ static int sdhci_arasan_suspend(struct device *dev) + ret = phy_power_off(sdhci_arasan->phy); + if (ret) { + dev_err(dev, "Cannot power off phy.\n"); +- sdhci_resume_host(host); ++ if (sdhci_resume_host(host)) ++ dev_err(dev, "Cannot resume host.\n"); ++ + return ret; + } + sdhci_arasan->is_phy_on = false; +diff --git a/drivers/mtd/nand/raw/cafe_nand.c b/drivers/mtd/nand/raw/cafe_nand.c +index 2d1c22dc88c15..cc5009200cc23 100644 +--- a/drivers/mtd/nand/raw/cafe_nand.c ++++ b/drivers/mtd/nand/raw/cafe_nand.c +@@ -757,7 +757,7 @@ static int cafe_nand_probe(struct pci_dev *pdev, + "CAFE NAND", mtd); + if (err) { + dev_warn(&pdev->dev, "Could not register IRQ %d\n", pdev->irq); +- goto out_ior; ++ goto out_free_rs; + } + + /* Disable master reset, enable NAND clock */ +@@ -801,6 +801,8 @@ static int cafe_nand_probe(struct pci_dev 
*pdev, + /* Disable NAND IRQ in global IRQ mask register */ + cafe_writel(cafe, ~1 & cafe_readl(cafe, GLOBAL_IRQ_MASK), GLOBAL_IRQ_MASK); + free_irq(pdev->irq, mtd); ++ out_free_rs: ++ free_rs(cafe->rs); + out_ior: + pci_iounmap(pdev, cafe->mmio); + out_free_mtd: +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index e21643377162b..1949f631e1bc5 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -1926,7 +1926,6 @@ static int __bond_release_one(struct net_device *bond_dev, + /* recompute stats just before removing the slave */ + bond_get_stats(bond->dev, &bond->bond_stats); + +- bond_upper_dev_unlink(bond, slave); + /* unregister rx_handler early so bond_handle_frame wouldn't be called + * for this slave anymore. + */ +@@ -1935,6 +1934,8 @@ static int __bond_release_one(struct net_device *bond_dev, + if (BOND_MODE(bond) == BOND_MODE_8023AD) + bond_3ad_unbind_slave(slave); + ++ bond_upper_dev_unlink(bond, slave); ++ + if (bond_mode_can_use_xmit_hash(bond)) + bond_update_slave_arr(bond, slave); + +diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c +index e78b683f73052..825d840cdb8c3 100644 +--- a/drivers/net/dsa/b53/b53_common.c ++++ b/drivers/net/dsa/b53/b53_common.c +@@ -2353,9 +2353,8 @@ static int b53_switch_init(struct b53_device *dev) + dev->cpu_port = 5; + } + +- /* cpu port is always last */ +- dev->num_ports = dev->cpu_port + 1; + dev->enabled_ports |= BIT(dev->cpu_port); ++ dev->num_ports = fls(dev->enabled_ports); + + /* Include non standard CPU port built-in PHYs to be probed */ + if (is539x(dev) || is531x5(dev)) { +diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c +index af3d56636a076..3225de0f655f2 100644 +--- a/drivers/net/dsa/lantiq_gswip.c ++++ b/drivers/net/dsa/lantiq_gswip.c +@@ -837,7 +837,8 @@ static int gswip_setup(struct dsa_switch *ds) + + gswip_switch_mask(priv, 0, GSWIP_MAC_CTRL_2_MLEN, + GSWIP_MAC_CTRL_2p(cpu_port)); 
+- gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8, GSWIP_MAC_FLEN); ++ gswip_switch_w(priv, VLAN_ETH_FRAME_LEN + 8 + ETH_FCS_LEN, ++ GSWIP_MAC_FLEN); + gswip_switch_mask(priv, 0, GSWIP_BM_QUEUE_GCTRL_GL_MOD, + GSWIP_BM_QUEUE_GCTRL); + +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c +index cf39623b828b7..4630998d47fd4 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c +@@ -1246,7 +1246,7 @@ int bnx2x_iov_init_one(struct bnx2x *bp, int int_mode_param, + + /* SR-IOV capability was enabled but there are no VFs*/ + if (iov->total == 0) { +- err = -EINVAL; ++ err = 0; + goto failed; + } + +diff --git a/drivers/net/ethernet/chelsio/cxgb/cxgb2.c b/drivers/net/ethernet/chelsio/cxgb/cxgb2.c +index 0ccdde366ae17..540d99f59226e 100644 +--- a/drivers/net/ethernet/chelsio/cxgb/cxgb2.c ++++ b/drivers/net/ethernet/chelsio/cxgb/cxgb2.c +@@ -1153,6 +1153,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent) + if (!adapter->registered_device_map) { + pr_err("%s: could not register any net devices\n", + pci_name(pdev)); ++ err = -EINVAL; + goto out_release_adapter_res; + } + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +index e64e175162068..db9c8f943811b 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +@@ -56,6 +56,7 @@ MODULE_PARM_DESC(debug, " Network interface message level setting"); + #define HNS3_OUTER_VLAN_TAG 2 + + #define HNS3_MIN_TX_LEN 33U ++#define HNS3_MIN_TUN_PKT_LEN 65U + + /* hns3_pci_tbl - PCI Device ID Table + * +@@ -931,8 +932,11 @@ static int hns3_set_l2l3l4(struct sk_buff *skb, u8 ol4_proto, + l4.tcp->doff); + break; + case IPPROTO_UDP: +- if (hns3_tunnel_csum_bug(skb)) +- return skb_checksum_help(skb); ++ if (hns3_tunnel_csum_bug(skb)) { ++ int ret = skb_put_padto(skb, 
HNS3_MIN_TUN_PKT_LEN); ++ ++ return ret ? ret : skb_checksum_help(skb); ++ } + + hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4CS_B, 1); + hns3_set_field(*type_cs_vlan_tso, HNS3_TXD_L4T_S, +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index aa402e2671212..f44e8401496b1 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -1328,9 +1328,10 @@ static void hclge_init_kdump_kernel_config(struct hclge_dev *hdev) + + static int hclge_configure(struct hclge_dev *hdev) + { ++ const struct cpumask *cpumask = cpu_online_mask; + struct hclge_cfg cfg; + unsigned int i; +- int ret; ++ int node, ret; + + ret = hclge_get_cfg(hdev, &cfg); + if (ret) { +@@ -1390,11 +1391,12 @@ static int hclge_configure(struct hclge_dev *hdev) + + hclge_init_kdump_kernel_config(hdev); + +- /* Set the init affinity based on pci func number */ +- i = cpumask_weight(cpumask_of_node(dev_to_node(&hdev->pdev->dev))); +- i = i ? PCI_FUNC(hdev->pdev->devfn) % i : 0; +- cpumask_set_cpu(cpumask_local_spread(i, dev_to_node(&hdev->pdev->dev)), +- &hdev->affinity_mask); ++ /* Set the affinity based on numa node */ ++ node = dev_to_node(&hdev->pdev->dev); ++ if (node != NUMA_NO_NODE) ++ cpumask = cpumask_of_node(node); ++ ++ cpumask_copy(&hdev->affinity_mask, cpumask); + + return ret; + } +@@ -6683,11 +6685,12 @@ static void hclge_ae_stop(struct hnae3_handle *handle) + hclge_clear_arfs_rules(handle); + spin_unlock_bh(&hdev->fd_rule_lock); + +- /* If it is not PF reset, the firmware will disable the MAC, ++ /* If it is not PF reset or FLR, the firmware will disable the MAC, + * so it only need to stop phy here. 
+ */ + if (test_bit(HCLGE_STATE_RST_HANDLING, &hdev->state) && +- hdev->reset_type != HNAE3_FUNC_RESET) { ++ hdev->reset_type != HNAE3_FUNC_RESET && ++ hdev->reset_type != HNAE3_FLR_RESET) { + hclge_mac_stop_phy(hdev); + hclge_update_link_status(hdev); + return; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +index ea348ebbbf2e9..db2e9dd5681eb 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +@@ -1956,6 +1956,8 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data) + + hclgevf_enable_vector(&hdev->misc_vector, false); + event_cause = hclgevf_check_evt_cause(hdev, &clearval); ++ if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER) ++ hclgevf_clear_event_cause(hdev, clearval); + + switch (event_cause) { + case HCLGEVF_VECTOR0_EVENT_RST: +@@ -1968,10 +1970,8 @@ static irqreturn_t hclgevf_misc_irq_handle(int irq, void *data) + break; + } + +- if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER) { +- hclgevf_clear_event_cause(hdev, clearval); ++ if (event_cause != HCLGEVF_VECTOR0_EVENT_OTHER) + hclgevf_enable_vector(&hdev->misc_vector, true); +- } + + return IRQ_HANDLED; + } +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c +index ecfe588f330ef..cfe7229593ead 100644 +--- a/drivers/net/ethernet/ibm/ibmvnic.c ++++ b/drivers/net/ethernet/ibm/ibmvnic.c +@@ -4277,6 +4277,14 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq, + return 0; + } + ++ if (adapter->failover_pending) { ++ adapter->init_done_rc = -EAGAIN; ++ netdev_dbg(netdev, "Failover pending, ignoring login response\n"); ++ complete(&adapter->init_done); ++ /* login response buffer will be released on reset */ ++ return 0; ++ } ++ + netdev->mtu = adapter->req_mtu - ETH_HLEN; + + netdev_dbg(adapter->netdev, "Login Response Buffer:\n"); +diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c 
b/drivers/net/ethernet/intel/iavf/iavf_main.c +index 94a3f000e999b..bc46c262b42d8 100644 +--- a/drivers/net/ethernet/intel/iavf/iavf_main.c ++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c +@@ -142,6 +142,30 @@ enum iavf_status iavf_free_virt_mem_d(struct iavf_hw *hw, + return 0; + } + ++/** ++ * iavf_lock_timeout - try to set bit but give up after timeout ++ * @adapter: board private structure ++ * @bit: bit to set ++ * @msecs: timeout in msecs ++ * ++ * Returns 0 on success, negative on failure ++ **/ ++static int iavf_lock_timeout(struct iavf_adapter *adapter, ++ enum iavf_critical_section_t bit, ++ unsigned int msecs) ++{ ++ unsigned int wait, delay = 10; ++ ++ for (wait = 0; wait < msecs; wait += delay) { ++ if (!test_and_set_bit(bit, &adapter->crit_section)) ++ return 0; ++ ++ msleep(delay); ++ } ++ ++ return -1; ++} ++ + /** + * iavf_schedule_reset - Set the flags and schedule a reset event + * @adapter: board private structure +@@ -1961,7 +1985,6 @@ static void iavf_watchdog_task(struct work_struct *work) + /* check for hw reset */ + reg_val = rd32(hw, IAVF_VF_ARQLEN1) & IAVF_VF_ARQLEN1_ARQENABLE_MASK; + if (!reg_val) { +- adapter->state = __IAVF_RESETTING; + adapter->flags |= IAVF_FLAG_RESET_PENDING; + adapter->aq_required = 0; + adapter->current_op = VIRTCHNL_OP_UNKNOWN; +@@ -2077,6 +2100,10 @@ static void iavf_reset_task(struct work_struct *work) + if (test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) + return; + ++ if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 200)) { ++ schedule_work(&adapter->reset_task); ++ return; ++ } + while (test_and_set_bit(__IAVF_IN_CLIENT_TASK, + &adapter->crit_section)) + usleep_range(500, 1000); +@@ -2291,6 +2318,8 @@ static void iavf_adminq_task(struct work_struct *work) + if (!event.msg_buf) + goto out; + ++ if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 200)) ++ goto freedom; + do { + ret = iavf_clean_arq_element(hw, &event, &pending); + v_op = (enum 
virtchnl_ops)le32_to_cpu(event.desc.cookie_high); +@@ -2304,6 +2333,7 @@ static void iavf_adminq_task(struct work_struct *work) + if (pending != 0) + memset(event.msg_buf, 0, IAVF_MAX_AQ_BUF_SIZE); + } while (pending); ++ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section); + + if ((adapter->flags & + (IAVF_FLAG_RESET_PENDING | IAVF_FLAG_RESET_NEEDED)) || +@@ -3600,6 +3630,10 @@ static void iavf_init_task(struct work_struct *work) + init_task.work); + struct iavf_hw *hw = &adapter->hw; + ++ if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 5000)) { ++ dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__); ++ return; ++ } + switch (adapter->state) { + case __IAVF_STARTUP: + if (iavf_startup(adapter) < 0) +@@ -3612,14 +3646,14 @@ static void iavf_init_task(struct work_struct *work) + case __IAVF_INIT_GET_RESOURCES: + if (iavf_init_get_resources(adapter) < 0) + goto init_failed; +- return; ++ goto out; + default: + goto init_failed; + } + + queue_delayed_work(iavf_wq, &adapter->init_task, + msecs_to_jiffies(30)); +- return; ++ goto out; + init_failed: + if (++adapter->aq_wait_count > IAVF_AQ_MAX_ERR) { + dev_err(&adapter->pdev->dev, +@@ -3628,9 +3662,11 @@ init_failed: + iavf_shutdown_adminq(hw); + adapter->state = __IAVF_STARTUP; + queue_delayed_work(iavf_wq, &adapter->init_task, HZ * 5); +- return; ++ goto out; + } + queue_delayed_work(iavf_wq, &adapter->init_task, HZ); ++out: ++ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section); + } + + /** +@@ -3647,9 +3683,12 @@ static void iavf_shutdown(struct pci_dev *pdev) + if (netif_running(netdev)) + iavf_close(netdev); + ++ if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 5000)) ++ dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__); + /* Prevent the watchdog from running. 
*/ + adapter->state = __IAVF_REMOVE; + adapter->aq_required = 0; ++ clear_bit(__IAVF_IN_CRITICAL_TASK, &adapter->crit_section); + + #ifdef CONFIG_PM + pci_save_state(pdev); +@@ -3878,10 +3917,6 @@ static void iavf_remove(struct pci_dev *pdev) + err); + } + +- /* Shut down all the garbage mashers on the detention level */ +- adapter->state = __IAVF_REMOVE; +- adapter->aq_required = 0; +- adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; + iavf_request_reset(adapter); + msleep(50); + /* If the FW isn't responding, kick it once, but only once. */ +@@ -3889,6 +3924,13 @@ static void iavf_remove(struct pci_dev *pdev) + iavf_request_reset(adapter); + msleep(50); + } ++ if (iavf_lock_timeout(adapter, __IAVF_IN_CRITICAL_TASK, 5000)) ++ dev_warn(&adapter->pdev->dev, "failed to set __IAVF_IN_CRITICAL_TASK in %s\n", __FUNCTION__); ++ ++ /* Shut down all the garbage mashers on the detention level */ ++ adapter->state = __IAVF_REMOVE; ++ adapter->aq_required = 0; ++ adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED; + iavf_free_all_tx_resources(adapter); + iavf_free_all_rx_resources(adapter); + iavf_misc_irq_disable(adapter); +diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c +index 084cf4a4114ad..9ba05d9aa8e08 100644 +--- a/drivers/net/ethernet/intel/igc/igc_main.c ++++ b/drivers/net/ethernet/intel/igc/igc_main.c +@@ -2693,6 +2693,7 @@ static irqreturn_t igc_msix_ring(int irq, void *data) + */ + static int igc_request_msix(struct igc_adapter *adapter) + { ++ unsigned int num_q_vectors = adapter->num_q_vectors; + int i = 0, err = 0, vector = 0, free_vector = 0; + struct net_device *netdev = adapter->netdev; + +@@ -2701,7 +2702,13 @@ static int igc_request_msix(struct igc_adapter *adapter) + if (err) + goto err_out; + +- for (i = 0; i < adapter->num_q_vectors; i++) { ++ if (num_q_vectors > MAX_Q_VECTORS) { ++ num_q_vectors = MAX_Q_VECTORS; ++ dev_warn(&adapter->pdev->dev, ++ "The number of queue vectors (%d) is higher than max allowed 
(%d)\n", ++ adapter->num_q_vectors, MAX_Q_VECTORS); ++ } ++ for (i = 0; i < num_q_vectors; i++) { + struct igc_q_vector *q_vector = adapter->q_vector[i]; + + vector++; +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +index 76547d35cd0e1..bf091a6c0cd2d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +@@ -865,7 +865,7 @@ static void cb_timeout_handler(struct work_struct *work) + ent->ret = -ETIMEDOUT; + mlx5_core_warn(dev, "cmd[%d]: %s(0x%x) Async, timeout. Will cause a leak of a command resource\n", + ent->idx, mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in)); +- mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true); ++ mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true); + + out: + cmd_ent_put(ent); /* for the cmd_ent_get() took on schedule delayed work */ +@@ -977,7 +977,7 @@ static void cmd_work_handler(struct work_struct *work) + MLX5_SET(mbox_out, ent->out, status, status); + MLX5_SET(mbox_out, ent->out, syndrome, drv_synd); + +- mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true); ++ mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true); + return; + } + +@@ -991,7 +991,7 @@ static void cmd_work_handler(struct work_struct *work) + poll_timeout(ent); + /* make sure we read the descriptor after ownership is SW */ + rmb(); +- mlx5_cmd_comp_handler(dev, 1UL << ent->idx, (ent->ret == -ETIMEDOUT)); ++ mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, (ent->ret == -ETIMEDOUT)); + } + } + +@@ -1051,7 +1051,7 @@ static void wait_func_handle_exec_timeout(struct mlx5_core_dev *dev, + mlx5_command_str(msg_to_opcode(ent->in)), msg_to_opcode(ent->in)); + + ent->ret = -ETIMEDOUT; +- mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true); ++ mlx5_cmd_comp_handler(dev, 1ULL << ent->idx, true); + } + + static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent) +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c 
b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +index dc36b0db37222..97359417c6e7f 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c +@@ -1005,7 +1005,7 @@ int mlx5_fw_tracer_init(struct mlx5_fw_tracer *tracer) + err = mlx5_core_alloc_pd(dev, &tracer->buff.pdn); + if (err) { + mlx5_core_warn(dev, "FWTracer: Failed to allocate PD %d\n", err); +- return err; ++ goto err_cancel_work; + } + + err = mlx5_fw_tracer_create_mkey(tracer); +@@ -1029,6 +1029,7 @@ err_notifier_unregister: + mlx5_core_destroy_mkey(dev, &tracer->buff.mkey); + err_dealloc_pd: + mlx5_core_dealloc_pd(dev, tracer->buff.pdn); ++err_cancel_work: + cancel_work_sync(&tracer->read_fw_strings_work); + return err; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +index 739bf5dc5a252..5fe4e028567a9 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c +@@ -1606,9 +1606,9 @@ static int build_match_list(struct match_list_head *match_head, + + curr_match = kmalloc(sizeof(*curr_match), GFP_ATOMIC); + if (!curr_match) { ++ rcu_read_unlock(); + free_match_list(match_head, ft_locked); +- err = -ENOMEM; +- goto out; ++ return -ENOMEM; + } + if (!tree_get_node(&g->node)) { + kfree(curr_match); +@@ -1617,7 +1617,6 @@ static int build_match_list(struct match_list_head *match_head, + curr_match->g = g; + list_add_tail(&curr_match->list, &match_head->list); + } +-out: + rcu_read_unlock(); + return err; + } +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c +index f012aac83b10e..401564b94eb10 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c +@@ -603,6 +603,7 @@ static int dr_cmd_modify_qp_rtr2rts(struct mlx5_core_dev 
*mdev, + MLX5_SET(qpc, qpc, log_ack_req_freq, 0); + MLX5_SET(qpc, qpc, retry_count, attr->retry_cnt); + MLX5_SET(qpc, qpc, rnr_retry, attr->rnr_retry); ++ MLX5_SET(qpc, qpc, primary_address_path.ack_timeout, 0x8); /* ~1ms */ + + return mlx5_core_qp_modify(mdev, MLX5_CMD_OP_RTR2RTS_QP, 0, qpc, + &dr_qp->mqp); +diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c +index 5d85ae59bc51e..3769b15b04b3b 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c +@@ -3173,6 +3173,7 @@ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn, + struct qed_nvm_image_att *p_image_att) + { + enum nvm_image_type type; ++ int rc; + u32 i; + + /* Translate image_id into MFW definitions */ +@@ -3198,7 +3199,10 @@ qed_mcp_get_nvm_image_att(struct qed_hwfn *p_hwfn, + return -EINVAL; + } + +- qed_mcp_nvm_info_populate(p_hwfn); ++ rc = qed_mcp_nvm_info_populate(p_hwfn); ++ if (rc) ++ return rc; ++ + for (i = 0; i < p_hwfn->nvm_info.num_images; i++) + if (type == p_hwfn->nvm_info.image_att[i].image_type) + break; +diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c +index c48a0e2d4d7ef..6a009d51ec510 100644 +--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c ++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_init.c +@@ -440,7 +440,6 @@ int qlcnic_pinit_from_rom(struct qlcnic_adapter *adapter) + QLCWR32(adapter, QLCNIC_CRB_PEG_NET_4 + 0x3c, 1); + msleep(20); + +- qlcnic_rom_unlock(adapter); + /* big hammer don't reset CAM block on reset */ + QLCWR32(adapter, QLCNIC_ROMUSB_GLB_SW_RESET, 0xfeffffff); + +diff --git a/drivers/net/ethernet/rdc/r6040.c b/drivers/net/ethernet/rdc/r6040.c +index 274e5b4bc4ac8..f158fdf3aab2c 100644 +--- a/drivers/net/ethernet/rdc/r6040.c ++++ b/drivers/net/ethernet/rdc/r6040.c +@@ -119,6 +119,8 @@ + #define PHY_ST 0x8A /* PHY status register */ + #define MAC_SM 0xAC /* MAC status machine */ + #define MAC_SM_RST 
0x0002 /* MAC status machine reset */ ++#define MD_CSC 0xb6 /* MDC speed control register */ ++#define MD_CSC_DEFAULT 0x0030 + #define MAC_ID 0xBE /* Identifier register */ + + #define TX_DCNT 0x80 /* TX descriptor count */ +@@ -354,8 +356,9 @@ static void r6040_reset_mac(struct r6040_private *lp) + { + void __iomem *ioaddr = lp->base; + int limit = MAC_DEF_TIMEOUT; +- u16 cmd; ++ u16 cmd, md_csc; + ++ md_csc = ioread16(ioaddr + MD_CSC); + iowrite16(MAC_RST, ioaddr + MCR1); + while (limit--) { + cmd = ioread16(ioaddr + MCR1); +@@ -367,6 +370,10 @@ static void r6040_reset_mac(struct r6040_private *lp) + iowrite16(MAC_SM_RST, ioaddr + MAC_SM); + iowrite16(0, ioaddr + MAC_SM); + mdelay(5); ++ ++ /* Restore MDIO clock frequency */ ++ if (md_csc != MD_CSC_DEFAULT) ++ iowrite16(md_csc, ioaddr + MD_CSC); + } + + static void r6040_init_mac_regs(struct net_device *dev) +diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c +index 931a44fe7afe8..50d85d0372302 100644 +--- a/drivers/net/ethernet/renesas/sh_eth.c ++++ b/drivers/net/ethernet/renesas/sh_eth.c +@@ -2567,6 +2567,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev) + else + txdesc->status |= cpu_to_le32(TD_TACT); + ++ wmb(); /* cur_tx must be incremented after TACT bit was set */ + mdp->cur_tx++; + + if (!(sh_eth_read(ndev, EDTRR) & mdp->cd->edtrr_trns)) +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c +index 0f56f8e336917..03b11f191c262 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c +@@ -288,10 +288,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev) + val &= ~NSS_COMMON_GMAC_CTL_PHY_IFACE_SEL; + break; + default: +- dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n", +- phy_modes(gmac->phy_mode)); +- err = -EINVAL; +- goto err_remove_config_dt; ++ goto err_unsupported_phy; + } + 
regmap_write(gmac->nss_common, NSS_COMMON_GMAC_CTL(gmac->id), val); + +@@ -308,10 +305,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev) + NSS_COMMON_CLK_SRC_CTRL_OFFSET(gmac->id); + break; + default: +- dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n", +- phy_modes(gmac->phy_mode)); +- err = -EINVAL; +- goto err_remove_config_dt; ++ goto err_unsupported_phy; + } + regmap_write(gmac->nss_common, NSS_COMMON_CLK_SRC_CTRL, val); + +@@ -328,8 +322,7 @@ static int ipq806x_gmac_probe(struct platform_device *pdev) + NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id); + break; + default: +- /* We don't get here; the switch above will have errored out */ +- unreachable(); ++ goto err_unsupported_phy; + } + regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val); + +@@ -360,6 +353,11 @@ static int ipq806x_gmac_probe(struct platform_device *pdev) + + return 0; + ++err_unsupported_phy: ++ dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n", ++ phy_modes(gmac->phy_mode)); ++ err = -EINVAL; ++ + err_remove_config_dt: + stmmac_remove_config_dt(pdev, plat_dat); + +diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c +index bede1ff289c59..a65b7291e12a2 100644 +--- a/drivers/net/ethernet/wiznet/w5100.c ++++ b/drivers/net/ethernet/wiznet/w5100.c +@@ -1052,6 +1052,8 @@ static int w5100_mmio_probe(struct platform_device *pdev) + mac_addr = data->mac_addr; + + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); ++ if (!mem) ++ return -EINVAL; + if (resource_size(mem) < W5100_BUS_DIRECT_SIZE) + ops = &w5100_mmio_indirect_ops; + else +diff --git a/drivers/net/phy/dp83640_reg.h b/drivers/net/phy/dp83640_reg.h +index 21aa24c741b96..daae7fa58fb82 100644 +--- a/drivers/net/phy/dp83640_reg.h ++++ b/drivers/net/phy/dp83640_reg.h +@@ -5,7 +5,7 @@ + #ifndef HAVE_DP83640_REGISTERS + #define HAVE_DP83640_REGISTERS + +-#define PAGE0 0x0000 ++/* #define PAGE0 0x0000 */ + #define PHYCR2 0x001c /* PHY Control Register 2 */ + + #define PAGE4 0x0004 +diff 
--git a/drivers/net/usb/cdc_mbim.c b/drivers/net/usb/cdc_mbim.c +index eb100eb33de3d..77ac5a721e7b6 100644 +--- a/drivers/net/usb/cdc_mbim.c ++++ b/drivers/net/usb/cdc_mbim.c +@@ -653,6 +653,11 @@ static const struct usb_device_id mbim_devs[] = { + .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle, + }, + ++ /* Telit LN920 */ ++ { USB_DEVICE_AND_INTERFACE_INFO(0x1bc7, 0x1061, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE), ++ .driver_info = (unsigned long)&cdc_mbim_info_avoid_altsetting_toggle, ++ }, ++ + /* default entry */ + { USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE), + .driver_info = (unsigned long)&cdc_mbim_info_zlp, +diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c +index b4885a700296e..b0a4ca3559fd8 100644 +--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c ++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.c +@@ -3351,7 +3351,8 @@ found: + "Found block at %x: code=%d ref=%d length=%d major=%d minor=%d\n", + cptr, code, reference, length, major, minor); + if ((!AR_SREV_9485(ah) && length >= 1024) || +- (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485)) { ++ (AR_SREV_9485(ah) && length > EEPROM_DATA_LEN_9485) || ++ (length > cptr)) { + ath_dbg(common, EEPROM, "Skipping bad header\n"); + cptr -= COMP_HDR_LEN; + continue; +diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c +index 9fd8e64288ffa..7e2e22b6bbbc5 100644 +--- a/drivers/net/wireless/ath/ath9k/hw.c ++++ b/drivers/net/wireless/ath/ath9k/hw.c +@@ -1622,7 +1622,6 @@ static void ath9k_hw_apply_gpio_override(struct ath_hw *ah) + ath9k_hw_gpio_request_out(ah, i, NULL, + AR_GPIO_OUTPUT_MUX_AS_OUTPUT); + ath9k_hw_set_gpio(ah, i, !!(ah->gpio_val & BIT(i))); +- ath9k_hw_gpio_free(ah, i); + } + } + +@@ -2730,14 +2729,17 @@ static void ath9k_hw_gpio_cfg_output_mux(struct ath_hw *ah, u32 gpio, u32 type) + static void ath9k_hw_gpio_cfg_soc(struct 
ath_hw *ah, u32 gpio, bool out, + const char *label) + { ++ int err; ++ + if (ah->caps.gpio_requested & BIT(gpio)) + return; + +- /* may be requested by BSP, free anyway */ +- gpio_free(gpio); +- +- if (gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label)) ++ err = gpio_request_one(gpio, out ? GPIOF_OUT_INIT_LOW : GPIOF_IN, label); ++ if (err) { ++ ath_err(ath9k_hw_common(ah), "request GPIO%d failed:%d\n", ++ gpio, err); + return; ++ } + + ah->caps.gpio_requested |= BIT(gpio); + } +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c +index 9c417dd062913..7736621dca653 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c +@@ -1043,8 +1043,10 @@ int iwl_mvm_mac_ctxt_beacon_changed(struct iwl_mvm *mvm, + return -ENOMEM; + + #ifdef CONFIG_IWLWIFI_DEBUGFS +- if (mvm->beacon_inject_active) ++ if (mvm->beacon_inject_active) { ++ dev_kfree_skb(beacon); + return -EBUSY; ++ } + #endif + + ret = iwl_mvm_mac_ctxt_send_beacon(mvm, vif, beacon); +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c +index 09b1a6beee77c..081cbc9ec7368 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c +@@ -2970,16 +2970,20 @@ static void iwl_mvm_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy, + void *_data) + { + struct iwl_mvm_he_obss_narrow_bw_ru_data *data = _data; ++ const struct cfg80211_bss_ies *ies; + const struct element *elem; + +- elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, bss->ies->data, +- bss->ies->len); ++ rcu_read_lock(); ++ ies = rcu_dereference(bss->ies); ++ elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY, ies->data, ++ ies->len); + + if (!elem || elem->datalen < 10 || + !(elem->data[10] & + WLAN_EXT_CAPA10_OBSS_NARROW_BW_RU_TOLERANCE_SUPPORT)) { + data->tolerated = false; + } ++ 
rcu_read_unlock(); + } + + static void iwl_mvm_check_he_obss_narrow_bw_ru(struct ieee80211_hw *hw, +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c +index 8b0576cde797e..a9aab6c690e85 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c +@@ -687,10 +687,26 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg, + + mvm->fw_restart = iwlwifi_mod_params.fw_restart ? -1 : 0; + +- mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE; +- mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE; +- mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE; +- mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE; ++ if (iwl_mvm_has_new_tx_api(mvm)) { ++ /* ++ * If we have the new TX/queue allocation API initialize them ++ * all to invalid numbers. We'll rewrite the ones that we need ++ * later, but that doesn't happen for all of them all of the ++ * time (e.g. P2P Device is optional), and if a dynamic queue ++ * ends up getting number 2 (IWL_MVM_DQA_P2P_DEVICE_QUEUE) then ++ * iwl_mvm_is_static_queue() erroneously returns true, and we ++ * might have things getting stuck. 
++ */ ++ mvm->aux_queue = IWL_MVM_INVALID_QUEUE; ++ mvm->snif_queue = IWL_MVM_INVALID_QUEUE; ++ mvm->probe_queue = IWL_MVM_INVALID_QUEUE; ++ mvm->p2p_dev_queue = IWL_MVM_INVALID_QUEUE; ++ } else { ++ mvm->aux_queue = IWL_MVM_DQA_AUX_QUEUE; ++ mvm->snif_queue = IWL_MVM_DQA_INJECT_MONITOR_QUEUE; ++ mvm->probe_queue = IWL_MVM_DQA_AP_PROBE_RESP_QUEUE; ++ mvm->p2p_dev_queue = IWL_MVM_DQA_P2P_DEVICE_QUEUE; ++ } + + mvm->sf_state = SF_UNINIT; + if (iwl_mvm_has_unified_ucode(mvm)) +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +index 40cafcf40ccf0..5df4bbb6c6de3 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c +@@ -346,8 +346,9 @@ static int iwl_mvm_invalidate_sta_queue(struct iwl_mvm *mvm, int queue, + } + + static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta, +- int queue, u8 tid, u8 flags) ++ u16 *queueptr, u8 tid, u8 flags) + { ++ int queue = *queueptr; + struct iwl_scd_txq_cfg_cmd cmd = { + .scd_queue = queue, + .action = SCD_CFG_DISABLE_QUEUE, +@@ -356,6 +357,7 @@ static int iwl_mvm_disable_txq(struct iwl_mvm *mvm, struct ieee80211_sta *sta, + + if (iwl_mvm_has_new_tx_api(mvm)) { + iwl_trans_txq_free(mvm->trans, queue); ++ *queueptr = IWL_MVM_INVALID_QUEUE; + + return 0; + } +@@ -517,6 +519,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue, + u8 sta_id, tid; + unsigned long disable_agg_tids = 0; + bool same_sta; ++ u16 queue_tmp = queue; + int ret; + + lockdep_assert_held(&mvm->mutex); +@@ -539,7 +542,7 @@ static int iwl_mvm_free_inactive_queue(struct iwl_mvm *mvm, int queue, + iwl_mvm_invalidate_sta_queue(mvm, queue, + disable_agg_tids, false); + +- ret = iwl_mvm_disable_txq(mvm, old_sta, queue, tid, 0); ++ ret = iwl_mvm_disable_txq(mvm, old_sta, &queue_tmp, tid, 0); + if (ret) { + IWL_ERR(mvm, + "Failed to free inactive queue %d (ret=%d)\n", +@@ -1209,6 +1212,7 @@ static int 
iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, + unsigned int wdg_timeout = + iwl_mvm_get_wd_timeout(mvm, mvmsta->vif, false, false); + int queue = -1; ++ u16 queue_tmp; + unsigned long disable_agg_tids = 0; + enum iwl_mvm_agg_state queue_state; + bool shared_queue = false, inc_ssn; +@@ -1357,7 +1361,8 @@ static int iwl_mvm_sta_alloc_queue(struct iwl_mvm *mvm, + return 0; + + out_err: +- iwl_mvm_disable_txq(mvm, sta, queue, tid, 0); ++ queue_tmp = queue; ++ iwl_mvm_disable_txq(mvm, sta, &queue_tmp, tid, 0); + + return ret; + } +@@ -1795,7 +1800,7 @@ static void iwl_mvm_disable_sta_queues(struct iwl_mvm *mvm, + if (mvm_sta->tid_data[i].txq_id == IWL_MVM_INVALID_QUEUE) + continue; + +- iwl_mvm_disable_txq(mvm, sta, mvm_sta->tid_data[i].txq_id, i, ++ iwl_mvm_disable_txq(mvm, sta, &mvm_sta->tid_data[i].txq_id, i, + 0); + mvm_sta->tid_data[i].txq_id = IWL_MVM_INVALID_QUEUE; + } +@@ -2005,7 +2010,7 @@ static int iwl_mvm_add_int_sta_with_queue(struct iwl_mvm *mvm, int macidx, + ret = iwl_mvm_add_int_sta_common(mvm, sta, NULL, macidx, maccolor); + if (ret) { + if (!iwl_mvm_has_new_tx_api(mvm)) +- iwl_mvm_disable_txq(mvm, NULL, *queue, ++ iwl_mvm_disable_txq(mvm, NULL, queue, + IWL_MAX_TID_COUNT, 0); + return ret; + } +@@ -2073,7 +2078,7 @@ int iwl_mvm_rm_snif_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif) + if (WARN_ON_ONCE(mvm->snif_sta.sta_id == IWL_MVM_INVALID_STA)) + return -EINVAL; + +- iwl_mvm_disable_txq(mvm, NULL, mvm->snif_queue, IWL_MAX_TID_COUNT, 0); ++ iwl_mvm_disable_txq(mvm, NULL, &mvm->snif_queue, IWL_MAX_TID_COUNT, 0); + ret = iwl_mvm_rm_sta_common(mvm, mvm->snif_sta.sta_id); + if (ret) + IWL_WARN(mvm, "Failed sending remove station\n"); +@@ -2090,7 +2095,7 @@ int iwl_mvm_rm_aux_sta(struct iwl_mvm *mvm) + if (WARN_ON_ONCE(mvm->aux_sta.sta_id == IWL_MVM_INVALID_STA)) + return -EINVAL; + +- iwl_mvm_disable_txq(mvm, NULL, mvm->aux_queue, IWL_MAX_TID_COUNT, 0); ++ iwl_mvm_disable_txq(mvm, NULL, &mvm->aux_queue, IWL_MAX_TID_COUNT, 0); + ret = 
iwl_mvm_rm_sta_common(mvm, mvm->aux_sta.sta_id); + if (ret) + IWL_WARN(mvm, "Failed sending remove station\n"); +@@ -2186,7 +2191,7 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm, + struct ieee80211_vif *vif) + { + struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif); +- int queue; ++ u16 *queueptr, queue; + + lockdep_assert_held(&mvm->mutex); + +@@ -2195,10 +2200,10 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm, + switch (vif->type) { + case NL80211_IFTYPE_AP: + case NL80211_IFTYPE_ADHOC: +- queue = mvm->probe_queue; ++ queueptr = &mvm->probe_queue; + break; + case NL80211_IFTYPE_P2P_DEVICE: +- queue = mvm->p2p_dev_queue; ++ queueptr = &mvm->p2p_dev_queue; + break; + default: + WARN(1, "Can't free bcast queue on vif type %d\n", +@@ -2206,7 +2211,8 @@ static void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm, + return; + } + +- iwl_mvm_disable_txq(mvm, NULL, queue, IWL_MAX_TID_COUNT, 0); ++ queue = *queueptr; ++ iwl_mvm_disable_txq(mvm, NULL, queueptr, IWL_MAX_TID_COUNT, 0); + if (iwl_mvm_has_new_tx_api(mvm)) + return; + +@@ -2441,7 +2447,7 @@ int iwl_mvm_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif) + + iwl_mvm_flush_sta(mvm, &mvmvif->mcast_sta, true, 0); + +- iwl_mvm_disable_txq(mvm, NULL, mvmvif->cab_queue, 0, 0); ++ iwl_mvm_disable_txq(mvm, NULL, &mvmvif->cab_queue, 0, 0); + + ret = iwl_mvm_rm_sta_common(mvm, mvmvif->mcast_sta.sta_id); + if (ret) +diff --git a/drivers/ntb/test/ntb_msi_test.c b/drivers/ntb/test/ntb_msi_test.c +index 99d826ed9c341..662067dc9ce2c 100644 +--- a/drivers/ntb/test/ntb_msi_test.c ++++ b/drivers/ntb/test/ntb_msi_test.c +@@ -372,8 +372,10 @@ static int ntb_msit_probe(struct ntb_client *client, struct ntb_dev *ntb) + if (ret) + goto remove_dbgfs; + +- if (!nm->isr_ctx) ++ if (!nm->isr_ctx) { ++ ret = -ENOMEM; + goto remove_dbgfs; ++ } + + ntb_link_enable(ntb, NTB_SPEED_AUTO, NTB_WIDTH_AUTO); + +diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c +index 
5ce4766a6c9eb..251fe75798c13 100644 +--- a/drivers/ntb/test/ntb_perf.c ++++ b/drivers/ntb/test/ntb_perf.c +@@ -597,6 +597,7 @@ static int perf_setup_inbuf(struct perf_peer *peer) + return -ENOMEM; + } + if (!IS_ALIGNED(peer->inbuf_xlat, xlat_align)) { ++ ret = -EINVAL; + dev_err(&perf->ntb->dev, "Unaligned inbuf allocated\n"); + goto err_free_inbuf; + } +diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c +index f6427a10a9908..38bbbbbc6f47f 100644 +--- a/drivers/nvme/host/tcp.c ++++ b/drivers/nvme/host/tcp.c +@@ -642,17 +642,9 @@ static int nvme_tcp_recv_data(struct nvme_tcp_queue *queue, struct sk_buff *skb, + unsigned int *offset, size_t *len) + { + struct nvme_tcp_data_pdu *pdu = (void *)queue->pdu; +- struct nvme_tcp_request *req; +- struct request *rq; +- +- rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id); +- if (!rq) { +- dev_err(queue->ctrl->ctrl.device, +- "queue %d tag %#x not found\n", +- nvme_tcp_queue_id(queue), pdu->command_id); +- return -ENOENT; +- } +- req = blk_mq_rq_to_pdu(rq); ++ struct request *rq = ++ blk_mq_tag_to_rq(nvme_tcp_tagset(queue), pdu->command_id); ++ struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq); + + while (true) { + int recv_len, ret; +diff --git a/drivers/of/kobj.c b/drivers/of/kobj.c +index a32e60b024b8d..6675b5e56960c 100644 +--- a/drivers/of/kobj.c ++++ b/drivers/of/kobj.c +@@ -119,7 +119,7 @@ int __of_attach_node_sysfs(struct device_node *np) + struct property *pp; + int rc; + +- if (!of_kset) ++ if (!IS_ENABLED(CONFIG_SYSFS) || !of_kset) + return 0; + + np->kobj.kset = of_kset; +diff --git a/drivers/opp/of.c b/drivers/opp/of.c +index 603c688fe23dc..30cc407c8f93f 100644 +--- a/drivers/opp/of.c ++++ b/drivers/opp/of.c +@@ -95,15 +95,7 @@ static struct dev_pm_opp *_find_opp_of_np(struct opp_table *opp_table, + static struct device_node *of_parse_required_opp(struct device_node *np, + int index) + { +- struct device_node *required_np; +- +- required_np = of_parse_phandle(np, "required-opps", 
index); +- if (unlikely(!required_np)) { +- pr_err("%s: Unable to parse required-opps: %pOF, index: %d\n", +- __func__, np, index); +- } +- +- return required_np; ++ return of_parse_phandle(np, "required-opps", index); + } + + /* The caller must call dev_pm_opp_put_opp_table() after the table is used */ +@@ -996,7 +988,7 @@ int of_get_required_opp_performance_state(struct device_node *np, int index) + + required_np = of_parse_required_opp(np, index); + if (!required_np) +- return -EINVAL; ++ return -ENODEV; + + opp_table = _find_table_of_opp_np(required_np); + if (IS_ERR(opp_table)) { +diff --git a/drivers/parport/ieee1284_ops.c b/drivers/parport/ieee1284_ops.c +index 5d41dda6da4e7..75daa16f38b7f 100644 +--- a/drivers/parport/ieee1284_ops.c ++++ b/drivers/parport/ieee1284_ops.c +@@ -535,7 +535,7 @@ size_t parport_ieee1284_ecp_read_data (struct parport *port, + goto out; + + /* Yield the port for a while. */ +- if (count && dev->port->irq != PARPORT_IRQ_NONE) { ++ if (dev->port->irq != PARPORT_IRQ_NONE) { + parport_release (dev); + schedule_timeout_interruptible(msecs_to_jiffies(40)); + parport_claim_or_block (dev); +diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c +index 0a2902569f140..0538348ed843f 100644 +--- a/drivers/pci/controller/pci-aardvark.c ++++ b/drivers/pci/controller/pci-aardvark.c +@@ -62,6 +62,7 @@ + #define PIO_COMPLETION_STATUS_CRS 2 + #define PIO_COMPLETION_STATUS_CA 4 + #define PIO_NON_POSTED_REQ BIT(10) ++#define PIO_ERR_STATUS BIT(11) + #define PIO_ADDR_LS (PIO_BASE_ADDR + 0x8) + #define PIO_ADDR_MS (PIO_BASE_ADDR + 0xc) + #define PIO_WR_DATA (PIO_BASE_ADDR + 0x10) +@@ -176,7 +177,7 @@ + (PCIE_CONF_BUS(bus) | PCIE_CONF_DEV(PCI_SLOT(devfn)) | \ + PCIE_CONF_FUNC(PCI_FUNC(devfn)) | PCIE_CONF_REG(where)) + +-#define PIO_RETRY_CNT 500 ++#define PIO_RETRY_CNT 750000 /* 1.5 s */ + #define PIO_RETRY_DELAY 2 /* 2 us*/ + + #define LINK_WAIT_MAX_RETRIES 10 +@@ -193,6 +194,7 @@ struct advk_pcie { + struct list_head 
resources; + struct irq_domain *irq_domain; + struct irq_chip irq_chip; ++ raw_spinlock_t irq_lock; + struct irq_domain *msi_domain; + struct irq_domain *msi_inner_domain; + struct irq_chip msi_bottom_irq_chip; +@@ -363,7 +365,7 @@ static void advk_pcie_setup_hw(struct advk_pcie *pcie) + advk_writel(pcie, reg, PCIE_CORE_CMD_STATUS_REG); + } + +-static void advk_pcie_check_pio_status(struct advk_pcie *pcie) ++static int advk_pcie_check_pio_status(struct advk_pcie *pcie, u32 *val) + { + struct device *dev = &pcie->pdev->dev; + u32 reg; +@@ -374,14 +376,49 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie) + status = (reg & PIO_COMPLETION_STATUS_MASK) >> + PIO_COMPLETION_STATUS_SHIFT; + +- if (!status) +- return; +- ++ /* ++ * According to HW spec, the PIO status check sequence as below: ++ * 1) even if COMPLETION_STATUS(bit9:7) indicates successful, ++ * it still needs to check Error Status(bit11), only when this bit ++ * indicates no error happen, the operation is successful. ++ * 2) value Unsupported Request(1) of COMPLETION_STATUS(bit9:7) only ++ * means a PIO write error, and for PIO read it is successful with ++ * a read value of 0xFFFFFFFF. ++ * 3) value Completion Retry Status(CRS) of COMPLETION_STATUS(bit9:7) ++ * only means a PIO write error, and for PIO read it is successful ++ * with a read value of 0xFFFF0001. ++ * 4) value Completer Abort (CA) of COMPLETION_STATUS(bit9:7) means ++ * error for both PIO read and PIO write operation. ++ * 5) other errors are indicated as 'unknown'. 
++ */ + switch (status) { ++ case PIO_COMPLETION_STATUS_OK: ++ if (reg & PIO_ERR_STATUS) { ++ strcomp_status = "COMP_ERR"; ++ break; ++ } ++ /* Get the read result */ ++ if (val) ++ *val = advk_readl(pcie, PIO_RD_DATA); ++ /* No error */ ++ strcomp_status = NULL; ++ break; + case PIO_COMPLETION_STATUS_UR: + strcomp_status = "UR"; + break; + case PIO_COMPLETION_STATUS_CRS: ++ /* PCIe r4.0, sec 2.3.2, says: ++ * If CRS Software Visibility is not enabled, the Root Complex ++ * must re-issue the Configuration Request as a new Request. ++ * A Root Complex implementation may choose to limit the number ++ * of Configuration Request/CRS Completion Status loops before ++ * determining that something is wrong with the target of the ++ * Request and taking appropriate action, e.g., complete the ++ * Request to the host as a failed transaction. ++ * ++ * To simplify implementation do not re-issue the Configuration ++ * Request and complete the Request as a failed transaction. ++ */ + strcomp_status = "CRS"; + break; + case PIO_COMPLETION_STATUS_CA: +@@ -392,6 +429,9 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie) + break; + } + ++ if (!strcomp_status) ++ return 0; ++ + if (reg & PIO_NON_POSTED_REQ) + str_posted = "Non-posted"; + else +@@ -399,6 +439,8 @@ static void advk_pcie_check_pio_status(struct advk_pcie *pcie) + + dev_err(dev, "%s PIO Response Status: %s, %#x @ %#x\n", + str_posted, strcomp_status, reg, advk_readl(pcie, PIO_ADDR_LS)); ++ ++ return -EFAULT; + } + + static int advk_pcie_wait_pio(struct advk_pcie *pcie) +@@ -625,10 +667,13 @@ static int advk_pcie_rd_conf(struct pci_bus *bus, u32 devfn, + if (ret < 0) + return PCIBIOS_SET_FAILED; + +- advk_pcie_check_pio_status(pcie); ++ /* Check PIO status and get the read result */ ++ ret = advk_pcie_check_pio_status(pcie, val); ++ if (ret < 0) { ++ *val = 0xffffffff; ++ return PCIBIOS_SET_FAILED; ++ } + +- /* Get the read result */ +- *val = advk_readl(pcie, PIO_RD_DATA); + if (size == 1) + *val = (*val 
>> (8 * (where & 3))) & 0xff; + else if (size == 2) +@@ -692,7 +737,9 @@ static int advk_pcie_wr_conf(struct pci_bus *bus, u32 devfn, + if (ret < 0) + return PCIBIOS_SET_FAILED; + +- advk_pcie_check_pio_status(pcie); ++ ret = advk_pcie_check_pio_status(pcie, NULL); ++ if (ret < 0) ++ return PCIBIOS_SET_FAILED; + + return PCIBIOS_SUCCESSFUL; + } +@@ -766,22 +813,28 @@ static void advk_pcie_irq_mask(struct irq_data *d) + { + struct advk_pcie *pcie = d->domain->host_data; + irq_hw_number_t hwirq = irqd_to_hwirq(d); ++ unsigned long flags; + u32 mask; + ++ raw_spin_lock_irqsave(&pcie->irq_lock, flags); + mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); + mask |= PCIE_ISR1_INTX_ASSERT(hwirq); + advk_writel(pcie, mask, PCIE_ISR1_MASK_REG); ++ raw_spin_unlock_irqrestore(&pcie->irq_lock, flags); + } + + static void advk_pcie_irq_unmask(struct irq_data *d) + { + struct advk_pcie *pcie = d->domain->host_data; + irq_hw_number_t hwirq = irqd_to_hwirq(d); ++ unsigned long flags; + u32 mask; + ++ raw_spin_lock_irqsave(&pcie->irq_lock, flags); + mask = advk_readl(pcie, PCIE_ISR1_MASK_REG); + mask &= ~PCIE_ISR1_INTX_ASSERT(hwirq); + advk_writel(pcie, mask, PCIE_ISR1_MASK_REG); ++ raw_spin_unlock_irqrestore(&pcie->irq_lock, flags); + } + + static int advk_pcie_irq_map(struct irq_domain *h, +@@ -865,6 +918,8 @@ static int advk_pcie_init_irq_domain(struct advk_pcie *pcie) + struct irq_chip *irq_chip; + int ret = 0; + ++ raw_spin_lock_init(&pcie->irq_lock); ++ + pcie_intc_node = of_get_next_child(node, NULL); + if (!pcie_intc_node) { + dev_err(dev, "No PCIe Intc node found\n"); +diff --git a/drivers/pci/controller/pcie-xilinx-nwl.c b/drivers/pci/controller/pcie-xilinx-nwl.c +index 45c0f344ccd16..11b046b20b92a 100644 +--- a/drivers/pci/controller/pcie-xilinx-nwl.c ++++ b/drivers/pci/controller/pcie-xilinx-nwl.c +@@ -6,6 +6,7 @@ + * (C) Copyright 2014 - 2015, Xilinx, Inc. 
+ */ + ++#include + #include + #include + #include +@@ -169,6 +170,7 @@ struct nwl_pcie { + u8 root_busno; + struct nwl_msi msi; + struct irq_domain *legacy_irq_domain; ++ struct clk *clk; + raw_spinlock_t leg_mask_lock; + }; + +@@ -839,6 +841,16 @@ static int nwl_pcie_probe(struct platform_device *pdev) + return err; + } + ++ pcie->clk = devm_clk_get(dev, NULL); ++ if (IS_ERR(pcie->clk)) ++ return PTR_ERR(pcie->clk); ++ ++ err = clk_prepare_enable(pcie->clk); ++ if (err) { ++ dev_err(dev, "can't enable PCIe ref clock\n"); ++ return err; ++ } ++ + err = nwl_pcie_bridge_init(pcie); + if (err) { + dev_err(dev, "HW Initialization failed\n"); +diff --git a/drivers/pci/hotplug/TODO b/drivers/pci/hotplug/TODO +index a32070be5adf9..cc6194aa24c15 100644 +--- a/drivers/pci/hotplug/TODO ++++ b/drivers/pci/hotplug/TODO +@@ -40,9 +40,6 @@ ibmphp: + + * The return value of pci_hp_register() is not checked. + +-* iounmap(io_mem) is called in the error path of ebda_rsrc_controller() +- and once more in the error path of its caller ibmphp_access_ebda(). +- + * The various slot data structures are difficult to follow and need to be + simplified. A lot of functions are too large and too complex, they need + to be broken up into smaller, manageable pieces. 
Negative examples are +diff --git a/drivers/pci/hotplug/ibmphp_ebda.c b/drivers/pci/hotplug/ibmphp_ebda.c +index 11a2661dc0627..7fb75401ad8a7 100644 +--- a/drivers/pci/hotplug/ibmphp_ebda.c ++++ b/drivers/pci/hotplug/ibmphp_ebda.c +@@ -714,8 +714,7 @@ static int __init ebda_rsrc_controller(void) + /* init hpc structure */ + hpc_ptr = alloc_ebda_hpc(slot_num, bus_num); + if (!hpc_ptr) { +- rc = -ENOMEM; +- goto error_no_hpc; ++ return -ENOMEM; + } + hpc_ptr->ctlr_id = ctlr_id; + hpc_ptr->ctlr_relative_id = ctlr; +@@ -910,8 +909,6 @@ error: + kfree(tmp_slot); + error_no_slot: + free_ebda_hpc(hpc_ptr); +-error_no_hpc: +- iounmap(io_mem); + return rc; + } + +diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c +index 5bb37671a86ad..c8bd243717b7b 100644 +--- a/drivers/pci/msi.c ++++ b/drivers/pci/msi.c +@@ -782,6 +782,9 @@ static void msix_mask_all(void __iomem *base, int tsize) + u32 ctrl = PCI_MSIX_ENTRY_CTRL_MASKBIT; + int i; + ++ if (pci_msi_ignore_mask) ++ return; ++ + for (i = 0; i < tsize; i++, base += PCI_MSIX_ENTRY_SIZE) + writel(ctrl, base + PCI_MSIX_ENTRY_VECTOR_CTRL); + } +diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c +index 58c33b65d451a..b9550cd4280ca 100644 +--- a/drivers/pci/pci.c ++++ b/drivers/pci/pci.c +@@ -224,7 +224,7 @@ static int pci_dev_str_match_path(struct pci_dev *dev, const char *path, + + *endptr = strchrnul(path, ';'); + +- wpath = kmemdup_nul(path, *endptr - path, GFP_KERNEL); ++ wpath = kmemdup_nul(path, *endptr - path, GFP_ATOMIC); + if (!wpath) + return -ENOMEM; + +@@ -1672,11 +1672,7 @@ static int pci_enable_device_flags(struct pci_dev *dev, unsigned long flags) + * so that things like MSI message writing will behave as expected + * (e.g. if the device really is in D0 at enable time). 
+ */ +- if (dev->pm_cap) { +- u16 pmcsr; +- pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr); +- dev->current_state = (pmcsr & PCI_PM_CTRL_STATE_MASK); +- } ++ pci_update_current_state(dev, dev->current_state); + + if (atomic_inc_return(&dev->enable_cnt) > 1) + return 0; /* already enabled */ +diff --git a/drivers/pci/pcie/portdrv_core.c b/drivers/pci/pcie/portdrv_core.c +index 1b330129089fe..8637f6068f9c2 100644 +--- a/drivers/pci/pcie/portdrv_core.c ++++ b/drivers/pci/pcie/portdrv_core.c +@@ -255,8 +255,13 @@ static int get_port_device_capability(struct pci_dev *dev) + services |= PCIE_PORT_SERVICE_DPC; + + if (pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM || +- pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) +- services |= PCIE_PORT_SERVICE_BWNOTIF; ++ pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) { ++ u32 linkcap; ++ ++ pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &linkcap); ++ if (linkcap & PCI_EXP_LNKCAP_LBNC) ++ services |= PCIE_PORT_SERVICE_BWNOTIF; ++ } + + return services; + } +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c +index 97c343d31f989..686298c0f6cda 100644 +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -3252,6 +3252,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE, + PCI_DEVICE_ID_SOLARFLARE_SFC4000A_1, fixup_mpss_256); + DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SOLARFLARE, + PCI_DEVICE_ID_SOLARFLARE_SFC4000B, fixup_mpss_256); ++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_ASMEDIA, 0x0612, fixup_mpss_256); + + /* + * Intel 5000 and 5100 Memory controllers have an erratum with read completion +@@ -4683,6 +4684,18 @@ static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags) + PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); + } + ++/* ++ * Each of these NXP Root Ports is in a Root Complex with a unique segment ++ * number and does provide isolation features to disable peer transactions ++ * and validate bus numbers in requests, but does not provide an ACS ++ * capability. 
++ */ ++static int pci_quirk_nxp_rp_acs(struct pci_dev *dev, u16 acs_flags) ++{ ++ return pci_acs_ctrl_enabled(acs_flags, ++ PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF); ++} ++ + static int pci_quirk_al_acs(struct pci_dev *dev, u16 acs_flags) + { + if (pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) +@@ -4909,6 +4922,10 @@ static const struct pci_dev_acs_enabled { + { 0x10df, 0x720, pci_quirk_mf_endpoint_acs }, /* Emulex Skyhawk-R */ + /* Cavium ThunderX */ + { PCI_VENDOR_ID_CAVIUM, PCI_ANY_ID, pci_quirk_cavium_acs }, ++ /* Cavium multi-function devices */ ++ { PCI_VENDOR_ID_CAVIUM, 0xA026, pci_quirk_mf_endpoint_acs }, ++ { PCI_VENDOR_ID_CAVIUM, 0xA059, pci_quirk_mf_endpoint_acs }, ++ { PCI_VENDOR_ID_CAVIUM, 0xA060, pci_quirk_mf_endpoint_acs }, + /* APM X-Gene */ + { PCI_VENDOR_ID_AMCC, 0xE004, pci_quirk_xgene_acs }, + /* Ampere Computing */ +@@ -4929,6 +4946,39 @@ static const struct pci_dev_acs_enabled { + { PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs }, + { PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs }, + { PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs }, ++ /* NXP root ports, xx=16, 12, or 08 cores */ ++ /* LX2xx0A : without security features + CAN-FD */ ++ { PCI_VENDOR_ID_NXP, 0x8d81, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8da1, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8d83, pci_quirk_nxp_rp_acs }, ++ /* LX2xx0C : security features + CAN-FD */ ++ { PCI_VENDOR_ID_NXP, 0x8d80, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8da0, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8d82, pci_quirk_nxp_rp_acs }, ++ /* LX2xx0E : security features + CAN */ ++ { PCI_VENDOR_ID_NXP, 0x8d90, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8db0, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8d92, pci_quirk_nxp_rp_acs }, ++ /* LX2xx0N : without security features + CAN */ ++ { PCI_VENDOR_ID_NXP, 0x8d91, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8db1, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 
0x8d93, pci_quirk_nxp_rp_acs }, ++ /* LX2xx2A : without security features + CAN-FD */ ++ { PCI_VENDOR_ID_NXP, 0x8d89, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8da9, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8d8b, pci_quirk_nxp_rp_acs }, ++ /* LX2xx2C : security features + CAN-FD */ ++ { PCI_VENDOR_ID_NXP, 0x8d88, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8da8, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8d8a, pci_quirk_nxp_rp_acs }, ++ /* LX2xx2E : security features + CAN */ ++ { PCI_VENDOR_ID_NXP, 0x8d98, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8db8, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8d9a, pci_quirk_nxp_rp_acs }, ++ /* LX2xx2N : without security features + CAN */ ++ { PCI_VENDOR_ID_NXP, 0x8d99, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8db9, pci_quirk_nxp_rp_acs }, ++ { PCI_VENDOR_ID_NXP, 0x8d9b, pci_quirk_nxp_rp_acs }, + /* Zhaoxin Root/Downstream Ports */ + { PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs }, + { 0 } +@@ -5393,7 +5443,7 @@ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, + PCI_CLASS_MULTIMEDIA_HD_AUDIO, 8, quirk_gpu_hda); + + /* +- * Create device link for NVIDIA GPU with integrated USB xHCI Host ++ * Create device link for GPUs with integrated USB xHCI Host + * controller to VGA. + */ + static void quirk_gpu_usb(struct pci_dev *usb) +@@ -5402,9 +5452,11 @@ static void quirk_gpu_usb(struct pci_dev *usb) + } + DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, + PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb); ++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID, ++ PCI_CLASS_SERIAL_USB, 8, quirk_gpu_usb); + + /* +- * Create device link for NVIDIA GPU with integrated Type-C UCSI controller ++ * Create device link for GPUs with integrated Type-C UCSI controller + * to VGA. Currently there is no class code defined for UCSI device over PCI + * so using UNKNOWN class for now and it will be updated when UCSI + * over PCI gets a class code. 
+@@ -5417,6 +5469,9 @@ static void quirk_gpu_usb_typec_ucsi(struct pci_dev *ucsi) + DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_ANY_ID, + PCI_CLASS_SERIAL_UNKNOWN, 8, + quirk_gpu_usb_typec_ucsi); ++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_ATI, PCI_ANY_ID, ++ PCI_CLASS_SERIAL_UNKNOWN, 8, ++ quirk_gpu_usb_typec_ucsi); + + /* + * Enable the NVIDIA GPU integrated HDA controller if the BIOS left it +diff --git a/drivers/pci/syscall.c b/drivers/pci/syscall.c +index 8b003c890b87b..c9f03418e71e0 100644 +--- a/drivers/pci/syscall.c ++++ b/drivers/pci/syscall.c +@@ -22,8 +22,10 @@ SYSCALL_DEFINE5(pciconfig_read, unsigned long, bus, unsigned long, dfn, + long err; + int cfg_ret; + ++ err = -EPERM; ++ dev = NULL; + if (!capable(CAP_SYS_ADMIN)) +- return -EPERM; ++ goto error; + + err = -ENODEV; + dev = pci_get_domain_bus_and_slot(0, bus, dfn); +diff --git a/drivers/pinctrl/pinctrl-ingenic.c b/drivers/pinctrl/pinctrl-ingenic.c +index 91596eee0bda1..ba078a7098468 100644 +--- a/drivers/pinctrl/pinctrl-ingenic.c ++++ b/drivers/pinctrl/pinctrl-ingenic.c +@@ -348,7 +348,7 @@ static const struct ingenic_chip_info jz4725b_chip_info = { + }; + + static const u32 jz4760_pull_ups[6] = { +- 0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0xfffff00f, ++ 0xffffffff, 0xfffcf3ff, 0xffffffff, 0xffffcfff, 0xfffffb7c, 0x0000000f, + }; + + static const u32 jz4760_pull_downs[6] = { +@@ -611,11 +611,11 @@ static const struct ingenic_chip_info jz4760b_chip_info = { + }; + + static const u32 jz4770_pull_ups[6] = { +- 0x3fffffff, 0xfff0030c, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0xffa7f00f, ++ 0x3fffffff, 0xfff0f3fc, 0xffffffff, 0xffff4fff, 0xfffffb7c, 0x0024f00f, + }; + + static const u32 jz4770_pull_downs[6] = { +- 0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x00580ff0, ++ 0x00000000, 0x000f0c03, 0x00000000, 0x0000b000, 0x00000483, 0x005b0ff0, + }; + + static int jz4770_uart0_data_pins[] = { 0xa0, 0xa3, }; +diff --git a/drivers/pinctrl/pinctrl-single.c 
b/drivers/pinctrl/pinctrl-single.c +index a9d511982780c..fb1c8965cb991 100644 +--- a/drivers/pinctrl/pinctrl-single.c ++++ b/drivers/pinctrl/pinctrl-single.c +@@ -1201,6 +1201,7 @@ static int pcs_parse_bits_in_pinctrl_entry(struct pcs_device *pcs, + + if (PCS_HAS_PINCONF) { + dev_err(pcs->dev, "pinconf not supported\n"); ++ res = -ENOTSUPP; + goto free_pingroups; + } + +diff --git a/drivers/pinctrl/pinctrl-stmfx.c b/drivers/pinctrl/pinctrl-stmfx.c +index ccdf0bb214149..835c14bb315bc 100644 +--- a/drivers/pinctrl/pinctrl-stmfx.c ++++ b/drivers/pinctrl/pinctrl-stmfx.c +@@ -540,7 +540,7 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id) + u8 pending[NR_GPIO_REGS]; + u8 src[NR_GPIO_REGS] = {0, 0, 0}; + unsigned long n, status; +- int ret; ++ int i, ret; + + ret = regmap_bulk_read(pctl->stmfx->map, STMFX_REG_IRQ_GPI_PENDING, + &pending, NR_GPIO_REGS); +@@ -550,7 +550,9 @@ static irqreturn_t stmfx_pinctrl_irq_thread_fn(int irq, void *dev_id) + regmap_bulk_write(pctl->stmfx->map, STMFX_REG_IRQ_GPI_SRC, + src, NR_GPIO_REGS); + +- status = *(unsigned long *)pending; ++ BUILD_BUG_ON(NR_GPIO_REGS > sizeof(status)); ++ for (i = 0, status = 0; i < NR_GPIO_REGS; i++) ++ status |= (unsigned long)pending[i] << (i * 8); + for_each_set_bit(n, &status, gc->ngpio) { + handle_nested_irq(irq_find_mapping(gc->irq.domain, n)); + stmfx_pinctrl_irq_toggle_trigger(pctl, n); +diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c +index f26574ef234ab..601fffeba39fe 100644 +--- a/drivers/pinctrl/samsung/pinctrl-samsung.c ++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c +@@ -918,7 +918,7 @@ static int samsung_pinctrl_register(struct platform_device *pdev, + pin_bank->grange.pin_base = drvdata->pin_base + + pin_bank->pin_base; + pin_bank->grange.base = pin_bank->grange.pin_base; +- pin_bank->grange.npins = pin_bank->gpio_chip.ngpio; ++ pin_bank->grange.npins = pin_bank->nr_pins; + pin_bank->grange.gc = &pin_bank->gpio_chip; + 
pinctrl_add_gpio_range(drvdata->pctl_dev, &pin_bank->grange); + } +diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c +index f659f96bda128..9b575e9dd71c5 100644 +--- a/drivers/platform/chrome/cros_ec_proto.c ++++ b/drivers/platform/chrome/cros_ec_proto.c +@@ -213,6 +213,15 @@ static int cros_ec_host_command_proto_query(struct cros_ec_device *ec_dev, + msg->insize = sizeof(struct ec_response_get_protocol_info); + + ret = send_command(ec_dev, msg); ++ /* ++ * Send command once again when timeout occurred. ++ * Fingerprint MCU (FPMCU) is restarted during system boot which ++ * introduces small window in which FPMCU won't respond for any ++ * messages sent by kernel. There is no need to wait before next ++ * attempt because we waited at least EC_MSG_DEADLINE_MS. ++ */ ++ if (ret == -ETIMEDOUT) ++ ret = send_command(ec_dev, msg); + + if (ret < 0) { + dev_dbg(ec_dev->dev, +diff --git a/drivers/platform/x86/dell-smbios-wmi.c b/drivers/platform/x86/dell-smbios-wmi.c +index c97bd4a452422..5821e9d9a4ce4 100644 +--- a/drivers/platform/x86/dell-smbios-wmi.c ++++ b/drivers/platform/x86/dell-smbios-wmi.c +@@ -69,6 +69,7 @@ static int run_smbios_call(struct wmi_device *wdev) + if (obj->type == ACPI_TYPE_INTEGER) + dev_dbg(&wdev->dev, "SMBIOS call failed: %llu\n", + obj->integer.value); ++ kfree(output.pointer); + return -EIO; + } + memcpy(&priv->buf->std, obj->buffer.pointer, obj->buffer.length); +diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c +index ab4740c3bf573..f8f8207a1895e 100644 +--- a/drivers/power/supply/max17042_battery.c ++++ b/drivers/power/supply/max17042_battery.c +@@ -842,8 +842,12 @@ static irqreturn_t max17042_thread_handler(int id, void *dev) + { + struct max17042_chip *chip = dev; + u32 val; ++ int ret; ++ ++ ret = regmap_read(chip->regmap, MAX17042_STATUS, &val); ++ if (ret) ++ return IRQ_HANDLED; + +- regmap_read(chip->regmap, MAX17042_STATUS, &val); + if ((val & 
STATUS_INTR_SOCMIN_BIT) || + (val & STATUS_INTR_SOCMAX_BIT)) { + dev_info(&chip->client->dev, "SOC threshold INTR\n"); +diff --git a/drivers/rtc/rtc-tps65910.c b/drivers/rtc/rtc-tps65910.c +index 2c0467a9e7179..8d1b1fda62dd1 100644 +--- a/drivers/rtc/rtc-tps65910.c ++++ b/drivers/rtc/rtc-tps65910.c +@@ -460,6 +460,6 @@ static struct platform_driver tps65910_rtc_driver = { + }; + + module_platform_driver(tps65910_rtc_driver); +-MODULE_ALIAS("platform:rtc-tps65910"); ++MODULE_ALIAS("platform:tps65910-rtc"); + MODULE_AUTHOR("Venu Byravarasu "); + MODULE_LICENSE("GPL"); +diff --git a/drivers/s390/char/sclp_early.c b/drivers/s390/char/sclp_early.c +index cc5e84b80c699..faa3a4b8ed91d 100644 +--- a/drivers/s390/char/sclp_early.c ++++ b/drivers/s390/char/sclp_early.c +@@ -40,13 +40,14 @@ static void __init sclp_early_facilities_detect(struct read_info_sccb *sccb) + sclp.has_gisaf = !!(sccb->fac118 & 0x08); + sclp.has_hvs = !!(sccb->fac119 & 0x80); + sclp.has_kss = !!(sccb->fac98 & 0x01); +- sclp.has_sipl = !!(sccb->cbl & 0x4000); + if (sccb->fac85 & 0x02) + S390_lowcore.machine_flags |= MACHINE_FLAG_ESOP; + if (sccb->fac91 & 0x40) + S390_lowcore.machine_flags |= MACHINE_FLAG_TLB_GUEST; + if (sccb->cpuoff > 134) + sclp.has_diag318 = !!(sccb->byte_134 & 0x80); ++ if (sccb->cpuoff > 137) ++ sclp.has_sipl = !!(sccb->cbl & 0x4000); + sclp.rnmax = sccb->rnmax ? sccb->rnmax : sccb->rnmax2; + sclp.rzm = sccb->rnsize ? 
sccb->rnsize : sccb->rnsize2; + sclp.rzm <<= 20; +diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c +index 6e988233fb81f..6a54556119dd6 100644 +--- a/drivers/scsi/BusLogic.c ++++ b/drivers/scsi/BusLogic.c +@@ -3601,7 +3601,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt, + if (buf[0] != '\n' || len > 1) + printk("%sscsi%d: %s", blogic_msglevelmap[msglevel], adapter->host_no, buf); + } else +- printk("%s", buf); ++ pr_cont("%s", buf); + } else { + if (begin) { + if (adapter != NULL && adapter->adapter_initd) +@@ -3609,7 +3609,7 @@ static void blogic_msg(enum blogic_msglevel msglevel, char *fmt, + else + printk("%s%s", blogic_msglevelmap[msglevel], buf); + } else +- printk("%s", buf); ++ pr_cont("%s", buf); + } + begin = (buf[len - 1] == '\n'); + } +diff --git a/drivers/scsi/pcmcia/fdomain_cs.c b/drivers/scsi/pcmcia/fdomain_cs.c +index e42acf314d068..33df6a9ba9b5f 100644 +--- a/drivers/scsi/pcmcia/fdomain_cs.c ++++ b/drivers/scsi/pcmcia/fdomain_cs.c +@@ -45,8 +45,10 @@ static int fdomain_probe(struct pcmcia_device *link) + goto fail_disable; + + if (!request_region(link->resource[0]->start, FDOMAIN_REGION_SIZE, +- "fdomain_cs")) ++ "fdomain_cs")) { ++ ret = -EBUSY; + goto fail_disable; ++ } + + sh = fdomain_create(link->resource[0]->start, link->irq, 7, &link->dev); + if (!sh) { +diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c +index 7a6306f8483ec..c95e04cc64240 100644 +--- a/drivers/scsi/qedf/qedf_main.c ++++ b/drivers/scsi/qedf/qedf_main.c +@@ -2894,7 +2894,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf) + { + u32 *list; + int i; +- int status = 0, rc; ++ int status; + u32 *pbl; + dma_addr_t page; + int num_pages; +@@ -2906,7 +2906,7 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf) + */ + if (!qedf->num_queues) { + QEDF_ERR(&(qedf->dbg_ctx), "No MSI-X vectors available!\n"); +- return 1; ++ return -ENOMEM; + } + + /* +@@ -2914,7 +2914,7 @@ static int 
qedf_alloc_global_queues(struct qedf_ctx *qedf) + * addresses of our queues + */ + if (!qedf->p_cpuq) { +- status = 1; ++ status = -EINVAL; + QEDF_ERR(&qedf->dbg_ctx, "p_cpuq is NULL.\n"); + goto mem_alloc_failure; + } +@@ -2930,8 +2930,8 @@ static int qedf_alloc_global_queues(struct qedf_ctx *qedf) + "qedf->global_queues=%p.\n", qedf->global_queues); + + /* Allocate DMA coherent buffers for BDQ */ +- rc = qedf_alloc_bdq(qedf); +- if (rc) { ++ status = qedf_alloc_bdq(qedf); ++ if (status) { + QEDF_ERR(&qedf->dbg_ctx, "Unable to allocate bdq.\n"); + goto mem_alloc_failure; + } +diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c +index 1ec42c5f0b2a0..92c4a367b7bd7 100644 +--- a/drivers/scsi/qedi/qedi_main.c ++++ b/drivers/scsi/qedi/qedi_main.c +@@ -1553,7 +1553,7 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi) + { + u32 *list; + int i; +- int status = 0, rc; ++ int status; + u32 *pbl; + dma_addr_t page; + int num_pages; +@@ -1564,14 +1564,14 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi) + */ + if (!qedi->num_queues) { + QEDI_ERR(&qedi->dbg_ctx, "No MSI-X vectors available!\n"); +- return 1; ++ return -ENOMEM; + } + + /* Make sure we allocated the PBL that will contain the physical + * addresses of our queues + */ + if (!qedi->p_cpuq) { +- status = 1; ++ status = -EINVAL; + goto mem_alloc_failure; + } + +@@ -1586,13 +1586,13 @@ static int qedi_alloc_global_queues(struct qedi_ctx *qedi) + "qedi->global_queues=%p.\n", qedi->global_queues); + + /* Allocate DMA coherent buffers for BDQ */ +- rc = qedi_alloc_bdq(qedi); +- if (rc) ++ status = qedi_alloc_bdq(qedi); ++ if (status) + goto mem_alloc_failure; + + /* Allocate DMA coherent buffers for NVM_ISCSI_CFG */ +- rc = qedi_alloc_nvm_iscsi_cfg(qedi); +- if (rc) ++ status = qedi_alloc_nvm_iscsi_cfg(qedi); ++ if (status) + goto mem_alloc_failure; + + /* Allocate a CQ and an associated PBL for each MSI-X +diff --git a/drivers/scsi/qla2xxx/qla_nvme.c 
b/drivers/scsi/qla2xxx/qla_nvme.c +index 11656e864fca9..97453c12b7358 100644 +--- a/drivers/scsi/qla2xxx/qla_nvme.c ++++ b/drivers/scsi/qla2xxx/qla_nvme.c +@@ -84,8 +84,9 @@ static int qla_nvme_alloc_queue(struct nvme_fc_local_port *lport, + struct qla_hw_data *ha; + struct qla_qpair *qpair; + +- if (!qidx) +- qidx++; ++ /* Map admin queue and 1st IO queue to index 0 */ ++ if (qidx) ++ qidx--; + + vha = (struct scsi_qla_host *)lport->private; + ha = vha->hw; +diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c +index 052ce78814075..28cbefe715e59 100644 +--- a/drivers/scsi/qla2xxx/qla_os.c ++++ b/drivers/scsi/qla2xxx/qla_os.c +@@ -15,6 +15,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -2799,6 +2800,11 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id) + return ret; + } + ++ if (is_kdump_kernel()) { ++ ql2xmqsupport = 0; ++ ql2xallocfwdump = 0; ++ } ++ + /* This may fail but that's ok */ + pci_enable_pcie_error_reporting(pdev); + +diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c +index 9bc451004184f..80ff00025c03d 100644 +--- a/drivers/scsi/smartpqi/smartpqi_init.c ++++ b/drivers/scsi/smartpqi/smartpqi_init.c +@@ -1192,6 +1192,7 @@ static int pqi_get_raid_map(struct pqi_ctrl_info *ctrl_info, + "Requested %d bytes, received %d bytes", + raid_map_size, + get_unaligned_le32(&raid_map->structure_size)); ++ rc = -EINVAL; + goto error; + } + } +diff --git a/drivers/soc/aspeed/aspeed-lpc-ctrl.c b/drivers/soc/aspeed/aspeed-lpc-ctrl.c +index 01ed21e8bfee5..040c7dc1d4792 100644 +--- a/drivers/soc/aspeed/aspeed-lpc-ctrl.c ++++ b/drivers/soc/aspeed/aspeed-lpc-ctrl.c +@@ -46,7 +46,7 @@ static int aspeed_lpc_ctrl_mmap(struct file *file, struct vm_area_struct *vma) + unsigned long vsize = vma->vm_end - vma->vm_start; + pgprot_t prot = vma->vm_page_prot; + +- if (vma->vm_pgoff + vsize > lpc_ctrl->mem_base + lpc_ctrl->mem_size) ++ if (vma->vm_pgoff + vma_pages(vma) 
> lpc_ctrl->mem_size >> PAGE_SHIFT) + return -EINVAL; + + /* ast2400/2500 AHB accesses are not cache coherent */ +diff --git a/drivers/soc/aspeed/aspeed-p2a-ctrl.c b/drivers/soc/aspeed/aspeed-p2a-ctrl.c +index b60fbeaffcbd0..20b5fb2a207cc 100644 +--- a/drivers/soc/aspeed/aspeed-p2a-ctrl.c ++++ b/drivers/soc/aspeed/aspeed-p2a-ctrl.c +@@ -110,7 +110,7 @@ static int aspeed_p2a_mmap(struct file *file, struct vm_area_struct *vma) + vsize = vma->vm_end - vma->vm_start; + prot = vma->vm_page_prot; + +- if (vma->vm_pgoff + vsize > ctrl->mem_base + ctrl->mem_size) ++ if (vma->vm_pgoff + vma_pages(vma) > ctrl->mem_size >> PAGE_SHIFT) + return -EINVAL; + + /* ast2400/2500 AHB accesses are not cache coherent */ +diff --git a/drivers/soc/qcom/qcom_aoss.c b/drivers/soc/qcom/qcom_aoss.c +index 33a27e6c6d67d..45c5aa712edac 100644 +--- a/drivers/soc/qcom/qcom_aoss.c ++++ b/drivers/soc/qcom/qcom_aoss.c +@@ -472,12 +472,12 @@ static int qmp_cooling_device_add(struct qmp *qmp, + static int qmp_cooling_devices_register(struct qmp *qmp) + { + struct device_node *np, *child; +- int count = QMP_NUM_COOLING_RESOURCES; ++ int count = 0; + int ret; + + np = qmp->dev->of_node; + +- qmp->cooling_devs = devm_kcalloc(qmp->dev, count, ++ qmp->cooling_devs = devm_kcalloc(qmp->dev, QMP_NUM_COOLING_RESOURCES, + sizeof(*qmp->cooling_devs), + GFP_KERNEL); + +@@ -493,12 +493,16 @@ static int qmp_cooling_devices_register(struct qmp *qmp) + goto unroll; + } + ++ if (!count) ++ devm_kfree(qmp->dev, qmp->cooling_devs); ++ + return 0; + + unroll: + while (--count >= 0) + thermal_cooling_device_unregister + (qmp->cooling_devs[count].cdev); ++ devm_kfree(qmp->dev, qmp->cooling_devs); + + return ret; + } +diff --git a/drivers/staging/board/board.c b/drivers/staging/board/board.c +index cb6feb34dd401..f980af0373452 100644 +--- a/drivers/staging/board/board.c ++++ b/drivers/staging/board/board.c +@@ -136,6 +136,7 @@ int __init board_staging_register_clock(const struct board_staging_clk *bsc) + static int 
board_staging_add_dev_domain(struct platform_device *pdev, + const char *domain) + { ++ struct device *dev = &pdev->dev; + struct of_phandle_args pd_args; + struct device_node *np; + +@@ -148,7 +149,11 @@ static int board_staging_add_dev_domain(struct platform_device *pdev, + pd_args.np = np; + pd_args.args_count = 0; + +- return of_genpd_add_device(&pd_args, &pdev->dev); ++ /* Initialization similar to device_pm_init_common() */ ++ spin_lock_init(&dev->power.lock); ++ dev->power.early_init = true; ++ ++ return of_genpd_add_device(&pd_args, dev); + } + #else + static inline int board_staging_add_dev_domain(struct platform_device *pdev, +diff --git a/drivers/staging/ks7010/ks7010_sdio.c b/drivers/staging/ks7010/ks7010_sdio.c +index 4b379542ecd50..3fbe223d59b8e 100644 +--- a/drivers/staging/ks7010/ks7010_sdio.c ++++ b/drivers/staging/ks7010/ks7010_sdio.c +@@ -938,9 +938,9 @@ static void ks7010_private_init(struct ks_wlan_private *priv, + memset(&priv->wstats, 0, sizeof(priv->wstats)); + + /* sleep mode */ ++ atomic_set(&priv->sleepstatus.status, 0); + atomic_set(&priv->sleepstatus.doze_request, 0); + atomic_set(&priv->sleepstatus.wakeup_request, 0); +- atomic_set(&priv->sleepstatus.wakeup_request, 0); + + trx_device_init(priv); + hostif_init(priv); +diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c +index 1deb74112ad43..11d9d9155eef2 100644 +--- a/drivers/staging/rts5208/rtsx_scsi.c ++++ b/drivers/staging/rts5208/rtsx_scsi.c +@@ -2802,10 +2802,10 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip) + } + + if (dev_info_id == 0x15) { +- buf_len = 0x3A; ++ buf_len = 0x3C; + data_len = 0x3A; + } else { +- buf_len = 0x6A; ++ buf_len = 0x6C; + data_len = 0x6A; + } + +@@ -2855,11 +2855,7 @@ static int get_ms_information(struct scsi_cmnd *srb, struct rtsx_chip *chip) + } + + rtsx_stor_set_xfer_buf(buf, buf_len, srb); +- +- if (dev_info_id == 0x15) +- scsi_set_resid(srb, scsi_bufflen(srb) - 0x3C); +- else +- 
scsi_set_resid(srb, scsi_bufflen(srb) - 0x6C); ++ scsi_set_resid(srb, scsi_bufflen(srb) - buf_len); + + kfree(buf); + return STATUS_SUCCESS; +diff --git a/drivers/target/target_core_xcopy.c b/drivers/target/target_core_xcopy.c +index 596ad3edec9c0..48fabece76443 100644 +--- a/drivers/target/target_core_xcopy.c ++++ b/drivers/target/target_core_xcopy.c +@@ -533,7 +533,6 @@ void target_xcopy_release_pt(void) + * @cdb: SCSI CDB to be copied into @xpt_cmd. + * @remote_port: If false, use the LUN through which the XCOPY command has + * been received. If true, use @se_dev->xcopy_lun. +- * @alloc_mem: Whether or not to allocate an SGL list. + * + * Set up a SCSI command (READ or WRITE) that will be used to execute an + * XCOPY command. +@@ -543,12 +542,9 @@ static int target_xcopy_setup_pt_cmd( + struct xcopy_op *xop, + struct se_device *se_dev, + unsigned char *cdb, +- bool remote_port, +- bool alloc_mem) ++ bool remote_port) + { + struct se_cmd *cmd = &xpt_cmd->se_cmd; +- sense_reason_t sense_rc; +- int ret = 0, rc; + + /* + * Setup LUN+port to honor reservations based upon xop->op_origin for +@@ -564,46 +560,17 @@ static int target_xcopy_setup_pt_cmd( + cmd->se_cmd_flags |= SCF_SE_LUN_CMD; + + cmd->tag = 0; +- sense_rc = target_setup_cmd_from_cdb(cmd, cdb); +- if (sense_rc) { +- ret = -EINVAL; +- goto out; +- } ++ if (target_setup_cmd_from_cdb(cmd, cdb)) ++ return -EINVAL; + +- if (alloc_mem) { +- rc = target_alloc_sgl(&cmd->t_data_sg, &cmd->t_data_nents, +- cmd->data_length, false, false); +- if (rc < 0) { +- ret = rc; +- goto out; +- } +- /* +- * Set this bit so that transport_free_pages() allows the +- * caller to release SGLs + physical memory allocated by +- * transport_generic_get_mem().. +- */ +- cmd->se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC; +- } else { +- /* +- * Here the previously allocated SGLs for the internal READ +- * are mapped zero-copy to the internal WRITE. 
+- */ +- sense_rc = transport_generic_map_mem_to_cmd(cmd, +- xop->xop_data_sg, xop->xop_data_nents, +- NULL, 0); +- if (sense_rc) { +- ret = -EINVAL; +- goto out; +- } ++ if (transport_generic_map_mem_to_cmd(cmd, xop->xop_data_sg, ++ xop->xop_data_nents, NULL, 0)) ++ return -EINVAL; + +- pr_debug("Setup PASSTHROUGH_NOALLOC t_data_sg: %p t_data_nents:" +- " %u\n", cmd->t_data_sg, cmd->t_data_nents); +- } ++ pr_debug("Setup PASSTHROUGH_NOALLOC t_data_sg: %p t_data_nents:" ++ " %u\n", cmd->t_data_sg, cmd->t_data_nents); + + return 0; +- +-out: +- return ret; + } + + static int target_xcopy_issue_pt_cmd(struct xcopy_pt_cmd *xpt_cmd) +@@ -660,15 +627,13 @@ static int target_xcopy_read_source( + xop->src_pt_cmd = xpt_cmd; + + rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, src_dev, &cdb[0], +- remote_port, true); ++ remote_port); + if (rc < 0) { + ec_cmd->scsi_status = xpt_cmd->se_cmd.scsi_status; + transport_generic_free_cmd(se_cmd, 0); + return rc; + } + +- xop->xop_data_sg = se_cmd->t_data_sg; +- xop->xop_data_nents = se_cmd->t_data_nents; + pr_debug("XCOPY-READ: Saved xop->xop_data_sg: %p, num: %u for READ" + " memory\n", xop->xop_data_sg, xop->xop_data_nents); + +@@ -678,12 +643,6 @@ static int target_xcopy_read_source( + transport_generic_free_cmd(se_cmd, 0); + return rc; + } +- /* +- * Clear off the allocated t_data_sg, that has been saved for +- * zero-copy WRITE submission reuse in struct xcopy_op.. 
+- */ +- se_cmd->t_data_sg = NULL; +- se_cmd->t_data_nents = 0; + + return 0; + } +@@ -722,19 +681,9 @@ static int target_xcopy_write_destination( + xop->dst_pt_cmd = xpt_cmd; + + rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, dst_dev, &cdb[0], +- remote_port, false); ++ remote_port); + if (rc < 0) { +- struct se_cmd *src_cmd = &xop->src_pt_cmd->se_cmd; + ec_cmd->scsi_status = xpt_cmd->se_cmd.scsi_status; +- /* +- * If the failure happened before the t_mem_list hand-off in +- * target_xcopy_setup_pt_cmd(), Reset memory + clear flag so that +- * core releases this memory on error during X-COPY WRITE I/O. +- */ +- src_cmd->se_cmd_flags &= ~SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC; +- src_cmd->t_data_sg = xop->xop_data_sg; +- src_cmd->t_data_nents = xop->xop_data_nents; +- + transport_generic_free_cmd(se_cmd, 0); + return rc; + } +@@ -742,7 +691,6 @@ static int target_xcopy_write_destination( + rc = target_xcopy_issue_pt_cmd(xpt_cmd); + if (rc < 0) { + ec_cmd->scsi_status = xpt_cmd->se_cmd.scsi_status; +- se_cmd->se_cmd_flags &= ~SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC; + transport_generic_free_cmd(se_cmd, 0); + return rc; + } +@@ -758,7 +706,7 @@ static void target_xcopy_do_work(struct work_struct *work) + sector_t src_lba, dst_lba, end_lba; + unsigned int max_sectors; + int rc = 0; +- unsigned short nolb, cur_nolb, max_nolb, copied_nolb = 0; ++ unsigned short nolb, max_nolb, copied_nolb = 0; + + if (target_parse_xcopy_cmd(xop) != TCM_NO_SENSE) + goto err_free; +@@ -788,7 +736,23 @@ static void target_xcopy_do_work(struct work_struct *work) + (unsigned long long)src_lba, (unsigned long long)dst_lba); + + while (src_lba < end_lba) { +- cur_nolb = min(nolb, max_nolb); ++ unsigned short cur_nolb = min(nolb, max_nolb); ++ u32 cur_bytes = cur_nolb * src_dev->dev_attrib.block_size; ++ ++ if (cur_bytes != xop->xop_data_bytes) { ++ /* ++ * (Re)allocate a buffer large enough to hold the XCOPY ++ * I/O size, which can be reused each read / write loop. 
++ */ ++ target_free_sgl(xop->xop_data_sg, xop->xop_data_nents); ++ rc = target_alloc_sgl(&xop->xop_data_sg, ++ &xop->xop_data_nents, ++ cur_bytes, ++ false, false); ++ if (rc < 0) ++ goto out; ++ xop->xop_data_bytes = cur_bytes; ++ } + + pr_debug("target_xcopy_do_work: Calling read src_dev: %p src_lba: %llu," + " cur_nolb: %hu\n", src_dev, (unsigned long long)src_lba, cur_nolb); +@@ -819,12 +783,11 @@ static void target_xcopy_do_work(struct work_struct *work) + nolb -= cur_nolb; + + transport_generic_free_cmd(&xop->src_pt_cmd->se_cmd, 0); +- xop->dst_pt_cmd->se_cmd.se_cmd_flags &= ~SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC; +- + transport_generic_free_cmd(&xop->dst_pt_cmd->se_cmd, 0); + } + + xcopy_pt_undepend_remotedev(xop); ++ target_free_sgl(xop->xop_data_sg, xop->xop_data_nents); + kfree(xop); + + pr_debug("target_xcopy_do_work: Final src_lba: %llu, dst_lba: %llu\n", +@@ -838,6 +801,7 @@ static void target_xcopy_do_work(struct work_struct *work) + + out: + xcopy_pt_undepend_remotedev(xop); ++ target_free_sgl(xop->xop_data_sg, xop->xop_data_nents); + + err_free: + kfree(xop); +diff --git a/drivers/target/target_core_xcopy.h b/drivers/target/target_core_xcopy.h +index 974bc1e19ff2b..a1805a14eea07 100644 +--- a/drivers/target/target_core_xcopy.h ++++ b/drivers/target/target_core_xcopy.h +@@ -41,6 +41,7 @@ struct xcopy_op { + struct xcopy_pt_cmd *src_pt_cmd; + struct xcopy_pt_cmd *dst_pt_cmd; + ++ u32 xop_data_bytes; + u32 xop_data_nents; + struct scatterlist *xop_data_sg; + struct work_struct xop_work; +diff --git a/drivers/tty/hvc/hvsi.c b/drivers/tty/hvc/hvsi.c +index 66f95f758be05..73226337f5610 100644 +--- a/drivers/tty/hvc/hvsi.c ++++ b/drivers/tty/hvc/hvsi.c +@@ -1038,7 +1038,7 @@ static const struct tty_operations hvsi_ops = { + + static int __init hvsi_init(void) + { +- int i; ++ int i, ret; + + hvsi_driver = alloc_tty_driver(hvsi_count); + if (!hvsi_driver) +@@ -1069,12 +1069,25 @@ static int __init hvsi_init(void) + } + hvsi_wait = wait_for_state; /* irqs 
active now */ + +- if (tty_register_driver(hvsi_driver)) +- panic("Couldn't register hvsi console driver\n"); ++ ret = tty_register_driver(hvsi_driver); ++ if (ret) { ++ pr_err("Couldn't register hvsi console driver\n"); ++ goto err_free_irq; ++ } + + printk(KERN_DEBUG "HVSI: registered %i devices\n", hvsi_count); + + return 0; ++err_free_irq: ++ hvsi_wait = poll_for_state; ++ for (i = 0; i < hvsi_count; i++) { ++ struct hvsi_struct *hp = &hvsi_ports[i]; ++ ++ free_irq(hp->virq, hp); ++ } ++ tty_driver_kref_put(hvsi_driver); ++ ++ return ret; + } + device_initcall(hvsi_init); + +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index 43fc5b6a25d35..a2bb103f22fc6 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -89,7 +89,7 @@ static void moan_device(const char *str, struct pci_dev *dev) + + static int + setup_port(struct serial_private *priv, struct uart_8250_port *port, +- int bar, int offset, int regshift) ++ u8 bar, unsigned int offset, int regshift) + { + struct pci_dev *dev = priv->dev; + +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index 8a7c6d65f10ef..777ef1a9591c0 100644 +--- a/drivers/tty/serial/8250/8250_port.c ++++ b/drivers/tty/serial/8250/8250_port.c +@@ -125,7 +125,8 @@ static const struct serial8250_config uart_config[] = { + .name = "16C950/954", + .fifo_size = 128, + .tx_loadsz = 128, +- .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10, ++ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_01, ++ .rxtrig_bytes = {16, 32, 112, 120}, + /* UART_CAP_EFR breaks billionon CF bluetooth card. 
*/ + .flags = UART_CAP_FIFO | UART_CAP_SLEEP, + }, +diff --git a/drivers/tty/serial/jsm/jsm_neo.c b/drivers/tty/serial/jsm/jsm_neo.c +index bf0e2a4cb0cef..c6f927a76c3be 100644 +--- a/drivers/tty/serial/jsm/jsm_neo.c ++++ b/drivers/tty/serial/jsm/jsm_neo.c +@@ -815,7 +815,9 @@ static void neo_parse_isr(struct jsm_board *brd, u32 port) + /* Parse any modem signal changes */ + jsm_dbg(INTR, &ch->ch_bd->pci_dev, + "MOD_STAT: sending to parse_modem_sigs\n"); ++ spin_lock_irqsave(&ch->uart_port.lock, lock_flags); + neo_parse_modem(ch, readb(&ch->ch_neo_uart->msr)); ++ spin_unlock_irqrestore(&ch->uart_port.lock, lock_flags); + } + } + +diff --git a/drivers/tty/serial/jsm/jsm_tty.c b/drivers/tty/serial/jsm/jsm_tty.c +index 689774c073ca4..8438454ca653f 100644 +--- a/drivers/tty/serial/jsm/jsm_tty.c ++++ b/drivers/tty/serial/jsm/jsm_tty.c +@@ -187,6 +187,7 @@ static void jsm_tty_break(struct uart_port *port, int break_state) + + static int jsm_tty_open(struct uart_port *port) + { ++ unsigned long lock_flags; + struct jsm_board *brd; + struct jsm_channel *channel = + container_of(port, struct jsm_channel, uart_port); +@@ -240,6 +241,7 @@ static int jsm_tty_open(struct uart_port *port) + channel->ch_cached_lsr = 0; + channel->ch_stops_sent = 0; + ++ spin_lock_irqsave(&port->lock, lock_flags); + termios = &port->state->port.tty->termios; + channel->ch_c_cflag = termios->c_cflag; + channel->ch_c_iflag = termios->c_iflag; +@@ -259,6 +261,7 @@ static int jsm_tty_open(struct uart_port *port) + jsm_carrier(channel); + + channel->ch_open_count++; ++ spin_unlock_irqrestore(&port->lock, lock_flags); + + jsm_dbg(OPEN, &channel->ch_bd->pci_dev, "finish\n"); + return 0; +diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c +index 97ee1fc1cd247..ecff9b2088087 100644 +--- a/drivers/tty/serial/sh-sci.c ++++ b/drivers/tty/serial/sh-sci.c +@@ -1763,6 +1763,10 @@ static irqreturn_t sci_br_interrupt(int irq, void *ptr) + + /* Handle BREAKs */ + sci_handle_breaks(port); ++ ++ /* 
drop invalid character received before break was detected */ ++ serial_port_in(port, SCxRDR); ++ + sci_clear_SCxSR(port, SCxSR_BREAK_CLEAR(port)); + + return IRQ_HANDLED; +@@ -1842,7 +1846,8 @@ static irqreturn_t sci_mpxed_interrupt(int irq, void *ptr) + ret = sci_er_interrupt(irq, ptr); + + /* Break Interrupt */ +- if ((ssr_status & SCxSR_BRK(port)) && err_enabled) ++ if (s->irqs[SCIx_ERI_IRQ] != s->irqs[SCIx_BRI_IRQ] && ++ (ssr_status & SCxSR_BRK(port)) && err_enabled) + ret = sci_br_interrupt(irq, ptr); + + /* Overrun Interrupt */ +diff --git a/drivers/usb/chipidea/host.c b/drivers/usb/chipidea/host.c +index 48e4a5ca18359..f5f56ee07729f 100644 +--- a/drivers/usb/chipidea/host.c ++++ b/drivers/usb/chipidea/host.c +@@ -233,18 +233,26 @@ static int ci_ehci_hub_control( + ) + { + struct ehci_hcd *ehci = hcd_to_ehci(hcd); ++ unsigned int ports = HCS_N_PORTS(ehci->hcs_params); + u32 __iomem *status_reg; +- u32 temp; ++ u32 temp, port_index; + unsigned long flags; + int retval = 0; + struct device *dev = hcd->self.controller; + struct ci_hdrc *ci = dev_get_drvdata(dev); + +- status_reg = &ehci->regs->port_status[(wIndex & 0xff) - 1]; ++ port_index = wIndex & 0xff; ++ port_index -= (port_index > 0); ++ status_reg = &ehci->regs->port_status[port_index]; + + spin_lock_irqsave(&ehci->lock, flags); + + if (typeReq == SetPortFeature && wValue == USB_PORT_FEAT_SUSPEND) { ++ if (!wIndex || wIndex > ports) { ++ retval = -EPIPE; ++ goto done; ++ } ++ + temp = ehci_readl(ehci, status_reg); + if ((temp & PORT_PE) == 0 || (temp & PORT_RESET) != 0) { + retval = -EPIPE; +@@ -273,7 +281,7 @@ static int ci_ehci_hub_control( + ehci_writel(ehci, temp, status_reg); + } + +- set_bit((wIndex & 0xff) - 1, &ehci->suspended_ports); ++ set_bit(port_index, &ehci->suspended_ports); + goto done; + } + +diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c +index 24dad1d78d1ea..6bd3fdb925cd9 100644 +--- a/drivers/usb/gadget/composite.c ++++ b/drivers/usb/gadget/composite.c 
+@@ -481,7 +481,7 @@ static u8 encode_bMaxPower(enum usb_device_speed speed, + { + unsigned val; + +- if (c->MaxPower) ++ if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER)) + val = c->MaxPower; + else + val = CONFIG_USB_GADGET_VBUS_DRAW; +@@ -905,7 +905,11 @@ static int set_config(struct usb_composite_dev *cdev, + } + + /* when we return, be sure our power usage is valid */ +- power = c->MaxPower ? c->MaxPower : CONFIG_USB_GADGET_VBUS_DRAW; ++ if (c->MaxPower || (c->bmAttributes & USB_CONFIG_ATT_SELFPOWER)) ++ power = c->MaxPower; ++ else ++ power = CONFIG_USB_GADGET_VBUS_DRAW; ++ + if (gadget->speed < USB_SPEED_SUPER) + power = min(power, 500U); + else +diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c +index 99b840daf3d94..57da62e331848 100644 +--- a/drivers/usb/gadget/function/u_ether.c ++++ b/drivers/usb/gadget/function/u_ether.c +@@ -491,8 +491,9 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb, + } + spin_unlock_irqrestore(&dev->lock, flags); + +- if (skb && !in) { +- dev_kfree_skb_any(skb); ++ if (!in) { ++ if (skb) ++ dev_kfree_skb_any(skb); + return NETDEV_TX_OK; + } + +diff --git a/drivers/usb/host/ehci-mv.c b/drivers/usb/host/ehci-mv.c +index b6f196f5e252e..b0e0f8ea98a9c 100644 +--- a/drivers/usb/host/ehci-mv.c ++++ b/drivers/usb/host/ehci-mv.c +@@ -41,26 +41,25 @@ struct ehci_hcd_mv { + int (*set_vbus)(unsigned int vbus); + }; + +-static void ehci_clock_enable(struct ehci_hcd_mv *ehci_mv) ++static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv) + { +- clk_prepare_enable(ehci_mv->clk); +-} ++ int retval; + +-static void ehci_clock_disable(struct ehci_hcd_mv *ehci_mv) +-{ +- clk_disable_unprepare(ehci_mv->clk); +-} ++ retval = clk_prepare_enable(ehci_mv->clk); ++ if (retval) ++ return retval; + +-static int mv_ehci_enable(struct ehci_hcd_mv *ehci_mv) +-{ +- ehci_clock_enable(ehci_mv); +- return phy_init(ehci_mv->phy); ++ retval = phy_init(ehci_mv->phy); ++ if (retval) ++ 
clk_disable_unprepare(ehci_mv->clk); ++ ++ return retval; + } + + static void mv_ehci_disable(struct ehci_hcd_mv *ehci_mv) + { + phy_exit(ehci_mv->phy); +- ehci_clock_disable(ehci_mv); ++ clk_disable_unprepare(ehci_mv->clk); + } + + static int mv_ehci_reset(struct usb_hcd *hcd) +diff --git a/drivers/usb/host/fotg210-hcd.c b/drivers/usb/host/fotg210-hcd.c +index c3f74d6674e1d..f457e083a6f89 100644 +--- a/drivers/usb/host/fotg210-hcd.c ++++ b/drivers/usb/host/fotg210-hcd.c +@@ -2511,11 +2511,6 @@ retry_xacterr: + return count; + } + +-/* high bandwidth multiplier, as encoded in highspeed endpoint descriptors */ +-#define hb_mult(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03)) +-/* ... and packet size, for any kind of endpoint descriptor */ +-#define max_packet(wMaxPacketSize) ((wMaxPacketSize) & 0x07ff) +- + /* reverse of qh_urb_transaction: free a list of TDs. + * used for cleanup after errors, before HC sees an URB's TDs. + */ +@@ -2601,7 +2596,7 @@ static struct list_head *qh_urb_transaction(struct fotg210_hcd *fotg210, + token |= (1 /* "in" */ << 8); + /* else it's already initted to "out" pid (0 << 8) */ + +- maxpacket = max_packet(usb_maxpacket(urb->dev, urb->pipe, !is_input)); ++ maxpacket = usb_maxpacket(urb->dev, urb->pipe, !is_input); + + /* + * buffer gets wrapped in one or more qtds; +@@ -2715,9 +2710,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb, + gfp_t flags) + { + struct fotg210_qh *qh = fotg210_qh_alloc(fotg210, flags); ++ struct usb_host_endpoint *ep; + u32 info1 = 0, info2 = 0; + int is_input, type; + int maxp = 0; ++ int mult; + struct usb_tt *tt = urb->dev->tt; + struct fotg210_qh_hw *hw; + +@@ -2732,14 +2729,15 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb, + + is_input = usb_pipein(urb->pipe); + type = usb_pipetype(urb->pipe); +- maxp = usb_maxpacket(urb->dev, urb->pipe, !is_input); ++ ep = usb_pipe_endpoint(urb->dev, urb->pipe); ++ maxp = 
usb_endpoint_maxp(&ep->desc); ++ mult = usb_endpoint_maxp_mult(&ep->desc); + + /* 1024 byte maxpacket is a hardware ceiling. High bandwidth + * acts like up to 3KB, but is built from smaller packets. + */ +- if (max_packet(maxp) > 1024) { +- fotg210_dbg(fotg210, "bogus qh maxpacket %d\n", +- max_packet(maxp)); ++ if (maxp > 1024) { ++ fotg210_dbg(fotg210, "bogus qh maxpacket %d\n", maxp); + goto done; + } + +@@ -2753,8 +2751,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb, + */ + if (type == PIPE_INTERRUPT) { + qh->usecs = NS_TO_US(usb_calc_bus_time(USB_SPEED_HIGH, +- is_input, 0, +- hb_mult(maxp) * max_packet(maxp))); ++ is_input, 0, mult * maxp)); + qh->start = NO_FRAME; + + if (urb->dev->speed == USB_SPEED_HIGH) { +@@ -2791,7 +2788,7 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb, + think_time = tt ? tt->think_time : 0; + qh->tt_usecs = NS_TO_US(think_time + + usb_calc_bus_time(urb->dev->speed, +- is_input, 0, max_packet(maxp))); ++ is_input, 0, maxp)); + qh->period = urb->interval; + if (qh->period > fotg210->periodic_size) { + qh->period = fotg210->periodic_size; +@@ -2854,11 +2851,11 @@ static struct fotg210_qh *qh_make(struct fotg210_hcd *fotg210, struct urb *urb, + * to help them do so. So now people expect to use + * such nonconformant devices with Linux too; sigh. 
+ */ +- info1 |= max_packet(maxp) << 16; ++ info1 |= maxp << 16; + info2 |= (FOTG210_TUNE_MULT_HS << 30); + } else { /* PIPE_INTERRUPT */ +- info1 |= max_packet(maxp) << 16; +- info2 |= hb_mult(maxp) << 30; ++ info1 |= maxp << 16; ++ info2 |= mult << 30; + } + break; + default: +@@ -3928,6 +3925,7 @@ static void iso_stream_init(struct fotg210_hcd *fotg210, + int is_input; + long bandwidth; + unsigned multi; ++ struct usb_host_endpoint *ep; + + /* + * this might be a "high bandwidth" highspeed endpoint, +@@ -3935,14 +3933,14 @@ static void iso_stream_init(struct fotg210_hcd *fotg210, + */ + epnum = usb_pipeendpoint(pipe); + is_input = usb_pipein(pipe) ? USB_DIR_IN : 0; +- maxp = usb_maxpacket(dev, pipe, !is_input); ++ ep = usb_pipe_endpoint(dev, pipe); ++ maxp = usb_endpoint_maxp(&ep->desc); + if (is_input) + buf1 = (1 << 11); + else + buf1 = 0; + +- maxp = max_packet(maxp); +- multi = hb_mult(maxp); ++ multi = usb_endpoint_maxp_mult(&ep->desc); + buf1 |= maxp; + maxp *= multi; + +@@ -4463,13 +4461,12 @@ static bool itd_complete(struct fotg210_hcd *fotg210, struct fotg210_itd *itd) + + /* HC need not update length with this error */ + if (!(t & FOTG210_ISOC_BABBLE)) { +- desc->actual_length = +- fotg210_itdlen(urb, desc, t); ++ desc->actual_length = FOTG210_ITD_LENGTH(t); + urb->actual_length += desc->actual_length; + } + } else if (likely((t & FOTG210_ISOC_ACTIVE) == 0)) { + desc->status = 0; +- desc->actual_length = fotg210_itdlen(urb, desc, t); ++ desc->actual_length = FOTG210_ITD_LENGTH(t); + urb->actual_length += desc->actual_length; + } else { + /* URB was too late */ +diff --git a/drivers/usb/host/fotg210.h b/drivers/usb/host/fotg210.h +index 1b4db95e5c43a..291add93d84ee 100644 +--- a/drivers/usb/host/fotg210.h ++++ b/drivers/usb/host/fotg210.h +@@ -686,11 +686,6 @@ static inline unsigned fotg210_read_frame_index(struct fotg210_hcd *fotg210) + return fotg210_readl(fotg210, &fotg210->regs->frame_index); + } + +-#define fotg210_itdlen(urb, desc, t) ({ \ +- 
usb_pipein((urb)->pipe) ? \ +- (desc)->length - FOTG210_ITD_LENGTH(t) : \ +- FOTG210_ITD_LENGTH(t); \ +-}) + /*-------------------------------------------------------------------------*/ + + #endif /* __LINUX_FOTG210_H */ +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c +index a3813c75a3de8..505da4999e208 100644 +--- a/drivers/usb/host/xhci.c ++++ b/drivers/usb/host/xhci.c +@@ -4662,19 +4662,19 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci, + { + unsigned long long timeout_ns; + +- if (xhci->quirks & XHCI_INTEL_HOST) +- timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc); +- else +- timeout_ns = udev->u1_params.sel; +- + /* Prevent U1 if service interval is shorter than U1 exit latency */ + if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) { +- if (xhci_service_interval_to_ns(desc) <= timeout_ns) { ++ if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) { + dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n"); + return USB3_LPM_DISABLED; + } + } + ++ if (xhci->quirks & XHCI_INTEL_HOST) ++ timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc); ++ else ++ timeout_ns = udev->u1_params.sel; ++ + /* The U1 timeout is encoded in 1us intervals. + * Don't return a timeout of zero, because that's USB3_LPM_DISABLED. 
+ */ +@@ -4726,19 +4726,19 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci, + { + unsigned long long timeout_ns; + +- if (xhci->quirks & XHCI_INTEL_HOST) +- timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc); +- else +- timeout_ns = udev->u2_params.sel; +- + /* Prevent U2 if service interval is shorter than U2 exit latency */ + if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) { +- if (xhci_service_interval_to_ns(desc) <= timeout_ns) { ++ if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) { + dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n"); + return USB3_LPM_DISABLED; + } + } + ++ if (xhci->quirks & XHCI_INTEL_HOST) ++ timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc); ++ else ++ timeout_ns = udev->u2_params.sel; ++ + /* The U2 timeout is encoded in 256us intervals */ + timeout_ns = DIV_ROUND_UP_ULL(timeout_ns, 256 * 1000); + /* If the necessary timeout value is bigger than what we can set in the +diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c +index 327d4f7baaf7c..89d659cef5c63 100644 +--- a/drivers/usb/musb/musb_dsps.c ++++ b/drivers/usb/musb/musb_dsps.c +@@ -890,23 +890,22 @@ static int dsps_probe(struct platform_device *pdev) + if (!glue->usbss_base) + return -ENXIO; + +- if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) { +- ret = dsps_setup_optional_vbus_irq(pdev, glue); +- if (ret) +- goto err_iounmap; +- } +- + platform_set_drvdata(pdev, glue); + pm_runtime_enable(&pdev->dev); + ret = dsps_create_musb_pdev(glue, pdev); + if (ret) + goto err; + ++ if (usb_get_dr_mode(&pdev->dev) == USB_DR_MODE_PERIPHERAL) { ++ ret = dsps_setup_optional_vbus_irq(pdev, glue); ++ if (ret) ++ goto err; ++ } ++ + return 0; + + err: + pm_runtime_disable(&pdev->dev); +-err_iounmap: + iounmap(glue->usbss_base); + return ret; + } +diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c +index 98636fbf71882..170abb06a8a4d 100644 +--- 
a/drivers/usb/usbip/vhci_hcd.c ++++ b/drivers/usb/usbip/vhci_hcd.c +@@ -455,8 +455,14 @@ static int vhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, + vhci_hcd->port_status[rhport] &= ~(1 << USB_PORT_FEAT_RESET); + vhci_hcd->re_timeout = 0; + ++ /* ++ * A few drivers do usb reset during probe when ++ * the device could be in VDEV_ST_USED state ++ */ + if (vhci_hcd->vdev[rhport].ud.status == +- VDEV_ST_NOTASSIGNED) { ++ VDEV_ST_NOTASSIGNED || ++ vhci_hcd->vdev[rhport].ud.status == ++ VDEV_ST_USED) { + usbip_dbg_vhci_rh( + " enable rhport %d (status %u)\n", + rhport, +@@ -952,8 +958,32 @@ static void vhci_device_unlink_cleanup(struct vhci_device *vdev) + spin_lock(&vdev->priv_lock); + + list_for_each_entry_safe(unlink, tmp, &vdev->unlink_tx, list) { ++ struct urb *urb; ++ ++ /* give back urb of unsent unlink request */ + pr_info("unlink cleanup tx %lu\n", unlink->unlink_seqnum); ++ ++ urb = pickup_urb_and_free_priv(vdev, unlink->unlink_seqnum); ++ if (!urb) { ++ list_del(&unlink->list); ++ kfree(unlink); ++ continue; ++ } ++ ++ urb->status = -ENODEV; ++ ++ usb_hcd_unlink_urb_from_ep(hcd, urb); ++ + list_del(&unlink->list); ++ ++ spin_unlock(&vdev->priv_lock); ++ spin_unlock_irqrestore(&vhci->lock, flags); ++ ++ usb_hcd_giveback_urb(hcd, urb, urb->status); ++ ++ spin_lock_irqsave(&vhci->lock, flags); ++ spin_lock(&vdev->priv_lock); ++ + kfree(unlink); + } + +diff --git a/drivers/vfio/Kconfig b/drivers/vfio/Kconfig +index 503ed2f3fbb5e..65743de8aad11 100644 +--- a/drivers/vfio/Kconfig ++++ b/drivers/vfio/Kconfig +@@ -29,7 +29,7 @@ menuconfig VFIO + + If you don't know what to do here, say N. 
+ +-menuconfig VFIO_NOIOMMU ++config VFIO_NOIOMMU + bool "VFIO No-IOMMU support" + depends on VFIO + help +diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c +index 48e574ae60330..cec9173aac6f5 100644 +--- a/drivers/vhost/net.c ++++ b/drivers/vhost/net.c +@@ -466,7 +466,7 @@ static void vhost_tx_batch(struct vhost_net *net, + .num = nvq->batched_xdp, + .ptr = nvq->xdp, + }; +- int err; ++ int i, err; + + if (nvq->batched_xdp == 0) + goto signal_used; +@@ -475,6 +475,15 @@ static void vhost_tx_batch(struct vhost_net *net, + err = sock->ops->sendmsg(sock, msghdr, 0); + if (unlikely(err < 0)) { + vq_err(&nvq->vq, "Fail to batch sending packets\n"); ++ ++ /* free pages owned by XDP; since this is an unlikely error path, ++ * keep it simple and avoid more complex bulk update for the ++ * used pages ++ */ ++ for (i = 0; i < nvq->batched_xdp; ++i) ++ put_page(virt_to_head_page(nvq->xdp[i].data)); ++ nvq->batched_xdp = 0; ++ nvq->done_idx = 0; + return; + } + +diff --git a/drivers/video/fbdev/asiliantfb.c b/drivers/video/fbdev/asiliantfb.c +index ea31054a28ca8..c1d6e63362259 100644 +--- a/drivers/video/fbdev/asiliantfb.c ++++ b/drivers/video/fbdev/asiliantfb.c +@@ -227,6 +227,9 @@ static int asiliantfb_check_var(struct fb_var_screeninfo *var, + { + unsigned long Ftarget, ratio, remainder; + ++ if (!var->pixclock) ++ return -EINVAL; ++ + ratio = 1000000 / var->pixclock; + remainder = 1000000 % var->pixclock; + Ftarget = 1000000 * ratio + (1000000 * remainder) / var->pixclock; +diff --git a/drivers/video/fbdev/kyro/fbdev.c b/drivers/video/fbdev/kyro/fbdev.c +index a7bd9f25911b5..74bf26b527b91 100644 +--- a/drivers/video/fbdev/kyro/fbdev.c ++++ b/drivers/video/fbdev/kyro/fbdev.c +@@ -372,6 +372,11 @@ static int kyro_dev_overlay_viewport_set(u32 x, u32 y, u32 ulWidth, u32 ulHeight + /* probably haven't called CreateOverlay yet */ + return -EINVAL; + ++ if (ulWidth == 0 || ulWidth == 0xffffffff || ++ ulHeight == 0 || ulHeight == 0xffffffff || ++ (x < 2 && ulWidth + 2 == 0)) 
++ return -EINVAL; ++ + /* Stop Ramdac Output */ + DisableRamdacOutput(deviceInfo.pSTGReg); + +@@ -394,6 +399,9 @@ static int kyrofb_check_var(struct fb_var_screeninfo *var, struct fb_info *info) + { + struct kyrofb_info *par = info->par; + ++ if (!var->pixclock) ++ return -EINVAL; ++ + if (var->bits_per_pixel != 16 && var->bits_per_pixel != 32) { + printk(KERN_WARNING "kyrofb: depth not supported: %u\n", var->bits_per_pixel); + return -EINVAL; +diff --git a/drivers/video/fbdev/riva/fbdev.c b/drivers/video/fbdev/riva/fbdev.c +index ca593a3e41d74..51c9d9508c0b0 100644 +--- a/drivers/video/fbdev/riva/fbdev.c ++++ b/drivers/video/fbdev/riva/fbdev.c +@@ -1088,6 +1088,9 @@ static int rivafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info) + int mode_valid = 0; + + NVTRACE_ENTER(); ++ if (!var->pixclock) ++ return -EINVAL; ++ + switch (var->bits_per_pixel) { + case 1 ... 8: + var->red.offset = var->green.offset = var->blue.offset = 0; +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index dacd67dca43fe..946ae198b3449 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -2894,6 +2894,29 @@ int open_ctree(struct super_block *sb, + */ + fs_info->compress_type = BTRFS_COMPRESS_ZLIB; + ++ /* ++ * Flag our filesystem as having big metadata blocks if they are bigger ++ * than the page size ++ */ ++ if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) { ++ if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA)) ++ btrfs_info(fs_info, ++ "flagging fs with big metadata feature"); ++ features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA; ++ } ++ ++ /* Set up fs_info before parsing mount options */ ++ nodesize = btrfs_super_nodesize(disk_super); ++ sectorsize = btrfs_super_sectorsize(disk_super); ++ stripesize = sectorsize; ++ fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids)); ++ fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids)); ++ ++ /* Cache block sizes */ ++ fs_info->nodesize = nodesize; ++ fs_info->sectorsize = sectorsize; 
++ fs_info->stripesize = stripesize; ++ + ret = btrfs_parse_options(fs_info, options, sb->s_flags); + if (ret) { + err = ret; +@@ -2920,28 +2943,6 @@ int open_ctree(struct super_block *sb, + if (features & BTRFS_FEATURE_INCOMPAT_SKINNY_METADATA) + btrfs_info(fs_info, "has skinny extents"); + +- /* +- * flag our filesystem as having big metadata blocks if +- * they are bigger than the page size +- */ +- if (btrfs_super_nodesize(disk_super) > PAGE_SIZE) { +- if (!(features & BTRFS_FEATURE_INCOMPAT_BIG_METADATA)) +- btrfs_info(fs_info, +- "flagging fs with big metadata feature"); +- features |= BTRFS_FEATURE_INCOMPAT_BIG_METADATA; +- } +- +- nodesize = btrfs_super_nodesize(disk_super); +- sectorsize = btrfs_super_sectorsize(disk_super); +- stripesize = sectorsize; +- fs_info->dirty_metadata_batch = nodesize * (1 + ilog2(nr_cpu_ids)); +- fs_info->delalloc_batch = sectorsize * 512 * (1 + ilog2(nr_cpu_ids)); +- +- /* Cache block sizes */ +- fs_info->nodesize = nodesize; +- fs_info->sectorsize = sectorsize; +- fs_info->stripesize = stripesize; +- + /* + * mixed block groups end up with duplicate but slightly offset + * extent buffers for the same range. 
It leads to corruptions +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 33b8fedab6c67..b859ed50cf46c 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -1200,11 +1200,6 @@ static noinline void async_cow_submit(struct btrfs_work *work) + nr_pages = (async_chunk->end - async_chunk->start + PAGE_SIZE) >> + PAGE_SHIFT; + +- /* atomic_sub_return implies a barrier */ +- if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) < +- 5 * SZ_1M) +- cond_wake_up_nomb(&fs_info->async_submit_wait); +- + /* + * ->inode could be NULL if async_chunk_start has failed to compress, + * in which case we don't have anything to submit, yet we need to +@@ -1213,6 +1208,11 @@ static noinline void async_cow_submit(struct btrfs_work *work) + */ + if (async_chunk->inode) + submit_compressed_extents(async_chunk); ++ ++ /* atomic_sub_return implies a barrier */ ++ if (atomic_sub_return(nr_pages, &fs_info->async_delalloc_pages) < ++ 5 * SZ_1M) ++ cond_wake_up_nomb(&fs_info->async_submit_wait); + } + + static noinline void async_cow_free(struct btrfs_work *work) +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 5412361d0c270..8ea4b3da85d1a 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -719,7 +719,9 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans, + */ + ret = btrfs_lookup_data_extent(fs_info, ins.objectid, + ins.offset); +- if (ret == 0) { ++ if (ret < 0) { ++ goto out; ++ } else if (ret == 0) { + btrfs_init_generic_ref(&ref, + BTRFS_ADD_DELAYED_REF, + ins.objectid, ins.offset, 0); +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index e882c790292f9..8deee49a6b3fa 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -1311,6 +1311,9 @@ static void btrfs_close_one_device(struct btrfs_device *device) + fs_devices->rw_devices--; + } + ++ if (device->devid == BTRFS_DEV_REPLACE_DEVID) ++ clear_bit(BTRFS_DEV_STATE_REPLACE_TGT, &device->dev_state); ++ + if (test_bit(BTRFS_DEV_STATE_MISSING, 
&device->dev_state)) + fs_devices->missing_devices--; + +diff --git a/fs/cifs/sess.c b/fs/cifs/sess.c +index 85bd644f9773b..30f841a880acd 100644 +--- a/fs/cifs/sess.c ++++ b/fs/cifs/sess.c +@@ -610,7 +610,7 @@ sess_alloc_buffer(struct sess_data *sess_data, int wct) + return 0; + + out_free_smb_buf: +- kfree(smb_buf); ++ cifs_small_buf_release(smb_buf); + sess_data->iov[0].iov_base = NULL; + sess_data->iov[0].iov_len = 0; + sess_data->buf0_type = CIFS_NO_BUFFER; +diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c +index a57219c51c01a..f7d27cbbeb860 100644 +--- a/fs/f2fs/checkpoint.c ++++ b/fs/f2fs/checkpoint.c +@@ -583,7 +583,7 @@ int f2fs_acquire_orphan_inode(struct f2fs_sb_info *sbi) + + if (time_to_inject(sbi, FAULT_ORPHAN)) { + spin_unlock(&im->ino_lock); +- f2fs_show_injection_info(FAULT_ORPHAN); ++ f2fs_show_injection_info(sbi, FAULT_ORPHAN); + return -ENOSPC; + } + +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index 64ee2a064e339..1679f9c0b63b3 100644 +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -167,9 +167,10 @@ static bool f2fs_bio_post_read_required(struct bio *bio) + + static void f2fs_read_end_io(struct bio *bio) + { +- if (time_to_inject(F2FS_P_SB(bio_first_page_all(bio)), +- FAULT_READ_IO)) { +- f2fs_show_injection_info(FAULT_READ_IO); ++ struct f2fs_sb_info *sbi = F2FS_P_SB(bio_first_page_all(bio)); ++ ++ if (time_to_inject(sbi, FAULT_READ_IO)) { ++ f2fs_show_injection_info(sbi, FAULT_READ_IO); + bio->bi_status = BLK_STS_IOERR; + } + +@@ -191,7 +192,7 @@ static void f2fs_write_end_io(struct bio *bio) + struct bvec_iter_all iter_all; + + if (time_to_inject(sbi, FAULT_WRITE_IO)) { +- f2fs_show_injection_info(FAULT_WRITE_IO); ++ f2fs_show_injection_info(sbi, FAULT_WRITE_IO); + bio->bi_status = BLK_STS_IOERR; + } + +@@ -1190,7 +1191,21 @@ next_dnode: + if (err) { + if (flag == F2FS_GET_BLOCK_BMAP) + map->m_pblk = 0; ++ + if (err == -ENOENT) { ++ /* ++ * There is one exceptional case that read_node_page() ++ * may return -ENOENT due to filesystem 
has been ++ * shutdown or cp_error, so force to convert error ++ * number to EIO for such case. ++ */ ++ if (map->m_may_create && ++ (is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) || ++ f2fs_cp_error(sbi))) { ++ err = -EIO; ++ goto unlock_out; ++ } ++ + err = 0; + if (map->m_next_pgofs) + *map->m_next_pgofs = +diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c +index 78d041f9775a4..99c4a868d73b0 100644 +--- a/fs/f2fs/dir.c ++++ b/fs/f2fs/dir.c +@@ -618,7 +618,7 @@ int f2fs_add_regular_entry(struct inode *dir, const struct qstr *new_name, + + start: + if (time_to_inject(F2FS_I_SB(dir), FAULT_DIR_DEPTH)) { +- f2fs_show_injection_info(FAULT_DIR_DEPTH); ++ f2fs_show_injection_info(F2FS_I_SB(dir), FAULT_DIR_DEPTH); + return -ENOSPC; + } + +@@ -892,6 +892,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, + struct f2fs_sb_info *sbi = F2FS_I_SB(d->inode); + struct blk_plug plug; + bool readdir_ra = sbi->readdir_ra == 1; ++ bool found_valid_dirent = false; + int err = 0; + + bit_pos = ((unsigned long)ctx->pos % d->max); +@@ -906,12 +907,15 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, + + de = &d->dentry[bit_pos]; + if (de->name_len == 0) { ++ if (found_valid_dirent || !bit_pos) { ++ printk_ratelimited( ++ "%sF2FS-fs (%s): invalid namelen(0), ino:%u, run fsck to fix.", ++ KERN_WARNING, sbi->sb->s_id, ++ le32_to_cpu(de->ino)); ++ set_sbi_flag(sbi, SBI_NEED_FSCK); ++ } + bit_pos++; + ctx->pos = start_pos + bit_pos; +- printk_ratelimited( +- "%s, invalid namelen(0), ino:%u, run fsck to fix.", +- KERN_WARNING, le32_to_cpu(de->ino)); +- set_sbi_flag(sbi, SBI_NEED_FSCK); + continue; + } + +@@ -954,6 +958,7 @@ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d, + f2fs_ra_node_page(sbi, le32_to_cpu(de->ino)); + + ctx->pos = start_pos + bit_pos; ++ found_valid_dirent = true; + } + out: + if (readdir_ra) +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 4ca3c2a0a0f5b..031a17bf52a24 100644 +--- a/fs/f2fs/f2fs.h 
++++ b/fs/f2fs/f2fs.h +@@ -1374,9 +1374,10 @@ struct f2fs_private_dio { + }; + + #ifdef CONFIG_F2FS_FAULT_INJECTION +-#define f2fs_show_injection_info(type) \ +- printk_ratelimited("%sF2FS-fs : inject %s in %s of %pS\n", \ +- KERN_INFO, f2fs_fault_name[type], \ ++#define f2fs_show_injection_info(sbi, type) \ ++ printk_ratelimited("%sF2FS-fs (%s) : inject %s in %s of %pS\n", \ ++ KERN_INFO, sbi->sb->s_id, \ ++ f2fs_fault_name[type], \ + __func__, __builtin_return_address(0)) + static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type) + { +@@ -1396,7 +1397,7 @@ static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type) + return false; + } + #else +-#define f2fs_show_injection_info(type) do { } while (0) ++#define f2fs_show_injection_info(sbi, type) do { } while (0) + static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type) + { + return false; +@@ -1781,7 +1782,7 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi, + return ret; + + if (time_to_inject(sbi, FAULT_BLOCK)) { +- f2fs_show_injection_info(FAULT_BLOCK); ++ f2fs_show_injection_info(sbi, FAULT_BLOCK); + release = *count; + goto release_quota; + } +@@ -2033,7 +2034,7 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi, + } + + if (time_to_inject(sbi, FAULT_BLOCK)) { +- f2fs_show_injection_info(FAULT_BLOCK); ++ f2fs_show_injection_info(sbi, FAULT_BLOCK); + goto enospc; + } + +@@ -2148,7 +2149,8 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping, + return page; + + if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_ALLOC)) { +- f2fs_show_injection_info(FAULT_PAGE_ALLOC); ++ f2fs_show_injection_info(F2FS_M_SB(mapping), ++ FAULT_PAGE_ALLOC); + return NULL; + } + } +@@ -2163,7 +2165,7 @@ static inline struct page *f2fs_pagecache_get_page( + int fgp_flags, gfp_t gfp_mask) + { + if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_GET)) { +- f2fs_show_injection_info(FAULT_PAGE_GET); ++ 
f2fs_show_injection_info(F2FS_M_SB(mapping), FAULT_PAGE_GET); + return NULL; + } + +@@ -2232,7 +2234,7 @@ static inline struct bio *f2fs_bio_alloc(struct f2fs_sb_info *sbi, + return bio; + } + if (time_to_inject(sbi, FAULT_ALLOC_BIO)) { +- f2fs_show_injection_info(FAULT_ALLOC_BIO); ++ f2fs_show_injection_info(sbi, FAULT_ALLOC_BIO); + return NULL; + } + +@@ -2797,7 +2799,7 @@ static inline void *f2fs_kmalloc(struct f2fs_sb_info *sbi, + size_t size, gfp_t flags) + { + if (time_to_inject(sbi, FAULT_KMALLOC)) { +- f2fs_show_injection_info(FAULT_KMALLOC); ++ f2fs_show_injection_info(sbi, FAULT_KMALLOC); + return NULL; + } + +@@ -2814,7 +2816,7 @@ static inline void *f2fs_kvmalloc(struct f2fs_sb_info *sbi, + size_t size, gfp_t flags) + { + if (time_to_inject(sbi, FAULT_KVMALLOC)) { +- f2fs_show_injection_info(FAULT_KVMALLOC); ++ f2fs_show_injection_info(sbi, FAULT_KVMALLOC); + return NULL; + } + +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c +index 6e58b2e62b189..516007bb1ced1 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -682,7 +682,7 @@ int f2fs_truncate(struct inode *inode) + trace_f2fs_truncate(inode); + + if (time_to_inject(F2FS_I_SB(inode), FAULT_TRUNCATE)) { +- f2fs_show_injection_info(FAULT_TRUNCATE); ++ f2fs_show_injection_info(F2FS_I_SB(inode), FAULT_TRUNCATE); + return -EIO; + } + +@@ -981,7 +981,6 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len) + } + + if (pg_start < pg_end) { +- struct address_space *mapping = inode->i_mapping; + loff_t blk_start, blk_end; + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + +@@ -993,8 +992,7 @@ static int punch_hole(struct inode *inode, loff_t offset, loff_t len) + down_write(&F2FS_I(inode)->i_gc_rwsem[WRITE]); + down_write(&F2FS_I(inode)->i_mmap_sem); + +- truncate_inode_pages_range(mapping, blk_start, +- blk_end - 1); ++ truncate_pagecache_range(inode, blk_start, blk_end - 1); + + f2fs_lock_op(sbi); + ret = f2fs_truncate_hole(inode, pg_start, pg_end); +diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c 
+index a78aa5480454f..4b6c36208f552 100644 +--- a/fs/f2fs/gc.c ++++ b/fs/f2fs/gc.c +@@ -54,7 +54,7 @@ static int gc_thread_func(void *data) + } + + if (time_to_inject(sbi, FAULT_CHECKPOINT)) { +- f2fs_show_injection_info(FAULT_CHECKPOINT); ++ f2fs_show_injection_info(sbi, FAULT_CHECKPOINT); + f2fs_stop_checkpoint(sbi, false); + } + +@@ -1095,8 +1095,10 @@ next_step: + int err; + + if (S_ISREG(inode->i_mode)) { +- if (!down_write_trylock(&fi->i_gc_rwsem[READ])) ++ if (!down_write_trylock(&fi->i_gc_rwsem[READ])) { ++ sbi->skipped_gc_rwsem++; + continue; ++ } + if (!down_write_trylock( + &fi->i_gc_rwsem[WRITE])) { + sbi->skipped_gc_rwsem++; +diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c +index 386ad54c13c3a..502bd491336a8 100644 +--- a/fs/f2fs/inode.c ++++ b/fs/f2fs/inode.c +@@ -681,7 +681,7 @@ retry: + err = f2fs_truncate(inode); + + if (time_to_inject(sbi, FAULT_EVICT_INODE)) { +- f2fs_show_injection_info(FAULT_EVICT_INODE); ++ f2fs_show_injection_info(sbi, FAULT_EVICT_INODE); + err = -EIO; + } + +diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c +index 48bb5d3c709db..4cb182c20eedd 100644 +--- a/fs/f2fs/node.c ++++ b/fs/f2fs/node.c +@@ -2406,7 +2406,7 @@ bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid) + struct free_nid *i = NULL; + retry: + if (time_to_inject(sbi, FAULT_ALLOC_NID)) { +- f2fs_show_injection_info(FAULT_ALLOC_NID); ++ f2fs_show_injection_info(sbi, FAULT_ALLOC_NID); + return false; + } + +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index 5ba677f85533c..78c54bb7898df 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -489,7 +489,7 @@ int f2fs_commit_inmem_pages(struct inode *inode) + void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need) + { + if (time_to_inject(sbi, FAULT_CHECKPOINT)) { +- f2fs_show_injection_info(FAULT_CHECKPOINT); ++ f2fs_show_injection_info(sbi, FAULT_CHECKPOINT); + f2fs_stop_checkpoint(sbi, false); + } + +@@ -1017,8 +1017,9 @@ static void __remove_discard_cmd(struct f2fs_sb_info *sbi, + + if (dc->error) + 
printk_ratelimited( +- "%sF2FS-fs: Issue discard(%u, %u, %u) failed, ret: %d", +- KERN_INFO, dc->lstart, dc->start, dc->len, dc->error); ++ "%sF2FS-fs (%s): Issue discard(%u, %u, %u) failed, ret: %d", ++ KERN_INFO, sbi->sb->s_id, ++ dc->lstart, dc->start, dc->len, dc->error); + __detach_discard_cmd(dcc, dc); + } + +@@ -1158,7 +1159,7 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi, + dc->len += len; + + if (time_to_inject(sbi, FAULT_DISCARD)) { +- f2fs_show_injection_info(FAULT_DISCARD); ++ f2fs_show_injection_info(sbi, FAULT_DISCARD); + err = -EIO; + goto submit; + } +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index 6d904dc9bd199..41bf656658ba8 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -1994,6 +1994,33 @@ static int f2fs_enable_quotas(struct super_block *sb) + return 0; + } + ++static int f2fs_quota_sync_file(struct f2fs_sb_info *sbi, int type) ++{ ++ struct quota_info *dqopt = sb_dqopt(sbi->sb); ++ struct address_space *mapping = dqopt->files[type]->i_mapping; ++ int ret = 0; ++ ++ ret = dquot_writeback_dquots(sbi->sb, type); ++ if (ret) ++ goto out; ++ ++ ret = filemap_fdatawrite(mapping); ++ if (ret) ++ goto out; ++ ++ /* if we are using journalled quota */ ++ if (is_journalled_quota(sbi)) ++ goto out; ++ ++ ret = filemap_fdatawait(mapping); ++ ++ truncate_inode_pages(&dqopt->files[type]->i_data, 0); ++out: ++ if (ret) ++ set_sbi_flag(sbi, SBI_QUOTA_NEED_REPAIR); ++ return ret; ++} ++ + int f2fs_quota_sync(struct super_block *sb, int type) + { + struct f2fs_sb_info *sbi = F2FS_SB(sb); +@@ -2001,57 +2028,42 @@ int f2fs_quota_sync(struct super_block *sb, int type) + int cnt; + int ret; + +- /* +- * do_quotactl +- * f2fs_quota_sync +- * down_read(quota_sem) +- * dquot_writeback_dquots() +- * f2fs_dquot_commit +- * block_operation +- * down_read(quota_sem) +- */ +- f2fs_lock_op(sbi); +- +- down_read(&sbi->quota_sem); +- ret = dquot_writeback_dquots(sb, type); +- if (ret) +- goto out; +- + /* + * Now when everything is written we can 
discard the pagecache so + * that userspace sees the changes. + */ + for (cnt = 0; cnt < MAXQUOTAS; cnt++) { +- struct address_space *mapping; + + if (type != -1 && cnt != type) + continue; +- if (!sb_has_quota_active(sb, cnt)) +- continue; + +- mapping = dqopt->files[cnt]->i_mapping; ++ if (!sb_has_quota_active(sb, type)) ++ return 0; + +- ret = filemap_fdatawrite(mapping); +- if (ret) +- goto out; ++ inode_lock(dqopt->files[cnt]); + +- /* if we are using journalled quota */ +- if (is_journalled_quota(sbi)) +- continue; ++ /* ++ * do_quotactl ++ * f2fs_quota_sync ++ * down_read(quota_sem) ++ * dquot_writeback_dquots() ++ * f2fs_dquot_commit ++ * block_operation ++ * down_read(quota_sem) ++ */ ++ f2fs_lock_op(sbi); ++ down_read(&sbi->quota_sem); + +- ret = filemap_fdatawait(mapping); +- if (ret) +- set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR); ++ ret = f2fs_quota_sync_file(sbi, cnt); ++ ++ up_read(&sbi->quota_sem); ++ f2fs_unlock_op(sbi); + +- inode_lock(dqopt->files[cnt]); +- truncate_inode_pages(&dqopt->files[cnt]->i_data, 0); + inode_unlock(dqopt->files[cnt]); ++ ++ if (ret) ++ break; + } +-out: +- if (ret) +- set_sbi_flag(F2FS_SB(sb), SBI_QUOTA_NEED_REPAIR); +- up_read(&sbi->quota_sem); +- f2fs_unlock_op(sbi); + return ret; + } + +diff --git a/fs/fscache/cookie.c b/fs/fscache/cookie.c +index 0ce39658a6200..44a426c8ea01e 100644 +--- a/fs/fscache/cookie.c ++++ b/fs/fscache/cookie.c +@@ -74,10 +74,8 @@ void fscache_free_cookie(struct fscache_cookie *cookie) + static int fscache_set_key(struct fscache_cookie *cookie, + const void *index_key, size_t index_key_len) + { +- unsigned long long h; + u32 *buf; + int bufs; +- int i; + + bufs = DIV_ROUND_UP(index_key_len, sizeof(*buf)); + +@@ -91,17 +89,7 @@ static int fscache_set_key(struct fscache_cookie *cookie, + } + + memcpy(buf, index_key, index_key_len); +- +- /* Calculate a hash and combine this with the length in the first word +- * or first half word +- */ +- h = (unsigned long)cookie->parent; +- h += 
index_key_len + cookie->type; +- +- for (i = 0; i < bufs; i++) +- h += buf[i]; +- +- cookie->key_hash = h ^ (h >> 32); ++ cookie->key_hash = fscache_hash(0, buf, bufs); + return 0; + } + +diff --git a/fs/fscache/internal.h b/fs/fscache/internal.h +index 9616af3768e11..d09d4e69c818e 100644 +--- a/fs/fscache/internal.h ++++ b/fs/fscache/internal.h +@@ -97,6 +97,8 @@ extern struct workqueue_struct *fscache_object_wq; + extern struct workqueue_struct *fscache_op_wq; + DECLARE_PER_CPU(wait_queue_head_t, fscache_object_cong_wait); + ++extern unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n); ++ + static inline bool fscache_object_congested(void) + { + return workqueue_congested(WORK_CPU_UNBOUND, fscache_object_wq); +diff --git a/fs/fscache/main.c b/fs/fscache/main.c +index 59c2494efda34..3aa3756c71761 100644 +--- a/fs/fscache/main.c ++++ b/fs/fscache/main.c +@@ -94,6 +94,45 @@ static struct ctl_table fscache_sysctls_root[] = { + }; + #endif + ++/* ++ * Mixing scores (in bits) for (7,20): ++ * Input delta: 1-bit 2-bit ++ * 1 round: 330.3 9201.6 ++ * 2 rounds: 1246.4 25475.4 ++ * 3 rounds: 1907.1 31295.1 ++ * 4 rounds: 2042.3 31718.6 ++ * Perfect: 2048 31744 ++ * (32*64) (32*31/2 * 64) ++ */ ++#define HASH_MIX(x, y, a) \ ++ ( x ^= (a), \ ++ y ^= x, x = rol32(x, 7),\ ++ x += y, y = rol32(y,20),\ ++ y *= 9 ) ++ ++static inline unsigned int fold_hash(unsigned long x, unsigned long y) ++{ ++ /* Use arch-optimized multiply if one exists */ ++ return __hash_32(y ^ __hash_32(x)); ++} ++ ++/* ++ * Generate a hash. This is derived from full_name_hash(), but we want to be ++ * sure it is arch independent and that it doesn't change as bits of the ++ * computed hash value might appear on disk. The caller also guarantees that ++ * the hashed data will be a series of aligned 32-bit words. 
++ */ ++unsigned int fscache_hash(unsigned int salt, unsigned int *data, unsigned int n) ++{ ++ unsigned int a, x = 0, y = salt; ++ ++ for (; n; n--) { ++ a = *data++; ++ HASH_MIX(x, y, a); ++ } ++ return fold_hash(x, y); ++} ++ + /* + * initialise the fs caching module + */ +diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c +index 16aa55b73ccf5..7205a89fbb5f3 100644 +--- a/fs/fuse/dev.c ++++ b/fs/fuse/dev.c +@@ -282,10 +282,10 @@ void fuse_request_end(struct fuse_conn *fc, struct fuse_req *req) + + /* + * test_and_set_bit() implies smp_mb() between bit +- * changing and below intr_entry check. Pairs with ++ * changing and below FR_INTERRUPTED check. Pairs with + * smp_mb() from queue_interrupt(). + */ +- if (!list_empty(&req->intr_entry)) { ++ if (test_bit(FR_INTERRUPTED, &req->flags)) { + spin_lock(&fiq->lock); + list_del_init(&req->intr_entry); + spin_unlock(&fiq->lock); +diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c +index 72dec177b3494..94c290a333a0a 100644 +--- a/fs/gfs2/lock_dlm.c ++++ b/fs/gfs2/lock_dlm.c +@@ -292,6 +292,11 @@ static void gdlm_put_lock(struct gfs2_glock *gl) + gfs2_sbstats_inc(gl, GFS2_LKS_DCOUNT); + gfs2_update_request_times(gl); + ++ /* don't want to call dlm if we've unmounted the lock protocol */ ++ if (test_bit(DFL_UNMOUNT, &ls->ls_recover_flags)) { ++ gfs2_glock_free(gl); ++ return; ++ } + /* don't want to skip dlm_unlock writing the lvb when lock has one */ + + if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) && +diff --git a/fs/lockd/svclock.c b/fs/lockd/svclock.c +index 498cb70c2c0d0..273a81971ed57 100644 +--- a/fs/lockd/svclock.c ++++ b/fs/lockd/svclock.c +@@ -395,28 +395,10 @@ nlmsvc_release_lockowner(struct nlm_lock *lock) + nlmsvc_put_lockowner(lock->fl.fl_owner); + } + +-static void nlmsvc_locks_copy_lock(struct file_lock *new, struct file_lock *fl) +-{ +- struct nlm_lockowner *nlm_lo = (struct nlm_lockowner *)fl->fl_owner; +- new->fl_owner = nlmsvc_get_lockowner(nlm_lo); +-} +- +-static void 
nlmsvc_locks_release_private(struct file_lock *fl) +-{ +- nlmsvc_put_lockowner((struct nlm_lockowner *)fl->fl_owner); +-} +- +-static const struct file_lock_operations nlmsvc_lock_ops = { +- .fl_copy_lock = nlmsvc_locks_copy_lock, +- .fl_release_private = nlmsvc_locks_release_private, +-}; +- + void nlmsvc_locks_init_private(struct file_lock *fl, struct nlm_host *host, + pid_t pid) + { + fl->fl_owner = nlmsvc_find_lockowner(host, pid); +- if (fl->fl_owner != NULL) +- fl->fl_ops = &nlmsvc_lock_ops; + } + + /* +@@ -788,9 +770,21 @@ nlmsvc_notify_blocked(struct file_lock *fl) + printk(KERN_WARNING "lockd: notification for unknown block!\n"); + } + ++static fl_owner_t nlmsvc_get_owner(fl_owner_t owner) ++{ ++ return nlmsvc_get_lockowner(owner); ++} ++ ++static void nlmsvc_put_owner(fl_owner_t owner) ++{ ++ nlmsvc_put_lockowner(owner); ++} ++ + const struct lock_manager_operations nlmsvc_lock_operations = { + .lm_notify = nlmsvc_notify_blocked, + .lm_grant = nlmsvc_grant_deferred, ++ .lm_get_owner = nlmsvc_get_owner, ++ .lm_put_owner = nlmsvc_put_owner, + }; + + /* +diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c +index 6509ec3cb3730..073be36b0686c 100644 +--- a/fs/overlayfs/dir.c ++++ b/fs/overlayfs/dir.c +@@ -513,8 +513,10 @@ static int ovl_create_over_whiteout(struct dentry *dentry, struct inode *inode, + goto out_cleanup; + } + err = ovl_instantiate(dentry, inode, newdentry, hardlink); +- if (err) +- goto out_cleanup; ++ if (err) { ++ ovl_cleanup(udir, newdentry); ++ dput(newdentry); ++ } + out_dput: + dput(upper); + out_unlock: +diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c +index 2c807283115d7..ec57bbb6bb05c 100644 +--- a/fs/userfaultfd.c ++++ b/fs/userfaultfd.c +@@ -32,11 +32,6 @@ int sysctl_unprivileged_userfaultfd __read_mostly = 1; + + static struct kmem_cache *userfaultfd_ctx_cachep __read_mostly; + +-enum userfaultfd_state { +- UFFD_STATE_WAIT_API, +- UFFD_STATE_RUNNING, +-}; +- + /* + * Start with fault_pending_wqh and fault_wqh so they're more 
likely + * to be in the same cacheline. +@@ -68,8 +63,6 @@ struct userfaultfd_ctx { + unsigned int flags; + /* features requested from the userspace */ + unsigned int features; +- /* state machine */ +- enum userfaultfd_state state; + /* released */ + bool released; + /* memory mappings are changing because of non-cooperative event */ +@@ -103,6 +96,14 @@ struct userfaultfd_wake_range { + unsigned long len; + }; + ++/* internal indication that UFFD_API ioctl was successfully executed */ ++#define UFFD_FEATURE_INITIALIZED (1u << 31) ++ ++static bool userfaultfd_is_initialized(struct userfaultfd_ctx *ctx) ++{ ++ return ctx->features & UFFD_FEATURE_INITIALIZED; ++} ++ + static int userfaultfd_wake_function(wait_queue_entry_t *wq, unsigned mode, + int wake_flags, void *key) + { +@@ -699,7 +700,6 @@ int dup_userfaultfd(struct vm_area_struct *vma, struct list_head *fcs) + + refcount_set(&ctx->refcount, 1); + ctx->flags = octx->flags; +- ctx->state = UFFD_STATE_RUNNING; + ctx->features = octx->features; + ctx->released = false; + ctx->mmap_changing = false; +@@ -980,38 +980,33 @@ static __poll_t userfaultfd_poll(struct file *file, poll_table *wait) + + poll_wait(file, &ctx->fd_wqh, wait); + +- switch (ctx->state) { +- case UFFD_STATE_WAIT_API: ++ if (!userfaultfd_is_initialized(ctx)) + return EPOLLERR; +- case UFFD_STATE_RUNNING: +- /* +- * poll() never guarantees that read won't block. +- * userfaults can be waken before they're read(). +- */ +- if (unlikely(!(file->f_flags & O_NONBLOCK))) +- return EPOLLERR; +- /* +- * lockless access to see if there are pending faults +- * __pollwait last action is the add_wait_queue but +- * the spin_unlock would allow the waitqueue_active to +- * pass above the actual list_add inside +- * add_wait_queue critical section. So use a full +- * memory barrier to serialize the list_add write of +- * add_wait_queue() with the waitqueue_active read +- * below. 
+- */ +- ret = 0; +- smp_mb(); +- if (waitqueue_active(&ctx->fault_pending_wqh)) +- ret = EPOLLIN; +- else if (waitqueue_active(&ctx->event_wqh)) +- ret = EPOLLIN; +- +- return ret; +- default: +- WARN_ON_ONCE(1); ++ ++ /* ++ * poll() never guarantees that read won't block. ++ * userfaults can be waken before they're read(). ++ */ ++ if (unlikely(!(file->f_flags & O_NONBLOCK))) + return EPOLLERR; +- } ++ /* ++ * lockless access to see if there are pending faults ++ * __pollwait last action is the add_wait_queue but ++ * the spin_unlock would allow the waitqueue_active to ++ * pass above the actual list_add inside ++ * add_wait_queue critical section. So use a full ++ * memory barrier to serialize the list_add write of ++ * add_wait_queue() with the waitqueue_active read ++ * below. ++ */ ++ ret = 0; ++ smp_mb(); ++ if (waitqueue_active(&ctx->fault_pending_wqh)) ++ ret = EPOLLIN; ++ else if (waitqueue_active(&ctx->event_wqh)) ++ ret = EPOLLIN; ++ ++ return ret; + } + + static const struct file_operations userfaultfd_fops; +@@ -1205,7 +1200,7 @@ static ssize_t userfaultfd_read(struct file *file, char __user *buf, + struct uffd_msg msg; + int no_wait = file->f_flags & O_NONBLOCK; + +- if (ctx->state == UFFD_STATE_WAIT_API) ++ if (!userfaultfd_is_initialized(ctx)) + return -EINVAL; + + for (;;) { +@@ -1807,9 +1802,10 @@ out: + static inline unsigned int uffd_ctx_features(__u64 user_features) + { + /* +- * For the current set of features the bits just coincide ++ * For the current set of features the bits just coincide. Set ++ * UFFD_FEATURE_INITIALIZED to mark the features as enabled. 
+ */ +- return (unsigned int)user_features; ++ return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED; + } + + /* +@@ -1822,12 +1818,10 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx, + { + struct uffdio_api uffdio_api; + void __user *buf = (void __user *)arg; ++ unsigned int ctx_features; + int ret; + __u64 features; + +- ret = -EINVAL; +- if (ctx->state != UFFD_STATE_WAIT_API) +- goto out; + ret = -EFAULT; + if (copy_from_user(&uffdio_api, buf, sizeof(uffdio_api))) + goto out; +@@ -1844,9 +1838,13 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx, + ret = -EFAULT; + if (copy_to_user(buf, &uffdio_api, sizeof(uffdio_api))) + goto out; +- ctx->state = UFFD_STATE_RUNNING; ++ + /* only enable the requested features for this uffd context */ +- ctx->features = uffd_ctx_features(features); ++ ctx_features = uffd_ctx_features(features); ++ ret = -EINVAL; ++ if (cmpxchg(&ctx->features, 0, ctx_features) != 0) ++ goto err_out; ++ + ret = 0; + out: + return ret; +@@ -1863,7 +1861,7 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd, + int ret = -EINVAL; + struct userfaultfd_ctx *ctx = file->private_data; + +- if (cmd != UFFDIO_API && ctx->state == UFFD_STATE_WAIT_API) ++ if (cmd != UFFDIO_API && !userfaultfd_is_initialized(ctx)) + return -EINVAL; + + switch(cmd) { +@@ -1964,7 +1962,6 @@ SYSCALL_DEFINE1(userfaultfd, int, flags) + refcount_set(&ctx->refcount, 1); + ctx->flags = flags; + ctx->features = 0; +- ctx->state = UFFD_STATE_WAIT_API; + ctx->released = false; + ctx->mmap_changing = false; + ctx->mm = current->mm; +diff --git a/include/crypto/public_key.h b/include/crypto/public_key.h +index 0588ef3bc6ff6..48722f2b8543b 100644 +--- a/include/crypto/public_key.h ++++ b/include/crypto/public_key.h +@@ -38,9 +38,9 @@ extern void public_key_free(struct public_key *key); + struct public_key_signature { + struct asymmetric_key_id *auth_ids[2]; + u8 *s; /* Signature */ +- u32 s_size; /* Number of bytes in signature */ + u8 *digest; +- u8 
digest_size; /* Number of bytes in digest */ ++ u32 s_size; /* Number of bytes in signature */ ++ u32 digest_size; /* Number of bytes in digest */ + const char *pkey_algo; + const char *hash_algo; + const char *encoding; +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h +index a0513c444446d..cef70d6e1657c 100644 +--- a/include/linux/hugetlb.h ++++ b/include/linux/hugetlb.h +@@ -542,6 +542,11 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h, + + void hugetlb_report_usage(struct seq_file *m, struct mm_struct *mm); + ++static inline void hugetlb_count_init(struct mm_struct *mm) ++{ ++ atomic_long_set(&mm->hugetlb_usage, 0); ++} ++ + static inline void hugetlb_count_add(long l, struct mm_struct *mm) + { + atomic_long_add(l, &mm->hugetlb_usage); +@@ -711,6 +716,10 @@ static inline spinlock_t *huge_pte_lockptr(struct hstate *h, + return &mm->page_table_lock; + } + ++static inline void hugetlb_count_init(struct mm_struct *mm) ++{ ++} ++ + static inline void hugetlb_report_usage(struct seq_file *f, struct mm_struct *m) + { + } +diff --git a/include/linux/list.h b/include/linux/list.h +index 85c92555e31f8..ce19c6b632a59 100644 +--- a/include/linux/list.h ++++ b/include/linux/list.h +@@ -567,6 +567,15 @@ static inline void list_splice_tail_init(struct list_head *list, + pos != (head); \ + pos = n, n = pos->prev) + ++/** ++ * list_entry_is_head - test if the entry points to the head of the list ++ * @pos: the type * to cursor ++ * @head: the head for your list. ++ * @member: the name of the list_head within the struct. ++ */ ++#define list_entry_is_head(pos, head, member) \ ++ (&pos->member == (head)) ++ + /** + * list_for_each_entry - iterate over list of given type + * @pos: the type * to use as a loop cursor. 
+@@ -575,7 +584,7 @@ static inline void list_splice_tail_init(struct list_head *list, + */ + #define list_for_each_entry(pos, head, member) \ + for (pos = list_first_entry(head, typeof(*pos), member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = list_next_entry(pos, member)) + + /** +@@ -586,7 +595,7 @@ static inline void list_splice_tail_init(struct list_head *list, + */ + #define list_for_each_entry_reverse(pos, head, member) \ + for (pos = list_last_entry(head, typeof(*pos), member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = list_prev_entry(pos, member)) + + /** +@@ -611,7 +620,7 @@ static inline void list_splice_tail_init(struct list_head *list, + */ + #define list_for_each_entry_continue(pos, head, member) \ + for (pos = list_next_entry(pos, member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = list_next_entry(pos, member)) + + /** +@@ -625,7 +634,7 @@ static inline void list_splice_tail_init(struct list_head *list, + */ + #define list_for_each_entry_continue_reverse(pos, head, member) \ + for (pos = list_prev_entry(pos, member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = list_prev_entry(pos, member)) + + /** +@@ -637,7 +646,7 @@ static inline void list_splice_tail_init(struct list_head *list, + * Iterate over list of given type, continuing from current position. + */ + #define list_for_each_entry_from(pos, head, member) \ +- for (; &pos->member != (head); \ ++ for (; !list_entry_is_head(pos, head, member); \ + pos = list_next_entry(pos, member)) + + /** +@@ -650,7 +659,7 @@ static inline void list_splice_tail_init(struct list_head *list, + * Iterate backwards over list of given type, continuing from current position. 
+ */ + #define list_for_each_entry_from_reverse(pos, head, member) \ +- for (; &pos->member != (head); \ ++ for (; !list_entry_is_head(pos, head, member); \ + pos = list_prev_entry(pos, member)) + + /** +@@ -663,7 +672,7 @@ static inline void list_splice_tail_init(struct list_head *list, + #define list_for_each_entry_safe(pos, n, head, member) \ + for (pos = list_first_entry(head, typeof(*pos), member), \ + n = list_next_entry(pos, member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = n, n = list_next_entry(n, member)) + + /** +@@ -679,7 +688,7 @@ static inline void list_splice_tail_init(struct list_head *list, + #define list_for_each_entry_safe_continue(pos, n, head, member) \ + for (pos = list_next_entry(pos, member), \ + n = list_next_entry(pos, member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = n, n = list_next_entry(n, member)) + + /** +@@ -694,7 +703,7 @@ static inline void list_splice_tail_init(struct list_head *list, + */ + #define list_for_each_entry_safe_from(pos, n, head, member) \ + for (n = list_next_entry(pos, member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = n, n = list_next_entry(n, member)) + + /** +@@ -710,7 +719,7 @@ static inline void list_splice_tail_init(struct list_head *list, + #define list_for_each_entry_safe_reverse(pos, n, head, member) \ + for (pos = list_last_entry(head, typeof(*pos), member), \ + n = list_prev_entry(pos, member); \ +- &pos->member != (head); \ ++ !list_entry_is_head(pos, head, member); \ + pos = n, n = list_prev_entry(n, member)) + + /** +diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h +index 451efd4499cc5..961e35c68e413 100644 +--- a/include/linux/memory_hotplug.h ++++ b/include/linux/memory_hotplug.h +@@ -358,6 +358,6 @@ extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map, + unsigned long pnum); + extern bool allow_online_pfn_range(int nid, unsigned 
long pfn, unsigned long nr_pages, + int online_type); +-extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn, +- unsigned long nr_pages); ++extern struct zone *zone_for_pfn_range(int online_type, int nid, ++ unsigned long start_pfn, unsigned long nr_pages); + #endif /* __LINUX_MEMORY_HOTPLUG_H */ +diff --git a/include/linux/pci.h b/include/linux/pci.h +index 6a6a819c5b49b..9a937f8b27838 100644 +--- a/include/linux/pci.h ++++ b/include/linux/pci.h +@@ -1688,8 +1688,9 @@ static inline int pci_enable_device(struct pci_dev *dev) { return -EIO; } + static inline void pci_disable_device(struct pci_dev *dev) { } + static inline int pci_assign_resource(struct pci_dev *dev, int i) + { return -EBUSY; } +-static inline int __pci_register_driver(struct pci_driver *drv, +- struct module *owner) ++static inline int __must_check __pci_register_driver(struct pci_driver *drv, ++ struct module *owner, ++ const char *mod_name) + { return 0; } + static inline int pci_register_driver(struct pci_driver *drv) + { return 0; } +diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h +index 0ad57693f3926..42588645478d9 100644 +--- a/include/linux/pci_ids.h ++++ b/include/linux/pci_ids.h +@@ -2476,7 +2476,8 @@ + #define PCI_VENDOR_ID_TDI 0x192E + #define PCI_DEVICE_ID_TDI_EHCI 0x0101 + +-#define PCI_VENDOR_ID_FREESCALE 0x1957 ++#define PCI_VENDOR_ID_FREESCALE 0x1957 /* duplicate: NXP */ ++#define PCI_VENDOR_ID_NXP 0x1957 /* duplicate: FREESCALE */ + #define PCI_DEVICE_ID_MPC8308 0xc006 + #define PCI_DEVICE_ID_MPC8315E 0x00b4 + #define PCI_DEVICE_ID_MPC8315 0x00b5 +diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h +index 6493c98c86317..b04b5bd43f541 100644 +--- a/include/linux/skbuff.h ++++ b/include/linux/skbuff.h +@@ -1887,7 +1887,7 @@ static inline void __skb_insert(struct sk_buff *newsk, + WRITE_ONCE(newsk->prev, prev); + WRITE_ONCE(next->prev, newsk); + WRITE_ONCE(prev->next, newsk); +- list->qlen++; ++ WRITE_ONCE(list->qlen, list->qlen + 
1); + } + + static inline void __skb_queue_splice(const struct sk_buff_head *list, +diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h +index d7ef5b97174ce..3c6c4b1dbf1a4 100644 +--- a/include/linux/sunrpc/xprt.h ++++ b/include/linux/sunrpc/xprt.h +@@ -419,6 +419,7 @@ void xprt_unlock_connect(struct rpc_xprt *, void *); + #define XPRT_CONGESTED (9) + #define XPRT_CWND_WAIT (10) + #define XPRT_WRITE_SPACE (11) ++#define XPRT_SND_IS_COOKIE (12) + + static inline void xprt_set_connected(struct rpc_xprt *xprt) + { +diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h +index edbbf4bfdd9e5..4a245d7a5c8d6 100644 +--- a/include/uapi/linux/pkt_sched.h ++++ b/include/uapi/linux/pkt_sched.h +@@ -807,6 +807,8 @@ struct tc_codel_xstats { + + /* FQ_CODEL */ + ++#define FQ_CODEL_QUANTUM_MAX (1 << 20) ++ + enum { + TCA_FQ_CODEL_UNSPEC, + TCA_FQ_CODEL_TARGET, +diff --git a/include/uapi/linux/serial_reg.h b/include/uapi/linux/serial_reg.h +index be07b5470f4bb..f51bc8f368134 100644 +--- a/include/uapi/linux/serial_reg.h ++++ b/include/uapi/linux/serial_reg.h +@@ -62,6 +62,7 @@ + * ST16C654: 8 16 56 60 8 16 32 56 PORT_16654 + * TI16C750: 1 16 32 56 xx xx xx xx PORT_16750 + * TI16C752: 8 16 56 60 8 16 32 56 ++ * OX16C950: 16 32 112 120 16 32 64 112 PORT_16C950 + * Tegra: 1 4 8 14 16 8 4 1 PORT_TEGRA + */ + #define UART_FCR_R_TRIG_00 0x00 +diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c +index cb6425e52bf7a..01e893cf9b9f7 100644 +--- a/kernel/dma/debug.c ++++ b/kernel/dma/debug.c +@@ -846,7 +846,7 @@ static int dump_show(struct seq_file *seq, void *v) + } + DEFINE_SHOW_ATTRIBUTE(dump); + +-static void dma_debug_fs_init(void) ++static int __init dma_debug_fs_init(void) + { + struct dentry *dentry = debugfs_create_dir("dma-api", NULL); + +@@ -859,7 +859,10 @@ static void dma_debug_fs_init(void) + debugfs_create_u32("nr_total_entries", 0444, dentry, &nr_total_entries); + debugfs_create_file("driver_filter", 0644, dentry, NULL, &filter_fops); 
+ debugfs_create_file("dump", 0444, dentry, NULL, &dump_fops); ++ ++ return 0; + } ++core_initcall_sync(dma_debug_fs_init); + + static int device_dma_allocations(struct device *dev, struct dma_debug_entry **out_entry) + { +@@ -944,8 +947,6 @@ static int dma_debug_init(void) + spin_lock_init(&dma_entry_hash[i].lock); + } + +- dma_debug_fs_init(); +- + nr_pages = DIV_ROUND_UP(nr_prealloc_entries, DMA_DEBUG_DYNAMIC_ENTRIES); + for (i = 0; i < nr_pages; ++i) + dma_debug_create_entries(GFP_KERNEL); +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 2f848123cdae8..1993a741d2dc5 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -9259,7 +9259,7 @@ static void perf_event_addr_filters_apply(struct perf_event *event) + return; + + if (ifh->nr_file_filters) { +- mm = get_task_mm(event->ctx->task); ++ mm = get_task_mm(task); + if (!mm) + goto restart; + +diff --git a/kernel/fork.c b/kernel/fork.c +index 50f37d5afb32b..cf2cebd214b92 100644 +--- a/kernel/fork.c ++++ b/kernel/fork.c +@@ -1028,6 +1028,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, + mm->pmd_huge_pte = NULL; + #endif + mm_init_uprobes_state(mm); ++ hugetlb_count_init(mm); + + if (current->mm) { + mm->flags = current->mm->flags & MMF_INIT_MASK; +diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c +index a6a79f85c81a8..f26415341c752 100644 +--- a/kernel/pid_namespace.c ++++ b/kernel/pid_namespace.c +@@ -53,7 +53,8 @@ static struct kmem_cache *create_pid_cachep(unsigned int level) + mutex_lock(&pid_caches_mutex); + /* Name collision forces to do allocation under mutex. */ + if (!*pkc) +- *pkc = kmem_cache_create(name, len, 0, SLAB_HWCACHE_ALIGN, 0); ++ *pkc = kmem_cache_create(name, len, 0, ++ SLAB_HWCACHE_ALIGN | SLAB_ACCOUNT, 0); + mutex_unlock(&pid_caches_mutex); + /* current can fail, but someone else can succeed. 
*/ + return READ_ONCE(*pkc); +diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c +index 233322c77b76c..5de084dab4fa6 100644 +--- a/kernel/trace/trace_kprobe.c ++++ b/kernel/trace/trace_kprobe.c +@@ -646,7 +646,11 @@ static int register_trace_kprobe(struct trace_kprobe *tk) + /* Register new event */ + ret = register_kprobe_event(tk); + if (ret) { +- pr_warn("Failed to register probe event(%d)\n", ret); ++ if (ret == -EEXIST) { ++ trace_probe_log_set_index(0); ++ trace_probe_log_err(0, EVENT_EXIST); ++ } else ++ pr_warn("Failed to register probe event(%d)\n", ret); + goto end; + } + +diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c +index f98d6d94cbbf7..23e85cb151346 100644 +--- a/kernel/trace/trace_probe.c ++++ b/kernel/trace/trace_probe.c +@@ -1029,11 +1029,36 @@ error: + return ret; + } + ++static struct trace_event_call * ++find_trace_event_call(const char *system, const char *event_name) ++{ ++ struct trace_event_call *tp_event; ++ const char *name; ++ ++ list_for_each_entry(tp_event, &ftrace_events, list) { ++ if (!tp_event->class->system || ++ strcmp(system, tp_event->class->system)) ++ continue; ++ name = trace_event_name(tp_event); ++ if (!name || strcmp(event_name, name)) ++ continue; ++ return tp_event; ++ } ++ ++ return NULL; ++} ++ + int trace_probe_register_event_call(struct trace_probe *tp) + { + struct trace_event_call *call = trace_probe_event_call(tp); + int ret; + ++ lockdep_assert_held(&event_mutex); ++ ++ if (find_trace_event_call(trace_probe_group_name(tp), ++ trace_probe_name(tp))) ++ return -EEXIST; ++ + ret = register_trace_event(&call->event); + if (!ret) + return -ENODEV; +diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h +index a0ff9e200ef6f..bab9e0dba9af2 100644 +--- a/kernel/trace/trace_probe.h ++++ b/kernel/trace/trace_probe.h +@@ -410,6 +410,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call, + C(NO_EVENT_NAME, "Event name is not specified"), \ + 
C(EVENT_TOO_LONG, "Event name is too long"), \ + C(BAD_EVENT_NAME, "Event name must follow the same rules as C identifiers"), \ ++ C(EVENT_EXIST, "Given group/event name is already used by another event"), \ + C(RETVAL_ON_PROBE, "$retval is not available on probe"), \ + C(BAD_STACK_NUM, "Invalid stack number"), \ + C(BAD_ARG_NUM, "Invalid argument number"), \ +diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c +index 5294843de6efd..b515db036becc 100644 +--- a/kernel/trace/trace_uprobe.c ++++ b/kernel/trace/trace_uprobe.c +@@ -514,7 +514,11 @@ static int register_trace_uprobe(struct trace_uprobe *tu) + + ret = register_uprobe_event(tu); + if (ret) { +- pr_warn("Failed to register probe event(%d)\n", ret); ++ if (ret == -EEXIST) { ++ trace_probe_log_set_index(0); ++ trace_probe_log_err(0, EVENT_EXIST); ++ } else ++ pr_warn("Failed to register probe event(%d)\n", ret); + goto end; + } + +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index 6aeb53b4e19f8..885d4792abdfc 100644 +--- a/kernel/workqueue.c ++++ b/kernel/workqueue.c +@@ -5869,6 +5869,13 @@ static void __init wq_numa_init(void) + return; + } + ++ for_each_possible_cpu(cpu) { ++ if (WARN_ON(cpu_to_node(cpu) == NUMA_NO_NODE)) { ++ pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu); ++ return; ++ } ++ } ++ + wq_update_unbound_numa_attrs_buf = alloc_workqueue_attrs(); + BUG_ON(!wq_update_unbound_numa_attrs_buf); + +@@ -5886,11 +5893,6 @@ static void __init wq_numa_init(void) + + for_each_possible_cpu(cpu) { + node = cpu_to_node(cpu); +- if (WARN_ON(node == NUMA_NO_NODE)) { +- pr_warn("workqueue: NUMA node mapping not available for cpu%d, disabling NUMA support\n", cpu); +- /* happens iff arch is bonkers, let's just proceed */ +- return; +- } + cpumask_set_cpu(cpu, tbl[node]); + } + +diff --git a/lib/test_bpf.c b/lib/test_bpf.c +index 5ef3eccee27cb..3ae002ced4c7a 100644 +--- a/lib/test_bpf.c ++++ b/lib/test_bpf.c +@@ -4286,8 +4286,8 @@ static 
struct bpf_test tests[] = { + .u.insns_int = { + BPF_LD_IMM64(R0, 0), + BPF_LD_IMM64(R1, 0xffffffffffffffffLL), +- BPF_STX_MEM(BPF_W, R10, R1, -40), +- BPF_LDX_MEM(BPF_W, R0, R10, -40), ++ BPF_STX_MEM(BPF_DW, R10, R1, -40), ++ BPF_LDX_MEM(BPF_DW, R0, R10, -40), + BPF_EXIT_INSN(), + }, + INTERNAL, +@@ -6684,7 +6684,14 @@ static int run_one(const struct bpf_prog *fp, struct bpf_test *test) + u64 duration; + u32 ret; + +- if (test->test[i].data_size == 0 && ++ /* ++ * NOTE: Several sub-tests may be present, in which case ++ * a zero {data_size, result} tuple indicates the end of ++ * the sub-test array. The first test is always run, ++ * even if both data_size and result happen to be zero. ++ */ ++ if (i > 0 && ++ test->test[i].data_size == 0 && + test->test[i].result == 0) + break; + +diff --git a/lib/test_stackinit.c b/lib/test_stackinit.c +index 2d7d257a430e6..35d398b065e4f 100644 +--- a/lib/test_stackinit.c ++++ b/lib/test_stackinit.c +@@ -67,10 +67,10 @@ static bool range_contains(char *haystack_start, size_t haystack_size, + #define INIT_STRUCT_none /**/ + #define INIT_STRUCT_zero = { } + #define INIT_STRUCT_static_partial = { .two = 0, } +-#define INIT_STRUCT_static_all = { .one = arg->one, \ +- .two = arg->two, \ +- .three = arg->three, \ +- .four = arg->four, \ ++#define INIT_STRUCT_static_all = { .one = 0, \ ++ .two = 0, \ ++ .three = 0, \ ++ .four = 0, \ + } + #define INIT_STRUCT_dynamic_partial = { .two = arg->two, } + #define INIT_STRUCT_dynamic_all = { .one = arg->one, \ +@@ -84,8 +84,7 @@ static bool range_contains(char *haystack_start, size_t haystack_size, + var.one = 0; \ + var.two = 0; \ + var.three = 0; \ +- memset(&var.four, 0, \ +- sizeof(var.four)) ++ var.four = 0 + + /* + * @name: unique string name for the test +@@ -208,18 +207,13 @@ struct test_small_hole { + unsigned long four; + }; + +-/* Try to trigger unhandled padding in a structure. 
*/ +-struct test_aligned { +- u32 internal1; +- u64 internal2; +-} __aligned(64); +- ++/* Trigger unhandled padding in a structure. */ + struct test_big_hole { + u8 one; + u8 two; + u8 three; + /* 61 byte padding hole here. */ +- struct test_aligned four; ++ u8 four __aligned(64); + } __aligned(64); + + struct test_trailing_hole { +diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c +index 308beca3ffebc..bcc2686bd0a1b 100644 +--- a/mm/memory_hotplug.c ++++ b/mm/memory_hotplug.c +@@ -775,8 +775,8 @@ static inline struct zone *default_zone_for_pfn(int nid, unsigned long start_pfn + return movable_node_enabled ? movable_zone : kernel_zone; + } + +-struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn, +- unsigned long nr_pages) ++struct zone *zone_for_pfn_range(int online_type, int nid, ++ unsigned long start_pfn, unsigned long nr_pages) + { + if (online_type == MMOP_ONLINE_KERNEL) + return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages); +diff --git a/mm/vmscan.c b/mm/vmscan.c +index fad9be4703ece..de94881eaa927 100644 +--- a/mm/vmscan.c ++++ b/mm/vmscan.c +@@ -2513,7 +2513,7 @@ out: + cgroup_size = max(cgroup_size, protection); + + scan = lruvec_size - lruvec_size * protection / +- cgroup_size; ++ (cgroup_size + 1); + + /* + * Minimally target SWAP_CLUSTER_MAX pages to keep +diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c +index 3963eb11c3fbd..44e6c74ed4288 100644 +--- a/net/9p/trans_xen.c ++++ b/net/9p/trans_xen.c +@@ -138,7 +138,7 @@ static bool p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size) + + static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req) + { +- struct xen_9pfs_front_priv *priv = NULL; ++ struct xen_9pfs_front_priv *priv; + RING_IDX cons, prod, masked_cons, masked_prod; + unsigned long flags; + u32 size = p9_req->tc.size; +@@ -151,7 +151,7 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req) + break; + } + read_unlock(&xen_9pfs_lock); +- if (!priv || 
priv->client != client) ++ if (list_entry_is_head(priv, &xen_9pfs_devs, list)) + return -EINVAL; + + num = p9_req->tc.tag % priv->num_rings; +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index e8e7f108b0161..31469ff084cd3 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -4202,6 +4202,21 @@ static void hci_sync_conn_complete_evt(struct hci_dev *hdev, + + switch (ev->status) { + case 0x00: ++ /* The synchronous connection complete event should only be ++ * sent once per new connection. Receiving a successful ++ * complete event when the connection status is already ++ * BT_CONNECTED means that the device is misbehaving and sent ++ * multiple complete event packets for the same new connection. ++ * ++ * Registering the device more than once can corrupt kernel ++ * memory, hence upon detecting this invalid event, we report ++ * an error and ignore the packet. ++ */ ++ if (conn->state == BT_CONNECTED) { ++ bt_dev_err(hdev, "Ignoring connect complete event for existing connection"); ++ goto unlock; ++ } ++ + conn->handle = __le16_to_cpu(ev->handle); + conn->state = BT_CONNECTED; + conn->type = ev->link_type; +@@ -4905,9 +4920,64 @@ static void hci_disconn_phylink_complete_evt(struct hci_dev *hdev, + } + #endif + ++static void le_conn_update_addr(struct hci_conn *conn, bdaddr_t *bdaddr, ++ u8 bdaddr_type, bdaddr_t *local_rpa) ++{ ++ if (conn->out) { ++ conn->dst_type = bdaddr_type; ++ conn->resp_addr_type = bdaddr_type; ++ bacpy(&conn->resp_addr, bdaddr); ++ ++ /* Check if the controller has set a Local RPA then it must be ++ * used instead or hdev->rpa. 
++ */ ++ if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) { ++ conn->init_addr_type = ADDR_LE_DEV_RANDOM; ++ bacpy(&conn->init_addr, local_rpa); ++ } else if (hci_dev_test_flag(conn->hdev, HCI_PRIVACY)) { ++ conn->init_addr_type = ADDR_LE_DEV_RANDOM; ++ bacpy(&conn->init_addr, &conn->hdev->rpa); ++ } else { ++ hci_copy_identity_address(conn->hdev, &conn->init_addr, ++ &conn->init_addr_type); ++ } ++ } else { ++ conn->resp_addr_type = conn->hdev->adv_addr_type; ++ /* Check if the controller has set a Local RPA then it must be ++ * used instead or hdev->rpa. ++ */ ++ if (local_rpa && bacmp(local_rpa, BDADDR_ANY)) { ++ conn->resp_addr_type = ADDR_LE_DEV_RANDOM; ++ bacpy(&conn->resp_addr, local_rpa); ++ } else if (conn->hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) { ++ /* In case of ext adv, resp_addr will be updated in ++ * Adv Terminated event. ++ */ ++ if (!ext_adv_capable(conn->hdev)) ++ bacpy(&conn->resp_addr, ++ &conn->hdev->random_addr); ++ } else { ++ bacpy(&conn->resp_addr, &conn->hdev->bdaddr); ++ } ++ ++ conn->init_addr_type = bdaddr_type; ++ bacpy(&conn->init_addr, bdaddr); ++ ++ /* For incoming connections, set the default minimum ++ * and maximum connection interval. They will be used ++ * to check if the parameters are in range and if not ++ * trigger the connection update procedure. 
++ */ ++ conn->le_conn_min_interval = conn->hdev->le_conn_min_interval; ++ conn->le_conn_max_interval = conn->hdev->le_conn_max_interval; ++ } ++} ++ + static void le_conn_complete_evt(struct hci_dev *hdev, u8 status, +- bdaddr_t *bdaddr, u8 bdaddr_type, u8 role, u16 handle, +- u16 interval, u16 latency, u16 supervision_timeout) ++ bdaddr_t *bdaddr, u8 bdaddr_type, ++ bdaddr_t *local_rpa, u8 role, u16 handle, ++ u16 interval, u16 latency, ++ u16 supervision_timeout) + { + struct hci_conn_params *params; + struct hci_conn *conn; +@@ -4955,32 +5025,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status, + cancel_delayed_work(&conn->le_conn_timeout); + } + +- if (!conn->out) { +- /* Set the responder (our side) address type based on +- * the advertising address type. +- */ +- conn->resp_addr_type = hdev->adv_addr_type; +- if (hdev->adv_addr_type == ADDR_LE_DEV_RANDOM) { +- /* In case of ext adv, resp_addr will be updated in +- * Adv Terminated event. +- */ +- if (!ext_adv_capable(hdev)) +- bacpy(&conn->resp_addr, &hdev->random_addr); +- } else { +- bacpy(&conn->resp_addr, &hdev->bdaddr); +- } +- +- conn->init_addr_type = bdaddr_type; +- bacpy(&conn->init_addr, bdaddr); +- +- /* For incoming connections, set the default minimum +- * and maximum connection interval. They will be used +- * to check if the parameters are in range and if not +- * trigger the connection update procedure. +- */ +- conn->le_conn_min_interval = hdev->le_conn_min_interval; +- conn->le_conn_max_interval = hdev->le_conn_max_interval; +- } ++ le_conn_update_addr(conn, bdaddr, bdaddr_type, local_rpa); + + /* Lookup the identity address from the stored connection + * address and address type. 
+@@ -5074,7 +5119,7 @@ static void hci_le_conn_complete_evt(struct hci_dev *hdev, struct sk_buff *skb) + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); + + le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type, +- ev->role, le16_to_cpu(ev->handle), ++ NULL, ev->role, le16_to_cpu(ev->handle), + le16_to_cpu(ev->interval), + le16_to_cpu(ev->latency), + le16_to_cpu(ev->supervision_timeout)); +@@ -5088,7 +5133,7 @@ static void hci_le_enh_conn_complete_evt(struct hci_dev *hdev, + BT_DBG("%s status 0x%2.2x", hdev->name, ev->status); + + le_conn_complete_evt(hdev, ev->status, &ev->bdaddr, ev->bdaddr_type, +- ev->role, le16_to_cpu(ev->handle), ++ &ev->local_rpa, ev->role, le16_to_cpu(ev->handle), + le16_to_cpu(ev->interval), + le16_to_cpu(ev->latency), + le16_to_cpu(ev->supervision_timeout)); +@@ -5119,7 +5164,8 @@ static void hci_le_ext_adv_term_evt(struct hci_dev *hdev, struct sk_buff *skb) + if (conn) { + struct adv_info *adv_instance; + +- if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM) ++ if (hdev->adv_addr_type != ADDR_LE_DEV_RANDOM || ++ bacmp(&conn->resp_addr, BDADDR_ANY)) + return; + + if (!hdev->cur_adv_instance) { +diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c +index 1b7540cb8e5c4..1915943bb646a 100644 +--- a/net/bluetooth/sco.c ++++ b/net/bluetooth/sco.c +@@ -48,6 +48,8 @@ struct sco_conn { + spinlock_t lock; + struct sock *sk; + ++ struct delayed_work timeout_work; ++ + unsigned int mtu; + }; + +@@ -73,9 +75,20 @@ struct sco_pinfo { + #define SCO_CONN_TIMEOUT (HZ * 40) + #define SCO_DISCONN_TIMEOUT (HZ * 2) + +-static void sco_sock_timeout(struct timer_list *t) ++static void sco_sock_timeout(struct work_struct *work) + { +- struct sock *sk = from_timer(sk, t, sk_timer); ++ struct sco_conn *conn = container_of(work, struct sco_conn, ++ timeout_work.work); ++ struct sock *sk; ++ ++ sco_conn_lock(conn); ++ sk = conn->sk; ++ if (sk) ++ sock_hold(sk); ++ sco_conn_unlock(conn); ++ ++ if (!sk) ++ return; + + BT_DBG("sock %p state %d", sk, 
sk->sk_state); + +@@ -89,14 +102,21 @@ static void sco_sock_timeout(struct timer_list *t) + + static void sco_sock_set_timer(struct sock *sk, long timeout) + { ++ if (!sco_pi(sk)->conn) ++ return; ++ + BT_DBG("sock %p state %d timeout %ld", sk, sk->sk_state, timeout); +- sk_reset_timer(sk, &sk->sk_timer, jiffies + timeout); ++ cancel_delayed_work(&sco_pi(sk)->conn->timeout_work); ++ schedule_delayed_work(&sco_pi(sk)->conn->timeout_work, timeout); + } + + static void sco_sock_clear_timer(struct sock *sk) + { ++ if (!sco_pi(sk)->conn) ++ return; ++ + BT_DBG("sock %p state %d", sk, sk->sk_state); +- sk_stop_timer(sk, &sk->sk_timer); ++ cancel_delayed_work(&sco_pi(sk)->conn->timeout_work); + } + + /* ---- SCO connections ---- */ +@@ -176,6 +196,9 @@ static void sco_conn_del(struct hci_conn *hcon, int err) + sco_chan_del(sk, err); + bh_unlock_sock(sk); + sock_put(sk); ++ ++ /* Ensure no more work items will run before freeing conn. */ ++ cancel_delayed_work_sync(&conn->timeout_work); + } + + hcon->sco_data = NULL; +@@ -190,6 +213,8 @@ static void __sco_chan_add(struct sco_conn *conn, struct sock *sk, + sco_pi(sk)->conn = conn; + conn->sk = sk; + ++ INIT_DELAYED_WORK(&conn->timeout_work, sco_sock_timeout); ++ + if (parent) + bt_accept_enqueue(parent, sk, true); + } +@@ -209,44 +234,32 @@ static int sco_chan_add(struct sco_conn *conn, struct sock *sk, + return err; + } + +-static int sco_connect(struct sock *sk) ++static int sco_connect(struct hci_dev *hdev, struct sock *sk) + { + struct sco_conn *conn; + struct hci_conn *hcon; +- struct hci_dev *hdev; + int err, type; + + BT_DBG("%pMR -> %pMR", &sco_pi(sk)->src, &sco_pi(sk)->dst); + +- hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src, BDADDR_BREDR); +- if (!hdev) +- return -EHOSTUNREACH; +- +- hci_dev_lock(hdev); +- + if (lmp_esco_capable(hdev) && !disable_esco) + type = ESCO_LINK; + else + type = SCO_LINK; + + if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT && +- (!lmp_transp_capable(hdev) || 
!lmp_esco_capable(hdev))) { +- err = -EOPNOTSUPP; +- goto done; +- } ++ (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) ++ return -EOPNOTSUPP; + + hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst, + sco_pi(sk)->setting); +- if (IS_ERR(hcon)) { +- err = PTR_ERR(hcon); +- goto done; +- } ++ if (IS_ERR(hcon)) ++ return PTR_ERR(hcon); + + conn = sco_conn_add(hcon); + if (!conn) { + hci_conn_drop(hcon); +- err = -ENOMEM; +- goto done; ++ return -ENOMEM; + } + + /* Update source addr of the socket */ +@@ -254,7 +267,7 @@ static int sco_connect(struct sock *sk) + + err = sco_chan_add(conn, sk, NULL); + if (err) +- goto done; ++ return err; + + if (hcon->state == BT_CONNECTED) { + sco_sock_clear_timer(sk); +@@ -264,9 +277,6 @@ static int sco_connect(struct sock *sk) + sco_sock_set_timer(sk, sk->sk_sndtimeo); + } + +-done: +- hci_dev_unlock(hdev); +- hci_dev_put(hdev); + return err; + } + +@@ -484,8 +494,6 @@ static struct sock *sco_sock_alloc(struct net *net, struct socket *sock, + + sco_pi(sk)->setting = BT_VOICE_CVSD_16BIT; + +- timer_setup(&sk->sk_timer, sco_sock_timeout, 0); +- + bt_sock_link(&sco_sk_list, sk); + return sk; + } +@@ -550,6 +558,7 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen + { + struct sockaddr_sco *sa = (struct sockaddr_sco *) addr; + struct sock *sk = sock->sk; ++ struct hci_dev *hdev; + int err; + + BT_DBG("sk %p", sk); +@@ -564,12 +573,19 @@ static int sco_sock_connect(struct socket *sock, struct sockaddr *addr, int alen + if (sk->sk_type != SOCK_SEQPACKET) + return -EINVAL; + ++ hdev = hci_get_route(&sa->sco_bdaddr, &sco_pi(sk)->src, BDADDR_BREDR); ++ if (!hdev) ++ return -EHOSTUNREACH; ++ hci_dev_lock(hdev); ++ + lock_sock(sk); + + /* Set destination address and psm */ + bacpy(&sco_pi(sk)->dst, &sa->sco_bdaddr); + +- err = sco_connect(sk); ++ err = sco_connect(hdev, sk); ++ hci_dev_unlock(hdev); ++ hci_dev_put(hdev); + if (err) + goto done; + +diff --git a/net/caif/chnl_net.c b/net/caif/chnl_net.c 
+index a566289628522..910f164dd20cb 100644 +--- a/net/caif/chnl_net.c ++++ b/net/caif/chnl_net.c +@@ -53,20 +53,6 @@ struct chnl_net { + enum caif_states state; + }; + +-static void robust_list_del(struct list_head *delete_node) +-{ +- struct list_head *list_node; +- struct list_head *n; +- ASSERT_RTNL(); +- list_for_each_safe(list_node, n, &chnl_net_list) { +- if (list_node == delete_node) { +- list_del(list_node); +- return; +- } +- } +- WARN_ON(1); +-} +- + static int chnl_recv_cb(struct cflayer *layr, struct cfpkt *pkt) + { + struct sk_buff *skb; +@@ -368,6 +354,7 @@ static int chnl_net_init(struct net_device *dev) + ASSERT_RTNL(); + priv = netdev_priv(dev); + strncpy(priv->name, dev->name, sizeof(priv->name)); ++ INIT_LIST_HEAD(&priv->list_field); + return 0; + } + +@@ -376,7 +363,7 @@ static void chnl_net_uninit(struct net_device *dev) + struct chnl_net *priv; + ASSERT_RTNL(); + priv = netdev_priv(dev); +- robust_list_del(&priv->list_field); ++ list_del_init(&priv->list_field); + } + + static const struct net_device_ops netdev_ops = { +@@ -541,7 +528,7 @@ static void __exit chnl_exit_module(void) + rtnl_lock(); + list_for_each_safe(list_node, _tmp, &chnl_net_list) { + dev = list_entry(list_node, struct chnl_net, list_field); +- list_del(list_node); ++ list_del_init(list_node); + delete_device(dev); + } + rtnl_unlock(); +diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c +index 96957a7c732fa..b740a74f06f22 100644 +--- a/net/core/flow_dissector.c ++++ b/net/core/flow_dissector.c +@@ -1025,8 +1025,10 @@ proto_again: + FLOW_DISSECTOR_KEY_IPV4_ADDRS, + target_container); + +- memcpy(&key_addrs->v4addrs, &iph->saddr, +- sizeof(key_addrs->v4addrs)); ++ memcpy(&key_addrs->v4addrs.src, &iph->saddr, ++ sizeof(key_addrs->v4addrs.src)); ++ memcpy(&key_addrs->v4addrs.dst, &iph->daddr, ++ sizeof(key_addrs->v4addrs.dst)); + key_control->addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS; + } + +@@ -1070,8 +1072,10 @@ proto_again: + FLOW_DISSECTOR_KEY_IPV6_ADDRS, + 
target_container); + +- memcpy(&key_addrs->v6addrs, &iph->saddr, +- sizeof(key_addrs->v6addrs)); ++ memcpy(&key_addrs->v6addrs.src, &iph->saddr, ++ sizeof(key_addrs->v6addrs.src)); ++ memcpy(&key_addrs->v6addrs.dst, &iph->daddr, ++ sizeof(key_addrs->v6addrs.dst)); + key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS; + } + +diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c +index 25187528c308a..1f352d669c944 100644 +--- a/net/dccp/minisocks.c ++++ b/net/dccp/minisocks.c +@@ -94,6 +94,8 @@ struct sock *dccp_create_openreq_child(const struct sock *sk, + newdp->dccps_role = DCCP_ROLE_SERVER; + newdp->dccps_hc_rx_ackvec = NULL; + newdp->dccps_service_list = NULL; ++ newdp->dccps_hc_rx_ccid = NULL; ++ newdp->dccps_hc_tx_ccid = NULL; + newdp->dccps_service = dreq->dreq_service; + newdp->dccps_timestamp_echo = dreq->dreq_timestamp_echo; + newdp->dccps_timestamp_time = dreq->dreq_timestamp_time; +diff --git a/net/dsa/slave.c b/net/dsa/slave.c +index 75b4cd4bcafb9..59759ceb426ac 100644 +--- a/net/dsa/slave.c ++++ b/net/dsa/slave.c +@@ -1327,13 +1327,11 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev) + * use the switch internal MDIO bus instead + */ + ret = dsa_slave_phy_connect(slave_dev, dp->index); +- if (ret) { +- netdev_err(slave_dev, +- "failed to connect to port %d: %d\n", +- dp->index, ret); +- phylink_destroy(dp->pl); +- return ret; +- } ++ } ++ if (ret) { ++ netdev_err(slave_dev, "failed to connect to PHY: %pe\n", ++ ERR_PTR(ret)); ++ phylink_destroy(dp->pl); + } + + return ret; +diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c +index fd8298b8b1c52..c4989e5903e43 100644 +--- a/net/ipv4/ip_gre.c ++++ b/net/ipv4/ip_gre.c +@@ -446,8 +446,6 @@ static void __gre_xmit(struct sk_buff *skb, struct net_device *dev, + + static int gre_handle_offloads(struct sk_buff *skb, bool csum) + { +- if (csum && skb_checksum_start(skb) < skb->data) +- return -EINVAL; + return iptunnel_handle_offloads(skb, csum ? 
SKB_GSO_GRE_CSUM : SKB_GSO_GRE); + } + +@@ -605,15 +603,20 @@ static netdev_tx_t ipgre_xmit(struct sk_buff *skb, + } + + if (dev->header_ops) { ++ const int pull_len = tunnel->hlen + sizeof(struct iphdr); ++ + if (skb_cow_head(skb, 0)) + goto free_skb; + + tnl_params = (const struct iphdr *)skb->data; + ++ if (pull_len > skb_transport_offset(skb)) ++ goto free_skb; ++ + /* Pull skb since ip_tunnel_xmit() needs skb->data pointing + * to gre header. + */ +- skb_pull(skb, tunnel->hlen + sizeof(struct iphdr)); ++ skb_pull(skb, pull_len); + skb_reset_mac_header(skb); + } else { + if (skb_cow_head(skb, dev->needed_headroom)) +diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c +index f52bc9c22e5b8..0ec529d77a56e 100644 +--- a/net/ipv4/ip_output.c ++++ b/net/ipv4/ip_output.c +@@ -446,8 +446,9 @@ static void ip_copy_addrs(struct iphdr *iph, const struct flowi4 *fl4) + { + BUILD_BUG_ON(offsetof(typeof(*fl4), daddr) != + offsetof(typeof(*fl4), saddr) + sizeof(fl4->saddr)); +- memcpy(&iph->saddr, &fl4->saddr, +- sizeof(fl4->saddr) + sizeof(fl4->daddr)); ++ ++ iph->saddr = fl4->saddr; ++ iph->daddr = fl4->daddr; + } + + /* Note: skb->sk can be different from sk, in case of tunnels */ +diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c +index f5f4369c131c9..858bb10d8341e 100644 +--- a/net/ipv4/nexthop.c ++++ b/net/ipv4/nexthop.c +@@ -1183,6 +1183,7 @@ static int nh_create_ipv4(struct net *net, struct nexthop *nh, + .fc_gw4 = cfg->gw.ipv4, + .fc_gw_family = cfg->gw.ipv4 ? 
AF_INET : 0, + .fc_flags = cfg->nh_flags, ++ .fc_nlinfo = cfg->nlinfo, + .fc_encap = cfg->nh_encap, + .fc_encap_type = cfg->nh_encap_type, + }; +@@ -1218,6 +1219,7 @@ static int nh_create_ipv6(struct net *net, struct nexthop *nh, + .fc_ifindex = cfg->nh_ifindex, + .fc_gateway = cfg->gw.ipv6, + .fc_flags = cfg->nh_flags, ++ .fc_nlinfo = cfg->nlinfo, + .fc_encap = cfg->nh_encap, + .fc_encap_type = cfg->nh_encap_type, + }; +diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c +index 8af4fefe371f2..a5ec77a5ad6f5 100644 +--- a/net/ipv4/tcp_fastopen.c ++++ b/net/ipv4/tcp_fastopen.c +@@ -379,8 +379,7 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb, + return NULL; + } + +- if (syn_data && +- tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD)) ++ if (tcp_fastopen_no_cookie(sk, dst, TFO_SERVER_COOKIE_NOT_REQD)) + goto fastopen; + + if (foc->len == 0) { +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c +index a1768ded2d545..c0fcfa2964686 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -1209,7 +1209,7 @@ static u8 tcp_sacktag_one(struct sock *sk, + if (dup_sack && (sacked & TCPCB_RETRANS)) { + if (tp->undo_marker && tp->undo_retrans > 0 && + after(end_seq, tp->undo_marker)) +- tp->undo_retrans--; ++ tp->undo_retrans = max_t(int, 0, tp->undo_retrans - pcount); + if ((sacked & TCPCB_SACKED_ACKED) && + before(start_seq, state->reord)) + state->reord = start_seq; +diff --git a/net/ipv6/netfilter/nf_socket_ipv6.c b/net/ipv6/netfilter/nf_socket_ipv6.c +index b9df879c48d3f..69c021704abd7 100644 +--- a/net/ipv6/netfilter/nf_socket_ipv6.c ++++ b/net/ipv6/netfilter/nf_socket_ipv6.c +@@ -99,7 +99,7 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb, + { + __be16 uninitialized_var(dport), uninitialized_var(sport); + const struct in6_addr *daddr = NULL, *saddr = NULL; +- struct ipv6hdr *iph = ipv6_hdr(skb); ++ struct ipv6hdr *iph = ipv6_hdr(skb), ipv6_var; + struct sk_buff *data_skb = NULL; + 
int doff = 0; + int thoff = 0, tproto; +@@ -129,8 +129,6 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb, + thoff + sizeof(*hp); + + } else if (tproto == IPPROTO_ICMPV6) { +- struct ipv6hdr ipv6_var; +- + if (extract_icmp6_fields(skb, thoff, &tproto, &saddr, &daddr, + &sport, &dport, &ipv6_var)) + return NULL; +diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c +index 95805a6331be2..421b2c89ce12a 100644 +--- a/net/l2tp/l2tp_core.c ++++ b/net/l2tp/l2tp_core.c +@@ -886,8 +886,10 @@ static int l2tp_udp_recv_core(struct l2tp_tunnel *tunnel, struct sk_buff *skb) + } + + if (tunnel->version == L2TP_HDR_VER_3 && +- l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr)) ++ l2tp_v3_ensure_opt_in_linear(session, skb, &ptr, &optr)) { ++ l2tp_session_dec_refcount(session); + goto error; ++ } + + l2tp_recv_common(session, skb, ptr, optr, hdrflags, length); + l2tp_session_dec_refcount(session); +diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c +index 6f576306a4d74..ddc001ad90555 100644 +--- a/net/mac80211/iface.c ++++ b/net/mac80211/iface.c +@@ -1875,9 +1875,16 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name, + + netdev_set_default_ethtool_ops(ndev, &ieee80211_ethtool_ops); + +- /* MTU range: 256 - 2304 */ ++ /* MTU range is normally 256 - 2304, where the upper limit is ++ * the maximum MSDU size. Monitor interfaces send and receive ++ * MPDU and A-MSDU frames which may be much larger so we do ++ * not impose an upper limit in that case. 
++ */ + ndev->min_mtu = 256; +- ndev->max_mtu = local->hw.max_mtu; ++ if (type == NL80211_IFTYPE_MONITOR) ++ ndev->max_mtu = 0; ++ else ++ ndev->max_mtu = local->hw.max_mtu; + + ret = register_netdevice(ndev); + if (ret) { +diff --git a/net/netlabel/netlabel_cipso_v4.c b/net/netlabel/netlabel_cipso_v4.c +index 8cd3daf0e3db6..1778e4e8ce247 100644 +--- a/net/netlabel/netlabel_cipso_v4.c ++++ b/net/netlabel/netlabel_cipso_v4.c +@@ -144,8 +144,8 @@ static int netlbl_cipsov4_add_std(struct genl_info *info, + return -ENOMEM; + doi_def->map.std = kzalloc(sizeof(*doi_def->map.std), GFP_KERNEL); + if (doi_def->map.std == NULL) { +- ret_val = -ENOMEM; +- goto add_std_failure; ++ kfree(doi_def); ++ return -ENOMEM; + } + doi_def->type = CIPSO_V4_MAP_TRANS; + +diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c +index 9d993b4cf1aff..acc76a738cfd8 100644 +--- a/net/netlink/af_netlink.c ++++ b/net/netlink/af_netlink.c +@@ -2521,13 +2521,15 @@ int nlmsg_notify(struct sock *sk, struct sk_buff *skb, u32 portid, + /* errors reported via destination sk->sk_err, but propagate + * delivery errors if NETLINK_BROADCAST_ERROR flag is set */ + err = nlmsg_multicast(sk, skb, exclude_portid, group, flags); ++ if (err == -ESRCH) ++ err = 0; + } + + if (report) { + int err2; + + err2 = nlmsg_unicast(sk, skb, portid); +- if (!err || err == -ESRCH) ++ if (!err) + err = err2; + } + +diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c +index 76d72c3f52eda..86fb2f953bd5b 100644 +--- a/net/sched/sch_fq_codel.c ++++ b/net/sched/sch_fq_codel.c +@@ -370,6 +370,7 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt, + { + struct fq_codel_sched_data *q = qdisc_priv(sch); + struct nlattr *tb[TCA_FQ_CODEL_MAX + 1]; ++ u32 quantum = 0; + int err; + + if (!opt) +@@ -387,6 +388,13 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt, + q->flows_cnt > 65536) + return -EINVAL; + } ++ if (tb[TCA_FQ_CODEL_QUANTUM]) { ++ quantum = max(256U, 
nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM])); ++ if (quantum > FQ_CODEL_QUANTUM_MAX) { ++ NL_SET_ERR_MSG(extack, "Invalid quantum"); ++ return -EINVAL; ++ } ++ } + sch_tree_lock(sch); + + if (tb[TCA_FQ_CODEL_TARGET]) { +@@ -413,8 +421,8 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt, + if (tb[TCA_FQ_CODEL_ECN]) + q->cparams.ecn = !!nla_get_u32(tb[TCA_FQ_CODEL_ECN]); + +- if (tb[TCA_FQ_CODEL_QUANTUM]) +- q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM])); ++ if (quantum) ++ q->quantum = quantum; + + if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]) + q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])); +diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c +index a4de4853c79de..da9ed0613eb7b 100644 +--- a/net/sched/sch_taprio.c ++++ b/net/sched/sch_taprio.c +@@ -1503,7 +1503,9 @@ static int taprio_change(struct Qdisc *sch, struct nlattr *opt, + taprio_set_picos_per_byte(dev, q); + + if (mqprio) { +- netdev_set_num_tc(dev, mqprio->num_tc); ++ err = netdev_set_num_tc(dev, mqprio->num_tc); ++ if (err) ++ goto free_sched; + for (i = 0; i < mqprio->num_tc; i++) + netdev_set_tc_queue(dev, i, + mqprio->count[i], +diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c +index d5470c7fe8792..c0016473a255a 100644 +--- a/net/sunrpc/auth_gss/svcauth_gss.c ++++ b/net/sunrpc/auth_gss/svcauth_gss.c +@@ -1937,7 +1937,7 @@ gss_svc_init_net(struct net *net) + goto out2; + return 0; + out2: +- destroy_use_gss_proxy_proc_entry(net); ++ rsi_cache_destroy_net(net); + out1: + rsc_cache_destroy_net(net); + return rv; +diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c +index 639837b3a5d90..3653898f465ff 100644 +--- a/net/sunrpc/xprt.c ++++ b/net/sunrpc/xprt.c +@@ -729,9 +729,9 @@ void xprt_force_disconnect(struct rpc_xprt *xprt) + /* Try to schedule an autoclose RPC call */ + if (test_and_set_bit(XPRT_LOCKED, &xprt->state) == 0) + queue_work(xprtiod_workqueue, &xprt->task_cleanup); +- else if (xprt->snd_task) ++ else 
if (xprt->snd_task && !test_bit(XPRT_SND_IS_COOKIE, &xprt->state)) + rpc_wake_up_queued_task_set_status(&xprt->pending, +- xprt->snd_task, -ENOTCONN); ++ xprt->snd_task, -ENOTCONN); + spin_unlock(&xprt->transport_lock); + } + EXPORT_SYMBOL_GPL(xprt_force_disconnect); +@@ -820,6 +820,7 @@ bool xprt_lock_connect(struct rpc_xprt *xprt, + goto out; + if (xprt->snd_task != task) + goto out; ++ set_bit(XPRT_SND_IS_COOKIE, &xprt->state); + xprt->snd_task = cookie; + ret = true; + out: +@@ -835,6 +836,7 @@ void xprt_unlock_connect(struct rpc_xprt *xprt, void *cookie) + if (!test_bit(XPRT_LOCKED, &xprt->state)) + goto out; + xprt->snd_task =NULL; ++ clear_bit(XPRT_SND_IS_COOKIE, &xprt->state); + xprt->ops->release_xprt(xprt, NULL); + xprt_schedule_autodisconnect(xprt); + out: +diff --git a/net/tipc/socket.c b/net/tipc/socket.c +index a5922ce9109cf..fbbac9ba2862f 100644 +--- a/net/tipc/socket.c ++++ b/net/tipc/socket.c +@@ -1756,6 +1756,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m, + bool connected = !tipc_sk_type_connectionless(sk); + struct tipc_sock *tsk = tipc_sk(sk); + int rc, err, hlen, dlen, copy; ++ struct tipc_skb_cb *skb_cb; + struct sk_buff_head xmitq; + struct tipc_msg *hdr; + struct sk_buff *skb; +@@ -1779,6 +1780,7 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m, + if (unlikely(rc)) + goto exit; + skb = skb_peek(&sk->sk_receive_queue); ++ skb_cb = TIPC_SKB_CB(skb); + hdr = buf_msg(skb); + dlen = msg_data_sz(hdr); + hlen = msg_hdr_sz(hdr); +@@ -1798,18 +1800,33 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m, + + /* Capture data if non-error msg, otherwise just set return value */ + if (likely(!err)) { +- copy = min_t(int, dlen, buflen); +- if (unlikely(copy != dlen)) +- m->msg_flags |= MSG_TRUNC; +- rc = skb_copy_datagram_msg(skb, hlen, m, copy); ++ int offset = skb_cb->bytes_read; ++ ++ copy = min_t(int, dlen - offset, buflen); ++ rc = skb_copy_datagram_msg(skb, hlen + offset, m, copy); ++ if 
(unlikely(rc)) ++ goto exit; ++ if (unlikely(offset + copy < dlen)) { ++ if (flags & MSG_EOR) { ++ if (!(flags & MSG_PEEK)) ++ skb_cb->bytes_read = offset + copy; ++ } else { ++ m->msg_flags |= MSG_TRUNC; ++ skb_cb->bytes_read = 0; ++ } ++ } else { ++ if (flags & MSG_EOR) ++ m->msg_flags |= MSG_EOR; ++ skb_cb->bytes_read = 0; ++ } + } else { + copy = 0; + rc = 0; +- if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control) ++ if (err != TIPC_CONN_SHUTDOWN && connected && !m->msg_control) { + rc = -ECONNRESET; ++ goto exit; ++ } + } +- if (unlikely(rc)) +- goto exit; + + /* Mark message as group event if applicable */ + if (unlikely(grp_evt)) { +@@ -1832,6 +1849,9 @@ static int tipc_recvmsg(struct socket *sock, struct msghdr *m, + tipc_node_distr_xmit(sock_net(sk), &xmitq); + } + ++ if (skb_cb->bytes_read) ++ goto exit; ++ + tsk_advance_rx_queue(sk); + + if (likely(!connected)) +@@ -2255,7 +2275,7 @@ static int tipc_sk_backlog_rcv(struct sock *sk, struct sk_buff *skb) + static void tipc_sk_enqueue(struct sk_buff_head *inputq, struct sock *sk, + u32 dport, struct sk_buff_head *xmitq) + { +- unsigned long time_limit = jiffies + 2; ++ unsigned long time_limit = jiffies + usecs_to_jiffies(20000); + struct sk_buff *skb; + unsigned int lim; + atomic_t *dcnt; +diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c +index 52ee3a9bb7093..3098710c9c344 100644 +--- a/net/unix/af_unix.c ++++ b/net/unix/af_unix.c +@@ -2734,7 +2734,7 @@ static __poll_t unix_dgram_poll(struct file *file, struct socket *sock, + + other = unix_peer(sk); + if (other && unix_peer(other) != sk && +- unix_recvq_full(other) && ++ unix_recvq_full_lockless(other) && + unix_dgram_peer_wake_me(sk, other)) + writable = 0; + +diff --git a/samples/bpf/test_override_return.sh b/samples/bpf/test_override_return.sh +index e68b9ee6814b8..35db26f736b9d 100755 +--- a/samples/bpf/test_override_return.sh ++++ b/samples/bpf/test_override_return.sh +@@ -1,5 +1,6 @@ + #!/bin/bash + ++rm -r tmpmnt + rm -f testfile.img + 
dd if=/dev/zero of=testfile.img bs=1M seek=1000 count=1 + DEVICE=$(losetup --show -f testfile.img) +diff --git a/samples/bpf/tracex7_user.c b/samples/bpf/tracex7_user.c +index ea6dae78f0dff..2ed13e9f3fcb0 100644 +--- a/samples/bpf/tracex7_user.c ++++ b/samples/bpf/tracex7_user.c +@@ -13,6 +13,11 @@ int main(int argc, char **argv) + char command[256]; + int ret; + ++ if (!argv[1]) { ++ fprintf(stderr, "ERROR: Run with the btrfs device argument!\n"); ++ return 0; ++ } ++ + snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]); + + if (load_bpf_file(filename)) { +diff --git a/scripts/gen_ksymdeps.sh b/scripts/gen_ksymdeps.sh +index 1324986e1362c..725e8c9c1b53f 100755 +--- a/scripts/gen_ksymdeps.sh ++++ b/scripts/gen_ksymdeps.sh +@@ -4,7 +4,13 @@ + set -e + + # List of exported symbols +-ksyms=$($NM $1 | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z) ++# ++# If the object has no symbol, $NM warns 'no symbols'. ++# Suppress the stderr. ++# TODO: ++# Use -q instead of 2>/dev/null when we upgrade the minimum version of ++# binutils to 2.37, llvm to 13.0.0. ++ksyms=$($NM $1 2>/dev/null | sed -n 's/.*__ksym_marker_\(.*\)/\1/p' | tr A-Z a-z) + + if [ -z "$ksyms" ]; then + exit 0 +diff --git a/security/smack/smack_access.c b/security/smack/smack_access.c +index 38ac3da4e791e..beeba1a9be170 100644 +--- a/security/smack/smack_access.c ++++ b/security/smack/smack_access.c +@@ -81,23 +81,22 @@ int log_policy = SMACK_AUDIT_DENIED; + int smk_access_entry(char *subject_label, char *object_label, + struct list_head *rule_list) + { +- int may = -ENOENT; + struct smack_rule *srp; + + list_for_each_entry_rcu(srp, rule_list, list) { + if (srp->smk_object->smk_known == object_label && + srp->smk_subject->smk_known == subject_label) { +- may = srp->smk_access; +- break; ++ int may = srp->smk_access; ++ /* ++ * MAY_WRITE implies MAY_LOCK. ++ */ ++ if ((may & MAY_WRITE) == MAY_WRITE) ++ may |= MAY_LOCK; ++ return may; + } + } + +- /* +- * MAY_WRITE implies MAY_LOCK. 
+- */ +- if ((may & MAY_WRITE) == MAY_WRITE) +- may |= MAY_LOCK; +- return may; ++ return -ENOENT; + } + + /** +diff --git a/sound/soc/atmel/Kconfig b/sound/soc/atmel/Kconfig +index 71f2d42188c46..51e75b7819682 100644 +--- a/sound/soc/atmel/Kconfig ++++ b/sound/soc/atmel/Kconfig +@@ -11,7 +11,6 @@ if SND_ATMEL_SOC + + config SND_ATMEL_SOC_PDC + bool +- depends on HAS_DMA + + config SND_ATMEL_SOC_DMA + bool +diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c +index c67b86e2d0c0a..7830d014d9247 100644 +--- a/sound/soc/intel/boards/bytcr_rt5640.c ++++ b/sound/soc/intel/boards/bytcr_rt5640.c +@@ -284,9 +284,6 @@ static const struct snd_soc_dapm_widget byt_rt5640_widgets[] = { + static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = { + {"Headphone", NULL, "Platform Clock"}, + {"Headset Mic", NULL, "Platform Clock"}, +- {"Internal Mic", NULL, "Platform Clock"}, +- {"Speaker", NULL, "Platform Clock"}, +- + {"Headset Mic", NULL, "MICBIAS1"}, + {"IN2P", NULL, "Headset Mic"}, + {"Headphone", NULL, "HPOL"}, +@@ -294,19 +291,23 @@ static const struct snd_soc_dapm_route byt_rt5640_audio_map[] = { + }; + + static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic1_map[] = { ++ {"Internal Mic", NULL, "Platform Clock"}, + {"DMIC1", NULL, "Internal Mic"}, + }; + + static const struct snd_soc_dapm_route byt_rt5640_intmic_dmic2_map[] = { ++ {"Internal Mic", NULL, "Platform Clock"}, + {"DMIC2", NULL, "Internal Mic"}, + }; + + static const struct snd_soc_dapm_route byt_rt5640_intmic_in1_map[] = { ++ {"Internal Mic", NULL, "Platform Clock"}, + {"Internal Mic", NULL, "MICBIAS1"}, + {"IN1P", NULL, "Internal Mic"}, + }; + + static const struct snd_soc_dapm_route byt_rt5640_intmic_in3_map[] = { ++ {"Internal Mic", NULL, "Platform Clock"}, + {"Internal Mic", NULL, "MICBIAS1"}, + {"IN3P", NULL, "Internal Mic"}, + }; +@@ -348,6 +349,7 @@ static const struct snd_soc_dapm_route byt_rt5640_ssp0_aif2_map[] = { + }; + + static const struct 
snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = { ++ {"Speaker", NULL, "Platform Clock"}, + {"Speaker", NULL, "SPOLP"}, + {"Speaker", NULL, "SPOLN"}, + {"Speaker", NULL, "SPORP"}, +@@ -355,6 +357,7 @@ static const struct snd_soc_dapm_route byt_rt5640_stereo_spk_map[] = { + }; + + static const struct snd_soc_dapm_route byt_rt5640_mono_spk_map[] = { ++ {"Speaker", NULL, "Platform Clock"}, + {"Speaker", NULL, "SPOLP"}, + {"Speaker", NULL, "SPOLN"}, + }; +diff --git a/sound/soc/intel/skylake/skl-messages.c b/sound/soc/intel/skylake/skl-messages.c +index 476ef1897961d..79c6cf2c14bfb 100644 +--- a/sound/soc/intel/skylake/skl-messages.c ++++ b/sound/soc/intel/skylake/skl-messages.c +@@ -802,9 +802,12 @@ static u16 skl_get_module_param_size(struct skl_dev *skl, + + case SKL_MODULE_TYPE_BASE_OUTFMT: + case SKL_MODULE_TYPE_MIC_SELECT: +- case SKL_MODULE_TYPE_KPB: + return sizeof(struct skl_base_outfmt_cfg); + ++ case SKL_MODULE_TYPE_MIXER: ++ case SKL_MODULE_TYPE_KPB: ++ return sizeof(struct skl_base_cfg); ++ + default: + /* + * return only base cfg when no specific module type is +@@ -857,10 +860,14 @@ static int skl_set_module_format(struct skl_dev *skl, + + case SKL_MODULE_TYPE_BASE_OUTFMT: + case SKL_MODULE_TYPE_MIC_SELECT: +- case SKL_MODULE_TYPE_KPB: + skl_set_base_outfmt_format(skl, module_config, *param_data); + break; + ++ case SKL_MODULE_TYPE_MIXER: ++ case SKL_MODULE_TYPE_KPB: ++ skl_set_base_module_format(skl, module_config, *param_data); ++ break; ++ + default: + skl_set_base_module_format(skl, module_config, *param_data); + break; +diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c +index 7f287424af9b7..439dd4ba690c4 100644 +--- a/sound/soc/intel/skylake/skl-pcm.c ++++ b/sound/soc/intel/skylake/skl-pcm.c +@@ -1333,21 +1333,6 @@ static int skl_get_module_info(struct skl_dev *skl, + return -EIO; + } + +- list_for_each_entry(module, &skl->uuid_list, list) { +- if (guid_equal(uuid_mod, &module->uuid)) { +- mconfig->id.module_id = 
module->id; +- if (mconfig->module) +- mconfig->module->loadable = module->is_loadable; +- ret = 0; +- break; +- } +- } +- +- if (ret) +- return ret; +- +- uuid_mod = &module->uuid; +- ret = -EIO; + for (i = 0; i < skl->nr_modules; i++) { + skl_module = skl->modules[i]; + uuid_tplg = &skl_module->uuid; +@@ -1357,10 +1342,18 @@ static int skl_get_module_info(struct skl_dev *skl, + break; + } + } ++ + if (skl->nr_modules && ret) + return ret; + ++ ret = -EIO; + list_for_each_entry(module, &skl->uuid_list, list) { ++ if (guid_equal(uuid_mod, &module->uuid)) { ++ mconfig->id.module_id = module->id; ++ mconfig->module->loadable = module->is_loadable; ++ ret = 0; ++ } ++ + for (i = 0; i < MAX_IN_QUEUE; i++) { + pin_id = &mconfig->m_in_pin[i].id; + if (guid_equal(&pin_id->mod_uuid, &module->uuid)) +@@ -1374,7 +1367,7 @@ static int skl_get_module_info(struct skl_dev *skl, + } + } + +- return 0; ++ return ret; + } + + static int skl_populate_modules(struct skl_dev *skl) +diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c +index 61c984f10d8e6..086c90e095770 100644 +--- a/sound/soc/rockchip/rockchip_i2s.c ++++ b/sound/soc/rockchip/rockchip_i2s.c +@@ -186,7 +186,9 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai, + { + struct rk_i2s_dev *i2s = to_info(cpu_dai); + unsigned int mask = 0, val = 0; ++ int ret = 0; + ++ pm_runtime_get_sync(cpu_dai->dev); + mask = I2S_CKR_MSS_MASK; + switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) { + case SND_SOC_DAIFMT_CBS_CFS: +@@ -199,7 +201,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai, + i2s->is_master_mode = false; + break; + default: +- return -EINVAL; ++ ret = -EINVAL; ++ goto err_pm_put; + } + + regmap_update_bits(i2s->regmap, I2S_CKR, mask, val); +@@ -213,7 +216,8 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai, + val = I2S_CKR_CKP_POS; + break; + default: +- return -EINVAL; ++ ret = -EINVAL; ++ goto err_pm_put; + } + + regmap_update_bits(i2s->regmap, I2S_CKR, mask, 
val); +@@ -229,14 +233,15 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai, + case SND_SOC_DAIFMT_I2S: + val = I2S_TXCR_IBM_NORMAL; + break; +- case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */ +- val = I2S_TXCR_TFS_PCM; +- break; +- case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */ ++ case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */ + val = I2S_TXCR_TFS_PCM | I2S_TXCR_PBM_MODE(1); + break; ++ case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */ ++ val = I2S_TXCR_TFS_PCM; ++ break; + default: +- return -EINVAL; ++ ret = -EINVAL; ++ goto err_pm_put; + } + + regmap_update_bits(i2s->regmap, I2S_TXCR, mask, val); +@@ -252,19 +257,23 @@ static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai, + case SND_SOC_DAIFMT_I2S: + val = I2S_RXCR_IBM_NORMAL; + break; +- case SND_SOC_DAIFMT_DSP_A: /* PCM no delay mode */ +- val = I2S_RXCR_TFS_PCM; +- break; +- case SND_SOC_DAIFMT_DSP_B: /* PCM delay 1 mode */ ++ case SND_SOC_DAIFMT_DSP_A: /* PCM delay 1 bit mode */ + val = I2S_RXCR_TFS_PCM | I2S_RXCR_PBM_MODE(1); + break; ++ case SND_SOC_DAIFMT_DSP_B: /* PCM no delay mode */ ++ val = I2S_RXCR_TFS_PCM; ++ break; + default: +- return -EINVAL; ++ ret = -EINVAL; ++ goto err_pm_put; + } + + regmap_update_bits(i2s->regmap, I2S_RXCR, mask, val); + +- return 0; ++err_pm_put: ++ pm_runtime_put(cpu_dai->dev); ++ ++ return ret; + } + + static int rockchip_i2s_hw_params(struct snd_pcm_substream *substream, +diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config +index 9832affd5d54b..c75c9b03d6e77 100644 +--- a/tools/perf/Makefile.config ++++ b/tools/perf/Makefile.config +@@ -118,10 +118,10 @@ FEATURE_CHECK_LDFLAGS-libunwind = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS) + FEATURE_CHECK_CFLAGS-libunwind-debug-frame = $(LIBUNWIND_CFLAGS) + FEATURE_CHECK_LDFLAGS-libunwind-debug-frame = $(LIBUNWIND_LDFLAGS) $(LIBUNWIND_LIBS) + +-FEATURE_CHECK_LDFLAGS-libunwind-arm = -lunwind -lunwind-arm +-FEATURE_CHECK_LDFLAGS-libunwind-aarch64 = -lunwind -lunwind-aarch64 
+-FEATURE_CHECK_LDFLAGS-libunwind-x86 = -lunwind -llzma -lunwind-x86 +-FEATURE_CHECK_LDFLAGS-libunwind-x86_64 = -lunwind -llzma -lunwind-x86_64 ++FEATURE_CHECK_LDFLAGS-libunwind-arm += -lunwind -lunwind-arm ++FEATURE_CHECK_LDFLAGS-libunwind-aarch64 += -lunwind -lunwind-aarch64 ++FEATURE_CHECK_LDFLAGS-libunwind-x86 += -lunwind -llzma -lunwind-x86 ++FEATURE_CHECK_LDFLAGS-libunwind-x86_64 += -lunwind -llzma -lunwind-x86_64 + + FEATURE_CHECK_LDFLAGS-libcrypto = -lcrypto + +diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c +index 767fe1bfd922c..8c3addc2e9e1e 100644 +--- a/tools/perf/util/machine.c ++++ b/tools/perf/util/machine.c +@@ -2020,6 +2020,7 @@ static int add_callchain_ip(struct thread *thread, + + al.filtered = 0; + al.sym = NULL; ++ al.srcline = NULL; + if (!cpumode) { + thread__find_cpumode_addr_location(thread, ip, &al); + } else { +diff --git a/tools/testing/selftests/bpf/progs/xdp_tx.c b/tools/testing/selftests/bpf/progs/xdp_tx.c +index 57912e7c94b0a..9ed477776eca8 100644 +--- a/tools/testing/selftests/bpf/progs/xdp_tx.c ++++ b/tools/testing/selftests/bpf/progs/xdp_tx.c +@@ -3,7 +3,7 @@ + #include + #include "bpf_helpers.h" + +-SEC("tx") ++SEC("xdp") + int xdp_tx(struct xdp_md *xdp) + { + return XDP_TX; +diff --git a/tools/testing/selftests/bpf/test_maps.c b/tools/testing/selftests/bpf/test_maps.c +index 1c4219ceced2f..45c7a55f0b8b5 100644 +--- a/tools/testing/selftests/bpf/test_maps.c ++++ b/tools/testing/selftests/bpf/test_maps.c +@@ -972,7 +972,7 @@ static void test_sockmap(unsigned int tasks, void *data) + + FD_ZERO(&w); + FD_SET(sfd[3], &w); +- to.tv_sec = 1; ++ to.tv_sec = 30; + to.tv_usec = 0; + s = select(sfd[3] + 1, &w, NULL, NULL, &to); + if (s == -1) { +diff --git a/tools/testing/selftests/bpf/test_xdp_veth.sh b/tools/testing/selftests/bpf/test_xdp_veth.sh +index ba8ffcdaac302..995278e684b6e 100755 +--- a/tools/testing/selftests/bpf/test_xdp_veth.sh ++++ b/tools/testing/selftests/bpf/test_xdp_veth.sh +@@ -108,7 +108,7 @@ ip link 
set dev veth2 xdp pinned $BPF_DIR/progs/redirect_map_1 + ip link set dev veth3 xdp pinned $BPF_DIR/progs/redirect_map_2 + + ip -n ns1 link set dev veth11 xdp obj xdp_dummy.o sec xdp_dummy +-ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec tx ++ip -n ns2 link set dev veth22 xdp obj xdp_tx.o sec xdp + ip -n ns3 link set dev veth33 xdp obj xdp_dummy.o sec xdp_dummy + + trap cleanup EXIT +diff --git a/tools/thermal/tmon/Makefile b/tools/thermal/tmon/Makefile +index 59e417ec3e134..25d7f8f37cfd6 100644 +--- a/tools/thermal/tmon/Makefile ++++ b/tools/thermal/tmon/Makefile +@@ -10,7 +10,7 @@ override CFLAGS+= $(call cc-option,-O3,-O1) ${WARNFLAGS} + # Add "-fstack-protector" only if toolchain supports it. + override CFLAGS+= $(call cc-option,-fstack-protector-strong) + CC?= $(CROSS_COMPILE)gcc +-PKG_CONFIG?= pkg-config ++PKG_CONFIG?= $(CROSS_COMPILE)pkg-config + + override CFLAGS+=-D VERSION=\"$(VERSION)\" + LDFLAGS+= +diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c +index 4af85605730e4..f7150fbeeb55e 100644 +--- a/virt/kvm/arm/arm.c ++++ b/virt/kvm/arm/arm.c +@@ -1141,6 +1141,14 @@ long kvm_arch_vcpu_ioctl(struct file *filp, + if (copy_from_user(®, argp, sizeof(reg))) + break; + ++ /* ++ * We could owe a reset due to PSCI. Handle the pending reset ++ * here to ensure userspace register accesses are ordered after ++ * the reset. ++ */ ++ if (kvm_check_request(KVM_REQ_VCPU_RESET, vcpu)) ++ kvm_reset_vcpu(vcpu); ++ + if (ioctl == KVM_SET_ONE_REG) + r = kvm_arm_set_reg(vcpu, ®); + else