From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id BB159138334
	for ; Tue, 17 Dec 2019 21:57:45 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id E5E02E087F;
	Tue, 17 Dec 2019 21:57:44 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id B0A6DE087F
	for ; Tue, 17 Dec 2019 21:57:44 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 8C24334D1CA
	for ; Tue, 17 Dec 2019 21:57:43 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 3DA59720
	for ; Tue, 17 Dec 2019 21:57:42 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1576619850.1053a4a9ab2df4c5172439cef22e68b2dc8efe88.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.4 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1003_linux-5.4.4.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 1053a4a9ab2df4c5172439cef22e68b2dc8efe88
X-VCS-Branch: 5.4
Date: Tue, 17 Dec 2019 21:57:42 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 3172ecb4-9079-4396-b1c7-43e3e87735c7
X-Archives-Hash: 1214387f404e990b14a788f1ee27da1a

commit:     1053a4a9ab2df4c5172439cef22e68b2dc8efe88
Author:     Mike Pagano gentoo org>
AuthorDate: Tue Dec 17 21:57:30 2019 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Tue Dec 17 21:57:30 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1053a4a9

Linux patch 5.4.4

Signed-off-by: Mike Pagano gentoo.org>

 0000_README            |    4 +
 1003_linux-5.4.4.patch | 6408 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6412 insertions(+)

diff --git a/0000_README b/0000_README
index b9a78f5..d56f066 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-5.4.3.patch
 From: http://www.kernel.org
 Desc: Linux 5.4.3
 
+Patch: 1003_linux-5.4.4.patch
+From: http://www.kernel.org
+Desc: Linux 5.4.4
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-5.4.4.patch b/1003_linux-5.4.4.patch new file mode 100644 index 0000000..2f51fee --- /dev/null +++ b/1003_linux-5.4.4.patch @@ -0,0 +1,6408 @@ +diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt +index 9983ac73b66d..f5a551e4332d 100644 +--- a/Documentation/admin-guide/kernel-parameters.txt ++++ b/Documentation/admin-guide/kernel-parameters.txt +@@ -5101,13 +5101,13 @@ + Flags is a set of characters, each corresponding + to a common usb-storage quirk flag as follows: + a = SANE_SENSE (collect more than 18 bytes +- of sense data); ++ of sense data, not on uas); + b = BAD_SENSE (don't collect more than 18 +- bytes of sense data); ++ bytes of sense data, not on uas); + c = FIX_CAPACITY (decrease the reported + device capacity by one sector); + d = NO_READ_DISC_INFO (don't use +- READ_DISC_INFO command); ++ READ_DISC_INFO command, not on uas); + e = NO_READ_CAPACITY_16 (don't use + READ_CAPACITY_16 command); + f = NO_REPORT_OPCODES (don't use report opcodes +@@ -5122,17 +5122,18 @@ + j = NO_REPORT_LUNS (don't use report luns + command, uas only); + l = NOT_LOCKABLE (don't try to lock and +- unlock ejectable media); ++ unlock ejectable media, not on uas); + m = MAX_SECTORS_64 (don't transfer more +- than 64 sectors = 32 KB at a time); ++ than 64 sectors = 32 KB at a time, ++ not on uas); + n = INITIAL_READ10 (force a retry of the +- initial READ(10) command); ++ initial READ(10) command, not on uas); + o = CAPACITY_OK (accept the capacity +- reported by the device); ++ reported by the device, not on uas); + p = WRITE_CACHE (the device cache is ON +- by default); ++ by default, not on uas); + r = IGNORE_RESIDUE (the device reports +- bogus residue values); ++ bogus residue values, not on uas); + s = SINGLE_LUN (the device has only one + Logical Unit); + t = NO_ATA_1X (don't allow ATA(12) and ATA(16) +@@ -5141,7 +5142,8 @@ + w = NO_WP_DETECT (don't test whether the + medium is write-protected). 
+ y = ALWAYS_SYNC (issue a SYNCHRONIZE_CACHE +- even if the device claims no cache) ++ even if the device claims no cache, ++ not on uas) + Example: quirks=0419:aaf5:rl,0421:0433:rc + + user_debug= [KNL,ARM] +diff --git a/Makefile b/Makefile +index 07998b60d56c..144daf02c78a 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 4 +-SUBLEVEL = 3 ++SUBLEVEL = 4 + EXTRAVERSION = + NAME = Kleptomaniac Octopus + +diff --git a/arch/arm/boot/dts/omap3-pandora-common.dtsi b/arch/arm/boot/dts/omap3-pandora-common.dtsi +index ec5891718ae6..150d5be42d27 100644 +--- a/arch/arm/boot/dts/omap3-pandora-common.dtsi ++++ b/arch/arm/boot/dts/omap3-pandora-common.dtsi +@@ -226,6 +226,17 @@ + gpio = <&gpio6 4 GPIO_ACTIVE_HIGH>; /* GPIO_164 */ + }; + ++ /* wl1251 wifi+bt module */ ++ wlan_en: fixed-regulator-wg7210_en { ++ compatible = "regulator-fixed"; ++ regulator-name = "vwlan"; ++ regulator-min-microvolt = <1800000>; ++ regulator-max-microvolt = <1800000>; ++ startup-delay-us = <50000>; ++ enable-active-high; ++ gpio = <&gpio1 23 GPIO_ACTIVE_HIGH>; ++ }; ++ + /* wg7210 (wifi+bt module) 32k clock buffer */ + wg7210_32k: fixed-regulator-wg7210_32k { + compatible = "regulator-fixed"; +@@ -522,9 +533,30 @@ + /*wp-gpios = <&gpio4 31 GPIO_ACTIVE_HIGH>;*/ /* GPIO_127 */ + }; + +-/* mmc3 is probed using pdata-quirks to pass wl1251 card data */ + &mmc3 { +- status = "disabled"; ++ vmmc-supply = <&wlan_en>; ++ ++ bus-width = <4>; ++ non-removable; ++ ti,non-removable; ++ cap-power-off-card; ++ ++ pinctrl-names = "default"; ++ pinctrl-0 = <&mmc3_pins>; ++ ++ #address-cells = <1>; ++ #size-cells = <0>; ++ ++ wlan: wifi@1 { ++ compatible = "ti,wl1251"; ++ ++ reg = <1>; ++ ++ interrupt-parent = <&gpio1>; ++ interrupts = <21 IRQ_TYPE_LEVEL_HIGH>; /* GPIO_21 */ ++ ++ ti,wl1251-has-eeprom; ++ }; + }; + + /* bluetooth*/ +diff --git a/arch/arm/boot/dts/omap3-tao3530.dtsi b/arch/arm/boot/dts/omap3-tao3530.dtsi +index a7a04d78deeb..f24e2326cfa7 100644 +--- a/arch/arm/boot/dts/omap3-tao3530.dtsi ++++ b/arch/arm/boot/dts/omap3-tao3530.dtsi +@@ -222,7 +222,7 @@ + pinctrl-0 = <&mmc1_pins>; + vmmc-supply = <&vmmc1>; + vqmmc-supply = <&vsim>; +- cd-gpios = <&twl_gpio 0 GPIO_ACTIVE_HIGH>; ++ cd-gpios = <&twl_gpio 0 GPIO_ACTIVE_LOW>; + bus-width = <8>; + }; + +diff --git a/arch/arm/mach-omap2/pdata-quirks.c b/arch/arm/mach-omap2/pdata-quirks.c +index 2efd18e8824c..1b7cf81ff035 100644 +--- a/arch/arm/mach-omap2/pdata-quirks.c ++++ b/arch/arm/mach-omap2/pdata-quirks.c +@@ -7,7 +7,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -311,118 +310,15 @@ static void __init omap3_logicpd_torpedo_init(void) + } + + /* omap3pandora legacy devices */ +-#define PANDORA_WIFI_IRQ_GPIO 21 +-#define PANDORA_WIFI_NRESET_GPIO 23 + + static struct platform_device pandora_backlight = { + .name = "pandora-backlight", + .id = -1, + }; + +-static struct regulator_consumer_supply pandora_vmmc3_supply[] = { +- REGULATOR_SUPPLY("vmmc", "omap_hsmmc.2"), +-}; +- +-static struct regulator_init_data pandora_vmmc3 = { +- .constraints = { +- .valid_ops_mask = REGULATOR_CHANGE_STATUS, +- }, +- .num_consumer_supplies = ARRAY_SIZE(pandora_vmmc3_supply), +- .consumer_supplies = pandora_vmmc3_supply, +-}; +- +-static struct fixed_voltage_config pandora_vwlan = { +- .supply_name = "vwlan", +- .microvolts = 1800000, /* 1.8V */ +- .startup_delay = 50000, /* 50ms */ +- .init_data = &pandora_vmmc3, +-}; +- +-static struct platform_device pandora_vwlan_device = { +- .name = 
"reg-fixed-voltage", +- .id = 1, +- .dev = { +- .platform_data = &pandora_vwlan, +- }, +-}; +- +-static struct gpiod_lookup_table pandora_vwlan_gpiod_table = { +- .dev_id = "reg-fixed-voltage.1", +- .table = { +- /* +- * As this is a low GPIO number it should be at the first +- * GPIO bank. +- */ +- GPIO_LOOKUP("gpio-0-31", PANDORA_WIFI_NRESET_GPIO, +- NULL, GPIO_ACTIVE_HIGH), +- { }, +- }, +-}; +- +-static void pandora_wl1251_init_card(struct mmc_card *card) +-{ +- /* +- * We have TI wl1251 attached to MMC3. Pass this information to +- * SDIO core because it can't be probed by normal methods. +- */ +- if (card->type == MMC_TYPE_SDIO || card->type == MMC_TYPE_SD_COMBO) { +- card->quirks |= MMC_QUIRK_NONSTD_SDIO; +- card->cccr.wide_bus = 1; +- card->cis.vendor = 0x104c; +- card->cis.device = 0x9066; +- card->cis.blksize = 512; +- card->cis.max_dtr = 24000000; +- card->ocr = 0x80; +- } +-} +- +-static struct omap2_hsmmc_info pandora_mmc3[] = { +- { +- .mmc = 3, +- .caps = MMC_CAP_4_BIT_DATA | MMC_CAP_POWER_OFF_CARD, +- .init_card = pandora_wl1251_init_card, +- }, +- {} /* Terminator */ +-}; +- +-static void __init pandora_wl1251_init(void) +-{ +- struct wl1251_platform_data pandora_wl1251_pdata; +- int ret; +- +- memset(&pandora_wl1251_pdata, 0, sizeof(pandora_wl1251_pdata)); +- +- pandora_wl1251_pdata.power_gpio = -1; +- +- ret = gpio_request_one(PANDORA_WIFI_IRQ_GPIO, GPIOF_IN, "wl1251 irq"); +- if (ret < 0) +- goto fail; +- +- pandora_wl1251_pdata.irq = gpio_to_irq(PANDORA_WIFI_IRQ_GPIO); +- if (pandora_wl1251_pdata.irq < 0) +- goto fail_irq; +- +- pandora_wl1251_pdata.use_eeprom = true; +- ret = wl1251_set_platform_data(&pandora_wl1251_pdata); +- if (ret < 0) +- goto fail_irq; +- +- return; +- +-fail_irq: +- gpio_free(PANDORA_WIFI_IRQ_GPIO); +-fail: +- pr_err("wl1251 board initialisation failed\n"); +-} +- + static void __init omap3_pandora_legacy_init(void) + { + platform_device_register(&pandora_backlight); +- gpiod_add_lookup_table(&pandora_vwlan_gpiod_table); +- platform_device_register(&pandora_vwlan_device); +- omap_hsmmc_init(pandora_mmc3); +- omap_hsmmc_late_init(pandora_mmc3); +- pandora_wl1251_init(); + } + #endif /* CONFIG_ARCH_OMAP3 */ + +diff --git a/arch/powerpc/include/asm/sections.h b/arch/powerpc/include/asm/sections.h +index 5a9b6eb651b6..d19871763ed4 100644 +--- a/arch/powerpc/include/asm/sections.h ++++ b/arch/powerpc/include/asm/sections.h +@@ -5,8 +5,22 @@ + + #include + #include ++ ++#define arch_is_kernel_initmem_freed arch_is_kernel_initmem_freed ++ + #include + ++extern bool init_mem_is_free; ++ ++static inline int arch_is_kernel_initmem_freed(unsigned long addr) ++{ ++ if (!init_mem_is_free) ++ return 0; ++ ++ return addr >= (unsigned long)__init_begin && ++ addr < (unsigned long)__init_end; ++} ++ + extern char __head_end[]; + + #ifdef __powerpc64__ +diff --git a/arch/powerpc/include/asm/vdso_datapage.h b/arch/powerpc/include/asm/vdso_datapage.h +index c61d59ed3b45..2ccb938d8544 100644 +--- a/arch/powerpc/include/asm/vdso_datapage.h ++++ b/arch/powerpc/include/asm/vdso_datapage.h +@@ -82,6 +82,7 @@ struct vdso_data { + __s32 wtom_clock_nsec; /* Wall to monotonic clock nsec */ + __s64 wtom_clock_sec; /* Wall to monotonic clock sec */ + struct timespec stamp_xtime; /* xtime as at tb_orig_stamp */ ++ __u32 hrtimer_res; /* hrtimer resolution */ + __u32 syscall_map_64[SYSCALL_MAP_SIZE]; /* map of syscalls */ + __u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */ + }; +@@ -103,6 +104,7 @@ struct vdso_data { + __s32 wtom_clock_nsec; + struct timespec 
stamp_xtime; /* xtime as at tb_orig_stamp */ + __u32 stamp_sec_fraction; /* fractional seconds of stamp_xtime */ ++ __u32 hrtimer_res; /* hrtimer resolution */ + __u32 syscall_map_32[SYSCALL_MAP_SIZE]; /* map of syscalls */ + __u32 dcache_block_size; /* L1 d-cache block size */ + __u32 icache_block_size; /* L1 i-cache block size */ +diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile +index a7ca8fe62368..3c02445cf086 100644 +--- a/arch/powerpc/kernel/Makefile ++++ b/arch/powerpc/kernel/Makefile +@@ -5,8 +5,8 @@ + + CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"' + +-# Disable clang warning for using setjmp without setjmp.h header +-CFLAGS_crash.o += $(call cc-disable-warning, builtin-requires-header) ++# Avoid clang warnings around longjmp/setjmp declarations ++CFLAGS_crash.o += -ffreestanding + + ifdef CONFIG_PPC64 + CFLAGS_prom_init.o += $(NO_MINIMAL_TOC) +diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c +index 484f54dab247..5c0a1e17219b 100644 +--- a/arch/powerpc/kernel/asm-offsets.c ++++ b/arch/powerpc/kernel/asm-offsets.c +@@ -387,6 +387,7 @@ int main(void) + OFFSET(WTOM_CLOCK_NSEC, vdso_data, wtom_clock_nsec); + OFFSET(STAMP_XTIME, vdso_data, stamp_xtime); + OFFSET(STAMP_SEC_FRAC, vdso_data, stamp_sec_fraction); ++ OFFSET(CLOCK_HRTIMER_RES, vdso_data, hrtimer_res); + OFFSET(CFG_ICACHE_BLOCKSZ, vdso_data, icache_block_size); + OFFSET(CFG_DCACHE_BLOCKSZ, vdso_data, dcache_block_size); + OFFSET(CFG_ICACHE_LOGBLOCKSZ, vdso_data, icache_log_block_size); +@@ -417,7 +418,6 @@ int main(void) + DEFINE(CLOCK_REALTIME_COARSE, CLOCK_REALTIME_COARSE); + DEFINE(CLOCK_MONOTONIC_COARSE, CLOCK_MONOTONIC_COARSE); + DEFINE(NSEC_PER_SEC, NSEC_PER_SEC); +- DEFINE(CLOCK_REALTIME_RES, MONOTONIC_RES_NSEC); + + #ifdef CONFIG_BUG + DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry)); +diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S +index b55a7b4cb543..9bc0aa9aeb65 100644 +--- a/arch/powerpc/kernel/misc_64.S ++++ b/arch/powerpc/kernel/misc_64.S +@@ -82,7 +82,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE) + subf r8,r6,r4 /* compute length */ + add r8,r8,r5 /* ensure we get enough */ + lwz r9,DCACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of cache block size */ +- srw. r8,r8,r9 /* compute line count */ ++ srd. r8,r8,r9 /* compute line count */ + beqlr /* nothing to do? */ + mtctr r8 + 1: dcbst 0,r6 +@@ -98,7 +98,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_COHERENT_ICACHE) + subf r8,r6,r4 /* compute length */ + add r8,r8,r5 + lwz r9,ICACHEL1LOGBLOCKSIZE(r10) /* Get log-2 of Icache block size */ +- srw. r8,r8,r9 /* compute line count */ ++ srd. r8,r8,r9 /* compute line count */ + beqlr /* nothing to do? 
*/ + mtctr r8 + 2: icbi 0,r6 +diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c +index 694522308cd5..619447b1b797 100644 +--- a/arch/powerpc/kernel/time.c ++++ b/arch/powerpc/kernel/time.c +@@ -959,6 +959,7 @@ void update_vsyscall(struct timekeeper *tk) + vdso_data->wtom_clock_nsec = tk->wall_to_monotonic.tv_nsec; + vdso_data->stamp_xtime = xt; + vdso_data->stamp_sec_fraction = frac_sec; ++ vdso_data->hrtimer_res = hrtimer_resolution; + smp_wmb(); + ++(vdso_data->tb_update_count); + } +diff --git a/arch/powerpc/kernel/vdso32/gettimeofday.S b/arch/powerpc/kernel/vdso32/gettimeofday.S +index becd9f8767ed..a967e795b96d 100644 +--- a/arch/powerpc/kernel/vdso32/gettimeofday.S ++++ b/arch/powerpc/kernel/vdso32/gettimeofday.S +@@ -156,12 +156,15 @@ V_FUNCTION_BEGIN(__kernel_clock_getres) + cror cr0*4+eq,cr0*4+eq,cr1*4+eq + bne cr0,99f + ++ mflr r12 ++ .cfi_register lr,r12 ++ bl __get_datapage@local /* get data page */ ++ lwz r5, CLOCK_HRTIMER_RES(r3) ++ mtlr r12 + li r3,0 + cmpli cr0,r4,0 + crclr cr0*4+so + beqlr +- lis r5,CLOCK_REALTIME_RES@h +- ori r5,r5,CLOCK_REALTIME_RES@l + stw r3,TSPC32_TV_SEC(r4) + stw r5,TSPC32_TV_NSEC(r4) + blr +diff --git a/arch/powerpc/kernel/vdso64/cacheflush.S b/arch/powerpc/kernel/vdso64/cacheflush.S +index 3f92561a64c4..526f5ba2593e 100644 +--- a/arch/powerpc/kernel/vdso64/cacheflush.S ++++ b/arch/powerpc/kernel/vdso64/cacheflush.S +@@ -35,7 +35,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache) + subf r8,r6,r4 /* compute length */ + add r8,r8,r5 /* ensure we get enough */ + lwz r9,CFG_DCACHE_LOGBLOCKSZ(r10) +- srw. r8,r8,r9 /* compute line count */ ++ srd. r8,r8,r9 /* compute line count */ + crclr cr0*4+so + beqlr /* nothing to do? */ + mtctr r8 +@@ -52,7 +52,7 @@ V_FUNCTION_BEGIN(__kernel_sync_dicache) + subf r8,r6,r4 /* compute length */ + add r8,r8,r5 + lwz r9,CFG_ICACHE_LOGBLOCKSZ(r10) +- srw. r8,r8,r9 /* compute line count */ ++ srd. r8,r8,r9 /* compute line count */ + crclr cr0*4+so + beqlr /* nothing to do? */ + mtctr r8 +diff --git a/arch/powerpc/kernel/vdso64/gettimeofday.S b/arch/powerpc/kernel/vdso64/gettimeofday.S +index 07bfe33fe874..81757f06bbd7 100644 +--- a/arch/powerpc/kernel/vdso64/gettimeofday.S ++++ b/arch/powerpc/kernel/vdso64/gettimeofday.S +@@ -186,12 +186,15 @@ V_FUNCTION_BEGIN(__kernel_clock_getres) + cror cr0*4+eq,cr0*4+eq,cr1*4+eq + bne cr0,99f + ++ mflr r12 ++ .cfi_register lr,r12 ++ bl V_LOCAL_FUNC(__get_datapage) ++ lwz r5, CLOCK_HRTIMER_RES(r3) ++ mtlr r12 + li r3,0 + cmpldi cr0,r4,0 + crclr cr0*4+so + beqlr +- lis r5,CLOCK_REALTIME_RES@h +- ori r5,r5,CLOCK_REALTIME_RES@l + std r3,TSPC64_TV_SEC(r4) + std r5,TSPC64_TV_NSEC(r4) + blr +diff --git a/arch/powerpc/platforms/powernv/opal-imc.c b/arch/powerpc/platforms/powernv/opal-imc.c +index e04b20625cb9..7ccc5c85c74e 100644 +--- a/arch/powerpc/platforms/powernv/opal-imc.c ++++ b/arch/powerpc/platforms/powernv/opal-imc.c +@@ -285,7 +285,14 @@ static int opal_imc_counters_probe(struct platform_device *pdev) + domain = IMC_DOMAIN_THREAD; + break; + case IMC_TYPE_TRACE: +- domain = IMC_DOMAIN_TRACE; ++ /* ++ * FIXME. Using trace_imc events to monitor application ++ * or KVM thread performance can cause a checkstop ++ * (system crash). ++ * Disable it for now. 
++ */ ++ pr_info_once("IMC: disabling trace_imc PMU\n"); ++ domain = -1; + break; + default: + pr_warn("IMC Unknown Device type \n"); +diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c +index df832b09e3e9..f5fadbd2533a 100644 +--- a/arch/powerpc/sysdev/xive/common.c ++++ b/arch/powerpc/sysdev/xive/common.c +@@ -1035,6 +1035,15 @@ static int xive_irq_alloc_data(unsigned int virq, irq_hw_number_t hw) + xd->target = XIVE_INVALID_TARGET; + irq_set_handler_data(virq, xd); + ++ /* ++ * Turn OFF by default the interrupt being mapped. A side ++ * effect of this check is the mapping the ESB page of the ++ * interrupt in the Linux address space. This prevents page ++ * fault issues in the crash handler which masks all ++ * interrupts. ++ */ ++ xive_esb_read(xd, XIVE_ESB_SET_PQ_01); ++ + return 0; + } + +diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c +index 33c10749edec..55dc61cb4867 100644 +--- a/arch/powerpc/sysdev/xive/spapr.c ++++ b/arch/powerpc/sysdev/xive/spapr.c +@@ -392,20 +392,28 @@ static int xive_spapr_populate_irq_data(u32 hw_irq, struct xive_irq_data *data) + data->esb_shift = esb_shift; + data->trig_page = trig_page; + ++ data->hw_irq = hw_irq; ++ + /* + * No chip-id for the sPAPR backend. This has an impact how we + * pick a target. See xive_pick_irq_target(). + */ + data->src_chip = XIVE_INVALID_CHIP_ID; + ++ /* ++ * When the H_INT_ESB flag is set, the H_INT_ESB hcall should ++ * be used for interrupt management. Skip the remapping of the ++ * ESB pages which are not available. ++ */ ++ if (data->flags & XIVE_IRQ_FLAG_H_INT_ESB) ++ return 0; ++ + data->eoi_mmio = ioremap(data->eoi_page, 1u << data->esb_shift); + if (!data->eoi_mmio) { + pr_err("Failed to map EOI page for irq 0x%x\n", hw_irq); + return -ENOMEM; + } + +- data->hw_irq = hw_irq; +- + /* Full function page supports trigger */ + if (flags & XIVE_SRC_TRIGGER) { + data->trig_mmio = data->eoi_mmio; +diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile +index f142570ad860..c3842dbeb1b7 100644 +--- a/arch/powerpc/xmon/Makefile ++++ b/arch/powerpc/xmon/Makefile +@@ -1,8 +1,8 @@ + # SPDX-License-Identifier: GPL-2.0 + # Makefile for xmon + +-# Disable clang warning for using setjmp without setjmp.h header +-subdir-ccflags-y := $(call cc-disable-warning, builtin-requires-header) ++# Avoid clang warnings around longjmp/setjmp declarations ++subdir-ccflags-y := -ffreestanding + + GCOV_PROFILE := n + KCOV_INSTRUMENT := n +diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c +index 5367950510f6..fa0150285d38 100644 +--- a/arch/s390/boot/startup.c ++++ b/arch/s390/boot/startup.c +@@ -170,6 +170,11 @@ void startup_kernel(void) + handle_relocs(__kaslr_offset); + + if (__kaslr_offset) { ++ /* ++ * Save KASLR offset for early dumps, before vmcore_info is set. ++ * Mark as uneven to distinguish from real vmcore_info pointer. 
++ */ ++ S390_lowcore.vmcore_info = __kaslr_offset | 0x1UL; + /* Clear non-relocated kernel */ + if (IS_ENABLED(CONFIG_KERNEL_UNCOMPRESSED)) + memset(img, 0, vmlinux.image_size); +diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h +index 5ff98d76a66c..a9e46b83c536 100644 +--- a/arch/s390/include/asm/pgtable.h ++++ b/arch/s390/include/asm/pgtable.h +@@ -1173,8 +1173,6 @@ void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr); + static inline void set_pte_at(struct mm_struct *mm, unsigned long addr, + pte_t *ptep, pte_t entry) + { +- if (!MACHINE_HAS_NX) +- pte_val(entry) &= ~_PAGE_NOEXEC; + if (pte_present(entry)) + pte_val(entry) &= ~_PAGE_UNUSED; + if (mm_has_pgste(mm)) +@@ -1191,6 +1189,8 @@ static inline pte_t mk_pte_phys(unsigned long physpage, pgprot_t pgprot) + { + pte_t __pte; + pte_val(__pte) = physpage + pgprot_val(pgprot); ++ if (!MACHINE_HAS_NX) ++ pte_val(__pte) &= ~_PAGE_NOEXEC; + return pte_mkyoung(__pte); + } + +diff --git a/arch/s390/kernel/machine_kexec.c b/arch/s390/kernel/machine_kexec.c +index 444a19125a81..d402ced7f7c3 100644 +--- a/arch/s390/kernel/machine_kexec.c ++++ b/arch/s390/kernel/machine_kexec.c +@@ -254,10 +254,10 @@ void arch_crash_save_vmcoreinfo(void) + VMCOREINFO_SYMBOL(lowcore_ptr); + VMCOREINFO_SYMBOL(high_memory); + VMCOREINFO_LENGTH(lowcore_ptr, NR_CPUS); +- mem_assign_absolute(S390_lowcore.vmcore_info, paddr_vmcoreinfo_note()); + vmcoreinfo_append_str("SDMA=%lx\n", __sdma); + vmcoreinfo_append_str("EDMA=%lx\n", __edma); + vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset()); ++ mem_assign_absolute(S390_lowcore.vmcore_info, paddr_vmcoreinfo_note()); + } + + void machine_shutdown(void) +diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c +index 44974654cbd0..d95c85780e07 100644 +--- a/arch/s390/kernel/smp.c ++++ b/arch/s390/kernel/smp.c +@@ -262,10 +262,13 @@ static void pcpu_prepare_secondary(struct pcpu *pcpu, int cpu) + lc->spinlock_index = 0; + lc->percpu_offset = __per_cpu_offset[cpu]; + lc->kernel_asce = S390_lowcore.kernel_asce; ++ lc->user_asce = S390_lowcore.kernel_asce; + lc->machine_flags = S390_lowcore.machine_flags; + lc->user_timer = lc->system_timer = + lc->steal_timer = lc->avg_steal_timer = 0; + __ctl_store(lc->cregs_save_area, 0, 15); ++ lc->cregs_save_area[1] = lc->kernel_asce; ++ lc->cregs_save_area[7] = lc->vdso_asce; + save_access_regs((unsigned int *) lc->access_regs_save_area); + memcpy(lc->stfle_fac_list, S390_lowcore.stfle_fac_list, + sizeof(lc->stfle_fac_list)); +@@ -816,6 +819,8 @@ static void smp_init_secondary(void) + + S390_lowcore.last_update_clock = get_tod_clock(); + restore_access_regs(S390_lowcore.access_regs_save_area); ++ set_cpu_flag(CIF_ASCE_PRIMARY); ++ set_cpu_flag(CIF_ASCE_SECONDARY); + cpu_init(); + preempt_disable(); + init_cpu_timer(); +diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c +index a0d3ce30fa08..a09ab0c3d074 100644 +--- a/block/blk-mq-sysfs.c ++++ b/block/blk-mq-sysfs.c +@@ -166,20 +166,25 @@ static ssize_t blk_mq_hw_sysfs_nr_reserved_tags_show(struct blk_mq_hw_ctx *hctx, + + static ssize_t blk_mq_hw_sysfs_cpus_show(struct blk_mq_hw_ctx *hctx, char *page) + { ++ const size_t size = PAGE_SIZE - 1; + unsigned int i, first = 1; +- ssize_t ret = 0; ++ int ret = 0, pos = 0; + + for_each_cpu(i, hctx->cpumask) { + if (first) +- ret += sprintf(ret + page, "%u", i); ++ ret = snprintf(pos + page, size - pos, "%u", i); + else +- ret += sprintf(ret + page, ", %u", i); ++ ret = snprintf(pos + page, size - pos, ", %u", i); ++ ++ if (ret >= size 
- pos) ++ break; + + first = 0; ++ pos += ret; + } + +- ret += sprintf(ret + page, "\n"); +- return ret; ++ ret = snprintf(pos + page, size + 1 - pos, "\n"); ++ return pos + ret; + } + + static struct blk_mq_hw_ctx_sysfs_entry blk_mq_hw_sysfs_nr_tags = { +diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c +index 60bbc5090abe..751ed38f2a10 100644 +--- a/drivers/acpi/acpi_lpss.c ++++ b/drivers/acpi/acpi_lpss.c +@@ -10,6 +10,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -463,6 +464,18 @@ struct lpss_device_links { + const char *consumer_hid; + const char *consumer_uid; + u32 flags; ++ const struct dmi_system_id *dep_missing_ids; ++}; ++ ++/* Please keep this list sorted alphabetically by vendor and model */ ++static const struct dmi_system_id i2c1_dep_missing_dmi_ids[] = { ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "T200TA"), ++ }, ++ }, ++ {} + }; + + /* +@@ -473,9 +486,17 @@ struct lpss_device_links { + * the supplier is not enumerated until after the consumer is probed. + */ + static const struct lpss_device_links lpss_device_links[] = { ++ /* CHT External sdcard slot controller depends on PMIC I2C ctrl */ + {"808622C1", "7", "80860F14", "3", DL_FLAG_PM_RUNTIME}, ++ /* CHT iGPU depends on PMIC I2C controller */ + {"808622C1", "7", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME}, ++ /* BYT iGPU depends on the Embedded Controller I2C controller (UID 1) */ ++ {"80860F41", "1", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME, ++ i2c1_dep_missing_dmi_ids}, ++ /* BYT CR iGPU depends on PMIC I2C controller (UID 5 on CR) */ + {"80860F41", "5", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME}, ++ /* BYT iGPU depends on PMIC I2C controller (UID 7 on non CR) */ ++ {"80860F41", "7", "LNXVIDEO", NULL, DL_FLAG_PM_RUNTIME}, + }; + + static bool hid_uid_match(struct acpi_device *adev, +@@ -570,7 +591,8 @@ static void acpi_lpss_link_consumer(struct device *dev1, + if (!dev2) + return; + +- if (acpi_lpss_dep(ACPI_COMPANION(dev2), ACPI_HANDLE(dev1))) ++ if ((link->dep_missing_ids && dmi_check_system(link->dep_missing_ids)) ++ || acpi_lpss_dep(ACPI_COMPANION(dev2), ACPI_HANDLE(dev1))) + device_link_add(dev2, dev1, link->flags); + + put_device(dev2); +@@ -585,7 +607,8 @@ static void acpi_lpss_link_supplier(struct device *dev1, + if (!dev2) + return; + +- if (acpi_lpss_dep(ACPI_COMPANION(dev1), ACPI_HANDLE(dev2))) ++ if ((link->dep_missing_ids && dmi_check_system(link->dep_missing_ids)) ++ || acpi_lpss_dep(ACPI_COMPANION(dev1), ACPI_HANDLE(dev2))) + device_link_add(dev1, dev2, link->flags); + + put_device(dev2); +diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c +index 48bc96d45bab..54002670cb7a 100644 +--- a/drivers/acpi/bus.c ++++ b/drivers/acpi/bus.c +@@ -153,7 +153,7 @@ int acpi_bus_get_private_data(acpi_handle handle, void **data) + { + acpi_status status; + +- if (!*data) ++ if (!data) + return -EINVAL; + + status = acpi_get_data(handle, acpi_bus_private_data_handler, data); +diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c +index 08bb9f2f2d23..5e4a8860a9c0 100644 +--- a/drivers/acpi/device_pm.c ++++ b/drivers/acpi/device_pm.c +@@ -1314,9 +1314,19 @@ static void acpi_dev_pm_detach(struct device *dev, bool power_off) + */ + int acpi_dev_pm_attach(struct device *dev, bool power_on) + { ++ /* ++ * Skip devices whose ACPI companions match the device IDs below, ++ * because they require special power management handling incompatible ++ * with the generic ACPI PM domain. 
++ */ ++ static const struct acpi_device_id special_pm_ids[] = { ++ {"PNP0C0B", }, /* Generic ACPI fan */ ++ {"INT3404", }, /* Fan */ ++ {} ++ }; + struct acpi_device *adev = ACPI_COMPANION(dev); + +- if (!adev) ++ if (!adev || !acpi_match_device_ids(adev, special_pm_ids)) + return 0; + + /* +diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c +index da1e5c5ce150..bd75caff8322 100644 +--- a/drivers/acpi/ec.c ++++ b/drivers/acpi/ec.c +@@ -525,26 +525,10 @@ static void acpi_ec_enable_event(struct acpi_ec *ec) + } + + #ifdef CONFIG_PM_SLEEP +-static bool acpi_ec_query_flushed(struct acpi_ec *ec) ++static void __acpi_ec_flush_work(void) + { +- bool flushed; +- unsigned long flags; +- +- spin_lock_irqsave(&ec->lock, flags); +- flushed = !ec->nr_pending_queries; +- spin_unlock_irqrestore(&ec->lock, flags); +- return flushed; +-} +- +-static void __acpi_ec_flush_event(struct acpi_ec *ec) +-{ +- /* +- * When ec_freeze_events is true, we need to flush events in +- * the proper position before entering the noirq stage. +- */ +- wait_event(ec->wait, acpi_ec_query_flushed(ec)); +- if (ec_query_wq) +- flush_workqueue(ec_query_wq); ++ flush_scheduled_work(); /* flush ec->work */ ++ flush_workqueue(ec_query_wq); /* flush queries */ + } + + static void acpi_ec_disable_event(struct acpi_ec *ec) +@@ -554,15 +538,21 @@ static void acpi_ec_disable_event(struct acpi_ec *ec) + spin_lock_irqsave(&ec->lock, flags); + __acpi_ec_disable_event(ec); + spin_unlock_irqrestore(&ec->lock, flags); +- __acpi_ec_flush_event(ec); ++ ++ /* ++ * When ec_freeze_events is true, we need to flush events in ++ * the proper position before entering the noirq stage. ++ */ ++ __acpi_ec_flush_work(); + } + + void acpi_ec_flush_work(void) + { +- if (first_ec) +- __acpi_ec_flush_event(first_ec); ++ /* Without ec_query_wq there is nothing to flush. 
*/ ++ if (!ec_query_wq) ++ return; + +- flush_scheduled_work(); ++ __acpi_ec_flush_work(); + } + #endif /* CONFIG_PM_SLEEP */ + +diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c +index a2e844a8e9ed..41168c027a5a 100644 +--- a/drivers/acpi/osl.c ++++ b/drivers/acpi/osl.c +@@ -374,19 +374,21 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size) + } + EXPORT_SYMBOL_GPL(acpi_os_map_memory); + +-static void acpi_os_drop_map_ref(struct acpi_ioremap *map) ++/* Must be called with mutex_lock(&acpi_ioremap_lock) */ ++static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map) + { +- if (!--map->refcount) ++ unsigned long refcount = --map->refcount; ++ ++ if (!refcount) + list_del_rcu(&map->list); ++ return refcount; + } + + static void acpi_os_map_cleanup(struct acpi_ioremap *map) + { +- if (!map->refcount) { +- synchronize_rcu_expedited(); +- acpi_unmap(map->phys, map->virt); +- kfree(map); +- } ++ synchronize_rcu_expedited(); ++ acpi_unmap(map->phys, map->virt); ++ kfree(map); + } + + /** +@@ -406,6 +408,7 @@ static void acpi_os_map_cleanup(struct acpi_ioremap *map) + void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size) + { + struct acpi_ioremap *map; ++ unsigned long refcount; + + if (!acpi_permanent_mmap) { + __acpi_unmap_table(virt, size); +@@ -419,10 +422,11 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size) + WARN(true, PREFIX "%s: bad address %p\n", __func__, virt); + return; + } +- acpi_os_drop_map_ref(map); ++ refcount = acpi_os_drop_map_ref(map); + mutex_unlock(&acpi_ioremap_lock); + +- acpi_os_map_cleanup(map); ++ if (!refcount) ++ acpi_os_map_cleanup(map); + } + EXPORT_SYMBOL_GPL(acpi_os_unmap_iomem); + +@@ -457,6 +461,7 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas) + { + u64 addr; + struct acpi_ioremap *map; ++ unsigned long refcount; + + if (gas->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY) + return; +@@ -472,10 +477,11 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas) + mutex_unlock(&acpi_ioremap_lock); + return; + } +- acpi_os_drop_map_ref(map); ++ refcount = acpi_os_drop_map_ref(map); + mutex_unlock(&acpi_ioremap_lock); + +- acpi_os_map_cleanup(map); ++ if (!refcount) ++ acpi_os_map_cleanup(map); + } + EXPORT_SYMBOL(acpi_os_unmap_generic_address); + +diff --git a/drivers/android/binder.c b/drivers/android/binder.c +index 265d9dd46a5e..976a69420c16 100644 +--- a/drivers/android/binder.c ++++ b/drivers/android/binder.c +@@ -3314,7 +3314,7 @@ static void binder_transaction(struct binder_proc *proc, + binder_size_t parent_offset; + struct binder_fd_array_object *fda = + to_binder_fd_array_object(hdr); +- size_t num_valid = (buffer_offset - off_start_offset) * ++ size_t num_valid = (buffer_offset - off_start_offset) / + sizeof(binder_size_t); + struct binder_buffer_object *parent = + binder_validate_ptr(target_proc, t->buffer, +@@ -3388,7 +3388,7 @@ static void binder_transaction(struct binder_proc *proc, + t->buffer->user_data + sg_buf_offset; + sg_buf_offset += ALIGN(bp->length, sizeof(u64)); + +- num_valid = (buffer_offset - off_start_offset) * ++ num_valid = (buffer_offset - off_start_offset) / + sizeof(binder_size_t); + ret = binder_fixup_parent(t, thread, bp, + off_start_offset, +diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c +index b27f39688b5e..e329f82c0467 100644 +--- a/drivers/char/hw_random/omap-rng.c ++++ b/drivers/char/hw_random/omap-rng.c +@@ -66,6 +66,13 @@ + #define OMAP4_RNG_OUTPUT_SIZE 0x8 + #define EIP76_RNG_OUTPUT_SIZE 
0x10 + ++/* ++ * EIP76 RNG takes approx. 700us to produce 16 bytes of output data ++ * as per testing results. And to account for the lack of udelay()'s ++ * reliability, we keep the timeout as 1000us. ++ */ ++#define RNG_DATA_FILL_TIMEOUT 100 ++ + enum { + RNG_OUTPUT_0_REG = 0, + RNG_OUTPUT_1_REG, +@@ -176,7 +183,7 @@ static int omap_rng_do_read(struct hwrng *rng, void *data, size_t max, + if (max < priv->pdata->data_size) + return 0; + +- for (i = 0; i < 20; i++) { ++ for (i = 0; i < RNG_DATA_FILL_TIMEOUT; i++) { + present = priv->pdata->data_present(priv); + if (present || !wait) + break; +diff --git a/drivers/char/ppdev.c b/drivers/char/ppdev.c +index c86f18aa8985..34bb88fe0b0a 100644 +--- a/drivers/char/ppdev.c ++++ b/drivers/char/ppdev.c +@@ -619,20 +619,27 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + if (copy_from_user(time32, argp, sizeof(time32))) + return -EFAULT; + ++ if ((time32[0] < 0) || (time32[1] < 0)) ++ return -EINVAL; ++ + return pp_set_timeout(pp->pdev, time32[0], time32[1]); + + case PPSETTIME64: + if (copy_from_user(time64, argp, sizeof(time64))) + return -EFAULT; + ++ if ((time64[0] < 0) || (time64[1] < 0)) ++ return -EINVAL; ++ ++ if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall()) ++ time64[1] >>= 32; ++ + return pp_set_timeout(pp->pdev, time64[0], time64[1]); + + case PPGETTIME32: + jiffies_to_timespec64(pp->pdev->timeout, &ts); + time32[0] = ts.tv_sec; + time32[1] = ts.tv_nsec / NSEC_PER_USEC; +- if ((time32[0] < 0) || (time32[1] < 0)) +- return -EINVAL; + + if (copy_to_user(argp, time32, sizeof(time32))) + return -EFAULT; +@@ -643,8 +650,9 @@ static int pp_do_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + jiffies_to_timespec64(pp->pdev->timeout, &ts); + time64[0] = ts.tv_sec; + time64[1] = ts.tv_nsec / NSEC_PER_USEC; +- if ((time64[0] < 0) || (time64[1] < 0)) +- return -EINVAL; ++ ++ if (IS_ENABLED(CONFIG_SPARC64) && !in_compat_syscall()) ++ time64[1] <<= 32; + + if (copy_to_user(argp, time64, sizeof(time64))) + return -EFAULT; +diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c +index ba9acae83bff..5817dfe5c5d2 100644 +--- a/drivers/char/tpm/tpm2-cmd.c ++++ b/drivers/char/tpm/tpm2-cmd.c +@@ -939,6 +939,10 @@ static int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip) + + chip->cc_attrs_tbl = devm_kcalloc(&chip->dev, 4, nr_commands, + GFP_KERNEL); ++ if (!chip->cc_attrs_tbl) { ++ rc = -ENOMEM; ++ goto out; ++ } + + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_GET_CAPABILITY); + if (rc) +diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c +index e4fdde93ed4c..e7df342a317d 100644 +--- a/drivers/char/tpm/tpm_tis.c ++++ b/drivers/char/tpm/tpm_tis.c +@@ -286,7 +286,7 @@ static int tpm_tis_plat_probe(struct platform_device *pdev) + } + tpm_info.res = *res; + +- tpm_info.irq = platform_get_irq(pdev, 0); ++ tpm_info.irq = platform_get_irq_optional(pdev, 0); + if (tpm_info.irq <= 0) { + if (pdev != force_pdev) + tpm_info.irq = -1; +diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c +index 6061850e59c9..56f4bc0d209e 100644 +--- a/drivers/cpufreq/powernv-cpufreq.c ++++ b/drivers/cpufreq/powernv-cpufreq.c +@@ -1041,9 +1041,14 @@ static struct cpufreq_driver powernv_cpufreq_driver = { + + static int init_chip_info(void) + { +- unsigned int chip[256]; ++ unsigned int *chip; + unsigned int cpu, i; + unsigned int prev_chip_id = UINT_MAX; ++ int ret = 0; ++ ++ chip = kcalloc(num_possible_cpus(), sizeof(*chip), GFP_KERNEL); ++ if (!chip) ++ return 
-ENOMEM; + + for_each_possible_cpu(cpu) { + unsigned int id = cpu_to_chip_id(cpu); +@@ -1055,8 +1060,10 @@ static int init_chip_info(void) + } + + chips = kcalloc(nr_chips, sizeof(struct chip), GFP_KERNEL); +- if (!chips) +- return -ENOMEM; ++ if (!chips) { ++ ret = -ENOMEM; ++ goto free_and_return; ++ } + + for (i = 0; i < nr_chips; i++) { + chips[i].id = chip[i]; +@@ -1066,7 +1073,9 @@ static int init_chip_info(void) + per_cpu(chip_info, cpu) = &chips[i]; + } + +- return 0; ++free_and_return: ++ kfree(chip); ++ return ret; + } + + static inline void clean_chip_info(void) +diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c +index 0895b988fa92..29d2d7a21bd7 100644 +--- a/drivers/cpuidle/cpuidle.c ++++ b/drivers/cpuidle/cpuidle.c +@@ -384,6 +384,7 @@ u64 cpuidle_poll_time(struct cpuidle_driver *drv, + continue; + + limit_ns = (u64)drv->states[i].target_residency * NSEC_PER_USEC; ++ break; + } + + dev->poll_limit_ns = limit_ns; +diff --git a/drivers/cpuidle/driver.c b/drivers/cpuidle/driver.c +index 80c1a830d991..9db154224999 100644 +--- a/drivers/cpuidle/driver.c ++++ b/drivers/cpuidle/driver.c +@@ -62,24 +62,23 @@ static inline void __cpuidle_unset_driver(struct cpuidle_driver *drv) + * __cpuidle_set_driver - set per CPU driver variables for the given driver. + * @drv: a valid pointer to a struct cpuidle_driver + * +- * For each CPU in the driver's cpumask, unset the registered driver per CPU +- * to @drv. +- * +- * Returns 0 on success, -EBUSY if the CPUs have driver(s) already. ++ * Returns 0 on success, -EBUSY if any CPU in the cpumask have a driver ++ * different from drv already. + */ + static inline int __cpuidle_set_driver(struct cpuidle_driver *drv) + { + int cpu; + + for_each_cpu(cpu, drv->cpumask) { ++ struct cpuidle_driver *old_drv; + +- if (__cpuidle_get_cpu_driver(cpu)) { +- __cpuidle_unset_driver(drv); ++ old_drv = __cpuidle_get_cpu_driver(cpu); ++ if (old_drv && old_drv != drv) + return -EBUSY; +- } ++ } + ++ for_each_cpu(cpu, drv->cpumask) + per_cpu(cpuidle_drivers, cpu) = drv; +- } + + return 0; + } +diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c +index b5a0e498f798..b9b9156618e6 100644 +--- a/drivers/cpuidle/governors/teo.c ++++ b/drivers/cpuidle/governors/teo.c +@@ -233,7 +233,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + { + struct teo_cpu *cpu_data = per_cpu_ptr(&teo_cpus, dev->cpu); + int latency_req = cpuidle_governor_latency_req(dev->cpu); +- unsigned int duration_us, count; ++ unsigned int duration_us, hits, misses, early_hits; + int max_early_idx, constraint_idx, idx, i; + ktime_t delta_tick; + +@@ -247,7 +247,9 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + cpu_data->sleep_length_ns = tick_nohz_get_sleep_length(&delta_tick); + duration_us = ktime_to_us(cpu_data->sleep_length_ns); + +- count = 0; ++ hits = 0; ++ misses = 0; ++ early_hits = 0; + max_early_idx = -1; + constraint_idx = drv->state_count; + idx = -1; +@@ -258,23 +260,61 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + + if (s->disabled || su->disable) { + /* +- * If the "early hits" metric of a disabled state is +- * greater than the current maximum, it should be taken +- * into account, because it would be a mistake to select +- * a deeper state with lower "early hits" metric. The +- * index cannot be changed to point to it, however, so +- * just increase the max count alone and let the index +- * still point to a shallower idle state. 
++ * Ignore disabled states with target residencies beyond ++ * the anticipated idle duration. + */ +- if (max_early_idx >= 0 && +- count < cpu_data->states[i].early_hits) +- count = cpu_data->states[i].early_hits; ++ if (s->target_residency > duration_us) ++ continue; ++ ++ /* ++ * This state is disabled, so the range of idle duration ++ * values corresponding to it is covered by the current ++ * candidate state, but still the "hits" and "misses" ++ * metrics of the disabled state need to be used to ++ * decide whether or not the state covering the range in ++ * question is good enough. ++ */ ++ hits = cpu_data->states[i].hits; ++ misses = cpu_data->states[i].misses; ++ ++ if (early_hits >= cpu_data->states[i].early_hits || ++ idx < 0) ++ continue; ++ ++ /* ++ * If the current candidate state has been the one with ++ * the maximum "early hits" metric so far, the "early ++ * hits" metric of the disabled state replaces the ++ * current "early hits" count to avoid selecting a ++ * deeper state with lower "early hits" metric. ++ */ ++ if (max_early_idx == idx) { ++ early_hits = cpu_data->states[i].early_hits; ++ continue; ++ } ++ ++ /* ++ * The current candidate state is closer to the disabled ++ * one than the current maximum "early hits" state, so ++ * replace the latter with it, but in case the maximum ++ * "early hits" state index has not been set so far, ++ * check if the current candidate state is not too ++ * shallow for that role. ++ */ ++ if (!(tick_nohz_tick_stopped() && ++ drv->states[idx].target_residency < TICK_USEC)) { ++ early_hits = cpu_data->states[i].early_hits; ++ max_early_idx = idx; ++ } + + continue; + } + +- if (idx < 0) ++ if (idx < 0) { + idx = i; /* first enabled state */ ++ hits = cpu_data->states[i].hits; ++ misses = cpu_data->states[i].misses; ++ } + + if (s->target_residency > duration_us) + break; +@@ -283,11 +323,13 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + constraint_idx = i; + + idx = i; ++ hits = cpu_data->states[i].hits; ++ misses = cpu_data->states[i].misses; + +- if (count < cpu_data->states[i].early_hits && ++ if (early_hits < cpu_data->states[i].early_hits && + !(tick_nohz_tick_stopped() && + drv->states[i].target_residency < TICK_USEC)) { +- count = cpu_data->states[i].early_hits; ++ early_hits = cpu_data->states[i].early_hits; + max_early_idx = i; + } + } +@@ -300,8 +342,7 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + * "early hits" metric, but if that cannot be determined, just use the + * state selected so far. + */ +- if (cpu_data->states[idx].hits <= cpu_data->states[idx].misses && +- max_early_idx >= 0) { ++ if (hits <= misses && max_early_idx >= 0) { + idx = max_early_idx; + duration_us = drv->states[idx].target_residency; + } +@@ -316,10 +357,9 @@ static int teo_select(struct cpuidle_driver *drv, struct cpuidle_device *dev, + if (idx < 0) { + idx = 0; /* No states enabled. Must use 0. */ + } else if (idx > 0) { ++ unsigned int count = 0; + u64 sum = 0; + +- count = 0; +- + /* + * Count and sum the most recent idle duration values less than + * the current expected idle duration value. 
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c +index 446490c9d635..3a1484e7a3ae 100644 +--- a/drivers/devfreq/devfreq.c ++++ b/drivers/devfreq/devfreq.c +@@ -160,6 +160,7 @@ int devfreq_update_status(struct devfreq *devfreq, unsigned long freq) + int lev, prev_lev, ret = 0; + unsigned long cur_time; + ++ lockdep_assert_held(&devfreq->lock); + cur_time = jiffies; + + /* Immediately exit if previous_freq is not initialized yet. */ +@@ -1397,12 +1398,17 @@ static ssize_t trans_stat_show(struct device *dev, + int i, j; + unsigned int max_state = devfreq->profile->max_state; + +- if (!devfreq->stop_polling && +- devfreq_update_status(devfreq, devfreq->previous_freq)) +- return 0; + if (max_state == 0) + return sprintf(buf, "Not Supported.\n"); + ++ mutex_lock(&devfreq->lock); ++ if (!devfreq->stop_polling && ++ devfreq_update_status(devfreq, devfreq->previous_freq)) { ++ mutex_unlock(&devfreq->lock); ++ return 0; ++ } ++ mutex_unlock(&devfreq->lock); ++ + len = sprintf(buf, " From : To\n"); + len += sprintf(buf + len, " :"); + for (i = 0; i < max_state; i++) +diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c +index fbda4b876afd..0be3d1b17f03 100644 +--- a/drivers/edac/altera_edac.c ++++ b/drivers/edac/altera_edac.c +@@ -560,6 +560,7 @@ static const struct regmap_config s10_sdram_regmap_cfg = { + .reg_write = s10_protected_reg_write, + .use_single_read = true, + .use_single_write = true, ++ .fast_io = true, + }; + + /************** ***********/ +diff --git a/drivers/edac/ghes_edac.c b/drivers/edac/ghes_edac.c +index f6f6a688c009..296e714bf553 100644 +--- a/drivers/edac/ghes_edac.c ++++ b/drivers/edac/ghes_edac.c +@@ -566,8 +566,8 @@ int ghes_edac_register(struct ghes *ghes, struct device *dev) + ghes_pvt = pvt; + spin_unlock_irqrestore(&ghes_lock, flags); + +- /* only increment on success */ +- refcount_inc(&ghes_refcount); ++ /* only set on success */ ++ refcount_set(&ghes_refcount, 1); + + unlock: + mutex_unlock(&ghes_reg_mutex); +diff --git a/drivers/firmware/qcom_scm-64.c b/drivers/firmware/qcom_scm-64.c +index 91d5ad7cf58b..25e0f60c759a 100644 +--- a/drivers/firmware/qcom_scm-64.c ++++ b/drivers/firmware/qcom_scm-64.c +@@ -150,7 +150,7 @@ static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id, + kfree(args_virt); + } + +- if (res->a0 < 0) ++ if ((long)res->a0 < 0) + return qcom_scm_remap_error(res->a0); + + return 0; +diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c +index f21bc8a7ee3a..bdf91b75328e 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_drv.c ++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c +@@ -443,7 +443,7 @@ panfrost_postclose(struct drm_device *dev, struct drm_file *file) + { + struct panfrost_file_priv *panfrost_priv = file->driver_priv; + +- panfrost_perfcnt_close(panfrost_priv); ++ panfrost_perfcnt_close(file); + panfrost_job_close(panfrost_priv); + + panfrost_mmu_pgtable_free(panfrost_priv); +diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c b/drivers/gpu/drm/panfrost/panfrost_gem.c +index acb07fe06580..bc3ff22e5e85 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_gem.c ++++ b/drivers/gpu/drm/panfrost/panfrost_gem.c +@@ -41,7 +41,7 @@ static void panfrost_gem_free_object(struct drm_gem_object *obj) + drm_gem_shmem_free_object(obj); + } + +-static int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv) ++int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv) + { + int ret; + size_t size = obj->size; +@@ -80,7 +80,7 @@ static int 
panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_p + return ret; + } + +-static void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv) ++void panfrost_gem_close(struct drm_gem_object *obj, struct drm_file *file_priv) + { + struct panfrost_gem_object *bo = to_panfrost_bo(obj); + struct panfrost_file_priv *priv = file_priv->driver_priv; +diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.h b/drivers/gpu/drm/panfrost/panfrost_gem.h +index 50920819cc16..4b17e7308764 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_gem.h ++++ b/drivers/gpu/drm/panfrost/panfrost_gem.h +@@ -45,6 +45,10 @@ panfrost_gem_create_with_handle(struct drm_file *file_priv, + u32 flags, + uint32_t *handle); + ++int panfrost_gem_open(struct drm_gem_object *obj, struct drm_file *file_priv); ++void panfrost_gem_close(struct drm_gem_object *obj, ++ struct drm_file *file_priv); ++ + void panfrost_gem_shrinker_init(struct drm_device *dev); + void panfrost_gem_shrinker_cleanup(struct drm_device *dev); + +diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +index 2dba192bf198..2c04e858c50a 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c ++++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +@@ -67,9 +67,10 @@ static int panfrost_perfcnt_dump_locked(struct panfrost_device *pfdev) + } + + static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, +- struct panfrost_file_priv *user, ++ struct drm_file *file_priv, + unsigned int counterset) + { ++ struct panfrost_file_priv *user = file_priv->driver_priv; + struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct drm_gem_shmem_object *bo; + u32 cfg; +@@ -91,14 +92,14 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, + perfcnt->bo = to_panfrost_bo(&bo->base); + + /* Map the perfcnt buf in the address space attached to file_priv. 
*/ +- ret = panfrost_mmu_map(perfcnt->bo); ++ ret = panfrost_gem_open(&perfcnt->bo->base.base, file_priv); + if (ret) + goto err_put_bo; + + perfcnt->buf = drm_gem_shmem_vmap(&bo->base); + if (IS_ERR(perfcnt->buf)) { + ret = PTR_ERR(perfcnt->buf); +- goto err_put_bo; ++ goto err_close_bo; + } + + /* +@@ -157,14 +158,17 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, + + err_vunmap: + drm_gem_shmem_vunmap(&perfcnt->bo->base.base, perfcnt->buf); ++err_close_bo: ++ panfrost_gem_close(&perfcnt->bo->base.base, file_priv); + err_put_bo: + drm_gem_object_put_unlocked(&bo->base); + return ret; + } + + static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, +- struct panfrost_file_priv *user) ++ struct drm_file *file_priv) + { ++ struct panfrost_file_priv *user = file_priv->driver_priv; + struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + + if (user != perfcnt->user) +@@ -180,6 +184,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, + perfcnt->user = NULL; + drm_gem_shmem_vunmap(&perfcnt->bo->base.base, perfcnt->buf); + perfcnt->buf = NULL; ++ panfrost_gem_close(&perfcnt->bo->base.base, file_priv); + drm_gem_object_put_unlocked(&perfcnt->bo->base.base); + perfcnt->bo = NULL; + pm_runtime_mark_last_busy(pfdev->dev); +@@ -191,7 +196,6 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, + int panfrost_ioctl_perfcnt_enable(struct drm_device *dev, void *data, + struct drm_file *file_priv) + { +- struct panfrost_file_priv *pfile = file_priv->driver_priv; + struct panfrost_device *pfdev = dev->dev_private; + struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct drm_panfrost_perfcnt_enable *req = data; +@@ -207,10 +211,10 @@ int panfrost_ioctl_perfcnt_enable(struct drm_device *dev, void *data, + + mutex_lock(&perfcnt->lock); + if (req->enable) +- ret = panfrost_perfcnt_enable_locked(pfdev, pfile, ++ ret = panfrost_perfcnt_enable_locked(pfdev, file_priv, + req->counterset); + else +- ret = panfrost_perfcnt_disable_locked(pfdev, pfile); ++ ret = panfrost_perfcnt_disable_locked(pfdev, file_priv); + mutex_unlock(&perfcnt->lock); + + return ret; +@@ -248,15 +252,16 @@ out: + return ret; + } + +-void panfrost_perfcnt_close(struct panfrost_file_priv *pfile) ++void panfrost_perfcnt_close(struct drm_file *file_priv) + { ++ struct panfrost_file_priv *pfile = file_priv->driver_priv; + struct panfrost_device *pfdev = pfile->pfdev; + struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + + pm_runtime_get_sync(pfdev->dev); + mutex_lock(&perfcnt->lock); + if (perfcnt->user == pfile) +- panfrost_perfcnt_disable_locked(pfdev, pfile); ++ panfrost_perfcnt_disable_locked(pfdev, file_priv); + mutex_unlock(&perfcnt->lock); + pm_runtime_mark_last_busy(pfdev->dev); + pm_runtime_put_autosuspend(pfdev->dev); +diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.h b/drivers/gpu/drm/panfrost/panfrost_perfcnt.h +index 13b8fdaa1b43..8bbcf5f5fb33 100644 +--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.h ++++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.h +@@ -9,7 +9,7 @@ void panfrost_perfcnt_sample_done(struct panfrost_device *pfdev); + void panfrost_perfcnt_clean_cache_done(struct panfrost_device *pfdev); + int panfrost_perfcnt_init(struct panfrost_device *pfdev); + void panfrost_perfcnt_fini(struct panfrost_device *pfdev); +-void panfrost_perfcnt_close(struct panfrost_file_priv *pfile); ++void panfrost_perfcnt_close(struct drm_file *file_priv); + int panfrost_ioctl_perfcnt_enable(struct drm_device *dev, void *data, + struct 
drm_file *file_priv); + int panfrost_ioctl_perfcnt_dump(struct drm_device *dev, void *data, +diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c +index 05f7896c3a01..b605889b507a 100644 +--- a/drivers/hwtracing/coresight/coresight-funnel.c ++++ b/drivers/hwtracing/coresight/coresight-funnel.c +@@ -38,12 +38,14 @@ DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel"); + * @atclk: optional clock for the core parts of the funnel. + * @csdev: component vitals needed by the framework. + * @priority: port selection order. ++ * @spinlock: serialize enable/disable operations. + */ + struct funnel_drvdata { + void __iomem *base; + struct clk *atclk; + struct coresight_device *csdev; + unsigned long priority; ++ spinlock_t spinlock; + }; + + static int dynamic_funnel_enable_hw(struct funnel_drvdata *drvdata, int port) +@@ -76,11 +78,21 @@ static int funnel_enable(struct coresight_device *csdev, int inport, + { + int rc = 0; + struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); +- +- if (drvdata->base) +- rc = dynamic_funnel_enable_hw(drvdata, inport); +- ++ unsigned long flags; ++ bool first_enable = false; ++ ++ spin_lock_irqsave(&drvdata->spinlock, flags); ++ if (atomic_read(&csdev->refcnt[inport]) == 0) { ++ if (drvdata->base) ++ rc = dynamic_funnel_enable_hw(drvdata, inport); ++ if (!rc) ++ first_enable = true; ++ } + if (!rc) ++ atomic_inc(&csdev->refcnt[inport]); ++ spin_unlock_irqrestore(&drvdata->spinlock, flags); ++ ++ if (first_enable) + dev_dbg(&csdev->dev, "FUNNEL inport %d enabled\n", inport); + return rc; + } +@@ -107,11 +119,19 @@ static void funnel_disable(struct coresight_device *csdev, int inport, + int outport) + { + struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); ++ unsigned long flags; ++ bool last_disable = false; ++ ++ spin_lock_irqsave(&drvdata->spinlock, flags); ++ if (atomic_dec_return(&csdev->refcnt[inport]) == 0) { ++ if (drvdata->base) ++ dynamic_funnel_disable_hw(drvdata, inport); ++ last_disable = true; ++ } ++ spin_unlock_irqrestore(&drvdata->spinlock, flags); + +- if (drvdata->base) +- dynamic_funnel_disable_hw(drvdata, inport); +- +- dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport); ++ if (last_disable) ++ dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport); + } + + static const struct coresight_ops_link funnel_link_ops = { +diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c +index b29ba640eb25..43304196a1a6 100644 +--- a/drivers/hwtracing/coresight/coresight-replicator.c ++++ b/drivers/hwtracing/coresight/coresight-replicator.c +@@ -31,11 +31,13 @@ DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator"); + * whether this one is programmable or not. + * @atclk: optional clock for the core parts of the replicator. + * @csdev: component vitals needed by the framework ++ * @spinlock: serialize enable/disable operations. 
+ */ + struct replicator_drvdata { + void __iomem *base; + struct clk *atclk; + struct coresight_device *csdev; ++ spinlock_t spinlock; + }; + + static void dynamic_replicator_reset(struct replicator_drvdata *drvdata) +@@ -97,10 +99,22 @@ static int replicator_enable(struct coresight_device *csdev, int inport, + { + int rc = 0; + struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); +- +- if (drvdata->base) +- rc = dynamic_replicator_enable(drvdata, inport, outport); ++ unsigned long flags; ++ bool first_enable = false; ++ ++ spin_lock_irqsave(&drvdata->spinlock, flags); ++ if (atomic_read(&csdev->refcnt[outport]) == 0) { ++ if (drvdata->base) ++ rc = dynamic_replicator_enable(drvdata, inport, ++ outport); ++ if (!rc) ++ first_enable = true; ++ } + if (!rc) ++ atomic_inc(&csdev->refcnt[outport]); ++ spin_unlock_irqrestore(&drvdata->spinlock, flags); ++ ++ if (first_enable) + dev_dbg(&csdev->dev, "REPLICATOR enabled\n"); + return rc; + } +@@ -137,10 +151,19 @@ static void replicator_disable(struct coresight_device *csdev, int inport, + int outport) + { + struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); ++ unsigned long flags; ++ bool last_disable = false; ++ ++ spin_lock_irqsave(&drvdata->spinlock, flags); ++ if (atomic_dec_return(&csdev->refcnt[outport]) == 0) { ++ if (drvdata->base) ++ dynamic_replicator_disable(drvdata, inport, outport); ++ last_disable = true; ++ } ++ spin_unlock_irqrestore(&drvdata->spinlock, flags); + +- if (drvdata->base) +- dynamic_replicator_disable(drvdata, inport, outport); +- dev_dbg(&csdev->dev, "REPLICATOR disabled\n"); ++ if (last_disable) ++ dev_dbg(&csdev->dev, "REPLICATOR disabled\n"); + } + + static const struct coresight_ops_link replicator_link_ops = { +diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c +index 807416b75ecc..d0cc3985b72a 100644 +--- a/drivers/hwtracing/coresight/coresight-tmc-etf.c ++++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c +@@ -334,9 +334,10 @@ static int tmc_disable_etf_sink(struct coresight_device *csdev) + static int tmc_enable_etf_link(struct coresight_device *csdev, + int inport, int outport) + { +- int ret; ++ int ret = 0; + unsigned long flags; + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); ++ bool first_enable = false; + + spin_lock_irqsave(&drvdata->spinlock, flags); + if (drvdata->reading) { +@@ -344,12 +345,18 @@ static int tmc_enable_etf_link(struct coresight_device *csdev, + return -EBUSY; + } + +- ret = tmc_etf_enable_hw(drvdata); ++ if (atomic_read(&csdev->refcnt[0]) == 0) { ++ ret = tmc_etf_enable_hw(drvdata); ++ if (!ret) { ++ drvdata->mode = CS_MODE_SYSFS; ++ first_enable = true; ++ } ++ } + if (!ret) +- drvdata->mode = CS_MODE_SYSFS; ++ atomic_inc(&csdev->refcnt[0]); + spin_unlock_irqrestore(&drvdata->spinlock, flags); + +- if (!ret) ++ if (first_enable) + dev_dbg(&csdev->dev, "TMC-ETF enabled\n"); + return ret; + } +@@ -359,6 +366,7 @@ static void tmc_disable_etf_link(struct coresight_device *csdev, + { + unsigned long flags; + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); ++ bool last_disable = false; + + spin_lock_irqsave(&drvdata->spinlock, flags); + if (drvdata->reading) { +@@ -366,11 +374,15 @@ static void tmc_disable_etf_link(struct coresight_device *csdev, + return; + } + +- tmc_etf_disable_hw(drvdata); +- drvdata->mode = CS_MODE_DISABLED; ++ if (atomic_dec_return(&csdev->refcnt[0]) == 0) { ++ tmc_etf_disable_hw(drvdata); ++ drvdata->mode = 
CS_MODE_DISABLED; ++ last_disable = true; ++ } + spin_unlock_irqrestore(&drvdata->spinlock, flags); + +- dev_dbg(&csdev->dev, "TMC-ETF disabled\n"); ++ if (last_disable) ++ dev_dbg(&csdev->dev, "TMC-ETF disabled\n"); + } + + static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, +diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c +index 6453c67a4d01..0bbce0d29158 100644 +--- a/drivers/hwtracing/coresight/coresight.c ++++ b/drivers/hwtracing/coresight/coresight.c +@@ -253,9 +253,9 @@ static int coresight_enable_link(struct coresight_device *csdev, + struct coresight_device *parent, + struct coresight_device *child) + { +- int ret; ++ int ret = 0; + int link_subtype; +- int refport, inport, outport; ++ int inport, outport; + + if (!parent || !child) + return -EINVAL; +@@ -264,29 +264,17 @@ static int coresight_enable_link(struct coresight_device *csdev, + outport = coresight_find_link_outport(csdev, child); + link_subtype = csdev->subtype.link_subtype; + +- if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG) +- refport = inport; +- else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT) +- refport = outport; +- else +- refport = 0; +- +- if (refport < 0) +- return refport; ++ if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG && inport < 0) ++ return inport; ++ if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT && outport < 0) ++ return outport; + +- if (atomic_inc_return(&csdev->refcnt[refport]) == 1) { +- if (link_ops(csdev)->enable) { +- ret = link_ops(csdev)->enable(csdev, inport, outport); +- if (ret) { +- atomic_dec(&csdev->refcnt[refport]); +- return ret; +- } +- } +- } +- +- csdev->enable = true; ++ if (link_ops(csdev)->enable) ++ ret = link_ops(csdev)->enable(csdev, inport, outport); ++ if (!ret) ++ csdev->enable = true; + +- return 0; ++ return ret; + } + + static void coresight_disable_link(struct coresight_device *csdev, +@@ -295,7 +283,7 @@ static void coresight_disable_link(struct coresight_device *csdev, + { + int i, nr_conns; + int link_subtype; +- int refport, inport, outport; ++ int inport, outport; + + if (!parent || !child) + return; +@@ -305,20 +293,15 @@ static void coresight_disable_link(struct coresight_device *csdev, + link_subtype = csdev->subtype.link_subtype; + + if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG) { +- refport = inport; + nr_conns = csdev->pdata->nr_inport; + } else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT) { +- refport = outport; + nr_conns = csdev->pdata->nr_outport; + } else { +- refport = 0; + nr_conns = 1; + } + +- if (atomic_dec_return(&csdev->refcnt[refport]) == 0) { +- if (link_ops(csdev)->disable) +- link_ops(csdev)->disable(csdev, inport, outport); +- } ++ if (link_ops(csdev)->disable) ++ link_ops(csdev)->disable(csdev, inport, outport); + + for (i = 0; i < nr_conns; i++) + if (atomic_read(&csdev->refcnt[i]) != 0) +diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c +index d5c1821b31c6..0dfd97bbde9e 100644 +--- a/drivers/hwtracing/intel_th/core.c ++++ b/drivers/hwtracing/intel_th/core.c +@@ -649,10 +649,8 @@ intel_th_subdevice_alloc(struct intel_th *th, + } + + err = intel_th_device_add_resources(thdev, res, subdev->nres); +- if (err) { +- put_device(&thdev->dev); ++ if (err) + goto fail_put_device; +- } + + if (subdev->type == INTEL_TH_OUTPUT) { + if (subdev->mknode) +@@ -667,10 +665,8 @@ intel_th_subdevice_alloc(struct intel_th *th, + } + + err = device_add(&thdev->dev); +- if (err) { +- put_device(&thdev->dev); ++ if (err) + goto 
fail_free_res; +- } + + /* need switch driver to be loaded to enumerate the rest */ + if (subdev->type == INTEL_TH_SWITCH && !req) { +diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c +index 03ca5b1bef9f..ebf3e30e989a 100644 +--- a/drivers/hwtracing/intel_th/pci.c ++++ b/drivers/hwtracing/intel_th/pci.c +@@ -209,6 +209,16 @@ static const struct pci_device_id intel_th_pci_id_table[] = { + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5), + .driver_data = (kernel_ulong_t)&intel_th_2x, + }, ++ { ++ /* Ice Lake CPU */ ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x8a29), ++ .driver_data = (kernel_ulong_t)&intel_th_2x, ++ }, ++ { ++ /* Tiger Lake CPU */ ++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x9a33), ++ .driver_data = (kernel_ulong_t)&intel_th_2x, ++ }, + { + /* Tiger Lake PCH */ + PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa0a6), +diff --git a/drivers/hwtracing/stm/policy.c b/drivers/hwtracing/stm/policy.c +index 4b9e44b227d8..4f932a419752 100644 +--- a/drivers/hwtracing/stm/policy.c ++++ b/drivers/hwtracing/stm/policy.c +@@ -345,7 +345,11 @@ void stp_policy_unbind(struct stp_policy *policy) + stm->policy = NULL; + policy->stm = NULL; + ++ /* ++ * Drop the reference on the protocol driver and lose the link. ++ */ + stm_put_protocol(stm->pdrv); ++ stm->pdrv = NULL; + stm_put_device(stm); + } + +diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c +index edc6f1cc90b2..3f03abf100b5 100644 +--- a/drivers/iio/adc/ad7124.c ++++ b/drivers/iio/adc/ad7124.c +@@ -39,6 +39,8 @@ + #define AD7124_STATUS_POR_FLAG_MSK BIT(4) + + /* AD7124_ADC_CONTROL */ ++#define AD7124_ADC_CTRL_REF_EN_MSK BIT(8) ++#define AD7124_ADC_CTRL_REF_EN(x) FIELD_PREP(AD7124_ADC_CTRL_REF_EN_MSK, x) + #define AD7124_ADC_CTRL_PWR_MSK GENMASK(7, 6) + #define AD7124_ADC_CTRL_PWR(x) FIELD_PREP(AD7124_ADC_CTRL_PWR_MSK, x) + #define AD7124_ADC_CTRL_MODE_MSK GENMASK(5, 2) +@@ -424,7 +426,10 @@ static int ad7124_init_channel_vref(struct ad7124_state *st, + break; + case AD7124_INT_REF: + st->channel_config[channel_number].vref_mv = 2500; +- break; ++ st->adc_control &= ~AD7124_ADC_CTRL_REF_EN_MSK; ++ st->adc_control |= AD7124_ADC_CTRL_REF_EN(1); ++ return ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, ++ 2, st->adc_control); + default: + dev_err(&st->sd.spi->dev, "Invalid reference %d\n", refsel); + return -EINVAL; +diff --git a/drivers/iio/adc/ad7606.c b/drivers/iio/adc/ad7606.c +index f5ba94c03a8d..e4683a68522a 100644 +--- a/drivers/iio/adc/ad7606.c ++++ b/drivers/iio/adc/ad7606.c +@@ -85,7 +85,7 @@ err_unlock: + + static int ad7606_read_samples(struct ad7606_state *st) + { +- unsigned int num = st->chip_info->num_channels; ++ unsigned int num = st->chip_info->num_channels - 1; + u16 *data = st->data; + int ret; + +diff --git a/drivers/iio/adc/ad7949.c b/drivers/iio/adc/ad7949.c +index ac0ffff6c5ae..6b51bfcad0d0 100644 +--- a/drivers/iio/adc/ad7949.c ++++ b/drivers/iio/adc/ad7949.c +@@ -57,29 +57,11 @@ struct ad7949_adc_chip { + u32 buffer ____cacheline_aligned; + }; + +-static bool ad7949_spi_cfg_is_read_back(struct ad7949_adc_chip *ad7949_adc) +-{ +- if (!(ad7949_adc->cfg & AD7949_CFG_READ_BACK)) +- return true; +- +- return false; +-} +- +-static int ad7949_spi_bits_per_word(struct ad7949_adc_chip *ad7949_adc) +-{ +- int ret = ad7949_adc->resolution; +- +- if (ad7949_spi_cfg_is_read_back(ad7949_adc)) +- ret += AD7949_CFG_REG_SIZE_BITS; +- +- return ret; +-} +- + static int ad7949_spi_write_cfg(struct ad7949_adc_chip *ad7949_adc, u16 val, + u16 mask) + { + int ret; +- int bits_per_word = ad7949_spi_bits_per_word(ad7949_adc); ++ 
int bits_per_word = ad7949_adc->resolution; + int shift = bits_per_word - AD7949_CFG_REG_SIZE_BITS; + struct spi_message msg; + struct spi_transfer tx[] = { +@@ -107,7 +89,8 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val, + unsigned int channel) + { + int ret; +- int bits_per_word = ad7949_spi_bits_per_word(ad7949_adc); ++ int i; ++ int bits_per_word = ad7949_adc->resolution; + int mask = GENMASK(ad7949_adc->resolution, 0); + struct spi_message msg; + struct spi_transfer tx[] = { +@@ -118,12 +101,23 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val, + }, + }; + +- ret = ad7949_spi_write_cfg(ad7949_adc, +- channel << AD7949_OFFSET_CHANNEL_SEL, +- AD7949_MASK_CHANNEL_SEL); +- if (ret) +- return ret; ++ /* ++ * 1: write CFG for sample N and read old data (sample N-2) ++ * 2: if CFG was not changed since sample N-1 then we'll get good data ++ * at the next xfer, so we bail out now, otherwise we write something ++ * and we read garbage (sample N-1 configuration). ++ */ ++ for (i = 0; i < 2; i++) { ++ ret = ad7949_spi_write_cfg(ad7949_adc, ++ channel << AD7949_OFFSET_CHANNEL_SEL, ++ AD7949_MASK_CHANNEL_SEL); ++ if (ret) ++ return ret; ++ if (channel == ad7949_adc->current_channel) ++ break; ++ } + ++ /* 3: write something and read actual data */ + ad7949_adc->buffer = 0; + spi_message_init_with_transfers(&msg, tx, 1); + ret = spi_sync(ad7949_adc->spi, &msg); +@@ -138,10 +132,7 @@ static int ad7949_spi_read_channel(struct ad7949_adc_chip *ad7949_adc, int *val, + + ad7949_adc->current_channel = channel; + +- if (ad7949_spi_cfg_is_read_back(ad7949_adc)) +- *val = (ad7949_adc->buffer >> AD7949_CFG_REG_SIZE_BITS) & mask; +- else +- *val = ad7949_adc->buffer & mask; ++ *val = ad7949_adc->buffer & mask; + + return 0; + } +diff --git a/drivers/iio/humidity/hdc100x.c b/drivers/iio/humidity/hdc100x.c +index bfe1cdb16846..dcf5a5bdfaa8 100644 +--- a/drivers/iio/humidity/hdc100x.c ++++ b/drivers/iio/humidity/hdc100x.c +@@ -229,7 +229,7 @@ static int hdc100x_read_raw(struct iio_dev *indio_dev, + *val2 = 65536; + return IIO_VAL_FRACTIONAL; + } else { +- *val = 100; ++ *val = 100000; + *val2 = 65536; + return IIO_VAL_FRACTIONAL; + } +diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c +index 8743b2f376e2..7b966a41d623 100644 +--- a/drivers/iio/imu/adis16480.c ++++ b/drivers/iio/imu/adis16480.c +@@ -623,9 +623,13 @@ static int adis16480_read_raw(struct iio_dev *indio_dev, + *val2 = (st->chip_info->temp_scale % 1000) * 1000; + return IIO_VAL_INT_PLUS_MICRO; + case IIO_PRESSURE: +- *val = 0; +- *val2 = 4000; /* 40ubar = 0.004 kPa */ +- return IIO_VAL_INT_PLUS_MICRO; ++ /* ++ * max scale is 1310 mbar ++ * max raw value is 32767 shifted for 32bits ++ */ ++ *val = 131; /* 1310mbar = 131 kPa */ ++ *val2 = 32767 << 16; ++ return IIO_VAL_FRACTIONAL; + default: + return -EINVAL; + } +@@ -786,13 +790,14 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), + /* +- * storing the value in rad/degree and the scale in degree +- * gives us the result in rad and better precession than +- * storing the scale directly in rad. ++ * Typically we do IIO_RAD_TO_DEGREE in the denominator, which ++ * is exactly the same as IIO_DEGREE_TO_RAD in numerator, since ++ * it gives better approximation. However, in this case we ++ * cannot do it since it would not fit in a 32bit variable. 
+ */ +- .gyro_max_val = IIO_RAD_TO_DEGREE(22887), +- .gyro_max_scale = 300, +- .accel_max_val = IIO_M_S_2_TO_G(21973), ++ .gyro_max_val = 22887 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(300), ++ .accel_max_val = IIO_M_S_2_TO_G(21973 << 16), + .accel_max_scale = 18, + .temp_scale = 5650, /* 5.65 milli degree Celsius */ + .int_clk = 2460000, +@@ -802,9 +807,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16480] = { + .channels = adis16480_channels, + .num_channels = ARRAY_SIZE(adis16480_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(22500), +- .gyro_max_scale = 450, +- .accel_max_val = IIO_M_S_2_TO_G(12500), ++ .gyro_max_val = 22500 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450), ++ .accel_max_val = IIO_M_S_2_TO_G(12500 << 16), + .accel_max_scale = 10, + .temp_scale = 5650, /* 5.65 milli degree Celsius */ + .int_clk = 2460000, +@@ -814,9 +819,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16485] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(22500), +- .gyro_max_scale = 450, +- .accel_max_val = IIO_M_S_2_TO_G(20000), ++ .gyro_max_val = 22500 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450), ++ .accel_max_val = IIO_M_S_2_TO_G(20000 << 16), + .accel_max_scale = 5, + .temp_scale = 5650, /* 5.65 milli degree Celsius */ + .int_clk = 2460000, +@@ -826,9 +831,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16488] = { + .channels = adis16480_channels, + .num_channels = ARRAY_SIZE(adis16480_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(22500), +- .gyro_max_scale = 450, +- .accel_max_val = IIO_M_S_2_TO_G(22500), ++ .gyro_max_val = 22500 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450), ++ .accel_max_val = IIO_M_S_2_TO_G(22500 << 16), + .accel_max_scale = 18, + .temp_scale = 5650, /* 5.65 milli degree Celsius */ + .int_clk = 2460000, +@@ -838,9 +843,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16495_1] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(20000), +- .gyro_max_scale = 125, +- .accel_max_val = IIO_M_S_2_TO_G(32000), ++ .gyro_max_val = 20000 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(125), ++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16), + .accel_max_scale = 8, + .temp_scale = 12500, /* 12.5 milli degree Celsius */ + .int_clk = 4250000, +@@ -851,9 +856,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16495_2] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(18000), +- .gyro_max_scale = 450, +- .accel_max_val = IIO_M_S_2_TO_G(32000), ++ .gyro_max_val = 18000 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450), ++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16), + .accel_max_scale = 8, + .temp_scale = 12500, /* 12.5 milli degree Celsius */ + .int_clk = 4250000, +@@ -864,9 +869,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16495_3] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(20000), +- .gyro_max_scale = 2000, +- .accel_max_val = IIO_M_S_2_TO_G(32000), ++ .gyro_max_val = 20000 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(2000), ++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16), + .accel_max_scale = 8, + .temp_scale = 12500, /* 12.5 milli degree Celsius */ + .int_clk = 4250000, +@@ -877,9 +882,9 @@ 
static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16497_1] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(20000), +- .gyro_max_scale = 125, +- .accel_max_val = IIO_M_S_2_TO_G(32000), ++ .gyro_max_val = 20000 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(125), ++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16), + .accel_max_scale = 40, + .temp_scale = 12500, /* 12.5 milli degree Celsius */ + .int_clk = 4250000, +@@ -890,9 +895,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16497_2] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(18000), +- .gyro_max_scale = 450, +- .accel_max_val = IIO_M_S_2_TO_G(32000), ++ .gyro_max_val = 18000 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(450), ++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16), + .accel_max_scale = 40, + .temp_scale = 12500, /* 12.5 milli degree Celsius */ + .int_clk = 4250000, +@@ -903,9 +908,9 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + [ADIS16497_3] = { + .channels = adis16485_channels, + .num_channels = ARRAY_SIZE(adis16485_channels), +- .gyro_max_val = IIO_RAD_TO_DEGREE(20000), +- .gyro_max_scale = 2000, +- .accel_max_val = IIO_M_S_2_TO_G(32000), ++ .gyro_max_val = 20000 << 16, ++ .gyro_max_scale = IIO_DEGREE_TO_RAD(2000), ++ .accel_max_val = IIO_M_S_2_TO_G(32000 << 16), + .accel_max_scale = 40, + .temp_scale = 12500, /* 12.5 milli degree Celsius */ + .int_clk = 4250000, +@@ -919,6 +924,7 @@ static const struct iio_info adis16480_info = { + .read_raw = &adis16480_read_raw, + .write_raw = &adis16480_write_raw, + .update_scan_mode = adis_update_scan_mode, ++ .debugfs_reg_access = adis_debugfs_reg_access, + }; + + static int adis16480_stop_device(struct iio_dev *indio_dev) +diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c +index 868281b8adb0..2261c6c4ac65 100644 +--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c ++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c +@@ -115,6 +115,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6050, + .config = &chip_config_6050, + .fifo_size = 1024, ++ .temp = {INV_MPU6050_TEMP_OFFSET, INV_MPU6050_TEMP_SCALE}, + }, + { + .whoami = INV_MPU6500_WHOAMI_VALUE, +@@ -122,6 +123,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6500, + .config = &chip_config_6050, + .fifo_size = 512, ++ .temp = {INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE}, + }, + { + .whoami = INV_MPU6515_WHOAMI_VALUE, +@@ -129,6 +131,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6500, + .config = &chip_config_6050, + .fifo_size = 512, ++ .temp = {INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE}, + }, + { + .whoami = INV_MPU6000_WHOAMI_VALUE, +@@ -136,6 +139,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6050, + .config = &chip_config_6050, + .fifo_size = 1024, ++ .temp = {INV_MPU6050_TEMP_OFFSET, INV_MPU6050_TEMP_SCALE}, + }, + { + .whoami = INV_MPU9150_WHOAMI_VALUE, +@@ -143,6 +147,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6050, + .config = &chip_config_6050, + .fifo_size = 1024, ++ .temp = {INV_MPU6050_TEMP_OFFSET, INV_MPU6050_TEMP_SCALE}, + }, + { + .whoami = INV_MPU9250_WHOAMI_VALUE, +@@ -150,6 +155,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6500, + .config = &chip_config_6050, + .fifo_size = 512, ++ .temp = 
{INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE}, + }, + { + .whoami = INV_MPU9255_WHOAMI_VALUE, +@@ -157,6 +163,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6500, + .config = &chip_config_6050, + .fifo_size = 512, ++ .temp = {INV_MPU6500_TEMP_OFFSET, INV_MPU6500_TEMP_SCALE}, + }, + { + .whoami = INV_ICM20608_WHOAMI_VALUE, +@@ -164,6 +171,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_6500, + .config = &chip_config_6050, + .fifo_size = 512, ++ .temp = {INV_ICM20608_TEMP_OFFSET, INV_ICM20608_TEMP_SCALE}, + }, + { + .whoami = INV_ICM20602_WHOAMI_VALUE, +@@ -171,6 +179,7 @@ static const struct inv_mpu6050_hw hw_info[] = { + .reg = ®_set_icm20602, + .config = &chip_config_6050, + .fifo_size = 1008, ++ .temp = {INV_ICM20608_TEMP_OFFSET, INV_ICM20608_TEMP_SCALE}, + }, + }; + +@@ -471,12 +480,8 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev, + + return IIO_VAL_INT_PLUS_MICRO; + case IIO_TEMP: +- *val = 0; +- if (st->chip_type == INV_ICM20602) +- *val2 = INV_ICM20602_TEMP_SCALE; +- else +- *val2 = INV_MPU6050_TEMP_SCALE; +- ++ *val = st->hw->temp.scale / 1000000; ++ *val2 = st->hw->temp.scale % 1000000; + return IIO_VAL_INT_PLUS_MICRO; + default: + return -EINVAL; +@@ -484,11 +489,7 @@ inv_mpu6050_read_raw(struct iio_dev *indio_dev, + case IIO_CHAN_INFO_OFFSET: + switch (chan->type) { + case IIO_TEMP: +- if (st->chip_type == INV_ICM20602) +- *val = INV_ICM20602_TEMP_OFFSET; +- else +- *val = INV_MPU6050_TEMP_OFFSET; +- ++ *val = st->hw->temp.offset; + return IIO_VAL_INT; + default: + return -EINVAL; +diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h +index 51235677c534..c32bd0c012b5 100644 +--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h ++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h +@@ -101,6 +101,7 @@ struct inv_mpu6050_chip_config { + * @reg: register map of the chip. + * @config: configuration of the chip. + * @fifo_size: size of the FIFO in bytes. ++ * @temp: offset and scale to apply to raw temperature. 
+ */ + struct inv_mpu6050_hw { + u8 whoami; +@@ -108,6 +109,10 @@ struct inv_mpu6050_hw { + const struct inv_mpu6050_reg_map *reg; + const struct inv_mpu6050_chip_config *config; + size_t fifo_size; ++ struct { ++ int offset; ++ int scale; ++ } temp; + }; + + /* +@@ -218,16 +223,19 @@ struct inv_mpu6050_state { + #define INV_MPU6050_REG_UP_TIME_MIN 5000 + #define INV_MPU6050_REG_UP_TIME_MAX 10000 + +-#define INV_MPU6050_TEMP_OFFSET 12421 +-#define INV_MPU6050_TEMP_SCALE 2941 ++#define INV_MPU6050_TEMP_OFFSET 12420 ++#define INV_MPU6050_TEMP_SCALE 2941176 + #define INV_MPU6050_MAX_GYRO_FS_PARAM 3 + #define INV_MPU6050_MAX_ACCL_FS_PARAM 3 + #define INV_MPU6050_THREE_AXIS 3 + #define INV_MPU6050_GYRO_CONFIG_FSR_SHIFT 3 + #define INV_MPU6050_ACCL_CONFIG_FSR_SHIFT 3 + +-#define INV_ICM20602_TEMP_OFFSET 8170 +-#define INV_ICM20602_TEMP_SCALE 3060 ++#define INV_MPU6500_TEMP_OFFSET 7011 ++#define INV_MPU6500_TEMP_SCALE 2995178 ++ ++#define INV_ICM20608_TEMP_OFFSET 8170 ++#define INV_ICM20608_TEMP_SCALE 3059976 + + /* 6 + 6 round up and plus 8 */ + #define INV_MPU6050_OUTPUT_DATA_SIZE 24 +diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c +index fd5ebe1e1594..28e011b35f21 100644 +--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c ++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c +@@ -985,8 +985,7 @@ int st_lsm6dsx_check_odr(struct st_lsm6dsx_sensor *sensor, u16 odr, u8 *val) + return -EINVAL; + + *val = odr_table->odr_avl[i].val; +- +- return 0; ++ return odr_table->odr_avl[i].hz; + } + + static u16 st_lsm6dsx_check_odr_dependency(struct st_lsm6dsx_hw *hw, u16 odr, +@@ -1149,8 +1148,10 @@ static int st_lsm6dsx_write_raw(struct iio_dev *iio_dev, + case IIO_CHAN_INFO_SAMP_FREQ: { + u8 data; + +- err = st_lsm6dsx_check_odr(sensor, val, &data); +- if (!err) ++ val = st_lsm6dsx_check_odr(sensor, val, &data); ++ if (val < 0) ++ err = val; ++ else + sensor->odr = val; + break; + } +diff --git a/drivers/interconnect/qcom/qcs404.c b/drivers/interconnect/qcom/qcs404.c +index b4966d8f3348..8e0735a87040 100644 +--- a/drivers/interconnect/qcom/qcs404.c ++++ b/drivers/interconnect/qcom/qcs404.c +@@ -414,7 +414,7 @@ static int qnoc_probe(struct platform_device *pdev) + struct icc_provider *provider; + struct qcom_icc_node **qnodes; + struct qcom_icc_provider *qp; +- struct icc_node *node; ++ struct icc_node *node, *tmp; + size_t num_nodes, i; + int ret; + +@@ -494,7 +494,7 @@ static int qnoc_probe(struct platform_device *pdev) + + return 0; + err: +- list_for_each_entry(node, &provider->nodes, node_list) { ++ list_for_each_entry_safe(node, tmp, &provider->nodes, node_list) { + icc_node_del(node); + icc_node_destroy(node->id); + } +@@ -508,9 +508,9 @@ static int qnoc_remove(struct platform_device *pdev) + { + struct qcom_icc_provider *qp = platform_get_drvdata(pdev); + struct icc_provider *provider = &qp->provider; +- struct icc_node *n; ++ struct icc_node *n, *tmp; + +- list_for_each_entry(n, &provider->nodes, node_list) { ++ list_for_each_entry_safe(n, tmp, &provider->nodes, node_list) { + icc_node_del(n); + icc_node_destroy(n->id); + } +diff --git a/drivers/interconnect/qcom/sdm845.c b/drivers/interconnect/qcom/sdm845.c +index 502a6c22b41e..387267ee9648 100644 +--- a/drivers/interconnect/qcom/sdm845.c ++++ b/drivers/interconnect/qcom/sdm845.c +@@ -868,9 +868,9 @@ static int qnoc_remove(struct platform_device *pdev) + { + struct qcom_icc_provider *qp = platform_get_drvdata(pdev); + struct icc_provider *provider = &qp->provider; +- struct icc_node *n; ++ 
struct icc_node *n, *tmp; + +- list_for_each_entry(n, &provider->nodes, node_list) { ++ list_for_each_entry_safe(n, tmp, &provider->nodes, node_list) { + icc_node_del(n); + icc_node_destroy(n->id); + } +diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c +index d06b8aa41e26..43d1af1d8173 100644 +--- a/drivers/md/dm-writecache.c ++++ b/drivers/md/dm-writecache.c +@@ -1218,7 +1218,8 @@ bio_copy: + } + } while (bio->bi_iter.bi_size); + +- if (unlikely(wc->uncommitted_blocks >= wc->autocommit_blocks)) ++ if (unlikely(bio->bi_opf & REQ_FUA || ++ wc->uncommitted_blocks >= wc->autocommit_blocks)) + writecache_flush(wc); + else + writecache_schedule_autocommit(wc); +diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c +index 595a73110e17..ac1179ca80d9 100644 +--- a/drivers/md/dm-zoned-metadata.c ++++ b/drivers/md/dm-zoned-metadata.c +@@ -554,6 +554,7 @@ static struct dmz_mblock *dmz_get_mblock(struct dmz_metadata *zmd, + TASK_UNINTERRUPTIBLE); + if (test_bit(DMZ_META_ERROR, &mblk->state)) { + dmz_release_mblock(zmd, mblk); ++ dmz_check_bdev(zmd->dev); + return ERR_PTR(-EIO); + } + +@@ -625,6 +626,8 @@ static int dmz_rdwr_block(struct dmz_metadata *zmd, int op, sector_t block, + ret = submit_bio_wait(bio); + bio_put(bio); + ++ if (ret) ++ dmz_check_bdev(zmd->dev); + return ret; + } + +@@ -691,6 +694,7 @@ static int dmz_write_dirty_mblocks(struct dmz_metadata *zmd, + TASK_UNINTERRUPTIBLE); + if (test_bit(DMZ_META_ERROR, &mblk->state)) { + clear_bit(DMZ_META_ERROR, &mblk->state); ++ dmz_check_bdev(zmd->dev); + ret = -EIO; + } + nr_mblks_submitted--; +@@ -768,7 +772,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) + /* If there are no dirty metadata blocks, just flush the device cache */ + if (list_empty(&write_list)) { + ret = blkdev_issue_flush(zmd->dev->bdev, GFP_NOIO, NULL); +- goto out; ++ goto err; + } + + /* +@@ -778,7 +782,7 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) + */ + ret = dmz_log_dirty_mblocks(zmd, &write_list); + if (ret) +- goto out; ++ goto err; + + /* + * The log is on disk. 
It is now safe to update in place +@@ -786,11 +790,11 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) + */ + ret = dmz_write_dirty_mblocks(zmd, &write_list, zmd->mblk_primary); + if (ret) +- goto out; ++ goto err; + + ret = dmz_write_sb(zmd, zmd->mblk_primary); + if (ret) +- goto out; ++ goto err; + + while (!list_empty(&write_list)) { + mblk = list_first_entry(&write_list, struct dmz_mblock, link); +@@ -805,16 +809,20 @@ int dmz_flush_metadata(struct dmz_metadata *zmd) + + zmd->sb_gen++; + out: +- if (ret && !list_empty(&write_list)) { +- spin_lock(&zmd->mblk_lock); +- list_splice(&write_list, &zmd->mblk_dirty_list); +- spin_unlock(&zmd->mblk_lock); +- } +- + dmz_unlock_flush(zmd); + up_write(&zmd->mblk_sem); + + return ret; ++ ++err: ++ if (!list_empty(&write_list)) { ++ spin_lock(&zmd->mblk_lock); ++ list_splice(&write_list, &zmd->mblk_dirty_list); ++ spin_unlock(&zmd->mblk_lock); ++ } ++ if (!dmz_check_bdev(zmd->dev)) ++ ret = -EIO; ++ goto out; + } + + /* +@@ -1244,6 +1252,7 @@ static int dmz_update_zone(struct dmz_metadata *zmd, struct dm_zone *zone) + if (ret) { + dmz_dev_err(zmd->dev, "Get zone %u report failed", + dmz_id(zmd, zone)); ++ dmz_check_bdev(zmd->dev); + return ret; + } + +diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c +index d240d7ca8a8a..e7ace908a9b7 100644 +--- a/drivers/md/dm-zoned-reclaim.c ++++ b/drivers/md/dm-zoned-reclaim.c +@@ -82,6 +82,7 @@ static int dmz_reclaim_align_wp(struct dmz_reclaim *zrc, struct dm_zone *zone, + "Align zone %u wp %llu to %llu (wp+%u) blocks failed %d", + dmz_id(zmd, zone), (unsigned long long)wp_block, + (unsigned long long)block, nr_blocks, ret); ++ dmz_check_bdev(zrc->dev); + return ret; + } + +@@ -489,12 +490,7 @@ static void dmz_reclaim_work(struct work_struct *work) + ret = dmz_do_reclaim(zrc); + if (ret) { + dmz_dev_debug(zrc->dev, "Reclaim error %d\n", ret); +- if (ret == -EIO) +- /* +- * LLD might be performing some error handling sequence +- * at the underlying device. To not interfere, do not +- * attempt to schedule the next reclaim run immediately. +- */ ++ if (!dmz_check_bdev(zrc->dev)) + return; + } + +diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c +index d3bcc4197f5d..4574e0dedbd6 100644 +--- a/drivers/md/dm-zoned-target.c ++++ b/drivers/md/dm-zoned-target.c +@@ -80,6 +80,8 @@ static inline void dmz_bio_endio(struct bio *bio, blk_status_t status) + + if (status != BLK_STS_OK && bio->bi_status == BLK_STS_OK) + bio->bi_status = status; ++ if (bio->bi_status != BLK_STS_OK) ++ bioctx->target->dev->flags |= DMZ_CHECK_BDEV; + + if (refcount_dec_and_test(&bioctx->ref)) { + struct dm_zone *zone = bioctx->zone; +@@ -565,31 +567,51 @@ out: + } + + /* +- * Check the backing device availability. If it's on the way out, ++ * Check if the backing device is being removed. If it's on the way out, + * start failing I/O. Reclaim and metadata components also call this + * function to cleanly abort operation in the event of such failure. 
+ */ + bool dmz_bdev_is_dying(struct dmz_dev *dmz_dev) + { +- struct gendisk *disk; ++ if (dmz_dev->flags & DMZ_BDEV_DYING) ++ return true; + +- if (!(dmz_dev->flags & DMZ_BDEV_DYING)) { +- disk = dmz_dev->bdev->bd_disk; +- if (blk_queue_dying(bdev_get_queue(dmz_dev->bdev))) { +- dmz_dev_warn(dmz_dev, "Backing device queue dying"); +- dmz_dev->flags |= DMZ_BDEV_DYING; +- } else if (disk->fops->check_events) { +- if (disk->fops->check_events(disk, 0) & +- DISK_EVENT_MEDIA_CHANGE) { +- dmz_dev_warn(dmz_dev, "Backing device offline"); +- dmz_dev->flags |= DMZ_BDEV_DYING; +- } +- } ++ if (dmz_dev->flags & DMZ_CHECK_BDEV) ++ return !dmz_check_bdev(dmz_dev); ++ ++ if (blk_queue_dying(bdev_get_queue(dmz_dev->bdev))) { ++ dmz_dev_warn(dmz_dev, "Backing device queue dying"); ++ dmz_dev->flags |= DMZ_BDEV_DYING; + } + + return dmz_dev->flags & DMZ_BDEV_DYING; + } + ++/* ++ * Check the backing device availability. This detects such events as ++ * backing device going offline due to errors, media removals, etc. ++ * This check is less efficient than dmz_bdev_is_dying() and should ++ * only be performed as a part of error handling. ++ */ ++bool dmz_check_bdev(struct dmz_dev *dmz_dev) ++{ ++ struct gendisk *disk; ++ ++ dmz_dev->flags &= ~DMZ_CHECK_BDEV; ++ ++ if (dmz_bdev_is_dying(dmz_dev)) ++ return false; ++ ++ disk = dmz_dev->bdev->bd_disk; ++ if (disk->fops->check_events && ++ disk->fops->check_events(disk, 0) & DISK_EVENT_MEDIA_CHANGE) { ++ dmz_dev_warn(dmz_dev, "Backing device offline"); ++ dmz_dev->flags |= DMZ_BDEV_DYING; ++ } ++ ++ return !(dmz_dev->flags & DMZ_BDEV_DYING); ++} ++ + /* + * Process a new BIO. + */ +@@ -902,8 +924,8 @@ static int dmz_prepare_ioctl(struct dm_target *ti, struct block_device **bdev) + { + struct dmz_target *dmz = ti->private; + +- if (dmz_bdev_is_dying(dmz->dev)) +- return -ENODEV; ++ if (!dmz_check_bdev(dmz->dev)) ++ return -EIO; + + *bdev = dmz->dev->bdev; + +diff --git a/drivers/md/dm-zoned.h b/drivers/md/dm-zoned.h +index d8e70b0ade35..5b5e493d479c 100644 +--- a/drivers/md/dm-zoned.h ++++ b/drivers/md/dm-zoned.h +@@ -72,6 +72,7 @@ struct dmz_dev { + + /* Device flags. */ + #define DMZ_BDEV_DYING (1 << 0) ++#define DMZ_CHECK_BDEV (2 << 0) + + /* + * Zone descriptor. 
+@@ -255,5 +256,6 @@ void dmz_schedule_reclaim(struct dmz_reclaim *zrc); + * Functions defined in dm-zoned-target.c + */ + bool dmz_bdev_is_dying(struct dmz_dev *dmz_dev); ++bool dmz_check_bdev(struct dmz_dev *dmz_dev); + + #endif /* DM_ZONED_H */ +diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c +index c766c559d36d..26c75c0199fa 100644 +--- a/drivers/md/md-linear.c ++++ b/drivers/md/md-linear.c +@@ -244,10 +244,9 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio) + sector_t start_sector, end_sector, data_offset; + sector_t bio_sector = bio->bi_iter.bi_sector; + +- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) { +- md_flush_request(mddev, bio); ++ if (unlikely(bio->bi_opf & REQ_PREFLUSH) ++ && md_flush_request(mddev, bio)) + return true; +- } + + tmp_dev = which_dev(mddev, bio_sector); + start_sector = tmp_dev->end_sector - tmp_dev->rdev->sectors; +diff --git a/drivers/md/md-multipath.c b/drivers/md/md-multipath.c +index 6780938d2991..152f9e65a226 100644 +--- a/drivers/md/md-multipath.c ++++ b/drivers/md/md-multipath.c +@@ -104,10 +104,9 @@ static bool multipath_make_request(struct mddev *mddev, struct bio * bio) + struct multipath_bh * mp_bh; + struct multipath_info *multipath; + +- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) { +- md_flush_request(mddev, bio); ++ if (unlikely(bio->bi_opf & REQ_PREFLUSH) ++ && md_flush_request(mddev, bio)) + return true; +- } + + mp_bh = mempool_alloc(&conf->pool, GFP_NOIO); + +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 1be7abeb24fd..b8dd56b746da 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -550,7 +550,13 @@ static void md_submit_flush_data(struct work_struct *ws) + } + } + +-void md_flush_request(struct mddev *mddev, struct bio *bio) ++/* ++ * Manages consolidation of flushes and submitting any flushes needed for ++ * a bio with REQ_PREFLUSH. Returns true if the bio is finished or is ++ * being finished in another context. Returns false if the flushing is ++ * complete but still needs the I/O portion of the bio to be processed. ++ */ ++bool md_flush_request(struct mddev *mddev, struct bio *bio) + { + ktime_t start = ktime_get_boottime(); + spin_lock_irq(&mddev->lock); +@@ -575,9 +581,10 @@ void md_flush_request(struct mddev *mddev, struct bio *bio) + bio_endio(bio); + else { + bio->bi_opf &= ~REQ_PREFLUSH; +- mddev->pers->make_request(mddev, bio); ++ return false; + } + } ++ return true; + } + EXPORT_SYMBOL(md_flush_request); + +diff --git a/drivers/md/md.h b/drivers/md/md.h +index c5e3ff398b59..5f86f8adb0a4 100644 +--- a/drivers/md/md.h ++++ b/drivers/md/md.h +@@ -550,7 +550,7 @@ struct md_personality + int level; + struct list_head list; + struct module *owner; +- bool (*make_request)(struct mddev *mddev, struct bio *bio); ++ bool __must_check (*make_request)(struct mddev *mddev, struct bio *bio); + /* + * start up works that do NOT require md_thread. 
tasks that + * requires md_thread should go into start() +@@ -703,7 +703,7 @@ extern void md_error(struct mddev *mddev, struct md_rdev *rdev); + extern void md_finish_reshape(struct mddev *mddev); + + extern int mddev_congested(struct mddev *mddev, int bits); +-extern void md_flush_request(struct mddev *mddev, struct bio *bio); ++extern bool __must_check md_flush_request(struct mddev *mddev, struct bio *bio); + extern void md_super_write(struct mddev *mddev, struct md_rdev *rdev, + sector_t sector, int size, struct page *page); + extern int md_super_wait(struct mddev *mddev); +diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c +index aa88bdeb9978..b7c20979bd19 100644 +--- a/drivers/md/raid0.c ++++ b/drivers/md/raid0.c +@@ -575,10 +575,9 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio) + unsigned chunk_sects; + unsigned sectors; + +- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) { +- md_flush_request(mddev, bio); ++ if (unlikely(bio->bi_opf & REQ_PREFLUSH) ++ && md_flush_request(mddev, bio)) + return true; +- } + + if (unlikely((bio_op(bio) == REQ_OP_DISCARD))) { + raid0_handle_discard(mddev, bio); +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c +index 0466ee2453b4..bb29aeefcbd0 100644 +--- a/drivers/md/raid1.c ++++ b/drivers/md/raid1.c +@@ -1567,10 +1567,9 @@ static bool raid1_make_request(struct mddev *mddev, struct bio *bio) + { + sector_t sectors; + +- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) { +- md_flush_request(mddev, bio); ++ if (unlikely(bio->bi_opf & REQ_PREFLUSH) ++ && md_flush_request(mddev, bio)) + return true; +- } + + /* + * There is a limit to the maximum size, but +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c +index 8a62c920bb65..ec136e44aef7 100644 +--- a/drivers/md/raid10.c ++++ b/drivers/md/raid10.c +@@ -1525,10 +1525,9 @@ static bool raid10_make_request(struct mddev *mddev, struct bio *bio) + int chunk_sects = chunk_mask + 1; + int sectors = bio_sectors(bio); + +- if (unlikely(bio->bi_opf & REQ_PREFLUSH)) { +- md_flush_request(mddev, bio); ++ if (unlikely(bio->bi_opf & REQ_PREFLUSH) ++ && md_flush_request(mddev, bio)) + return true; +- } + + if (!md_write_start(mddev, bio)) + return false; +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index 223e97ab27e6..12a8ce83786e 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -5592,8 +5592,8 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi) + if (ret == 0) + return true; + if (ret == -ENODEV) { +- md_flush_request(mddev, bi); +- return true; ++ if (md_flush_request(mddev, bi)) ++ return true; + } + /* ret == -EAGAIN, fallback */ + /* +diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c +index 7f4660555ddb..59ae7a1e63bc 100644 +--- a/drivers/media/platform/qcom/venus/vdec.c ++++ b/drivers/media/platform/qcom/venus/vdec.c +@@ -1412,9 +1412,6 @@ static const struct v4l2_file_operations vdec_fops = { + .unlocked_ioctl = video_ioctl2, + .poll = v4l2_m2m_fop_poll, + .mmap = v4l2_m2m_fop_mmap, +-#ifdef CONFIG_COMPAT +- .compat_ioctl32 = v4l2_compat_ioctl32, +-#endif + }; + + static int vdec_probe(struct platform_device *pdev) +diff --git a/drivers/media/platform/qcom/venus/venc.c b/drivers/media/platform/qcom/venus/venc.c +index 1b7fb2d5887c..30028ceb548b 100644 +--- a/drivers/media/platform/qcom/venus/venc.c ++++ b/drivers/media/platform/qcom/venus/venc.c +@@ -1235,9 +1235,6 @@ static const struct v4l2_file_operations venc_fops = { + .unlocked_ioctl = video_ioctl2, + .poll = v4l2_m2m_fop_poll, + .mmap = 
v4l2_m2m_fop_mmap, +-#ifdef CONFIG_COMPAT +- .compat_ioctl32 = v4l2_compat_ioctl32, +-#endif + }; + + static int venc_probe(struct platform_device *pdev) +diff --git a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c +index e90f1ba30574..675b5f2b4c2e 100644 +--- a/drivers/media/platform/sti/bdisp/bdisp-v4l2.c ++++ b/drivers/media/platform/sti/bdisp/bdisp-v4l2.c +@@ -651,8 +651,7 @@ static int bdisp_release(struct file *file) + + dev_dbg(bdisp->dev, "%s\n", __func__); + +- if (mutex_lock_interruptible(&bdisp->lock)) +- return -ERESTARTSYS; ++ mutex_lock(&bdisp->lock); + + v4l2_m2m_ctx_release(ctx->fh.m2m_ctx); + +diff --git a/drivers/media/platform/vimc/vimc-sensor.c b/drivers/media/platform/vimc/vimc-sensor.c +index 6c53b9fc1617..4a6a7e8e66c2 100644 +--- a/drivers/media/platform/vimc/vimc-sensor.c ++++ b/drivers/media/platform/vimc/vimc-sensor.c +@@ -25,7 +25,6 @@ struct vimc_sen_device { + struct v4l2_subdev sd; + struct device *dev; + struct tpg_data tpg; +- struct task_struct *kthread_sen; + u8 *frame; + /* The active format */ + struct v4l2_mbus_framefmt mbus_format; +@@ -208,10 +207,6 @@ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable) + const struct vimc_pix_map *vpix; + unsigned int frame_size; + +- if (vsen->kthread_sen) +- /* tpg is already executing */ +- return 0; +- + /* Calculate the frame size */ + vpix = vimc_pix_map_by_code(vsen->mbus_format.code); + frame_size = vsen->mbus_format.width * vpix->bpp * +diff --git a/drivers/media/radio/radio-wl1273.c b/drivers/media/radio/radio-wl1273.c +index 104ac41c6f96..112376873167 100644 +--- a/drivers/media/radio/radio-wl1273.c ++++ b/drivers/media/radio/radio-wl1273.c +@@ -1148,8 +1148,7 @@ static int wl1273_fm_fops_release(struct file *file) + if (radio->rds_users > 0) { + radio->rds_users--; + if (radio->rds_users == 0) { +- if (mutex_lock_interruptible(&core->lock)) +- return -EINTR; ++ mutex_lock(&core->lock); + + radio->irq_flags &= ~WL1273_RDS_EVENT; + +diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c +index 952fa4063ff8..d0df054b0b47 100644 +--- a/drivers/mmc/host/omap_hsmmc.c ++++ b/drivers/mmc/host/omap_hsmmc.c +@@ -1512,6 +1512,36 @@ static void omap_hsmmc_init_card(struct mmc_host *mmc, struct mmc_card *card) + + if (mmc_pdata(host)->init_card) + mmc_pdata(host)->init_card(card); ++ else if (card->type == MMC_TYPE_SDIO || ++ card->type == MMC_TYPE_SD_COMBO) { ++ struct device_node *np = mmc_dev(mmc)->of_node; ++ ++ /* ++ * REVISIT: should be moved to sdio core and made more ++ * general e.g. by expanding the DT bindings of child nodes ++ * to provide a mechanism to provide this information: ++ * Documentation/devicetree/bindings/mmc/mmc-card.txt ++ */ ++ ++ np = of_get_compatible_child(np, "ti,wl1251"); ++ if (np) { ++ /* ++ * We have TI wl1251 attached to MMC3. Pass this ++ * information to the SDIO core because it can't be ++ * probed by normal methods. 
++ */ ++ ++ dev_info(host->dev, "found wl1251\n"); ++ card->quirks |= MMC_QUIRK_NONSTD_SDIO; ++ card->cccr.wide_bus = 1; ++ card->cis.vendor = 0x104c; ++ card->cis.device = 0x9066; ++ card->cis.blksize = 512; ++ card->cis.max_dtr = 24000000; ++ card->ocr = 0x80; ++ of_node_put(np); ++ } ++ } + } + + static void omap_hsmmc_enable_sdio_irq(struct mmc_host *mmc, int enable) +diff --git a/drivers/mtd/devices/spear_smi.c b/drivers/mtd/devices/spear_smi.c +index 986f81d2f93e..47ad0766affa 100644 +--- a/drivers/mtd/devices/spear_smi.c ++++ b/drivers/mtd/devices/spear_smi.c +@@ -592,6 +592,26 @@ static int spear_mtd_read(struct mtd_info *mtd, loff_t from, size_t len, + return 0; + } + ++/* ++ * The purpose of this function is to ensure a memcpy_toio() with byte writes ++ * only. Its structure is inspired from the ARM implementation of _memcpy_toio() ++ * which also does single byte writes but cannot be used here as this is just an ++ * implementation detail and not part of the API. Not mentioning the comment ++ * stating that _memcpy_toio() should be optimized. ++ */ ++static void spear_smi_memcpy_toio_b(volatile void __iomem *dest, ++ const void *src, size_t len) ++{ ++ const unsigned char *from = src; ++ ++ while (len) { ++ len--; ++ writeb(*from, dest); ++ from++; ++ dest++; ++ } ++} ++ + static inline int spear_smi_cpy_toio(struct spear_smi *dev, u32 bank, + void __iomem *dest, const void *src, size_t len) + { +@@ -614,7 +634,23 @@ static inline int spear_smi_cpy_toio(struct spear_smi *dev, u32 bank, + ctrlreg1 = readl(dev->io_base + SMI_CR1); + writel((ctrlreg1 | WB_MODE) & ~SW_MODE, dev->io_base + SMI_CR1); + +- memcpy_toio(dest, src, len); ++ /* ++ * In Write Burst mode (WB_MODE), the specs states that writes must be: ++ * - incremental ++ * - of the same size ++ * The ARM implementation of memcpy_toio() will optimize the number of ++ * I/O by using as much 4-byte writes as possible, surrounded by ++ * 2-byte/1-byte access if: ++ * - the destination is not 4-byte aligned ++ * - the length is not a multiple of 4-byte. ++ * Avoid this alternance of write access size by using our own 'byte ++ * access' helper if at least one of the two conditions above is true. 
++ */ ++ if (IS_ALIGNED(len, sizeof(u32)) && ++ IS_ALIGNED((uintptr_t)dest, sizeof(u32))) ++ memcpy_toio(dest, src, len); ++ else ++ spear_smi_memcpy_toio_b(dest, src, len); + + writel(ctrlreg1, dev->io_base + SMI_CR1); + +diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c +index 5c2c30a7dffa..f64e3b6605c6 100644 +--- a/drivers/mtd/nand/raw/nand_base.c ++++ b/drivers/mtd/nand/raw/nand_base.c +@@ -292,12 +292,16 @@ int nand_bbm_get_next_page(struct nand_chip *chip, int page) + struct mtd_info *mtd = nand_to_mtd(chip); + int last_page = ((mtd->erasesize - mtd->writesize) >> + chip->page_shift) & chip->pagemask; ++ unsigned int bbm_flags = NAND_BBM_FIRSTPAGE | NAND_BBM_SECONDPAGE ++ | NAND_BBM_LASTPAGE; + ++ if (page == 0 && !(chip->options & bbm_flags)) ++ return 0; + if (page == 0 && chip->options & NAND_BBM_FIRSTPAGE) + return 0; +- else if (page <= 1 && chip->options & NAND_BBM_SECONDPAGE) ++ if (page <= 1 && chip->options & NAND_BBM_SECONDPAGE) + return 1; +- else if (page <= last_page && chip->options & NAND_BBM_LASTPAGE) ++ if (page <= last_page && chip->options & NAND_BBM_LASTPAGE) + return last_page; + + return -EINVAL; +diff --git a/drivers/mtd/nand/raw/nand_micron.c b/drivers/mtd/nand/raw/nand_micron.c +index 8ca9fad6e6ad..56654030ec7f 100644 +--- a/drivers/mtd/nand/raw/nand_micron.c ++++ b/drivers/mtd/nand/raw/nand_micron.c +@@ -446,8 +446,10 @@ static int micron_nand_init(struct nand_chip *chip) + if (ret) + goto err_free_manuf_data; + ++ chip->options |= NAND_BBM_FIRSTPAGE; ++ + if (mtd->writesize == 2048) +- chip->options |= NAND_BBM_FIRSTPAGE | NAND_BBM_SECONDPAGE; ++ chip->options |= NAND_BBM_SECONDPAGE; + + ondie = micron_supports_on_die_ecc(chip); + +diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c +index 1d67eeeab79d..235d51ea4d39 100644 +--- a/drivers/net/ethernet/realtek/r8169_main.c ++++ b/drivers/net/ethernet/realtek/r8169_main.c +@@ -4145,7 +4145,7 @@ static void rtl_hw_jumbo_disable(struct rtl8169_private *tp) + case RTL_GIGA_MAC_VER_27 ... RTL_GIGA_MAC_VER_28: + r8168dp_hw_jumbo_disable(tp); + break; +- case RTL_GIGA_MAC_VER_31 ... RTL_GIGA_MAC_VER_34: ++ case RTL_GIGA_MAC_VER_31 ... RTL_GIGA_MAC_VER_33: + r8168e_hw_jumbo_disable(tp); + break; + default: +diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c +index b94759daeacc..da2d179430ca 100644 +--- a/drivers/net/wireless/ath/ar5523/ar5523.c ++++ b/drivers/net/wireless/ath/ar5523/ar5523.c +@@ -255,7 +255,8 @@ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata, + + if (flags & AR5523_CMD_FLAG_MAGIC) + hdr->magic = cpu_to_be32(1 << 24); +- memcpy(hdr + 1, idata, ilen); ++ if (ilen) ++ memcpy(hdr + 1, idata, ilen); + + cmd->odata = odata; + cmd->olen = olen; +diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c +index 153b84447e40..41389c1eb252 100644 +--- a/drivers/net/wireless/ath/wil6210/wmi.c ++++ b/drivers/net/wireless/ath/wil6210/wmi.c +@@ -2505,7 +2505,8 @@ int wmi_set_ie(struct wil6210_vif *vif, u8 type, u16 ie_len, const void *ie) + cmd->mgmt_frm_type = type; + /* BUG: FW API define ieLen as u8. 
Will fix FW */ + cmd->ie_len = cpu_to_le16(ie_len); +- memcpy(cmd->ie_info, ie, ie_len); ++ if (ie_len) ++ memcpy(cmd->ie_info, ie, ie_len); + rc = wmi_send(wil, WMI_SET_APPIE_CMDID, vif->mid, cmd, len); + kfree(cmd); + out: +@@ -2541,7 +2542,8 @@ int wmi_update_ft_ies(struct wil6210_vif *vif, u16 ie_len, const void *ie) + } + + cmd->ie_len = cpu_to_le16(ie_len); +- memcpy(cmd->ie_info, ie, ie_len); ++ if (ie_len) ++ memcpy(cmd->ie_info, ie, ie_len); + rc = wmi_send(wil, WMI_UPDATE_FT_IES_CMDID, vif->mid, cmd, len); + kfree(cmd); + +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c +index 6c463475e90b..3be60aef5465 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/pcie.c +@@ -1427,6 +1427,8 @@ static int brcmf_pcie_reset(struct device *dev) + struct brcmf_fw_request *fwreq; + int err; + ++ brcmf_pcie_intr_disable(devinfo); ++ + brcmf_pcie_bus_console_read(devinfo, true); + + brcmf_detach(dev); +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +index d80f71f82a6d..97cb3a8d505c 100644 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c +@@ -468,6 +468,7 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, + dma_addr_t tb_phys; + int len, tb1_len, tb2_len; + void *tb1_addr; ++ struct sk_buff *frag; + + tb_phys = iwl_pcie_get_first_tb_dma(txq, idx); + +@@ -516,6 +517,19 @@ iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans, + if (iwl_pcie_gen2_tx_add_frags(trans, skb, tfd, out_meta)) + goto out_err; + ++ skb_walk_frags(skb, frag) { ++ tb_phys = dma_map_single(trans->dev, frag->data, ++ skb_headlen(frag), DMA_TO_DEVICE); ++ if (unlikely(dma_mapping_error(trans->dev, tb_phys))) ++ goto out_err; ++ iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, skb_headlen(frag)); ++ trace_iwlwifi_dev_tx_tb(trans->dev, skb, ++ frag->data, ++ skb_headlen(frag)); ++ if (iwl_pcie_gen2_tx_add_frags(trans, frag, tfd, out_meta)) ++ goto out_err; ++ } ++ + return tfd; + + out_err: +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c +index c7f29a9be50d..146fe144f5f5 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c +@@ -1176,6 +1176,7 @@ void rtl92de_enable_interrupt(struct ieee80211_hw *hw) + + rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF); + rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] & 0xFFFFFFFF); ++ rtlpci->irq_enabled = true; + } + + void rtl92de_disable_interrupt(struct ieee80211_hw *hw) +@@ -1185,7 +1186,7 @@ void rtl92de_disable_interrupt(struct ieee80211_hw *hw) + + rtl_write_dword(rtlpriv, REG_HIMR, IMR8190_DISABLED); + rtl_write_dword(rtlpriv, REG_HIMRE, IMR8190_DISABLED); +- synchronize_irq(rtlpci->pdev->irq); ++ rtlpci->irq_enabled = false; + } + + static void _rtl92de_poweroff_adapter(struct ieee80211_hw *hw) +@@ -1351,7 +1352,7 @@ void rtl92de_set_beacon_related_registers(struct ieee80211_hw *hw) + + bcn_interval = mac->beacon_interval; + atim_window = 2; +- /*rtl92de_disable_interrupt(hw); */ ++ rtl92de_disable_interrupt(hw); + rtl_write_word(rtlpriv, REG_ATIMWND, atim_window); + rtl_write_word(rtlpriv, REG_BCN_INTERVAL, bcn_interval); + rtl_write_word(rtlpriv, REG_BCNTCFG, 0x660f); +@@ -1371,9 +1372,9 @@ void 
rtl92de_set_beacon_interval(struct ieee80211_hw *hw) + + RT_TRACE(rtlpriv, COMP_BEACON, DBG_DMESG, + "beacon_interval:%d\n", bcn_interval); +- /* rtl92de_disable_interrupt(hw); */ ++ rtl92de_disable_interrupt(hw); + rtl_write_word(rtlpriv, REG_BCN_INTERVAL, bcn_interval); +- /* rtl92de_enable_interrupt(hw); */ ++ rtl92de_enable_interrupt(hw); + } + + void rtl92de_update_interrupt_mask(struct ieee80211_hw *hw, +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c +index 99e5cd9a5c86..1dbdddce0823 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/sw.c +@@ -216,6 +216,7 @@ static struct rtl_hal_ops rtl8192de_hal_ops = { + .led_control = rtl92de_led_control, + .set_desc = rtl92de_set_desc, + .get_desc = rtl92de_get_desc, ++ .is_tx_desc_closed = rtl92de_is_tx_desc_closed, + .tx_polling = rtl92de_tx_polling, + .enable_hw_sec = rtl92de_enable_hw_security_config, + .set_key = rtl92de_set_key, +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c +index 2494e1f118f8..92c9fb45f800 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c +@@ -804,13 +804,15 @@ u64 rtl92de_get_desc(struct ieee80211_hw *hw, + break; + } + } else { +- struct rx_desc_92c *pdesc = (struct rx_desc_92c *)p_desc; + switch (desc_name) { + case HW_DESC_OWN: +- ret = GET_RX_DESC_OWN(pdesc); ++ ret = GET_RX_DESC_OWN(p_desc); + break; + case HW_DESC_RXPKT_LEN: +- ret = GET_RX_DESC_PKT_LEN(pdesc); ++ ret = GET_RX_DESC_PKT_LEN(p_desc); ++ break; ++ case HW_DESC_RXBUFF_ADDR: ++ ret = GET_RX_DESC_BUFF_ADDR(p_desc); + break; + default: + WARN_ONCE(true, "rtl8192de: ERR rxdesc :%d not processed\n", +@@ -821,6 +823,23 @@ u64 rtl92de_get_desc(struct ieee80211_hw *hw, + return ret; + } + ++bool rtl92de_is_tx_desc_closed(struct ieee80211_hw *hw, ++ u8 hw_queue, u16 index) ++{ ++ struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); ++ struct rtl8192_tx_ring *ring = &rtlpci->tx_ring[hw_queue]; ++ u8 *entry = (u8 *)(&ring->desc[ring->idx]); ++ u8 own = (u8)rtl92de_get_desc(hw, entry, true, HW_DESC_OWN); ++ ++ /* a beacon packet will only use the first ++ * descriptor by defaut, and the own bit may not ++ * be cleared by the hardware ++ */ ++ if (own) ++ return false; ++ return true; ++} ++ + void rtl92de_tx_polling(struct ieee80211_hw *hw, u8 hw_queue) + { + struct rtl_priv *rtlpriv = rtl_priv(hw); +diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h +index 36820070fd76..635989e15282 100644 +--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h ++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h +@@ -715,6 +715,8 @@ void rtl92de_set_desc(struct ieee80211_hw *hw, u8 *pdesc, bool istx, + u8 desc_name, u8 *val); + u64 rtl92de_get_desc(struct ieee80211_hw *hw, + u8 *p_desc, bool istx, u8 desc_name); ++bool rtl92de_is_tx_desc_closed(struct ieee80211_hw *hw, ++ u8 hw_queue, u16 index); + void rtl92de_tx_polling(struct ieee80211_hw *hw, u8 hw_queue); + void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc, + bool b_firstseg, bool b_lastseg, +diff --git a/drivers/net/wireless/virt_wifi.c b/drivers/net/wireless/virt_wifi.c +index 7997cc6de334..01305ba2d3aa 100644 +--- a/drivers/net/wireless/virt_wifi.c ++++ b/drivers/net/wireless/virt_wifi.c +@@ -450,7 +450,6 @@ static void 
virt_wifi_net_device_destructor(struct net_device *dev) + */ + kfree(dev->ieee80211_ptr); + dev->ieee80211_ptr = NULL; +- free_netdev(dev); + } + + /* No lock interaction. */ +@@ -458,7 +457,7 @@ static void virt_wifi_setup(struct net_device *dev) + { + ether_setup(dev); + dev->netdev_ops = &virt_wifi_ops; +- dev->priv_destructor = virt_wifi_net_device_destructor; ++ dev->needs_free_netdev = true; + } + + /* Called in a RCU read critical section from netif_receive_skb */ +@@ -544,6 +543,7 @@ static int virt_wifi_newlink(struct net *src_net, struct net_device *dev, + goto unregister_netdev; + } + ++ dev->priv_destructor = virt_wifi_net_device_destructor; + priv->being_deleted = false; + priv->is_connected = false; + priv->is_up = false; +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index fa7ba09dca77..af3212aec871 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -1727,6 +1727,8 @@ static int nvme_report_ns_ids(struct nvme_ctrl *ctrl, unsigned int nsid, + if (ret) + dev_warn(ctrl->device, + "Identify Descriptors failed (%d)\n", ret); ++ if (ret > 0) ++ ret = 0; + } + return ret; + } +@@ -2404,16 +2406,6 @@ static const struct nvme_core_quirk_entry core_quirks[] = { + .vid = 0x14a4, + .fr = "22301111", + .quirks = NVME_QUIRK_SIMPLE_SUSPEND, +- }, +- { +- /* +- * This Kingston E8FK11.T firmware version has no interrupt +- * after resume with actions related to suspend to idle +- * https://bugzilla.kernel.org/show_bug.cgi?id=204887 +- */ +- .vid = 0x2646, +- .fr = "E8FK11.T", +- .quirks = NVME_QUIRK_SIMPLE_SUSPEND, + } + }; + +diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c +index e4c46637f32f..b3869951c0eb 100644 +--- a/drivers/pci/hotplug/acpiphp_glue.c ++++ b/drivers/pci/hotplug/acpiphp_glue.c +@@ -449,8 +449,15 @@ static void acpiphp_native_scan_bridge(struct pci_dev *bridge) + + /* Scan non-hotplug bridges that need to be reconfigured */ + for_each_pci_bridge(dev, bus) { +- if (!hotplug_is_native(dev)) +- max = pci_scan_bridge(bus, dev, max, 1); ++ if (hotplug_is_native(dev)) ++ continue; ++ ++ max = pci_scan_bridge(bus, dev, max, 1); ++ if (dev->subordinate) { ++ pcibios_resource_survey_bus(dev->subordinate); ++ pci_bus_size_bridges(dev->subordinate); ++ pci_bus_assign_resources(dev->subordinate); ++ } + } + } + +@@ -480,7 +487,6 @@ static void enable_slot(struct acpiphp_slot *slot, bool bridge) + if (PCI_SLOT(dev->devfn) == slot->device) + acpiphp_native_scan_bridge(dev); + } +- pci_assign_unassigned_bridge_resources(bus->self); + } else { + LIST_HEAD(add_list); + int max, pass; +diff --git a/drivers/phy/renesas/phy-rcar-gen3-usb2.c b/drivers/phy/renesas/phy-rcar-gen3-usb2.c +index b7f6b1324395..6fd1390fd06e 100644 +--- a/drivers/phy/renesas/phy-rcar-gen3-usb2.c ++++ b/drivers/phy/renesas/phy-rcar-gen3-usb2.c +@@ -21,6 +21,7 @@ + #include + #include + #include ++#include + #include + #include + +@@ -320,9 +321,9 @@ static ssize_t role_store(struct device *dev, struct device_attribute *attr, + if (!ch->is_otg_channel || !rcar_gen3_is_any_rphy_initialized(ch)) + return -EIO; + +- if (!strncmp(buf, "host", strlen("host"))) ++ if (sysfs_streq(buf, "host")) + new_mode = PHY_MODE_USB_HOST; +- else if (!strncmp(buf, "peripheral", strlen("peripheral"))) ++ else if (sysfs_streq(buf, "peripheral")) + new_mode = PHY_MODE_USB_DEVICE; + else + return -EINVAL; +diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c +index f2f5fcd9a237..83e585c5a613 100644 +--- 
a/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c ++++ b/drivers/pinctrl/mvebu/pinctrl-armada-37xx.c +@@ -595,10 +595,10 @@ static int armada_37xx_irq_set_type(struct irq_data *d, unsigned int type) + regmap_read(info->regmap, in_reg, &in_val); + + /* Set initial polarity based on current input level. */ +- if (in_val & d->mask) +- val |= d->mask; /* falling */ ++ if (in_val & BIT(d->hwirq % GPIO_PER_REG)) ++ val |= BIT(d->hwirq % GPIO_PER_REG); /* falling */ + else +- val &= ~d->mask; /* rising */ ++ val &= ~(BIT(d->hwirq % GPIO_PER_REG)); /* rising */ + break; + } + default: +diff --git a/drivers/pinctrl/pinctrl-rza2.c b/drivers/pinctrl/pinctrl-rza2.c +index 3be1d833bf25..eda88cdf870d 100644 +--- a/drivers/pinctrl/pinctrl-rza2.c ++++ b/drivers/pinctrl/pinctrl-rza2.c +@@ -213,8 +213,8 @@ static const char * const rza2_gpio_names[] = { + "PC_0", "PC_1", "PC_2", "PC_3", "PC_4", "PC_5", "PC_6", "PC_7", + "PD_0", "PD_1", "PD_2", "PD_3", "PD_4", "PD_5", "PD_6", "PD_7", + "PE_0", "PE_1", "PE_2", "PE_3", "PE_4", "PE_5", "PE_6", "PE_7", +- "PF_0", "PF_1", "PF_2", "PF_3", "P0_4", "PF_5", "PF_6", "PF_7", +- "PG_0", "PG_1", "PG_2", "P0_3", "PG_4", "PG_5", "PG_6", "PG_7", ++ "PF_0", "PF_1", "PF_2", "PF_3", "PF_4", "PF_5", "PF_6", "PF_7", ++ "PG_0", "PG_1", "PG_2", "PG_3", "PG_4", "PG_5", "PG_6", "PG_7", + "PH_0", "PH_1", "PH_2", "PH_3", "PH_4", "PH_5", "PH_6", "PH_7", + /* port I does not exist */ + "PJ_0", "PJ_1", "PJ_2", "PJ_3", "PJ_4", "PJ_5", "PJ_6", "PJ_7", +diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c +index ebc27b06718c..0599f5127b01 100644 +--- a/drivers/pinctrl/samsung/pinctrl-exynos.c ++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c +@@ -486,8 +486,10 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d) + if (match) { + irq_chip = kmemdup(match->data, + sizeof(*irq_chip), GFP_KERNEL); +- if (!irq_chip) ++ if (!irq_chip) { ++ of_node_put(np); + return -ENOMEM; ++ } + wkup_np = np; + break; + } +@@ -504,6 +506,7 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d) + bank->nr_pins, &exynos_eint_irqd_ops, bank); + if (!bank->irq_domain) { + dev_err(dev, "wkup irq domain add failed\n"); ++ of_node_put(wkup_np); + return -ENXIO; + } + +@@ -518,8 +521,10 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d) + weint_data = devm_kcalloc(dev, + bank->nr_pins, sizeof(*weint_data), + GFP_KERNEL); +- if (!weint_data) ++ if (!weint_data) { ++ of_node_put(wkup_np); + return -ENOMEM; ++ } + + for (idx = 0; idx < bank->nr_pins; ++idx) { + irq = irq_of_parse_and_map(bank->of_node, idx); +@@ -536,10 +541,13 @@ int exynos_eint_wkup_init(struct samsung_pinctrl_drv_data *d) + } + } + +- if (!muxed_banks) ++ if (!muxed_banks) { ++ of_node_put(wkup_np); + return 0; ++ } + + irq = irq_of_parse_and_map(wkup_np, 0); ++ of_node_put(wkup_np); + if (!irq) { + dev_err(dev, "irq number for muxed EINTs not found\n"); + return 0; +diff --git a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c +index 7e824e4d20f4..9bd0a3de101d 100644 +--- a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c ++++ b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c +@@ -490,8 +490,10 @@ static int s3c24xx_eint_init(struct samsung_pinctrl_drv_data *d) + return -ENODEV; + + eint_data = devm_kzalloc(dev, sizeof(*eint_data), GFP_KERNEL); +- if (!eint_data) ++ if (!eint_data) { ++ of_node_put(eint_np); + return -ENOMEM; ++ } + + eint_data->drvdata = d; + +@@ -503,12 +505,14 @@ static int s3c24xx_eint_init(struct samsung_pinctrl_drv_data *d) + irq = 
irq_of_parse_and_map(eint_np, i); + if (!irq) { + dev_err(dev, "failed to get wakeup EINT IRQ %d\n", i); ++ of_node_put(eint_np); + return -ENXIO; + } + + eint_data->parents[i] = irq; + irq_set_chained_handler_and_data(irq, handlers[i], eint_data); + } ++ of_node_put(eint_np); + + bank = d->pin_banks; + for (i = 0; i < d->nr_banks; ++i, ++bank) { +diff --git a/drivers/pinctrl/samsung/pinctrl-s3c64xx.c b/drivers/pinctrl/samsung/pinctrl-s3c64xx.c +index c399f0932af5..f97f8179f2b1 100644 +--- a/drivers/pinctrl/samsung/pinctrl-s3c64xx.c ++++ b/drivers/pinctrl/samsung/pinctrl-s3c64xx.c +@@ -704,8 +704,10 @@ static int s3c64xx_eint_eint0_init(struct samsung_pinctrl_drv_data *d) + return -ENODEV; + + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); +- if (!data) ++ if (!data) { ++ of_node_put(eint0_np); + return -ENOMEM; ++ } + data->drvdata = d; + + for (i = 0; i < NUM_EINT0_IRQ; ++i) { +@@ -714,6 +716,7 @@ static int s3c64xx_eint_eint0_init(struct samsung_pinctrl_drv_data *d) + irq = irq_of_parse_and_map(eint0_np, i); + if (!irq) { + dev_err(dev, "failed to get wakeup EINT IRQ %d\n", i); ++ of_node_put(eint0_np); + return -ENXIO; + } + +@@ -721,6 +724,7 @@ static int s3c64xx_eint_eint0_init(struct samsung_pinctrl_drv_data *d) + s3c64xx_eint0_handlers[i], + data); + } ++ of_node_put(eint0_np); + + bank = d->pin_banks; + for (i = 0; i < d->nr_banks; ++i, ++bank) { +diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c +index de0477bb469d..f26574ef234a 100644 +--- a/drivers/pinctrl/samsung/pinctrl-samsung.c ++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c +@@ -272,6 +272,7 @@ static int samsung_dt_node_to_map(struct pinctrl_dev *pctldev, + &reserved_maps, num_maps); + if (ret < 0) { + samsung_dt_free_map(pctldev, *map, *num_maps); ++ of_node_put(np); + return ret; + } + } +@@ -785,8 +786,10 @@ static struct samsung_pmx_func *samsung_pinctrl_create_functions( + if (!of_get_child_count(cfg_np)) { + ret = samsung_pinctrl_create_function(dev, drvdata, + cfg_np, func); +- if (ret < 0) ++ if (ret < 0) { ++ of_node_put(cfg_np); + return ERR_PTR(ret); ++ } + if (ret > 0) { + ++func; + ++func_cnt; +@@ -797,8 +800,11 @@ static struct samsung_pmx_func *samsung_pinctrl_create_functions( + for_each_child_of_node(cfg_np, func_np) { + ret = samsung_pinctrl_create_function(dev, drvdata, + func_np, func); +- if (ret < 0) ++ if (ret < 0) { ++ of_node_put(func_np); ++ of_node_put(cfg_np); + return ERR_PTR(ret); ++ } + if (ret > 0) { + ++func; + ++func_cnt; +diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c +index c93ef33b01d3..5c1378d2fab3 100644 +--- a/drivers/rtc/interface.c ++++ b/drivers/rtc/interface.c +@@ -125,7 +125,7 @@ EXPORT_SYMBOL_GPL(rtc_read_time); + + int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm) + { +- int err; ++ int err, uie; + + err = rtc_valid_tm(tm); + if (err != 0) +@@ -137,6 +137,17 @@ int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm) + + rtc_subtract_offset(rtc, tm); + ++#ifdef CONFIG_RTC_INTF_DEV_UIE_EMUL ++ uie = rtc->uie_rtctimer.enabled || rtc->uie_irq_active; ++#else ++ uie = rtc->uie_rtctimer.enabled; ++#endif ++ if (uie) { ++ err = rtc_update_irq_enable(rtc, 0); ++ if (err) ++ return err; ++ } ++ + err = mutex_lock_interruptible(&rtc->ops_lock); + if (err) + return err; +@@ -153,6 +164,12 @@ int rtc_set_time(struct rtc_device *rtc, struct rtc_time *tm) + /* A timer might have just expired */ + schedule_work(&rtc->irqwork); + ++ if (uie) { ++ err = rtc_update_irq_enable(rtc, 1); ++ if (err) ++ 
return err; ++ } ++ + trace_rtc_set_time(rtc_tm_to_time64(tm), err); + return err; + } +diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c +index dccdb41bed8c..1234294700c4 100644 +--- a/drivers/s390/scsi/zfcp_dbf.c ++++ b/drivers/s390/scsi/zfcp_dbf.c +@@ -95,11 +95,9 @@ void zfcp_dbf_hba_fsf_res(char *tag, int level, struct zfcp_fsf_req *req) + memcpy(rec->u.res.fsf_status_qual, &q_head->fsf_status_qual, + FSF_STATUS_QUALIFIER_SIZE); + +- if (q_head->fsf_command != FSF_QTCB_FCP_CMND) { +- rec->pl_len = q_head->log_length; +- zfcp_dbf_pl_write(dbf, (char *)q_pref + q_head->log_start, +- rec->pl_len, "fsf_res", req->req_id); +- } ++ rec->pl_len = q_head->log_length; ++ zfcp_dbf_pl_write(dbf, (char *)q_pref + q_head->log_start, ++ rec->pl_len, "fsf_res", req->req_id); + + debug_event(dbf->hba, level, rec, sizeof(*rec)); + spin_unlock_irqrestore(&dbf->hba_lock, flags); +diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c +index 6822cd9ff8f1..ad8ef67a1db3 100644 +--- a/drivers/scsi/lpfc/lpfc_scsi.c ++++ b/drivers/scsi/lpfc/lpfc_scsi.c +@@ -526,7 +526,7 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba, + &qp->lpfc_abts_io_buf_list, list) { + if (psb->cur_iocbq.sli4_xritag == xri) { + list_del_init(&psb->list); +- psb->exch_busy = 0; ++ psb->flags &= ~LPFC_SBUF_XBUSY; + psb->status = IOSTAT_SUCCESS; + if (psb->cur_iocbq.iocb_flag == LPFC_IO_NVME) { + qp->abts_nvme_io_bufs--; +@@ -566,7 +566,7 @@ lpfc_sli4_io_xri_aborted(struct lpfc_hba *phba, + if (iocbq->sli4_xritag != xri) + continue; + psb = container_of(iocbq, struct lpfc_io_buf, cur_iocbq); +- psb->exch_busy = 0; ++ psb->flags &= ~LPFC_SBUF_XBUSY; + spin_unlock_irqrestore(&phba->hbalock, iflag); + if (!list_empty(&pring->txq)) + lpfc_worker_wake_up(phba); +@@ -786,7 +786,7 @@ lpfc_release_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_io_buf *psb) + psb->prot_seg_cnt = 0; + + qp = psb->hdwq; +- if (psb->exch_busy) { ++ if (psb->flags & LPFC_SBUF_XBUSY) { + spin_lock_irqsave(&qp->abts_io_buf_list_lock, iflag); + psb->pCmd = NULL; + list_add_tail(&psb->list, &qp->lpfc_abts_io_buf_list); +@@ -3835,7 +3835,10 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn, + lpfc_cmd->result = (pIocbOut->iocb.un.ulpWord[4] & IOERR_PARAM_MASK); + lpfc_cmd->status = pIocbOut->iocb.ulpStatus; + /* pick up SLI4 exhange busy status from HBA */ +- lpfc_cmd->exch_busy = pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY; ++ if (pIocbOut->iocb_flag & LPFC_EXCHANGE_BUSY) ++ lpfc_cmd->flags |= LPFC_SBUF_XBUSY; ++ else ++ lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY; + + #ifdef CONFIG_SCSI_LPFC_DEBUG_FS + if (lpfc_cmd->prot_data_type) { +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c +index 614f78dddafe..5ed4219675eb 100644 +--- a/drivers/scsi/lpfc/lpfc_sli.c ++++ b/drivers/scsi/lpfc/lpfc_sli.c +@@ -11736,7 +11736,10 @@ lpfc_sli_wake_iocb_wait(struct lpfc_hba *phba, + !(cmdiocbq->iocb_flag & LPFC_IO_LIBDFC)) { + lpfc_cmd = container_of(cmdiocbq, struct lpfc_io_buf, + cur_iocbq); +- lpfc_cmd->exch_busy = rspiocbq->iocb_flag & LPFC_EXCHANGE_BUSY; ++ if (rspiocbq && (rspiocbq->iocb_flag & LPFC_EXCHANGE_BUSY)) ++ lpfc_cmd->flags |= LPFC_SBUF_XBUSY; ++ else ++ lpfc_cmd->flags &= ~LPFC_SBUF_XBUSY; + } + + pdone_q = cmdiocbq->context_un.wait_queue; +diff --git a/drivers/scsi/lpfc/lpfc_sli.h b/drivers/scsi/lpfc/lpfc_sli.h +index 37fbcb46387e..7bcf922a8be2 100644 +--- a/drivers/scsi/lpfc/lpfc_sli.h ++++ b/drivers/scsi/lpfc/lpfc_sli.h +@@ -384,14 +384,13 @@ struct lpfc_io_buf { + + struct 
lpfc_nodelist *ndlp; + uint32_t timeout; +- uint16_t flags; /* TBD convert exch_busy to flags */ ++ uint16_t flags; + #define LPFC_SBUF_XBUSY 0x1 /* SLI4 hba reported XB on WCQE cmpl */ + #define LPFC_SBUF_BUMP_QDEPTH 0x2 /* bumped queue depth counter */ + /* External DIF device IO conversions */ + #define LPFC_SBUF_NORMAL_DIF 0x4 /* normal mode to insert/strip */ + #define LPFC_SBUF_PASS_DIF 0x8 /* insert/strip mode to passthru */ + #define LPFC_SBUF_NOT_POSTED 0x10 /* SGL failed post to FW. */ +- uint16_t exch_busy; /* SLI4 hba reported XB on complete WCQE */ + uint16_t status; /* From IOCB Word 7- ulpStatus */ + uint32_t result; /* From IOCB Word 4. */ + +diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h +index 6ffa9877c28b..d5386edddaf6 100644 +--- a/drivers/scsi/qla2xxx/qla_def.h ++++ b/drivers/scsi/qla2xxx/qla_def.h +@@ -591,19 +591,23 @@ typedef struct srb { + */ + uint8_t cmd_type; + uint8_t pad[3]; +- atomic_t ref_count; + struct kref cmd_kref; /* need to migrate ref_count over to this */ + void *priv; + wait_queue_head_t nvme_ls_waitq; + struct fc_port *fcport; + struct scsi_qla_host *vha; + unsigned int start_timer:1; ++ unsigned int abort:1; ++ unsigned int aborted:1; ++ unsigned int completed:1; ++ + uint32_t handle; + uint16_t flags; + uint16_t type; + const char *name; + int iocbs; + struct qla_qpair *qpair; ++ struct srb *cmd_sp; + struct list_head elem; + u32 gen1; /* scratch */ + u32 gen2; /* scratch */ +diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c +index 5298ed10059f..84bb4a048016 100644 +--- a/drivers/scsi/qla2xxx/qla_gs.c ++++ b/drivers/scsi/qla2xxx/qla_gs.c +@@ -3005,7 +3005,7 @@ static void qla24xx_async_gpsc_sp_done(srb_t *sp, int res) + fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE); + + if (res == QLA_FUNCTION_TIMEOUT) +- return; ++ goto done; + + if (res == (DID_ERROR << 16)) { + /* entry status error */ +diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c +index 1d041313ec52..d400b51929a6 100644 +--- a/drivers/scsi/qla2xxx/qla_init.c ++++ b/drivers/scsi/qla2xxx/qla_init.c +@@ -101,8 +101,22 @@ static void qla24xx_abort_iocb_timeout(void *data) + u32 handle; + unsigned long flags; + ++ if (sp->cmd_sp) ++ ql_dbg(ql_dbg_async, sp->vha, 0x507c, ++ "Abort timeout - cmd hdl=%x, cmd type=%x hdl=%x, type=%x\n", ++ sp->cmd_sp->handle, sp->cmd_sp->type, ++ sp->handle, sp->type); ++ else ++ ql_dbg(ql_dbg_async, sp->vha, 0x507c, ++ "Abort timeout 2 - hdl=%x, type=%x\n", ++ sp->handle, sp->type); ++ + spin_lock_irqsave(qpair->qp_lock_ptr, flags); + for (handle = 1; handle < qpair->req->num_outstanding_cmds; handle++) { ++ if (sp->cmd_sp && (qpair->req->outstanding_cmds[handle] == ++ sp->cmd_sp)) ++ qpair->req->outstanding_cmds[handle] = NULL; ++ + /* removing the abort */ + if (qpair->req->outstanding_cmds[handle] == sp) { + qpair->req->outstanding_cmds[handle] = NULL; +@@ -111,6 +125,9 @@ static void qla24xx_abort_iocb_timeout(void *data) + } + spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); + ++ if (sp->cmd_sp) ++ sp->cmd_sp->done(sp->cmd_sp, QLA_OS_TIMER_EXPIRED); ++ + abt->u.abt.comp_status = CS_TIMEOUT; + sp->done(sp, QLA_OS_TIMER_EXPIRED); + } +@@ -142,6 +159,7 @@ static int qla24xx_async_abort_cmd(srb_t *cmd_sp, bool wait) + sp->type = SRB_ABT_CMD; + sp->name = "abort"; + sp->qpair = cmd_sp->qpair; ++ sp->cmd_sp = cmd_sp; + if (wait) + sp->flags = SRB_WAKEUP_ON_COMP; + +@@ -1135,19 +1153,18 @@ static void qla24xx_async_gpdb_sp_done(srb_t *sp, int res) + "Async done-%s res %x, 
WWPN %8phC mb[1]=%x mb[2]=%x \n", + sp->name, res, fcport->port_name, mb[1], mb[2]); + +- if (res == QLA_FUNCTION_TIMEOUT) { +- dma_pool_free(sp->vha->hw->s_dma_pool, sp->u.iocb_cmd.u.mbx.in, +- sp->u.iocb_cmd.u.mbx.in_dma); +- return; +- } +- + fcport->flags &= ~(FCF_ASYNC_SENT | FCF_ASYNC_ACTIVE); ++ ++ if (res == QLA_FUNCTION_TIMEOUT) ++ goto done; ++ + memset(&ea, 0, sizeof(ea)); + ea.fcport = fcport; + ea.sp = sp; + + qla24xx_handle_gpdb_event(vha, &ea); + ++done: + dma_pool_free(ha->s_dma_pool, sp->u.iocb_cmd.u.mbx.in, + sp->u.iocb_cmd.u.mbx.in_dma); + +@@ -9003,8 +9020,6 @@ int qla2xxx_delete_qpair(struct scsi_qla_host *vha, struct qla_qpair *qpair) + struct qla_hw_data *ha = qpair->hw; + + qpair->delete_in_progress = 1; +- while (atomic_read(&qpair->ref_count)) +- msleep(500); + + ret = qla25xx_delete_req_que(vha, qpair->req); + if (ret != QLA_SUCCESS) +diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c +index 009fd5a33fcd..9204e8467a4e 100644 +--- a/drivers/scsi/qla2xxx/qla_isr.c ++++ b/drivers/scsi/qla2xxx/qla_isr.c +@@ -2466,6 +2466,11 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt) + return; + } + ++ if (sp->abort) ++ sp->aborted = 1; ++ else ++ sp->completed = 1; ++ + if (sp->cmd_type != TYPE_SRB) { + req->outstanding_cmds[handle] = NULL; + ql_dbg(ql_dbg_io, vha, 0x3015, +diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c +index 4a1f21c11758..4d90cf101f5f 100644 +--- a/drivers/scsi/qla2xxx/qla_mbx.c ++++ b/drivers/scsi/qla2xxx/qla_mbx.c +@@ -6287,17 +6287,13 @@ int qla24xx_send_mb_cmd(struct scsi_qla_host *vha, mbx_cmd_t *mcp) + case QLA_SUCCESS: + ql_dbg(ql_dbg_mbx, vha, 0x119d, "%s: %s done.\n", + __func__, sp->name); +- sp->free(sp); + break; + default: + ql_dbg(ql_dbg_mbx, vha, 0x119e, "%s: %s Failed. %x.\n", + __func__, sp->name, rval); +- sp->free(sp); + break; + } + +- return rval; +- + done_free_sp: + sp->free(sp); + done: +diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c +index 238240984bc1..eabc5127174e 100644 +--- a/drivers/scsi/qla2xxx/qla_mid.c ++++ b/drivers/scsi/qla2xxx/qla_mid.c +@@ -946,7 +946,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd) + + sp = qla2x00_get_sp(base_vha, NULL, GFP_KERNEL); + if (!sp) +- goto done; ++ return rval; + + sp->type = SRB_CTRL_VP; + sp->name = "ctrl_vp"; +@@ -962,7 +962,7 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd) + ql_dbg(ql_dbg_async, vha, 0xffff, + "%s: %s Failed submission. %x.\n", + __func__, sp->name, rval); +- goto done_free_sp; ++ goto done; + } + + ql_dbg(ql_dbg_vport, vha, 0x113f, "%s hndl %x submitted\n", +@@ -980,16 +980,13 @@ int qla24xx_control_vp(scsi_qla_host_t *vha, int cmd) + case QLA_SUCCESS: + ql_dbg(ql_dbg_vport, vha, 0xffff, "%s: %s done.\n", + __func__, sp->name); +- goto done_free_sp; ++ break; + default: + ql_dbg(ql_dbg_vport, vha, 0xffff, "%s: %s Failed. 
%x.\n", + __func__, sp->name, rval); +- goto done_free_sp; ++ break; + } + done: +- return rval; +- +-done_free_sp: + sp->free(sp); + return rval; + } +diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c +index 6cc19e060afc..941aa53363f5 100644 +--- a/drivers/scsi/qla2xxx/qla_nvme.c ++++ b/drivers/scsi/qla2xxx/qla_nvme.c +@@ -224,8 +224,8 @@ static void qla_nvme_abort_work(struct work_struct *work) + + if (ha->flags.host_shutting_down) { + ql_log(ql_log_info, sp->fcport->vha, 0xffff, +- "%s Calling done on sp: %p, type: 0x%x, sp->ref_count: 0x%x\n", +- __func__, sp, sp->type, atomic_read(&sp->ref_count)); ++ "%s Calling done on sp: %p, type: 0x%x\n", ++ __func__, sp, sp->type); + sp->done(sp, 0); + goto out; + } +diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c +index 726ad4cbf4a6..06037e3c7854 100644 +--- a/drivers/scsi/qla2xxx/qla_os.c ++++ b/drivers/scsi/qla2xxx/qla_os.c +@@ -698,11 +698,6 @@ void qla2x00_sp_compl(srb_t *sp, int res) + struct scsi_cmnd *cmd = GET_CMD_SP(sp); + struct completion *comp = sp->comp; + +- if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0)) +- return; +- +- atomic_dec(&sp->ref_count); +- + sp->free(sp); + cmd->result = res; + CMD_SP(cmd) = NULL; +@@ -794,11 +789,6 @@ void qla2xxx_qpair_sp_compl(srb_t *sp, int res) + struct scsi_cmnd *cmd = GET_CMD_SP(sp); + struct completion *comp = sp->comp; + +- if (WARN_ON_ONCE(atomic_read(&sp->ref_count) == 0)) +- return; +- +- atomic_dec(&sp->ref_count); +- + sp->free(sp); + cmd->result = res; + CMD_SP(cmd) = NULL; +@@ -903,7 +893,7 @@ qla2xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd) + + sp->u.scmd.cmd = cmd; + sp->type = SRB_SCSI_CMD; +- atomic_set(&sp->ref_count, 1); ++ + CMD_SP(cmd) = (void *)sp; + sp->free = qla2x00_sp_free_dma; + sp->done = qla2x00_sp_compl; +@@ -985,18 +975,16 @@ qla2xxx_mqueuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd, + + sp->u.scmd.cmd = cmd; + sp->type = SRB_SCSI_CMD; +- atomic_set(&sp->ref_count, 1); + CMD_SP(cmd) = (void *)sp; + sp->free = qla2xxx_qpair_sp_free_dma; + sp->done = qla2xxx_qpair_sp_compl; +- sp->qpair = qpair; + + rval = ha->isp_ops->start_scsi_mq(sp); + if (rval != QLA_SUCCESS) { + ql_dbg(ql_dbg_io + ql_dbg_verbose, vha, 0x3078, + "Start scsi failed rval=%d for cmd=%p.\n", rval, cmd); + if (rval == QLA_INTERFACE_ERROR) +- goto qc24_fail_command; ++ goto qc24_free_sp_fail_command; + goto qc24_host_busy_free_sp; + } + +@@ -1008,6 +996,11 @@ qc24_host_busy_free_sp: + qc24_target_busy: + return SCSI_MLQUEUE_TARGET_BUSY; + ++qc24_free_sp_fail_command: ++ sp->free(sp); ++ CMD_SP(cmd) = NULL; ++ qla2xxx_rel_qpair_sp(sp->qpair, sp); ++ + qc24_fail_command: + cmd->scsi_done(cmd); + +@@ -1184,16 +1177,6 @@ qla2x00_wait_for_chip_reset(scsi_qla_host_t *vha) + return return_status; + } + +-static int +-sp_get(struct srb *sp) +-{ +- if (!refcount_inc_not_zero((refcount_t *)&sp->ref_count)) +- /* kref get fail */ +- return ENXIO; +- else +- return 0; +-} +- + #define ISP_REG_DISCONNECT 0xffffffffU + /************************************************************************** + * qla2x00_isp_reg_stat +@@ -1249,6 +1232,9 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd) + uint64_t lun; + int rval; + struct qla_hw_data *ha = vha->hw; ++ uint32_t ratov_j; ++ struct qla_qpair *qpair; ++ unsigned long flags; + + if (qla2x00_isp_reg_stat(ha)) { + ql_log(ql_log_info, vha, 0x8042, +@@ -1261,13 +1247,26 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd) + return ret; + + sp = scsi_cmd_priv(cmd); ++ qpair = sp->qpair; + +- if (sp->fcport 
&& sp->fcport->deleted) ++ if ((sp->fcport && sp->fcport->deleted) || !qpair) + return SUCCESS; + +- /* Return if the command has already finished. */ +- if (sp_get(sp)) ++ spin_lock_irqsave(qpair->qp_lock_ptr, flags); ++ if (sp->completed) { ++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); + return SUCCESS; ++ } ++ ++ if (sp->abort || sp->aborted) { ++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); ++ return FAILED; ++ } ++ ++ sp->abort = 1; ++ sp->comp = &comp; ++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags); ++ + + id = cmd->device->id; + lun = cmd->device->lun; +@@ -1276,47 +1275,37 @@ qla2xxx_eh_abort(struct scsi_cmnd *cmd) + "Aborting from RISC nexus=%ld:%d:%llu sp=%p cmd=%p handle=%x\n", + vha->host_no, id, lun, sp, cmd, sp->handle); + ++ /* ++ * Abort will release the original Command/sp from FW. Let the ++ * original command call scsi_done. In return, he will wakeup ++ * this sleeping thread. ++ */ + rval = ha->isp_ops->abort_command(sp); ++ + ql_dbg(ql_dbg_taskm, vha, 0x8003, + "Abort command mbx cmd=%p, rval=%x.\n", cmd, rval); + ++ /* Wait for the command completion. */ ++ ratov_j = ha->r_a_tov/10 * 4 * 1000; ++ ratov_j = msecs_to_jiffies(ratov_j); + switch (rval) { + case QLA_SUCCESS: +- /* +- * The command has been aborted. That means that the firmware +- * won't report a completion. +- */ +- sp->done(sp, DID_ABORT << 16); +- ret = SUCCESS; +- break; +- case QLA_FUNCTION_PARAMETER_ERROR: { +- /* Wait for the command completion. */ +- uint32_t ratov = ha->r_a_tov/10; +- uint32_t ratov_j = msecs_to_jiffies(4 * ratov * 1000); +- +- WARN_ON_ONCE(sp->comp); +- sp->comp = &comp; + if (!wait_for_completion_timeout(&comp, ratov_j)) { + ql_dbg(ql_dbg_taskm, vha, 0xffff, + "%s: Abort wait timer (4 * R_A_TOV[%d]) expired\n", +- __func__, ha->r_a_tov); ++ __func__, ha->r_a_tov/10); + ret = FAILED; + } else { + ret = SUCCESS; + } + break; +- } + default: +- /* +- * Either abort failed or abort and completion raced. Let +- * the SCSI core retry the abort in the former case. +- */ + ret = FAILED; + break; + } + + sp->comp = NULL; +- atomic_dec(&sp->ref_count); ++ + ql_log(ql_log_info, vha, 0x801c, + "Abort command issued nexus=%ld:%d:%llu -- %x.\n", + vha->host_no, id, lun, ret); +@@ -1708,32 +1697,53 @@ static void qla2x00_abort_srb(struct qla_qpair *qp, srb_t *sp, const int res, + scsi_qla_host_t *vha = qp->vha; + struct qla_hw_data *ha = vha->hw; + int rval; ++ bool ret_cmd; ++ uint32_t ratov_j; + +- if (sp_get(sp)) ++ if (qla2x00_chip_is_down(vha)) { ++ sp->done(sp, res); + return; ++ } + + if (sp->type == SRB_NVME_CMD || sp->type == SRB_NVME_LS || + (sp->type == SRB_SCSI_CMD && !ha->flags.eeh_busy && + !test_bit(ABORT_ISP_ACTIVE, &vha->dpc_flags) && + !qla2x00_isp_reg_stat(ha))) { ++ if (sp->comp) { ++ sp->done(sp, res); ++ return; ++ } ++ + sp->comp = &comp; ++ sp->abort = 1; + spin_unlock_irqrestore(qp->qp_lock_ptr, *flags); +- rval = ha->isp_ops->abort_command(sp); + ++ rval = ha->isp_ops->abort_command(sp); ++ /* Wait for command completion. 
*/ ++ ret_cmd = false; ++ ratov_j = ha->r_a_tov/10 * 4 * 1000; ++ ratov_j = msecs_to_jiffies(ratov_j); + switch (rval) { + case QLA_SUCCESS: +- sp->done(sp, res); ++ if (wait_for_completion_timeout(&comp, ratov_j)) { ++ ql_dbg(ql_dbg_taskm, vha, 0xffff, ++ "%s: Abort wait timer (4 * R_A_TOV[%d]) expired\n", ++ __func__, ha->r_a_tov/10); ++ ret_cmd = true; ++ } ++ /* else FW return SP to driver */ + break; +- case QLA_FUNCTION_PARAMETER_ERROR: +- wait_for_completion(&comp); ++ default: ++ ret_cmd = true; + break; + } + + spin_lock_irqsave(qp->qp_lock_ptr, *flags); +- sp->comp = NULL; ++ if (ret_cmd && (!sp->completed || !sp->aborted)) ++ sp->done(sp, res); ++ } else { ++ sp->done(sp, res); + } +- +- atomic_dec(&sp->ref_count); + } + + static void +@@ -1755,7 +1765,6 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res) + for (cnt = 1; cnt < req->num_outstanding_cmds; cnt++) { + sp = req->outstanding_cmds[cnt]; + if (sp) { +- req->outstanding_cmds[cnt] = NULL; + switch (sp->cmd_type) { + case TYPE_SRB: + qla2x00_abort_srb(qp, sp, res, &flags); +@@ -1777,6 +1786,7 @@ __qla2x00_abort_all_cmds(struct qla_qpair *qp, int res) + default: + break; + } ++ req->outstanding_cmds[cnt] = NULL; + } + } + spin_unlock_irqrestore(qp->qp_lock_ptr, flags); +@@ -4666,7 +4676,8 @@ qla2x00_mem_free(struct qla_hw_data *ha) + ha->sfp_data = NULL; + + if (ha->flt) +- dma_free_coherent(&ha->pdev->dev, SFP_DEV_SIZE, ++ dma_free_coherent(&ha->pdev->dev, ++ sizeof(struct qla_flt_header) + FLT_REGIONS_SIZE, + ha->flt, ha->flt_dma); + ha->flt = NULL; + ha->flt_dma = 0; +diff --git a/drivers/staging/exfat/exfat.h b/drivers/staging/exfat/exfat.h +index 3abab33e932c..4973c9edc26e 100644 +--- a/drivers/staging/exfat/exfat.h ++++ b/drivers/staging/exfat/exfat.h +@@ -943,8 +943,8 @@ s32 create_dir(struct inode *inode, struct chain_t *p_dir, + s32 create_file(struct inode *inode, struct chain_t *p_dir, + struct uni_name_t *p_uniname, u8 mode, struct file_id_t *fid); + void remove_file(struct inode *inode, struct chain_t *p_dir, s32 entry); +-s32 rename_file(struct inode *inode, struct chain_t *p_dir, s32 old_entry, +- struct uni_name_t *p_uniname, struct file_id_t *fid); ++s32 exfat_rename_file(struct inode *inode, struct chain_t *p_dir, s32 old_entry, ++ struct uni_name_t *p_uniname, struct file_id_t *fid); + s32 move_file(struct inode *inode, struct chain_t *p_olddir, s32 oldentry, + struct chain_t *p_newdir, struct uni_name_t *p_uniname, + struct file_id_t *fid); +diff --git a/drivers/staging/exfat/exfat_core.c b/drivers/staging/exfat/exfat_core.c +index 79174e5c4145..f3774a1912d1 100644 +--- a/drivers/staging/exfat/exfat_core.c ++++ b/drivers/staging/exfat/exfat_core.c +@@ -3381,8 +3381,8 @@ void remove_file(struct inode *inode, struct chain_t *p_dir, s32 entry) + fs_func->delete_dir_entry(sb, p_dir, entry, 0, num_entries); + } + +-s32 rename_file(struct inode *inode, struct chain_t *p_dir, s32 oldentry, +- struct uni_name_t *p_uniname, struct file_id_t *fid) ++s32 exfat_rename_file(struct inode *inode, struct chain_t *p_dir, s32 oldentry, ++ struct uni_name_t *p_uniname, struct file_id_t *fid) + { + s32 ret, newentry = -1, num_old_entries, num_new_entries; + sector_t sector_old, sector_new; +diff --git a/drivers/staging/exfat/exfat_super.c b/drivers/staging/exfat/exfat_super.c +index 3b2b0ceb7297..58c7d66060f7 100644 +--- a/drivers/staging/exfat/exfat_super.c ++++ b/drivers/staging/exfat/exfat_super.c +@@ -1308,8 +1308,8 @@ static int ffsMoveFile(struct inode *old_parent_inode, struct file_id_t *fid, + 
fs_set_vol_flags(sb, VOL_DIRTY); + + if (olddir.dir == newdir.dir) +- ret = rename_file(new_parent_inode, &olddir, dentry, &uni_name, +- fid); ++ ret = exfat_rename_file(new_parent_inode, &olddir, dentry, ++ &uni_name, fid); + else + ret = move_file(new_parent_inode, &olddir, dentry, &newdir, + &uni_name, fid); +diff --git a/drivers/staging/isdn/gigaset/usb-gigaset.c b/drivers/staging/isdn/gigaset/usb-gigaset.c +index 1b9b43659bdf..a20c0bfa68f3 100644 +--- a/drivers/staging/isdn/gigaset/usb-gigaset.c ++++ b/drivers/staging/isdn/gigaset/usb-gigaset.c +@@ -571,8 +571,7 @@ static int gigaset_initcshw(struct cardstate *cs) + { + struct usb_cardstate *ucs; + +- cs->hw.usb = ucs = +- kmalloc(sizeof(struct usb_cardstate), GFP_KERNEL); ++ cs->hw.usb = ucs = kzalloc(sizeof(struct usb_cardstate), GFP_KERNEL); + if (!ucs) { + pr_err("out of memory\n"); + return -ENOMEM; +@@ -584,9 +583,6 @@ static int gigaset_initcshw(struct cardstate *cs) + ucs->bchars[3] = 0; + ucs->bchars[4] = 0x11; + ucs->bchars[5] = 0x13; +- ucs->bulk_out_buffer = NULL; +- ucs->bulk_out_urb = NULL; +- ucs->read_urb = NULL; + tasklet_init(&cs->write_tasklet, + gigaset_modem_fill, (unsigned long) cs); + +@@ -685,6 +681,11 @@ static int gigaset_probe(struct usb_interface *interface, + return -ENODEV; + } + ++ if (hostif->desc.bNumEndpoints < 2) { ++ dev_err(&interface->dev, "missing endpoints\n"); ++ return -ENODEV; ++ } ++ + dev_info(&udev->dev, "%s: Device matched ... !\n", __func__); + + /* allocate memory for our device state and initialize it */ +@@ -704,6 +705,12 @@ static int gigaset_probe(struct usb_interface *interface, + + endpoint = &hostif->endpoint[0].desc; + ++ if (!usb_endpoint_is_bulk_out(endpoint)) { ++ dev_err(&interface->dev, "missing bulk-out endpoint\n"); ++ retval = -ENODEV; ++ goto error; ++ } ++ + buffer_size = le16_to_cpu(endpoint->wMaxPacketSize); + ucs->bulk_out_size = buffer_size; + ucs->bulk_out_epnum = usb_endpoint_num(endpoint); +@@ -723,6 +730,12 @@ static int gigaset_probe(struct usb_interface *interface, + + endpoint = &hostif->endpoint[1].desc; + ++ if (!usb_endpoint_is_int_in(endpoint)) { ++ dev_err(&interface->dev, "missing int-in endpoint\n"); ++ retval = -ENODEV; ++ goto error; ++ } ++ + ucs->busy = 0; + + ucs->read_urb = usb_alloc_urb(0, GFP_KERNEL); +diff --git a/drivers/staging/media/hantro/hantro_g1_h264_dec.c b/drivers/staging/media/hantro/hantro_g1_h264_dec.c +index 7ab534936843..636bf972adcf 100644 +--- a/drivers/staging/media/hantro/hantro_g1_h264_dec.c ++++ b/drivers/staging/media/hantro/hantro_g1_h264_dec.c +@@ -34,9 +34,11 @@ static void set_params(struct hantro_ctx *ctx) + reg = G1_REG_DEC_CTRL0_DEC_AXI_WR_ID(0x0); + if (sps->flags & V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD) + reg |= G1_REG_DEC_CTRL0_SEQ_MBAFF_E; +- reg |= G1_REG_DEC_CTRL0_PICORD_COUNT_E; +- if (dec_param->nal_ref_idc) +- reg |= G1_REG_DEC_CTRL0_WRITE_MVS_E; ++ if (sps->profile_idc > 66) { ++ reg |= G1_REG_DEC_CTRL0_PICORD_COUNT_E; ++ if (dec_param->nal_ref_idc) ++ reg |= G1_REG_DEC_CTRL0_WRITE_MVS_E; ++ } + + if (!(sps->flags & V4L2_H264_SPS_FLAG_FRAME_MBS_ONLY) && + (sps->flags & V4L2_H264_SPS_FLAG_MB_ADAPTIVE_FRAME_FIELD || +@@ -246,7 +248,7 @@ static void set_buffers(struct hantro_ctx *ctx) + vdpu_write_relaxed(vpu, dst_dma, G1_REG_ADDR_DST); + + /* Higher profiles require DMV buffer appended to reference frames. 
*/ +- if (ctrls->sps->profile_idc > 66) { ++ if (ctrls->sps->profile_idc > 66 && ctrls->decode->nal_ref_idc) { + size_t pic_size = ctx->h264_dec.pic_size; + size_t mv_offset = round_up(pic_size, 8); + +diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c +index 3dae52abb96c..fcf95c1d39ca 100644 +--- a/drivers/staging/media/hantro/hantro_v4l2.c ++++ b/drivers/staging/media/hantro/hantro_v4l2.c +@@ -367,19 +367,26 @@ vidioc_s_fmt_out_mplane(struct file *file, void *priv, struct v4l2_format *f) + { + struct v4l2_pix_format_mplane *pix_mp = &f->fmt.pix_mp; + struct hantro_ctx *ctx = fh_to_ctx(priv); ++ struct vb2_queue *vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type); + const struct hantro_fmt *formats; + unsigned int num_fmts; +- struct vb2_queue *vq; + int ret; + +- /* Change not allowed if queue is busy. */ +- vq = v4l2_m2m_get_vq(ctx->fh.m2m_ctx, f->type); +- if (vb2_is_busy(vq)) +- return -EBUSY; ++ ret = vidioc_try_fmt_out_mplane(file, priv, f); ++ if (ret) ++ return ret; + + if (!hantro_is_encoder_ctx(ctx)) { + struct vb2_queue *peer_vq; + ++ /* ++ * In order to support dynamic resolution change, ++ * the decoder admits a resolution change, as long ++ * as the pixelformat remains. Can't be done if streaming. ++ */ ++ if (vb2_is_streaming(vq) || (vb2_is_busy(vq) && ++ pix_mp->pixelformat != ctx->src_fmt.pixelformat)) ++ return -EBUSY; + /* + * Since format change on the OUTPUT queue will reset + * the CAPTURE queue, we can't allow doing so +@@ -389,12 +396,15 @@ vidioc_s_fmt_out_mplane(struct file *file, void *priv, struct v4l2_format *f) + V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE); + if (vb2_is_busy(peer_vq)) + return -EBUSY; ++ } else { ++ /* ++ * The encoder doesn't admit a format change if ++ * there are OUTPUT buffers allocated. 
++ */ ++ if (vb2_is_busy(vq)) ++ return -EBUSY; + } + +- ret = vidioc_try_fmt_out_mplane(file, priv, f); +- if (ret) +- return ret; +- + formats = hantro_get_formats(ctx, &num_fmts); + ctx->vpu_src_fmt = hantro_find_format(formats, num_fmts, + pix_mp->pixelformat); +diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +index 4fac9dca798e..a7cac0719b8b 100644 +--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c ++++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +@@ -70,7 +70,7 @@ static struct dvobj_priv *usb_dvobj_init(struct usb_interface *usb_intf) + phost_conf = pusbd->actconfig; + pconf_desc = &phost_conf->desc; + +- phost_iface = &usb_intf->altsetting[0]; ++ phost_iface = usb_intf->cur_altsetting; + piface_desc = &phost_iface->desc; + + pdvobjpriv->NumInterfaces = pconf_desc->bNumInterfaces; +diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c +index ba1288297ee4..a87562f632a7 100644 +--- a/drivers/staging/rtl8712/usb_intf.c ++++ b/drivers/staging/rtl8712/usb_intf.c +@@ -247,7 +247,7 @@ static uint r8712_usb_dvobj_init(struct _adapter *padapter) + + pdvobjpriv->padapter = padapter; + padapter->eeprom_address_size = 6; +- phost_iface = &pintf->altsetting[0]; ++ phost_iface = pintf->cur_altsetting; + piface_desc = &phost_iface->desc; + pdvobjpriv->nr_endpoint = piface_desc->bNumEndpoints; + if (pusbd->speed == USB_SPEED_HIGH) { +diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +index b1595b13dea8..af6bf0736b52 100644 +--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c ++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c +@@ -3299,7 +3299,7 @@ static int __init vchiq_driver_init(void) + return 0; + + region_unregister: +- platform_driver_unregister(&vchiq_driver); ++ unregister_chrdev_region(vchiq_devid, 1); + + class_destroy: + class_destroy(vchiq_class); +diff --git a/drivers/usb/atm/ueagle-atm.c b/drivers/usb/atm/ueagle-atm.c +index 8b0ea8c70d73..635cf0466b59 100644 +--- a/drivers/usb/atm/ueagle-atm.c ++++ b/drivers/usb/atm/ueagle-atm.c +@@ -2124,10 +2124,11 @@ resubmit: + /* + * Start the modem : init the data and start kernel thread + */ +-static int uea_boot(struct uea_softc *sc) ++static int uea_boot(struct uea_softc *sc, struct usb_interface *intf) + { +- int ret, size; + struct intr_pkt *intr; ++ int ret = -ENOMEM; ++ int size; + + uea_enters(INS_TO_USBDEV(sc)); + +@@ -2152,6 +2153,11 @@ static int uea_boot(struct uea_softc *sc) + if (UEA_CHIP_VERSION(sc) == ADI930) + load_XILINX_firmware(sc); + ++ if (intf->cur_altsetting->desc.bNumEndpoints < 1) { ++ ret = -ENODEV; ++ goto err0; ++ } ++ + intr = kmalloc(size, GFP_KERNEL); + if (!intr) + goto err0; +@@ -2163,8 +2169,7 @@ static int uea_boot(struct uea_softc *sc) + usb_fill_int_urb(sc->urb_int, sc->usb_dev, + usb_rcvintpipe(sc->usb_dev, UEA_INTR_PIPE), + intr, size, uea_intr, sc, +- sc->usb_dev->actconfig->interface[0]->altsetting[0]. 
+- endpoint[0].desc.bInterval); ++ intf->cur_altsetting->endpoint[0].desc.bInterval); + + ret = usb_submit_urb(sc->urb_int, GFP_KERNEL); + if (ret < 0) { +@@ -2179,6 +2184,7 @@ static int uea_boot(struct uea_softc *sc) + sc->kthread = kthread_create(uea_kthread, sc, "ueagle-atm"); + if (IS_ERR(sc->kthread)) { + uea_err(INS_TO_USBDEV(sc), "failed to create thread\n"); ++ ret = PTR_ERR(sc->kthread); + goto err2; + } + +@@ -2193,7 +2199,7 @@ err1: + kfree(intr); + err0: + uea_leaves(INS_TO_USBDEV(sc)); +- return -ENOMEM; ++ return ret; + } + + /* +@@ -2548,7 +2554,7 @@ static int uea_bind(struct usbatm_data *usbatm, struct usb_interface *intf, + } + } + +- ret = uea_boot(sc); ++ ret = uea_boot(sc, intf); + if (ret < 0) + goto error; + +diff --git a/drivers/usb/common/usb-conn-gpio.c b/drivers/usb/common/usb-conn-gpio.c +index 87338f9eb5be..ed204cbb63ea 100644 +--- a/drivers/usb/common/usb-conn-gpio.c ++++ b/drivers/usb/common/usb-conn-gpio.c +@@ -156,7 +156,8 @@ static int usb_conn_probe(struct platform_device *pdev) + + info->vbus = devm_regulator_get(dev, "vbus"); + if (IS_ERR(info->vbus)) { +- dev_err(dev, "failed to get vbus\n"); ++ if (PTR_ERR(info->vbus) != -EPROBE_DEFER) ++ dev_err(dev, "failed to get vbus\n"); + return PTR_ERR(info->vbus); + } + +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 236313f41f4a..dfe9ac8d2375 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -5814,7 +5814,7 @@ re_enumerate_no_bos: + + /** + * usb_reset_device - warn interface drivers and perform a USB port reset +- * @udev: device to reset (not in SUSPENDED or NOTATTACHED state) ++ * @udev: device to reset (not in NOTATTACHED state) + * + * Warns all drivers bound to registered interfaces (using their pre_reset + * method), performs the port reset, and then lets the drivers know that +@@ -5842,8 +5842,7 @@ int usb_reset_device(struct usb_device *udev) + struct usb_host_config *config = udev->actconfig; + struct usb_hub *hub = usb_hub_to_struct_hub(udev->parent); + +- if (udev->state == USB_STATE_NOTATTACHED || +- udev->state == USB_STATE_SUSPENDED) { ++ if (udev->state == USB_STATE_NOTATTACHED) { + dev_dbg(&udev->dev, "device reset not allowed in state %d\n", + udev->state); + return -EINVAL; +diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c +index 0eab79f82ce4..da923ec17612 100644 +--- a/drivers/usb/core/urb.c ++++ b/drivers/usb/core/urb.c +@@ -45,6 +45,7 @@ void usb_init_urb(struct urb *urb) + if (urb) { + memset(urb, 0, sizeof(*urb)); + kref_init(&urb->kref); ++ INIT_LIST_HEAD(&urb->urb_list); + INIT_LIST_HEAD(&urb->anchor_list); + } + } +diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c +index 023f0357efd7..294276f7deb9 100644 +--- a/drivers/usb/dwc3/dwc3-pci.c ++++ b/drivers/usb/dwc3/dwc3-pci.c +@@ -29,7 +29,8 @@ + #define PCI_DEVICE_ID_INTEL_BXT_M 0x1aaa + #define PCI_DEVICE_ID_INTEL_APL 0x5aaa + #define PCI_DEVICE_ID_INTEL_KBP 0xa2b0 +-#define PCI_DEVICE_ID_INTEL_CMLH 0x02ee ++#define PCI_DEVICE_ID_INTEL_CMLLP 0x02ee ++#define PCI_DEVICE_ID_INTEL_CMLH 0x06ee + #define PCI_DEVICE_ID_INTEL_GLK 0x31aa + #define PCI_DEVICE_ID_INTEL_CNPLP 0x9dee + #define PCI_DEVICE_ID_INTEL_CNPH 0xa36e +@@ -308,6 +309,9 @@ static const struct pci_device_id dwc3_pci_id_table[] = { + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_MRFLD), + (kernel_ulong_t) &dwc3_pci_mrfld_properties, }, + ++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_CMLLP), ++ (kernel_ulong_t) &dwc3_pci_intel_properties, }, ++ + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_CMLH), + 
(kernel_ulong_t) &dwc3_pci_intel_properties, }, + +diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c +index 3996b9c4ff8d..fd1b100d2927 100644 +--- a/drivers/usb/dwc3/ep0.c ++++ b/drivers/usb/dwc3/ep0.c +@@ -1117,6 +1117,9 @@ static void dwc3_ep0_xfernotready(struct dwc3 *dwc, + void dwc3_ep0_interrupt(struct dwc3 *dwc, + const struct dwc3_event_depevt *event) + { ++ struct dwc3_ep *dep = dwc->eps[event->endpoint_number]; ++ u8 cmd; ++ + switch (event->endpoint_event) { + case DWC3_DEPEVT_XFERCOMPLETE: + dwc3_ep0_xfer_complete(dwc, event); +@@ -1129,7 +1132,12 @@ void dwc3_ep0_interrupt(struct dwc3 *dwc, + case DWC3_DEPEVT_XFERINPROGRESS: + case DWC3_DEPEVT_RXTXFIFOEVT: + case DWC3_DEPEVT_STREAMEVT: ++ break; + case DWC3_DEPEVT_EPCMDCMPLT: ++ cmd = DEPEVT_PARAMETER_CMD(event->parameters); ++ ++ if (cmd == DWC3_DEPCMD_ENDTRANSFER) ++ dep->flags &= ~DWC3_EP_TRANSFER_STARTED; + break; + } + } +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index a9aba716bf80..0c960a97ea02 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -2491,7 +2491,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep, + + req->request.actual = req->request.length - req->remaining; + +- if (!dwc3_gadget_ep_request_completed(req) && ++ if (!dwc3_gadget_ep_request_completed(req) || + req->num_pending_sgs) { + __dwc3_gadget_kick_transfer(dep); + goto out; +@@ -2719,6 +2719,9 @@ static void dwc3_stop_active_transfer(struct dwc3_ep *dep, bool force, + WARN_ON_ONCE(ret); + dep->resource_index = 0; + ++ if (!interrupt) ++ dep->flags &= ~DWC3_EP_TRANSFER_STARTED; ++ + if (dwc3_is_usb31(dwc) || dwc->revision < DWC3_REVISION_310A) + udelay(100); + } +diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c +index 33852c2b29d1..ab9ac48a751a 100644 +--- a/drivers/usb/gadget/configfs.c ++++ b/drivers/usb/gadget/configfs.c +@@ -1544,6 +1544,7 @@ static struct config_group *gadgets_make( + gi->composite.resume = NULL; + gi->composite.max_speed = USB_SPEED_SUPER; + ++ spin_lock_init(&gi->spinlock); + mutex_init(&gi->lock); + INIT_LIST_HEAD(&gi->string_list); + INIT_LIST_HEAD(&gi->available_func); +diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c +index 3d499d93c083..a8f1e5707c14 100644 +--- a/drivers/usb/gadget/udc/dummy_hcd.c ++++ b/drivers/usb/gadget/udc/dummy_hcd.c +@@ -2725,7 +2725,7 @@ static struct platform_driver dummy_hcd_driver = { + }; + + /*-------------------------------------------------------------------------*/ +-#define MAX_NUM_UDC 2 ++#define MAX_NUM_UDC 32 + static struct platform_device *the_udc_pdev[MAX_NUM_UDC]; + static struct platform_device *the_hcd_pdev[MAX_NUM_UDC]; + +diff --git a/drivers/usb/gadget/udc/pch_udc.c b/drivers/usb/gadget/udc/pch_udc.c +index 265dab2bbfac..3344fb8c4181 100644 +--- a/drivers/usb/gadget/udc/pch_udc.c ++++ b/drivers/usb/gadget/udc/pch_udc.c +@@ -1519,7 +1519,6 @@ static void pch_udc_free_dma_chain(struct pch_udc_dev *dev, + td = phys_to_virt(addr); + addr2 = (dma_addr_t)td->next; + dma_pool_free(dev->data_requests, td, addr); +- td->next = 0x00; + addr = addr2; + } + req->chain_len = 1; +diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c +index b7d23c438756..7a3a29e5e9d2 100644 +--- a/drivers/usb/host/xhci-hub.c ++++ b/drivers/usb/host/xhci-hub.c +@@ -806,7 +806,7 @@ static void xhci_del_comp_mod_timer(struct xhci_hcd *xhci, u32 status, + + static int xhci_handle_usb2_port_link_resume(struct xhci_port *port, + u32 *status, u32 portsc, 
+- unsigned long flags) ++ unsigned long *flags) + { + struct xhci_bus_state *bus_state; + struct xhci_hcd *xhci; +@@ -860,11 +860,11 @@ static int xhci_handle_usb2_port_link_resume(struct xhci_port *port, + xhci_test_and_clear_bit(xhci, port, PORT_PLC); + xhci_set_link_state(xhci, port, XDEV_U0); + +- spin_unlock_irqrestore(&xhci->lock, flags); ++ spin_unlock_irqrestore(&xhci->lock, *flags); + time_left = wait_for_completion_timeout( + &bus_state->rexit_done[wIndex], + msecs_to_jiffies(XHCI_MAX_REXIT_TIMEOUT_MS)); +- spin_lock_irqsave(&xhci->lock, flags); ++ spin_lock_irqsave(&xhci->lock, *flags); + + if (time_left) { + slot_id = xhci_find_slot_id_by_port(hcd, xhci, +@@ -920,11 +920,13 @@ static void xhci_get_usb3_port_status(struct xhci_port *port, u32 *status, + { + struct xhci_bus_state *bus_state; + struct xhci_hcd *xhci; ++ struct usb_hcd *hcd; + u32 link_state; + u32 portnum; + + bus_state = &port->rhub->bus_state; + xhci = hcd_to_xhci(port->rhub->hcd); ++ hcd = port->rhub->hcd; + link_state = portsc & PORT_PLS_MASK; + portnum = port->hcd_portnum; + +@@ -952,12 +954,20 @@ static void xhci_get_usb3_port_status(struct xhci_port *port, u32 *status, + bus_state->suspended_ports &= ~(1 << portnum); + } + ++ /* remote wake resume signaling complete */ ++ if (bus_state->port_remote_wakeup & (1 << portnum) && ++ link_state != XDEV_RESUME && ++ link_state != XDEV_RECOVERY) { ++ bus_state->port_remote_wakeup &= ~(1 << portnum); ++ usb_hcd_end_port_resume(&hcd->self, portnum); ++ } ++ + xhci_hub_report_usb3_link_state(xhci, status, portsc); + xhci_del_comp_mod_timer(xhci, portsc, portnum); + } + + static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status, +- u32 portsc, unsigned long flags) ++ u32 portsc, unsigned long *flags) + { + struct xhci_bus_state *bus_state; + u32 link_state; +@@ -1007,7 +1017,7 @@ static void xhci_get_usb2_port_status(struct xhci_port *port, u32 *status, + static u32 xhci_get_port_status(struct usb_hcd *hcd, + struct xhci_bus_state *bus_state, + u16 wIndex, u32 raw_port_status, +- unsigned long flags) ++ unsigned long *flags) + __releases(&xhci->lock) + __acquires(&xhci->lock) + { +@@ -1130,7 +1140,7 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue, + } + trace_xhci_get_port_status(wIndex, temp); + status = xhci_get_port_status(hcd, bus_state, wIndex, temp, +- flags); ++ &flags); + if (status == 0xffffffff) + goto error; + +diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c +index e16eda6e2b8b..3b1388fa2f36 100644 +--- a/drivers/usb/host/xhci-mem.c ++++ b/drivers/usb/host/xhci-mem.c +@@ -1909,13 +1909,17 @@ no_bw: + xhci->usb3_rhub.num_ports = 0; + xhci->num_active_eps = 0; + kfree(xhci->usb2_rhub.ports); ++ kfree(xhci->usb2_rhub.psi); + kfree(xhci->usb3_rhub.ports); ++ kfree(xhci->usb3_rhub.psi); + kfree(xhci->hw_ports); + kfree(xhci->rh_bw); + kfree(xhci->ext_caps); + + xhci->usb2_rhub.ports = NULL; ++ xhci->usb2_rhub.psi = NULL; + xhci->usb3_rhub.ports = NULL; ++ xhci->usb3_rhub.psi = NULL; + xhci->hw_ports = NULL; + xhci->rh_bw = NULL; + xhci->ext_caps = NULL; +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c +index 1e0236e90687..1904ef56f61c 100644 +--- a/drivers/usb/host/xhci-pci.c ++++ b/drivers/usb/host/xhci-pci.c +@@ -519,6 +519,18 @@ static int xhci_pci_resume(struct usb_hcd *hcd, bool hibernated) + } + #endif /* CONFIG_PM */ + ++static void xhci_pci_shutdown(struct usb_hcd *hcd) ++{ ++ struct xhci_hcd *xhci = hcd_to_xhci(hcd); ++ struct pci_dev *pdev = 
to_pci_dev(hcd->self.controller); ++ ++ xhci_shutdown(hcd); ++ ++ /* Yet another workaround for spurious wakeups at shutdown with HSW */ ++ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) ++ pci_set_power_state(pdev, PCI_D3hot); ++} ++ + /*-------------------------------------------------------------------------*/ + + /* PCI driver selection metadata; PCI hotplugging uses this */ +@@ -554,6 +566,7 @@ static int __init xhci_pci_init(void) + #ifdef CONFIG_PM + xhci_pci_hc_driver.pci_suspend = xhci_pci_suspend; + xhci_pci_hc_driver.pci_resume = xhci_pci_resume; ++ xhci_pci_hc_driver.shutdown = xhci_pci_shutdown; + #endif + return pci_register_driver(&xhci_pci_driver); + } +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c +index e7aab31fd9a5..4a2fe56940bd 100644 +--- a/drivers/usb/host/xhci-ring.c ++++ b/drivers/usb/host/xhci-ring.c +@@ -1624,7 +1624,6 @@ static void handle_port_status(struct xhci_hcd *xhci, + slot_id = xhci_find_slot_id_by_port(hcd, xhci, hcd_portnum + 1); + if (slot_id && xhci->devs[slot_id]) + xhci->devs[slot_id]->flags |= VDEV_PORT_ERROR; +- bus_state->port_remote_wakeup &= ~(1 << hcd_portnum); + } + + if ((portsc & PORT_PLC) && (portsc & PORT_PLS_MASK) == XDEV_RESUME) { +@@ -1644,6 +1643,7 @@ static void handle_port_status(struct xhci_hcd *xhci, + */ + bus_state->port_remote_wakeup |= 1 << hcd_portnum; + xhci_test_and_clear_bit(xhci, port, PORT_PLC); ++ usb_hcd_start_port_resume(&hcd->self, hcd_portnum); + xhci_set_link_state(xhci, port, XDEV_U0); + /* Need to wait until the next link state change + * indicates the device is actually in U0. +@@ -1684,7 +1684,6 @@ static void handle_port_status(struct xhci_hcd *xhci, + if (slot_id && xhci->devs[slot_id]) + xhci_ring_device(xhci, slot_id); + if (bus_state->port_remote_wakeup & (1 << hcd_portnum)) { +- bus_state->port_remote_wakeup &= ~(1 << hcd_portnum); + xhci_test_and_clear_bit(xhci, port, PORT_PLC); + usb_wakeup_notification(hcd->self.root_hub, + hcd_portnum + 1); +@@ -2378,7 +2377,8 @@ static int handle_tx_event(struct xhci_hcd *xhci, + case COMP_SUCCESS: + if (EVENT_TRB_LEN(le32_to_cpu(event->transfer_len)) == 0) + break; +- if (xhci->quirks & XHCI_TRUST_TX_LENGTH) ++ if (xhci->quirks & XHCI_TRUST_TX_LENGTH || ++ ep_ring->last_td_was_short) + trb_comp_code = COMP_SHORT_PACKET; + else + xhci_warn_ratelimited(xhci, +diff --git a/drivers/usb/host/xhci-tegra.c b/drivers/usb/host/xhci-tegra.c +index 2ff7c911fbd0..dc172513a4aa 100644 +--- a/drivers/usb/host/xhci-tegra.c ++++ b/drivers/usb/host/xhci-tegra.c +@@ -755,7 +755,6 @@ static int tegra_xusb_runtime_suspend(struct device *dev) + { + struct tegra_xusb *tegra = dev_get_drvdata(dev); + +- tegra_xusb_phy_disable(tegra); + regulator_bulk_disable(tegra->soc->num_supplies, tegra->supplies); + tegra_xusb_clk_disable(tegra); + +@@ -779,16 +778,8 @@ static int tegra_xusb_runtime_resume(struct device *dev) + goto disable_clk; + } + +- err = tegra_xusb_phy_enable(tegra); +- if (err < 0) { +- dev_err(dev, "failed to enable PHYs: %d\n", err); +- goto disable_regulator; +- } +- + return 0; + +-disable_regulator: +- regulator_bulk_disable(tegra->soc->num_supplies, tegra->supplies); + disable_clk: + tegra_xusb_clk_disable(tegra); + return err; +@@ -1181,6 +1172,12 @@ static int tegra_xusb_probe(struct platform_device *pdev) + */ + platform_set_drvdata(pdev, tegra); + ++ err = tegra_xusb_phy_enable(tegra); ++ if (err < 0) { ++ dev_err(&pdev->dev, "failed to enable PHYs: %d\n", err); ++ goto put_hcd; ++ } ++ + pm_runtime_enable(&pdev->dev); + if 
(pm_runtime_enabled(&pdev->dev)) + err = pm_runtime_get_sync(&pdev->dev); +@@ -1189,7 +1186,7 @@ static int tegra_xusb_probe(struct platform_device *pdev) + + if (err < 0) { + dev_err(&pdev->dev, "failed to enable device: %d\n", err); +- goto disable_rpm; ++ goto disable_phy; + } + + tegra_xusb_config(tegra, regs); +@@ -1275,9 +1272,11 @@ remove_usb2: + put_rpm: + if (!pm_runtime_status_suspended(&pdev->dev)) + tegra_xusb_runtime_suspend(&pdev->dev); +-disable_rpm: +- pm_runtime_disable(&pdev->dev); ++put_hcd: + usb_put_hcd(tegra->hcd); ++disable_phy: ++ tegra_xusb_phy_disable(tegra); ++ pm_runtime_disable(&pdev->dev); + put_powerdomains: + if (!of_property_read_bool(pdev->dev.of_node, "power-domains")) { + tegra_powergate_power_off(TEGRA_POWERGATE_XUSBC); +@@ -1314,6 +1313,8 @@ static int tegra_xusb_remove(struct platform_device *pdev) + tegra_xusb_powerdomain_remove(&pdev->dev, tegra); + } + ++ tegra_xusb_phy_disable(tegra); ++ + tegra_xusb_padctl_put(tegra->padctl); + + return 0; +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c +index 6c17e3fe181a..9b3b1b16eafb 100644 +--- a/drivers/usb/host/xhci.c ++++ b/drivers/usb/host/xhci.c +@@ -770,7 +770,7 @@ static void xhci_stop(struct usb_hcd *hcd) + * + * This will only ever be called with the main usb_hcd (the USB3 roothub). + */ +-static void xhci_shutdown(struct usb_hcd *hcd) ++void xhci_shutdown(struct usb_hcd *hcd) + { + struct xhci_hcd *xhci = hcd_to_xhci(hcd); + +@@ -789,11 +789,8 @@ static void xhci_shutdown(struct usb_hcd *hcd) + xhci_dbg_trace(xhci, trace_xhci_dbg_init, + "xhci_shutdown completed - status = %x", + readl(&xhci->op_regs->status)); +- +- /* Yet another workaround for spurious wakeups at shutdown with HSW */ +- if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) +- pci_set_power_state(to_pci_dev(hcd->self.sysdev), PCI_D3hot); + } ++EXPORT_SYMBOL_GPL(xhci_shutdown); + + #ifdef CONFIG_PM + static void xhci_save_registers(struct xhci_hcd *xhci) +@@ -973,7 +970,7 @@ static bool xhci_pending_portevent(struct xhci_hcd *xhci) + int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup) + { + int rc = 0; +- unsigned int delay = XHCI_MAX_HALT_USEC; ++ unsigned int delay = XHCI_MAX_HALT_USEC * 2; + struct usb_hcd *hcd = xhci_to_hcd(xhci); + u32 command; + u32 res; +diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h +index f9f88626a57a..973d665052a2 100644 +--- a/drivers/usb/host/xhci.h ++++ b/drivers/usb/host/xhci.h +@@ -2050,6 +2050,7 @@ int xhci_start(struct xhci_hcd *xhci); + int xhci_reset(struct xhci_hcd *xhci); + int xhci_run(struct usb_hcd *hcd); + int xhci_gen_setup(struct usb_hcd *hcd, xhci_get_quirks_t get_quirks); ++void xhci_shutdown(struct usb_hcd *hcd); + void xhci_init_driver(struct hc_driver *drv, + const struct xhci_driver_overrides *over); + int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id); +diff --git a/drivers/usb/misc/adutux.c b/drivers/usb/misc/adutux.c +index 6f5edb9fc61e..d8d157c4c271 100644 +--- a/drivers/usb/misc/adutux.c ++++ b/drivers/usb/misc/adutux.c +@@ -669,7 +669,7 @@ static int adu_probe(struct usb_interface *interface, + init_waitqueue_head(&dev->read_wait); + init_waitqueue_head(&dev->write_wait); + +- res = usb_find_common_endpoints_reverse(&interface->altsetting[0], ++ res = usb_find_common_endpoints_reverse(interface->cur_altsetting, + NULL, NULL, + &dev->interrupt_in_endpoint, + &dev->interrupt_out_endpoint); +diff --git a/drivers/usb/misc/idmouse.c b/drivers/usb/misc/idmouse.c +index 20b0f91a5d9b..bb24527f3c70 100644 +--- a/drivers/usb/misc/idmouse.c ++++ 
b/drivers/usb/misc/idmouse.c +@@ -337,7 +337,7 @@ static int idmouse_probe(struct usb_interface *interface, + int result; + + /* check if we have gotten the data or the hid interface */ +- iface_desc = &interface->altsetting[0]; ++ iface_desc = interface->cur_altsetting; + if (iface_desc->desc.bInterfaceClass != 0x0A) + return -ENODEV; + +diff --git a/drivers/usb/mon/mon_bin.c b/drivers/usb/mon/mon_bin.c +index ac2b4fcc265f..f48a23adbc35 100644 +--- a/drivers/usb/mon/mon_bin.c ++++ b/drivers/usb/mon/mon_bin.c +@@ -1039,12 +1039,18 @@ static long mon_bin_ioctl(struct file *file, unsigned int cmd, unsigned long arg + + mutex_lock(&rp->fetch_lock); + spin_lock_irqsave(&rp->b_lock, flags); +- mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE); +- kfree(rp->b_vec); +- rp->b_vec = vec; +- rp->b_size = size; +- rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0; +- rp->cnt_lost = 0; ++ if (rp->mmap_active) { ++ mon_free_buff(vec, size/CHUNK_SIZE); ++ kfree(vec); ++ ret = -EBUSY; ++ } else { ++ mon_free_buff(rp->b_vec, rp->b_size/CHUNK_SIZE); ++ kfree(rp->b_vec); ++ rp->b_vec = vec; ++ rp->b_size = size; ++ rp->b_read = rp->b_in = rp->b_out = rp->b_cnt = 0; ++ rp->cnt_lost = 0; ++ } + spin_unlock_irqrestore(&rp->b_lock, flags); + mutex_unlock(&rp->fetch_lock); + } +@@ -1216,13 +1222,21 @@ mon_bin_poll(struct file *file, struct poll_table_struct *wait) + static void mon_bin_vma_open(struct vm_area_struct *vma) + { + struct mon_reader_bin *rp = vma->vm_private_data; ++ unsigned long flags; ++ ++ spin_lock_irqsave(&rp->b_lock, flags); + rp->mmap_active++; ++ spin_unlock_irqrestore(&rp->b_lock, flags); + } + + static void mon_bin_vma_close(struct vm_area_struct *vma) + { ++ unsigned long flags; ++ + struct mon_reader_bin *rp = vma->vm_private_data; ++ spin_lock_irqsave(&rp->b_lock, flags); + rp->mmap_active--; ++ spin_unlock_irqrestore(&rp->b_lock, flags); + } + + /* +@@ -1234,16 +1248,12 @@ static vm_fault_t mon_bin_vma_fault(struct vm_fault *vmf) + unsigned long offset, chunk_idx; + struct page *pageptr; + +- mutex_lock(&rp->fetch_lock); + offset = vmf->pgoff << PAGE_SHIFT; +- if (offset >= rp->b_size) { +- mutex_unlock(&rp->fetch_lock); ++ if (offset >= rp->b_size) + return VM_FAULT_SIGBUS; +- } + chunk_idx = offset / CHUNK_SIZE; + pageptr = rp->b_vec[chunk_idx].pg; + get_page(pageptr); +- mutex_unlock(&rp->fetch_lock); + vmf->page = pageptr; + return 0; + } +diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c +index 94b4e7db2b94..97e3d75b19a3 100644 +--- a/drivers/usb/roles/class.c ++++ b/drivers/usb/roles/class.c +@@ -169,8 +169,8 @@ EXPORT_SYMBOL_GPL(fwnode_usb_role_switch_get); + void usb_role_switch_put(struct usb_role_switch *sw) + { + if (!IS_ERR_OR_NULL(sw)) { +- put_device(&sw->dev); + module_put(sw->dev.parent->driver->owner); ++ put_device(&sw->dev); + } + } + EXPORT_SYMBOL_GPL(usb_role_switch_put); +diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c +index 48a439298a68..9690a5f4b9d6 100644 +--- a/drivers/usb/serial/io_edgeport.c ++++ b/drivers/usb/serial/io_edgeport.c +@@ -2901,16 +2901,18 @@ static int edge_startup(struct usb_serial *serial) + response = 0; + + if (edge_serial->is_epic) { ++ struct usb_host_interface *alt; ++ ++ alt = serial->interface->cur_altsetting; ++ + /* EPIC thing, set up our interrupt polling now and our read + * urb, so that the device knows it really is connected. 
*/ + interrupt_in_found = bulk_in_found = bulk_out_found = false; +- for (i = 0; i < serial->interface->altsetting[0] +- .desc.bNumEndpoints; ++i) { ++ for (i = 0; i < alt->desc.bNumEndpoints; ++i) { + struct usb_endpoint_descriptor *endpoint; + int buffer_size; + +- endpoint = &serial->interface->altsetting[0]. +- endpoint[i].desc; ++ endpoint = &alt->endpoint[i].desc; + buffer_size = usb_endpoint_maxp(endpoint); + if (!interrupt_in_found && + (usb_endpoint_is_int_in(endpoint))) { +diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c +index 34538253f12c..475b9c692827 100644 +--- a/drivers/usb/storage/uas.c ++++ b/drivers/usb/storage/uas.c +@@ -825,6 +825,10 @@ static int uas_slave_configure(struct scsi_device *sdev) + sdev->wce_default_on = 1; + } + ++ /* Some disks cannot handle READ_CAPACITY_16 */ ++ if (devinfo->flags & US_FL_NO_READ_CAPACITY_16) ++ sdev->no_read_capacity_16 = 1; ++ + /* + * Some disks return the total number of blocks in response + * to READ CAPACITY rather than the highest block number. +@@ -833,6 +837,12 @@ static int uas_slave_configure(struct scsi_device *sdev) + if (devinfo->flags & US_FL_FIX_CAPACITY) + sdev->fix_capacity = 1; + ++ /* ++ * in some cases we have to guess ++ */ ++ if (devinfo->flags & US_FL_CAPACITY_HEURISTICS) ++ sdev->guess_capacity = 1; ++ + /* + * Some devices don't like MODE SENSE with page=0x3f, + * which is the command used for checking if a device +diff --git a/drivers/usb/typec/class.c b/drivers/usb/typec/class.c +index 94a3eda62add..a400b65cf17b 100644 +--- a/drivers/usb/typec/class.c ++++ b/drivers/usb/typec/class.c +@@ -1592,14 +1592,16 @@ struct typec_port *typec_register_port(struct device *parent, + + port->sw = typec_switch_get(&port->dev); + if (IS_ERR(port->sw)) { ++ ret = PTR_ERR(port->sw); + put_device(&port->dev); +- return ERR_CAST(port->sw); ++ return ERR_PTR(ret); + } + + port->mux = typec_mux_get(&port->dev, NULL); + if (IS_ERR(port->mux)) { ++ ret = PTR_ERR(port->mux); + put_device(&port->dev); +- return ERR_CAST(port->mux); ++ return ERR_PTR(ret); + } + + ret = device_add(&port->dev); +diff --git a/drivers/video/hdmi.c b/drivers/video/hdmi.c +index b939bc28d886..9c82e2a0a411 100644 +--- a/drivers/video/hdmi.c ++++ b/drivers/video/hdmi.c +@@ -1576,12 +1576,12 @@ static int hdmi_avi_infoframe_unpack(struct hdmi_avi_infoframe *frame, + if (ptr[0] & 0x10) + frame->active_aspect = ptr[1] & 0xf; + if (ptr[0] & 0x8) { +- frame->top_bar = (ptr[5] << 8) + ptr[6]; +- frame->bottom_bar = (ptr[7] << 8) + ptr[8]; ++ frame->top_bar = (ptr[6] << 8) | ptr[5]; ++ frame->bottom_bar = (ptr[8] << 8) | ptr[7]; + } + if (ptr[0] & 0x4) { +- frame->left_bar = (ptr[9] << 8) + ptr[10]; +- frame->right_bar = (ptr[11] << 8) + ptr[12]; ++ frame->left_bar = (ptr[10] << 8) | ptr[9]; ++ frame->right_bar = (ptr[12] << 8) | ptr[11]; + } + frame->scan_mode = ptr[0] & 0x3; + +diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c +index e05679c478e2..9f4117766bb1 100644 +--- a/drivers/virtio/virtio_balloon.c ++++ b/drivers/virtio/virtio_balloon.c +@@ -721,6 +721,17 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info, + + get_page(newpage); /* balloon reference */ + ++ /* ++ * When we migrate a page to a different zone and adjusted the ++ * managed page count when inflating, we have to fixup the count of ++ * both involved zones. 
++ */ ++ if (!virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM) && ++ page_zone(page) != page_zone(newpage)) { ++ adjust_managed_page_count(page, 1); ++ adjust_managed_page_count(newpage, -1); ++ } ++ + /* balloon's page migration 1st step -- inflate "newpage" */ + spin_lock_irqsave(&vb_dev_info->pages_lock, flags); + balloon_page_insert(vb_dev_info, newpage); +diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c +index 670700cb1110..0d2da2366869 100644 +--- a/fs/btrfs/block-group.c ++++ b/fs/btrfs/block-group.c +@@ -2662,7 +2662,7 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans, + * is because we need the unpinning stage to actually add the + * space back to the block group, otherwise we will leak space. + */ +- if (!alloc && cache->cached == BTRFS_CACHE_NO) ++ if (!alloc && !btrfs_block_group_cache_done(cache)) + btrfs_cache_block_group(cache, 1); + + byte_in_group = bytenr - cache->key.objectid; +diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c +index 1f7f39b10bd0..57a9ad3e8c29 100644 +--- a/fs/btrfs/delayed-inode.c ++++ b/fs/btrfs/delayed-inode.c +@@ -1949,12 +1949,19 @@ void btrfs_kill_all_delayed_nodes(struct btrfs_root *root) + } + + inode_id = delayed_nodes[n - 1]->inode_id + 1; +- +- for (i = 0; i < n; i++) +- refcount_inc(&delayed_nodes[i]->refs); ++ for (i = 0; i < n; i++) { ++ /* ++ * Don't increase refs in case the node is dead and ++ * about to be removed from the tree in the loop below ++ */ ++ if (!refcount_inc_not_zero(&delayed_nodes[i]->refs)) ++ delayed_nodes[i] = NULL; ++ } + spin_unlock(&root->inode_lock); + + for (i = 0; i < n; i++) { ++ if (!delayed_nodes[i]) ++ continue; + __btrfs_kill_delayed_node(delayed_nodes[i]); + btrfs_release_delayed_node(delayed_nodes[i]); + } +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index cceaf05aada2..4905f48587df 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -4121,7 +4121,7 @@ retry: + for (i = 0; i < nr_pages; i++) { + struct page *page = pvec.pages[i]; + +- done_index = page->index; ++ done_index = page->index + 1; + /* + * At this point we hold neither the i_pages lock nor + * the page lock: the page may be truncated or +@@ -4156,16 +4156,6 @@ retry: + + ret = __extent_writepage(page, wbc, epd); + if (ret < 0) { +- /* +- * done_index is set past this page, +- * so media errors will not choke +- * background writeout for the entire +- * file. This has consequences for +- * range_cyclic semantics (ie. it may +- * not be suitable for data integrity +- * writeout). 
+- */ +- done_index = page->index + 1; + done = 1; + break; + } +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index 435a502a3226..c332968f9056 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -1636,6 +1636,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb, + break; + } + ++ only_release_metadata = false; + sector_offset = pos & (fs_info->sectorsize - 1); + reserve_bytes = round_up(write_bytes + sector_offset, + fs_info->sectorsize); +@@ -1791,7 +1792,6 @@ again: + set_extent_bit(&BTRFS_I(inode)->io_tree, lockstart, + lockend, EXTENT_NORESERVE, NULL, + NULL, GFP_NOFS); +- only_release_metadata = false; + } + + btrfs_drop_pages(pages, num_pages); +diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c +index d54dcd0ab230..d86ada9c3c54 100644 +--- a/fs/btrfs/free-space-cache.c ++++ b/fs/btrfs/free-space-cache.c +@@ -385,6 +385,12 @@ static int io_ctl_prepare_pages(struct btrfs_io_ctl *io_ctl, struct inode *inode + if (uptodate && !PageUptodate(page)) { + btrfs_readpage(NULL, page); + lock_page(page); ++ if (page->mapping != inode->i_mapping) { ++ btrfs_err(BTRFS_I(inode)->root->fs_info, ++ "free space cache page truncated"); ++ io_ctl_drop_pages(io_ctl); ++ return -EIO; ++ } + if (!PageUptodate(page)) { + btrfs_err(BTRFS_I(inode)->root->fs_info, + "error reading free space cache"); +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 015910079e73..10a01dd0c4e6 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -2214,12 +2214,16 @@ again: + mapping_set_error(page->mapping, ret); + end_extent_writepage(page, ret, page_start, page_end); + ClearPageChecked(page); +- goto out; ++ goto out_reserved; + } + + ClearPageChecked(page); + set_page_dirty(page); ++out_reserved: + btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE); ++ if (ret) ++ btrfs_delalloc_release_space(inode, data_reserved, page_start, ++ PAGE_SIZE, true); + out: + unlock_extent_cached(&BTRFS_I(inode)->io_tree, page_start, page_end, + &cached_state); +@@ -9550,6 +9554,9 @@ static int btrfs_rename_exchange(struct inode *old_dir, + goto out_notrans; + } + ++ if (dest != root) ++ btrfs_record_root_in_trans(trans, dest); ++ + /* + * We need to find a free sequence number both in the source and + * in the destination directory for the exchange. +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c +index 123ac54af071..518ec1265a0c 100644 +--- a/fs/btrfs/send.c ++++ b/fs/btrfs/send.c +@@ -24,6 +24,14 @@ + #include "transaction.h" + #include "compression.h" + ++/* ++ * Maximum number of references an extent can have in order for us to attempt to ++ * issue clone operations instead of write operations. This currently exists to ++ * avoid hitting limitations of the backreference walking code (taking a lot of ++ * time and using too much memory for extents with large number of references). ++ */ ++#define SEND_MAX_EXTENT_REFS 64 ++ + /* + * A fs_path is a helper to dynamically build path names with unknown size. + * It reallocates the internal buffer on demand. 
+@@ -1302,6 +1310,7 @@ static int find_extent_clone(struct send_ctx *sctx, + struct clone_root *cur_clone_root; + struct btrfs_key found_key; + struct btrfs_path *tmp_path; ++ struct btrfs_extent_item *ei; + int compressed; + u32 i; + +@@ -1349,7 +1358,6 @@ static int find_extent_clone(struct send_ctx *sctx, + ret = extent_from_logical(fs_info, disk_byte, tmp_path, + &found_key, &flags); + up_read(&fs_info->commit_root_sem); +- btrfs_release_path(tmp_path); + + if (ret < 0) + goto out; +@@ -1358,6 +1366,21 @@ static int find_extent_clone(struct send_ctx *sctx, + goto out; + } + ++ ei = btrfs_item_ptr(tmp_path->nodes[0], tmp_path->slots[0], ++ struct btrfs_extent_item); ++ /* ++ * Backreference walking (iterate_extent_inodes() below) is currently ++ * too expensive when an extent has a large number of references, both ++ * in time spent and used memory. So for now just fallback to write ++ * operations instead of clone operations when an extent has more than ++ * a certain amount of references. ++ */ ++ if (btrfs_extent_refs(tmp_path->nodes[0], ei) > SEND_MAX_EXTENT_REFS) { ++ ret = -ENOENT; ++ goto out; ++ } ++ btrfs_release_path(tmp_path); ++ + /* + * Setup the clone roots. + */ +diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h +index a7da1f3e3627..5acf5c507ec2 100644 +--- a/fs/btrfs/volumes.h ++++ b/fs/btrfs/volumes.h +@@ -330,7 +330,6 @@ struct btrfs_bio { + u64 map_type; /* get from map_lookup->type */ + bio_end_io_t *end_io; + struct bio *orig_bio; +- unsigned long flags; + void *private; + atomic_t error; + int max_errors; +diff --git a/fs/ceph/dir.c b/fs/ceph/dir.c +index d17a789fd856..2e4764fd1872 100644 +--- a/fs/ceph/dir.c ++++ b/fs/ceph/dir.c +@@ -1809,6 +1809,7 @@ const struct file_operations ceph_dir_fops = { + .open = ceph_open, + .release = ceph_release, + .unlocked_ioctl = ceph_ioctl, ++ .compat_ioctl = compat_ptr_ioctl, + .fsync = ceph_fsync, + .lock = ceph_lock, + .flock = ceph_flock, +diff --git a/fs/ceph/file.c b/fs/ceph/file.c +index 8de633964dc3..11929d2bb594 100644 +--- a/fs/ceph/file.c ++++ b/fs/ceph/file.c +@@ -2188,7 +2188,7 @@ const struct file_operations ceph_file_fops = { + .splice_read = generic_file_splice_read, + .splice_write = iter_file_splice_write, + .unlocked_ioctl = ceph_ioctl, +- .compat_ioctl = ceph_ioctl, ++ .compat_ioctl = compat_ptr_ioctl, + .fallocate = ceph_fallocate, + .copy_file_range = ceph_copy_file_range, + }; +diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c +index a13a78725c57..b766c3ee5fa8 100644 +--- a/fs/erofs/xattr.c ++++ b/fs/erofs/xattr.c +@@ -649,6 +649,8 @@ ssize_t erofs_listxattr(struct dentry *dentry, + struct listxattr_iter it; + + ret = init_inode_xattrs(d_inode(dentry)); ++ if (ret == -ENOATTR) ++ return 0; + if (ret) + return ret; + +diff --git a/fs/ext2/inode.c b/fs/ext2/inode.c +index 7004ce581a32..a16c53655e77 100644 +--- a/fs/ext2/inode.c ++++ b/fs/ext2/inode.c +@@ -701,10 +701,13 @@ static int ext2_get_blocks(struct inode *inode, + if (!partial) { + count++; + mutex_unlock(&ei->truncate_mutex); +- if (err) +- goto cleanup; + goto got_it; + } ++ ++ if (err) { ++ mutex_unlock(&ei->truncate_mutex); ++ goto cleanup; ++ } + } + + /* +diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c +index 764ff4c56233..564e2ceb8417 100644 +--- a/fs/ext4/ialloc.c ++++ b/fs/ext4/ialloc.c +@@ -265,13 +265,8 @@ void ext4_free_inode(handle_t *handle, struct inode *inode) + ext4_debug("freeing inode %lu\n", ino); + trace_ext4_free_inode(inode); + +- /* +- * Note: we must free any quota before locking the superblock, +- * as writing the quota 
to disk may need the lock as well. +- */ + dquot_initialize(inode); + dquot_free_inode(inode); +- dquot_drop(inode); + + is_directory = S_ISDIR(inode->i_mode); + +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c +index d691d1783ed6..91da21890360 100644 +--- a/fs/ext4/inode.c ++++ b/fs/ext4/inode.c +@@ -196,7 +196,12 @@ void ext4_evict_inode(struct inode *inode) + { + handle_t *handle; + int err; +- int extra_credits = 3; ++ /* ++ * Credits for final inode cleanup and freeing: ++ * sb + inode (ext4_orphan_del()), block bitmap, group descriptor ++ * (xattr block freeing), bitmap, group descriptor (inode freeing) ++ */ ++ int extra_credits = 6; + struct ext4_xattr_inode_array *ea_inode_array = NULL; + + trace_ext4_evict_inode(inode); +@@ -252,8 +257,12 @@ void ext4_evict_inode(struct inode *inode) + if (!IS_NOQUOTA(inode)) + extra_credits += EXT4_MAXQUOTAS_DEL_BLOCKS(inode->i_sb); + ++ /* ++ * Block bitmap, group descriptor, and inode are accounted in both ++ * ext4_blocks_for_truncate() and extra_credits. So subtract 3. ++ */ + handle = ext4_journal_start(inode, EXT4_HT_TRUNCATE, +- ext4_blocks_for_truncate(inode)+extra_credits); ++ ext4_blocks_for_truncate(inode) + extra_credits - 3); + if (IS_ERR(handle)) { + ext4_std_error(inode->i_sb, PTR_ERR(handle)); + /* +@@ -5450,11 +5459,15 @@ static void ext4_wait_for_tail_page_commit(struct inode *inode) + + offset = inode->i_size & (PAGE_SIZE - 1); + /* +- * All buffers in the last page remain valid? Then there's nothing to +- * do. We do the check mainly to optimize the common PAGE_SIZE == +- * blocksize case ++ * If the page is fully truncated, we don't need to wait for any commit ++ * (and we even should not as __ext4_journalled_invalidatepage() may ++ * strip all buffers from the page but keep the page dirty which can then ++ * confuse e.g. concurrent ext4_writepage() seeing dirty page without ++ * buffers). Also we don't need to wait for any commit if all buffers in ++ * the page remain valid. This is most beneficial for the common case of ++ * blocksize == PAGESIZE. 
+ */ +- if (offset > PAGE_SIZE - i_blocksize(inode)) ++ if (!offset || offset > (PAGE_SIZE - i_blocksize(inode))) + return; + while (1) { + page = find_lock_page(inode->i_mapping, +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index a427d2031a8d..923476e3aefb 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -3182,18 +3182,17 @@ static int ext4_unlink(struct inode *dir, struct dentry *dentry) + if (IS_DIRSYNC(dir)) + ext4_handle_sync(handle); + +- if (inode->i_nlink == 0) { +- ext4_warning_inode(inode, "Deleting file '%.*s' with no links", +- dentry->d_name.len, dentry->d_name.name); +- set_nlink(inode, 1); +- } + retval = ext4_delete_entry(handle, dir, de, bh); + if (retval) + goto end_unlink; + dir->i_ctime = dir->i_mtime = current_time(dir); + ext4_update_dx_flag(dir); + ext4_mark_inode_dirty(handle, dir); +- drop_nlink(inode); ++ if (inode->i_nlink == 0) ++ ext4_warning_inode(inode, "Deleting file '%.*s' with no links", ++ dentry->d_name.len, dentry->d_name.name); ++ else ++ drop_nlink(inode); + if (!inode->i_nlink) + ext4_orphan_add(handle, inode); + inode->i_ctime = current_time(inode); +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index 73578359d451..98d37b8d0050 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -1172,9 +1172,9 @@ void ext4_clear_inode(struct inode *inode) + { + invalidate_inode_buffers(inode); + clear_inode(inode); +- dquot_drop(inode); + ext4_discard_preallocations(inode); + ext4_es_remove_extent(inode, 0, EXT_MAX_BLOCKS); ++ dquot_drop(inode); + if (EXT4_I(inode)->jinode) { + jbd2_journal_release_jbd_inode(EXT4_JOURNAL(inode), + EXT4_I(inode)->jinode); +diff --git a/fs/ioctl.c b/fs/ioctl.c +index fef3a6bf7c78..3118da0de158 100644 +--- a/fs/ioctl.c ++++ b/fs/ioctl.c +@@ -8,6 +8,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -719,3 +720,37 @@ SYSCALL_DEFINE3(ioctl, unsigned int, fd, unsigned int, cmd, unsigned long, arg) + { + return ksys_ioctl(fd, cmd, arg); + } ++ ++#ifdef CONFIG_COMPAT ++/** ++ * compat_ptr_ioctl - generic implementation of .compat_ioctl file operation ++ * ++ * This is not normally called as a function, but instead set in struct ++ * file_operations as ++ * ++ * .compat_ioctl = compat_ptr_ioctl, ++ * ++ * On most architectures, the compat_ptr_ioctl() just passes all arguments ++ * to the corresponding ->ioctl handler. The exception is arch/s390, where ++ * compat_ptr() clears the top bit of a 32-bit pointer value, so user space ++ * pointers to the second 2GB alias the first 2GB, as is the case for ++ * native 32-bit s390 user space. ++ * ++ * The compat_ptr_ioctl() function must therefore be used only with ioctl ++ * functions that either ignore the argument or pass a pointer to a ++ * compatible data type. ++ * ++ * If any ioctl command handled by fops->unlocked_ioctl passes a plain ++ * integer instead of a pointer, or any of the passed data types ++ * is incompatible between 32-bit and 64-bit architectures, a proper ++ * handler is required instead of compat_ptr_ioctl. 
++ */ ++long compat_ptr_ioctl(struct file *file, unsigned int cmd, unsigned long arg) ++{ ++ if (!file->f_op->unlocked_ioctl) ++ return -ENOIOCTLCMD; ++ ++ return file->f_op->unlocked_ioctl(file, cmd, (unsigned long)compat_ptr(arg)); ++} ++EXPORT_SYMBOL(compat_ptr_ioctl); ++#endif +diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c +index 7a922190a8c7..eda83487c9ec 100644 +--- a/fs/ocfs2/quota_global.c ++++ b/fs/ocfs2/quota_global.c +@@ -728,7 +728,7 @@ static int ocfs2_release_dquot(struct dquot *dquot) + + mutex_lock(&dquot->dq_lock); + /* Check whether we are not racing with some other dqget() */ +- if (atomic_read(&dquot->dq_count) > 1) ++ if (dquot_is_busy(dquot)) + goto out; + /* Running from downconvert thread? Postpone quota processing to wq */ + if (current == osb->dc_task) { +diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c +index 702aa63f6774..29abdb1d3b5c 100644 +--- a/fs/overlayfs/dir.c ++++ b/fs/overlayfs/dir.c +@@ -1170,7 +1170,7 @@ static int ovl_rename(struct inode *olddir, struct dentry *old, + if (newdentry == trap) + goto out_dput; + +- if (WARN_ON(olddentry->d_inode == newdentry->d_inode)) ++ if (olddentry->d_inode == newdentry->d_inode) + goto out_dput; + + err = 0; +diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c +index bc14781886bf..b045cf1826fc 100644 +--- a/fs/overlayfs/inode.c ++++ b/fs/overlayfs/inode.c +@@ -200,8 +200,14 @@ int ovl_getattr(const struct path *path, struct kstat *stat, + if (ovl_test_flag(OVL_INDEX, d_inode(dentry)) || + (!ovl_verify_lower(dentry->d_sb) && + (is_dir || lowerstat.nlink == 1))) { +- stat->ino = lowerstat.ino; + lower_layer = ovl_layer_lower(dentry); ++ /* ++ * Cannot use origin st_dev;st_ino because ++ * origin inode content may differ from overlay ++ * inode content. ++ */ ++ if (samefs || lower_layer->fsid) ++ stat->ino = lowerstat.ino; + } + + /* +diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c +index e9717c2f7d45..f47c591402d7 100644 +--- a/fs/overlayfs/namei.c ++++ b/fs/overlayfs/namei.c +@@ -325,6 +325,14 @@ int ovl_check_origin_fh(struct ovl_fs *ofs, struct ovl_fh *fh, bool connected, + int i; + + for (i = 0; i < ofs->numlower; i++) { ++ /* ++ * If lower fs uuid is not unique among lower fs we cannot match ++ * fh->uuid to layer. ++ */ ++ if (ofs->lower_layers[i].fsid && ++ ofs->lower_layers[i].fs->bad_uuid) ++ continue; ++ + origin = ovl_decode_real_fh(fh, ofs->lower_layers[i].mnt, + connected); + if (origin) +diff --git a/fs/overlayfs/ovl_entry.h b/fs/overlayfs/ovl_entry.h +index a8279280e88d..28348c44ea5b 100644 +--- a/fs/overlayfs/ovl_entry.h ++++ b/fs/overlayfs/ovl_entry.h +@@ -22,6 +22,8 @@ struct ovl_config { + struct ovl_sb { + struct super_block *sb; + dev_t pseudo_dev; ++ /* Unusable (conflicting) uuid */ ++ bool bad_uuid; + }; + + struct ovl_layer { +diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c +index afbcb116a7f1..7621ff176d15 100644 +--- a/fs/overlayfs/super.c ++++ b/fs/overlayfs/super.c +@@ -1255,7 +1255,7 @@ static bool ovl_lower_uuid_ok(struct ovl_fs *ofs, const uuid_t *uuid) + { + unsigned int i; + +- if (!ofs->config.nfs_export && !(ofs->config.index && ofs->upper_mnt)) ++ if (!ofs->config.nfs_export && !ofs->upper_mnt) + return true; + + for (i = 0; i < ofs->numlowerfs; i++) { +@@ -1263,9 +1263,13 @@ static bool ovl_lower_uuid_ok(struct ovl_fs *ofs, const uuid_t *uuid) + * We use uuid to associate an overlay lower file handle with a + * lower layer, so we can accept lower fs with null uuid as long + * as all lower layers with null uuid are on the same fs. 
++ * if we detect multiple lower fs with the same uuid, we ++ * disable lower file handle decoding on all of them. + */ +- if (uuid_equal(&ofs->lower_fs[i].sb->s_uuid, uuid)) ++ if (uuid_equal(&ofs->lower_fs[i].sb->s_uuid, uuid)) { ++ ofs->lower_fs[i].bad_uuid = true; + return false; ++ } + } + return true; + } +@@ -1277,6 +1281,7 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path) + unsigned int i; + dev_t dev; + int err; ++ bool bad_uuid = false; + + /* fsid 0 is reserved for upper fs even with non upper overlay */ + if (ofs->upper_mnt && ofs->upper_mnt->mnt_sb == sb) +@@ -1288,11 +1293,15 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path) + } + + if (!ovl_lower_uuid_ok(ofs, &sb->s_uuid)) { +- ofs->config.index = false; +- ofs->config.nfs_export = false; +- pr_warn("overlayfs: %s uuid detected in lower fs '%pd2', falling back to index=off,nfs_export=off.\n", +- uuid_is_null(&sb->s_uuid) ? "null" : "conflicting", +- path->dentry); ++ bad_uuid = true; ++ if (ofs->config.index || ofs->config.nfs_export) { ++ ofs->config.index = false; ++ ofs->config.nfs_export = false; ++ pr_warn("overlayfs: %s uuid detected in lower fs '%pd2', falling back to index=off,nfs_export=off.\n", ++ uuid_is_null(&sb->s_uuid) ? "null" : ++ "conflicting", ++ path->dentry); ++ } + } + + err = get_anon_bdev(&dev); +@@ -1303,6 +1312,7 @@ static int ovl_get_fsid(struct ovl_fs *ofs, const struct path *path) + + ofs->lower_fs[ofs->numlowerfs].sb = sb; + ofs->lower_fs[ofs->numlowerfs].pseudo_dev = dev; ++ ofs->lower_fs[ofs->numlowerfs].bad_uuid = bad_uuid; + ofs->numlowerfs++; + + return ofs->numlowerfs; +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c +index 6e826b454082..7f0b39da5022 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -497,7 +497,7 @@ int dquot_release(struct dquot *dquot) + + mutex_lock(&dquot->dq_lock); + /* Check whether we are not racing with some other dqget() */ +- if (atomic_read(&dquot->dq_count) > 1) ++ if (dquot_is_busy(dquot)) + goto out_dqlock; + if (dqopt->ops[dquot->dq_id.type]->release_dqblk) { + ret = dqopt->ops[dquot->dq_id.type]->release_dqblk(dquot); +@@ -623,7 +623,7 @@ EXPORT_SYMBOL(dquot_scan_active); + /* Write all dquot structures to quota files */ + int dquot_writeback_dquots(struct super_block *sb, int type) + { +- struct list_head *dirty; ++ struct list_head dirty; + struct dquot *dquot; + struct quota_info *dqopt = sb_dqopt(sb); + int cnt; +@@ -637,9 +637,10 @@ int dquot_writeback_dquots(struct super_block *sb, int type) + if (!sb_has_quota_active(sb, cnt)) + continue; + spin_lock(&dq_list_lock); +- dirty = &dqopt->info[cnt].dqi_dirty_list; +- while (!list_empty(dirty)) { +- dquot = list_first_entry(dirty, struct dquot, ++ /* Move list away to avoid livelock. */ ++ list_replace_init(&dqopt->info[cnt].dqi_dirty_list, &dirty); ++ while (!list_empty(&dirty)) { ++ dquot = list_first_entry(&dirty, struct dquot, + dq_dirty); + + WARN_ON(!test_bit(DQ_ACTIVE_B, &dquot->dq_flags)); +diff --git a/fs/reiserfs/inode.c b/fs/reiserfs/inode.c +index 132ec4406ed0..6419e6dacc39 100644 +--- a/fs/reiserfs/inode.c ++++ b/fs/reiserfs/inode.c +@@ -2097,6 +2097,15 @@ int reiserfs_new_inode(struct reiserfs_transaction_handle *th, + goto out_inserted_sd; + } + ++ /* ++ * Mark it private if we're creating the privroot ++ * or something under it. 
++ */ ++ if (IS_PRIVATE(dir) || dentry == REISERFS_SB(sb)->priv_root) { ++ inode->i_flags |= S_PRIVATE; ++ inode->i_opflags &= ~IOP_XATTR; ++ } ++ + if (reiserfs_posixacl(inode->i_sb)) { + reiserfs_write_unlock(inode->i_sb); + retval = reiserfs_inherit_default_acl(th, dir, dentry, inode); +@@ -2111,8 +2120,7 @@ int reiserfs_new_inode(struct reiserfs_transaction_handle *th, + reiserfs_warning(inode->i_sb, "jdm-13090", + "ACLs aren't enabled in the fs, " + "but vfs thinks they are!"); +- } else if (IS_PRIVATE(dir)) +- inode->i_flags |= S_PRIVATE; ++ } + + if (security->name) { + reiserfs_write_unlock(inode->i_sb); +diff --git a/fs/reiserfs/namei.c b/fs/reiserfs/namei.c +index 97f3fc4fdd79..959a066b7bb0 100644 +--- a/fs/reiserfs/namei.c ++++ b/fs/reiserfs/namei.c +@@ -377,10 +377,13 @@ static struct dentry *reiserfs_lookup(struct inode *dir, struct dentry *dentry, + + /* + * Propagate the private flag so we know we're +- * in the priv tree ++ * in the priv tree. Also clear IOP_XATTR ++ * since we don't have xattrs on xattr files. + */ +- if (IS_PRIVATE(dir)) ++ if (IS_PRIVATE(dir)) { + inode->i_flags |= S_PRIVATE; ++ inode->i_opflags &= ~IOP_XATTR; ++ } + } + reiserfs_write_unlock(dir->i_sb); + if (retval == IO_ERROR) { +diff --git a/fs/reiserfs/reiserfs.h b/fs/reiserfs/reiserfs.h +index e5ca9ed79e54..726580114d55 100644 +--- a/fs/reiserfs/reiserfs.h ++++ b/fs/reiserfs/reiserfs.h +@@ -1168,6 +1168,8 @@ static inline int bmap_would_wrap(unsigned bmap_nr) + return bmap_nr > ((1LL << 16) - 1); + } + ++extern const struct xattr_handler *reiserfs_xattr_handlers[]; ++ + /* + * this says about version of key of all items (but stat data) the + * object consists of +diff --git a/fs/reiserfs/super.c b/fs/reiserfs/super.c +index d69b4ac0ae2f..3244037b1286 100644 +--- a/fs/reiserfs/super.c ++++ b/fs/reiserfs/super.c +@@ -2049,6 +2049,8 @@ static int reiserfs_fill_super(struct super_block *s, void *data, int silent) + if (replay_only(s)) + goto error_unlocked; + ++ s->s_xattr = reiserfs_xattr_handlers; ++ + if (bdev_read_only(s->s_bdev) && !sb_rdonly(s)) { + SWARN(silent, s, "clm-7000", + "Detected readonly device, marking FS readonly"); +diff --git a/fs/reiserfs/xattr.c b/fs/reiserfs/xattr.c +index b5b26d8a192c..62b40df36c98 100644 +--- a/fs/reiserfs/xattr.c ++++ b/fs/reiserfs/xattr.c +@@ -122,13 +122,13 @@ static struct dentry *open_xa_root(struct super_block *sb, int flags) + struct dentry *xaroot; + + if (d_really_is_negative(privroot)) +- return ERR_PTR(-ENODATA); ++ return ERR_PTR(-EOPNOTSUPP); + + inode_lock_nested(d_inode(privroot), I_MUTEX_XATTR); + + xaroot = dget(REISERFS_SB(sb)->xattr_root); + if (!xaroot) +- xaroot = ERR_PTR(-ENODATA); ++ xaroot = ERR_PTR(-EOPNOTSUPP); + else if (d_really_is_negative(xaroot)) { + int err = -ENODATA; + +@@ -619,6 +619,10 @@ int reiserfs_xattr_set(struct inode *inode, const char *name, + int error, error2; + size_t jbegin_count = reiserfs_xattr_nblocks(inode, buffer_size); + ++ /* Check before we start a transaction and then do nothing. 
*/ ++ if (!d_really_is_positive(REISERFS_SB(inode->i_sb)->priv_root)) ++ return -EOPNOTSUPP; ++ + if (!(flags & XATTR_REPLACE)) + jbegin_count += reiserfs_xattr_jcreate_nblocks(inode); + +@@ -841,8 +845,7 @@ ssize_t reiserfs_listxattr(struct dentry * dentry, char *buffer, size_t size) + if (d_really_is_negative(dentry)) + return -EINVAL; + +- if (!dentry->d_sb->s_xattr || +- get_inode_sd_version(d_inode(dentry)) == STAT_DATA_V1) ++ if (get_inode_sd_version(d_inode(dentry)) == STAT_DATA_V1) + return -EOPNOTSUPP; + + dir = open_xa_dir(d_inode(dentry), XATTR_REPLACE); +@@ -882,6 +885,7 @@ static int create_privroot(struct dentry *dentry) + } + + d_inode(dentry)->i_flags |= S_PRIVATE; ++ d_inode(dentry)->i_opflags &= ~IOP_XATTR; + reiserfs_info(dentry->d_sb, "Created %s - reserved for xattr " + "storage.\n", PRIVROOT_NAME); + +@@ -895,7 +899,7 @@ static int create_privroot(struct dentry *dentry) { return 0; } + #endif + + /* Actual operations that are exported to VFS-land */ +-static const struct xattr_handler *reiserfs_xattr_handlers[] = { ++const struct xattr_handler *reiserfs_xattr_handlers[] = { + #ifdef CONFIG_REISERFS_FS_XATTR + &reiserfs_xattr_user_handler, + &reiserfs_xattr_trusted_handler, +@@ -966,8 +970,10 @@ int reiserfs_lookup_privroot(struct super_block *s) + if (!IS_ERR(dentry)) { + REISERFS_SB(s)->priv_root = dentry; + d_set_d_op(dentry, &xattr_lookup_poison_ops); +- if (d_really_is_positive(dentry)) ++ if (d_really_is_positive(dentry)) { + d_inode(dentry)->i_flags |= S_PRIVATE; ++ d_inode(dentry)->i_opflags &= ~IOP_XATTR; ++ } + } else + err = PTR_ERR(dentry); + inode_unlock(d_inode(s->s_root)); +@@ -996,7 +1002,6 @@ int reiserfs_xattr_init(struct super_block *s, int mount_flags) + } + + if (d_really_is_positive(privroot)) { +- s->s_xattr = reiserfs_xattr_handlers; + inode_lock(d_inode(privroot)); + if (!REISERFS_SB(s)->xattr_root) { + struct dentry *dentry; +diff --git a/fs/reiserfs/xattr_acl.c b/fs/reiserfs/xattr_acl.c +index aa9380bac196..05f666794561 100644 +--- a/fs/reiserfs/xattr_acl.c ++++ b/fs/reiserfs/xattr_acl.c +@@ -320,10 +320,8 @@ reiserfs_inherit_default_acl(struct reiserfs_transaction_handle *th, + * would be useless since permissions are ignored, and a pain because + * it introduces locking cycles + */ +- if (IS_PRIVATE(dir)) { +- inode->i_flags |= S_PRIVATE; ++ if (IS_PRIVATE(inode)) + goto apply_umask; +- } + + err = posix_acl_create(dir, &inode->i_mode, &default_acl, &acl); + if (err) +diff --git a/fs/splice.c b/fs/splice.c +index 98412721f056..e509239d7e06 100644 +--- a/fs/splice.c ++++ b/fs/splice.c +@@ -945,12 +945,13 @@ ssize_t splice_direct_to_actor(struct file *in, struct splice_desc *sd, + WARN_ON_ONCE(pipe->nrbufs != 0); + + while (len) { ++ unsigned int pipe_pages; + size_t read_len; + loff_t pos = sd->pos, prev_pos = pos; + + /* Don't try to read more the pipe has space for. */ +- read_len = min_t(size_t, len, +- (pipe->buffers - pipe->nrbufs) << PAGE_SHIFT); ++ pipe_pages = pipe->buffers - pipe->nrbufs; ++ read_len = min(len, (size_t)pipe_pages << PAGE_SHIFT); + ret = do_splice_to(in, &pos, pipe, read_len, flags); + if (unlikely(ret <= 0)) + goto out_release; +@@ -1180,8 +1181,15 @@ static long do_splice(struct file *in, loff_t __user *off_in, + + pipe_lock(opipe); + ret = wait_for_space(opipe, flags); +- if (!ret) ++ if (!ret) { ++ unsigned int pipe_pages; ++ ++ /* Don't try to read more the pipe has space for. 
*/ ++ pipe_pages = opipe->buffers - opipe->nrbufs; ++ len = min(len, (size_t)pipe_pages << PAGE_SHIFT); ++ + ret = do_splice_to(in, &offset, opipe, len, flags); ++ } + pipe_unlock(opipe); + if (ret > 0) + wakeup_pipe_readers(opipe); +diff --git a/include/acpi/acpi_bus.h b/include/acpi/acpi_bus.h +index 175f7b40c585..3f6fddeb7519 100644 +--- a/include/acpi/acpi_bus.h ++++ b/include/acpi/acpi_bus.h +@@ -78,9 +78,6 @@ acpi_evaluate_dsm_typed(acpi_handle handle, const guid_t *guid, u64 rev, + bool acpi_dev_found(const char *hid); + bool acpi_dev_present(const char *hid, const char *uid, s64 hrv); + +-struct acpi_device * +-acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv); +- + #ifdef CONFIG_ACPI + + #include +@@ -683,6 +680,9 @@ static inline bool acpi_device_can_poweroff(struct acpi_device *adev) + adev->power.states[ACPI_STATE_D3_HOT].flags.explicit_set); + } + ++struct acpi_device * ++acpi_dev_get_first_match_dev(const char *hid, const char *uid, s64 hrv); ++ + static inline void acpi_dev_put(struct acpi_device *adev) + { + put_device(&adev->dev); +diff --git a/include/linux/fs.h b/include/linux/fs.h +index e0d909d35763..0b4d8fc79e0f 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -1727,6 +1727,13 @@ int vfs_mkobj(struct dentry *, umode_t, + + extern long vfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg); + ++#ifdef CONFIG_COMPAT ++extern long compat_ptr_ioctl(struct file *file, unsigned int cmd, ++ unsigned long arg); ++#else ++#define compat_ptr_ioctl NULL ++#endif ++ + /* + * VFS file helper functions. + */ +diff --git a/include/linux/mfd/rk808.h b/include/linux/mfd/rk808.h +index 7cfd2b0504df..a59bf323f713 100644 +--- a/include/linux/mfd/rk808.h ++++ b/include/linux/mfd/rk808.h +@@ -610,7 +610,7 @@ enum { + RK808_ID = 0x0000, + RK809_ID = 0x8090, + RK817_ID = 0x8170, +- RK818_ID = 0x8181, ++ RK818_ID = 0x8180, + }; + + struct rk808 { +diff --git a/include/linux/quotaops.h b/include/linux/quotaops.h +index 185d94829701..91e0b7624053 100644 +--- a/include/linux/quotaops.h ++++ b/include/linux/quotaops.h +@@ -54,6 +54,16 @@ static inline struct dquot *dqgrab(struct dquot *dquot) + atomic_inc(&dquot->dq_count); + return dquot; + } ++ ++static inline bool dquot_is_busy(struct dquot *dquot) ++{ ++ if (test_bit(DQ_MOD_B, &dquot->dq_flags)) ++ return true; ++ if (atomic_read(&dquot->dq_count) > 1) ++ return true; ++ return false; ++} ++ + void dqput(struct dquot *dquot); + int dquot_scan_active(struct super_block *sb, + int (*fn)(struct dquot *dquot, unsigned long priv), +diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h +index e7e733add99f..44c52639db55 100644 +--- a/include/rdma/ib_verbs.h ++++ b/include/rdma/ib_verbs.h +@@ -4043,9 +4043,7 @@ static inline void ib_dma_unmap_sg_attrs(struct ib_device *dev, + */ + static inline unsigned int ib_dma_max_seg_size(struct ib_device *dev) + { +- struct device_dma_parameters *p = dev->dma_device->dma_parms; +- +- return p ? 
p->max_segment_size : UINT_MAX; ++ return dma_get_max_seg_size(dev->dma_device); + } + + /** +diff --git a/include/uapi/linux/cec.h b/include/uapi/linux/cec.h +index 5704fa0292b5..423859e489c7 100644 +--- a/include/uapi/linux/cec.h ++++ b/include/uapi/linux/cec.h +@@ -768,8 +768,8 @@ struct cec_event { + #define CEC_MSG_SELECT_DIGITAL_SERVICE 0x93 + #define CEC_MSG_TUNER_DEVICE_STATUS 0x07 + /* Recording Flag Operand (rec_flag) */ +-#define CEC_OP_REC_FLAG_USED 0 +-#define CEC_OP_REC_FLAG_NOT_USED 1 ++#define CEC_OP_REC_FLAG_NOT_USED 0 ++#define CEC_OP_REC_FLAG_USED 1 + /* Tuner Display Info Operand (tuner_display_info) */ + #define CEC_OP_TUNER_DISPLAY_INFO_DIGITAL 0 + #define CEC_OP_TUNER_DISPLAY_INFO_NONE 1 +diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c +index 8e513a573fe9..138059eb730d 100644 +--- a/kernel/cgroup/pids.c ++++ b/kernel/cgroup/pids.c +@@ -45,7 +45,7 @@ struct pids_cgroup { + * %PIDS_MAX = (%PID_MAX_LIMIT + 1). + */ + atomic64_t counter; +- int64_t limit; ++ atomic64_t limit; + + /* Handle for "pids.events" */ + struct cgroup_file events_file; +@@ -73,8 +73,8 @@ pids_css_alloc(struct cgroup_subsys_state *parent) + if (!pids) + return ERR_PTR(-ENOMEM); + +- pids->limit = PIDS_MAX; + atomic64_set(&pids->counter, 0); ++ atomic64_set(&pids->limit, PIDS_MAX); + atomic64_set(&pids->events_limit, 0); + return &pids->css; + } +@@ -146,13 +146,14 @@ static int pids_try_charge(struct pids_cgroup *pids, int num) + + for (p = pids; parent_pids(p); p = parent_pids(p)) { + int64_t new = atomic64_add_return(num, &p->counter); ++ int64_t limit = atomic64_read(&p->limit); + + /* + * Since new is capped to the maximum number of pid_t, if + * p->limit is %PIDS_MAX then we know that this test will never + * fail. + */ +- if (new > p->limit) ++ if (new > limit) + goto revert; + } + +@@ -277,7 +278,7 @@ set_limit: + * Limit updates don't need to be mutex'd, since it isn't + * critical that any racing fork()s follow the new limit. + */ +- pids->limit = limit; ++ atomic64_set(&pids->limit, limit); + return nbytes; + } + +@@ -285,7 +286,7 @@ static int pids_max_show(struct seq_file *sf, void *v) + { + struct cgroup_subsys_state *css = seq_css(sf); + struct pids_cgroup *pids = css_pids(css); +- int64_t limit = pids->limit; ++ int64_t limit = atomic64_read(&pids->limit); + + if (limit >= PIDS_MAX) + seq_printf(sf, "%s\n", PIDS_MAX_STR); +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index bc2e09a8ea61..649687622654 100644 +--- a/kernel/workqueue.c ++++ b/kernel/workqueue.c +@@ -2532,8 +2532,14 @@ repeat: + */ + if (need_to_create_worker(pool)) { + spin_lock(&wq_mayday_lock); +- get_pwq(pwq); +- list_move_tail(&pwq->mayday_node, &wq->maydays); ++ /* ++ * Queue iff we aren't racing destruction ++ * and somebody else hasn't queued it already. ++ */ ++ if (wq->rescuer && list_empty(&pwq->mayday_node)) { ++ get_pwq(pwq); ++ list_add_tail(&pwq->mayday_node, &wq->maydays); ++ } + spin_unlock(&wq_mayday_lock); + } + } +@@ -4325,9 +4331,29 @@ void destroy_workqueue(struct workqueue_struct *wq) + struct pool_workqueue *pwq; + int node; + ++ /* ++ * Remove it from sysfs first so that sanity check failure doesn't ++ * lead to sysfs name conflicts. 
++ */ ++ workqueue_sysfs_unregister(wq); ++ + /* drain it before proceeding with destruction */ + drain_workqueue(wq); + ++ /* kill rescuer, if sanity checks fail, leave it w/o rescuer */ ++ if (wq->rescuer) { ++ struct worker *rescuer = wq->rescuer; ++ ++ /* this prevents new queueing */ ++ spin_lock_irq(&wq_mayday_lock); ++ wq->rescuer = NULL; ++ spin_unlock_irq(&wq_mayday_lock); ++ ++ /* rescuer will empty maydays list before exiting */ ++ kthread_stop(rescuer->task); ++ kfree(rescuer); ++ } ++ + /* sanity checks */ + mutex_lock(&wq->mutex); + for_each_pwq(pwq, wq) { +@@ -4359,11 +4385,6 @@ void destroy_workqueue(struct workqueue_struct *wq) + list_del_rcu(&wq->list); + mutex_unlock(&wq_pool_mutex); + +- workqueue_sysfs_unregister(wq); +- +- if (wq->rescuer) +- kthread_stop(wq->rescuer->task); +- + if (!(wq->flags & WQ_UNBOUND)) { + wq_unregister_lockdep(wq); + /* +@@ -4638,7 +4659,8 @@ static void show_pwq(struct pool_workqueue *pwq) + pr_info(" pwq %d:", pool->id); + pr_cont_pool_info(pool); + +- pr_cont(" active=%d/%d%s\n", pwq->nr_active, pwq->max_active, ++ pr_cont(" active=%d/%d refcnt=%d%s\n", ++ pwq->nr_active, pwq->max_active, pwq->refcnt, + !list_empty(&pwq->mayday_node) ? " MAYDAY" : ""); + + hash_for_each(pool->busy_hash, bkt, worker, hentry) { +diff --git a/lib/raid6/unroll.awk b/lib/raid6/unroll.awk +index c6aa03631df8..0809805a7e23 100644 +--- a/lib/raid6/unroll.awk ++++ b/lib/raid6/unroll.awk +@@ -13,7 +13,7 @@ BEGIN { + for (i = 0; i < rep; ++i) { + tmp = $0 + gsub(/\$\$/, i, tmp) +- gsub(/\$\#/, n, tmp) ++ gsub(/\$#/, n, tmp) + gsub(/\$\*/, "$", tmp) + print tmp + } +diff --git a/mm/shmem.c b/mm/shmem.c +index 220be9fa2c41..7a22e3e03d11 100644 +--- a/mm/shmem.c ++++ b/mm/shmem.c +@@ -2213,11 +2213,14 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma) + return -EPERM; + + /* +- * Since the F_SEAL_FUTURE_WRITE seals allow for a MAP_SHARED +- * read-only mapping, take care to not allow mprotect to revert +- * protections. ++ * Since an F_SEAL_FUTURE_WRITE sealed memfd can be mapped as ++ * MAP_SHARED and read-only, take care to not allow mprotect to ++ * revert protections on such mappings. Do this only for shared ++ * mappings. For private mappings, don't need to mask ++ * VM_MAYWRITE as we still want them to be COW-writable. + */ +- vma->vm_flags &= ~(VM_MAYWRITE); ++ if (vma->vm_flags & VM_SHARED) ++ vma->vm_flags &= ~(VM_MAYWRITE); + } + + file_accessed(file); +@@ -2742,7 +2745,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset, + } + + shmem_falloc.waitq = &shmem_falloc_waitq; +- shmem_falloc.start = unmap_start >> PAGE_SHIFT; ++ shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT; + shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT; + spin_lock(&inode->i_lock); + inode->i_private = &shmem_falloc; +diff --git a/mm/slab_common.c b/mm/slab_common.c +index f9fb27b4c843..78402b362df9 100644 +--- a/mm/slab_common.c ++++ b/mm/slab_common.c +@@ -904,6 +904,18 @@ static void flush_memcg_workqueue(struct kmem_cache *s) + * previous workitems on workqueue are processed. + */ + flush_workqueue(memcg_kmem_cache_wq); ++ ++ /* ++ * If we're racing with children kmem_cache deactivation, it might ++ * take another rcu grace period to complete their destruction. ++ * At this moment the corresponding percpu_ref_kill() call should be ++ * done, but it might take another rcu grace period to complete ++ * switching to the atomic mode. ++ * Please, note that we check without grabbing the slab_mutex. 
It's safe ++ * because at this moment the children list can't grow. ++ */ ++ if (!list_empty(&s->memcg_params.children)) ++ rcu_barrier(); + } + #else + static inline int shutdown_memcg_caches(struct kmem_cache *s) +diff --git a/sound/firewire/fireface/ff-pcm.c b/sound/firewire/fireface/ff-pcm.c +index 9eab3ad283ce..df6ff2df0124 100644 +--- a/sound/firewire/fireface/ff-pcm.c ++++ b/sound/firewire/fireface/ff-pcm.c +@@ -219,7 +219,7 @@ static int pcm_hw_params(struct snd_pcm_substream *substream, + mutex_unlock(&ff->mutex); + } + +- return 0; ++ return err; + } + + static int pcm_hw_free(struct snd_pcm_substream *substream) +diff --git a/sound/firewire/oxfw/oxfw-pcm.c b/sound/firewire/oxfw/oxfw-pcm.c +index 7c6d1c277d4d..78d906af9c00 100644 +--- a/sound/firewire/oxfw/oxfw-pcm.c ++++ b/sound/firewire/oxfw/oxfw-pcm.c +@@ -255,7 +255,7 @@ static int pcm_playback_hw_params(struct snd_pcm_substream *substream, + mutex_unlock(&oxfw->mutex); + } + +- return 0; ++ return err; + } + + static int pcm_capture_hw_free(struct snd_pcm_substream *substream) +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index ed3e314b5233..e1229dbad6b2 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -7672,11 +7672,6 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = { + {0x1a, 0x90a70130}, + {0x1b, 0x90170110}, + {0x21, 0x03211020}), +- SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB, +- {0x12, 0xb7a60130}, +- {0x13, 0xb8a61140}, +- {0x16, 0x90170110}, +- {0x21, 0x04211020}), + SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4, + {0x12, 0x90a60130}, + {0x14, 0x90170110}, +@@ -7864,6 +7859,9 @@ static const struct snd_hda_pin_quirk alc269_fallback_pin_fixup_tbl[] = { + SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, + {0x19, 0x40000000}, + {0x1b, 0x40000000}), ++ SND_HDA_PIN_QUIRK(0x10ec0274, 0x1028, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB, ++ {0x19, 0x40000000}, ++ {0x1a, 0x40000000}), + {} + }; + +diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c +index 1c06b3b9218c..19662ee330d6 100644 +--- a/sound/soc/codecs/rt5645.c ++++ b/sound/soc/codecs/rt5645.c +@@ -3270,6 +3270,9 @@ static void rt5645_jack_detect_work(struct work_struct *work) + snd_soc_jack_report(rt5645->mic_jack, + report, SND_JACK_MICROPHONE); + return; ++ case 4: ++ val = snd_soc_component_read32(rt5645->component, RT5645_A_JD_CTRL1) & 0x0020; ++ break; + default: /* read rt5645 jd1_1 status */ + val = snd_soc_component_read32(rt5645->component, RT5645_INT_IRQ_ST) & 0x1000; + break; +@@ -3603,7 +3606,7 @@ static const struct rt5645_platform_data intel_braswell_platform_data = { + static const struct rt5645_platform_data buddy_platform_data = { + .dmic1_data_pin = RT5645_DMIC_DATA_GPIO5, + .dmic2_data_pin = RT5645_DMIC_DATA_IN2P, +- .jd_mode = 3, ++ .jd_mode = 4, + .level_trigger_irq = true, + }; + +@@ -3999,6 +4002,7 @@ static int rt5645_i2c_probe(struct i2c_client *i2c, + RT5645_JD1_MODE_1); + break; + case 3: ++ case 4: + regmap_update_bits(rt5645->regmap, RT5645_A_JD_CTRL1, + RT5645_JD1_MODE_MASK, + RT5645_JD1_MODE_2); +diff --git a/sound/soc/fsl/fsl_audmix.c b/sound/soc/fsl/fsl_audmix.c +index c7e4e9757dce..a1db1bce330f 100644 +--- a/sound/soc/fsl/fsl_audmix.c ++++ b/sound/soc/fsl/fsl_audmix.c +@@ -286,6 +286,7 @@ static int fsl_audmix_dai_trigger(struct snd_pcm_substream *substream, int cmd, + struct snd_soc_dai *dai) + { + struct fsl_audmix *priv = 
snd_soc_dai_get_drvdata(dai); ++ unsigned long lock_flags; + + /* Capture stream shall not be handled */ + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) +@@ -295,12 +296,16 @@ static int fsl_audmix_dai_trigger(struct snd_pcm_substream *substream, int cmd, + case SNDRV_PCM_TRIGGER_START: + case SNDRV_PCM_TRIGGER_RESUME: + case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: ++ spin_lock_irqsave(&priv->lock, lock_flags); + priv->tdms |= BIT(dai->driver->id); ++ spin_unlock_irqrestore(&priv->lock, lock_flags); + break; + case SNDRV_PCM_TRIGGER_STOP: + case SNDRV_PCM_TRIGGER_SUSPEND: + case SNDRV_PCM_TRIGGER_PAUSE_PUSH: ++ spin_lock_irqsave(&priv->lock, lock_flags); + priv->tdms &= ~BIT(dai->driver->id); ++ spin_unlock_irqrestore(&priv->lock, lock_flags); + break; + default: + return -EINVAL; +@@ -491,6 +496,7 @@ static int fsl_audmix_probe(struct platform_device *pdev) + return PTR_ERR(priv->ipg_clk); + } + ++ spin_lock_init(&priv->lock); + platform_set_drvdata(pdev, priv); + pm_runtime_enable(dev); + +diff --git a/sound/soc/fsl/fsl_audmix.h b/sound/soc/fsl/fsl_audmix.h +index 7812ffec45c5..479f05695d53 100644 +--- a/sound/soc/fsl/fsl_audmix.h ++++ b/sound/soc/fsl/fsl_audmix.h +@@ -96,6 +96,7 @@ struct fsl_audmix { + struct platform_device *pdev; + struct regmap *regmap; + struct clk *ipg_clk; ++ spinlock_t lock; /* Protect tdms */ + u8 tdms; + }; + +diff --git a/sound/soc/soc-jack.c b/sound/soc/soc-jack.c +index a71d2340eb05..b5748dcd490f 100644 +--- a/sound/soc/soc-jack.c ++++ b/sound/soc/soc-jack.c +@@ -82,10 +82,9 @@ void snd_soc_jack_report(struct snd_soc_jack *jack, int status, int mask) + unsigned int sync = 0; + int enable; + +- trace_snd_soc_jack_report(jack, mask, status); +- + if (!jack) + return; ++ trace_snd_soc_jack_report(jack, mask, status); + + dapm = &jack->card->dapm; + +diff --git a/tools/perf/tests/backward-ring-buffer.c b/tools/perf/tests/backward-ring-buffer.c +index 338cd9faa835..5128f727c0ef 100644 +--- a/tools/perf/tests/backward-ring-buffer.c ++++ b/tools/perf/tests/backward-ring-buffer.c +@@ -147,6 +147,15 @@ int test__backward_ring_buffer(struct test *test __maybe_unused, int subtest __m + goto out_delete_evlist; + } + ++ evlist__close(evlist); ++ ++ err = evlist__open(evlist); ++ if (err < 0) { ++ pr_debug("perf_evlist__open: %s\n", ++ str_error_r(errno, sbuf, sizeof(sbuf))); ++ goto out_delete_evlist; ++ } ++ + err = do_test(evlist, 1, &sample_count, &comm_count); + if (err != TEST_OK) + goto out_delete_evlist; +diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c +index 7f8b5c8982e3..b505bb062d07 100644 +--- a/tools/testing/selftests/seccomp/seccomp_bpf.c ++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c +@@ -35,6 +35,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -3077,7 +3078,7 @@ static int user_trap_syscall(int nr, unsigned int flags) + return seccomp(SECCOMP_SET_MODE_FILTER, flags, &prog); + } + +-#define USER_NOTIF_MAGIC 116983961184613L ++#define USER_NOTIF_MAGIC INT_MAX + TEST(user_notification_basic) + { + pid_t pid;