From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: <gentoo-commits+bounces-1062060-garchives=archives.gentoo.org@lists.gentoo.org> Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 9145E138334 for <garchives@archives.gentoo.org>; Thu, 13 Dec 2018 11:40:25 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id B7AA0E075F; Thu, 13 Dec 2018 11:40:24 +0000 (UTC) Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 4F3B8E075F for <gentoo-commits@lists.gentoo.org>; Thu, 13 Dec 2018 11:40:24 +0000 (UTC) Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 668E1335CB7 for <gentoo-commits@lists.gentoo.org>; Thu, 13 Dec 2018 11:40:22 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id D28E643A for <gentoo-commits@lists.gentoo.org>; Thu, 13 Dec 2018 11:40:20 +0000 (UTC) From: "Mike Pagano" <mpagano@gentoo.org> To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org> Message-ID: <1544701195.59ef1e9801722a3585eac6c0eb62f2087ca1d235.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:4.19 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 1008_linux-4.19.9.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 
59ef1e9801722a3585eac6c0eb62f2087ca1d235 X-VCS-Branch: 4.19 Date: Thu, 13 Dec 2018 11:40:20 +0000 (UTC) Precedence: bulk List-Post: <mailto:gentoo-commits@lists.gentoo.org> List-Help: <mailto:gentoo-commits+help@lists.gentoo.org> List-Unsubscribe: <mailto:gentoo-commits+unsubscribe@lists.gentoo.org> List-Subscribe: <mailto:gentoo-commits+subscribe@lists.gentoo.org> List-Id: Gentoo Linux mail <gentoo-commits.gentoo.org> X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: 921b0ac3-3c56-4b3e-b0e0-c53db830f46d X-Archives-Hash: f82b31212c49d97fb55500b1bd1c5dfd commit: 59ef1e9801722a3585eac6c0eb62f2087ca1d235 Author: Mike Pagano <mpagano <AT> gentoo <DOT> org> AuthorDate: Thu Dec 13 11:39:55 2018 +0000 Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org> CommitDate: Thu Dec 13 11:39:55 2018 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=59ef1e98 proj/linux-patches: Linux patch 4.19.9 Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org> 1008_linux-4.19.9.patch | 4670 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 4670 insertions(+) diff --git a/1008_linux-4.19.9.patch b/1008_linux-4.19.9.patch new file mode 100644 index 0000000..dae36a8 --- /dev/null +++ b/1008_linux-4.19.9.patch @@ -0,0 +1,4670 @@ +diff --git a/Makefile b/Makefile +index 34bc4c752c49..8717f34464d5 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 19 +-SUBLEVEL = 8 ++SUBLEVEL = 9 + EXTRAVERSION = + NAME = "People's Front" + +diff --git a/arch/arm/probes/kprobes/opt-arm.c b/arch/arm/probes/kprobes/opt-arm.c +index b2aa9b32bff2..2c118a6ab358 100644 +--- a/arch/arm/probes/kprobes/opt-arm.c ++++ b/arch/arm/probes/kprobes/opt-arm.c +@@ -247,7 +247,7 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *or + } + + /* Copy arch-dep-instance from template. 
*/ +- memcpy(code, &optprobe_template_entry, ++ memcpy(code, (unsigned char *)optprobe_template_entry, + TMPL_END_IDX * sizeof(kprobe_opcode_t)); + + /* Adjust buffer according to instruction. */ +diff --git a/arch/arm64/boot/dts/rockchip/rk3399-ficus.dts b/arch/arm64/boot/dts/rockchip/rk3399-ficus.dts +index 8978d924eb83..85cf0b6bdda9 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399-ficus.dts ++++ b/arch/arm64/boot/dts/rockchip/rk3399-ficus.dts +@@ -75,18 +75,6 @@ + regulator-always-on; + vin-supply = <&vcc_sys>; + }; +- +- vdd_log: vdd-log { +- compatible = "pwm-regulator"; +- pwms = <&pwm2 0 25000 0>; +- regulator-name = "vdd_log"; +- regulator-min-microvolt = <800000>; +- regulator-max-microvolt = <1400000>; +- regulator-always-on; +- regulator-boot-on; +- vin-supply = <&vcc_sys>; +- }; +- + }; + + &cpu_l0 { +diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c +index 6b2686d54411..29cdc99688f3 100644 +--- a/arch/arm64/kernel/hibernate.c ++++ b/arch/arm64/kernel/hibernate.c +@@ -214,7 +214,7 @@ static int create_safe_exec_page(void *src_start, size_t length, + } + + memcpy((void *)dst, src_start, length); +- flush_icache_range(dst, dst + length); ++ __flush_icache_range(dst, dst + length); + + pgdp = pgd_offset_raw(allocator(mask), dst_addr); + if (pgd_none(READ_ONCE(*pgdp))) { +diff --git a/arch/parisc/Makefile b/arch/parisc/Makefile +index 5ce030266e7d..253d7ca71472 100644 +--- a/arch/parisc/Makefile ++++ b/arch/parisc/Makefile +@@ -71,6 +71,13 @@ ifdef CONFIG_MLONGCALLS + KBUILD_CFLAGS_KERNEL += -mlong-calls + endif + ++# Without this, "ld -r" results in .text sections that are too big (> 0x40000) ++# for branches to reach stubs. And multiple .text sections trigger a warning ++# when creating the sysfs module information section. 
++ifndef CONFIG_64BIT ++KBUILD_CFLAGS_MODULE += -ffunction-sections ++endif ++ + # select which processor to optimise for + cflags-$(CONFIG_PA7000) += -march=1.1 -mschedule=7100 + cflags-$(CONFIG_PA7200) += -march=1.1 -mschedule=7200 +diff --git a/arch/riscv/include/asm/module.h b/arch/riscv/include/asm/module.h +index 349df33808c4..cd2af4b013e3 100644 +--- a/arch/riscv/include/asm/module.h ++++ b/arch/riscv/include/asm/module.h +@@ -8,6 +8,7 @@ + + #define MODULE_ARCH_VERMAGIC "riscv" + ++struct module; + u64 module_emit_got_entry(struct module *mod, u64 val); + u64 module_emit_plt_entry(struct module *mod, u64 val); + +diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c +index 8b4c5e001157..544ac4fafd11 100644 +--- a/arch/x86/boot/compressed/eboot.c ++++ b/arch/x86/boot/compressed/eboot.c +@@ -1,3 +1,4 @@ ++ + /* ----------------------------------------------------------------------- + * + * Copyright 2011 Intel Corporation; author Matt Fleming +@@ -634,37 +635,54 @@ static efi_status_t alloc_e820ext(u32 nr_desc, struct setup_data **e820ext, + return status; + } + ++static efi_status_t allocate_e820(struct boot_params *params, ++ struct setup_data **e820ext, ++ u32 *e820ext_size) ++{ ++ unsigned long map_size, desc_size, buff_size; ++ struct efi_boot_memmap boot_map; ++ efi_memory_desc_t *map; ++ efi_status_t status; ++ __u32 nr_desc; ++ ++ boot_map.map = ↦ ++ boot_map.map_size = &map_size; ++ boot_map.desc_size = &desc_size; ++ boot_map.desc_ver = NULL; ++ boot_map.key_ptr = NULL; ++ boot_map.buff_size = &buff_size; ++ ++ status = efi_get_memory_map(sys_table, &boot_map); ++ if (status != EFI_SUCCESS) ++ return status; ++ ++ nr_desc = buff_size / desc_size; ++ ++ if (nr_desc > ARRAY_SIZE(params->e820_table)) { ++ u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table); ++ ++ status = alloc_e820ext(nr_e820ext, e820ext, e820ext_size); ++ if (status != EFI_SUCCESS) ++ return status; ++ } ++ ++ return EFI_SUCCESS; ++} ++ + struct 
exit_boot_struct { + struct boot_params *boot_params; + struct efi_info *efi; +- struct setup_data *e820ext; +- __u32 e820ext_size; + }; + + static efi_status_t exit_boot_func(efi_system_table_t *sys_table_arg, + struct efi_boot_memmap *map, + void *priv) + { +- static bool first = true; + const char *signature; + __u32 nr_desc; + efi_status_t status; + struct exit_boot_struct *p = priv; + +- if (first) { +- nr_desc = *map->buff_size / *map->desc_size; +- if (nr_desc > ARRAY_SIZE(p->boot_params->e820_table)) { +- u32 nr_e820ext = nr_desc - +- ARRAY_SIZE(p->boot_params->e820_table); +- +- status = alloc_e820ext(nr_e820ext, &p->e820ext, +- &p->e820ext_size); +- if (status != EFI_SUCCESS) +- return status; +- } +- first = false; +- } +- + signature = efi_is_64bit() ? EFI64_LOADER_SIGNATURE + : EFI32_LOADER_SIGNATURE; + memcpy(&p->efi->efi_loader_signature, signature, sizeof(__u32)); +@@ -687,8 +705,8 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle) + { + unsigned long map_sz, key, desc_size, buff_size; + efi_memory_desc_t *mem_map; +- struct setup_data *e820ext; +- __u32 e820ext_size; ++ struct setup_data *e820ext = NULL; ++ __u32 e820ext_size = 0; + efi_status_t status; + __u32 desc_version; + struct efi_boot_memmap map; +@@ -702,8 +720,10 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle) + map.buff_size = &buff_size; + priv.boot_params = boot_params; + priv.efi = &boot_params->efi_info; +- priv.e820ext = NULL; +- priv.e820ext_size = 0; ++ ++ status = allocate_e820(boot_params, &e820ext, &e820ext_size); ++ if (status != EFI_SUCCESS) ++ return status; + + /* Might as well exit boot services now */ + status = efi_exit_boot_services(sys_table, handle, &map, &priv, +@@ -711,9 +731,6 @@ static efi_status_t exit_boot(struct boot_params *boot_params, void *handle) + if (status != EFI_SUCCESS) + return status; + +- e820ext = priv.e820ext; +- e820ext_size = priv.e820ext_size; +- + /* Historic? 
*/ + boot_params->alt_mem_k = 32 * 1024; + +diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c +index c88c23c658c1..d1f25c831447 100644 +--- a/arch/x86/kernel/e820.c ++++ b/arch/x86/kernel/e820.c +@@ -1248,7 +1248,6 @@ void __init e820__memblock_setup(void) + { + int i; + u64 end; +- u64 addr = 0; + + /* + * The bootstrap memblock region count maximum is 128 entries +@@ -1265,21 +1264,13 @@ void __init e820__memblock_setup(void) + struct e820_entry *entry = &e820_table->entries[i]; + + end = entry->addr + entry->size; +- if (addr < entry->addr) +- memblock_reserve(addr, entry->addr - addr); +- addr = end; + if (end != (resource_size_t)end) + continue; + +- /* +- * all !E820_TYPE_RAM ranges (including gap ranges) are put +- * into memblock.reserved to make sure that struct pages in +- * such regions are not left uninitialized after bootup. +- */ + if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN) +- memblock_reserve(entry->addr, entry->size); +- else +- memblock_add(entry->addr, entry->size); ++ continue; ++ ++ memblock_add(entry->addr, entry->size); + } + + /* Throw away partial pages: */ +diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c +index 40b16b270656..6adf6e6c2933 100644 +--- a/arch/x86/kernel/kprobes/opt.c ++++ b/arch/x86/kernel/kprobes/opt.c +@@ -189,7 +189,7 @@ static int copy_optimized_instructions(u8 *dest, u8 *src, u8 *real) + int len = 0, ret; + + while (len < RELATIVEJUMP_SIZE) { +- ret = __copy_instruction(dest + len, src + len, real, &insn); ++ ret = __copy_instruction(dest + len, src + len, real + len, &insn); + if (!ret || !can_boost(&insn, src + len)) + return -EINVAL; + len += ret; +diff --git a/crypto/cbc.c b/crypto/cbc.c +index b761b1f9c6ca..dd5f332fd566 100644 +--- a/crypto/cbc.c ++++ b/crypto/cbc.c +@@ -140,9 +140,8 @@ static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb) + spawn = skcipher_instance_ctx(inst); + err = crypto_init_spawn(spawn, alg, 
skcipher_crypto_instance(inst), + CRYPTO_ALG_TYPE_MASK); +- crypto_mod_put(alg); + if (err) +- goto err_free_inst; ++ goto err_put_alg; + + err = crypto_inst_setname(skcipher_crypto_instance(inst), "cbc", alg); + if (err) +@@ -174,12 +173,15 @@ static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb) + err = skcipher_register_instance(tmpl, inst); + if (err) + goto err_drop_spawn; ++ crypto_mod_put(alg); + + out: + return err; + + err_drop_spawn: + crypto_drop_spawn(spawn); ++err_put_alg: ++ crypto_mod_put(alg); + err_free_inst: + kfree(inst); + goto out; +diff --git a/crypto/cfb.c b/crypto/cfb.c +index a0d68c09e1b9..20987d0e09d8 100644 +--- a/crypto/cfb.c ++++ b/crypto/cfb.c +@@ -286,9 +286,8 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb) + spawn = skcipher_instance_ctx(inst); + err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst), + CRYPTO_ALG_TYPE_MASK); +- crypto_mod_put(alg); + if (err) +- goto err_free_inst; ++ goto err_put_alg; + + err = crypto_inst_setname(skcipher_crypto_instance(inst), "cfb", alg); + if (err) +@@ -317,12 +316,15 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb) + err = skcipher_register_instance(tmpl, inst); + if (err) + goto err_drop_spawn; ++ crypto_mod_put(alg); + + out: + return err; + + err_drop_spawn: + crypto_drop_spawn(spawn); ++err_put_alg: ++ crypto_mod_put(alg); + err_free_inst: + kfree(inst); + goto out; +diff --git a/crypto/pcbc.c b/crypto/pcbc.c +index ef802f6e9642..8aa10144407c 100644 +--- a/crypto/pcbc.c ++++ b/crypto/pcbc.c +@@ -244,9 +244,8 @@ static int crypto_pcbc_create(struct crypto_template *tmpl, struct rtattr **tb) + spawn = skcipher_instance_ctx(inst); + err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst), + CRYPTO_ALG_TYPE_MASK); +- crypto_mod_put(alg); + if (err) +- goto err_free_inst; ++ goto err_put_alg; + + err = crypto_inst_setname(skcipher_crypto_instance(inst), "pcbc", alg); + if (err) +@@ 
-275,12 +274,15 @@ static int crypto_pcbc_create(struct crypto_template *tmpl, struct rtattr **tb) + err = skcipher_register_instance(tmpl, inst); + if (err) + goto err_drop_spawn; ++ crypto_mod_put(alg); + + out: + return err; + + err_drop_spawn: + crypto_drop_spawn(spawn); ++err_put_alg: ++ crypto_mod_put(alg); + err_free_inst: + kfree(inst); + goto out; +diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c +index 3f0e2a14895a..22b53bf26817 100644 +--- a/drivers/cpufreq/ti-cpufreq.c ++++ b/drivers/cpufreq/ti-cpufreq.c +@@ -201,19 +201,28 @@ static const struct of_device_id ti_cpufreq_of_match[] = { + {}, + }; + ++static const struct of_device_id *ti_cpufreq_match_node(void) ++{ ++ struct device_node *np; ++ const struct of_device_id *match; ++ ++ np = of_find_node_by_path("/"); ++ match = of_match_node(ti_cpufreq_of_match, np); ++ of_node_put(np); ++ ++ return match; ++} ++ + static int ti_cpufreq_probe(struct platform_device *pdev) + { + u32 version[VERSION_COUNT]; +- struct device_node *np; + const struct of_device_id *match; + struct opp_table *ti_opp_table; + struct ti_cpufreq_data *opp_data; + const char * const reg_names[] = {"vdd", "vbb"}; + int ret; + +- np = of_find_node_by_path("/"); +- match = of_match_node(ti_cpufreq_of_match, np); +- of_node_put(np); ++ match = dev_get_platdata(&pdev->dev); + if (!match) + return -ENODEV; + +@@ -290,7 +299,14 @@ fail_put_node: + + static int ti_cpufreq_init(void) + { +- platform_device_register_simple("ti-cpufreq", -1, NULL, 0); ++ const struct of_device_id *match; ++ ++ /* Check to ensure we are on a compatible platform */ ++ match = ti_cpufreq_match_node(); ++ if (match) ++ platform_device_register_data(NULL, "ti-cpufreq", -1, match, ++ sizeof(*match)); ++ + return 0; + } + module_init(ti_cpufreq_init); +diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c +index f43e6dafe446..0f389e008ce6 100644 +--- a/drivers/dma/dw/core.c ++++ b/drivers/dma/dw/core.c +@@ -1064,12 +1064,12 @@ static void 
dwc_issue_pending(struct dma_chan *chan) + /* + * Program FIFO size of channels. + * +- * By default full FIFO (1024 bytes) is assigned to channel 0. Here we ++ * By default full FIFO (512 bytes) is assigned to channel 0. Here we + * slice FIFO on equal parts between channels. + */ + static void idma32_fifo_partition(struct dw_dma *dw) + { +- u64 value = IDMA32C_FP_PSIZE_CH0(128) | IDMA32C_FP_PSIZE_CH1(128) | ++ u64 value = IDMA32C_FP_PSIZE_CH0(64) | IDMA32C_FP_PSIZE_CH1(64) | + IDMA32C_FP_UPDATE; + u64 fifo_partition = 0; + +@@ -1082,7 +1082,7 @@ static void idma32_fifo_partition(struct dw_dma *dw) + /* Fill FIFO_PARTITION high bits (Channels 2..3, 6..7) */ + fifo_partition |= value << 32; + +- /* Program FIFO Partition registers - 128 bytes for each channel */ ++ /* Program FIFO Partition registers - 64 bytes per channel */ + idma32_writeq(dw, FIFO_PARTITION1, fifo_partition); + idma32_writeq(dw, FIFO_PARTITION0, fifo_partition); + } +diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c +index b4ec2d20e661..cb1b44d78a1f 100644 +--- a/drivers/dma/imx-sdma.c ++++ b/drivers/dma/imx-sdma.c +@@ -24,7 +24,6 @@ + #include <linux/spinlock.h> + #include <linux/device.h> + #include <linux/dma-mapping.h> +-#include <linux/dmapool.h> + #include <linux/firmware.h> + #include <linux/slab.h> + #include <linux/platform_device.h> +@@ -33,6 +32,7 @@ + #include <linux/of_address.h> + #include <linux/of_device.h> + #include <linux/of_dma.h> ++#include <linux/workqueue.h> + + #include <asm/irq.h> + #include <linux/platform_data/dma-imx-sdma.h> +@@ -376,7 +376,7 @@ struct sdma_channel { + u32 shp_addr, per_addr; + enum dma_status status; + struct imx_dma_data data; +- struct dma_pool *bd_pool; ++ struct work_struct terminate_worker; + }; + + #define IMX_DMA_SG_LOOP BIT(0) +@@ -1027,31 +1027,49 @@ static int sdma_disable_channel(struct dma_chan *chan) + + return 0; + } +- +-static int sdma_disable_channel_with_delay(struct dma_chan *chan) ++static void 
sdma_channel_terminate_work(struct work_struct *work) + { +- struct sdma_channel *sdmac = to_sdma_chan(chan); ++ struct sdma_channel *sdmac = container_of(work, struct sdma_channel, ++ terminate_worker); + unsigned long flags; + LIST_HEAD(head); + +- sdma_disable_channel(chan); +- spin_lock_irqsave(&sdmac->vc.lock, flags); +- vchan_get_all_descriptors(&sdmac->vc, &head); +- sdmac->desc = NULL; +- spin_unlock_irqrestore(&sdmac->vc.lock, flags); +- vchan_dma_desc_free_list(&sdmac->vc, &head); +- + /* + * According to NXP R&D team a delay of one BD SDMA cost time + * (maximum is 1ms) should be added after disable of the channel + * bit, to ensure SDMA core has really been stopped after SDMA + * clients call .device_terminate_all. + */ +- mdelay(1); ++ usleep_range(1000, 2000); ++ ++ spin_lock_irqsave(&sdmac->vc.lock, flags); ++ vchan_get_all_descriptors(&sdmac->vc, &head); ++ sdmac->desc = NULL; ++ spin_unlock_irqrestore(&sdmac->vc.lock, flags); ++ vchan_dma_desc_free_list(&sdmac->vc, &head); ++} ++ ++static int sdma_disable_channel_async(struct dma_chan *chan) ++{ ++ struct sdma_channel *sdmac = to_sdma_chan(chan); ++ ++ sdma_disable_channel(chan); ++ ++ if (sdmac->desc) ++ schedule_work(&sdmac->terminate_worker); + + return 0; + } + ++static void sdma_channel_synchronize(struct dma_chan *chan) ++{ ++ struct sdma_channel *sdmac = to_sdma_chan(chan); ++ ++ vchan_synchronize(&sdmac->vc); ++ ++ flush_work(&sdmac->terminate_worker); ++} ++ + static void sdma_set_watermarklevel_for_p2p(struct sdma_channel *sdmac) + { + struct sdma_engine *sdma = sdmac->sdma; +@@ -1192,10 +1210,11 @@ out: + + static int sdma_alloc_bd(struct sdma_desc *desc) + { ++ u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor); + int ret = 0; + +- desc->bd = dma_pool_alloc(desc->sdmac->bd_pool, GFP_NOWAIT, +- &desc->bd_phys); ++ desc->bd = dma_zalloc_coherent(NULL, bd_size, &desc->bd_phys, ++ GFP_NOWAIT); + if (!desc->bd) { + ret = -ENOMEM; + goto out; +@@ -1206,7 +1225,9 @@ out: + + 
static void sdma_free_bd(struct sdma_desc *desc) + { +- dma_pool_free(desc->sdmac->bd_pool, desc->bd, desc->bd_phys); ++ u32 bd_size = desc->num_bd * sizeof(struct sdma_buffer_descriptor); ++ ++ dma_free_coherent(NULL, bd_size, desc->bd, desc->bd_phys); + } + + static void sdma_desc_free(struct virt_dma_desc *vd) +@@ -1272,10 +1293,6 @@ static int sdma_alloc_chan_resources(struct dma_chan *chan) + if (ret) + goto disable_clk_ahb; + +- sdmac->bd_pool = dma_pool_create("bd_pool", chan->device->dev, +- sizeof(struct sdma_buffer_descriptor), +- 32, 0); +- + return 0; + + disable_clk_ahb: +@@ -1290,7 +1307,9 @@ static void sdma_free_chan_resources(struct dma_chan *chan) + struct sdma_channel *sdmac = to_sdma_chan(chan); + struct sdma_engine *sdma = sdmac->sdma; + +- sdma_disable_channel_with_delay(chan); ++ sdma_disable_channel_async(chan); ++ ++ sdma_channel_synchronize(chan); + + if (sdmac->event_id0) + sdma_event_disable(sdmac, sdmac->event_id0); +@@ -1304,9 +1323,6 @@ static void sdma_free_chan_resources(struct dma_chan *chan) + + clk_disable(sdma->clk_ipg); + clk_disable(sdma->clk_ahb); +- +- dma_pool_destroy(sdmac->bd_pool); +- sdmac->bd_pool = NULL; + } + + static struct sdma_desc *sdma_transfer_init(struct sdma_channel *sdmac, +@@ -1999,6 +2015,8 @@ static int sdma_probe(struct platform_device *pdev) + + sdmac->channel = i; + sdmac->vc.desc_free = sdma_desc_free; ++ INIT_WORK(&sdmac->terminate_worker, ++ sdma_channel_terminate_work); + /* + * Add the channel to the DMAC list. Do not add channel 0 though + * because we need it internally in the SDMA driver. 
This also means +@@ -2050,7 +2068,8 @@ static int sdma_probe(struct platform_device *pdev) + sdma->dma_device.device_prep_slave_sg = sdma_prep_slave_sg; + sdma->dma_device.device_prep_dma_cyclic = sdma_prep_dma_cyclic; + sdma->dma_device.device_config = sdma_config; +- sdma->dma_device.device_terminate_all = sdma_disable_channel_with_delay; ++ sdma->dma_device.device_terminate_all = sdma_disable_channel_async; ++ sdma->dma_device.device_synchronize = sdma_channel_synchronize; + sdma->dma_device.src_addr_widths = SDMA_DMA_BUSWIDTHS; + sdma->dma_device.dst_addr_widths = SDMA_DMA_BUSWIDTHS; + sdma->dma_device.directions = SDMA_DMA_DIRECTIONS; +diff --git a/drivers/dma/ti/cppi41.c b/drivers/dma/ti/cppi41.c +index 1497da367710..e507ec36c0d3 100644 +--- a/drivers/dma/ti/cppi41.c ++++ b/drivers/dma/ti/cppi41.c +@@ -723,8 +723,22 @@ static int cppi41_stop_chan(struct dma_chan *chan) + + desc_phys = lower_32_bits(c->desc_phys); + desc_num = (desc_phys - cdd->descs_phys) / sizeof(struct cppi41_desc); +- if (!cdd->chan_busy[desc_num]) ++ if (!cdd->chan_busy[desc_num]) { ++ struct cppi41_channel *cc, *_ct; ++ ++ /* ++ * channels might still be in the pendling list if ++ * cppi41_dma_issue_pending() is called after ++ * cppi41_runtime_suspend() is called ++ */ ++ list_for_each_entry_safe(cc, _ct, &cdd->pending, node) { ++ if (cc != c) ++ continue; ++ list_del(&cc->node); ++ break; ++ } + return 0; ++ } + + ret = cppi41_tear_down_chan(c); + if (ret) +diff --git a/drivers/gnss/sirf.c b/drivers/gnss/sirf.c +index 71d014edd167..2c22836d3ffd 100644 +--- a/drivers/gnss/sirf.c ++++ b/drivers/gnss/sirf.c +@@ -168,7 +168,7 @@ static int sirf_set_active(struct sirf_data *data, bool active) + else + timeout = SIRF_HIBERNATE_TIMEOUT; + +- while (retries-- > 0) { ++ do { + sirf_pulse_on_off(data); + ret = sirf_wait_for_power_state(data, active, timeout); + if (ret < 0) { +@@ -179,9 +179,9 @@ static int sirf_set_active(struct sirf_data *data, bool active) + } + + break; +- } ++ } while 
(retries--); + +- if (retries == 0) ++ if (retries < 0) + return -ETIMEDOUT; + + return 0; +diff --git a/drivers/gpio/gpio-mockup.c b/drivers/gpio/gpio-mockup.c +index d66b7a768ecd..945bd13e5e79 100644 +--- a/drivers/gpio/gpio-mockup.c ++++ b/drivers/gpio/gpio-mockup.c +@@ -32,8 +32,8 @@ + #define gpio_mockup_err(...) pr_err(GPIO_MOCKUP_NAME ": " __VA_ARGS__) + + enum { +- GPIO_MOCKUP_DIR_OUT = 0, +- GPIO_MOCKUP_DIR_IN = 1, ++ GPIO_MOCKUP_DIR_IN = 0, ++ GPIO_MOCKUP_DIR_OUT = 1, + }; + + /* +@@ -135,7 +135,7 @@ static int gpio_mockup_get_direction(struct gpio_chip *gc, unsigned int offset) + { + struct gpio_mockup_chip *chip = gpiochip_get_data(gc); + +- return chip->lines[offset].dir; ++ return !chip->lines[offset].dir; + } + + static int gpio_mockup_to_irq(struct gpio_chip *gc, unsigned int offset) +diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c +index c18712dabf93..9f3f166f1760 100644 +--- a/drivers/gpio/gpio-pxa.c ++++ b/drivers/gpio/gpio-pxa.c +@@ -268,8 +268,8 @@ static int pxa_gpio_direction_input(struct gpio_chip *chip, unsigned offset) + + if (pxa_gpio_has_pinctrl()) { + ret = pinctrl_gpio_direction_input(chip->base + offset); +- if (!ret) +- return 0; ++ if (ret) ++ return ret; + } + + spin_lock_irqsave(&gpio_lock, flags); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c +index 6748cd7fc129..686a26de50f9 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c +@@ -626,6 +626,13 @@ int amdgpu_display_modeset_create_props(struct amdgpu_device *adev) + "dither", + amdgpu_dither_enum_list, sz); + ++ if (amdgpu_device_has_dc_support(adev)) { ++ adev->mode_info.max_bpc_property = ++ drm_property_create_range(adev->ddev, 0, "max bpc", 8, 16); ++ if (!adev->mode_info.max_bpc_property) ++ return -ENOMEM; ++ } ++ + return 0; + } + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h +index 
b9e9e8b02fb7..d1b4d9b6aae0 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h +@@ -339,6 +339,8 @@ struct amdgpu_mode_info { + struct drm_property *audio_property; + /* FMT dithering */ + struct drm_property *dither_property; ++ /* maximum number of bits per channel for monitor color */ ++ struct drm_property *max_bpc_property; + /* hardcoded DFP edid from BIOS */ + struct edid *bios_hardcoded_edid; + int bios_hardcoded_edid_size; +diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c +index 9333109b210d..1a744f964b30 100644 +--- a/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v8_0.c +@@ -55,6 +55,9 @@ MODULE_FIRMWARE("amdgpu/tonga_mc.bin"); + MODULE_FIRMWARE("amdgpu/polaris11_mc.bin"); + MODULE_FIRMWARE("amdgpu/polaris10_mc.bin"); + MODULE_FIRMWARE("amdgpu/polaris12_mc.bin"); ++MODULE_FIRMWARE("amdgpu/polaris11_k_mc.bin"); ++MODULE_FIRMWARE("amdgpu/polaris10_k_mc.bin"); ++MODULE_FIRMWARE("amdgpu/polaris12_k_mc.bin"); + + static const u32 golden_settings_tonga_a11[] = + { +@@ -223,13 +226,39 @@ static int gmc_v8_0_init_microcode(struct amdgpu_device *adev) + chip_name = "tonga"; + break; + case CHIP_POLARIS11: +- chip_name = "polaris11"; ++ if (((adev->pdev->device == 0x67ef) && ++ ((adev->pdev->revision == 0xe0) || ++ (adev->pdev->revision == 0xe5))) || ++ ((adev->pdev->device == 0x67ff) && ++ ((adev->pdev->revision == 0xcf) || ++ (adev->pdev->revision == 0xef) || ++ (adev->pdev->revision == 0xff)))) ++ chip_name = "polaris11_k"; ++ else if ((adev->pdev->device == 0x67ef) && ++ (adev->pdev->revision == 0xe2)) ++ chip_name = "polaris11_k"; ++ else ++ chip_name = "polaris11"; + break; + case CHIP_POLARIS10: +- chip_name = "polaris10"; ++ if ((adev->pdev->device == 0x67df) && ++ ((adev->pdev->revision == 0xe1) || ++ (adev->pdev->revision == 0xf7))) ++ chip_name = "polaris10_k"; ++ else ++ chip_name = "polaris10"; + break; + case CHIP_POLARIS12: +- 
chip_name = "polaris12"; ++ if (((adev->pdev->device == 0x6987) && ++ ((adev->pdev->revision == 0xc0) || ++ (adev->pdev->revision == 0xc3))) || ++ ((adev->pdev->device == 0x6981) && ++ ((adev->pdev->revision == 0x00) || ++ (adev->pdev->revision == 0x01) || ++ (adev->pdev->revision == 0x10)))) ++ chip_name = "polaris12_k"; ++ else ++ chip_name = "polaris12"; + break; + case CHIP_FIJI: + case CHIP_CARRIZO: +@@ -336,7 +365,7 @@ static int gmc_v8_0_polaris_mc_load_microcode(struct amdgpu_device *adev) + const struct mc_firmware_header_v1_0 *hdr; + const __le32 *fw_data = NULL; + const __le32 *io_mc_regs = NULL; +- u32 data, vbios_version; ++ u32 data; + int i, ucode_size, regs_size; + + /* Skip MC ucode loading on SR-IOV capable boards. +@@ -347,13 +376,6 @@ static int gmc_v8_0_polaris_mc_load_microcode(struct amdgpu_device *adev) + if (amdgpu_sriov_bios(adev)) + return 0; + +- WREG32(mmMC_SEQ_IO_DEBUG_INDEX, 0x9F); +- data = RREG32(mmMC_SEQ_IO_DEBUG_DATA); +- vbios_version = data & 0xf; +- +- if (vbios_version == 0) +- return 0; +- + if (!adev->gmc.fw) + return -EINVAL; + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index ef5c6af4d964..299def84e69c 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -2213,8 +2213,15 @@ static void update_stream_scaling_settings(const struct drm_display_mode *mode, + static enum dc_color_depth + convert_color_depth_from_display_info(const struct drm_connector *connector) + { ++ struct dm_connector_state *dm_conn_state = ++ to_dm_connector_state(connector->state); + uint32_t bpc = connector->display_info.bpc; + ++ /* TODO: Remove this when there's support for max_bpc in drm */ ++ if (dm_conn_state && bpc > dm_conn_state->max_bpc) ++ /* Round down to nearest even number. 
*/ ++ bpc = dm_conn_state->max_bpc - (dm_conn_state->max_bpc & 1); ++ + switch (bpc) { + case 0: + /* Temporary Work around, DRM don't parse color depth for +@@ -2796,6 +2803,9 @@ int amdgpu_dm_connector_atomic_set_property(struct drm_connector *connector, + } else if (property == adev->mode_info.underscan_property) { + dm_new_state->underscan_enable = val; + ret = 0; ++ } else if (property == adev->mode_info.max_bpc_property) { ++ dm_new_state->max_bpc = val; ++ ret = 0; + } + + return ret; +@@ -2838,6 +2848,9 @@ int amdgpu_dm_connector_atomic_get_property(struct drm_connector *connector, + } else if (property == adev->mode_info.underscan_property) { + *val = dm_state->underscan_enable; + ret = 0; ++ } else if (property == adev->mode_info.max_bpc_property) { ++ *val = dm_state->max_bpc; ++ ret = 0; + } + return ret; + } +@@ -3658,6 +3671,9 @@ void amdgpu_dm_connector_init_helper(struct amdgpu_display_manager *dm, + drm_object_attach_property(&aconnector->base.base, + adev->mode_info.underscan_vborder_property, + 0); ++ drm_object_attach_property(&aconnector->base.base, ++ adev->mode_info.max_bpc_property, ++ 0); + + } + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h +index aba2c5c1d2f8..74aedcffc4bb 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h +@@ -213,6 +213,7 @@ struct dm_connector_state { + enum amdgpu_rmx_type scaling; + uint8_t underscan_vborder; + uint8_t underscan_hborder; ++ uint8_t max_bpc; + bool underscan_enable; + struct mod_freesync_user_enable user_enable; + bool freesync_capable; +diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h +index 40179c5fc6b8..8750f3f02b3f 100644 +--- a/drivers/gpu/drm/drm_internal.h ++++ b/drivers/gpu/drm/drm_internal.h +@@ -99,6 +99,8 @@ struct device *drm_sysfs_minor_alloc(struct drm_minor *minor); + int drm_sysfs_connector_add(struct drm_connector 
*connector); + void drm_sysfs_connector_remove(struct drm_connector *connector); + ++void drm_sysfs_lease_event(struct drm_device *dev); ++ + /* drm_gem.c */ + int drm_gem_init(struct drm_device *dev); + void drm_gem_destroy(struct drm_device *dev); +diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c +index b82da96ded5c..fe6bfaf8b53f 100644 +--- a/drivers/gpu/drm/drm_lease.c ++++ b/drivers/gpu/drm/drm_lease.c +@@ -296,7 +296,7 @@ void drm_lease_destroy(struct drm_master *master) + + if (master->lessor) { + /* Tell the master to check the lessee list */ +- drm_sysfs_hotplug_event(dev); ++ drm_sysfs_lease_event(dev); + drm_master_put(&master->lessor); + } + +diff --git a/drivers/gpu/drm/drm_sysfs.c b/drivers/gpu/drm/drm_sysfs.c +index b3c1daad1169..ecb7b33002bb 100644 +--- a/drivers/gpu/drm/drm_sysfs.c ++++ b/drivers/gpu/drm/drm_sysfs.c +@@ -301,6 +301,16 @@ void drm_sysfs_connector_remove(struct drm_connector *connector) + connector->kdev = NULL; + } + ++void drm_sysfs_lease_event(struct drm_device *dev) ++{ ++ char *event_string = "LEASE=1"; ++ char *envp[] = { event_string, NULL }; ++ ++ DRM_DEBUG("generating lease event\n"); ++ ++ kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE, envp); ++} ++ + /** + * drm_sysfs_hotplug_event - generate a DRM uevent + * @dev: DRM device +diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c +index c3a64d6a18df..425df814de75 100644 +--- a/drivers/gpu/drm/i915/intel_pm.c ++++ b/drivers/gpu/drm/i915/intel_pm.c +@@ -2951,8 +2951,8 @@ static void intel_print_wm_latency(struct drm_i915_private *dev_priv, + unsigned int latency = wm[level]; + + if (latency == 0) { +- DRM_ERROR("%s WM%d latency not provided\n", +- name, level); ++ DRM_DEBUG_KMS("%s WM%d latency not provided\n", ++ name, level); + continue; + } + +diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c +index 7bd83e0afa97..c3ae7507d1c7 100644 +--- a/drivers/gpu/drm/msm/msm_gem_submit.c 
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c +@@ -410,7 +410,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, + struct msm_file_private *ctx = file->driver_priv; + struct msm_gem_submit *submit; + struct msm_gpu *gpu = priv->gpu; +- struct dma_fence *in_fence = NULL; + struct sync_file *sync_file = NULL; + struct msm_gpu_submitqueue *queue; + struct msm_ringbuffer *ring; +@@ -443,6 +442,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, + ring = gpu->rb[queue->prio]; + + if (args->flags & MSM_SUBMIT_FENCE_FD_IN) { ++ struct dma_fence *in_fence; ++ + in_fence = sync_file_get_fence(args->fence_fd); + + if (!in_fence) +@@ -452,11 +453,13 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, + * Wait if the fence is from a foreign context, or if the fence + * array contains any fence from a foreign context. + */ +- if (!dma_fence_match_context(in_fence, ring->fctx->context)) { ++ ret = 0; ++ if (!dma_fence_match_context(in_fence, ring->fctx->context)) + ret = dma_fence_wait(in_fence, true); +- if (ret) +- return ret; +- } ++ ++ dma_fence_put(in_fence); ++ if (ret) ++ return ret; + } + + ret = mutex_lock_interruptible(&dev->struct_mutex); +@@ -582,8 +585,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, + } + + out: +- if (in_fence) +- dma_fence_put(in_fence); + submit_cleanup(submit); + if (ret) + msm_gem_submit_free(submit); +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index 501c05cbec7e..46182d4dd1ce 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -271,6 +271,9 @@ + + #define USB_VENDOR_ID_CIDC 0x1677 + ++#define I2C_VENDOR_ID_CIRQUE 0x0488 ++#define I2C_PRODUCT_ID_CIRQUE_121F 0x121F ++ + #define USB_VENDOR_ID_CJTOUCH 0x24b8 + #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020 + #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040 +@@ -931,6 +934,10 @@ + #define USB_VENDOR_ID_REALTEK 0x0bda + #define USB_DEVICE_ID_REALTEK_READER 0x0152 + ++#define USB_VENDOR_ID_RETROUSB 
0xf000 ++#define USB_DEVICE_ID_RETROUSB_SNES_RETROPAD 0x0003 ++#define USB_DEVICE_ID_RETROUSB_SNES_RETROPORT 0x00f1 ++ + #define USB_VENDOR_ID_ROCCAT 0x1e7d + #define USB_DEVICE_ID_ROCCAT_ARVO 0x30d4 + #define USB_DEVICE_ID_ROCCAT_ISKU 0x319c +@@ -1038,6 +1045,7 @@ + #define USB_VENDOR_ID_SYMBOL 0x05e0 + #define USB_DEVICE_ID_SYMBOL_SCANNER_1 0x0800 + #define USB_DEVICE_ID_SYMBOL_SCANNER_2 0x1300 ++#define USB_DEVICE_ID_SYMBOL_SCANNER_3 0x1200 + + #define USB_VENDOR_ID_SYNAPTICS 0x06cb + #define USB_DEVICE_ID_SYNAPTICS_TP 0x0001 +diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c +index a481eaf39e88..a3916e58dbf5 100644 +--- a/drivers/hid/hid-input.c ++++ b/drivers/hid/hid-input.c +@@ -325,6 +325,9 @@ static const struct hid_device_id hid_battery_quirks[] = { + { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, + USB_DEVICE_ID_ELECOM_BM084), + HID_BATTERY_QUIRK_IGNORE }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_SYMBOL, ++ USB_DEVICE_ID_SYMBOL_SCANNER_3), ++ HID_BATTERY_QUIRK_IGNORE }, + {} + }; + +diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c +index da954f3f4da7..2faf5421fdd0 100644 +--- a/drivers/hid/hid-multitouch.c ++++ b/drivers/hid/hid-multitouch.c +@@ -1822,6 +1822,12 @@ static const struct hid_device_id mt_devices[] = { + MT_USB_DEVICE(USB_VENDOR_ID_CHUNGHWAT, + USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH) }, + ++ /* Cirque devices */ ++ { .driver_data = MT_CLS_WIN_8_DUAL, ++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8, ++ I2C_VENDOR_ID_CIRQUE, ++ I2C_PRODUCT_ID_CIRQUE_121F) }, ++ + /* CJTouch panels */ + { .driver_data = MT_CLS_NSMU, + MT_USB_DEVICE(USB_VENDOR_ID_CJTOUCH, +diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c +index 0a0605a7e481..77316f022c5a 100644 +--- a/drivers/hid/hid-quirks.c ++++ b/drivers/hid/hid-quirks.c +@@ -136,6 +136,8 @@ static const struct hid_device_id hid_quirks[] = { + { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003), HID_QUIRK_NOGET }, + { 
HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008), HID_QUIRK_NOGET }, + { HID_USB_DEVICE(USB_VENDOR_ID_REALTEK, USB_DEVICE_ID_REALTEK_READER), HID_QUIRK_NO_INIT_REPORTS }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPAD), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_RETROUSB, USB_DEVICE_ID_RETROUSB_SNES_RETROPORT), HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE }, + { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RUMBLEPAD), HID_QUIRK_BADPAD }, + { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD2), HID_QUIRK_NO_INIT_REPORTS }, + { HID_USB_DEVICE(USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD), HID_QUIRK_NO_INIT_REPORTS }, +diff --git a/drivers/hid/uhid.c b/drivers/hid/uhid.c +index 051639c09f72..840634e0f1e3 100644 +--- a/drivers/hid/uhid.c ++++ b/drivers/hid/uhid.c +@@ -497,12 +497,13 @@ static int uhid_dev_create2(struct uhid_device *uhid, + goto err_free; + } + +- len = min(sizeof(hid->name), sizeof(ev->u.create2.name)); +- strlcpy(hid->name, ev->u.create2.name, len); +- len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)); +- strlcpy(hid->phys, ev->u.create2.phys, len); +- len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)); +- strlcpy(hid->uniq, ev->u.create2.uniq, len); ++ /* @hid is zero-initialized, strncpy() is correct, strlcpy() not */ ++ len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1; ++ strncpy(hid->name, ev->u.create2.name, len); ++ len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1; ++ strncpy(hid->phys, ev->u.create2.phys, len); ++ len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1; ++ strncpy(hid->uniq, ev->u.create2.uniq, len); + + hid->ll_driver = &uhid_hid_driver; + hid->bus = ev->u.create2.bus; +diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c +index c4a1ebcfffb6..16eb9b3f1cb1 100644 +--- a/drivers/hv/channel_mgmt.c ++++ b/drivers/hv/channel_mgmt.c +@@ -447,61 
+447,16 @@ void vmbus_free_channels(void) + } + } + +-/* +- * vmbus_process_offer - Process the offer by creating a channel/device +- * associated with this offer +- */ +-static void vmbus_process_offer(struct vmbus_channel *newchannel) ++/* Note: the function can run concurrently for primary/sub channels. */ ++static void vmbus_add_channel_work(struct work_struct *work) + { +- struct vmbus_channel *channel; +- bool fnew = true; ++ struct vmbus_channel *newchannel = ++ container_of(work, struct vmbus_channel, add_channel_work); ++ struct vmbus_channel *primary_channel = newchannel->primary_channel; + unsigned long flags; + u16 dev_type; + int ret; + +- /* Make sure this is a new offer */ +- mutex_lock(&vmbus_connection.channel_mutex); +- +- /* +- * Now that we have acquired the channel_mutex, +- * we can release the potentially racing rescind thread. +- */ +- atomic_dec(&vmbus_connection.offer_in_progress); +- +- list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) { +- if (!uuid_le_cmp(channel->offermsg.offer.if_type, +- newchannel->offermsg.offer.if_type) && +- !uuid_le_cmp(channel->offermsg.offer.if_instance, +- newchannel->offermsg.offer.if_instance)) { +- fnew = false; +- break; +- } +- } +- +- if (fnew) +- list_add_tail(&newchannel->listentry, +- &vmbus_connection.chn_list); +- +- mutex_unlock(&vmbus_connection.channel_mutex); +- +- if (!fnew) { +- /* +- * Check to see if this is a sub-channel. +- */ +- if (newchannel->offermsg.offer.sub_channel_index != 0) { +- /* +- * Process the sub-channel. 
+- */ +- newchannel->primary_channel = channel; +- spin_lock_irqsave(&channel->lock, flags); +- list_add_tail(&newchannel->sc_list, &channel->sc_list); +- channel->num_sc++; +- spin_unlock_irqrestore(&channel->lock, flags); +- } else { +- goto err_free_chan; +- } +- } +- + dev_type = hv_get_dev_type(newchannel); + + init_vp_index(newchannel, dev_type); +@@ -519,27 +474,26 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel) + /* + * This state is used to indicate a successful open + * so that when we do close the channel normally, we +- * can cleanup properly ++ * can cleanup properly. + */ + newchannel->state = CHANNEL_OPEN_STATE; + +- if (!fnew) { +- struct hv_device *dev +- = newchannel->primary_channel->device_obj; ++ if (primary_channel != NULL) { ++ /* newchannel is a sub-channel. */ ++ struct hv_device *dev = primary_channel->device_obj; + + if (vmbus_add_channel_kobj(dev, newchannel)) +- goto err_free_chan; ++ goto err_deq_chan; ++ ++ if (primary_channel->sc_creation_callback != NULL) ++ primary_channel->sc_creation_callback(newchannel); + +- if (channel->sc_creation_callback != NULL) +- channel->sc_creation_callback(newchannel); + newchannel->probe_done = true; + return; + } + + /* +- * Start the process of binding this offer to the driver +- * We need to set the DeviceObject field before calling +- * vmbus_child_dev_add() ++ * Start the process of binding the primary channel to the driver + */ + newchannel->device_obj = vmbus_device_create( + &newchannel->offermsg.offer.if_type, +@@ -568,13 +522,28 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel) + + err_deq_chan: + mutex_lock(&vmbus_connection.channel_mutex); +- list_del(&newchannel->listentry); ++ ++ /* ++ * We need to set the flag, otherwise ++ * vmbus_onoffer_rescind() can be blocked. 
++ */ ++ newchannel->probe_done = true; ++ ++ if (primary_channel == NULL) { ++ list_del(&newchannel->listentry); ++ } else { ++ spin_lock_irqsave(&primary_channel->lock, flags); ++ list_del(&newchannel->sc_list); ++ spin_unlock_irqrestore(&primary_channel->lock, flags); ++ } ++ + mutex_unlock(&vmbus_connection.channel_mutex); + + if (newchannel->target_cpu != get_cpu()) { + put_cpu(); + smp_call_function_single(newchannel->target_cpu, +- percpu_channel_deq, newchannel, true); ++ percpu_channel_deq, ++ newchannel, true); + } else { + percpu_channel_deq(newchannel); + put_cpu(); +@@ -582,14 +551,104 @@ err_deq_chan: + + vmbus_release_relid(newchannel->offermsg.child_relid); + +-err_free_chan: + free_channel(newchannel); + } + ++/* ++ * vmbus_process_offer - Process the offer by creating a channel/device ++ * associated with this offer ++ */ ++static void vmbus_process_offer(struct vmbus_channel *newchannel) ++{ ++ struct vmbus_channel *channel; ++ struct workqueue_struct *wq; ++ unsigned long flags; ++ bool fnew = true; ++ ++ mutex_lock(&vmbus_connection.channel_mutex); ++ ++ /* ++ * Now that we have acquired the channel_mutex, ++ * we can release the potentially racing rescind thread. ++ */ ++ atomic_dec(&vmbus_connection.offer_in_progress); ++ ++ list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) { ++ if (!uuid_le_cmp(channel->offermsg.offer.if_type, ++ newchannel->offermsg.offer.if_type) && ++ !uuid_le_cmp(channel->offermsg.offer.if_instance, ++ newchannel->offermsg.offer.if_instance)) { ++ fnew = false; ++ break; ++ } ++ } ++ ++ if (fnew) ++ list_add_tail(&newchannel->listentry, ++ &vmbus_connection.chn_list); ++ else { ++ /* ++ * Check to see if this is a valid sub-channel. ++ */ ++ if (newchannel->offermsg.offer.sub_channel_index == 0) { ++ mutex_unlock(&vmbus_connection.channel_mutex); ++ /* ++ * Don't call free_channel(), because newchannel->kobj ++ * is not initialized yet. 
++ */ ++ kfree(newchannel); ++ WARN_ON_ONCE(1); ++ return; ++ } ++ /* ++ * Process the sub-channel. ++ */ ++ newchannel->primary_channel = channel; ++ spin_lock_irqsave(&channel->lock, flags); ++ list_add_tail(&newchannel->sc_list, &channel->sc_list); ++ spin_unlock_irqrestore(&channel->lock, flags); ++ } ++ ++ mutex_unlock(&vmbus_connection.channel_mutex); ++ ++ /* ++ * vmbus_process_offer() mustn't call channel->sc_creation_callback() ++ * directly for sub-channels, because sc_creation_callback() -> ++ * vmbus_open() may never get the host's response to the ++ * OPEN_CHANNEL message (the host may rescind a channel at any time, ++ * e.g. in the case of hot removing a NIC), and vmbus_onoffer_rescind() ++ * may not wake up the vmbus_open() as it's blocked due to a non-zero ++ * vmbus_connection.offer_in_progress, and finally we have a deadlock. ++ * ++ * The above is also true for primary channels, if the related device ++ * drivers use sync probing mode by default. ++ * ++ * And, usually the handling of primary channels and sub-channels can ++ * depend on each other, so we should offload them to different ++ * workqueues to avoid possible deadlock, e.g. in sync-probing mode, ++ * NIC1's netvsc_subchan_work() can race with NIC2's netvsc_probe() -> ++ * rtnl_lock(), and causes deadlock: the former gets the rtnl_lock ++ * and waits for all the sub-channels to appear, but the latter ++ * can't get the rtnl_lock and this blocks the handling of ++ * sub-channels. ++ */ ++ INIT_WORK(&newchannel->add_channel_work, vmbus_add_channel_work); ++ wq = fnew ? vmbus_connection.handle_primary_chan_wq : ++ vmbus_connection.handle_sub_chan_wq; ++ queue_work(wq, &newchannel->add_channel_work); ++} ++ + /* + * We use this state to statically distribute the channel interrupt load. 
+ */ + static int next_numa_node_id; ++/* ++ * init_vp_index() accesses global variables like next_numa_node_id, and ++ * it can run concurrently for primary channels and sub-channels: see ++ * vmbus_process_offer(), so we need the lock to protect the global ++ * variables. ++ */ ++static DEFINE_SPINLOCK(bind_channel_to_cpu_lock); + + /* + * Starting with Win8, we can statically distribute the incoming +@@ -625,6 +684,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type) + return; + } + ++ spin_lock(&bind_channel_to_cpu_lock); ++ + /* + * Based on the channel affinity policy, we will assign the NUMA + * nodes. +@@ -707,6 +768,8 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type) + channel->target_cpu = cur_cpu; + channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu); + ++ spin_unlock(&bind_channel_to_cpu_lock); ++ + free_cpumask_var(available_mask); + } + +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c +index f4d08c8ac7f8..4fe117b761ce 100644 +--- a/drivers/hv/connection.c ++++ b/drivers/hv/connection.c +@@ -190,6 +190,20 @@ int vmbus_connect(void) + goto cleanup; + } + ++ vmbus_connection.handle_primary_chan_wq = ++ create_workqueue("hv_pri_chan"); ++ if (!vmbus_connection.handle_primary_chan_wq) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ ++ vmbus_connection.handle_sub_chan_wq = ++ create_workqueue("hv_sub_chan"); ++ if (!vmbus_connection.handle_sub_chan_wq) { ++ ret = -ENOMEM; ++ goto cleanup; ++ } ++ + INIT_LIST_HEAD(&vmbus_connection.chn_msg_list); + spin_lock_init(&vmbus_connection.channelmsg_lock); + +@@ -280,10 +294,14 @@ void vmbus_disconnect(void) + */ + vmbus_initiate_unload(false); + +- if (vmbus_connection.work_queue) { +- drain_workqueue(vmbus_connection.work_queue); ++ if (vmbus_connection.handle_sub_chan_wq) ++ destroy_workqueue(vmbus_connection.handle_sub_chan_wq); ++ ++ if (vmbus_connection.handle_primary_chan_wq) ++ destroy_workqueue(vmbus_connection.handle_primary_chan_wq); ++ ++ if 
(vmbus_connection.work_queue) + destroy_workqueue(vmbus_connection.work_queue); +- } + + if (vmbus_connection.int_page) { + free_pages((unsigned long)vmbus_connection.int_page, 0); +diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h +index 72eaba3d50fc..87d3d7da78f8 100644 +--- a/drivers/hv/hyperv_vmbus.h ++++ b/drivers/hv/hyperv_vmbus.h +@@ -335,7 +335,14 @@ struct vmbus_connection { + struct list_head chn_list; + struct mutex channel_mutex; + ++ /* ++ * An offer message is handled first on the work_queue, and then ++ * is further handled on handle_primary_chan_wq or ++ * handle_sub_chan_wq. ++ */ + struct workqueue_struct *work_queue; ++ struct workqueue_struct *handle_primary_chan_wq; ++ struct workqueue_struct *handle_sub_chan_wq; + }; + + +diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c +index 84b3e4445d46..e062ab9687c7 100644 +--- a/drivers/iommu/amd_iommu_init.c ++++ b/drivers/iommu/amd_iommu_init.c +@@ -797,7 +797,8 @@ static int iommu_init_ga_log(struct amd_iommu *iommu) + entry = iommu_virt_to_phys(iommu->ga_log) | GA_LOG_SIZE_512; + memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_BASE_OFFSET, + &entry, sizeof(entry)); +- entry = (iommu_virt_to_phys(iommu->ga_log) & 0xFFFFFFFFFFFFFULL) & ~7ULL; ++ entry = (iommu_virt_to_phys(iommu->ga_log_tail) & ++ (BIT_ULL(52)-1)) & ~7ULL; + memcpy_toio(iommu->mmio_base + MMIO_GA_LOG_TAIL_OFFSET, + &entry, sizeof(entry)); + writel(0x00, iommu->mmio_base + MMIO_GA_HEAD_OFFSET); +diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c +index bedc801b06a0..a76c47f20587 100644 +--- a/drivers/iommu/intel-iommu.c ++++ b/drivers/iommu/intel-iommu.c +@@ -3100,7 +3100,7 @@ static int copy_context_table(struct intel_iommu *iommu, + } + + if (old_ce) +- iounmap(old_ce); ++ memunmap(old_ce); + + ret = 0; + if (devfn < 0x80) +diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c +index 4a03e5090952..188f4eaed6e5 100644 +--- a/drivers/iommu/intel-svm.c ++++ 
b/drivers/iommu/intel-svm.c +@@ -596,7 +596,7 @@ static irqreturn_t prq_event_thread(int irq, void *d) + pr_err("%s: Page request without PASID: %08llx %08llx\n", + iommu->name, ((unsigned long long *)req)[0], + ((unsigned long long *)req)[1]); +- goto bad_req; ++ goto no_pasid; + } + + if (!svm || svm->pasid != req->pasid) { +diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c +index 22b94f8a9a04..d8598e44e381 100644 +--- a/drivers/iommu/ipmmu-vmsa.c ++++ b/drivers/iommu/ipmmu-vmsa.c +@@ -501,6 +501,9 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain) + + static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain) + { ++ if (!domain->mmu) ++ return; ++ + /* + * Disable the context. Flush the TLB as required when modifying the + * context registers. +diff --git a/drivers/media/cec/cec-adap.c b/drivers/media/cec/cec-adap.c +index dd8bad74a1f0..a537e518384b 100644 +--- a/drivers/media/cec/cec-adap.c ++++ b/drivers/media/cec/cec-adap.c +@@ -1167,6 +1167,8 @@ static int cec_config_log_addr(struct cec_adapter *adap, + { + struct cec_log_addrs *las = &adap->log_addrs; + struct cec_msg msg = { }; ++ const unsigned int max_retries = 2; ++ unsigned int i; + int err; + + if (cec_has_log_addr(adap, log_addr)) +@@ -1175,19 +1177,44 @@ static int cec_config_log_addr(struct cec_adapter *adap, + /* Send poll message */ + msg.len = 1; + msg.msg[0] = (log_addr << 4) | log_addr; +- err = cec_transmit_msg_fh(adap, &msg, NULL, true); + +- /* +- * While trying to poll the physical address was reset +- * and the adapter was unconfigured, so bail out. +- */ +- if (!adap->is_configuring) +- return -EINTR; ++ for (i = 0; i < max_retries; i++) { ++ err = cec_transmit_msg_fh(adap, &msg, NULL, true); + +- if (err) +- return err; ++ /* ++ * While trying to poll the physical address was reset ++ * and the adapter was unconfigured, so bail out. 
++ */ ++ if (!adap->is_configuring) ++ return -EINTR; ++ ++ if (err) ++ return err; + +- if (msg.tx_status & CEC_TX_STATUS_OK) ++ /* ++ * The message was aborted due to a disconnect or ++ * unconfigure, just bail out. ++ */ ++ if (msg.tx_status & CEC_TX_STATUS_ABORTED) ++ return -EINTR; ++ if (msg.tx_status & CEC_TX_STATUS_OK) ++ return 0; ++ if (msg.tx_status & CEC_TX_STATUS_NACK) ++ break; ++ /* ++ * Retry up to max_retries times if the message was neither ++ * OKed or NACKed. This can happen due to e.g. a Lost ++ * Arbitration condition. ++ */ ++ } ++ ++ /* ++ * If we are unable to get an OK or a NACK after max_retries attempts ++ * (and note that each attempt already consists of four polls), then ++ * then we assume that something is really weird and that it is not a ++ * good idea to try and claim this logical address. ++ */ ++ if (i == max_retries) + return 0; + + /* +diff --git a/drivers/media/dvb-frontends/dvb-pll.c b/drivers/media/dvb-frontends/dvb-pll.c +index 6d4b2eec67b4..29836c1a40e9 100644 +--- a/drivers/media/dvb-frontends/dvb-pll.c ++++ b/drivers/media/dvb-frontends/dvb-pll.c +@@ -80,8 +80,8 @@ struct dvb_pll_desc { + + static const struct dvb_pll_desc dvb_pll_thomson_dtt7579 = { + .name = "Thomson dtt7579", +- .min = 177000000, +- .max = 858000000, ++ .min = 177 * MHz, ++ .max = 858 * MHz, + .iffreq= 36166667, + .sleepdata = (u8[]){ 2, 0xb4, 0x03 }, + .count = 4, +@@ -102,8 +102,8 @@ static void thomson_dtt759x_bw(struct dvb_frontend *fe, u8 *buf) + + static const struct dvb_pll_desc dvb_pll_thomson_dtt759x = { + .name = "Thomson dtt759x", +- .min = 177000000, +- .max = 896000000, ++ .min = 177 * MHz, ++ .max = 896 * MHz, + .set = thomson_dtt759x_bw, + .iffreq= 36166667, + .sleepdata = (u8[]){ 2, 0x84, 0x03 }, +@@ -126,8 +126,8 @@ static void thomson_dtt7520x_bw(struct dvb_frontend *fe, u8 *buf) + + static const struct dvb_pll_desc dvb_pll_thomson_dtt7520x = { + .name = "Thomson dtt7520x", +- .min = 185000000, +- .max = 900000000, ++ .min = 185 * 
MHz, ++ .max = 900 * MHz, + .set = thomson_dtt7520x_bw, + .iffreq = 36166667, + .count = 7, +@@ -144,8 +144,8 @@ static const struct dvb_pll_desc dvb_pll_thomson_dtt7520x = { + + static const struct dvb_pll_desc dvb_pll_lg_z201 = { + .name = "LG z201", +- .min = 174000000, +- .max = 862000000, ++ .min = 174 * MHz, ++ .max = 862 * MHz, + .iffreq= 36166667, + .sleepdata = (u8[]){ 2, 0xbc, 0x03 }, + .count = 5, +@@ -160,8 +160,8 @@ static const struct dvb_pll_desc dvb_pll_lg_z201 = { + + static const struct dvb_pll_desc dvb_pll_unknown_1 = { + .name = "unknown 1", /* used by dntv live dvb-t */ +- .min = 174000000, +- .max = 862000000, ++ .min = 174 * MHz, ++ .max = 862 * MHz, + .iffreq= 36166667, + .count = 9, + .entries = { +@@ -182,8 +182,8 @@ static const struct dvb_pll_desc dvb_pll_unknown_1 = { + */ + static const struct dvb_pll_desc dvb_pll_tua6010xs = { + .name = "Infineon TUA6010XS", +- .min = 44250000, +- .max = 858000000, ++ .min = 44250 * kHz, ++ .max = 858 * MHz, + .iffreq= 36125000, + .count = 3, + .entries = { +@@ -196,8 +196,8 @@ static const struct dvb_pll_desc dvb_pll_tua6010xs = { + /* Panasonic env57h1xd5 (some Philips PLL ?) 
*/ + static const struct dvb_pll_desc dvb_pll_env57h1xd5 = { + .name = "Panasonic ENV57H1XD5", +- .min = 44250000, +- .max = 858000000, ++ .min = 44250 * kHz, ++ .max = 858 * MHz, + .iffreq= 36125000, + .count = 4, + .entries = { +@@ -220,8 +220,8 @@ static void tda665x_bw(struct dvb_frontend *fe, u8 *buf) + + static const struct dvb_pll_desc dvb_pll_tda665x = { + .name = "Philips TDA6650/TDA6651", +- .min = 44250000, +- .max = 858000000, ++ .min = 44250 * kHz, ++ .max = 858 * MHz, + .set = tda665x_bw, + .iffreq= 36166667, + .initdata = (u8[]){ 4, 0x0b, 0xf5, 0x85, 0xab }, +@@ -254,8 +254,8 @@ static void tua6034_bw(struct dvb_frontend *fe, u8 *buf) + + static const struct dvb_pll_desc dvb_pll_tua6034 = { + .name = "Infineon TUA6034", +- .min = 44250000, +- .max = 858000000, ++ .min = 44250 * kHz, ++ .max = 858 * MHz, + .iffreq= 36166667, + .count = 3, + .set = tua6034_bw, +@@ -278,8 +278,8 @@ static void tded4_bw(struct dvb_frontend *fe, u8 *buf) + + static const struct dvb_pll_desc dvb_pll_tded4 = { + .name = "ALPS TDED4", +- .min = 47000000, +- .max = 863000000, ++ .min = 47 * MHz, ++ .max = 863 * MHz, + .iffreq= 36166667, + .set = tded4_bw, + .count = 4, +@@ -296,8 +296,8 @@ static const struct dvb_pll_desc dvb_pll_tded4 = { + */ + static const struct dvb_pll_desc dvb_pll_tdhu2 = { + .name = "ALPS TDHU2", +- .min = 54000000, +- .max = 864000000, ++ .min = 54 * MHz, ++ .max = 864 * MHz, + .iffreq= 44000000, + .count = 4, + .entries = { +@@ -313,8 +313,8 @@ static const struct dvb_pll_desc dvb_pll_tdhu2 = { + */ + static const struct dvb_pll_desc dvb_pll_samsung_tbmv = { + .name = "Samsung TBMV30111IN / TBMV30712IN1", +- .min = 54000000, +- .max = 860000000, ++ .min = 54 * MHz, ++ .max = 860 * MHz, + .iffreq= 44000000, + .count = 6, + .entries = { +@@ -332,8 +332,8 @@ static const struct dvb_pll_desc dvb_pll_samsung_tbmv = { + */ + static const struct dvb_pll_desc dvb_pll_philips_sd1878_tda8261 = { + .name = "Philips SD1878", +- .min = 950000, +- .max = 2150000, 
++ .min = 950 * MHz, ++ .max = 2150 * MHz, + .iffreq= 249, /* zero-IF, offset 249 is to round up */ + .count = 4, + .entries = { +@@ -398,8 +398,8 @@ static void opera1_bw(struct dvb_frontend *fe, u8 *buf) + + static const struct dvb_pll_desc dvb_pll_opera1 = { + .name = "Opera Tuner", +- .min = 900000, +- .max = 2250000, ++ .min = 900 * MHz, ++ .max = 2250 * MHz, + .initdata = (u8[]){ 4, 0x08, 0xe5, 0xe1, 0x00 }, + .initdata2 = (u8[]){ 4, 0x08, 0xe5, 0xe5, 0x00 }, + .iffreq= 0, +@@ -445,8 +445,8 @@ static void samsung_dtos403ih102a_set(struct dvb_frontend *fe, u8 *buf) + /* unknown pll used in Samsung DTOS403IH102A DVB-C tuner */ + static const struct dvb_pll_desc dvb_pll_samsung_dtos403ih102a = { + .name = "Samsung DTOS403IH102A", +- .min = 44250000, +- .max = 858000000, ++ .min = 44250 * kHz, ++ .max = 858 * MHz, + .iffreq = 36125000, + .count = 8, + .set = samsung_dtos403ih102a_set, +@@ -465,8 +465,8 @@ static const struct dvb_pll_desc dvb_pll_samsung_dtos403ih102a = { + /* Samsung TDTC9251DH0 DVB-T NIM, as used on AirStar 2 */ + static const struct dvb_pll_desc dvb_pll_samsung_tdtc9251dh0 = { + .name = "Samsung TDTC9251DH0", +- .min = 48000000, +- .max = 863000000, ++ .min = 48 * MHz, ++ .max = 863 * MHz, + .iffreq = 36166667, + .count = 3, + .entries = { +@@ -479,8 +479,8 @@ static const struct dvb_pll_desc dvb_pll_samsung_tdtc9251dh0 = { + /* Samsung TBDU18132 DVB-S NIM with TSA5059 PLL, used in SkyStar2 DVB-S 2.3 */ + static const struct dvb_pll_desc dvb_pll_samsung_tbdu18132 = { + .name = "Samsung TBDU18132", +- .min = 950000, +- .max = 2150000, /* guesses */ ++ .min = 950 * MHz, ++ .max = 2150 * MHz, /* guesses */ + .iffreq = 0, + .count = 2, + .entries = { +@@ -500,8 +500,8 @@ static const struct dvb_pll_desc dvb_pll_samsung_tbdu18132 = { + /* Samsung TBMU24112 DVB-S NIM with SL1935 zero-IF tuner */ + static const struct dvb_pll_desc dvb_pll_samsung_tbmu24112 = { + .name = "Samsung TBMU24112", +- .min = 950000, +- .max = 2150000, /* guesses */ ++ .min = 
950 * MHz, ++ .max = 2150 * MHz, /* guesses */ + .iffreq = 0, + .count = 2, + .entries = { +@@ -521,8 +521,8 @@ static const struct dvb_pll_desc dvb_pll_samsung_tbmu24112 = { + * 822 - 862 1 * 0 0 1 0 0 0 0x88 */ + static const struct dvb_pll_desc dvb_pll_alps_tdee4 = { + .name = "ALPS TDEE4", +- .min = 47000000, +- .max = 862000000, ++ .min = 47 * MHz, ++ .max = 862 * MHz, + .iffreq = 36125000, + .count = 4, + .entries = { +@@ -537,8 +537,8 @@ static const struct dvb_pll_desc dvb_pll_alps_tdee4 = { + /* CP cur. 50uA, AGC takeover: 103dBuV, PORT3 on */ + static const struct dvb_pll_desc dvb_pll_tua6034_friio = { + .name = "Infineon TUA6034 ISDB-T (Friio)", +- .min = 90000000, +- .max = 770000000, ++ .min = 90 * MHz, ++ .max = 770 * MHz, + .iffreq = 57000000, + .initdata = (u8[]){ 4, 0x9a, 0x50, 0xb2, 0x08 }, + .sleepdata = (u8[]){ 4, 0x9a, 0x70, 0xb3, 0x0b }, +@@ -553,8 +553,8 @@ static const struct dvb_pll_desc dvb_pll_tua6034_friio = { + /* Philips TDA6651 ISDB-T, used in Earthsoft PT1 */ + static const struct dvb_pll_desc dvb_pll_tda665x_earth_pt1 = { + .name = "Philips TDA6651 ISDB-T (EarthSoft PT1)", +- .min = 90000000, +- .max = 770000000, ++ .min = 90 * MHz, ++ .max = 770 * MHz, + .iffreq = 57000000, + .initdata = (u8[]){ 5, 0x0e, 0x7f, 0xc1, 0x80, 0x80 }, + .count = 10, +@@ -610,9 +610,6 @@ static int dvb_pll_configure(struct dvb_frontend *fe, u8 *buf, + u32 div; + int i; + +- if (frequency && (frequency < desc->min || frequency > desc->max)) +- return -EINVAL; +- + for (i = 0; i < desc->count; i++) { + if (frequency > desc->entries[i].limit) + continue; +@@ -799,7 +796,6 @@ struct dvb_frontend *dvb_pll_attach(struct dvb_frontend *fe, int pll_addr, + struct dvb_pll_priv *priv = NULL; + int ret; + const struct dvb_pll_desc *desc; +- struct dtv_frontend_properties *c = &fe->dtv_property_cache; + + b1 = kmalloc(1, GFP_KERNEL); + if (!b1) +@@ -845,18 +841,12 @@ struct dvb_frontend *dvb_pll_attach(struct dvb_frontend *fe, int pll_addr, + + 
strncpy(fe->ops.tuner_ops.info.name, desc->name, + sizeof(fe->ops.tuner_ops.info.name)); +- switch (c->delivery_system) { +- case SYS_DVBS: +- case SYS_DVBS2: +- case SYS_TURBO: +- case SYS_ISDBS: +- fe->ops.tuner_ops.info.frequency_min_hz = desc->min * kHz; +- fe->ops.tuner_ops.info.frequency_max_hz = desc->max * kHz; +- break; +- default: +- fe->ops.tuner_ops.info.frequency_min_hz = desc->min; +- fe->ops.tuner_ops.info.frequency_max_hz = desc->max; +- } ++ ++ fe->ops.tuner_ops.info.frequency_min_hz = desc->min; ++ fe->ops.tuner_ops.info.frequency_max_hz = desc->max; ++ ++ dprintk("%s tuner, frequency range: %u...%u\n", ++ desc->name, desc->min, desc->max); + + if (!desc->initdata) + fe->ops.tuner_ops.init = NULL; +diff --git a/drivers/media/pci/intel/ipu3/ipu3-cio2.c b/drivers/media/pci/intel/ipu3/ipu3-cio2.c +index 29027159eced..ca1a4d8e972e 100644 +--- a/drivers/media/pci/intel/ipu3/ipu3-cio2.c ++++ b/drivers/media/pci/intel/ipu3/ipu3-cio2.c +@@ -1846,12 +1846,12 @@ static void cio2_pci_remove(struct pci_dev *pci_dev) + struct cio2_device *cio2 = pci_get_drvdata(pci_dev); + unsigned int i; + ++ media_device_unregister(&cio2->media_dev); + cio2_notifier_exit(cio2); +- cio2_fbpt_exit_dummy(cio2); + for (i = 0; i < CIO2_QUEUES; i++) + cio2_queue_exit(cio2, &cio2->queue[i]); ++ cio2_fbpt_exit_dummy(cio2); + v4l2_device_unregister(&cio2->v4l2_dev); +- media_device_unregister(&cio2->media_dev); + media_device_cleanup(&cio2->media_dev); + mutex_destroy(&cio2->lock); + } +diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c +index 842e2235047d..432bc7fbedc9 100644 +--- a/drivers/media/platform/omap3isp/isp.c ++++ b/drivers/media/platform/omap3isp/isp.c +@@ -1587,6 +1587,8 @@ static void isp_pm_complete(struct device *dev) + + static void isp_unregister_entities(struct isp_device *isp) + { ++ media_device_unregister(&isp->media_dev); ++ + omap3isp_csi2_unregister_entities(&isp->isp_csi2a); + 
omap3isp_ccp2_unregister_entities(&isp->isp_ccp2); + omap3isp_ccdc_unregister_entities(&isp->isp_ccdc); +@@ -1597,7 +1599,6 @@ static void isp_unregister_entities(struct isp_device *isp) + omap3isp_stat_unregister_entities(&isp->isp_hist); + + v4l2_device_unregister(&isp->v4l2_dev); +- media_device_unregister(&isp->media_dev); + media_device_cleanup(&isp->media_dev); + } + +diff --git a/drivers/media/platform/vicodec/vicodec-core.c b/drivers/media/platform/vicodec/vicodec-core.c +index 408cd55d3580..7a33a52eacca 100644 +--- a/drivers/media/platform/vicodec/vicodec-core.c ++++ b/drivers/media/platform/vicodec/vicodec-core.c +@@ -42,7 +42,7 @@ MODULE_PARM_DESC(debug, " activates debug info"); + #define MAX_WIDTH 4096U + #define MIN_WIDTH 640U + #define MAX_HEIGHT 2160U +-#define MIN_HEIGHT 480U ++#define MIN_HEIGHT 360U + + #define dprintk(dev, fmt, arg...) \ + v4l2_dbg(1, debug, &dev->v4l2_dev, "%s: " fmt, __func__, ## arg) +@@ -438,7 +438,8 @@ restart: + for (; p < p_out + sz; p++) { + u32 copy; + +- p = memchr(p, magic[ctx->comp_magic_cnt], sz); ++ p = memchr(p, magic[ctx->comp_magic_cnt], ++ p_out + sz - p); + if (!p) { + ctx->comp_magic_cnt = 0; + break; +diff --git a/drivers/media/usb/gspca/gspca.c b/drivers/media/usb/gspca/gspca.c +index 57aa521e16b1..405a6a76d820 100644 +--- a/drivers/media/usb/gspca/gspca.c ++++ b/drivers/media/usb/gspca/gspca.c +@@ -426,10 +426,10 @@ void gspca_frame_add(struct gspca_dev *gspca_dev, + + /* append the packet to the frame buffer */ + if (len > 0) { +- if (gspca_dev->image_len + len > gspca_dev->pixfmt.sizeimage) { ++ if (gspca_dev->image_len + len > PAGE_ALIGN(gspca_dev->pixfmt.sizeimage)) { + gspca_err(gspca_dev, "frame overflow %d > %d\n", + gspca_dev->image_len + len, +- gspca_dev->pixfmt.sizeimage); ++ PAGE_ALIGN(gspca_dev->pixfmt.sizeimage)); + packet_type = DISCARD_PACKET; + } else { + /* !! 
image is NULL only when last pkt is LAST or DISCARD +@@ -1297,18 +1297,19 @@ static int gspca_queue_setup(struct vb2_queue *vq, + unsigned int sizes[], struct device *alloc_devs[]) + { + struct gspca_dev *gspca_dev = vb2_get_drv_priv(vq); ++ unsigned int size = PAGE_ALIGN(gspca_dev->pixfmt.sizeimage); + + if (*nplanes) +- return sizes[0] < gspca_dev->pixfmt.sizeimage ? -EINVAL : 0; ++ return sizes[0] < size ? -EINVAL : 0; + *nplanes = 1; +- sizes[0] = gspca_dev->pixfmt.sizeimage; ++ sizes[0] = size; + return 0; + } + + static int gspca_buffer_prepare(struct vb2_buffer *vb) + { + struct gspca_dev *gspca_dev = vb2_get_drv_priv(vb->vb2_queue); +- unsigned long size = gspca_dev->pixfmt.sizeimage; ++ unsigned long size = PAGE_ALIGN(gspca_dev->pixfmt.sizeimage); + + if (vb2_plane_size(vb, 0) < size) { + gspca_err(gspca_dev, "buffer too small (%lu < %lu)\n", +diff --git a/drivers/mfd/cros_ec_dev.c b/drivers/mfd/cros_ec_dev.c +index 999dac752bcc..6b22d54a540d 100644 +--- a/drivers/mfd/cros_ec_dev.c ++++ b/drivers/mfd/cros_ec_dev.c +@@ -263,6 +263,11 @@ static const struct file_operations fops = { + #endif + }; + ++static void cros_ec_class_release(struct device *dev) ++{ ++ kfree(to_cros_ec_dev(dev)); ++} ++ + static void cros_ec_sensors_register(struct cros_ec_dev *ec) + { + /* +@@ -395,7 +400,7 @@ static int ec_device_probe(struct platform_device *pdev) + int retval = -ENOMEM; + struct device *dev = &pdev->dev; + struct cros_ec_platform *ec_platform = dev_get_platdata(dev); +- struct cros_ec_dev *ec = devm_kzalloc(dev, sizeof(*ec), GFP_KERNEL); ++ struct cros_ec_dev *ec = kzalloc(sizeof(*ec), GFP_KERNEL); + + if (!ec) + return retval; +@@ -417,6 +422,7 @@ static int ec_device_probe(struct platform_device *pdev) + ec->class_dev.devt = MKDEV(ec_major, pdev->id); + ec->class_dev.class = &cros_class; + ec->class_dev.parent = dev; ++ ec->class_dev.release = cros_ec_class_release; + + retval = dev_set_name(&ec->class_dev, "%s", ec_platform->ec_name); + if (retval) { +diff 
--git a/drivers/mtd/nand/raw/qcom_nandc.c b/drivers/mtd/nand/raw/qcom_nandc.c +index d1d470bb32e4..8815f3e2b718 100644 +--- a/drivers/mtd/nand/raw/qcom_nandc.c ++++ b/drivers/mtd/nand/raw/qcom_nandc.c +@@ -151,15 +151,15 @@ + #define NAND_VERSION_MINOR_SHIFT 16 + + /* NAND OP_CMDs */ +-#define PAGE_READ 0x2 +-#define PAGE_READ_WITH_ECC 0x3 +-#define PAGE_READ_WITH_ECC_SPARE 0x4 +-#define PROGRAM_PAGE 0x6 +-#define PAGE_PROGRAM_WITH_ECC 0x7 +-#define PROGRAM_PAGE_SPARE 0x9 +-#define BLOCK_ERASE 0xa +-#define FETCH_ID 0xb +-#define RESET_DEVICE 0xd ++#define OP_PAGE_READ 0x2 ++#define OP_PAGE_READ_WITH_ECC 0x3 ++#define OP_PAGE_READ_WITH_ECC_SPARE 0x4 ++#define OP_PROGRAM_PAGE 0x6 ++#define OP_PAGE_PROGRAM_WITH_ECC 0x7 ++#define OP_PROGRAM_PAGE_SPARE 0x9 ++#define OP_BLOCK_ERASE 0xa ++#define OP_FETCH_ID 0xb ++#define OP_RESET_DEVICE 0xd + + /* Default Value for NAND_DEV_CMD_VLD */ + #define NAND_DEV_CMD_VLD_VAL (READ_START_VLD | WRITE_START_VLD | \ +@@ -692,11 +692,11 @@ static void update_rw_regs(struct qcom_nand_host *host, int num_cw, bool read) + + if (read) { + if (host->use_ecc) +- cmd = PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; ++ cmd = OP_PAGE_READ_WITH_ECC | PAGE_ACC | LAST_PAGE; + else +- cmd = PAGE_READ | PAGE_ACC | LAST_PAGE; ++ cmd = OP_PAGE_READ | PAGE_ACC | LAST_PAGE; + } else { +- cmd = PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; ++ cmd = OP_PROGRAM_PAGE | PAGE_ACC | LAST_PAGE; + } + + if (host->use_ecc) { +@@ -1170,7 +1170,7 @@ static int nandc_param(struct qcom_nand_host *host) + * in use. 
we configure the controller to perform a raw read of 512 + * bytes to read onfi params + */ +- nandc_set_reg(nandc, NAND_FLASH_CMD, PAGE_READ | PAGE_ACC | LAST_PAGE); ++ nandc_set_reg(nandc, NAND_FLASH_CMD, OP_PAGE_READ | PAGE_ACC | LAST_PAGE); + nandc_set_reg(nandc, NAND_ADDR0, 0); + nandc_set_reg(nandc, NAND_ADDR1, 0); + nandc_set_reg(nandc, NAND_DEV0_CFG0, 0 << CW_PER_PAGE +@@ -1224,7 +1224,7 @@ static int erase_block(struct qcom_nand_host *host, int page_addr) + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); + + nandc_set_reg(nandc, NAND_FLASH_CMD, +- BLOCK_ERASE | PAGE_ACC | LAST_PAGE); ++ OP_BLOCK_ERASE | PAGE_ACC | LAST_PAGE); + nandc_set_reg(nandc, NAND_ADDR0, page_addr); + nandc_set_reg(nandc, NAND_ADDR1, 0); + nandc_set_reg(nandc, NAND_DEV0_CFG0, +@@ -1255,7 +1255,7 @@ static int read_id(struct qcom_nand_host *host, int column) + if (column == -1) + return 0; + +- nandc_set_reg(nandc, NAND_FLASH_CMD, FETCH_ID); ++ nandc_set_reg(nandc, NAND_FLASH_CMD, OP_FETCH_ID); + nandc_set_reg(nandc, NAND_ADDR0, column); + nandc_set_reg(nandc, NAND_ADDR1, 0); + nandc_set_reg(nandc, NAND_FLASH_CHIP_SELECT, +@@ -1276,7 +1276,7 @@ static int reset(struct qcom_nand_host *host) + struct nand_chip *chip = &host->chip; + struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip); + +- nandc_set_reg(nandc, NAND_FLASH_CMD, RESET_DEVICE); ++ nandc_set_reg(nandc, NAND_FLASH_CMD, OP_RESET_DEVICE); + nandc_set_reg(nandc, NAND_EXEC_CMD, 1); + + write_reg_dma(nandc, NAND_FLASH_CMD, 1, NAND_BAM_NEXT_SGL); +diff --git a/drivers/mtd/spi-nor/cadence-quadspi.c b/drivers/mtd/spi-nor/cadence-quadspi.c +index 6e9cbd1a0b6d..0806c7a81c0f 100644 +--- a/drivers/mtd/spi-nor/cadence-quadspi.c ++++ b/drivers/mtd/spi-nor/cadence-quadspi.c +@@ -644,9 +644,23 @@ static int cqspi_indirect_write_execute(struct spi_nor *nor, loff_t to_addr, + ndelay(cqspi->wr_delay); + + while (remaining > 0) { ++ size_t write_words, mod_bytes; ++ + write_bytes = remaining > page_size ? 
page_size : remaining; +- iowrite32_rep(cqspi->ahb_base, txbuf, +- DIV_ROUND_UP(write_bytes, 4)); ++ write_words = write_bytes / 4; ++ mod_bytes = write_bytes % 4; ++ /* Write 4 bytes at a time then single bytes. */ ++ if (write_words) { ++ iowrite32_rep(cqspi->ahb_base, txbuf, write_words); ++ txbuf += (write_words * 4); ++ } ++ if (mod_bytes) { ++ unsigned int temp = 0xFFFFFFFF; ++ ++ memcpy(&temp, txbuf, mod_bytes); ++ iowrite32(temp, cqspi->ahb_base); ++ txbuf += mod_bytes; ++ } + + if (!wait_for_completion_timeout(&cqspi->transfer_complete, + msecs_to_jiffies(CQSPI_TIMEOUT_MS))) { +@@ -655,7 +669,6 @@ static int cqspi_indirect_write_execute(struct spi_nor *nor, loff_t to_addr, + goto failwr; + } + +- txbuf += write_bytes; + remaining -= write_bytes; + + if (remaining > 0) +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index 11662f479e76..771a46083739 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -24,6 +24,9 @@ + + #define RCAR_CAN_DRV_NAME "rcar_can" + ++#define RCAR_SUPPORTED_CLOCKS (BIT(CLKR_CLKP1) | BIT(CLKR_CLKP2) | \ ++ BIT(CLKR_CLKEXT)) ++ + /* Mailbox configuration: + * mailbox 60 - 63 - Rx FIFO mailboxes + * mailbox 56 - 59 - Tx FIFO mailboxes +@@ -789,7 +792,7 @@ static int rcar_can_probe(struct platform_device *pdev) + goto fail_clk; + } + +- if (clock_select >= ARRAY_SIZE(clock_names)) { ++ if (!(BIT(clock_select) & RCAR_SUPPORTED_CLOCKS)) { + err = -EINVAL; + dev_err(&pdev->dev, "invalid CAN clock selected\n"); + goto fail_clk; +diff --git a/drivers/net/can/usb/ucan.c b/drivers/net/can/usb/ucan.c +index 0678a38b1af4..c9fd83e8d947 100644 +--- a/drivers/net/can/usb/ucan.c ++++ b/drivers/net/can/usb/ucan.c +@@ -1575,11 +1575,8 @@ err_firmware_needs_update: + /* disconnect the device */ + static void ucan_disconnect(struct usb_interface *intf) + { +- struct usb_device *udev; + struct ucan_priv *up = usb_get_intfdata(intf); + +- udev = interface_to_usbdev(intf); +- + 
usb_set_intfdata(intf, NULL); + + if (up) { +diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c +index d906293ce07d..4b73131a0f20 100644 +--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c ++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c +@@ -2627,8 +2627,8 @@ err_device_destroy: + ena_com_abort_admin_commands(ena_dev); + ena_com_wait_for_abort_completion(ena_dev); + ena_com_admin_destroy(ena_dev); +- ena_com_mmio_reg_read_request_destroy(ena_dev); + ena_com_dev_reset(ena_dev, ENA_REGS_RESET_DRIVER_INVALID_STATE); ++ ena_com_mmio_reg_read_request_destroy(ena_dev); + err: + clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags); + clear_bit(ENA_FLAG_ONGOING_RESET, &adapter->flags); +diff --git a/drivers/net/ethernet/amd/sunlance.c b/drivers/net/ethernet/amd/sunlance.c +index cdd7a611479b..19f89d9b1781 100644 +--- a/drivers/net/ethernet/amd/sunlance.c ++++ b/drivers/net/ethernet/amd/sunlance.c +@@ -1419,7 +1419,7 @@ static int sparc_lance_probe_one(struct platform_device *op, + + prop = of_get_property(nd, "tpe-link-test?", NULL); + if (!prop) +- goto no_link_test; ++ goto node_put; + + if (strcmp(prop, "true")) { + printk(KERN_NOTICE "SunLance: warning: overriding option " +@@ -1428,6 +1428,8 @@ static int sparc_lance_probe_one(struct platform_device *op, + "to ecd@skynet.be\n"); + auxio_set_lte(AUXIO_LTE_ON); + } ++node_put: ++ of_node_put(nd); + no_link_test: + lp->auto_select = 1; + lp->tpe = 0; +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h +index be1506169076..0de487a8f0eb 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h +@@ -2191,6 +2191,13 @@ void bnx2x_igu_clear_sb_gen(struct bnx2x *bp, u8 func, u8 idu_sb_id, + #define PMF_DMAE_C(bp) (BP_PORT(bp) * MAX_DMAE_C_PER_PORT + \ + E1HVN_MAX) + ++/* Following is the DMAE channel number allocation for the clients. 
++ * MFW: OCBB/OCSD implementations use DMAE channels 14/15 respectively. ++ * Driver: 0-3 and 8-11 (for PF dmae operations) ++ * 4 and 12 (for stats requests) ++ */ ++#define BNX2X_FW_DMAE_C 13 /* Channel for FW DMAE operations */ ++ + /* PCIE link and speed */ + #define PCICFG_LINK_WIDTH 0x1f00000 + #define PCICFG_LINK_WIDTH_SHIFT 20 +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c +index 3f4d2c8da21a..a9eaaf3e73a4 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_sp.c +@@ -6149,6 +6149,7 @@ static inline int bnx2x_func_send_start(struct bnx2x *bp, + rdata->sd_vlan_tag = cpu_to_le16(start_params->sd_vlan_tag); + rdata->path_id = BP_PATH(bp); + rdata->network_cos_mode = start_params->network_cos_mode; ++ rdata->dmae_cmd_id = BNX2X_FW_DMAE_C; + + rdata->vxlan_dst_port = cpu_to_le16(start_params->vxlan_dst_port); + rdata->geneve_dst_port = cpu_to_le16(start_params->geneve_dst_port); +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +index e52d7af3ab3e..da9b87689996 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +@@ -2862,8 +2862,8 @@ bnxt_fill_coredump_record(struct bnxt *bp, struct bnxt_coredump_record *record, + record->asic_state = 0; + strlcpy(record->system_name, utsname()->nodename, + sizeof(record->system_name)); +- record->year = cpu_to_le16(tm.tm_year); +- record->month = cpu_to_le16(tm.tm_mon); ++ record->year = cpu_to_le16(tm.tm_year + 1900); ++ record->month = cpu_to_le16(tm.tm_mon + 1); + record->day = cpu_to_le16(tm.tm_mday); + record->hour = cpu_to_le16(tm.tm_hour); + record->minute = cpu_to_le16(tm.tm_min); +diff --git a/drivers/net/ethernet/faraday/ftmac100.c b/drivers/net/ethernet/faraday/ftmac100.c +index a1197d3adbe0..9015bd911bee 100644 +--- a/drivers/net/ethernet/faraday/ftmac100.c ++++ 
b/drivers/net/ethernet/faraday/ftmac100.c +@@ -872,11 +872,10 @@ static irqreturn_t ftmac100_interrupt(int irq, void *dev_id) + struct net_device *netdev = dev_id; + struct ftmac100 *priv = netdev_priv(netdev); + +- if (likely(netif_running(netdev))) { +- /* Disable interrupts for polling */ +- ftmac100_disable_all_int(priv); ++ /* Disable interrupts for polling */ ++ ftmac100_disable_all_int(priv); ++ if (likely(netif_running(netdev))) + napi_schedule(&priv->napi); +- } + + return IRQ_HANDLED; + } +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c +index 7661064c815b..5ab21a1b5444 100644 +--- a/drivers/net/ethernet/ibm/ibmvnic.c ++++ b/drivers/net/ethernet/ibm/ibmvnic.c +@@ -485,8 +485,8 @@ static void release_rx_pools(struct ibmvnic_adapter *adapter) + + for (j = 0; j < rx_pool->size; j++) { + if (rx_pool->rx_buff[j].skb) { +- dev_kfree_skb_any(rx_pool->rx_buff[i].skb); +- rx_pool->rx_buff[i].skb = NULL; ++ dev_kfree_skb_any(rx_pool->rx_buff[j].skb); ++ rx_pool->rx_buff[j].skb = NULL; + } + } + +@@ -1103,20 +1103,15 @@ static int ibmvnic_open(struct net_device *netdev) + return 0; + } + +- mutex_lock(&adapter->reset_lock); +- + if (adapter->state != VNIC_CLOSED) { + rc = ibmvnic_login(netdev); +- if (rc) { +- mutex_unlock(&adapter->reset_lock); ++ if (rc) + return rc; +- } + + rc = init_resources(adapter); + if (rc) { + netdev_err(netdev, "failed to initialize resources\n"); + release_resources(adapter); +- mutex_unlock(&adapter->reset_lock); + return rc; + } + } +@@ -1124,8 +1119,6 @@ static int ibmvnic_open(struct net_device *netdev) + rc = __ibmvnic_open(netdev); + netif_carrier_on(netdev); + +- mutex_unlock(&adapter->reset_lock); +- + return rc; + } + +@@ -1269,10 +1262,8 @@ static int ibmvnic_close(struct net_device *netdev) + return 0; + } + +- mutex_lock(&adapter->reset_lock); + rc = __ibmvnic_close(netdev); + ibmvnic_cleanup(netdev); +- mutex_unlock(&adapter->reset_lock); + + return rc; + } +@@ -1746,6 +1737,7 @@ static 
int do_reset(struct ibmvnic_adapter *adapter, + struct ibmvnic_rwi *rwi, u32 reset_state) + { + u64 old_num_rx_queues, old_num_tx_queues; ++ u64 old_num_rx_slots, old_num_tx_slots; + struct net_device *netdev = adapter->netdev; + int i, rc; + +@@ -1757,6 +1749,8 @@ static int do_reset(struct ibmvnic_adapter *adapter, + + old_num_rx_queues = adapter->req_rx_queues; + old_num_tx_queues = adapter->req_tx_queues; ++ old_num_rx_slots = adapter->req_rx_add_entries_per_subcrq; ++ old_num_tx_slots = adapter->req_tx_entries_per_subcrq; + + ibmvnic_cleanup(netdev); + +@@ -1819,21 +1813,20 @@ static int do_reset(struct ibmvnic_adapter *adapter, + if (rc) + return rc; + } else if (adapter->req_rx_queues != old_num_rx_queues || +- adapter->req_tx_queues != old_num_tx_queues) { +- adapter->map_id = 1; ++ adapter->req_tx_queues != old_num_tx_queues || ++ adapter->req_rx_add_entries_per_subcrq != ++ old_num_rx_slots || ++ adapter->req_tx_entries_per_subcrq != ++ old_num_tx_slots) { + release_rx_pools(adapter); + release_tx_pools(adapter); +- rc = init_rx_pools(netdev); +- if (rc) +- return rc; +- rc = init_tx_pools(netdev); +- if (rc) +- return rc; +- + release_napi(adapter); +- rc = init_napi(adapter); ++ release_vpd_data(adapter); ++ ++ rc = init_resources(adapter); + if (rc) + return rc; ++ + } else { + rc = reset_tx_pools(adapter); + if (rc) +@@ -1917,17 +1910,8 @@ static int do_hard_reset(struct ibmvnic_adapter *adapter, + adapter->state = VNIC_PROBED; + return 0; + } +- /* netif_set_real_num_xx_queues needs to take rtnl lock here +- * unless wait_for_reset is set, in which case the rtnl lock +- * has already been taken before initializing the reset +- */ +- if (!adapter->wait_for_reset) { +- rtnl_lock(); +- rc = init_resources(adapter); +- rtnl_unlock(); +- } else { +- rc = init_resources(adapter); +- } ++ ++ rc = init_resources(adapter); + if (rc) + return rc; + +@@ -1986,13 +1970,21 @@ static void __ibmvnic_reset(struct work_struct *work) + struct ibmvnic_rwi *rwi; + 
struct ibmvnic_adapter *adapter; + struct net_device *netdev; ++ bool we_lock_rtnl = false; + u32 reset_state; + int rc = 0; + + adapter = container_of(work, struct ibmvnic_adapter, ibmvnic_reset); + netdev = adapter->netdev; + +- mutex_lock(&adapter->reset_lock); ++ /* netif_set_real_num_xx_queues needs to take rtnl lock here ++ * unless wait_for_reset is set, in which case the rtnl lock ++ * has already been taken before initializing the reset ++ */ ++ if (!adapter->wait_for_reset) { ++ rtnl_lock(); ++ we_lock_rtnl = true; ++ } + reset_state = adapter->state; + + rwi = get_next_rwi(adapter); +@@ -2020,12 +2012,11 @@ static void __ibmvnic_reset(struct work_struct *work) + if (rc) { + netdev_dbg(adapter->netdev, "Reset failed\n"); + free_all_rwi(adapter); +- mutex_unlock(&adapter->reset_lock); +- return; + } + + adapter->resetting = false; +- mutex_unlock(&adapter->reset_lock); ++ if (we_lock_rtnl) ++ rtnl_unlock(); + } + + static int ibmvnic_reset(struct ibmvnic_adapter *adapter, +@@ -4709,7 +4700,6 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id) + + INIT_WORK(&adapter->ibmvnic_reset, __ibmvnic_reset); + INIT_LIST_HEAD(&adapter->rwi_list); +- mutex_init(&adapter->reset_lock); + mutex_init(&adapter->rwi_lock); + adapter->resetting = false; + +@@ -4781,8 +4771,8 @@ static int ibmvnic_remove(struct vio_dev *dev) + struct ibmvnic_adapter *adapter = netdev_priv(netdev); + + adapter->state = VNIC_REMOVING; +- unregister_netdev(netdev); +- mutex_lock(&adapter->reset_lock); ++ rtnl_lock(); ++ unregister_netdevice(netdev); + + release_resources(adapter); + release_sub_crqs(adapter, 1); +@@ -4793,7 +4783,7 @@ static int ibmvnic_remove(struct vio_dev *dev) + + adapter->state = VNIC_REMOVED; + +- mutex_unlock(&adapter->reset_lock); ++ rtnl_unlock(); + device_remove_file(&dev->dev, &dev_attr_failover); + free_netdev(netdev); + dev_set_drvdata(&dev->dev, NULL); +diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h 
+index f06eec145ca6..735f481b1870 100644 +--- a/drivers/net/ethernet/ibm/ibmvnic.h ++++ b/drivers/net/ethernet/ibm/ibmvnic.h +@@ -1068,7 +1068,7 @@ struct ibmvnic_adapter { + struct tasklet_struct tasklet; + enum vnic_state state; + enum ibmvnic_reset_reason reset_reason; +- struct mutex reset_lock, rwi_lock; ++ struct mutex rwi_lock; + struct list_head rwi_list; + struct work_struct ibmvnic_reset; + bool resetting; +diff --git a/drivers/net/ethernet/mellanox/mlx4/alloc.c b/drivers/net/ethernet/mellanox/mlx4/alloc.c +index 4bdf25059542..21788d4f9881 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/alloc.c ++++ b/drivers/net/ethernet/mellanox/mlx4/alloc.c +@@ -337,7 +337,7 @@ void mlx4_zone_allocator_destroy(struct mlx4_zone_allocator *zone_alloc) + static u32 __mlx4_alloc_from_zone(struct mlx4_zone_entry *zone, int count, + int align, u32 skip_mask, u32 *puid) + { +- u32 uid; ++ u32 uid = 0; + u32 res; + struct mlx4_zone_allocator *zone_alloc = zone->allocator; + struct mlx4_zone_entry *curr_node; +diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h +index ebcd2778eeb3..23f1b5b512c2 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h ++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h +@@ -540,8 +540,8 @@ struct slave_list { + struct resource_allocator { + spinlock_t alloc_lock; /* protect quotas */ + union { +- int res_reserved; +- int res_port_rsvd[MLX4_MAX_PORTS]; ++ unsigned int res_reserved; ++ unsigned int res_port_rsvd[MLX4_MAX_PORTS]; + }; + union { + int res_free; +diff --git a/drivers/net/ethernet/mellanox/mlx4/mr.c b/drivers/net/ethernet/mellanox/mlx4/mr.c +index 2e84f10f59ba..1a11bc0e1612 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/mr.c ++++ b/drivers/net/ethernet/mellanox/mlx4/mr.c +@@ -363,6 +363,7 @@ int mlx4_mr_hw_write_mpt(struct mlx4_dev *dev, struct mlx4_mr *mmr, + container_of((void *)mpt_entry, struct mlx4_cmd_mailbox, + buf); + ++ (*mpt_entry)->lkey = 0; + err = mlx4_SW2HW_MPT(dev, mailbox, 
key); + } + +diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c +index f5459de6d60a..5900a506bf8d 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c +@@ -191,7 +191,7 @@ qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data) + static void + qed_dcbx_set_params(struct qed_dcbx_results *p_data, + struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, +- bool enable, u8 prio, u8 tc, ++ bool app_tlv, bool enable, u8 prio, u8 tc, + enum dcbx_protocol_type type, + enum qed_pci_personality personality) + { +@@ -210,7 +210,7 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data, + p_data->arr[type].dont_add_vlan0 = true; + + /* QM reconf data */ +- if (p_hwfn->hw_info.personality == personality) ++ if (app_tlv && p_hwfn->hw_info.personality == personality) + qed_hw_info_set_offload_tc(&p_hwfn->hw_info, tc); + + /* Configure dcbx vlan priority in doorbell block for roce EDPM */ +@@ -225,7 +225,7 @@ qed_dcbx_set_params(struct qed_dcbx_results *p_data, + static void + qed_dcbx_update_app_info(struct qed_dcbx_results *p_data, + struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, +- bool enable, u8 prio, u8 tc, ++ bool app_tlv, bool enable, u8 prio, u8 tc, + enum dcbx_protocol_type type) + { + enum qed_pci_personality personality; +@@ -240,7 +240,7 @@ qed_dcbx_update_app_info(struct qed_dcbx_results *p_data, + + personality = qed_dcbx_app_update[i].personality; + +- qed_dcbx_set_params(p_data, p_hwfn, p_ptt, enable, ++ qed_dcbx_set_params(p_data, p_hwfn, p_ptt, app_tlv, enable, + prio, tc, type, personality); + } + } +@@ -318,8 +318,8 @@ qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, + enable = true; + } + +- qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, +- priority, tc, type); ++ qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, true, ++ enable, priority, tc, type); + } + } + +@@ -340,7 +340,7 @@ qed_dcbx_process_tlv(struct 
qed_hwfn *p_hwfn, struct qed_ptt *p_ptt, + continue; + + enable = (type == DCBX_PROTOCOL_ETH) ? false : !!dcbx_version; +- qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, enable, ++ qed_dcbx_update_app_info(p_data, p_hwfn, p_ptt, false, enable, + priority, tc, type); + } + +diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c +index 97f073fd3725..2f69ee9221c6 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c +@@ -179,6 +179,10 @@ void qed_resc_free(struct qed_dev *cdev) + qed_iscsi_free(p_hwfn); + qed_ooo_free(p_hwfn); + } ++ ++ if (QED_IS_RDMA_PERSONALITY(p_hwfn)) ++ qed_rdma_info_free(p_hwfn); ++ + qed_iov_free(p_hwfn); + qed_l2_free(p_hwfn); + qed_dmae_info_free(p_hwfn); +@@ -474,8 +478,16 @@ static u16 *qed_init_qm_get_idx_from_flags(struct qed_hwfn *p_hwfn, + struct qed_qm_info *qm_info = &p_hwfn->qm_info; + + /* Can't have multiple flags set here */ +- if (bitmap_weight((unsigned long *)&pq_flags, sizeof(pq_flags)) > 1) ++ if (bitmap_weight((unsigned long *)&pq_flags, ++ sizeof(pq_flags) * BITS_PER_BYTE) > 1) { ++ DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags); ++ goto err; ++ } ++ ++ if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) { ++ DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags); + goto err; ++ } + + switch (pq_flags) { + case PQ_FLAGS_RLS: +@@ -499,8 +511,7 @@ static u16 *qed_init_qm_get_idx_from_flags(struct qed_hwfn *p_hwfn, + } + + err: +- DP_ERR(p_hwfn, "BAD pq flags %d\n", pq_flags); +- return NULL; ++ return &qm_info->start_pq; + } + + /* save pq index in qm info */ +@@ -524,20 +535,32 @@ u16 qed_get_cm_pq_idx_mcos(struct qed_hwfn *p_hwfn, u8 tc) + { + u8 max_tc = qed_init_qm_get_num_tcs(p_hwfn); + ++ if (max_tc == 0) { ++ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", ++ PQ_FLAGS_MCOS); ++ return p_hwfn->qm_info.start_pq; ++ } ++ + if (tc > max_tc) + DP_ERR(p_hwfn, "tc %d must be smaller than %d\n", tc, max_tc); + +- return 
qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + tc; ++ return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_MCOS) + (tc % max_tc); + } + + u16 qed_get_cm_pq_idx_vf(struct qed_hwfn *p_hwfn, u16 vf) + { + u16 max_vf = qed_init_qm_get_num_vfs(p_hwfn); + ++ if (max_vf == 0) { ++ DP_ERR(p_hwfn, "pq with flag 0x%lx do not exist\n", ++ PQ_FLAGS_VFS); ++ return p_hwfn->qm_info.start_pq; ++ } ++ + if (vf > max_vf) + DP_ERR(p_hwfn, "vf %d must be smaller than %d\n", vf, max_vf); + +- return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + vf; ++ return qed_get_cm_pq_idx(p_hwfn, PQ_FLAGS_VFS) + (vf % max_vf); + } + + u16 qed_get_cm_pq_idx_ofld_mtc(struct qed_hwfn *p_hwfn, u8 tc) +@@ -1074,6 +1097,12 @@ int qed_resc_alloc(struct qed_dev *cdev) + goto alloc_err; + } + ++ if (QED_IS_RDMA_PERSONALITY(p_hwfn)) { ++ rc = qed_rdma_info_alloc(p_hwfn); ++ if (rc) ++ goto alloc_err; ++ } ++ + /* DMA info initialization */ + rc = qed_dmae_info_alloc(p_hwfn); + if (rc) +@@ -2091,11 +2120,8 @@ int qed_hw_start_fastpath(struct qed_hwfn *p_hwfn) + if (!p_ptt) + return -EAGAIN; + +- /* If roce info is allocated it means roce is initialized and should +- * be enabled in searcher. 
+- */ + if (p_hwfn->p_rdma_info && +- p_hwfn->b_rdma_enabled_in_prs) ++ p_hwfn->p_rdma_info->active && p_hwfn->b_rdma_enabled_in_prs) + qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0x1); + + /* Re-open incoming traffic */ +diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c +index 0f0aba793352..b22f464ea3fa 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_int.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_int.c +@@ -992,6 +992,8 @@ static int qed_int_attentions(struct qed_hwfn *p_hwfn) + */ + do { + index = p_sb_attn->sb_index; ++ /* finish reading index before the loop condition */ ++ dma_rmb(); + attn_bits = le32_to_cpu(p_sb_attn->atten_bits); + attn_acks = le32_to_cpu(p_sb_attn->atten_ack); + } while (index != p_sb_attn->sb_index); +diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c +index 2094d86a7a08..cf3b0e3dc350 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_main.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c +@@ -1634,9 +1634,9 @@ static int qed_drain(struct qed_dev *cdev) + return -EBUSY; + } + rc = qed_mcp_drain(hwfn, ptt); ++ qed_ptt_release(hwfn, ptt); + if (rc) + return rc; +- qed_ptt_release(hwfn, ptt); + } + + return 0; +diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.c b/drivers/net/ethernet/qlogic/qed/qed_rdma.c +index 62113438c880..7873d6dfd91f 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_rdma.c ++++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.c +@@ -140,22 +140,34 @@ static u32 qed_rdma_get_sb_id(void *p_hwfn, u32 rel_sb_id) + return FEAT_NUM((struct qed_hwfn *)p_hwfn, QED_PF_L2_QUE) + rel_sb_id; + } + +-static int qed_rdma_alloc(struct qed_hwfn *p_hwfn, +- struct qed_ptt *p_ptt, +- struct qed_rdma_start_in_params *params) ++int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) + { + struct qed_rdma_info *p_rdma_info; +- u32 num_cons, num_tasks; +- int rc = -ENOMEM; + +- DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n"); +- +- /* 
Allocate a struct with current pf rdma info */ + p_rdma_info = kzalloc(sizeof(*p_rdma_info), GFP_KERNEL); + if (!p_rdma_info) +- return rc; ++ return -ENOMEM; ++ ++ spin_lock_init(&p_rdma_info->lock); + + p_hwfn->p_rdma_info = p_rdma_info; ++ return 0; ++} ++ ++void qed_rdma_info_free(struct qed_hwfn *p_hwfn) ++{ ++ kfree(p_hwfn->p_rdma_info); ++ p_hwfn->p_rdma_info = NULL; ++} ++ ++static int qed_rdma_alloc(struct qed_hwfn *p_hwfn) ++{ ++ struct qed_rdma_info *p_rdma_info = p_hwfn->p_rdma_info; ++ u32 num_cons, num_tasks; ++ int rc = -ENOMEM; ++ ++ DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "Allocating RDMA\n"); ++ + if (QED_IS_IWARP_PERSONALITY(p_hwfn)) + p_rdma_info->proto = PROTOCOLID_IWARP; + else +@@ -183,7 +195,7 @@ static int qed_rdma_alloc(struct qed_hwfn *p_hwfn, + /* Allocate a struct with device params and fill it */ + p_rdma_info->dev = kzalloc(sizeof(*p_rdma_info->dev), GFP_KERNEL); + if (!p_rdma_info->dev) +- goto free_rdma_info; ++ return rc; + + /* Allocate a struct with port params and fill it */ + p_rdma_info->port = kzalloc(sizeof(*p_rdma_info->port), GFP_KERNEL); +@@ -298,8 +310,6 @@ free_rdma_port: + kfree(p_rdma_info->port); + free_rdma_dev: + kfree(p_rdma_info->dev); +-free_rdma_info: +- kfree(p_rdma_info); + + return rc; + } +@@ -370,8 +380,6 @@ static void qed_rdma_resc_free(struct qed_hwfn *p_hwfn) + + kfree(p_rdma_info->port); + kfree(p_rdma_info->dev); +- +- kfree(p_rdma_info); + } + + static void qed_rdma_free_tid(void *rdma_cxt, u32 itid) +@@ -679,8 +687,6 @@ static int qed_rdma_setup(struct qed_hwfn *p_hwfn, + + DP_VERBOSE(p_hwfn, QED_MSG_RDMA, "RDMA setup\n"); + +- spin_lock_init(&p_hwfn->p_rdma_info->lock); +- + qed_rdma_init_devinfo(p_hwfn, params); + qed_rdma_init_port(p_hwfn); + qed_rdma_init_events(p_hwfn, params); +@@ -727,7 +733,7 @@ static int qed_rdma_stop(void *rdma_cxt) + /* Disable RoCE search */ + qed_wr(p_hwfn, p_ptt, p_hwfn->rdma_prs_search_reg, 0); + p_hwfn->b_rdma_enabled_in_prs = false; +- ++ p_hwfn->p_rdma_info->active = 0; 
+ qed_wr(p_hwfn, p_ptt, PRS_REG_ROCE_DEST_QP_MAX_PF, 0); + + ll2_ethertype_en = qed_rd(p_hwfn, p_ptt, PRS_REG_LIGHT_L2_ETHERTYPE_EN); +@@ -1236,7 +1242,8 @@ qed_rdma_create_qp(void *rdma_cxt, + u8 max_stats_queues; + int rc; + +- if (!rdma_cxt || !in_params || !out_params || !p_hwfn->p_rdma_info) { ++ if (!rdma_cxt || !in_params || !out_params || ++ !p_hwfn->p_rdma_info->active) { + DP_ERR(p_hwfn->cdev, + "qed roce create qp failed due to NULL entry (rdma_cxt=%p, in=%p, out=%p, roce_info=?\n", + rdma_cxt, in_params, out_params); +@@ -1802,8 +1809,8 @@ bool qed_rdma_allocated_qps(struct qed_hwfn *p_hwfn) + { + bool result; + +- /* if rdma info has not been allocated, naturally there are no qps */ +- if (!p_hwfn->p_rdma_info) ++ /* if rdma wasn't activated yet, naturally there are no qps */ ++ if (!p_hwfn->p_rdma_info->active) + return false; + + spin_lock_bh(&p_hwfn->p_rdma_info->lock); +@@ -1849,7 +1856,7 @@ static int qed_rdma_start(void *rdma_cxt, + if (!p_ptt) + goto err; + +- rc = qed_rdma_alloc(p_hwfn, p_ptt, params); ++ rc = qed_rdma_alloc(p_hwfn); + if (rc) + goto err1; + +@@ -1858,6 +1865,7 @@ static int qed_rdma_start(void *rdma_cxt, + goto err2; + + qed_ptt_release(p_hwfn, p_ptt); ++ p_hwfn->p_rdma_info->active = 1; + + return rc; + +diff --git a/drivers/net/ethernet/qlogic/qed/qed_rdma.h b/drivers/net/ethernet/qlogic/qed/qed_rdma.h +index 6f722ee8ee94..3689fe3e5935 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_rdma.h ++++ b/drivers/net/ethernet/qlogic/qed/qed_rdma.h +@@ -102,6 +102,7 @@ struct qed_rdma_info { + u16 max_queue_zones; + enum protocol_type proto; + struct qed_iwarp_info iwarp; ++ u8 active:1; + }; + + struct qed_rdma_qp { +@@ -176,10 +177,14 @@ struct qed_rdma_qp { + #if IS_ENABLED(CONFIG_QED_RDMA) + void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); + void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt); ++int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn); ++void qed_rdma_info_free(struct qed_hwfn 
*p_hwfn); + #else + static inline void qed_rdma_dpm_conf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt) {} + static inline void qed_rdma_dpm_bar(struct qed_hwfn *p_hwfn, + struct qed_ptt *p_ptt) {} ++static inline int qed_rdma_info_alloc(struct qed_hwfn *p_hwfn) {return -EINVAL;} ++static inline void qed_rdma_info_free(struct qed_hwfn *p_hwfn) {} + #endif + + int +diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c +index d887016e54b6..4b6572f0188a 100644 +--- a/drivers/net/team/team.c ++++ b/drivers/net/team/team.c +@@ -985,8 +985,6 @@ static void team_port_disable(struct team *team, + team->en_port_count--; + team_queue_override_port_del(team, port); + team_adjust_ops(team); +- team_notify_peers(team); +- team_mcast_rejoin(team); + team_lower_state_changed(port); + } + +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c +index e7584b842dce..eb5db94f5745 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmutil/d11.c +@@ -193,6 +193,9 @@ static void brcmu_d11ac_decchspec(struct brcmu_chan *ch) + } + break; + case BRCMU_CHSPEC_D11AC_BW_160: ++ ch->bw = BRCMU_CHAN_BW_160; ++ ch->sb = brcmu_maskget16(ch->chspec, BRCMU_CHSPEC_D11AC_SB_MASK, ++ BRCMU_CHSPEC_D11AC_SB_SHIFT); + switch (ch->sb) { + case BRCMU_CHAN_SB_LLL: + ch->control_ch_num -= CH_70MHZ_APART; +diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c +index 07442ada6dd0..62ab42e94c9d 100644 +--- a/drivers/net/wireless/mac80211_hwsim.c ++++ b/drivers/net/wireless/mac80211_hwsim.c +@@ -2889,6 +2889,10 @@ static int mac80211_hwsim_new_radio(struct genl_info *info, + + wiphy_ext_feature_set(hw->wiphy, NL80211_EXT_FEATURE_CQM_RSSI_LIST); + ++ tasklet_hrtimer_init(&data->beacon_timer, ++ mac80211_hwsim_beacon, ++ CLOCK_MONOTONIC, HRTIMER_MODE_ABS); ++ + err = ieee80211_register_hw(hw); + if (err < 0) { + 
pr_debug("mac80211_hwsim: ieee80211_register_hw failed (%d)\n", +@@ -2913,10 +2917,6 @@ static int mac80211_hwsim_new_radio(struct genl_info *info, + data->debugfs, + data, &hwsim_simulate_radar); + +- tasklet_hrtimer_init(&data->beacon_timer, +- mac80211_hwsim_beacon, +- CLOCK_MONOTONIC, HRTIMER_MODE_ABS); +- + spin_lock_bh(&hwsim_radio_lock); + err = rhashtable_insert_fast(&hwsim_radios_rht, &data->rht, + hwsim_rht_params); +diff --git a/drivers/net/wireless/mediatek/mt76/Kconfig b/drivers/net/wireless/mediatek/mt76/Kconfig +index b6c5f17dca30..27826217ff76 100644 +--- a/drivers/net/wireless/mediatek/mt76/Kconfig ++++ b/drivers/net/wireless/mediatek/mt76/Kconfig +@@ -1,6 +1,12 @@ + config MT76_CORE + tristate + ++config MT76_LEDS ++ bool ++ depends on MT76_CORE ++ depends on LEDS_CLASS=y || MT76_CORE=LEDS_CLASS ++ default y ++ + config MT76_USB + tristate + depends on MT76_CORE +diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c +index 029d54bce9e8..ade4a2029a24 100644 +--- a/drivers/net/wireless/mediatek/mt76/mac80211.c ++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c +@@ -342,9 +342,11 @@ int mt76_register_device(struct mt76_dev *dev, bool vht, + mt76_check_sband(dev, NL80211_BAND_2GHZ); + mt76_check_sband(dev, NL80211_BAND_5GHZ); + +- ret = mt76_led_init(dev); +- if (ret) +- return ret; ++ if (IS_ENABLED(CONFIG_MT76_LEDS)) { ++ ret = mt76_led_init(dev); ++ if (ret) ++ return ret; ++ } + + return ieee80211_register_hw(hw); + } +diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c +index b814391f79ac..03b103c45d69 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt76x2_init.c ++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_init.c +@@ -581,8 +581,10 @@ int mt76x2_register_device(struct mt76x2_dev *dev) + mt76x2_dfs_init_detector(dev); + + /* init led callbacks */ +- dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; +- 
dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; ++ if (IS_ENABLED(CONFIG_MT76_LEDS)) { ++ dev->mt76.led_cdev.brightness_set = mt76x2_led_set_brightness; ++ dev->mt76.led_cdev.blink_set = mt76x2_led_set_blink; ++ } + + ret = mt76_register_device(&dev->mt76, true, mt76x2_rates, + ARRAY_SIZE(mt76x2_rates)); +diff --git a/drivers/nvdimm/nd-core.h b/drivers/nvdimm/nd-core.h +index ac68072fb8cd..5ff254dc9b14 100644 +--- a/drivers/nvdimm/nd-core.h ++++ b/drivers/nvdimm/nd-core.h +@@ -112,6 +112,8 @@ resource_size_t nd_pmem_available_dpa(struct nd_region *nd_region, + struct nd_mapping *nd_mapping, resource_size_t *overlap); + resource_size_t nd_blk_available_dpa(struct nd_region *nd_region); + resource_size_t nd_region_available_dpa(struct nd_region *nd_region); ++int nd_region_conflict(struct nd_region *nd_region, resource_size_t start, ++ resource_size_t size); + resource_size_t nvdimm_allocated_dpa(struct nvdimm_drvdata *ndd, + struct nd_label_id *label_id); + int alias_dpa_busy(struct device *dev, void *data); +diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c +index 3f7ad5bc443e..7fe84bfe0878 100644 +--- a/drivers/nvdimm/pfn_devs.c ++++ b/drivers/nvdimm/pfn_devs.c +@@ -590,14 +590,47 @@ static u64 phys_pmem_align_down(struct nd_pfn *nd_pfn, u64 phys) + ALIGN_DOWN(phys, nd_pfn->align)); + } + ++/* ++ * Check if pmem collides with 'System RAM', or other regions when ++ * section aligned. Trim it accordingly. 
++ */ ++static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trunc) ++{ ++ struct nd_namespace_common *ndns = nd_pfn->ndns; ++ struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev); ++ struct nd_region *nd_region = to_nd_region(nd_pfn->dev.parent); ++ const resource_size_t start = nsio->res.start; ++ const resource_size_t end = start + resource_size(&nsio->res); ++ resource_size_t adjust, size; ++ ++ *start_pad = 0; ++ *end_trunc = 0; ++ ++ adjust = start - PHYS_SECTION_ALIGN_DOWN(start); ++ size = resource_size(&nsio->res) + adjust; ++ if (region_intersects(start - adjust, size, IORESOURCE_SYSTEM_RAM, ++ IORES_DESC_NONE) == REGION_MIXED ++ || nd_region_conflict(nd_region, start - adjust, size)) ++ *start_pad = PHYS_SECTION_ALIGN_UP(start) - start; ++ ++ /* Now check that end of the range does not collide. */ ++ adjust = PHYS_SECTION_ALIGN_UP(end) - end; ++ size = resource_size(&nsio->res) + adjust; ++ if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM, ++ IORES_DESC_NONE) == REGION_MIXED ++ || !IS_ALIGNED(end, nd_pfn->align) ++ || nd_region_conflict(nd_region, start, size + adjust)) ++ *end_trunc = end - phys_pmem_align_down(nd_pfn, end); ++} ++ + static int nd_pfn_init(struct nd_pfn *nd_pfn) + { + u32 dax_label_reserve = is_nd_dax(&nd_pfn->dev) ? 
SZ_128K : 0; + struct nd_namespace_common *ndns = nd_pfn->ndns; +- u32 start_pad = 0, end_trunc = 0; ++ struct nd_namespace_io *nsio = to_nd_namespace_io(&ndns->dev); + resource_size_t start, size; +- struct nd_namespace_io *nsio; + struct nd_region *nd_region; ++ u32 start_pad, end_trunc; + struct nd_pfn_sb *pfn_sb; + unsigned long npfns; + phys_addr_t offset; +@@ -629,30 +662,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) + + memset(pfn_sb, 0, sizeof(*pfn_sb)); + +- /* +- * Check if pmem collides with 'System RAM' when section aligned and +- * trim it accordingly +- */ +- nsio = to_nd_namespace_io(&ndns->dev); +- start = PHYS_SECTION_ALIGN_DOWN(nsio->res.start); +- size = resource_size(&nsio->res); +- if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM, +- IORES_DESC_NONE) == REGION_MIXED) { +- start = nsio->res.start; +- start_pad = PHYS_SECTION_ALIGN_UP(start) - start; +- } +- +- start = nsio->res.start; +- size = PHYS_SECTION_ALIGN_UP(start + size) - start; +- if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM, +- IORES_DESC_NONE) == REGION_MIXED +- || !IS_ALIGNED(start + resource_size(&nsio->res), +- nd_pfn->align)) { +- size = resource_size(&nsio->res); +- end_trunc = start + size - phys_pmem_align_down(nd_pfn, +- start + size); +- } +- ++ trim_pfn_device(nd_pfn, &start_pad, &end_trunc); + if (start_pad + end_trunc) + dev_info(&nd_pfn->dev, "%s alignment collision, truncate %d bytes\n", + dev_name(&ndns->dev), start_pad + end_trunc); +@@ -663,7 +673,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) + * implementation will limit the pfns advertised through + * ->direct_access() to those that are included in the memmap. 
+ */ +- start += start_pad; ++ start = nsio->res.start + start_pad; + size = resource_size(&nsio->res); + npfns = PFN_SECTION_ALIGN_UP((size - start_pad - end_trunc - SZ_8K) + / PAGE_SIZE); +diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c +index 174a418cb171..e7377f1028ef 100644 +--- a/drivers/nvdimm/region_devs.c ++++ b/drivers/nvdimm/region_devs.c +@@ -1184,6 +1184,47 @@ int nvdimm_has_cache(struct nd_region *nd_region) + } + EXPORT_SYMBOL_GPL(nvdimm_has_cache); + ++struct conflict_context { ++ struct nd_region *nd_region; ++ resource_size_t start, size; ++}; ++ ++static int region_conflict(struct device *dev, void *data) ++{ ++ struct nd_region *nd_region; ++ struct conflict_context *ctx = data; ++ resource_size_t res_end, region_end, region_start; ++ ++ if (!is_memory(dev)) ++ return 0; ++ ++ nd_region = to_nd_region(dev); ++ if (nd_region == ctx->nd_region) ++ return 0; ++ ++ res_end = ctx->start + ctx->size; ++ region_start = nd_region->ndr_start; ++ region_end = region_start + nd_region->ndr_size; ++ if (ctx->start >= region_start && ctx->start < region_end) ++ return -EBUSY; ++ if (res_end > region_start && res_end <= region_end) ++ return -EBUSY; ++ return 0; ++} ++ ++int nd_region_conflict(struct nd_region *nd_region, resource_size_t start, ++ resource_size_t size) ++{ ++ struct nvdimm_bus *nvdimm_bus = walk_to_nvdimm_bus(&nd_region->dev); ++ struct conflict_context ctx = { ++ .nd_region = nd_region, ++ .start = start, ++ .size = size, ++ }; ++ ++ return device_for_each_child(&nvdimm_bus->dev, &ctx, region_conflict); ++} ++ + void __exit nd_region_devs_exit(void) + { + ida_destroy(®ion_ida); +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 611e70cae754..9375fa705d82 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -144,6 +144,7 @@ struct nvme_fc_ctrl { + + bool ioq_live; + bool assoc_active; ++ atomic_t err_work_active; + u64 association_id; + + struct list_head ctrl_list; /* 
rport->ctrl_list */ +@@ -152,6 +153,7 @@ struct nvme_fc_ctrl { + struct blk_mq_tag_set tag_set; + + struct delayed_work connect_work; ++ struct work_struct err_work; + + struct kref ref; + u32 flags; +@@ -1523,6 +1525,10 @@ nvme_fc_abort_aen_ops(struct nvme_fc_ctrl *ctrl) + struct nvme_fc_fcp_op *aen_op = ctrl->aen_ops; + int i; + ++ /* ensure we've initialized the ops once */ ++ if (!(aen_op->flags & FCOP_FLAGS_AEN)) ++ return; ++ + for (i = 0; i < NVME_NR_AEN_COMMANDS; i++, aen_op++) + __nvme_fc_abort_op(ctrl, aen_op); + } +@@ -2036,7 +2042,25 @@ nvme_fc_nvme_ctrl_freed(struct nvme_ctrl *nctrl) + static void + nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg) + { +- /* only proceed if in LIVE state - e.g. on first error */ ++ int active; ++ ++ /* ++ * if an error (io timeout, etc) while (re)connecting, ++ * it's an error on creating the new association. ++ * Start the error recovery thread if it hasn't already ++ * been started. It is expected there could be multiple ++ * ios hitting this path before things are cleaned up. ++ */ ++ if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) { ++ active = atomic_xchg(&ctrl->err_work_active, 1); ++ if (!active && !schedule_work(&ctrl->err_work)) { ++ atomic_set(&ctrl->err_work_active, 0); ++ WARN_ON(1); ++ } ++ return; ++ } ++ ++ /* Otherwise, only proceed if in LIVE state - e.g. on first error */ + if (ctrl->ctrl.state != NVME_CTRL_LIVE) + return; + +@@ -2802,6 +2826,7 @@ nvme_fc_delete_ctrl(struct nvme_ctrl *nctrl) + { + struct nvme_fc_ctrl *ctrl = to_fc_ctrl(nctrl); + ++ cancel_work_sync(&ctrl->err_work); + cancel_delayed_work_sync(&ctrl->connect_work); + /* + * kill the association on the link side. 
this will block +@@ -2854,23 +2879,30 @@ nvme_fc_reconnect_or_delete(struct nvme_fc_ctrl *ctrl, int status) + } + + static void +-nvme_fc_reset_ctrl_work(struct work_struct *work) ++__nvme_fc_terminate_io(struct nvme_fc_ctrl *ctrl) + { +- struct nvme_fc_ctrl *ctrl = +- container_of(work, struct nvme_fc_ctrl, ctrl.reset_work); +- int ret; +- +- nvme_stop_ctrl(&ctrl->ctrl); ++ nvme_stop_keep_alive(&ctrl->ctrl); + + /* will block will waiting for io to terminate */ + nvme_fc_delete_association(ctrl); + +- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { ++ if (ctrl->ctrl.state != NVME_CTRL_CONNECTING && ++ !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) + dev_err(ctrl->ctrl.device, + "NVME-FC{%d}: error_recovery: Couldn't change state " + "to CONNECTING\n", ctrl->cnum); +- return; +- } ++} ++ ++static void ++nvme_fc_reset_ctrl_work(struct work_struct *work) ++{ ++ struct nvme_fc_ctrl *ctrl = ++ container_of(work, struct nvme_fc_ctrl, ctrl.reset_work); ++ int ret; ++ ++ __nvme_fc_terminate_io(ctrl); ++ ++ nvme_stop_ctrl(&ctrl->ctrl); + + if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE) + ret = nvme_fc_create_association(ctrl); +@@ -2885,6 +2917,24 @@ nvme_fc_reset_ctrl_work(struct work_struct *work) + ctrl->cnum); + } + ++static void ++nvme_fc_connect_err_work(struct work_struct *work) ++{ ++ struct nvme_fc_ctrl *ctrl = ++ container_of(work, struct nvme_fc_ctrl, err_work); ++ ++ __nvme_fc_terminate_io(ctrl); ++ ++ atomic_set(&ctrl->err_work_active, 0); ++ ++ /* ++ * Rescheduling the connection after recovering ++ * from the io error is left to the reconnect work ++ * item, which is what should have stalled waiting on ++ * the io that had the error that scheduled this work. 
++ */ ++} ++ + static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = { + .name = "fc", + .module = THIS_MODULE, +@@ -2995,6 +3045,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, + ctrl->cnum = idx; + ctrl->ioq_live = false; + ctrl->assoc_active = false; ++ atomic_set(&ctrl->err_work_active, 0); + init_waitqueue_head(&ctrl->ioabort_wait); + + get_device(ctrl->dev); +@@ -3002,6 +3053,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, + + INIT_WORK(&ctrl->ctrl.reset_work, nvme_fc_reset_ctrl_work); + INIT_DELAYED_WORK(&ctrl->connect_work, nvme_fc_connect_ctrl_work); ++ INIT_WORK(&ctrl->err_work, nvme_fc_connect_err_work); + spin_lock_init(&ctrl->lock); + + /* io queue count */ +@@ -3092,6 +3144,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts, + fail_ctrl: + nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_DELETING); + cancel_work_sync(&ctrl->ctrl.reset_work); ++ cancel_work_sync(&ctrl->err_work); + cancel_delayed_work_sync(&ctrl->connect_work); + + ctrl->ctrl.opts = NULL; +diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c +index c0631895154e..8684bcec8ff4 100644 +--- a/drivers/s390/net/ism_drv.c ++++ b/drivers/s390/net/ism_drv.c +@@ -415,9 +415,9 @@ static irqreturn_t ism_handle_irq(int irq, void *data) + break; + + clear_bit_inv(bit, bv); ++ ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; + barrier(); + smcd_handle_irq(ism->smcd, bit + ISM_DMB_BIT_OFFSET); +- ism->sba->dmbe_mask[bit + ISM_DMB_BIT_OFFSET] = 0; + } + + if (ism->sba->e) { +diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c +index 8f5c1d7f751a..b67dc4974f23 100644 +--- a/drivers/s390/virtio/virtio_ccw.c ++++ b/drivers/s390/virtio/virtio_ccw.c +@@ -56,6 +56,7 @@ struct virtio_ccw_device { + unsigned int revision; /* Transport revision */ + wait_queue_head_t wait_q; + spinlock_t lock; ++ struct mutex io_lock; /* Serializes I/O requests */ + struct list_head virtqueues; + unsigned long 
indicators; + unsigned long indicators2; +@@ -296,6 +297,7 @@ static int ccw_io_helper(struct virtio_ccw_device *vcdev, + unsigned long flags; + int flag = intparm & VIRTIO_CCW_INTPARM_MASK; + ++ mutex_lock(&vcdev->io_lock); + do { + spin_lock_irqsave(get_ccwdev_lock(vcdev->cdev), flags); + ret = ccw_device_start(vcdev->cdev, ccw, intparm, 0, 0); +@@ -308,7 +310,9 @@ static int ccw_io_helper(struct virtio_ccw_device *vcdev, + cpu_relax(); + } while (ret == -EBUSY); + wait_event(vcdev->wait_q, doing_io(vcdev, flag) == 0); +- return ret ? ret : vcdev->err; ++ ret = ret ? ret : vcdev->err; ++ mutex_unlock(&vcdev->io_lock); ++ return ret; + } + + static void virtio_ccw_drop_indicator(struct virtio_ccw_device *vcdev, +@@ -828,6 +832,7 @@ static void virtio_ccw_get_config(struct virtio_device *vdev, + int ret; + struct ccw1 *ccw; + void *config_area; ++ unsigned long flags; + + ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL); + if (!ccw) +@@ -846,11 +851,13 @@ static void virtio_ccw_get_config(struct virtio_device *vdev, + if (ret) + goto out_free; + ++ spin_lock_irqsave(&vcdev->lock, flags); + memcpy(vcdev->config, config_area, offset + len); +- if (buf) +- memcpy(buf, &vcdev->config[offset], len); + if (vcdev->config_ready < offset + len) + vcdev->config_ready = offset + len; ++ spin_unlock_irqrestore(&vcdev->lock, flags); ++ if (buf) ++ memcpy(buf, config_area + offset, len); + + out_free: + kfree(config_area); +@@ -864,6 +871,7 @@ static void virtio_ccw_set_config(struct virtio_device *vdev, + struct virtio_ccw_device *vcdev = to_vc_device(vdev); + struct ccw1 *ccw; + void *config_area; ++ unsigned long flags; + + ccw = kzalloc(sizeof(*ccw), GFP_DMA | GFP_KERNEL); + if (!ccw) +@@ -876,9 +884,11 @@ static void virtio_ccw_set_config(struct virtio_device *vdev, + /* Make sure we don't overwrite fields. 
*/ + if (vcdev->config_ready < offset) + virtio_ccw_get_config(vdev, 0, NULL, offset); ++ spin_lock_irqsave(&vcdev->lock, flags); + memcpy(&vcdev->config[offset], buf, len); + /* Write the config area to the host. */ + memcpy(config_area, vcdev->config, sizeof(vcdev->config)); ++ spin_unlock_irqrestore(&vcdev->lock, flags); + ccw->cmd_code = CCW_CMD_WRITE_CONF; + ccw->flags = 0; + ccw->count = offset + len; +@@ -1247,6 +1257,7 @@ static int virtio_ccw_online(struct ccw_device *cdev) + init_waitqueue_head(&vcdev->wait_q); + INIT_LIST_HEAD(&vcdev->virtqueues); + spin_lock_init(&vcdev->lock); ++ mutex_init(&vcdev->io_lock); + + spin_lock_irqsave(get_ccwdev_lock(cdev), flags); + dev_set_drvdata(&cdev->dev, vcdev); +diff --git a/drivers/scsi/ufs/ufs-hisi.c b/drivers/scsi/ufs/ufs-hisi.c +index 46df707e6f2c..452e19f8fb47 100644 +--- a/drivers/scsi/ufs/ufs-hisi.c ++++ b/drivers/scsi/ufs/ufs-hisi.c +@@ -20,6 +20,7 @@ + #include "unipro.h" + #include "ufs-hisi.h" + #include "ufshci.h" ++#include "ufs_quirks.h" + + static int ufs_hisi_check_hibern8(struct ufs_hba *hba) + { +@@ -390,6 +391,14 @@ static void ufs_hisi_set_dev_cap(struct ufs_hisi_dev_params *hisi_param) + + static void ufs_hisi_pwr_change_pre_change(struct ufs_hba *hba) + { ++ if (hba->dev_quirks & UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME) { ++ pr_info("ufs flash device must set VS_DebugSaveConfigTime 0x10\n"); ++ /* VS_DebugSaveConfigTime */ ++ ufshcd_dme_set(hba, UIC_ARG_MIB(0xD0A0), 0x10); ++ /* sync length */ ++ ufshcd_dme_set(hba, UIC_ARG_MIB(0x1556), 0x48); ++ } ++ + /* update */ + ufshcd_dme_set(hba, UIC_ARG_MIB(0x15A8), 0x1); + /* PA_TxSkip */ +diff --git a/drivers/scsi/ufs/ufs_quirks.h b/drivers/scsi/ufs/ufs_quirks.h +index 71f73d1d1ad1..5d2dfdb41a6f 100644 +--- a/drivers/scsi/ufs/ufs_quirks.h ++++ b/drivers/scsi/ufs/ufs_quirks.h +@@ -131,4 +131,10 @@ struct ufs_dev_fix { + */ + #define UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME (1 << 8) + ++/* ++ * Some UFS devices require VS_DebugSaveConfigTime is 
0x10, ++ * enabling this quirk ensure this. ++ */ ++#define UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME (1 << 9) ++ + #endif /* UFS_QUIRKS_H_ */ +diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c +index 54074dd483a7..0b81d9d03357 100644 +--- a/drivers/scsi/ufs/ufshcd.c ++++ b/drivers/scsi/ufs/ufshcd.c +@@ -230,6 +230,8 @@ static struct ufs_dev_fix ufs_fixups[] = { + UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, UFS_DEVICE_NO_VCCQ), + UFS_FIX(UFS_VENDOR_SKHYNIX, UFS_ANY_MODEL, + UFS_DEVICE_QUIRK_HOST_PA_SAVECONFIGTIME), ++ UFS_FIX(UFS_VENDOR_SKHYNIX, "hB8aL1" /*H28U62301AMR*/, ++ UFS_DEVICE_QUIRK_HOST_VS_DEBUGSAVECONFIGTIME), + + END_FIX + }; +diff --git a/drivers/staging/rtl8712/mlme_linux.c b/drivers/staging/rtl8712/mlme_linux.c +index baaa52f04560..52095086574f 100644 +--- a/drivers/staging/rtl8712/mlme_linux.c ++++ b/drivers/staging/rtl8712/mlme_linux.c +@@ -158,7 +158,7 @@ void r8712_report_sec_ie(struct _adapter *adapter, u8 authmode, u8 *sec_ie) + p = buff; + p += sprintf(p, "ASSOCINFO(ReqIEs="); + len = sec_ie[1] + 2; +- len = (len < IW_CUSTOM_MAX) ? len : IW_CUSTOM_MAX - 1; ++ len = (len < IW_CUSTOM_MAX) ? 
len : IW_CUSTOM_MAX; + for (i = 0; i < len; i++) + p += sprintf(p, "%02x", sec_ie[i]); + p += sprintf(p, ")"); +diff --git a/drivers/staging/rtl8712/rtl871x_mlme.c b/drivers/staging/rtl8712/rtl871x_mlme.c +index ac547ddd72d1..d7e88d2a8b1b 100644 +--- a/drivers/staging/rtl8712/rtl871x_mlme.c ++++ b/drivers/staging/rtl8712/rtl871x_mlme.c +@@ -1358,7 +1358,7 @@ sint r8712_restruct_sec_ie(struct _adapter *adapter, u8 *in_ie, + u8 *out_ie, uint in_len) + { + u8 authmode = 0, match; +- u8 sec_ie[255], uncst_oui[4], bkup_ie[255]; ++ u8 sec_ie[IW_CUSTOM_MAX], uncst_oui[4], bkup_ie[255]; + u8 wpa_oui[4] = {0x0, 0x50, 0xf2, 0x01}; + uint ielength, cnt, remove_cnt; + int iEntry; +diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c +index 0952d15f6d40..ca6f1fa3466a 100644 +--- a/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c ++++ b/drivers/staging/rtl8723bs/core/rtw_mlme_ext.c +@@ -1566,7 +1566,7 @@ unsigned int OnAssocReq(struct adapter *padapter, union recv_frame *precv_frame) + if (pstat->aid > 0) { + DBG_871X(" old AID %d\n", pstat->aid); + } else { +- for (pstat->aid = 1; pstat->aid < NUM_STA; pstat->aid++) ++ for (pstat->aid = 1; pstat->aid <= NUM_STA; pstat->aid++) + if (pstapriv->sta_aid[pstat->aid - 1] == NULL) + break; + +diff --git a/drivers/tty/serial/8250/8250_mtk.c b/drivers/tty/serial/8250/8250_mtk.c +index dd5e1cede2b5..c3f933d10295 100644 +--- a/drivers/tty/serial/8250/8250_mtk.c ++++ b/drivers/tty/serial/8250/8250_mtk.c +@@ -213,17 +213,17 @@ static int mtk8250_probe(struct platform_device *pdev) + + platform_set_drvdata(pdev, data); + +- pm_runtime_enable(&pdev->dev); +- if (!pm_runtime_enabled(&pdev->dev)) { +- err = mtk8250_runtime_resume(&pdev->dev); +- if (err) +- return err; +- } ++ err = mtk8250_runtime_resume(&pdev->dev); ++ if (err) ++ return err; + + data->line = serial8250_register_8250_port(&uart); + if (data->line < 0) + return data->line; + ++ pm_runtime_set_active(&pdev->dev); ++ 
pm_runtime_enable(&pdev->dev); ++ + return 0; + } + +@@ -234,13 +234,11 @@ static int mtk8250_remove(struct platform_device *pdev) + pm_runtime_get_sync(&pdev->dev); + + serial8250_unregister_port(data->line); ++ mtk8250_runtime_suspend(&pdev->dev); + + pm_runtime_disable(&pdev->dev); + pm_runtime_put_noidle(&pdev->dev); + +- if (!pm_runtime_status_suspended(&pdev->dev)) +- mtk8250_runtime_suspend(&pdev->dev); +- + return 0; + } + +diff --git a/drivers/tty/serial/kgdboc.c b/drivers/tty/serial/kgdboc.c +index 8a111ab33b50..93d3a0ec5e11 100644 +--- a/drivers/tty/serial/kgdboc.c ++++ b/drivers/tty/serial/kgdboc.c +@@ -230,7 +230,7 @@ static void kgdboc_put_char(u8 chr) + static int param_set_kgdboc_var(const char *kmessage, + const struct kernel_param *kp) + { +- int len = strlen(kmessage); ++ size_t len = strlen(kmessage); + + if (len >= MAX_CONFIG_LEN) { + printk(KERN_ERR "kgdboc: config string too long\n"); +@@ -252,7 +252,7 @@ static int param_set_kgdboc_var(const char *kmessage, + + strcpy(config, kmessage); + /* Chop out \n char as a result of echo */ +- if (config[len - 1] == '\n') ++ if (len && config[len - 1] == '\n') + config[len - 1] = '\0'; + + if (configured == 1) +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c +index 252eef2c32f9..d6f42b528277 100644 +--- a/drivers/tty/tty_io.c ++++ b/drivers/tty/tty_io.c +@@ -1372,7 +1372,13 @@ err_release_lock: + return ERR_PTR(retval); + } + +-static void tty_free_termios(struct tty_struct *tty) ++/** ++ * tty_save_termios() - save tty termios data in driver table ++ * @tty: tty whose termios data to save ++ * ++ * Locking: Caller guarantees serialisation with tty_init_termios(). 
++ */ ++void tty_save_termios(struct tty_struct *tty) + { + struct ktermios *tp; + int idx = tty->index; +@@ -1391,6 +1397,7 @@ static void tty_free_termios(struct tty_struct *tty) + } + *tp = tty->termios; + } ++EXPORT_SYMBOL_GPL(tty_save_termios); + + /** + * tty_flush_works - flush all works of a tty/pty pair +@@ -1490,7 +1497,7 @@ static void release_tty(struct tty_struct *tty, int idx) + WARN_ON(!mutex_is_locked(&tty_mutex)); + if (tty->ops->shutdown) + tty->ops->shutdown(tty); +- tty_free_termios(tty); ++ tty_save_termios(tty); + tty_driver_remove_tty(tty->driver, tty); + tty->port->itty = NULL; + if (tty->link) +diff --git a/drivers/tty/tty_port.c b/drivers/tty/tty_port.c +index 25d736880013..c699d41a2a48 100644 +--- a/drivers/tty/tty_port.c ++++ b/drivers/tty/tty_port.c +@@ -640,7 +640,8 @@ void tty_port_close(struct tty_port *port, struct tty_struct *tty, + if (tty_port_close_start(port, tty, filp) == 0) + return; + tty_port_shutdown(port, tty); +- set_bit(TTY_IO_ERROR, &tty->flags); ++ if (!port->console) ++ set_bit(TTY_IO_ERROR, &tty->flags); + tty_port_close_end(port, tty); + tty_port_tty_set(port, NULL); + } +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index f79979ae482a..cc62707c0251 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -2250,7 +2250,7 @@ static int usb_enumerate_device_otg(struct usb_device *udev) + /* descriptor may appear anywhere in config */ + err = __usb_get_extra_descriptor(udev->rawdescriptors[0], + le16_to_cpu(udev->config[0].desc.wTotalLength), +- USB_DT_OTG, (void **) &desc); ++ USB_DT_OTG, (void **) &desc, sizeof(*desc)); + if (err || !(desc->bmAttributes & USB_OTG_HNP)) + return 0; + +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index 0690fcff0ea2..514c5214ddb2 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -333,6 +333,10 @@ static const struct usb_device_id usb_quirk_list[] = { + /* Midiman M-Audio Keystation 88es */ + { 
USB_DEVICE(0x0763, 0x0192), .driver_info = USB_QUIRK_RESET_RESUME }, + ++ /* SanDisk Ultra Fit and Ultra Flair */ ++ { USB_DEVICE(0x0781, 0x5583), .driver_info = USB_QUIRK_NO_LPM }, ++ { USB_DEVICE(0x0781, 0x5591), .driver_info = USB_QUIRK_NO_LPM }, ++ + /* M-Systems Flash Disk Pioneers */ + { USB_DEVICE(0x08ec, 0x1000), .driver_info = USB_QUIRK_RESET_RESUME }, + +diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c +index 79d8bd7a612e..4ebfbd737905 100644 +--- a/drivers/usb/core/usb.c ++++ b/drivers/usb/core/usb.c +@@ -832,14 +832,14 @@ EXPORT_SYMBOL_GPL(usb_get_current_frame_number); + */ + + int __usb_get_extra_descriptor(char *buffer, unsigned size, +- unsigned char type, void **ptr) ++ unsigned char type, void **ptr, size_t minsize) + { + struct usb_descriptor_header *header; + + while (size >= sizeof(struct usb_descriptor_header)) { + header = (struct usb_descriptor_header *)buffer; + +- if (header->bLength < 2) { ++ if (header->bLength < 2 || header->bLength > size) { + printk(KERN_ERR + "%s: bogus descriptor, type %d length %d\n", + usbcore_name, +@@ -848,7 +848,7 @@ int __usb_get_extra_descriptor(char *buffer, unsigned size, + return -1; + } + +- if (header->bDescriptorType == type) { ++ if (header->bDescriptorType == type && header->bLength >= minsize) { + *ptr = header; + return 0; + } +diff --git a/drivers/usb/dwc2/pci.c b/drivers/usb/dwc2/pci.c +index d257c541e51b..7afc10872f1f 100644 +--- a/drivers/usb/dwc2/pci.c ++++ b/drivers/usb/dwc2/pci.c +@@ -120,6 +120,7 @@ static int dwc2_pci_probe(struct pci_dev *pci, + dwc2 = platform_device_alloc("dwc2", PLATFORM_DEVID_AUTO); + if (!dwc2) { + dev_err(dev, "couldn't allocate dwc2 device\n"); ++ ret = -ENOMEM; + goto err; + } + +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c +index 3ada83d81bda..31e8bf3578c8 100644 +--- a/drivers/usb/gadget/function/f_fs.c ++++ b/drivers/usb/gadget/function/f_fs.c +@@ -215,7 +215,6 @@ struct ffs_io_data { + + struct mm_struct *mm; 
+ struct work_struct work; +- struct work_struct cancellation_work; + + struct usb_ep *ep; + struct usb_request *req; +@@ -1073,31 +1072,22 @@ ffs_epfile_open(struct inode *inode, struct file *file) + return 0; + } + +-static void ffs_aio_cancel_worker(struct work_struct *work) +-{ +- struct ffs_io_data *io_data = container_of(work, struct ffs_io_data, +- cancellation_work); +- +- ENTER(); +- +- usb_ep_dequeue(io_data->ep, io_data->req); +-} +- + static int ffs_aio_cancel(struct kiocb *kiocb) + { + struct ffs_io_data *io_data = kiocb->private; +- struct ffs_data *ffs = io_data->ffs; ++ struct ffs_epfile *epfile = kiocb->ki_filp->private_data; + int value; + + ENTER(); + +- if (likely(io_data && io_data->ep && io_data->req)) { +- INIT_WORK(&io_data->cancellation_work, ffs_aio_cancel_worker); +- queue_work(ffs->io_completion_wq, &io_data->cancellation_work); +- value = -EINPROGRESS; +- } else { ++ spin_lock_irq(&epfile->ffs->eps_lock); ++ ++ if (likely(io_data && io_data->ep && io_data->req)) ++ value = usb_ep_dequeue(io_data->ep, io_data->req); ++ else + value = -EINVAL; +- } ++ ++ spin_unlock_irq(&epfile->ffs->eps_lock); + + return value; + } +diff --git a/drivers/usb/host/hwa-hc.c b/drivers/usb/host/hwa-hc.c +index 684d6f074c3a..09a8ebd95588 100644 +--- a/drivers/usb/host/hwa-hc.c ++++ b/drivers/usb/host/hwa-hc.c +@@ -640,7 +640,7 @@ static int hwahc_security_create(struct hwahc *hwahc) + top = itr + itr_size; + result = __usb_get_extra_descriptor(usb_dev->rawdescriptors[index], + le16_to_cpu(usb_dev->actconfig->desc.wTotalLength), +- USB_DT_SECURITY, (void **) &secd); ++ USB_DT_SECURITY, (void **) &secd, sizeof(*secd)); + if (result == -1) { + dev_warn(dev, "BUG? 
WUSB host has no security descriptors\n"); + return 0; +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c +index beeda27b3789..09bf6b4b741b 100644 +--- a/drivers/usb/host/xhci-pci.c ++++ b/drivers/usb/host/xhci-pci.c +@@ -132,6 +132,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) + pdev->device == 0x43bb)) + xhci->quirks |= XHCI_SUSPEND_DELAY; + ++ if (pdev->vendor == PCI_VENDOR_ID_AMD && ++ (pdev->device == 0x15e0 || pdev->device == 0x15e1)) ++ xhci->quirks |= XHCI_SNPS_BROKEN_SUSPEND; ++ + if (pdev->vendor == PCI_VENDOR_ID_AMD) + xhci->quirks |= XHCI_TRUST_TX_LENGTH; + +diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c +index c928dbbff881..dae3be1b9c8f 100644 +--- a/drivers/usb/host/xhci.c ++++ b/drivers/usb/host/xhci.c +@@ -968,6 +968,7 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup) + unsigned int delay = XHCI_MAX_HALT_USEC; + struct usb_hcd *hcd = xhci_to_hcd(xhci); + u32 command; ++ u32 res; + + if (!hcd->state) + return 0; +@@ -1021,11 +1022,28 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup) + command = readl(&xhci->op_regs->command); + command |= CMD_CSS; + writel(command, &xhci->op_regs->command); ++ xhci->broken_suspend = 0; + if (xhci_handshake(&xhci->op_regs->status, + STS_SAVE, 0, 10 * 1000)) { +- xhci_warn(xhci, "WARN: xHC save state timeout\n"); +- spin_unlock_irq(&xhci->lock); +- return -ETIMEDOUT; ++ /* ++ * AMD SNPS xHC 3.0 occasionally does not clear the ++ * SSS bit of USBSTS and when driver tries to poll ++ * to see if the xHC clears BIT(8) which never happens ++ * and driver assumes that controller is not responding ++ * and times out. To workaround this, its good to check ++ * if SRE and HCE bits are not set (as per xhci ++ * Section 5.4.2) and bypass the timeout. 
++ */ ++ res = readl(&xhci->op_regs->status); ++ if ((xhci->quirks & XHCI_SNPS_BROKEN_SUSPEND) && ++ (((res & STS_SRE) == 0) && ++ ((res & STS_HCE) == 0))) { ++ xhci->broken_suspend = 1; ++ } else { ++ xhci_warn(xhci, "WARN: xHC save state timeout\n"); ++ spin_unlock_irq(&xhci->lock); ++ return -ETIMEDOUT; ++ } + } + spin_unlock_irq(&xhci->lock); + +@@ -1078,7 +1096,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated) + set_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags); + + spin_lock_irq(&xhci->lock); +- if (xhci->quirks & XHCI_RESET_ON_RESUME) ++ if ((xhci->quirks & XHCI_RESET_ON_RESUME) || xhci->broken_suspend) + hibernated = true; + + if (!hibernated) { +@@ -4496,6 +4514,14 @@ static u16 xhci_calculate_u1_timeout(struct xhci_hcd *xhci, + { + unsigned long long timeout_ns; + ++ /* Prevent U1 if service interval is shorter than U1 exit latency */ ++ if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) { ++ if (xhci_service_interval_to_ns(desc) <= udev->u1_params.mel) { ++ dev_dbg(&udev->dev, "Disable U1, ESIT shorter than exit latency\n"); ++ return USB3_LPM_DISABLED; ++ } ++ } ++ + if (xhci->quirks & XHCI_INTEL_HOST) + timeout_ns = xhci_calculate_intel_u1_timeout(udev, desc); + else +@@ -4552,6 +4578,14 @@ static u16 xhci_calculate_u2_timeout(struct xhci_hcd *xhci, + { + unsigned long long timeout_ns; + ++ /* Prevent U2 if service interval is shorter than U2 exit latency */ ++ if (usb_endpoint_xfer_int(desc) || usb_endpoint_xfer_isoc(desc)) { ++ if (xhci_service_interval_to_ns(desc) <= udev->u2_params.mel) { ++ dev_dbg(&udev->dev, "Disable U2, ESIT shorter than exit latency\n"); ++ return USB3_LPM_DISABLED; ++ } ++ } ++ + if (xhci->quirks & XHCI_INTEL_HOST) + timeout_ns = xhci_calculate_intel_u2_timeout(udev, desc); + else +diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h +index e936e4c8af98..c3ed7d1c9f65 100644 +--- a/drivers/usb/host/xhci.h ++++ b/drivers/usb/host/xhci.h +@@ -1847,6 +1847,7 @@ struct xhci_hcd { + 
#define XHCI_INTEL_USB_ROLE_SW BIT_ULL(31) + #define XHCI_ZERO_64B_REGS BIT_ULL(32) + #define XHCI_RESET_PLL_ON_DISCONNECT BIT_ULL(34) ++#define XHCI_SNPS_BROKEN_SUSPEND BIT_ULL(35) + + unsigned int num_active_eps; + unsigned int limit_active_eps; +@@ -1876,6 +1877,8 @@ struct xhci_hcd { + void *dbc; + /* platform-specific data -- must come last */ + unsigned long priv[0] __aligned(sizeof(s64)); ++ /* Broken Suspend flag for SNPS Suspend resume issue */ ++ u8 broken_suspend; + }; + + /* Platform specific overrides to generic XHCI hc_driver ops */ +diff --git a/drivers/usb/misc/appledisplay.c b/drivers/usb/misc/appledisplay.c +index 6a0c60badfa0..1c6da8d6cccf 100644 +--- a/drivers/usb/misc/appledisplay.c ++++ b/drivers/usb/misc/appledisplay.c +@@ -51,6 +51,7 @@ static const struct usb_device_id appledisplay_table[] = { + { APPLEDISPLAY_DEVICE(0x921c) }, + { APPLEDISPLAY_DEVICE(0x921d) }, + { APPLEDISPLAY_DEVICE(0x9222) }, ++ { APPLEDISPLAY_DEVICE(0x9226) }, + { APPLEDISPLAY_DEVICE(0x9236) }, + + /* Terminating entry */ +diff --git a/drivers/usb/serial/console.c b/drivers/usb/serial/console.c +index 17940589c647..7d289302ff6c 100644 +--- a/drivers/usb/serial/console.c ++++ b/drivers/usb/serial/console.c +@@ -101,7 +101,6 @@ static int usb_console_setup(struct console *co, char *options) + cflag |= PARENB; + break; + } +- co->cflag = cflag; + + /* + * no need to check the index here: if the index is wrong, console +@@ -164,6 +163,7 @@ static int usb_console_setup(struct console *co, char *options) + serial->type->set_termios(tty, port, &dummy); + + tty_port_tty_set(&port->port, NULL); ++ tty_save_termios(tty); + tty_kref_put(tty); + } + tty_port_set_initialized(&port->port, 1); +diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c +index 34bc3ab40c6d..51879ed18652 100644 +--- a/drivers/vhost/vsock.c ++++ b/drivers/vhost/vsock.c +@@ -15,6 +15,7 @@ + #include <net/sock.h> + #include <linux/virtio_vsock.h> + #include <linux/vhost.h> ++#include <linux/hashtable.h> + 
+ #include <net/af_vsock.h> + #include "vhost.h" +@@ -27,14 +28,14 @@ enum { + + /* Used to track all the vhost_vsock instances on the system. */ + static DEFINE_SPINLOCK(vhost_vsock_lock); +-static LIST_HEAD(vhost_vsock_list); ++static DEFINE_READ_MOSTLY_HASHTABLE(vhost_vsock_hash, 8); + + struct vhost_vsock { + struct vhost_dev dev; + struct vhost_virtqueue vqs[2]; + +- /* Link to global vhost_vsock_list, protected by vhost_vsock_lock */ +- struct list_head list; ++ /* Link to global vhost_vsock_hash, writes use vhost_vsock_lock */ ++ struct hlist_node hash; + + struct vhost_work send_pkt_work; + spinlock_t send_pkt_list_lock; +@@ -50,11 +51,14 @@ static u32 vhost_transport_get_local_cid(void) + return VHOST_VSOCK_DEFAULT_HOST_CID; + } + +-static struct vhost_vsock *__vhost_vsock_get(u32 guest_cid) ++/* Callers that dereference the return value must hold vhost_vsock_lock or the ++ * RCU read lock. ++ */ ++static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) + { + struct vhost_vsock *vsock; + +- list_for_each_entry(vsock, &vhost_vsock_list, list) { ++ hash_for_each_possible_rcu(vhost_vsock_hash, vsock, hash, guest_cid) { + u32 other_cid = vsock->guest_cid; + + /* Skip instances that have no CID yet */ +@@ -69,17 +73,6 @@ static struct vhost_vsock *__vhost_vsock_get(u32 guest_cid) + return NULL; + } + +-static struct vhost_vsock *vhost_vsock_get(u32 guest_cid) +-{ +- struct vhost_vsock *vsock; +- +- spin_lock_bh(&vhost_vsock_lock); +- vsock = __vhost_vsock_get(guest_cid); +- spin_unlock_bh(&vhost_vsock_lock); +- +- return vsock; +-} +- + static void + vhost_transport_do_send_pkt(struct vhost_vsock *vsock, + struct vhost_virtqueue *vq) +@@ -210,9 +203,12 @@ vhost_transport_send_pkt(struct virtio_vsock_pkt *pkt) + struct vhost_vsock *vsock; + int len = pkt->len; + ++ rcu_read_lock(); ++ + /* Find the vhost_vsock according to guest context id */ + vsock = vhost_vsock_get(le64_to_cpu(pkt->hdr.dst_cid)); + if (!vsock) { ++ rcu_read_unlock(); + 
virtio_transport_free_pkt(pkt); + return -ENODEV; + } +@@ -225,6 +221,8 @@ vhost_transport_send_pkt(struct virtio_vsock_pkt *pkt) + spin_unlock_bh(&vsock->send_pkt_list_lock); + + vhost_work_queue(&vsock->dev, &vsock->send_pkt_work); ++ ++ rcu_read_unlock(); + return len; + } + +@@ -234,12 +232,15 @@ vhost_transport_cancel_pkt(struct vsock_sock *vsk) + struct vhost_vsock *vsock; + struct virtio_vsock_pkt *pkt, *n; + int cnt = 0; ++ int ret = -ENODEV; + LIST_HEAD(freeme); + ++ rcu_read_lock(); ++ + /* Find the vhost_vsock according to guest context id */ + vsock = vhost_vsock_get(vsk->remote_addr.svm_cid); + if (!vsock) +- return -ENODEV; ++ goto out; + + spin_lock_bh(&vsock->send_pkt_list_lock); + list_for_each_entry_safe(pkt, n, &vsock->send_pkt_list, list) { +@@ -265,7 +266,10 @@ vhost_transport_cancel_pkt(struct vsock_sock *vsk) + vhost_poll_queue(&tx_vq->poll); + } + +- return 0; ++ ret = 0; ++out: ++ rcu_read_unlock(); ++ return ret; + } + + static struct virtio_vsock_pkt * +@@ -533,10 +537,6 @@ static int vhost_vsock_dev_open(struct inode *inode, struct file *file) + spin_lock_init(&vsock->send_pkt_list_lock); + INIT_LIST_HEAD(&vsock->send_pkt_list); + vhost_work_init(&vsock->send_pkt_work, vhost_transport_send_pkt_work); +- +- spin_lock_bh(&vhost_vsock_lock); +- list_add_tail(&vsock->list, &vhost_vsock_list); +- spin_unlock_bh(&vhost_vsock_lock); + return 0; + + out: +@@ -577,9 +577,13 @@ static int vhost_vsock_dev_release(struct inode *inode, struct file *file) + struct vhost_vsock *vsock = file->private_data; + + spin_lock_bh(&vhost_vsock_lock); +- list_del(&vsock->list); ++ if (vsock->guest_cid) ++ hash_del_rcu(&vsock->hash); + spin_unlock_bh(&vhost_vsock_lock); + ++ /* Wait for other CPUs to finish using vsock */ ++ synchronize_rcu(); ++ + /* Iterating over all connections for all CIDs to find orphans is + * inefficient. Room for improvement here. 
*/ + vsock_for_each_connected_socket(vhost_vsock_reset_orphans); +@@ -620,12 +624,17 @@ static int vhost_vsock_set_cid(struct vhost_vsock *vsock, u64 guest_cid) + + /* Refuse if CID is already in use */ + spin_lock_bh(&vhost_vsock_lock); +- other = __vhost_vsock_get(guest_cid); ++ other = vhost_vsock_get(guest_cid); + if (other && other != vsock) { + spin_unlock_bh(&vhost_vsock_lock); + return -EADDRINUSE; + } ++ ++ if (vsock->guest_cid) ++ hash_del_rcu(&vsock->hash); ++ + vsock->guest_cid = guest_cid; ++ hash_add_rcu(vhost_vsock_hash, &vsock->hash, guest_cid); + spin_unlock_bh(&vhost_vsock_lock); + + return 0; +diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c +index 3713d22b95a7..907e85d65bb4 100644 +--- a/fs/cifs/dir.c ++++ b/fs/cifs/dir.c +@@ -174,7 +174,7 @@ cifs_bp_rename_retry: + + cifs_dbg(FYI, "using cifs_sb prepath <%s>\n", cifs_sb->prepath); + memcpy(full_path+dfsplen+1, cifs_sb->prepath, pplen-1); +- full_path[dfsplen] = '\\'; ++ full_path[dfsplen] = dirsep; + for (i = 0; i < pplen-1; i++) + if (full_path[dfsplen+1+i] == '/') + full_path[dfsplen+1+i] = CIFS_DIR_SEP(cifs_sb); +diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c +index 7b861bbc0b43..315967354954 100644 +--- a/fs/nfs/callback_proc.c ++++ b/fs/nfs/callback_proc.c +@@ -686,20 +686,24 @@ __be32 nfs4_callback_offload(void *data, void *dummy, + { + struct cb_offloadargs *args = data; + struct nfs_server *server; +- struct nfs4_copy_state *copy; ++ struct nfs4_copy_state *copy, *tmp_copy; + bool found = false; + ++ copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); ++ if (!copy) ++ return htonl(NFS4ERR_SERVERFAULT); ++ + spin_lock(&cps->clp->cl_lock); + rcu_read_lock(); + list_for_each_entry_rcu(server, &cps->clp->cl_superblocks, + client_link) { +- list_for_each_entry(copy, &server->ss_copies, copies) { ++ list_for_each_entry(tmp_copy, &server->ss_copies, copies) { + if (memcmp(args->coa_stateid.other, +- copy->stateid.other, ++ tmp_copy->stateid.other, + 
sizeof(args->coa_stateid.other))) + continue; +- nfs4_copy_cb_args(copy, args); +- complete(&copy->completion); ++ nfs4_copy_cb_args(tmp_copy, args); ++ complete(&tmp_copy->completion); + found = true; + goto out; + } +@@ -707,15 +711,11 @@ __be32 nfs4_callback_offload(void *data, void *dummy, + out: + rcu_read_unlock(); + if (!found) { +- copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); +- if (!copy) { +- spin_unlock(&cps->clp->cl_lock); +- return htonl(NFS4ERR_SERVERFAULT); +- } + memcpy(&copy->stateid, &args->coa_stateid, NFS4_STATEID_SIZE); + nfs4_copy_cb_args(copy, args); + list_add_tail(&copy->copies, &cps->clp->pending_cb_stateids); +- } ++ } else ++ kfree(copy); + spin_unlock(&cps->clp->cl_lock); + + return 0; +diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c +index cae43333ef16..86ac2c5b93fe 100644 +--- a/fs/nfs/flexfilelayout/flexfilelayout.c ++++ b/fs/nfs/flexfilelayout/flexfilelayout.c +@@ -1361,12 +1361,7 @@ static void ff_layout_read_prepare_v4(struct rpc_task *task, void *data) + task)) + return; + +- if (ff_layout_read_prepare_common(task, hdr)) +- return; +- +- if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, +- hdr->args.lock_context, FMODE_READ) == -EIO) +- rpc_exit(task, -EIO); /* lost lock, terminate I/O */ ++ ff_layout_read_prepare_common(task, hdr); + } + + static void ff_layout_read_call_done(struct rpc_task *task, void *data) +@@ -1542,12 +1537,7 @@ static void ff_layout_write_prepare_v4(struct rpc_task *task, void *data) + task)) + return; + +- if (ff_layout_write_prepare_common(task, hdr)) +- return; +- +- if (nfs4_set_rw_stateid(&hdr->args.stateid, hdr->args.context, +- hdr->args.lock_context, FMODE_WRITE) == -EIO) +- rpc_exit(task, -EIO); /* lost lock, terminate I/O */ ++ ff_layout_write_prepare_common(task, hdr); + } + + static void ff_layout_write_call_done(struct rpc_task *task, void *data) +@@ -1742,6 +1732,10 @@ ff_layout_read_pagelist(struct nfs_pgio_header *hdr) + fh =
nfs4_ff_layout_select_ds_fh(lseg, idx); + if (fh) + hdr->args.fh = fh; ++ ++ if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) ++ goto out_failed; ++ + /* + * Note that if we ever decide to split across DSes, + * then we may need to handle dense-like offsets. +@@ -1804,6 +1798,9 @@ ff_layout_write_pagelist(struct nfs_pgio_header *hdr, int sync) + if (fh) + hdr->args.fh = fh; + ++ if (!nfs4_ff_layout_select_ds_stateid(lseg, idx, &hdr->args.stateid)) ++ goto out_failed; ++ + /* + * Note that if we ever decide to split across DSes, + * then we may need to handle dense-like offsets. +diff --git a/fs/nfs/flexfilelayout/flexfilelayout.h b/fs/nfs/flexfilelayout/flexfilelayout.h +index 411798346e48..de50a342d5a5 100644 +--- a/fs/nfs/flexfilelayout/flexfilelayout.h ++++ b/fs/nfs/flexfilelayout/flexfilelayout.h +@@ -215,6 +215,10 @@ unsigned int ff_layout_fetch_ds_ioerr(struct pnfs_layout_hdr *lo, + unsigned int maxnum); + struct nfs_fh * + nfs4_ff_layout_select_ds_fh(struct pnfs_layout_segment *lseg, u32 mirror_idx); ++int ++nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, ++ u32 mirror_idx, ++ nfs4_stateid *stateid); + + struct nfs4_pnfs_ds * + nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx, +diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c +index 59aa04976331..a8df2f496898 100644 +--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c ++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c +@@ -370,6 +370,25 @@ out: + return fh; + } + ++int ++nfs4_ff_layout_select_ds_stateid(struct pnfs_layout_segment *lseg, ++ u32 mirror_idx, ++ nfs4_stateid *stateid) ++{ ++ struct nfs4_ff_layout_mirror *mirror = FF_LAYOUT_COMP(lseg, mirror_idx); ++ ++ if (!ff_layout_mirror_valid(lseg, mirror, false)) { ++ pr_err_ratelimited("NFS: %s: No data server for mirror offset index %d\n", ++ __func__, mirror_idx); ++ goto out; ++ } ++ ++ nfs4_stateid_copy(stateid, &mirror->stateid); ++ return 1; 
++out: ++ return 0; ++} ++ + /** + * nfs4_ff_layout_prepare_ds - prepare a DS connection for an RPC call + * @lseg: the layout segment we're operating on +diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c +index ac5b784a1de0..fed06fd9998d 100644 +--- a/fs/nfs/nfs42proc.c ++++ b/fs/nfs/nfs42proc.c +@@ -137,31 +137,32 @@ static int handle_async_copy(struct nfs42_copy_res *res, + struct file *dst, + nfs4_stateid *src_stateid) + { +- struct nfs4_copy_state *copy; ++ struct nfs4_copy_state *copy, *tmp_copy; + int status = NFS4_OK; + bool found_pending = false; + struct nfs_open_context *ctx = nfs_file_open_context(dst); + ++ copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); ++ if (!copy) ++ return -ENOMEM; ++ + spin_lock(&server->nfs_client->cl_lock); +- list_for_each_entry(copy, &server->nfs_client->pending_cb_stateids, ++ list_for_each_entry(tmp_copy, &server->nfs_client->pending_cb_stateids, + copies) { +- if (memcmp(&res->write_res.stateid, &copy->stateid, ++ if (memcmp(&res->write_res.stateid, &tmp_copy->stateid, + NFS4_STATEID_SIZE)) + continue; + found_pending = true; +- list_del(&copy->copies); ++ list_del(&tmp_copy->copies); + break; + } + if (found_pending) { + spin_unlock(&server->nfs_client->cl_lock); ++ kfree(copy); ++ copy = tmp_copy; + goto out; + } + +- copy = kzalloc(sizeof(struct nfs4_copy_state), GFP_NOFS); +- if (!copy) { +- spin_unlock(&server->nfs_client->cl_lock); +- return -ENOMEM; +- } + memcpy(&copy->stateid, &res->write_res.stateid, NFS4_STATEID_SIZE); + init_completion(&copy->completion); + copy->parent_state = ctx->state; +diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h +index 3a6904173214..63287d911c08 100644 +--- a/fs/nfs/nfs4_fs.h ++++ b/fs/nfs/nfs4_fs.h +@@ -41,6 +41,8 @@ enum nfs4_client_state { + NFS4CLNT_MOVED, + NFS4CLNT_LEASE_MOVED, + NFS4CLNT_DELEGATION_EXPIRED, ++ NFS4CLNT_RUN_MANAGER, ++ NFS4CLNT_DELEGRETURN_RUNNING, + }; + + #define NFS4_RENEW_TIMEOUT 0x01 +diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c +index
18920152da14..d2f645d34eb1 100644 +--- a/fs/nfs/nfs4state.c ++++ b/fs/nfs/nfs4state.c +@@ -1210,6 +1210,7 @@ void nfs4_schedule_state_manager(struct nfs_client *clp) + struct task_struct *task; + char buf[INET6_ADDRSTRLEN + sizeof("-manager") + 1]; + ++ set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); + if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) + return; + __module_get(THIS_MODULE); +@@ -2485,6 +2486,7 @@ static void nfs4_state_manager(struct nfs_client *clp) + + /* Ensure exclusive access to NFSv4 state */ + do { ++ clear_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); + if (test_bit(NFS4CLNT_PURGE_STATE, &clp->cl_state)) { + section = "purge state"; + status = nfs4_purge_lease(clp); +@@ -2575,14 +2577,18 @@ static void nfs4_state_manager(struct nfs_client *clp) + } + + nfs4_end_drain_session(clp); +- if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { +- nfs_client_return_marked_delegations(clp); +- continue; ++ nfs4_clear_state_manager_bit(clp); ++ ++ if (!test_and_set_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state)) { ++ if (test_and_clear_bit(NFS4CLNT_DELEGRETURN, &clp->cl_state)) { ++ nfs_client_return_marked_delegations(clp); ++ set_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state); ++ } ++ clear_bit(NFS4CLNT_DELEGRETURN_RUNNING, &clp->cl_state); + } + +- nfs4_clear_state_manager_bit(clp); + /* Did we race with an attempt to give us more work? 
*/ +- if (clp->cl_state == 0) ++ if (!test_bit(NFS4CLNT_RUN_MANAGER, &clp->cl_state)) + return; + if (test_and_set_bit(NFS4CLNT_MANAGER_RUNNING, &clp->cl_state) != 0) + return; +diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h +index efda23cf32c7..5185a16b19ba 100644 +--- a/include/linux/hyperv.h ++++ b/include/linux/hyperv.h +@@ -904,6 +904,13 @@ struct vmbus_channel { + + bool probe_done; + ++ /* ++ * We must offload the handling of the primary/sub channels ++ * from the single-threaded vmbus_connection.work_queue to ++ * two different workqueue, otherwise we can block ++ * vmbus_connection.work_queue and hang: see vmbus_process_offer(). ++ */ ++ struct work_struct add_channel_work; + }; + + static inline bool is_hvsock_channel(const struct vmbus_channel *c) +diff --git a/include/linux/tty.h b/include/linux/tty.h +index c56e3978b00f..808fbfe86f85 100644 +--- a/include/linux/tty.h ++++ b/include/linux/tty.h +@@ -556,6 +556,7 @@ extern struct tty_struct *tty_init_dev(struct tty_driver *driver, int idx); + extern void tty_release_struct(struct tty_struct *tty, int idx); + extern int tty_release(struct inode *inode, struct file *filp); + extern void tty_init_termios(struct tty_struct *tty); ++extern void tty_save_termios(struct tty_struct *tty); + extern int tty_standard_install(struct tty_driver *driver, + struct tty_struct *tty); + +diff --git a/include/linux/usb.h b/include/linux/usb.h +index 4cdd515a4385..5e49e82c4368 100644 +--- a/include/linux/usb.h ++++ b/include/linux/usb.h +@@ -407,11 +407,11 @@ struct usb_host_bos { + }; + + int __usb_get_extra_descriptor(char *buffer, unsigned size, +- unsigned char type, void **ptr); ++ unsigned char type, void **ptr, size_t min); + #define usb_get_extra_descriptor(ifpoint, type, ptr) \ + __usb_get_extra_descriptor((ifpoint)->extra, \ + (ifpoint)->extralen, \ +- type, (void **)ptr) ++ type, (void **)ptr, sizeof(**(ptr))) + + /* ----------------------------------------------------------------------- */ + +diff 
--git a/include/sound/pcm_params.h b/include/sound/pcm_params.h +index 2dd37cada7c0..888a833d3b00 100644 +--- a/include/sound/pcm_params.h ++++ b/include/sound/pcm_params.h +@@ -254,11 +254,13 @@ static inline int snd_interval_empty(const struct snd_interval *i) + static inline int snd_interval_single(const struct snd_interval *i) + { + return (i->min == i->max || +- (i->min + 1 == i->max && i->openmax)); ++ (i->min + 1 == i->max && (i->openmin || i->openmax))); + } + + static inline int snd_interval_value(const struct snd_interval *i) + { ++ if (i->openmin && !i->openmax) ++ return i->max; + return i->min; + } + +diff --git a/lib/test_firmware.c b/lib/test_firmware.c +index b984806d7d7b..7cab9a9869ac 100644 +--- a/lib/test_firmware.c ++++ b/lib/test_firmware.c +@@ -837,6 +837,7 @@ static ssize_t read_firmware_show(struct device *dev, + if (req->fw->size > PAGE_SIZE) { + pr_err("Testing interface must use PAGE_SIZE firmware for now\n"); + rc = -EINVAL; ++ goto out; + } + memcpy(buf, req->fw->data, req->fw->size); + +diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c +index 9f481cfdf77d..e8090f099eb8 100644 +--- a/net/batman-adv/bat_v_elp.c ++++ b/net/batman-adv/bat_v_elp.c +@@ -352,19 +352,21 @@ out: + */ + int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface) + { ++ static const size_t tvlv_padding = sizeof(__be32); + struct batadv_elp_packet *elp_packet; + unsigned char *elp_buff; + u32 random_seqno; + size_t size; + int res = -ENOMEM; + +- size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN; ++ size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding; + hard_iface->bat_v.elp_skb = dev_alloc_skb(size); + if (!hard_iface->bat_v.elp_skb) + goto out; + + skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN); +- elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN); ++ elp_buff = skb_put_zero(hard_iface->bat_v.elp_skb, ++ BATADV_ELP_HLEN + tvlv_padding); + elp_packet = (struct batadv_elp_packet *)elp_buff; + 
+ elp_packet->packet_type = BATADV_ELP; +diff --git a/net/batman-adv/fragmentation.c b/net/batman-adv/fragmentation.c +index 0fddc17106bd..5b71a289d04f 100644 +--- a/net/batman-adv/fragmentation.c ++++ b/net/batman-adv/fragmentation.c +@@ -275,7 +275,7 @@ batadv_frag_merge_packets(struct hlist_head *chain) + kfree(entry); + + packet = (struct batadv_frag_packet *)skb_out->data; +- size = ntohs(packet->total_size); ++ size = ntohs(packet->total_size) + hdr_size; + + /* Make room for the rest of the fragments. */ + if (pskb_expand_head(skb_out, 0, size - skb_out->len, GFP_ATOMIC) < 0) { +diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c +index 5836ddeac9e3..5f3c81e705c7 100644 +--- a/net/mac80211/iface.c ++++ b/net/mac80211/iface.c +@@ -1015,6 +1015,8 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, + if (local->open_count == 0) + ieee80211_clear_tx_pending(local); + ++ sdata->vif.bss_conf.beacon_int = 0; ++ + /* + * If the interface goes down while suspended, presumably because + * the device was unplugged and that happens before our resume, +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index 96611d5dfadb..5e2b4a41acf1 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -1372,6 +1372,7 @@ ieee80211_rx_h_check_dup(struct ieee80211_rx_data *rx) + return RX_CONTINUE; + + if (ieee80211_is_ctl(hdr->frame_control) || ++ ieee80211_is_nullfunc(hdr->frame_control) || + ieee80211_is_qos_nullfunc(hdr->frame_control) || + is_multicast_ether_addr(hdr->addr1)) + return RX_CONTINUE; +@@ -3029,7 +3030,7 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx) + cfg80211_sta_opmode_change_notify(sdata->dev, + rx->sta->addr, + &sta_opmode, +- GFP_KERNEL); ++ GFP_ATOMIC); + goto handled; + } + case WLAN_HT_ACTION_NOTIFY_CHANWIDTH: { +@@ -3066,7 +3067,7 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx) + cfg80211_sta_opmode_change_notify(sdata->dev, + rx->sta->addr, + &sta_opmode, +- GFP_KERNEL); ++ GFP_ATOMIC); + goto handled; + } + 
default: +diff --git a/net/mac80211/status.c b/net/mac80211/status.c +index 91d7c0cd1882..7fa10d06cc51 100644 +--- a/net/mac80211/status.c ++++ b/net/mac80211/status.c +@@ -964,6 +964,8 @@ void ieee80211_tx_status_ext(struct ieee80211_hw *hw, + /* Track when last TDLS packet was ACKed */ + if (test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) + sta->status_stats.last_tdls_pkt_time = jiffies; ++ } else if (test_sta_flag(sta, WLAN_STA_PS_STA)) { ++ return; + } else { + ieee80211_lost_packet(sta, info); + } +diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c +index 25ba24bef8f5..995a491f73a9 100644 +--- a/net/mac80211/tx.c ++++ b/net/mac80211/tx.c +@@ -439,8 +439,8 @@ ieee80211_tx_h_multicast_ps_buf(struct ieee80211_tx_data *tx) + if (ieee80211_hw_check(&tx->local->hw, QUEUE_CONTROL)) + info->hw_queue = tx->sdata->vif.cab_queue; + +- /* no stations in PS mode */ +- if (!atomic_read(&ps->num_sta_ps)) ++ /* no stations in PS mode and no buffered packets */ ++ if (!atomic_read(&ps->num_sta_ps) && skb_queue_empty(&ps->bc_buf)) + return TX_CONTINUE; + + info->flags |= IEEE80211_TX_CTL_SEND_AFTER_DTIM; +diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c +index 21c0aa0a0d1d..8cb7d812ccb8 100644 +--- a/net/sunrpc/auth_gss/auth_gss.c ++++ b/net/sunrpc/auth_gss/auth_gss.c +@@ -1768,6 +1768,7 @@ priv_release_snd_buf(struct rpc_rqst *rqstp) + for (i=0; i < rqstp->rq_enc_pages_num; i++) + __free_page(rqstp->rq_enc_pages[i]); + kfree(rqstp->rq_enc_pages); ++ rqstp->rq_release_snd_buf = NULL; + } + + static int +@@ -1776,6 +1777,9 @@ alloc_enc_pages(struct rpc_rqst *rqstp) + struct xdr_buf *snd_buf = &rqstp->rq_snd_buf; + int first, last, i; + ++ if (rqstp->rq_release_snd_buf) ++ rqstp->rq_release_snd_buf(rqstp); ++ + if (snd_buf->page_len == 0) { + rqstp->rq_enc_pages_num = 0; + return 0; +diff --git a/net/wireless/util.c b/net/wireless/util.c +index 959ed3acd240..aad1c8e858e5 100644 +--- a/net/wireless/util.c ++++ b/net/wireless/util.c +@@ -1418,6 +1418,8 @@ 
size_t ieee80211_ie_split_ric(const u8 *ies, size_t ielen, + ies[pos + ext], + ext == 2)) + pos = skip_ie(ies, ielen, pos); ++ else ++ break; + } + } else { + pos = skip_ie(ies, ielen, pos); +diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c +index 66c90f486af9..818dff1de545 100644 +--- a/sound/core/pcm_native.c ++++ b/sound/core/pcm_native.c +@@ -36,6 +36,7 @@ + #include <sound/timer.h> + #include <sound/minors.h> + #include <linux/uio.h> ++#include <linux/delay.h> + + #include "pcm_local.h" + +@@ -91,12 +92,12 @@ static DECLARE_RWSEM(snd_pcm_link_rwsem); + * and this may lead to a deadlock when the code path takes read sem + * twice (e.g. one in snd_pcm_action_nonatomic() and another in + * snd_pcm_stream_lock()). As a (suboptimal) workaround, let writer to +- * spin until it gets the lock. ++ * sleep until all the readers are completed without blocking by writer. + */ +-static inline void down_write_nonblock(struct rw_semaphore *lock) ++static inline void down_write_nonfifo(struct rw_semaphore *lock) + { + while (!down_write_trylock(lock)) +- cond_resched(); ++ msleep(1); + } + + #define PCM_LOCK_DEFAULT 0 +@@ -1967,7 +1968,7 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd) + res = -ENOMEM; + goto _nolock; + } +- down_write_nonblock(&snd_pcm_link_rwsem); ++ down_write_nonfifo(&snd_pcm_link_rwsem); + write_lock_irq(&snd_pcm_link_rwlock); + if (substream->runtime->status->state == SNDRV_PCM_STATE_OPEN || + substream->runtime->status->state != substream1->runtime->status->state || +@@ -2014,7 +2015,7 @@ static int snd_pcm_unlink(struct snd_pcm_substream *substream) + struct snd_pcm_substream *s; + int res = 0; + +- down_write_nonblock(&snd_pcm_link_rwsem); ++ down_write_nonfifo(&snd_pcm_link_rwsem); + write_lock_irq(&snd_pcm_link_rwlock); + if (!snd_pcm_stream_linked(substream)) { + res = -EALREADY; +@@ -2369,7 +2370,8 @@ int snd_pcm_hw_constraints_complete(struct snd_pcm_substream *substream) + + static void 
pcm_release_private(struct snd_pcm_substream *substream) + { +- snd_pcm_unlink(substream); ++ if (snd_pcm_stream_linked(substream)) ++ snd_pcm_unlink(substream); + } + + void snd_pcm_release_substream(struct snd_pcm_substream *substream) +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 5810be2c6c34..1ddeebc373b3 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -2585,6 +2585,10 @@ static const struct pci_device_id azx_ids[] = { + /* AMD Hudson */ + { PCI_DEVICE(0x1022, 0x780d), + .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB }, ++ /* AMD Stoney */ ++ { PCI_DEVICE(0x1022, 0x157a), ++ .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB | ++ AZX_DCAPS_PM_RUNTIME }, + /* AMD Raven */ + { PCI_DEVICE(0x1022, 0x15e3), + .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB | +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index cf5d26642bcd..22ca1f0a858f 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -4988,9 +4988,18 @@ static void alc_fixup_tpt470_dock(struct hda_codec *codec, + { 0x19, 0x21a11010 }, /* dock mic */ + { } + }; ++ /* Assure the speaker pin to be coupled with DAC NID 0x03; otherwise ++ * the speaker output becomes too low by some reason on Thinkpads with ++ * ALC298 codec ++ */ ++ static hda_nid_t preferred_pairs[] = { ++ 0x14, 0x03, 0x17, 0x02, 0x21, 0x02, ++ 0 ++ }; + struct alc_spec *spec = codec->spec; + + if (action == HDA_FIXUP_ACT_PRE_PROBE) { ++ spec->gen.preferred_dacs = preferred_pairs; + spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP; + snd_hda_apply_pincfgs(codec, pincfgs); + } else if (action == HDA_FIXUP_ACT_INIT) { +@@ -5510,6 +5519,7 @@ enum { + ALC221_FIXUP_HP_HEADSET_MIC, + ALC285_FIXUP_LENOVO_HEADPHONE_NOISE, + ALC295_FIXUP_HP_AUTO_MUTE, ++ ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE, + }; + + static const struct hda_fixup alc269_fixups[] = { +@@ -6387,6 +6397,15 @@ static const struct hda_fixup 
alc269_fixups[] = { + .type = HDA_FIXUP_FUNC, + .v.func = alc_fixup_auto_mute_via_amp, + }, ++ [ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE] = { ++ .type = HDA_FIXUP_PINS, ++ .v.pins = (const struct hda_pintbl[]) { ++ { 0x18, 0x01a1913c }, /* use as headset mic, without its own jack detect */ ++ { } ++ }, ++ .chained = true, ++ .chain_id = ALC269_FIXUP_HEADSET_MIC ++ }, + }; + + static const struct snd_pci_quirk alc269_fixup_tbl[] = { +@@ -6401,7 +6420,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1025, 0x0762, "Acer Aspire E1-472", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), + SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572), + SND_PCI_QUIRK(0x1025, 0x079b, "Acer Aspire V5-573G", ALC282_FIXUP_ASPIRE_V5_PINS), ++ SND_PCI_QUIRK(0x1025, 0x102b, "Acer Aspire C24-860", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1025, 0x106d, "Acer Cloudbook 14", ALC283_FIXUP_CHROME_BOOK), ++ SND_PCI_QUIRK(0x1025, 0x128f, "Acer Veriton Z6860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), ++ SND_PCI_QUIRK(0x1025, 0x1290, "Acer Veriton Z4860G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), ++ SND_PCI_QUIRK(0x1025, 0x1291, "Acer Veriton Z4660G", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE), + SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z), + SND_PCI_QUIRK(0x1028, 0x054b, "Dell XPS one 2710", ALC275_FIXUP_DELL_XPS), + SND_PCI_QUIRK(0x1028, 0x05bd, "Dell Latitude E6440", ALC292_FIXUP_DELL_E7X), +@@ -7065,6 +7088,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = { + {0x14, 0x90170110}, + {0x19, 0x04a11040}, + {0x21, 0x04211020}), ++ SND_HDA_PIN_QUIRK(0x10ec0286, 0x1025, "Acer", ALC286_FIXUP_ACER_AIO_MIC_NO_PRESENCE, ++ {0x12, 0x90a60130}, ++ {0x17, 0x90170110}, ++ {0x21, 0x02211020}), + SND_HDA_PIN_QUIRK(0x10ec0288, 0x1028, "Dell", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, + {0x12, 0x90a60120}, + {0x14, 0x90170110}, +diff --git a/sound/usb/card.c b/sound/usb/card.c +index 
2bfe4e80a6b9..a105947eaf55 100644 +--- a/sound/usb/card.c ++++ b/sound/usb/card.c +@@ -682,9 +682,12 @@ static int usb_audio_probe(struct usb_interface *intf, + + __error: + if (chip) { ++ /* chip->active is inside the chip->card object, ++ * decrement before memory is possibly returned. ++ */ ++ atomic_dec(&chip->active); + if (!chip->num_interfaces) + snd_card_free(chip->card); +- atomic_dec(&chip->active); + } + mutex_unlock(&register_mutex); + return err; +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c +index 8a945ece9869..6623cafc94f2 100644 +--- a/sound/usb/quirks.c ++++ b/sound/usb/quirks.c +@@ -1373,6 +1373,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip, + return SNDRV_PCM_FMTBIT_DSD_U32_BE; + break; + ++ case USB_ID(0x152a, 0x85de): /* SMSL D1 DAC */ + case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */ + case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */ + case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */ +diff --git a/tools/testing/selftests/tc-testing/tdc.py b/tools/testing/selftests/tc-testing/tdc.py +index 87a04a8a5945..7607ba3e3cbe 100755 +--- a/tools/testing/selftests/tc-testing/tdc.py ++++ b/tools/testing/selftests/tc-testing/tdc.py +@@ -134,9 +134,9 @@ def exec_cmd(args, pm, stage, command): + (rawout, serr) = proc.communicate() + + if proc.returncode != 0 and len(serr) > 0: +- foutput = serr.decode("utf-8") ++ foutput = serr.decode("utf-8", errors="ignore") + else: +- foutput = rawout.decode("utf-8") ++ foutput = rawout.decode("utf-8", errors="ignore") + + proc.stdout.close() + proc.stderr.close() +@@ -169,6 +169,8 @@ def prepare_env(args, pm, stage, prefix, cmdlist, output = None): + file=sys.stderr) + print("\n{} *** Error message: \"{}\"".format(prefix, foutput), + file=sys.stderr) ++ print("returncode {}; expected {}".format(proc.returncode, ++ exit_codes)) + print("\n{} *** Aborting test run.".format(prefix), file=sys.stderr) + print("\n\n{} *** stdout ***".format(proc.stdout), file=sys.stderr) + print("\n\n{} ***
stderr ***".format(proc.stderr), file=sys.stderr) +@@ -195,12 +197,18 @@ def run_one_test(pm, args, index, tidx): + print('-----> execute stage') + pm.call_pre_execute() + (p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"]) +- exit_code = p.returncode ++ if p: ++ exit_code = p.returncode ++ else: ++ exit_code = None ++ + pm.call_post_execute() + +- if (exit_code != int(tidx["expExitCode"])): ++ if (exit_code is None or exit_code != int(tidx["expExitCode"])): + result = False +- print("exit:", exit_code, int(tidx["expExitCode"])) ++ print("exit: {!r}".format(exit_code)) ++ print("exit: {}".format(int(tidx["expExitCode"]))) ++ #print("exit: {!r} {}".format(exit_code, int(tidx["expExitCode"]))) + print(procout) + else: + if args.verbose > 0: