From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 56C6D138334
	for ; Wed, 14 Nov 2018 11:38:06 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 948FDE0BA0;
	Wed, 14 Nov 2018 11:37:58 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 341BDE0B9A
	for ; Wed, 14 Nov 2018 11:37:58 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id DC8C4335CDA
	for ; Wed, 14 Nov 2018 11:37:56 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 2B421479
	for ; Wed, 14 Nov 2018 11:37:52 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1542195386.2988b4e9c29458eeec34e9889d4837ff38fae2b4.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.18 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1012_linux-4.18.13.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 2988b4e9c29458eeec34e9889d4837ff38fae2b4
X-VCS-Branch: 4.18
Date: Wed, 14 Nov 2018 11:37:52 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: 
7413e6a1-4d28-40b1-960f-2c0e668eba16 X-Archives-Hash: 0a35a4500f4a568c6a5708f98c1d8e97 commit: 2988b4e9c29458eeec34e9889d4837ff38fae2b4 Author: Mike Pagano gentoo org> AuthorDate: Wed Oct 10 11:16:13 2018 +0000 Commit: Mike Pagano gentoo org> CommitDate: Wed Nov 14 11:36:26 2018 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2988b4e9 Linux patch 4.18.13 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1012_linux-4.18.13.patch | 7273 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 7277 insertions(+) diff --git a/0000_README b/0000_README index ff87445..f5bb594 100644 --- a/0000_README +++ b/0000_README @@ -91,6 +91,10 @@ Patch: 1011_linux-4.18.12.patch From: http://www.kernel.org Desc: Linux 4.18.12 +Patch: 1012_linux-4.18.13.patch +From: http://www.kernel.org +Desc: Linux 4.18.13 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1012_linux-4.18.13.patch b/1012_linux-4.18.13.patch new file mode 100644 index 0000000..6c8e751 --- /dev/null +++ b/1012_linux-4.18.13.patch @@ -0,0 +1,7273 @@ +diff --git a/Documentation/devicetree/bindings/net/sh_eth.txt b/Documentation/devicetree/bindings/net/sh_eth.txt +index 82a4cf2c145d..a62fe3b613fc 100644 +--- a/Documentation/devicetree/bindings/net/sh_eth.txt ++++ b/Documentation/devicetree/bindings/net/sh_eth.txt +@@ -16,6 +16,7 @@ Required properties: + "renesas,ether-r8a7794" if the device is a part of R8A7794 SoC. + "renesas,gether-r8a77980" if the device is a part of R8A77980 SoC. + "renesas,ether-r7s72100" if the device is a part of R7S72100 SoC. ++ "renesas,ether-r7s9210" if the device is a part of R7S9210 SoC. + "renesas,rcar-gen1-ether" for a generic R-Car Gen1 device. + "renesas,rcar-gen2-ether" for a generic R-Car Gen2 or RZ/G1 + device. 
+diff --git a/Makefile b/Makefile +index 466e07af8473..4442e9ea4b6d 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 18 +-SUBLEVEL = 12 ++SUBLEVEL = 13 + EXTRAVERSION = + NAME = Merciless Moray + +diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h +index 11859287c52a..c98b59ac0612 100644 +--- a/arch/arc/include/asm/atomic.h ++++ b/arch/arc/include/asm/atomic.h +@@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v) \ + "1: llock %[orig], [%[ctr]] \n" \ + " " #asm_op " %[val], %[orig], %[i] \n" \ + " scond %[val], [%[ctr]] \n" \ +- " \n" \ ++ " bnz 1b \n" \ + : [val] "=&r" (val), \ + [orig] "=&r" (orig) \ + : [ctr] "r" (&v->counter), \ +diff --git a/arch/arm64/include/asm/jump_label.h b/arch/arm64/include/asm/jump_label.h +index 1b5e0e843c3a..7e2b3e360086 100644 +--- a/arch/arm64/include/asm/jump_label.h ++++ b/arch/arm64/include/asm/jump_label.h +@@ -28,7 +28,7 @@ + + static __always_inline bool arch_static_branch(struct static_key *key, bool branch) + { +- asm goto("1: nop\n\t" ++ asm_volatile_goto("1: nop\n\t" + ".pushsection __jump_table, \"aw\"\n\t" + ".align 3\n\t" + ".quad 1b, %l[l_yes], %c0\n\t" +@@ -42,7 +42,7 @@ l_yes: + + static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) + { +- asm goto("1: b %l[l_yes]\n\t" ++ asm_volatile_goto("1: b %l[l_yes]\n\t" + ".pushsection __jump_table, \"aw\"\n\t" + ".align 3\n\t" + ".quad 1b, %l[l_yes], %c0\n\t" +diff --git a/arch/hexagon/include/asm/bitops.h b/arch/hexagon/include/asm/bitops.h +index 5e4a59b3ec1b..2691a1857d20 100644 +--- a/arch/hexagon/include/asm/bitops.h ++++ b/arch/hexagon/include/asm/bitops.h +@@ -211,7 +211,7 @@ static inline long ffz(int x) + * This is defined the same way as ffs. + * Note fls(0) = 0, fls(1) = 1, fls(0x80000000) = 32. 
+ */ +-static inline long fls(int x) ++static inline int fls(int x) + { + int r; + +@@ -232,7 +232,7 @@ static inline long fls(int x) + * the libc and compiler builtin ffs routines, therefore + * differs in spirit from the above ffz (man ffs). + */ +-static inline long ffs(int x) ++static inline int ffs(int x) + { + int r; + +diff --git a/arch/hexagon/kernel/dma.c b/arch/hexagon/kernel/dma.c +index 77459df34e2e..7ebe7ad19d15 100644 +--- a/arch/hexagon/kernel/dma.c ++++ b/arch/hexagon/kernel/dma.c +@@ -60,7 +60,7 @@ static void *hexagon_dma_alloc_coherent(struct device *dev, size_t size, + panic("Can't create %s() memory pool!", __func__); + else + gen_pool_add(coherent_pool, +- pfn_to_virt(max_low_pfn), ++ (unsigned long)pfn_to_virt(max_low_pfn), + hexagon_coherent_pool_size, -1); + } + +diff --git a/arch/nds32/include/asm/elf.h b/arch/nds32/include/asm/elf.h +index 56c479058802..f5f9cf7e0544 100644 +--- a/arch/nds32/include/asm/elf.h ++++ b/arch/nds32/include/asm/elf.h +@@ -121,9 +121,9 @@ struct elf32_hdr; + */ + #define ELF_CLASS ELFCLASS32 + #ifdef __NDS32_EB__ +-#define ELF_DATA ELFDATA2MSB; ++#define ELF_DATA ELFDATA2MSB + #else +-#define ELF_DATA ELFDATA2LSB; ++#define ELF_DATA ELFDATA2LSB + #endif + #define ELF_ARCH EM_NDS32 + #define USE_ELF_CORE_DUMP +diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h +index 18a009f3804d..3f771e0595e8 100644 +--- a/arch/nds32/include/asm/uaccess.h ++++ b/arch/nds32/include/asm/uaccess.h +@@ -78,8 +78,9 @@ static inline void set_fs(mm_segment_t fs) + #define get_user(x,p) \ + ({ \ + long __e = -EFAULT; \ +- if(likely(access_ok(VERIFY_READ, p, sizeof(*p)))) { \ +- __e = __get_user(x,p); \ ++ const __typeof__(*(p)) __user *__p = (p); \ ++ if(likely(access_ok(VERIFY_READ, __p, sizeof(*__p)))) { \ ++ __e = __get_user(x, __p); \ + } else \ + x = 0; \ + __e; \ +@@ -99,10 +100,10 @@ static inline void set_fs(mm_segment_t fs) + + #define __get_user_err(x,ptr,err) \ + do { \ +- unsigned long __gu_addr 
= (unsigned long)(ptr); \ ++ const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \ + unsigned long __gu_val; \ +- __chk_user_ptr(ptr); \ +- switch (sizeof(*(ptr))) { \ ++ __chk_user_ptr(__gu_addr); \ ++ switch (sizeof(*(__gu_addr))) { \ + case 1: \ + __get_user_asm("lbi",__gu_val,__gu_addr,err); \ + break; \ +@@ -119,7 +120,7 @@ do { \ + BUILD_BUG(); \ + break; \ + } \ +- (x) = (__typeof__(*(ptr)))__gu_val; \ ++ (x) = (__typeof__(*(__gu_addr)))__gu_val; \ + } while (0) + + #define __get_user_asm(inst,x,addr,err) \ +@@ -169,8 +170,9 @@ do { \ + #define put_user(x,p) \ + ({ \ + long __e = -EFAULT; \ +- if(likely(access_ok(VERIFY_WRITE, p, sizeof(*p)))) { \ +- __e = __put_user(x,p); \ ++ __typeof__(*(p)) __user *__p = (p); \ ++ if(likely(access_ok(VERIFY_WRITE, __p, sizeof(*__p)))) { \ ++ __e = __put_user(x, __p); \ + } \ + __e; \ + }) +@@ -189,10 +191,10 @@ do { \ + + #define __put_user_err(x,ptr,err) \ + do { \ +- unsigned long __pu_addr = (unsigned long)(ptr); \ +- __typeof__(*(ptr)) __pu_val = (x); \ +- __chk_user_ptr(ptr); \ +- switch (sizeof(*(ptr))) { \ ++ __typeof__(*(ptr)) __user *__pu_addr = (ptr); \ ++ __typeof__(*(__pu_addr)) __pu_val = (x); \ ++ __chk_user_ptr(__pu_addr); \ ++ switch (sizeof(*(__pu_addr))) { \ + case 1: \ + __put_user_asm("sbi",__pu_val,__pu_addr,err); \ + break; \ +diff --git a/arch/nds32/kernel/atl2c.c b/arch/nds32/kernel/atl2c.c +index 0c6d031a1c4a..0c5386e72098 100644 +--- a/arch/nds32/kernel/atl2c.c ++++ b/arch/nds32/kernel/atl2c.c +@@ -9,7 +9,8 @@ + + void __iomem *atl2c_base; + static const struct of_device_id atl2c_ids[] __initconst = { +- {.compatible = "andestech,atl2c",} ++ {.compatible = "andestech,atl2c",}, ++ {} + }; + + static int __init atl2c_of_init(void) +diff --git a/arch/nds32/kernel/module.c b/arch/nds32/kernel/module.c +index 4167283d8293..1e31829cbc2a 100644 +--- a/arch/nds32/kernel/module.c ++++ b/arch/nds32/kernel/module.c +@@ -40,7 +40,7 @@ void do_reloc16(unsigned int val, unsigned int *loc, unsigned int 
val_mask, + + tmp2 = tmp & loc_mask; + if (partial_in_place) { +- tmp &= (!loc_mask); ++ tmp &= (~loc_mask); + tmp = + tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask); + } else { +@@ -70,7 +70,7 @@ void do_reloc32(unsigned int val, unsigned int *loc, unsigned int val_mask, + + tmp2 = tmp & loc_mask; + if (partial_in_place) { +- tmp &= (!loc_mask); ++ tmp &= (~loc_mask); + tmp = + tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask); + } else { +diff --git a/arch/nds32/kernel/traps.c b/arch/nds32/kernel/traps.c +index a6205fd4db52..f0e974347c26 100644 +--- a/arch/nds32/kernel/traps.c ++++ b/arch/nds32/kernel/traps.c +@@ -137,7 +137,7 @@ static void __dump(struct task_struct *tsk, unsigned long *base_reg) + !((unsigned long)base_reg & 0x3) && + ((unsigned long)base_reg >= TASK_SIZE)) { + unsigned long next_fp; +-#if !defined(NDS32_ABI_2) ++#if !defined(__NDS32_ABI_2) + ret_addr = base_reg[0]; + next_fp = base_reg[1]; + #else +diff --git a/arch/nds32/kernel/vmlinux.lds.S b/arch/nds32/kernel/vmlinux.lds.S +index 288313b886ef..9e90f30a181d 100644 +--- a/arch/nds32/kernel/vmlinux.lds.S ++++ b/arch/nds32/kernel/vmlinux.lds.S +@@ -13,14 +13,26 @@ OUTPUT_ARCH(nds32) + ENTRY(_stext_lma) + jiffies = jiffies_64; + ++#if defined(CONFIG_GCOV_KERNEL) ++#define NDS32_EXIT_KEEP(x) x ++#else ++#define NDS32_EXIT_KEEP(x) ++#endif ++ + SECTIONS + { + _stext_lma = TEXTADDR - LOAD_OFFSET; + . 
= TEXTADDR; + __init_begin = .; + HEAD_TEXT_SECTION ++ .exit.text : { ++ NDS32_EXIT_KEEP(EXIT_TEXT) ++ } + INIT_TEXT_SECTION(PAGE_SIZE) + INIT_DATA_SECTION(16) ++ .exit.data : { ++ NDS32_EXIT_KEEP(EXIT_DATA) ++ } + PERCPU_SECTION(L1_CACHE_BYTES) + __init_end = .; + +diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c +index 7f3a8cf5d66f..4c08f42f6406 100644 +--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c ++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c +@@ -359,7 +359,7 @@ static int kvmppc_mmu_book3s_64_hv_xlate(struct kvm_vcpu *vcpu, gva_t eaddr, + unsigned long pp, key; + unsigned long v, orig_v, gr; + __be64 *hptep; +- int index; ++ long int index; + int virtmode = vcpu->arch.shregs.msr & (data ? MSR_DR : MSR_IR); + + if (kvm_is_radix(vcpu->kvm)) +diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c +index f0d2070866d4..0efa5b29d0a3 100644 +--- a/arch/riscv/kernel/setup.c ++++ b/arch/riscv/kernel/setup.c +@@ -64,15 +64,8 @@ atomic_t hart_lottery; + #ifdef CONFIG_BLK_DEV_INITRD + static void __init setup_initrd(void) + { +- extern char __initramfs_start[]; +- extern unsigned long __initramfs_size; + unsigned long size; + +- if (__initramfs_size > 0) { +- initrd_start = (unsigned long)(&__initramfs_start); +- initrd_end = initrd_start + __initramfs_size; +- } +- + if (initrd_start >= initrd_end) { + printk(KERN_INFO "initrd not found or empty"); + goto disable; +diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c +index a4170048a30b..17fbd07e4245 100644 +--- a/arch/x86/events/intel/lbr.c ++++ b/arch/x86/events/intel/lbr.c +@@ -1250,4 +1250,8 @@ void intel_pmu_lbr_init_knl(void) + + x86_pmu.lbr_sel_mask = LBR_SEL_MASK; + x86_pmu.lbr_sel_map = snb_lbr_sel_map; ++ ++ /* Knights Landing does have MISPREDICT bit */ ++ if (x86_pmu.intel_cap.lbr_format == LBR_FORMAT_LIP) ++ x86_pmu.intel_cap.lbr_format = LBR_FORMAT_EIP_FLAGS; + } +diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c +index 
ec00d1ff5098..f7151cd03cb0 100644 +--- a/arch/x86/kernel/apm_32.c ++++ b/arch/x86/kernel/apm_32.c +@@ -1640,6 +1640,7 @@ static int do_open(struct inode *inode, struct file *filp) + return 0; + } + ++#ifdef CONFIG_PROC_FS + static int proc_apm_show(struct seq_file *m, void *v) + { + unsigned short bx; +@@ -1719,6 +1720,7 @@ static int proc_apm_show(struct seq_file *m, void *v) + units); + return 0; + } ++#endif + + static int apm(void *unused) + { +diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c +index eb85cb87c40f..ec868373b11b 100644 +--- a/block/blk-cgroup.c ++++ b/block/blk-cgroup.c +@@ -307,28 +307,11 @@ struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg, + } + } + +-static void blkg_pd_offline(struct blkcg_gq *blkg) +-{ +- int i; +- +- lockdep_assert_held(blkg->q->queue_lock); +- lockdep_assert_held(&blkg->blkcg->lock); +- +- for (i = 0; i < BLKCG_MAX_POLS; i++) { +- struct blkcg_policy *pol = blkcg_policy[i]; +- +- if (blkg->pd[i] && !blkg->pd[i]->offline && +- pol->pd_offline_fn) { +- pol->pd_offline_fn(blkg->pd[i]); +- blkg->pd[i]->offline = true; +- } +- } +-} +- + static void blkg_destroy(struct blkcg_gq *blkg) + { + struct blkcg *blkcg = blkg->blkcg; + struct blkcg_gq *parent = blkg->parent; ++ int i; + + lockdep_assert_held(blkg->q->queue_lock); + lockdep_assert_held(&blkcg->lock); +@@ -337,6 +320,13 @@ static void blkg_destroy(struct blkcg_gq *blkg) + WARN_ON_ONCE(list_empty(&blkg->q_node)); + WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node)); + ++ for (i = 0; i < BLKCG_MAX_POLS; i++) { ++ struct blkcg_policy *pol = blkcg_policy[i]; ++ ++ if (blkg->pd[i] && pol->pd_offline_fn) ++ pol->pd_offline_fn(blkg->pd[i]); ++ } ++ + if (parent) { + blkg_rwstat_add_aux(&parent->stat_bytes, &blkg->stat_bytes); + blkg_rwstat_add_aux(&parent->stat_ios, &blkg->stat_ios); +@@ -379,7 +369,6 @@ static void blkg_destroy_all(struct request_queue *q) + struct blkcg *blkcg = blkg->blkcg; + + spin_lock(&blkcg->lock); +- blkg_pd_offline(blkg); + blkg_destroy(blkg); + 
spin_unlock(&blkcg->lock); + } +@@ -1006,54 +995,21 @@ static struct cftype blkcg_legacy_files[] = { + * @css: css of interest + * + * This function is called when @css is about to go away and responsible +- * for offlining all blkgs pd and killing all wbs associated with @css. +- * blkgs pd offline should be done while holding both q and blkcg locks. +- * As blkcg lock is nested inside q lock, this function performs reverse +- * double lock dancing. ++ * for shooting down all blkgs associated with @css. blkgs should be ++ * removed while holding both q and blkcg locks. As blkcg lock is nested ++ * inside q lock, this function performs reverse double lock dancing. + * + * This is the blkcg counterpart of ioc_release_fn(). + */ + static void blkcg_css_offline(struct cgroup_subsys_state *css) + { + struct blkcg *blkcg = css_to_blkcg(css); +- struct blkcg_gq *blkg; + + spin_lock_irq(&blkcg->lock); + +- hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) { +- struct request_queue *q = blkg->q; +- +- if (spin_trylock(q->queue_lock)) { +- blkg_pd_offline(blkg); +- spin_unlock(q->queue_lock); +- } else { +- spin_unlock_irq(&blkcg->lock); +- cpu_relax(); +- spin_lock_irq(&blkcg->lock); +- } +- } +- +- spin_unlock_irq(&blkcg->lock); +- +- wb_blkcg_offline(blkcg); +-} +- +-/** +- * blkcg_destroy_all_blkgs - destroy all blkgs associated with a blkcg +- * @blkcg: blkcg of interest +- * +- * This function is called when blkcg css is about to free and responsible for +- * destroying all blkgs associated with @blkcg. +- * blkgs should be removed while holding both q and blkcg locks. As blkcg lock +- * is nested inside q lock, this function performs reverse double lock dancing. 
+- */ +-static void blkcg_destroy_all_blkgs(struct blkcg *blkcg) +-{ +- spin_lock_irq(&blkcg->lock); + while (!hlist_empty(&blkcg->blkg_list)) { + struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first, +- struct blkcg_gq, +- blkcg_node); ++ struct blkcg_gq, blkcg_node); + struct request_queue *q = blkg->q; + + if (spin_trylock(q->queue_lock)) { +@@ -1065,7 +1021,10 @@ static void blkcg_destroy_all_blkgs(struct blkcg *blkcg) + spin_lock_irq(&blkcg->lock); + } + } ++ + spin_unlock_irq(&blkcg->lock); ++ ++ wb_blkcg_offline(blkcg); + } + + static void blkcg_css_free(struct cgroup_subsys_state *css) +@@ -1073,8 +1032,6 @@ static void blkcg_css_free(struct cgroup_subsys_state *css) + struct blkcg *blkcg = css_to_blkcg(css); + int i; + +- blkcg_destroy_all_blkgs(blkcg); +- + mutex_lock(&blkcg_pol_mutex); + + list_del(&blkcg->all_blkcgs_node); +@@ -1412,11 +1369,8 @@ void blkcg_deactivate_policy(struct request_queue *q, + + list_for_each_entry(blkg, &q->blkg_list, q_node) { + if (blkg->pd[pol->plid]) { +- if (!blkg->pd[pol->plid]->offline && +- pol->pd_offline_fn) { ++ if (pol->pd_offline_fn) + pol->pd_offline_fn(blkg->pd[pol->plid]); +- blkg->pd[pol->plid]->offline = true; +- } + pol->pd_free_fn(blkg->pd[pol->plid]); + blkg->pd[pol->plid] = NULL; + } +diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c +index 22a2bc5f25ce..99bf0c0394f8 100644 +--- a/drivers/ata/libata-core.c ++++ b/drivers/ata/libata-core.c +@@ -7403,4 +7403,4 @@ EXPORT_SYMBOL_GPL(ata_cable_unknown); + EXPORT_SYMBOL_GPL(ata_cable_ignore); + EXPORT_SYMBOL_GPL(ata_cable_sata); + EXPORT_SYMBOL_GPL(ata_host_get); +-EXPORT_SYMBOL_GPL(ata_host_put); +\ No newline at end of file ++EXPORT_SYMBOL_GPL(ata_host_put); +diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c +index 0943e7065e0e..8e9213b36e31 100644 +--- a/drivers/base/firmware_loader/main.c ++++ b/drivers/base/firmware_loader/main.c +@@ -209,22 +209,28 @@ static struct fw_priv *__lookup_fw_priv(const 
char *fw_name) + static int alloc_lookup_fw_priv(const char *fw_name, + struct firmware_cache *fwc, + struct fw_priv **fw_priv, void *dbuf, +- size_t size) ++ size_t size, enum fw_opt opt_flags) + { + struct fw_priv *tmp; + + spin_lock(&fwc->lock); +- tmp = __lookup_fw_priv(fw_name); +- if (tmp) { +- kref_get(&tmp->ref); +- spin_unlock(&fwc->lock); +- *fw_priv = tmp; +- pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n"); +- return 1; ++ if (!(opt_flags & FW_OPT_NOCACHE)) { ++ tmp = __lookup_fw_priv(fw_name); ++ if (tmp) { ++ kref_get(&tmp->ref); ++ spin_unlock(&fwc->lock); ++ *fw_priv = tmp; ++ pr_debug("batched request - sharing the same struct fw_priv and lookup for multiple requests\n"); ++ return 1; ++ } + } ++ + tmp = __allocate_fw_priv(fw_name, fwc, dbuf, size); +- if (tmp) +- list_add(&tmp->list, &fwc->head); ++ if (tmp) { ++ INIT_LIST_HEAD(&tmp->list); ++ if (!(opt_flags & FW_OPT_NOCACHE)) ++ list_add(&tmp->list, &fwc->head); ++ } + spin_unlock(&fwc->lock); + + *fw_priv = tmp; +@@ -493,7 +499,8 @@ int assign_fw(struct firmware *fw, struct device *device, + */ + static int + _request_firmware_prepare(struct firmware **firmware_p, const char *name, +- struct device *device, void *dbuf, size_t size) ++ struct device *device, void *dbuf, size_t size, ++ enum fw_opt opt_flags) + { + struct firmware *firmware; + struct fw_priv *fw_priv; +@@ -511,7 +518,8 @@ _request_firmware_prepare(struct firmware **firmware_p, const char *name, + return 0; /* assigned */ + } + +- ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size); ++ ret = alloc_lookup_fw_priv(name, &fw_cache, &fw_priv, dbuf, size, ++ opt_flags); + + /* + * bind with 'priv' now to avoid warning in failure path +@@ -571,7 +579,8 @@ _request_firmware(const struct firmware **firmware_p, const char *name, + goto out; + } + +- ret = _request_firmware_prepare(&fw, name, device, buf, size); ++ ret = _request_firmware_prepare(&fw, name, device, buf, size, ++ 
opt_flags); + if (ret <= 0) /* error or already assigned */ + goto out; + +diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c +index efc9a7ae4857..35e81d7dd929 100644 +--- a/drivers/cpufreq/qcom-cpufreq-kryo.c ++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c +@@ -44,7 +44,7 @@ enum _msm8996_version { + + struct platform_device *cpufreq_dt_pdev, *kryo_cpufreq_pdev; + +-static enum _msm8996_version __init qcom_cpufreq_kryo_get_msm_id(void) ++static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void) + { + size_t len; + u32 *msm_id; +@@ -221,7 +221,7 @@ static int __init qcom_cpufreq_kryo_init(void) + } + module_init(qcom_cpufreq_kryo_init); + +-static void __init qcom_cpufreq_kryo_exit(void) ++static void __exit qcom_cpufreq_kryo_exit(void) + { + platform_device_unregister(kryo_cpufreq_pdev); + platform_driver_unregister(&qcom_cpufreq_kryo_driver); +diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c +index d67667970f7e..ec40f991e6c6 100644 +--- a/drivers/crypto/caam/caamalg.c ++++ b/drivers/crypto/caam/caamalg.c +@@ -1553,8 +1553,8 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request + edesc->src_nents = src_nents; + edesc->dst_nents = dst_nents; + edesc->sec4_sg_bytes = sec4_sg_bytes; +- edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) + +- desc_bytes; ++ edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc + ++ desc_bytes); + edesc->iv_dir = DMA_TO_DEVICE; + + /* Make sure IV is located in a DMAable area */ +@@ -1757,8 +1757,8 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc( + edesc->src_nents = src_nents; + edesc->dst_nents = dst_nents; + edesc->sec4_sg_bytes = sec4_sg_bytes; +- edesc->sec4_sg = (void *)edesc + sizeof(struct ablkcipher_edesc) + +- desc_bytes; ++ edesc->sec4_sg = (struct sec4_sg_entry *)((u8 *)edesc->hw_desc + ++ desc_bytes); + edesc->iv_dir = DMA_FROM_DEVICE; + + /* Make sure IV is located in a DMAable area */ +diff 
--git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c +index b916c4eb608c..e5d2ac5aec40 100644 +--- a/drivers/crypto/chelsio/chcr_algo.c ++++ b/drivers/crypto/chelsio/chcr_algo.c +@@ -367,7 +367,8 @@ static inline void dsgl_walk_init(struct dsgl_walk *walk, + walk->to = (struct phys_sge_pairs *)(dsgl + 1); + } + +-static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid) ++static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid, ++ int pci_chan_id) + { + struct cpl_rx_phys_dsgl *phys_cpl; + +@@ -385,6 +386,7 @@ static inline void dsgl_walk_end(struct dsgl_walk *walk, unsigned short qid) + phys_cpl->rss_hdr_int.opcode = CPL_RX_PHYS_ADDR; + phys_cpl->rss_hdr_int.qid = htons(qid); + phys_cpl->rss_hdr_int.hash_val = 0; ++ phys_cpl->rss_hdr_int.channel = pci_chan_id; + } + + static inline void dsgl_walk_add_page(struct dsgl_walk *walk, +@@ -718,7 +720,7 @@ static inline void create_wreq(struct chcr_context *ctx, + FILL_WR_RX_Q_ID(ctx->dev->rx_channel_id, qid, + !!lcb, ctx->tx_qidx); + +- chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->dev->tx_channel_id, ++ chcr_req->ulptx.cmd_dest = FILL_ULPTX_CMD_DEST(ctx->tx_chan_id, + qid); + chcr_req->ulptx.len = htonl((DIV_ROUND_UP(len16, 16) - + ((sizeof(chcr_req->wreq)) >> 4))); +@@ -1339,16 +1341,23 @@ static int chcr_device_init(struct chcr_context *ctx) + adap->vres.ncrypto_fc); + rxq_perchan = u_ctx->lldi.nrxq / u_ctx->lldi.nchan; + txq_perchan = ntxq / u_ctx->lldi.nchan; +- rxq_idx = ctx->dev->tx_channel_id * rxq_perchan; +- rxq_idx += id % rxq_perchan; +- txq_idx = ctx->dev->tx_channel_id * txq_perchan; +- txq_idx += id % txq_perchan; + spin_lock(&ctx->dev->lock_chcr_dev); +- ctx->rx_qidx = rxq_idx; +- ctx->tx_qidx = txq_idx; ++ ctx->tx_chan_id = ctx->dev->tx_channel_id; + ctx->dev->tx_channel_id = !ctx->dev->tx_channel_id; + ctx->dev->rx_channel_id = 0; + spin_unlock(&ctx->dev->lock_chcr_dev); ++ rxq_idx = ctx->tx_chan_id * rxq_perchan; ++ rxq_idx += id 
% rxq_perchan; ++ txq_idx = ctx->tx_chan_id * txq_perchan; ++ txq_idx += id % txq_perchan; ++ ctx->rx_qidx = rxq_idx; ++ ctx->tx_qidx = txq_idx; ++ /* Channel Id used by SGE to forward packet to Host. ++ * Same value should be used in cpl_fw6_pld RSS_CH field ++ * by FW. Driver programs PCI channel ID to be used in fw ++ * at the time of queue allocation with value "pi->tx_chan" ++ */ ++ ctx->pci_chan_id = txq_idx / txq_perchan; + } + out: + return err; +@@ -2503,6 +2512,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req, + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct dsgl_walk dsgl_walk; + unsigned int authsize = crypto_aead_authsize(tfm); ++ struct chcr_context *ctx = a_ctx(tfm); + u32 temp; + + dsgl_walk_init(&dsgl_walk, phys_cpl); +@@ -2512,7 +2522,7 @@ void chcr_add_aead_dst_ent(struct aead_request *req, + dsgl_walk_add_page(&dsgl_walk, IV, &reqctx->iv_dma); + temp = req->cryptlen + (reqctx->op ? -authsize : authsize); + dsgl_walk_add_sg(&dsgl_walk, req->dst, temp, req->assoclen); +- dsgl_walk_end(&dsgl_walk, qid); ++ dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id); + } + + void chcr_add_cipher_src_ent(struct ablkcipher_request *req, +@@ -2544,6 +2554,8 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req, + unsigned short qid) + { + struct chcr_blkcipher_req_ctx *reqctx = ablkcipher_request_ctx(req); ++ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(wrparam->req); ++ struct chcr_context *ctx = c_ctx(tfm); + struct dsgl_walk dsgl_walk; + + dsgl_walk_init(&dsgl_walk, phys_cpl); +@@ -2552,7 +2564,7 @@ void chcr_add_cipher_dst_ent(struct ablkcipher_request *req, + reqctx->dstsg = dsgl_walk.last_sg; + reqctx->dst_ofst = dsgl_walk.last_sg_len; + +- dsgl_walk_end(&dsgl_walk, qid); ++ dsgl_walk_end(&dsgl_walk, qid, ctx->pci_chan_id); + } + + void chcr_add_hash_src_ent(struct ahash_request *req, +diff --git a/drivers/crypto/chelsio/chcr_crypto.h b/drivers/crypto/chelsio/chcr_crypto.h +index 54835cb109e5..0d2c70c344f3 100644 
+--- a/drivers/crypto/chelsio/chcr_crypto.h ++++ b/drivers/crypto/chelsio/chcr_crypto.h +@@ -255,6 +255,8 @@ struct chcr_context { + struct chcr_dev *dev; + unsigned char tx_qidx; + unsigned char rx_qidx; ++ unsigned char tx_chan_id; ++ unsigned char pci_chan_id; + struct __crypto_ctx crypto_ctx[0]; + }; + +diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c +index a10c418d4e5c..56bd28174f52 100644 +--- a/drivers/crypto/mxs-dcp.c ++++ b/drivers/crypto/mxs-dcp.c +@@ -63,7 +63,7 @@ struct dcp { + struct dcp_coherent_block *coh; + + struct completion completion[DCP_MAX_CHANS]; +- struct mutex mutex[DCP_MAX_CHANS]; ++ spinlock_t lock[DCP_MAX_CHANS]; + struct task_struct *thread[DCP_MAX_CHANS]; + struct crypto_queue queue[DCP_MAX_CHANS]; + }; +@@ -349,13 +349,20 @@ static int dcp_chan_thread_aes(void *data) + + int ret; + +- do { +- __set_current_state(TASK_INTERRUPTIBLE); ++ while (!kthread_should_stop()) { ++ set_current_state(TASK_INTERRUPTIBLE); + +- mutex_lock(&sdcp->mutex[chan]); ++ spin_lock(&sdcp->lock[chan]); + backlog = crypto_get_backlog(&sdcp->queue[chan]); + arq = crypto_dequeue_request(&sdcp->queue[chan]); +- mutex_unlock(&sdcp->mutex[chan]); ++ spin_unlock(&sdcp->lock[chan]); ++ ++ if (!backlog && !arq) { ++ schedule(); ++ continue; ++ } ++ ++ set_current_state(TASK_RUNNING); + + if (backlog) + backlog->complete(backlog, -EINPROGRESS); +@@ -363,11 +370,8 @@ static int dcp_chan_thread_aes(void *data) + if (arq) { + ret = mxs_dcp_aes_block_crypt(arq); + arq->complete(arq, ret); +- continue; + } +- +- schedule(); +- } while (!kthread_should_stop()); ++ } + + return 0; + } +@@ -409,9 +413,9 @@ static int mxs_dcp_aes_enqueue(struct ablkcipher_request *req, int enc, int ecb) + rctx->ecb = ecb; + actx->chan = DCP_CHAN_CRYPTO; + +- mutex_lock(&sdcp->mutex[actx->chan]); ++ spin_lock(&sdcp->lock[actx->chan]); + ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base); +- mutex_unlock(&sdcp->mutex[actx->chan]); ++ 
spin_unlock(&sdcp->lock[actx->chan]); + + wake_up_process(sdcp->thread[actx->chan]); + +@@ -640,13 +644,20 @@ static int dcp_chan_thread_sha(void *data) + struct ahash_request *req; + int ret, fini; + +- do { +- __set_current_state(TASK_INTERRUPTIBLE); ++ while (!kthread_should_stop()) { ++ set_current_state(TASK_INTERRUPTIBLE); + +- mutex_lock(&sdcp->mutex[chan]); ++ spin_lock(&sdcp->lock[chan]); + backlog = crypto_get_backlog(&sdcp->queue[chan]); + arq = crypto_dequeue_request(&sdcp->queue[chan]); +- mutex_unlock(&sdcp->mutex[chan]); ++ spin_unlock(&sdcp->lock[chan]); ++ ++ if (!backlog && !arq) { ++ schedule(); ++ continue; ++ } ++ ++ set_current_state(TASK_RUNNING); + + if (backlog) + backlog->complete(backlog, -EINPROGRESS); +@@ -658,12 +669,8 @@ static int dcp_chan_thread_sha(void *data) + ret = dcp_sha_req_to_buf(arq); + fini = rctx->fini; + arq->complete(arq, ret); +- if (!fini) +- continue; + } +- +- schedule(); +- } while (!kthread_should_stop()); ++ } + + return 0; + } +@@ -721,9 +728,9 @@ static int dcp_sha_update_fx(struct ahash_request *req, int fini) + rctx->init = 1; + } + +- mutex_lock(&sdcp->mutex[actx->chan]); ++ spin_lock(&sdcp->lock[actx->chan]); + ret = crypto_enqueue_request(&sdcp->queue[actx->chan], &req->base); +- mutex_unlock(&sdcp->mutex[actx->chan]); ++ spin_unlock(&sdcp->lock[actx->chan]); + + wake_up_process(sdcp->thread[actx->chan]); + mutex_unlock(&actx->mutex); +@@ -997,7 +1004,7 @@ static int mxs_dcp_probe(struct platform_device *pdev) + platform_set_drvdata(pdev, sdcp); + + for (i = 0; i < DCP_MAX_CHANS; i++) { +- mutex_init(&sdcp->mutex[i]); ++ spin_lock_init(&sdcp->lock[i]); + init_completion(&sdcp->completion[i]); + crypto_init_queue(&sdcp->queue[i], 50); + } +diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c +index ba197f34c252..763c2166ee0e 100644 +--- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c ++++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c +@@ -123,7 +123,8 @@ static int 
adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + struct adf_hw_device_data *hw_data; + char name[ADF_DEVICE_NAME_LENGTH]; + unsigned int i, bar_nr; +- int ret, bar_mask; ++ unsigned long bar_mask; ++ int ret; + + switch (ent->device) { + case ADF_C3XXX_PCI_DEVICE_ID: +@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + /* Find and map all the device's BARS */ + i = 0; + bar_mask = pci_select_bars(pdev, IORESOURCE_MEM); +- for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask, +- ADF_PCI_MAX_BARS * 2) { ++ for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i++]; + + bar->base_addr = pci_resource_start(pdev, bar_nr); +diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c +index 24ec908eb26c..613c7d5644ce 100644 +--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c ++++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c +@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + struct adf_hw_device_data *hw_data; + char name[ADF_DEVICE_NAME_LENGTH]; + unsigned int i, bar_nr; +- int ret, bar_mask; ++ unsigned long bar_mask; ++ int ret; + + switch (ent->device) { + case ADF_C3XXXIOV_PCI_DEVICE_ID: +@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + /* Find and map all the device's BARS */ + i = 0; + bar_mask = pci_select_bars(pdev, IORESOURCE_MEM); +- for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask, +- ADF_PCI_MAX_BARS * 2) { ++ for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i++]; + + bar->base_addr = pci_resource_start(pdev, bar_nr); +diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c +index 59a5a0df50b6..9cb832963357 100644 +--- a/drivers/crypto/qat/qat_c62x/adf_drv.c ++++ b/drivers/crypto/qat/qat_c62x/adf_drv.c +@@ 
-123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + struct adf_hw_device_data *hw_data; + char name[ADF_DEVICE_NAME_LENGTH]; + unsigned int i, bar_nr; +- int ret, bar_mask; ++ unsigned long bar_mask; ++ int ret; + + switch (ent->device) { + case ADF_C62X_PCI_DEVICE_ID: +@@ -235,8 +236,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + /* Find and map all the device's BARS */ + i = (hw_data->fuses & ADF_DEVICE_FUSECTL_MASK) ? 1 : 0; + bar_mask = pci_select_bars(pdev, IORESOURCE_MEM); +- for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask, +- ADF_PCI_MAX_BARS * 2) { ++ for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i++]; + + bar->base_addr = pci_resource_start(pdev, bar_nr); +diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c +index b9f3e0e4fde9..278452b8ef81 100644 +--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c ++++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c +@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + struct adf_hw_device_data *hw_data; + char name[ADF_DEVICE_NAME_LENGTH]; + unsigned int i, bar_nr; +- int ret, bar_mask; ++ unsigned long bar_mask; ++ int ret; + + switch (ent->device) { + case ADF_C62XIOV_PCI_DEVICE_ID: +@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + /* Find and map all the device's BARS */ + i = 0; + bar_mask = pci_select_bars(pdev, IORESOURCE_MEM); +- for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask, +- ADF_PCI_MAX_BARS * 2) { ++ for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i++]; + + bar->base_addr = pci_resource_start(pdev, bar_nr); +diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c +index be5c5a988ca5..3a9708ef4ce2 100644 +--- 
a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c ++++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c +@@ -123,7 +123,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + struct adf_hw_device_data *hw_data; + char name[ADF_DEVICE_NAME_LENGTH]; + unsigned int i, bar_nr; +- int ret, bar_mask; ++ unsigned long bar_mask; ++ int ret; + + switch (ent->device) { + case ADF_DH895XCC_PCI_DEVICE_ID: +@@ -237,8 +238,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + /* Find and map all the device's BARS */ + i = 0; + bar_mask = pci_select_bars(pdev, IORESOURCE_MEM); +- for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask, +- ADF_PCI_MAX_BARS * 2) { ++ for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i++]; + + bar->base_addr = pci_resource_start(pdev, bar_nr); +diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c +index 26ab17bfc6da..3da0f951cb59 100644 +--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c ++++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c +@@ -125,7 +125,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + struct adf_hw_device_data *hw_data; + char name[ADF_DEVICE_NAME_LENGTH]; + unsigned int i, bar_nr; +- int ret, bar_mask; ++ unsigned long bar_mask; ++ int ret; + + switch (ent->device) { + case ADF_DH895XCCIOV_PCI_DEVICE_ID: +@@ -215,8 +216,7 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + /* Find and map all the device's BARS */ + i = 0; + bar_mask = pci_select_bars(pdev, IORESOURCE_MEM); +- for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask, +- ADF_PCI_MAX_BARS * 2) { ++ for_each_set_bit(bar_nr, &bar_mask, ADF_PCI_MAX_BARS * 2) { + struct adf_bar *bar = &accel_pci_dev->pci_bars[i++]; + + bar->base_addr = pci_resource_start(pdev, bar_nr); +diff --git a/drivers/firmware/arm_scmi/perf.c b/drivers/firmware/arm_scmi/perf.c +index 
2a219b1261b1..49cb74f54a10 100644 +--- a/drivers/firmware/arm_scmi/perf.c ++++ b/drivers/firmware/arm_scmi/perf.c +@@ -166,7 +166,13 @@ scmi_perf_domain_attributes_get(const struct scmi_handle *handle, u32 domain, + le32_to_cpu(attr->sustained_freq_khz); + dom_info->sustained_perf_level = + le32_to_cpu(attr->sustained_perf_level); +- dom_info->mult_factor = (dom_info->sustained_freq_khz * 1000) / ++ if (!dom_info->sustained_freq_khz || ++ !dom_info->sustained_perf_level) ++ /* CPUFreq converts to kHz, hence default 1000 */ ++ dom_info->mult_factor = 1000; ++ else ++ dom_info->mult_factor = ++ (dom_info->sustained_freq_khz * 1000) / + dom_info->sustained_perf_level; + memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); + } +diff --git a/drivers/gpio/gpio-adp5588.c b/drivers/gpio/gpio-adp5588.c +index 3530ccd17e04..da9781a2ef4a 100644 +--- a/drivers/gpio/gpio-adp5588.c ++++ b/drivers/gpio/gpio-adp5588.c +@@ -41,6 +41,8 @@ struct adp5588_gpio { + uint8_t int_en[3]; + uint8_t irq_mask[3]; + uint8_t irq_stat[3]; ++ uint8_t int_input_en[3]; ++ uint8_t int_lvl_cached[3]; + }; + + static int adp5588_gpio_read(struct i2c_client *client, u8 reg) +@@ -173,12 +175,28 @@ static void adp5588_irq_bus_sync_unlock(struct irq_data *d) + struct adp5588_gpio *dev = irq_data_get_irq_chip_data(d); + int i; + +- for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++) ++ for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++) { ++ if (dev->int_input_en[i]) { ++ mutex_lock(&dev->lock); ++ dev->dir[i] &= ~dev->int_input_en[i]; ++ dev->int_input_en[i] = 0; ++ adp5588_gpio_write(dev->client, GPIO_DIR1 + i, ++ dev->dir[i]); ++ mutex_unlock(&dev->lock); ++ } ++ ++ if (dev->int_lvl_cached[i] != dev->int_lvl[i]) { ++ dev->int_lvl_cached[i] = dev->int_lvl[i]; ++ adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + i, ++ dev->int_lvl[i]); ++ } ++ + if (dev->int_en[i] ^ dev->irq_mask[i]) { + dev->int_en[i] = dev->irq_mask[i]; + adp5588_gpio_write(dev->client, GPIO_INT_EN1 + i, + dev->int_en[i]); + } ++ } + 
+ mutex_unlock(&dev->irq_lock); + } +@@ -221,9 +239,7 @@ static int adp5588_irq_set_type(struct irq_data *d, unsigned int type) + else + return -EINVAL; + +- adp5588_gpio_direction_input(&dev->gpio_chip, gpio); +- adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + bank, +- dev->int_lvl[bank]); ++ dev->int_input_en[bank] |= bit; + + return 0; + } +diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c +index 7a2de3de6571..5b12d6fdd448 100644 +--- a/drivers/gpio/gpio-dwapb.c ++++ b/drivers/gpio/gpio-dwapb.c +@@ -726,6 +726,7 @@ static int dwapb_gpio_probe(struct platform_device *pdev) + out_unregister: + dwapb_gpio_unregister(gpio); + dwapb_irq_teardown(gpio); ++ clk_disable_unprepare(gpio->clk); + + return err; + } +diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c +index addd9fecc198..a3e43cacd78e 100644 +--- a/drivers/gpio/gpiolib-acpi.c ++++ b/drivers/gpio/gpiolib-acpi.c +@@ -25,7 +25,6 @@ + + struct acpi_gpio_event { + struct list_head node; +- struct list_head initial_sync_list; + acpi_handle handle; + unsigned int pin; + unsigned int irq; +@@ -49,10 +48,19 @@ struct acpi_gpio_chip { + struct mutex conn_lock; + struct gpio_chip *chip; + struct list_head events; ++ struct list_head deferred_req_irqs_list_entry; + }; + +-static LIST_HEAD(acpi_gpio_initial_sync_list); +-static DEFINE_MUTEX(acpi_gpio_initial_sync_list_lock); ++/* ++ * For gpiochips which call acpi_gpiochip_request_interrupts() before late_init ++ * (so builtin drivers) we register the ACPI GpioInt event handlers from a ++ * late_initcall_sync handler, so that other builtin drivers can register their ++ * OpRegions before the event handlers can run. This list contains gpiochips ++ * for which the acpi_gpiochip_request_interrupts() has been deferred. 
++ */ ++static DEFINE_MUTEX(acpi_gpio_deferred_req_irqs_lock); ++static LIST_HEAD(acpi_gpio_deferred_req_irqs_list); ++static bool acpi_gpio_deferred_req_irqs_done; + + static int acpi_gpiochip_find(struct gpio_chip *gc, void *data) + { +@@ -89,21 +97,6 @@ static struct gpio_desc *acpi_get_gpiod(char *path, int pin) + return gpiochip_get_desc(chip, pin); + } + +-static void acpi_gpio_add_to_initial_sync_list(struct acpi_gpio_event *event) +-{ +- mutex_lock(&acpi_gpio_initial_sync_list_lock); +- list_add(&event->initial_sync_list, &acpi_gpio_initial_sync_list); +- mutex_unlock(&acpi_gpio_initial_sync_list_lock); +-} +- +-static void acpi_gpio_del_from_initial_sync_list(struct acpi_gpio_event *event) +-{ +- mutex_lock(&acpi_gpio_initial_sync_list_lock); +- if (!list_empty(&event->initial_sync_list)) +- list_del_init(&event->initial_sync_list); +- mutex_unlock(&acpi_gpio_initial_sync_list_lock); +-} +- + static irqreturn_t acpi_gpio_irq_handler(int irq, void *data) + { + struct acpi_gpio_event *event = data; +@@ -186,7 +179,7 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares, + + gpiod_direction_input(desc); + +- value = gpiod_get_value(desc); ++ value = gpiod_get_value_cansleep(desc); + + ret = gpiochip_lock_as_irq(chip, pin); + if (ret) { +@@ -229,7 +222,6 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares, + event->irq = irq; + event->pin = pin; + event->desc = desc; +- INIT_LIST_HEAD(&event->initial_sync_list); + + ret = request_threaded_irq(event->irq, NULL, handler, irqflags, + "ACPI:Event", event); +@@ -251,10 +243,9 @@ static acpi_status acpi_gpiochip_request_interrupt(struct acpi_resource *ares, + * may refer to OperationRegions from other (builtin) drivers which + * may be probed after us. 
+ */ +- if (handler == acpi_gpio_irq_handler && +- (((irqflags & IRQF_TRIGGER_RISING) && value == 1) || +- ((irqflags & IRQF_TRIGGER_FALLING) && value == 0))) +- acpi_gpio_add_to_initial_sync_list(event); ++ if (((irqflags & IRQF_TRIGGER_RISING) && value == 1) || ++ ((irqflags & IRQF_TRIGGER_FALLING) && value == 0)) ++ handler(event->irq, event); + + return AE_OK; + +@@ -283,6 +274,7 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip) + struct acpi_gpio_chip *acpi_gpio; + acpi_handle handle; + acpi_status status; ++ bool defer; + + if (!chip->parent || !chip->to_irq) + return; +@@ -295,6 +287,16 @@ void acpi_gpiochip_request_interrupts(struct gpio_chip *chip) + if (ACPI_FAILURE(status)) + return; + ++ mutex_lock(&acpi_gpio_deferred_req_irqs_lock); ++ defer = !acpi_gpio_deferred_req_irqs_done; ++ if (defer) ++ list_add(&acpi_gpio->deferred_req_irqs_list_entry, ++ &acpi_gpio_deferred_req_irqs_list); ++ mutex_unlock(&acpi_gpio_deferred_req_irqs_lock); ++ ++ if (defer) ++ return; ++ + acpi_walk_resources(handle, "_AEI", + acpi_gpiochip_request_interrupt, acpi_gpio); + } +@@ -325,11 +327,14 @@ void acpi_gpiochip_free_interrupts(struct gpio_chip *chip) + if (ACPI_FAILURE(status)) + return; + ++ mutex_lock(&acpi_gpio_deferred_req_irqs_lock); ++ if (!list_empty(&acpi_gpio->deferred_req_irqs_list_entry)) ++ list_del_init(&acpi_gpio->deferred_req_irqs_list_entry); ++ mutex_unlock(&acpi_gpio_deferred_req_irqs_lock); ++ + list_for_each_entry_safe_reverse(event, ep, &acpi_gpio->events, node) { + struct gpio_desc *desc; + +- acpi_gpio_del_from_initial_sync_list(event); +- + if (irqd_is_wakeup_set(irq_get_irq_data(event->irq))) + disable_irq_wake(event->irq); + +@@ -1049,6 +1054,7 @@ void acpi_gpiochip_add(struct gpio_chip *chip) + + acpi_gpio->chip = chip; + INIT_LIST_HEAD(&acpi_gpio->events); ++ INIT_LIST_HEAD(&acpi_gpio->deferred_req_irqs_list_entry); + + status = acpi_attach_data(handle, acpi_gpio_chip_dh, acpi_gpio); + if (ACPI_FAILURE(status)) { +@@ -1195,20 
+1201,28 @@ bool acpi_can_fallback_to_crs(struct acpi_device *adev, const char *con_id) + return con_id == NULL; + } + +-/* Sync the initial state of handlers after all builtin drivers have probed */ +-static int acpi_gpio_initial_sync(void) ++/* Run deferred acpi_gpiochip_request_interrupts() */ ++static int acpi_gpio_handle_deferred_request_interrupts(void) + { +- struct acpi_gpio_event *event, *ep; ++ struct acpi_gpio_chip *acpi_gpio, *tmp; ++ ++ mutex_lock(&acpi_gpio_deferred_req_irqs_lock); ++ list_for_each_entry_safe(acpi_gpio, tmp, ++ &acpi_gpio_deferred_req_irqs_list, ++ deferred_req_irqs_list_entry) { ++ acpi_handle handle; + +- mutex_lock(&acpi_gpio_initial_sync_list_lock); +- list_for_each_entry_safe(event, ep, &acpi_gpio_initial_sync_list, +- initial_sync_list) { +- acpi_evaluate_object(event->handle, NULL, NULL, NULL); +- list_del_init(&event->initial_sync_list); ++ handle = ACPI_HANDLE(acpi_gpio->chip->parent); ++ acpi_walk_resources(handle, "_AEI", ++ acpi_gpiochip_request_interrupt, acpi_gpio); ++ ++ list_del_init(&acpi_gpio->deferred_req_irqs_list_entry); + } +- mutex_unlock(&acpi_gpio_initial_sync_list_lock); ++ ++ acpi_gpio_deferred_req_irqs_done = true; ++ mutex_unlock(&acpi_gpio_deferred_req_irqs_lock); + + return 0; + } + /* We must use _sync so that this runs after the first deferred_probe run */ +-late_initcall_sync(acpi_gpio_initial_sync); ++late_initcall_sync(acpi_gpio_handle_deferred_request_interrupts); +diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c +index 53a14ee8ad6d..a704d2e74421 100644 +--- a/drivers/gpio/gpiolib-of.c ++++ b/drivers/gpio/gpiolib-of.c +@@ -31,6 +31,7 @@ static int of_gpiochip_match_node_and_xlate(struct gpio_chip *chip, void *data) + struct of_phandle_args *gpiospec = data; + + return chip->gpiodev->dev.of_node == gpiospec->np && ++ chip->of_xlate && + chip->of_xlate(chip, gpiospec, NULL) >= 0; + } + +diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c +index e11a3bb03820..06dce16e22bb 
100644 +--- a/drivers/gpio/gpiolib.c ++++ b/drivers/gpio/gpiolib.c +@@ -565,7 +565,7 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip) + if (ret) + goto out_free_descs; + lh->descs[i] = desc; +- count = i; ++ count = i + 1; + + if (lflags & GPIOHANDLE_REQUEST_ACTIVE_LOW) + set_bit(FLAG_ACTIVE_LOW, &desc->flags); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +index 7200eea4f918..d9d8964a6e97 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c +@@ -38,6 +38,7 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p, + { + struct drm_gem_object *gobj; + unsigned long size; ++ int r; + + gobj = drm_gem_object_lookup(p->filp, data->handle); + if (gobj == NULL) +@@ -49,20 +50,26 @@ static int amdgpu_cs_user_fence_chunk(struct amdgpu_cs_parser *p, + p->uf_entry.tv.shared = true; + p->uf_entry.user_pages = NULL; + +- size = amdgpu_bo_size(p->uf_entry.robj); +- if (size != PAGE_SIZE || (data->offset + 8) > size) +- return -EINVAL; +- +- *offset = data->offset; +- + drm_gem_object_put_unlocked(gobj); + ++ size = amdgpu_bo_size(p->uf_entry.robj); ++ if (size != PAGE_SIZE || (data->offset + 8) > size) { ++ r = -EINVAL; ++ goto error_unref; ++ } ++ + if (amdgpu_ttm_tt_get_usermm(p->uf_entry.robj->tbo.ttm)) { +- amdgpu_bo_unref(&p->uf_entry.robj); +- return -EINVAL; ++ r = -EINVAL; ++ goto error_unref; + } + ++ *offset = data->offset; ++ + return 0; ++ ++error_unref: ++ amdgpu_bo_unref(&p->uf_entry.robj); ++ return r; + } + + static int amdgpu_cs_parser_init(struct amdgpu_cs_parser *p, void *data) +diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +index ca53b3fba422..3e3e4e907ee5 100644 +--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +@@ -67,6 +67,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = { + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, 
mmSDMA0_RLC1_IB_CNTL, 0x800f0100, 0x00000100), + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000), + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0), ++ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_IB_CNTL, 0x800f0100, 0x00000100), +@@ -78,7 +79,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = { + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC0_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_IB_CNTL, 0x800f0100, 0x00000100), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000), +- SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0) ++ SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_PAGE, 0x000003ff, 0x000003c0), ++ SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_UTCL1_WATERMK, 0xfc000000, 0x00000000) + }; + + static const struct soc15_reg_golden golden_settings_sdma_vg10[] = { +@@ -106,7 +108,8 @@ static const struct soc15_reg_golden golden_settings_sdma_4_1[] = + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC0_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000), + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100), + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0xfffffff7, 0x00403000), +- SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0) ++ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0), ++ SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000) + }; + + static const struct soc15_reg_golden golden_settings_sdma_4_2[] = +diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c 
b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c +index 77779adeef28..f8e866ceda02 100644 +--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c ++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c +@@ -4555,12 +4555,12 @@ static int smu7_get_sclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks) + return -EINVAL; + dep_sclk_table = table_info->vdd_dep_on_sclk; + for (i = 0; i < dep_sclk_table->count; i++) +- clocks->clock[i] = dep_sclk_table->entries[i].clk * 10; ++ clocks->clock[i] = dep_sclk_table->entries[i].clk; + clocks->count = dep_sclk_table->count; + } else if (hwmgr->pp_table_version == PP_TABLE_V0) { + sclk_table = hwmgr->dyn_state.vddc_dependency_on_sclk; + for (i = 0; i < sclk_table->count; i++) +- clocks->clock[i] = sclk_table->entries[i].clk * 10; ++ clocks->clock[i] = sclk_table->entries[i].clk; + clocks->count = sclk_table->count; + } + +@@ -4592,7 +4592,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks) + return -EINVAL; + dep_mclk_table = table_info->vdd_dep_on_mclk; + for (i = 0; i < dep_mclk_table->count; i++) { +- clocks->clock[i] = dep_mclk_table->entries[i].clk * 10; ++ clocks->clock[i] = dep_mclk_table->entries[i].clk; + clocks->latency[i] = smu7_get_mem_latency(hwmgr, + dep_mclk_table->entries[i].clk); + } +@@ -4600,7 +4600,7 @@ static int smu7_get_mclks(struct pp_hwmgr *hwmgr, struct amd_pp_clocks *clocks) + } else if (hwmgr->pp_table_version == PP_TABLE_V0) { + mclk_table = hwmgr->dyn_state.vddc_dependency_on_mclk; + for (i = 0; i < mclk_table->count; i++) +- clocks->clock[i] = mclk_table->entries[i].clk * 10; ++ clocks->clock[i] = mclk_table->entries[i].clk; + clocks->count = mclk_table->count; + } + return 0; +diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c +index 0adfc5392cd3..617557bd8c24 100644 +--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c ++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu8_hwmgr.c +@@ -1605,17 
+1605,17 @@ static int smu8_get_clock_by_type(struct pp_hwmgr *hwmgr, enum amd_pp_clock_type + switch (type) { + case amd_pp_disp_clock: + for (i = 0; i < clocks->count; i++) +- clocks->clock[i] = data->sys_info.display_clock[i] * 10; ++ clocks->clock[i] = data->sys_info.display_clock[i]; + break; + case amd_pp_sys_clock: + table = hwmgr->dyn_state.vddc_dependency_on_sclk; + for (i = 0; i < clocks->count; i++) +- clocks->clock[i] = table->entries[i].clk * 10; ++ clocks->clock[i] = table->entries[i].clk; + break; + case amd_pp_mem_clock: + clocks->count = SMU8_NUM_NBPMEMORYCLOCK; + for (i = 0; i < clocks->count; i++) +- clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i] * 10; ++ clocks->clock[i] = data->sys_info.nbp_memory_clock[clocks->count - 1 - i]; + break; + default: + return -1; +diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c +index c2ebe5da34d0..89225adaa60a 100644 +--- a/drivers/gpu/drm/nouveau/nouveau_drm.c ++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c +@@ -230,7 +230,7 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname, + mutex_unlock(&drm->master.lock); + } + if (ret) { +- NV_ERROR(drm, "Client allocation failed: %d\n", ret); ++ NV_PRINTK(err, cli, "Client allocation failed: %d\n", ret); + goto done; + } + +@@ -240,37 +240,37 @@ nouveau_cli_init(struct nouveau_drm *drm, const char *sname, + }, sizeof(struct nv_device_v0), + &cli->device); + if (ret) { +- NV_ERROR(drm, "Device allocation failed: %d\n", ret); ++ NV_PRINTK(err, cli, "Device allocation failed: %d\n", ret); + goto done; + } + + ret = nvif_mclass(&cli->device.object, mmus); + if (ret < 0) { +- NV_ERROR(drm, "No supported MMU class\n"); ++ NV_PRINTK(err, cli, "No supported MMU class\n"); + goto done; + } + + ret = nvif_mmu_init(&cli->device.object, mmus[ret].oclass, &cli->mmu); + if (ret) { +- NV_ERROR(drm, "MMU allocation failed: %d\n", ret); ++ NV_PRINTK(err, cli, "MMU allocation failed: %d\n", ret); + goto done; + } 
+ + ret = nvif_mclass(&cli->mmu.object, vmms); + if (ret < 0) { +- NV_ERROR(drm, "No supported VMM class\n"); ++ NV_PRINTK(err, cli, "No supported VMM class\n"); + goto done; + } + + ret = nouveau_vmm_init(cli, vmms[ret].oclass, &cli->vmm); + if (ret) { +- NV_ERROR(drm, "VMM allocation failed: %d\n", ret); ++ NV_PRINTK(err, cli, "VMM allocation failed: %d\n", ret); + goto done; + } + + ret = nvif_mclass(&cli->mmu.object, mems); + if (ret < 0) { +- NV_ERROR(drm, "No supported MEM class\n"); ++ NV_PRINTK(err, cli, "No supported MEM class\n"); + goto done; + } + +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c +index 32fa94a9773f..cbd33e87b799 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/base.c +@@ -275,6 +275,7 @@ nvkm_disp_oneinit(struct nvkm_engine *engine) + struct nvkm_outp *outp, *outt, *pair; + struct nvkm_conn *conn; + struct nvkm_head *head; ++ struct nvkm_ior *ior; + struct nvbios_connE connE; + struct dcb_output dcbE; + u8 hpd = 0, ver, hdr; +@@ -399,6 +400,19 @@ nvkm_disp_oneinit(struct nvkm_engine *engine) + return ret; + } + ++ /* Enforce identity-mapped SOR assignment for panels, which have ++ * certain bits (ie. backlight controls) wired to a specific SOR. 
++ */ ++ list_for_each_entry(outp, &disp->outp, head) { ++ if (outp->conn->info.type == DCB_CONNECTOR_LVDS || ++ outp->conn->info.type == DCB_CONNECTOR_eDP) { ++ ior = nvkm_ior_find(disp, SOR, ffs(outp->info.or) - 1); ++ if (!WARN_ON(!ior)) ++ ior->identity = true; ++ outp->identity = true; ++ } ++ } ++ + i = 0; + list_for_each_entry(head, &disp->head, head) + i = max(i, head->id + 1); +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c +index 7c5bed29ffef..6160a6158cf2 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/dp.c +@@ -412,14 +412,10 @@ nvkm_dp_train(struct nvkm_dp *dp, u32 dataKBps) + } + + static void +-nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior) ++nvkm_dp_disable(struct nvkm_outp *outp, struct nvkm_ior *ior) + { + struct nvkm_dp *dp = nvkm_dp(outp); + +- /* Prevent link from being retrained if sink sends an IRQ. */ +- atomic_set(&dp->lt.done, 0); +- ior->dp.nr = 0; +- + /* Execute DisableLT script from DP Info Table. */ + nvbios_init(&ior->disp->engine.subdev, dp->info.script[4], + init.outp = &dp->outp.info; +@@ -428,6 +424,16 @@ nvkm_dp_release(struct nvkm_outp *outp, struct nvkm_ior *ior) + ); + } + ++static void ++nvkm_dp_release(struct nvkm_outp *outp) ++{ ++ struct nvkm_dp *dp = nvkm_dp(outp); ++ ++ /* Prevent link from being retrained if sink sends an IRQ. 
*/ ++ atomic_set(&dp->lt.done, 0); ++ dp->outp.ior->dp.nr = 0; ++} ++ + static int + nvkm_dp_acquire(struct nvkm_outp *outp) + { +@@ -576,6 +582,7 @@ nvkm_dp_func = { + .fini = nvkm_dp_fini, + .acquire = nvkm_dp_acquire, + .release = nvkm_dp_release, ++ .disable = nvkm_dp_disable, + }; + + static int +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h +index e0b4e0c5704e..19911211a12a 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/ior.h +@@ -16,6 +16,7 @@ struct nvkm_ior { + char name[8]; + + struct list_head head; ++ bool identity; + + struct nvkm_ior_state { + struct nvkm_outp *outp; +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c +index f89c7b977aa5..def005dd5fda 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c +@@ -501,11 +501,11 @@ nv50_disp_super_2_0(struct nv50_disp *disp, struct nvkm_head *head) + nv50_disp_super_ied_off(head, ior, 2); + + /* If we're shutting down the OR's only active head, execute +- * the output path's release function. ++ * the output path's disable function. 
+ */ + if (ior->arm.head == (1 << head->id)) { +- if ((outp = ior->arm.outp) && outp->func->release) +- outp->func->release(outp, ior); ++ if ((outp = ior->arm.outp) && outp->func->disable) ++ outp->func->disable(outp, ior); + } + } + +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c +index be9e7f8c3b23..44df835e5473 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.c +@@ -93,6 +93,8 @@ nvkm_outp_release(struct nvkm_outp *outp, u8 user) + if (ior) { + outp->acquired &= ~user; + if (!outp->acquired) { ++ if (outp->func->release && outp->ior) ++ outp->func->release(outp); + outp->ior->asy.outp = NULL; + outp->ior = NULL; + } +@@ -127,17 +129,26 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user) + if (proto == UNKNOWN) + return -ENOSYS; + ++ /* Deal with panels requiring identity-mapped SOR assignment. */ ++ if (outp->identity) { ++ ior = nvkm_ior_find(outp->disp, SOR, ffs(outp->info.or) - 1); ++ if (WARN_ON(!ior)) ++ return -ENOSPC; ++ return nvkm_outp_acquire_ior(outp, user, ior); ++ } ++ + /* First preference is to reuse the OR that is currently armed + * on HW, if any, in order to prevent unnecessary switching. + */ + list_for_each_entry(ior, &outp->disp->ior, head) { +- if (!ior->asy.outp && ior->arm.outp == outp) ++ if (!ior->identity && !ior->asy.outp && ior->arm.outp == outp) + return nvkm_outp_acquire_ior(outp, user, ior); + } + + /* Failing that, a completely unused OR is the next best thing. */ + list_for_each_entry(ior, &outp->disp->ior, head) { +- if (!ior->asy.outp && ior->type == type && !ior->arm.outp && ++ if (!ior->identity && ++ !ior->asy.outp && ior->type == type && !ior->arm.outp && + (ior->func->route.set || ior->id == __ffs(outp->info.or))) + return nvkm_outp_acquire_ior(outp, user, ior); + } +@@ -146,7 +157,7 @@ nvkm_outp_acquire(struct nvkm_outp *outp, u8 user) + * but will be released during the next modeset. 
+ */ + list_for_each_entry(ior, &outp->disp->ior, head) { +- if (!ior->asy.outp && ior->type == type && ++ if (!ior->identity && !ior->asy.outp && ior->type == type && + (ior->func->route.set || ior->id == __ffs(outp->info.or))) + return nvkm_outp_acquire_ior(outp, user, ior); + } +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h +index ea84d7d5741a..3f932fb39c94 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/outp.h +@@ -17,6 +17,7 @@ struct nvkm_outp { + + struct list_head head; + struct nvkm_conn *conn; ++ bool identity; + + /* Assembly state. */ + #define NVKM_OUTP_PRIV 1 +@@ -41,7 +42,8 @@ struct nvkm_outp_func { + void (*init)(struct nvkm_outp *); + void (*fini)(struct nvkm_outp *); + int (*acquire)(struct nvkm_outp *); +- void (*release)(struct nvkm_outp *, struct nvkm_ior *); ++ void (*release)(struct nvkm_outp *); ++ void (*disable)(struct nvkm_outp *, struct nvkm_ior *); + }; + + #define OUTP_MSG(o,l,f,a...) do { \ +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c +index b80618e35491..d65959ef0564 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/devinit/gm200.c +@@ -158,7 +158,8 @@ gm200_devinit_post(struct nvkm_devinit *base, bool post) + } + + /* load and execute some other ucode image (bios therm?) 
*/ +- return pmu_load(init, 0x01, post, NULL, NULL); ++ pmu_load(init, 0x01, post, NULL, NULL); ++ return 0; + } + + static const struct nvkm_devinit_func +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c +index de269eb482dd..7459def78d50 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c +@@ -1423,7 +1423,7 @@ nvkm_vmm_get(struct nvkm_vmm *vmm, u8 page, u64 size, struct nvkm_vma **pvma) + void + nvkm_vmm_part(struct nvkm_vmm *vmm, struct nvkm_memory *inst) + { +- if (vmm->func->part && inst) { ++ if (inst && vmm->func->part) { + mutex_lock(&vmm->mutex); + vmm->func->part(vmm, inst); + mutex_unlock(&vmm->mutex); +diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c +index 25b7bd56ae11..1cb41992aaa1 100644 +--- a/drivers/hid/hid-apple.c ++++ b/drivers/hid/hid-apple.c +@@ -335,7 +335,8 @@ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi, + struct hid_field *field, struct hid_usage *usage, + unsigned long **bit, int *max) + { +- if (usage->hid == (HID_UP_CUSTOM | 0x0003)) { ++ if (usage->hid == (HID_UP_CUSTOM | 0x0003) || ++ usage->hid == (HID_UP_MSVENDOR | 0x0003)) { + /* The fn key on Apple USB keyboards */ + set_bit(EV_REP, hi->input->evbit); + hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_FN); +@@ -472,6 +473,12 @@ static const struct hid_device_id apple_devices[] = { + .driver_data = APPLE_NUMLOCK_EMULATION | APPLE_HAS_FN }, + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI), + .driver_data = APPLE_HAS_FN }, ++ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI), ++ .driver_data = APPLE_HAS_FN }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI), ++ .driver_data = APPLE_HAS_FN }, ++ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI), ++ .driver_data = 
APPLE_HAS_FN }, + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ANSI), + .driver_data = APPLE_HAS_FN }, + { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_WELLSPRING_ISO), +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index e80bcd71fe1e..eee6b79fb131 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -88,6 +88,7 @@ + #define USB_DEVICE_ID_ANTON_TOUCH_PAD 0x3101 + + #define USB_VENDOR_ID_APPLE 0x05ac ++#define BT_VENDOR_ID_APPLE 0x004c + #define USB_DEVICE_ID_APPLE_MIGHTYMOUSE 0x0304 + #define USB_DEVICE_ID_APPLE_MAGICMOUSE 0x030d + #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD 0x030e +@@ -157,6 +158,7 @@ + #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO 0x0256 + #define USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_JIS 0x0257 + #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_ANSI 0x0267 ++#define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_ANSI 0x026c + #define USB_DEVICE_ID_APPLE_WELLSPRING8_ANSI 0x0290 + #define USB_DEVICE_ID_APPLE_WELLSPRING8_ISO 0x0291 + #define USB_DEVICE_ID_APPLE_WELLSPRING8_JIS 0x0292 +@@ -526,10 +528,6 @@ + #define I2C_VENDOR_ID_HANTICK 0x0911 + #define I2C_PRODUCT_ID_HANTICK_5288 0x5288 + +-#define I2C_VENDOR_ID_RAYD 0x2386 +-#define I2C_PRODUCT_ID_RAYD_3118 0x3118 +-#define I2C_PRODUCT_ID_RAYD_4B33 0x4B33 +- + #define USB_VENDOR_ID_HANWANG 0x0b57 + #define USB_DEVICE_ID_HANWANG_TABLET_FIRST 0x5000 + #define USB_DEVICE_ID_HANWANG_TABLET_LAST 0x8fff +@@ -949,6 +947,7 @@ + #define USB_DEVICE_ID_SAITEK_RUMBLEPAD 0xff17 + #define USB_DEVICE_ID_SAITEK_PS1000 0x0621 + #define USB_DEVICE_ID_SAITEK_RAT7_OLD 0x0ccb ++#define USB_DEVICE_ID_SAITEK_RAT7_CONTAGION 0x0ccd + #define USB_DEVICE_ID_SAITEK_RAT7 0x0cd7 + #define USB_DEVICE_ID_SAITEK_RAT9 0x0cfa + #define USB_DEVICE_ID_SAITEK_MMO7 0x0cd0 +diff --git a/drivers/hid/hid-saitek.c b/drivers/hid/hid-saitek.c +index 39e642686ff0..683861f324e3 100644 +--- a/drivers/hid/hid-saitek.c ++++ b/drivers/hid/hid-saitek.c +@@ -183,6 +183,8 @@ static const struct 
hid_device_id saitek_devices[] = { + .driver_data = SAITEK_RELEASE_MODE_RAT7 }, + { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7), + .driver_data = SAITEK_RELEASE_MODE_RAT7 }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT7_CONTAGION), ++ .driver_data = SAITEK_RELEASE_MODE_RAT7 }, + { HID_USB_DEVICE(USB_VENDOR_ID_SAITEK, USB_DEVICE_ID_SAITEK_RAT9), + .driver_data = SAITEK_RELEASE_MODE_RAT7 }, + { HID_USB_DEVICE(USB_VENDOR_ID_MADCATZ, USB_DEVICE_ID_MADCATZ_RAT9), +diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c +index 50af72baa5ca..2b63487057c2 100644 +--- a/drivers/hid/hid-sensor-hub.c ++++ b/drivers/hid/hid-sensor-hub.c +@@ -579,6 +579,28 @@ void sensor_hub_device_close(struct hid_sensor_hub_device *hsdev) + } + EXPORT_SYMBOL_GPL(sensor_hub_device_close); + ++static __u8 *sensor_hub_report_fixup(struct hid_device *hdev, __u8 *rdesc, ++ unsigned int *rsize) ++{ ++ /* ++ * Checks if the report descriptor of Thinkpad Helix 2 has a logical ++ * minimum for magnetic flux axis greater than the maximum. 
++ */ ++ if (hdev->product == USB_DEVICE_ID_TEXAS_INSTRUMENTS_LENOVO_YOGA && ++ *rsize == 2558 && rdesc[913] == 0x17 && rdesc[914] == 0x40 && ++ rdesc[915] == 0x81 && rdesc[916] == 0x08 && ++ rdesc[917] == 0x00 && rdesc[918] == 0x27 && ++ rdesc[921] == 0x07 && rdesc[922] == 0x00) { ++ /* Sets negative logical minimum for mag x, y and z */ ++ rdesc[914] = rdesc[935] = rdesc[956] = 0xc0; ++ rdesc[915] = rdesc[936] = rdesc[957] = 0x7e; ++ rdesc[916] = rdesc[937] = rdesc[958] = 0xf7; ++ rdesc[917] = rdesc[938] = rdesc[959] = 0xff; ++ } ++ ++ return rdesc; ++} ++ + static int sensor_hub_probe(struct hid_device *hdev, + const struct hid_device_id *id) + { +@@ -743,6 +765,7 @@ static struct hid_driver sensor_hub_driver = { + .probe = sensor_hub_probe, + .remove = sensor_hub_remove, + .raw_event = sensor_hub_raw_event, ++ .report_fixup = sensor_hub_report_fixup, + #ifdef CONFIG_PM + .suspend = sensor_hub_suspend, + .resume = sensor_hub_resume, +diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c +index 64773433b947..37013b58098c 100644 +--- a/drivers/hid/i2c-hid/i2c-hid.c ++++ b/drivers/hid/i2c-hid/i2c-hid.c +@@ -48,6 +48,7 @@ + #define I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV BIT(0) + #define I2C_HID_QUIRK_NO_IRQ_AFTER_RESET BIT(1) + #define I2C_HID_QUIRK_RESEND_REPORT_DESCR BIT(2) ++#define I2C_HID_QUIRK_NO_RUNTIME_PM BIT(3) + + /* flags */ + #define I2C_HID_STARTED 0 +@@ -169,13 +170,10 @@ static const struct i2c_hid_quirks { + { USB_VENDOR_ID_WEIDA, USB_DEVICE_ID_WEIDA_8755, + I2C_HID_QUIRK_SET_PWR_WAKEUP_DEV }, + { I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288, +- I2C_HID_QUIRK_NO_IRQ_AFTER_RESET }, +- { I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_3118, +- I2C_HID_QUIRK_RESEND_REPORT_DESCR }, ++ I2C_HID_QUIRK_NO_IRQ_AFTER_RESET | ++ I2C_HID_QUIRK_NO_RUNTIME_PM }, + { USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH, + I2C_HID_QUIRK_RESEND_REPORT_DESCR }, +- { I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_4B33, +- I2C_HID_QUIRK_RESEND_REPORT_DESCR }, + { 
0, 0 } + }; + +@@ -1110,7 +1108,9 @@ static int i2c_hid_probe(struct i2c_client *client, + goto err_mem_free; + } + +- pm_runtime_put(&client->dev); ++ if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM)) ++ pm_runtime_put(&client->dev); ++ + return 0; + + err_mem_free: +@@ -1136,7 +1136,8 @@ static int i2c_hid_remove(struct i2c_client *client) + struct i2c_hid *ihid = i2c_get_clientdata(client); + struct hid_device *hid; + +- pm_runtime_get_sync(&client->dev); ++ if (!(ihid->quirks & I2C_HID_QUIRK_NO_RUNTIME_PM)) ++ pm_runtime_get_sync(&client->dev); + pm_runtime_disable(&client->dev); + pm_runtime_set_suspended(&client->dev); + pm_runtime_put_noidle(&client->dev); +@@ -1237,11 +1238,16 @@ static int i2c_hid_resume(struct device *dev) + pm_runtime_enable(dev); + + enable_irq(client->irq); +- ret = i2c_hid_hwreset(client); ++ ++ /* Instead of resetting device, simply powers the device on. This ++ * solves "incomplete reports" on Raydium devices 2386:3118 and ++ * 2386:4B33 ++ */ ++ ret = i2c_hid_set_power(client, I2C_HID_PWR_ON); + if (ret) + return ret; + +- /* RAYDIUM device (2386:3118) need to re-send report descr cmd ++ /* Some devices need to re-send report descr cmd + * after resume, after this it will be back normal. + * otherwise it issues too many incomplete reports. 
+ */ +diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h b/drivers/hid/intel-ish-hid/ipc/hw-ish.h +index 97869b7410eb..da133716bed0 100644 +--- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h ++++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h +@@ -29,6 +29,7 @@ + #define CNL_Ax_DEVICE_ID 0x9DFC + #define GLK_Ax_DEVICE_ID 0x31A2 + #define CNL_H_DEVICE_ID 0xA37C ++#define SPT_H_DEVICE_ID 0xA135 + + #define REVISION_ID_CHT_A0 0x6 + #define REVISION_ID_CHT_Ax_SI 0x0 +diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c +index a2c53ea3b5ed..c7b8eb32b1ea 100644 +--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c ++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c +@@ -38,6 +38,7 @@ static const struct pci_device_id ish_pci_tbl[] = { + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_Ax_DEVICE_ID)}, + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, GLK_Ax_DEVICE_ID)}, + {PCI_DEVICE(PCI_VENDOR_ID_INTEL, CNL_H_DEVICE_ID)}, ++ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, SPT_H_DEVICE_ID)}, + {0, } + }; + MODULE_DEVICE_TABLE(pci, ish_pci_tbl); +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c +index ced041899456..f4d08c8ac7f8 100644 +--- a/drivers/hv/connection.c ++++ b/drivers/hv/connection.c +@@ -76,6 +76,7 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, + __u32 version) + { + int ret = 0; ++ unsigned int cur_cpu; + struct vmbus_channel_initiate_contact *msg; + unsigned long flags; + +@@ -118,9 +119,10 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, + * the CPU attempting to connect may not be CPU 0. 
+ */ + if (version >= VERSION_WIN8_1) { +- msg->target_vcpu = +- hv_cpu_number_to_vp_number(smp_processor_id()); +- vmbus_connection.connect_cpu = smp_processor_id(); ++ cur_cpu = get_cpu(); ++ msg->target_vcpu = hv_cpu_number_to_vp_number(cur_cpu); ++ vmbus_connection.connect_cpu = cur_cpu; ++ put_cpu(); + } else { + msg->target_vcpu = 0; + vmbus_connection.connect_cpu = 0; +diff --git a/drivers/i2c/busses/i2c-uniphier-f.c b/drivers/i2c/busses/i2c-uniphier-f.c +index 9918bdd81619..a403e8579b65 100644 +--- a/drivers/i2c/busses/i2c-uniphier-f.c ++++ b/drivers/i2c/busses/i2c-uniphier-f.c +@@ -401,11 +401,8 @@ static int uniphier_fi2c_master_xfer(struct i2c_adapter *adap, + return ret; + + for (msg = msgs; msg < emsg; msg++) { +- /* If next message is read, skip the stop condition */ +- bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD); +- /* but, force it if I2C_M_STOP is set */ +- if (msg->flags & I2C_M_STOP) +- stop = true; ++ /* Emit STOP if it is the last message or I2C_M_STOP is set. */ ++ bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP); + + ret = uniphier_fi2c_master_xfer_one(adap, msg, stop); + if (ret) +diff --git a/drivers/i2c/busses/i2c-uniphier.c b/drivers/i2c/busses/i2c-uniphier.c +index bb181b088291..454f914ae66d 100644 +--- a/drivers/i2c/busses/i2c-uniphier.c ++++ b/drivers/i2c/busses/i2c-uniphier.c +@@ -248,11 +248,8 @@ static int uniphier_i2c_master_xfer(struct i2c_adapter *adap, + return ret; + + for (msg = msgs; msg < emsg; msg++) { +- /* If next message is read, skip the stop condition */ +- bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD); +- /* but, force it if I2C_M_STOP is set */ +- if (msg->flags & I2C_M_STOP) +- stop = true; ++ /* Emit STOP if it is the last message or I2C_M_STOP is set. 
*/ ++ bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP); + + ret = uniphier_i2c_master_xfer_one(adap, msg, stop); + if (ret) +diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c +index 4994f920a836..8653182be818 100644 +--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c ++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c +@@ -187,12 +187,15 @@ static int st_lsm6dsx_set_fifo_odr(struct st_lsm6dsx_sensor *sensor, + + int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark) + { +- u16 fifo_watermark = ~0, cur_watermark, sip = 0, fifo_th_mask; ++ u16 fifo_watermark = ~0, cur_watermark, fifo_th_mask; + struct st_lsm6dsx_hw *hw = sensor->hw; + struct st_lsm6dsx_sensor *cur_sensor; + int i, err, data; + __le16 wdata; + ++ if (!hw->sip) ++ return 0; ++ + for (i = 0; i < ST_LSM6DSX_ID_MAX; i++) { + cur_sensor = iio_priv(hw->iio_devs[i]); + +@@ -203,14 +206,10 @@ int st_lsm6dsx_update_watermark(struct st_lsm6dsx_sensor *sensor, u16 watermark) + : cur_sensor->watermark; + + fifo_watermark = min_t(u16, fifo_watermark, cur_watermark); +- sip += cur_sensor->sip; + } + +- if (!sip) +- return 0; +- +- fifo_watermark = max_t(u16, fifo_watermark, sip); +- fifo_watermark = (fifo_watermark / sip) * sip; ++ fifo_watermark = max_t(u16, fifo_watermark, hw->sip); ++ fifo_watermark = (fifo_watermark / hw->sip) * hw->sip; + fifo_watermark = fifo_watermark * hw->settings->fifo_ops.th_wl; + + err = regmap_read(hw->regmap, hw->settings->fifo_ops.fifo_th.addr + 1, +diff --git a/drivers/iio/temperature/maxim_thermocouple.c b/drivers/iio/temperature/maxim_thermocouple.c +index 54e383231d1e..c31b9633f32d 100644 +--- a/drivers/iio/temperature/maxim_thermocouple.c ++++ b/drivers/iio/temperature/maxim_thermocouple.c +@@ -258,7 +258,6 @@ static int maxim_thermocouple_remove(struct spi_device *spi) + static const struct spi_device_id maxim_thermocouple_id[] = { + {"max6675", MAX6675}, + {"max31855", MAX31855}, 
+- {"max31856", MAX31855}, + {}, + }; + MODULE_DEVICE_TABLE(spi, maxim_thermocouple_id); +diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c +index ec8fb289621f..5f437d1570fb 100644 +--- a/drivers/infiniband/core/ucma.c ++++ b/drivers/infiniband/core/ucma.c +@@ -124,6 +124,8 @@ static DEFINE_MUTEX(mut); + static DEFINE_IDR(ctx_idr); + static DEFINE_IDR(multicast_idr); + ++static const struct file_operations ucma_fops; ++ + static inline struct ucma_context *_ucma_find_context(int id, + struct ucma_file *file) + { +@@ -1581,6 +1583,10 @@ static ssize_t ucma_migrate_id(struct ucma_file *new_file, + f = fdget(cmd.fd); + if (!f.file) + return -ENOENT; ++ if (f.file->f_op != &ucma_fops) { ++ ret = -EINVAL; ++ goto file_put; ++ } + + /* Validate current fd and prevent destruction of id. */ + ctx = ucma_get_ctx(f.file->private_data, cmd.id); +diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c +index a76e206704d4..cb1e69bdad0b 100644 +--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c ++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c +@@ -844,6 +844,8 @@ int bnxt_re_destroy_qp(struct ib_qp *ib_qp) + "Failed to destroy Shadow QP"); + return rc; + } ++ bnxt_qplib_free_qp_res(&rdev->qplib_res, ++ &rdev->qp1_sqp->qplib_qp); + mutex_lock(&rdev->qp_lock); + list_del(&rdev->qp1_sqp->list); + atomic_dec(&rdev->qp_count); +diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +index e426b990c1dd..6ad0d46ab879 100644 +--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c ++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c +@@ -196,7 +196,7 @@ static int bnxt_qplib_alloc_qp_hdr_buf(struct bnxt_qplib_res *res, + struct bnxt_qplib_qp *qp) + { + struct bnxt_qplib_q *rq = &qp->rq; +- struct bnxt_qplib_q *sq = &qp->rq; ++ struct bnxt_qplib_q *sq = &qp->sq; + int rc = 0; + + if (qp->sq_hdr_buf_size && sq->hwq.max_elements) { +diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c 
+index d77c97fe4a23..c53363443280 100644 +--- a/drivers/iommu/amd_iommu.c ++++ b/drivers/iommu/amd_iommu.c +@@ -3073,7 +3073,7 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom, + return 0; + + offset_mask = pte_pgsize - 1; +- __pte = *pte & PM_ADDR_MASK; ++ __pte = __sme_clr(*pte & PM_ADDR_MASK); + + return (__pte & ~offset_mask) | (iova & offset_mask); + } +diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c +index 75df4c9d8b54..1c7c1250bf75 100644 +--- a/drivers/md/dm-raid.c ++++ b/drivers/md/dm-raid.c +@@ -29,9 +29,6 @@ + */ + #define MIN_RAID456_JOURNAL_SPACE (4*2048) + +-/* Global list of all raid sets */ +-static LIST_HEAD(raid_sets); +- + static bool devices_handle_discard_safely = false; + + /* +@@ -227,7 +224,6 @@ struct rs_layout { + + struct raid_set { + struct dm_target *ti; +- struct list_head list; + + uint32_t stripe_cache_entries; + unsigned long ctr_flags; +@@ -273,19 +269,6 @@ static void rs_config_restore(struct raid_set *rs, struct rs_layout *l) + mddev->new_chunk_sectors = l->new_chunk_sectors; + } + +-/* Find any raid_set in active slot for @rs on global list */ +-static struct raid_set *rs_find_active(struct raid_set *rs) +-{ +- struct raid_set *r; +- struct mapped_device *md = dm_table_get_md(rs->ti->table); +- +- list_for_each_entry(r, &raid_sets, list) +- if (r != rs && dm_table_get_md(r->ti->table) == md) +- return r; +- +- return NULL; +-} +- + /* raid10 algorithms (i.e. formats) */ + #define ALGORITHM_RAID10_DEFAULT 0 + #define ALGORITHM_RAID10_NEAR 1 +@@ -764,7 +747,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r + + mddev_init(&rs->md); + +- INIT_LIST_HEAD(&rs->list); + rs->raid_disks = raid_devs; + rs->delta_disks = 0; + +@@ -782,9 +764,6 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r + for (i = 0; i < raid_devs; i++) + md_rdev_init(&rs->dev[i].rdev); + +- /* Add @rs to global list. 
*/ +- list_add(&rs->list, &raid_sets); +- + /* + * Remaining items to be initialized by further RAID params: + * rs->md.persistent +@@ -797,7 +776,7 @@ static struct raid_set *raid_set_alloc(struct dm_target *ti, struct raid_type *r + return rs; + } + +-/* Free all @rs allocations and remove it from global list. */ ++/* Free all @rs allocations */ + static void raid_set_free(struct raid_set *rs) + { + int i; +@@ -815,8 +794,6 @@ static void raid_set_free(struct raid_set *rs) + dm_put_device(rs->ti, rs->dev[i].data_dev); + } + +- list_del(&rs->list); +- + kfree(rs); + } + +@@ -3149,6 +3126,11 @@ static int raid_ctr(struct dm_target *ti, unsigned int argc, char **argv) + set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags); + rs_set_new(rs); + } else if (rs_is_recovering(rs)) { ++ /* Rebuild particular devices */ ++ if (test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) { ++ set_bit(RT_FLAG_UPDATE_SBS, &rs->runtime_flags); ++ rs_setup_recovery(rs, MaxSector); ++ } + /* A recovering raid set may be resized */ + ; /* skip setup rs */ + } else if (rs_is_reshaping(rs)) { +@@ -3350,32 +3332,53 @@ static int raid_map(struct dm_target *ti, struct bio *bio) + return DM_MAPIO_SUBMITTED; + } + +-/* Return string describing the current sync action of @mddev */ +-static const char *decipher_sync_action(struct mddev *mddev, unsigned long recovery) ++/* Return sync state string for @state */ ++enum sync_state { st_frozen, st_reshape, st_resync, st_check, st_repair, st_recover, st_idle }; ++static const char *sync_str(enum sync_state state) ++{ ++ /* Has to be in above sync_state order! */ ++ static const char *sync_strs[] = { ++ "frozen", ++ "reshape", ++ "resync", ++ "check", ++ "repair", ++ "recover", ++ "idle" ++ }; ++ ++ return __within_range(state, 0, ARRAY_SIZE(sync_strs) - 1) ? 
sync_strs[state] : "undef"; ++}; ++ ++/* Return enum sync_state for @mddev derived from @recovery flags */ ++static const enum sync_state decipher_sync_action(struct mddev *mddev, unsigned long recovery) + { + if (test_bit(MD_RECOVERY_FROZEN, &recovery)) +- return "frozen"; ++ return st_frozen; + +- /* The MD sync thread can be done with io but still be running */ ++ /* The MD sync thread can be done with io or be interrupted but still be running */ + if (!test_bit(MD_RECOVERY_DONE, &recovery) && + (test_bit(MD_RECOVERY_RUNNING, &recovery) || + (!mddev->ro && test_bit(MD_RECOVERY_NEEDED, &recovery)))) { + if (test_bit(MD_RECOVERY_RESHAPE, &recovery)) +- return "reshape"; ++ return st_reshape; + + if (test_bit(MD_RECOVERY_SYNC, &recovery)) { + if (!test_bit(MD_RECOVERY_REQUESTED, &recovery)) +- return "resync"; +- else if (test_bit(MD_RECOVERY_CHECK, &recovery)) +- return "check"; +- return "repair"; ++ return st_resync; ++ if (test_bit(MD_RECOVERY_CHECK, &recovery)) ++ return st_check; ++ return st_repair; + } + + if (test_bit(MD_RECOVERY_RECOVER, &recovery)) +- return "recover"; ++ return st_recover; ++ ++ if (mddev->reshape_position != MaxSector) ++ return st_reshape; + } + +- return "idle"; ++ return st_idle; + } + + /* +@@ -3409,6 +3412,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery, + sector_t resync_max_sectors) + { + sector_t r; ++ enum sync_state state; + struct mddev *mddev = &rs->md; + + clear_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags); +@@ -3419,20 +3423,14 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery, + set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags); + + } else { +- if (!test_bit(__CTR_FLAG_NOSYNC, &rs->ctr_flags) && +- !test_bit(MD_RECOVERY_INTR, &recovery) && +- (test_bit(MD_RECOVERY_NEEDED, &recovery) || +- test_bit(MD_RECOVERY_RESHAPE, &recovery) || +- test_bit(MD_RECOVERY_RUNNING, &recovery))) +- r = mddev->curr_resync_completed; +- else ++ state = decipher_sync_action(mddev, 
recovery); ++ ++ if (state == st_idle && !test_bit(MD_RECOVERY_INTR, &recovery)) + r = mddev->recovery_cp; ++ else ++ r = mddev->curr_resync_completed; + +- if (r >= resync_max_sectors && +- (!test_bit(MD_RECOVERY_REQUESTED, &recovery) || +- (!test_bit(MD_RECOVERY_FROZEN, &recovery) && +- !test_bit(MD_RECOVERY_NEEDED, &recovery) && +- !test_bit(MD_RECOVERY_RUNNING, &recovery)))) { ++ if (state == st_idle && r >= resync_max_sectors) { + /* + * Sync complete. + */ +@@ -3440,24 +3438,20 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery, + if (test_bit(MD_RECOVERY_RECOVER, &recovery)) + set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags); + +- } else if (test_bit(MD_RECOVERY_RECOVER, &recovery)) { ++ } else if (state == st_recover) + /* + * In case we are recovering, the array is not in sync + * and health chars should show the recovering legs. + */ + ; +- +- } else if (test_bit(MD_RECOVERY_SYNC, &recovery) && +- !test_bit(MD_RECOVERY_REQUESTED, &recovery)) { ++ else if (state == st_resync) + /* + * If "resync" is occurring, the raid set + * is or may be out of sync hence the health + * characters shall be 'a'. 
+ */ + set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags); +- +- } else if (test_bit(MD_RECOVERY_RESHAPE, &recovery) && +- !test_bit(MD_RECOVERY_REQUESTED, &recovery)) { ++ else if (state == st_reshape) + /* + * If "reshape" is occurring, the raid set + * is or may be out of sync hence the health +@@ -3465,7 +3459,7 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery, + */ + set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags); + +- } else if (test_bit(MD_RECOVERY_REQUESTED, &recovery)) { ++ else if (state == st_check || state == st_repair) + /* + * If "check" or "repair" is occurring, the raid set has + * undergone an initial sync and the health characters +@@ -3473,12 +3467,12 @@ static sector_t rs_get_progress(struct raid_set *rs, unsigned long recovery, + */ + set_bit(RT_FLAG_RS_IN_SYNC, &rs->runtime_flags); + +- } else { ++ else { + struct md_rdev *rdev; + + /* + * We are idle and recovery is needed, prevent 'A' chars race +- * caused by components still set to in-sync by constrcuctor. ++ * caused by components still set to in-sync by constructor. + */ + if (test_bit(MD_RECOVERY_NEEDED, &recovery)) + set_bit(RT_FLAG_RS_RESYNCING, &rs->runtime_flags); +@@ -3542,7 +3536,7 @@ static void raid_status(struct dm_target *ti, status_type_t type, + progress = rs_get_progress(rs, recovery, resync_max_sectors); + resync_mismatches = (mddev->last_sync_action && !strcasecmp(mddev->last_sync_action, "check")) ? + atomic64_read(&mddev->resync_mismatches) : 0; +- sync_action = decipher_sync_action(&rs->md, recovery); ++ sync_action = sync_str(decipher_sync_action(&rs->md, recovery)); + + /* HM FIXME: do we want another state char for raid0? It shows 'D'/'A'/'-' now */ + for (i = 0; i < rs->raid_disks; i++) +@@ -3892,14 +3886,13 @@ static int rs_start_reshape(struct raid_set *rs) + struct mddev *mddev = &rs->md; + struct md_personality *pers = mddev->pers; + ++ /* Don't allow the sync thread to work until the table gets reloaded. 
*/ ++ set_bit(MD_RECOVERY_WAIT, &mddev->recovery); ++ + r = rs_setup_reshape(rs); + if (r) + return r; + +- /* Need to be resumed to be able to start reshape, recovery is frozen until raid_resume() though */ +- if (test_and_clear_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags)) +- mddev_resume(mddev); +- + /* + * Check any reshape constraints enforced by the personalility + * +@@ -3923,10 +3916,6 @@ static int rs_start_reshape(struct raid_set *rs) + } + } + +- /* Suspend because a resume will happen in raid_resume() */ +- set_bit(RT_FLAG_RS_SUSPENDED, &rs->runtime_flags); +- mddev_suspend(mddev); +- + /* + * Now reshape got set up, update superblocks to + * reflect the fact so that a table reload will +@@ -3947,29 +3936,6 @@ static int raid_preresume(struct dm_target *ti) + if (test_and_set_bit(RT_FLAG_RS_PRERESUMED, &rs->runtime_flags)) + return 0; + +- if (!test_bit(__CTR_FLAG_REBUILD, &rs->ctr_flags)) { +- struct raid_set *rs_active = rs_find_active(rs); +- +- if (rs_active) { +- /* +- * In case no rebuilds have been requested +- * and an active table slot exists, copy +- * current resynchonization completed and +- * reshape position pointers across from +- * suspended raid set in the active slot. +- * +- * This resumes the new mapping at current +- * offsets to continue recover/reshape without +- * necessarily redoing a raid set partially or +- * causing data corruption in case of a reshape. 
+- */ +- if (rs_active->md.curr_resync_completed != MaxSector) +- mddev->curr_resync_completed = rs_active->md.curr_resync_completed; +- if (rs_active->md.reshape_position != MaxSector) +- mddev->reshape_position = rs_active->md.reshape_position; +- } +- } +- + /* + * The superblocks need to be updated on disk if the + * array is new or new devices got added (thus zeroed +diff --git a/drivers/md/dm-thin-metadata.c b/drivers/md/dm-thin-metadata.c +index 72142021b5c9..20b0776e39ef 100644 +--- a/drivers/md/dm-thin-metadata.c ++++ b/drivers/md/dm-thin-metadata.c +@@ -188,6 +188,12 @@ struct dm_pool_metadata { + unsigned long flags; + sector_t data_block_size; + ++ /* ++ * We reserve a section of the metadata for commit overhead. ++ * All reported space does *not* include this. ++ */ ++ dm_block_t metadata_reserve; ++ + /* + * Set if a transaction has to be aborted but the attempt to roll back + * to the previous (good) transaction failed. The only pool metadata +@@ -816,6 +822,20 @@ static int __commit_transaction(struct dm_pool_metadata *pmd) + return dm_tm_commit(pmd->tm, sblock); + } + ++static void __set_metadata_reserve(struct dm_pool_metadata *pmd) ++{ ++ int r; ++ dm_block_t total; ++ dm_block_t max_blocks = 4096; /* 16M */ ++ ++ r = dm_sm_get_nr_blocks(pmd->metadata_sm, &total); ++ if (r) { ++ DMERR("could not get size of metadata device"); ++ pmd->metadata_reserve = max_blocks; ++ } else ++ pmd->metadata_reserve = min(max_blocks, div_u64(total, 10)); ++} ++ + struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev, + sector_t data_block_size, + bool format_device) +@@ -849,6 +869,8 @@ struct dm_pool_metadata *dm_pool_metadata_open(struct block_device *bdev, + return ERR_PTR(r); + } + ++ __set_metadata_reserve(pmd); ++ + return pmd; + } + +@@ -1820,6 +1842,13 @@ int dm_pool_get_free_metadata_block_count(struct dm_pool_metadata *pmd, + down_read(&pmd->root_lock); + if (!pmd->fail_io) + r = dm_sm_get_nr_free(pmd->metadata_sm, result); ++ ++ if 
(!r) { ++ if (*result < pmd->metadata_reserve) ++ *result = 0; ++ else ++ *result -= pmd->metadata_reserve; ++ } + up_read(&pmd->root_lock); + + return r; +@@ -1932,8 +1961,11 @@ int dm_pool_resize_metadata_dev(struct dm_pool_metadata *pmd, dm_block_t new_cou + int r = -EINVAL; + + down_write(&pmd->root_lock); +- if (!pmd->fail_io) ++ if (!pmd->fail_io) { + r = __resize_space_map(pmd->metadata_sm, new_count); ++ if (!r) ++ __set_metadata_reserve(pmd); ++ } + up_write(&pmd->root_lock); + + return r; +diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c +index 1087f6a1ac79..b512efd4050c 100644 +--- a/drivers/md/dm-thin.c ++++ b/drivers/md/dm-thin.c +@@ -200,7 +200,13 @@ struct dm_thin_new_mapping; + enum pool_mode { + PM_WRITE, /* metadata may be changed */ + PM_OUT_OF_DATA_SPACE, /* metadata may be changed, though data may not be allocated */ ++ ++ /* ++ * Like READ_ONLY, except may switch back to WRITE on metadata resize. Reported as READ_ONLY. ++ */ ++ PM_OUT_OF_METADATA_SPACE, + PM_READ_ONLY, /* metadata may not be changed */ ++ + PM_FAIL, /* all I/O fails */ + }; + +@@ -1388,7 +1394,35 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode); + + static void requeue_bios(struct pool *pool); + +-static void check_for_space(struct pool *pool) ++static bool is_read_only_pool_mode(enum pool_mode mode) ++{ ++ return (mode == PM_OUT_OF_METADATA_SPACE || mode == PM_READ_ONLY); ++} ++ ++static bool is_read_only(struct pool *pool) ++{ ++ return is_read_only_pool_mode(get_pool_mode(pool)); ++} ++ ++static void check_for_metadata_space(struct pool *pool) ++{ ++ int r; ++ const char *ooms_reason = NULL; ++ dm_block_t nr_free; ++ ++ r = dm_pool_get_free_metadata_block_count(pool->pmd, &nr_free); ++ if (r) ++ ooms_reason = "Could not get free metadata blocks"; ++ else if (!nr_free) ++ ooms_reason = "No free metadata blocks"; ++ ++ if (ooms_reason && !is_read_only(pool)) { ++ DMERR("%s", ooms_reason); ++ set_pool_mode(pool, PM_OUT_OF_METADATA_SPACE); ++ } 
++} ++ ++static void check_for_data_space(struct pool *pool) + { + int r; + dm_block_t nr_free; +@@ -1414,14 +1448,16 @@ static int commit(struct pool *pool) + { + int r; + +- if (get_pool_mode(pool) >= PM_READ_ONLY) ++ if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE) + return -EINVAL; + + r = dm_pool_commit_metadata(pool->pmd); + if (r) + metadata_operation_failed(pool, "dm_pool_commit_metadata", r); +- else +- check_for_space(pool); ++ else { ++ check_for_metadata_space(pool); ++ check_for_data_space(pool); ++ } + + return r; + } +@@ -1487,6 +1523,19 @@ static int alloc_data_block(struct thin_c *tc, dm_block_t *result) + return r; + } + ++ r = dm_pool_get_free_metadata_block_count(pool->pmd, &free_blocks); ++ if (r) { ++ metadata_operation_failed(pool, "dm_pool_get_free_metadata_block_count", r); ++ return r; ++ } ++ ++ if (!free_blocks) { ++ /* Let's commit before we use up the metadata reserve. */ ++ r = commit(pool); ++ if (r) ++ return r; ++ } ++ + return 0; + } + +@@ -1518,6 +1567,7 @@ static blk_status_t should_error_unserviceable_bio(struct pool *pool) + case PM_OUT_OF_DATA_SPACE: + return pool->pf.error_if_no_space ? 
BLK_STS_NOSPC : 0; + ++ case PM_OUT_OF_METADATA_SPACE: + case PM_READ_ONLY: + case PM_FAIL: + return BLK_STS_IOERR; +@@ -2481,8 +2531,9 @@ static void set_pool_mode(struct pool *pool, enum pool_mode new_mode) + error_retry_list(pool); + break; + ++ case PM_OUT_OF_METADATA_SPACE: + case PM_READ_ONLY: +- if (old_mode != new_mode) ++ if (!is_read_only_pool_mode(old_mode)) + notify_of_pool_mode_change(pool, "read-only"); + dm_pool_metadata_read_only(pool->pmd); + pool->process_bio = process_bio_read_only; +@@ -3420,6 +3471,10 @@ static int maybe_resize_metadata_dev(struct dm_target *ti, bool *need_commit) + DMINFO("%s: growing the metadata device from %llu to %llu blocks", + dm_device_name(pool->pool_md), + sb_metadata_dev_size, metadata_dev_size); ++ ++ if (get_pool_mode(pool) == PM_OUT_OF_METADATA_SPACE) ++ set_pool_mode(pool, PM_WRITE); ++ + r = dm_pool_resize_metadata_dev(pool->pmd, metadata_dev_size); + if (r) { + metadata_operation_failed(pool, "dm_pool_resize_metadata_dev", r); +@@ -3724,7 +3779,7 @@ static int pool_message(struct dm_target *ti, unsigned argc, char **argv, + struct pool_c *pt = ti->private; + struct pool *pool = pt->pool; + +- if (get_pool_mode(pool) >= PM_READ_ONLY) { ++ if (get_pool_mode(pool) >= PM_OUT_OF_METADATA_SPACE) { + DMERR("%s: unable to service pool target messages in READ_ONLY or FAIL mode", + dm_device_name(pool->pool_md)); + return -EOPNOTSUPP; +@@ -3798,6 +3853,7 @@ static void pool_status(struct dm_target *ti, status_type_t type, + dm_block_t nr_blocks_data; + dm_block_t nr_blocks_metadata; + dm_block_t held_root; ++ enum pool_mode mode; + char buf[BDEVNAME_SIZE]; + char buf2[BDEVNAME_SIZE]; + struct pool_c *pt = ti->private; +@@ -3868,9 +3924,10 @@ static void pool_status(struct dm_target *ti, status_type_t type, + else + DMEMIT("- "); + +- if (pool->pf.mode == PM_OUT_OF_DATA_SPACE) ++ mode = get_pool_mode(pool); ++ if (mode == PM_OUT_OF_DATA_SPACE) + DMEMIT("out_of_data_space "); +- else if (pool->pf.mode == PM_READ_ONLY) ++ 
else if (is_read_only_pool_mode(mode)) + DMEMIT("ro "); + else + DMEMIT("rw "); +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c +index 35bd3a62451b..8c93d44a052c 100644 +--- a/drivers/md/raid10.c ++++ b/drivers/md/raid10.c +@@ -4531,11 +4531,12 @@ static sector_t reshape_request(struct mddev *mddev, sector_t sector_nr, + allow_barrier(conf); + } + ++ raise_barrier(conf, 0); + read_more: + /* Now schedule reads for blocks from sector_nr to last */ + r10_bio = raid10_alloc_init_r10buf(conf); + r10_bio->state = 0; +- raise_barrier(conf, sectors_done != 0); ++ raise_barrier(conf, 1); + atomic_set(&r10_bio->remaining, 0); + r10_bio->mddev = mddev; + r10_bio->sector = sector_nr; +@@ -4631,6 +4632,8 @@ read_more: + if (sector_nr <= last) + goto read_more; + ++ lower_barrier(conf); ++ + /* Now that we have done the whole section we can + * update reshape_progress + */ +diff --git a/drivers/md/raid5-log.h b/drivers/md/raid5-log.h +index a001808a2b77..bfb811407061 100644 +--- a/drivers/md/raid5-log.h ++++ b/drivers/md/raid5-log.h +@@ -46,6 +46,11 @@ extern int ppl_modify_log(struct r5conf *conf, struct md_rdev *rdev, bool add); + extern void ppl_quiesce(struct r5conf *conf, int quiesce); + extern int ppl_handle_flush_request(struct r5l_log *log, struct bio *bio); + ++static inline bool raid5_has_log(struct r5conf *conf) ++{ ++ return test_bit(MD_HAS_JOURNAL, &conf->mddev->flags); ++} ++ + static inline bool raid5_has_ppl(struct r5conf *conf) + { + return test_bit(MD_HAS_PPL, &conf->mddev->flags); +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index 49107c52c8e6..9050bfc71309 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -735,7 +735,7 @@ static bool stripe_can_batch(struct stripe_head *sh) + { + struct r5conf *conf = sh->raid_conf; + +- if (conf->log || raid5_has_ppl(conf)) ++ if (raid5_has_log(conf) || raid5_has_ppl(conf)) + return false; + return test_bit(STRIPE_BATCH_READY, &sh->state) && + !test_bit(STRIPE_BITMAP_PENDING, &sh->state) && 
+@@ -7739,7 +7739,7 @@ static int raid5_resize(struct mddev *mddev, sector_t sectors) + sector_t newsize; + struct r5conf *conf = mddev->private; + +- if (conf->log || raid5_has_ppl(conf)) ++ if (raid5_has_log(conf) || raid5_has_ppl(conf)) + return -EINVAL; + sectors &= ~((sector_t)conf->chunk_sectors - 1); + newsize = raid5_size(mddev, sectors, mddev->raid_disks); +@@ -7790,7 +7790,7 @@ static int check_reshape(struct mddev *mddev) + { + struct r5conf *conf = mddev->private; + +- if (conf->log || raid5_has_ppl(conf)) ++ if (raid5_has_log(conf) || raid5_has_ppl(conf)) + return -EINVAL; + if (mddev->delta_disks == 0 && + mddev->new_layout == mddev->layout && +diff --git a/drivers/net/ethernet/amazon/ena/ena_com.c b/drivers/net/ethernet/amazon/ena/ena_com.c +index 17f12c18d225..c37deef3bcf1 100644 +--- a/drivers/net/ethernet/amazon/ena/ena_com.c ++++ b/drivers/net/ethernet/amazon/ena/ena_com.c +@@ -459,7 +459,7 @@ static void ena_com_handle_admin_completion(struct ena_com_admin_queue *admin_qu + cqe = &admin_queue->cq.entries[head_masked]; + + /* Go over all the completions */ +- while ((cqe->acq_common_descriptor.flags & ++ while ((READ_ONCE(cqe->acq_common_descriptor.flags) & + ENA_ADMIN_ACQ_COMMON_DESC_PHASE_MASK) == phase) { + /* Do not read the rest of the completion entry before the + * phase bit was validated +@@ -637,7 +637,7 @@ static u32 ena_com_reg_bar_read32(struct ena_com_dev *ena_dev, u16 offset) + + mmiowb(); + for (i = 0; i < timeout; i++) { +- if (read_resp->req_id == mmio_read->seq_num) ++ if (READ_ONCE(read_resp->req_id) == mmio_read->seq_num) + break; + + udelay(1); +@@ -1796,8 +1796,8 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data) + aenq_common = &aenq_e->aenq_common_desc; + + /* Go over all the events */ +- while ((aenq_common->flags & ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == +- phase) { ++ while ((READ_ONCE(aenq_common->flags) & ++ ENA_ADMIN_AENQ_COMMON_DESC_PHASE_MASK) == phase) { + pr_debug("AENQ! 
Group[%x] Syndrom[%x] timestamp: [%llus]\n", + aenq_common->group, aenq_common->syndrom, + (u64)aenq_common->timestamp_low + +diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c +index f2af87d70594..1b01cd2820ba 100644 +--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c ++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c +@@ -76,7 +76,7 @@ MODULE_DEVICE_TABLE(pci, ena_pci_tbl); + + static int ena_rss_init_default(struct ena_adapter *adapter); + static void check_for_admin_com_state(struct ena_adapter *adapter); +-static void ena_destroy_device(struct ena_adapter *adapter); ++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful); + static int ena_restore_device(struct ena_adapter *adapter); + + static void ena_tx_timeout(struct net_device *dev) +@@ -461,7 +461,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring, + return -ENOMEM; + } + +- dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, ++ dma = dma_map_page(rx_ring->dev, page, 0, ENA_PAGE_SIZE, + DMA_FROM_DEVICE); + if (unlikely(dma_mapping_error(rx_ring->dev, dma))) { + u64_stats_update_begin(&rx_ring->syncp); +@@ -478,7 +478,7 @@ static inline int ena_alloc_rx_page(struct ena_ring *rx_ring, + rx_info->page_offset = 0; + ena_buf = &rx_info->ena_buf; + ena_buf->paddr = dma; +- ena_buf->len = PAGE_SIZE; ++ ena_buf->len = ENA_PAGE_SIZE; + + return 0; + } +@@ -495,7 +495,7 @@ static void ena_free_rx_page(struct ena_ring *rx_ring, + return; + } + +- dma_unmap_page(rx_ring->dev, ena_buf->paddr, PAGE_SIZE, ++ dma_unmap_page(rx_ring->dev, ena_buf->paddr, ENA_PAGE_SIZE, + DMA_FROM_DEVICE); + + __free_page(page); +@@ -916,10 +916,10 @@ static struct sk_buff *ena_rx_skb(struct ena_ring *rx_ring, + do { + dma_unmap_page(rx_ring->dev, + dma_unmap_addr(&rx_info->ena_buf, paddr), +- PAGE_SIZE, DMA_FROM_DEVICE); ++ ENA_PAGE_SIZE, DMA_FROM_DEVICE); + + skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_info->page, +- rx_info->page_offset, 
len, PAGE_SIZE); ++ rx_info->page_offset, len, ENA_PAGE_SIZE); + + netif_dbg(rx_ring->adapter, rx_status, rx_ring->netdev, + "rx skb updated. len %d. data_len %d\n", +@@ -1900,7 +1900,7 @@ static int ena_close(struct net_device *netdev) + "Destroy failure, restarting device\n"); + ena_dump_stats_to_dmesg(adapter); + /* rtnl lock already obtained in dev_ioctl() layer */ +- ena_destroy_device(adapter); ++ ena_destroy_device(adapter, false); + ena_restore_device(adapter); + } + +@@ -2549,12 +2549,15 @@ err_disable_msix: + return rc; + } + +-static void ena_destroy_device(struct ena_adapter *adapter) ++static void ena_destroy_device(struct ena_adapter *adapter, bool graceful) + { + struct net_device *netdev = adapter->netdev; + struct ena_com_dev *ena_dev = adapter->ena_dev; + bool dev_up; + ++ if (!test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags)) ++ return; ++ + netif_carrier_off(netdev); + + del_timer_sync(&adapter->timer_service); +@@ -2562,7 +2565,8 @@ static void ena_destroy_device(struct ena_adapter *adapter) + dev_up = test_bit(ENA_FLAG_DEV_UP, &adapter->flags); + adapter->dev_up_before_reset = dev_up; + +- ena_com_set_admin_running_state(ena_dev, false); ++ if (!graceful) ++ ena_com_set_admin_running_state(ena_dev, false); + + if (test_bit(ENA_FLAG_DEV_UP, &adapter->flags)) + ena_down(adapter); +@@ -2590,6 +2594,7 @@ static void ena_destroy_device(struct ena_adapter *adapter) + adapter->reset_reason = ENA_REGS_RESET_NORMAL; + + clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); ++ clear_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags); + } + + static int ena_restore_device(struct ena_adapter *adapter) +@@ -2634,6 +2639,7 @@ static int ena_restore_device(struct ena_adapter *adapter) + } + } + ++ set_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags); + mod_timer(&adapter->timer_service, round_jiffies(jiffies + HZ)); + dev_err(&pdev->dev, "Device reset completed successfully\n"); + +@@ -2664,7 +2670,7 @@ static void ena_fw_reset_device(struct work_struct *work) + 
return; + } + rtnl_lock(); +- ena_destroy_device(adapter); ++ ena_destroy_device(adapter, false); + ena_restore_device(adapter); + rtnl_unlock(); + } +@@ -3408,30 +3414,24 @@ static void ena_remove(struct pci_dev *pdev) + netdev->rx_cpu_rmap = NULL; + } + #endif /* CONFIG_RFS_ACCEL */ +- +- unregister_netdev(netdev); + del_timer_sync(&adapter->timer_service); + + cancel_work_sync(&adapter->reset_task); + +- /* Reset the device only if the device is running. */ +- if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags)) +- ena_com_dev_reset(ena_dev, adapter->reset_reason); ++ unregister_netdev(netdev); + +- ena_free_mgmnt_irq(adapter); ++ /* If the device is running then we want to make sure the device will be ++ * reset to make sure no more events will be issued by the device. ++ */ ++ if (test_bit(ENA_FLAG_DEVICE_RUNNING, &adapter->flags)) ++ set_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); + +- ena_disable_msix(adapter); ++ rtnl_lock(); ++ ena_destroy_device(adapter, true); ++ rtnl_unlock(); + + free_netdev(netdev); + +- ena_com_mmio_reg_read_request_destroy(ena_dev); +- +- ena_com_abort_admin_commands(ena_dev); +- +- ena_com_wait_for_abort_completion(ena_dev); +- +- ena_com_admin_destroy(ena_dev); +- + ena_com_rss_destroy(ena_dev); + + ena_com_delete_debug_area(ena_dev); +@@ -3466,7 +3466,7 @@ static int ena_suspend(struct pci_dev *pdev, pm_message_t state) + "ignoring device reset request as the device is being suspended\n"); + clear_bit(ENA_FLAG_TRIGGER_RESET, &adapter->flags); + } +- ena_destroy_device(adapter); ++ ena_destroy_device(adapter, true); + rtnl_unlock(); + return 0; + } +diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h +index f1972b5ab650..7c7ae56c52cf 100644 +--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h ++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h +@@ -355,4 +355,15 @@ void ena_dump_stats_to_buf(struct ena_adapter *adapter, u8 *buf); + + int ena_get_sset_count(struct net_device 
*netdev, int sset); + ++/* The ENA buffer length fields is 16 bit long. So when PAGE_SIZE == 64kB the ++ * driver passas 0. ++ * Since the max packet size the ENA handles is ~9kB limit the buffer length to ++ * 16kB. ++ */ ++#if PAGE_SIZE > SZ_16K ++#define ENA_PAGE_SIZE SZ_16K ++#else ++#define ENA_PAGE_SIZE PAGE_SIZE ++#endif ++ + #endif /* !(ENA_H) */ +diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c +index 515d96e32143..c4d7479938e2 100644 +--- a/drivers/net/ethernet/cadence/macb_main.c ++++ b/drivers/net/ethernet/cadence/macb_main.c +@@ -648,7 +648,7 @@ static int macb_halt_tx(struct macb *bp) + if (!(status & MACB_BIT(TGO))) + return 0; + +- usleep_range(10, 250); ++ udelay(250); + } while (time_before(halt_time, timeout)); + + return -ETIMEDOUT; +diff --git a/drivers/net/ethernet/hisilicon/hns/hnae.h b/drivers/net/ethernet/hisilicon/hns/hnae.h +index cad52bd331f7..08a750fb60c4 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hnae.h ++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h +@@ -486,6 +486,8 @@ struct hnae_ae_ops { + u8 *auto_neg, u16 *speed, u8 *duplex); + void (*toggle_ring_irq)(struct hnae_ring *ring, u32 val); + void (*adjust_link)(struct hnae_handle *handle, int speed, int duplex); ++ bool (*need_adjust_link)(struct hnae_handle *handle, ++ int speed, int duplex); + int (*set_loopback)(struct hnae_handle *handle, + enum hnae_loop loop_mode, int en); + void (*get_ring_bdnum_limit)(struct hnae_queue *queue, +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c +index bd68379d2bea..bf930ab3c2bd 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c +@@ -155,6 +155,41 @@ static void hns_ae_put_handle(struct hnae_handle *handle) + hns_ae_get_ring_pair(handle->qs[i])->used_by_vf = 0; + } + ++static int hns_ae_wait_flow_down(struct hnae_handle *handle) ++{ ++ struct dsaf_device 
*dsaf_dev; ++ struct hns_ppe_cb *ppe_cb; ++ struct hnae_vf_cb *vf_cb; ++ int ret; ++ int i; ++ ++ for (i = 0; i < handle->q_num; i++) { ++ ret = hns_rcb_wait_tx_ring_clean(handle->qs[i]); ++ if (ret) ++ return ret; ++ } ++ ++ ppe_cb = hns_get_ppe_cb(handle); ++ ret = hns_ppe_wait_tx_fifo_clean(ppe_cb); ++ if (ret) ++ return ret; ++ ++ dsaf_dev = hns_ae_get_dsaf_dev(handle->dev); ++ if (!dsaf_dev) ++ return -EINVAL; ++ ret = hns_dsaf_wait_pkt_clean(dsaf_dev, handle->dport_id); ++ if (ret) ++ return ret; ++ ++ vf_cb = hns_ae_get_vf_cb(handle); ++ ret = hns_mac_wait_fifo_clean(vf_cb->mac_cb); ++ if (ret) ++ return ret; ++ ++ mdelay(10); ++ return 0; ++} ++ + static void hns_ae_ring_enable_all(struct hnae_handle *handle, int val) + { + int q_num = handle->q_num; +@@ -399,12 +434,41 @@ static int hns_ae_get_mac_info(struct hnae_handle *handle, + return hns_mac_get_port_info(mac_cb, auto_neg, speed, duplex); + } + ++static bool hns_ae_need_adjust_link(struct hnae_handle *handle, int speed, ++ int duplex) ++{ ++ struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle); ++ ++ return hns_mac_need_adjust_link(mac_cb, speed, duplex); ++} ++ + static void hns_ae_adjust_link(struct hnae_handle *handle, int speed, + int duplex) + { + struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle); + +- hns_mac_adjust_link(mac_cb, speed, duplex); ++ switch (mac_cb->dsaf_dev->dsaf_ver) { ++ case AE_VERSION_1: ++ hns_mac_adjust_link(mac_cb, speed, duplex); ++ break; ++ ++ case AE_VERSION_2: ++ /* chip need to clear all pkt inside */ ++ hns_mac_disable(mac_cb, MAC_COMM_MODE_RX); ++ if (hns_ae_wait_flow_down(handle)) { ++ hns_mac_enable(mac_cb, MAC_COMM_MODE_RX); ++ break; ++ } ++ ++ hns_mac_adjust_link(mac_cb, speed, duplex); ++ hns_mac_enable(mac_cb, MAC_COMM_MODE_RX); ++ break; ++ ++ default: ++ break; ++ } ++ ++ return; + } + + static void hns_ae_get_ring_bdnum_limit(struct hnae_queue *queue, +@@ -902,6 +966,7 @@ static struct hnae_ae_ops hns_dsaf_ops = { + .get_status = hns_ae_get_link_status, + 
.get_info = hns_ae_get_mac_info, + .adjust_link = hns_ae_adjust_link, ++ .need_adjust_link = hns_ae_need_adjust_link, + .set_loopback = hns_ae_config_loopback, + .get_ring_bdnum_limit = hns_ae_get_ring_bdnum_limit, + .get_pauseparam = hns_ae_get_pauseparam, +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c +index 74bd260ca02a..8c7bc5cf193c 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c +@@ -257,6 +257,16 @@ static void hns_gmac_get_pausefrm_cfg(void *mac_drv, u32 *rx_pause_en, + *tx_pause_en = dsaf_get_bit(pause_en, GMAC_PAUSE_EN_TX_FDFC_B); + } + ++static bool hns_gmac_need_adjust_link(void *mac_drv, enum mac_speed speed, ++ int duplex) ++{ ++ struct mac_driver *drv = (struct mac_driver *)mac_drv; ++ struct hns_mac_cb *mac_cb = drv->mac_cb; ++ ++ return (mac_cb->speed != speed) || ++ (mac_cb->half_duplex == duplex); ++} ++ + static int hns_gmac_adjust_link(void *mac_drv, enum mac_speed speed, + u32 full_duplex) + { +@@ -309,6 +319,30 @@ static void hns_gmac_set_promisc(void *mac_drv, u8 en) + hns_gmac_set_uc_match(mac_drv, en); + } + ++int hns_gmac_wait_fifo_clean(void *mac_drv) ++{ ++ struct mac_driver *drv = (struct mac_driver *)mac_drv; ++ int wait_cnt; ++ u32 val; ++ ++ wait_cnt = 0; ++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) { ++ val = dsaf_read_dev(drv, GMAC_FIFO_STATE_REG); ++ /* bit5~bit0 is not send complete pkts */ ++ if ((val & 0x3f) == 0) ++ break; ++ usleep_range(100, 200); ++ } ++ ++ if (wait_cnt >= HNS_MAX_WAIT_CNT) { ++ dev_err(drv->dev, ++ "hns ge %d fifo was not idle.\n", drv->mac_id); ++ return -EBUSY; ++ } ++ ++ return 0; ++} ++ + static void hns_gmac_init(void *mac_drv) + { + u32 port; +@@ -690,6 +724,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param) + mac_drv->mac_disable = hns_gmac_disable; + mac_drv->mac_free = hns_gmac_free; + mac_drv->adjust_link = 
hns_gmac_adjust_link; ++ mac_drv->need_adjust_link = hns_gmac_need_adjust_link; + mac_drv->set_tx_auto_pause_frames = hns_gmac_set_tx_auto_pause_frames; + mac_drv->config_max_frame_length = hns_gmac_config_max_frame_length; + mac_drv->mac_pausefrm_cfg = hns_gmac_pause_frm_cfg; +@@ -717,6 +752,7 @@ void *hns_gmac_config(struct hns_mac_cb *mac_cb, struct mac_params *mac_param) + mac_drv->get_strings = hns_gmac_get_strings; + mac_drv->update_stats = hns_gmac_update_stats; + mac_drv->set_promiscuous = hns_gmac_set_promisc; ++ mac_drv->wait_fifo_clean = hns_gmac_wait_fifo_clean; + + return (void *)mac_drv; + } +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c +index 9dcc5765f11f..5c6b880c3eb7 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c +@@ -114,6 +114,26 @@ int hns_mac_get_port_info(struct hns_mac_cb *mac_cb, + return 0; + } + ++/** ++ *hns_mac_is_adjust_link - check is need change mac speed and duplex register ++ *@mac_cb: mac device ++ *@speed: phy device speed ++ *@duplex:phy device duplex ++ * ++ */ ++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex) ++{ ++ struct mac_driver *mac_ctrl_drv; ++ ++ mac_ctrl_drv = (struct mac_driver *)(mac_cb->priv.mac); ++ ++ if (mac_ctrl_drv->need_adjust_link) ++ return mac_ctrl_drv->need_adjust_link(mac_ctrl_drv, ++ (enum mac_speed)speed, duplex); ++ else ++ return true; ++} ++ + void hns_mac_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex) + { + int ret; +@@ -430,6 +450,16 @@ int hns_mac_vm_config_bc_en(struct hns_mac_cb *mac_cb, u32 vmid, bool enable) + return 0; + } + ++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb) ++{ ++ struct mac_driver *drv = hns_mac_get_drv(mac_cb); ++ ++ if (drv->wait_fifo_clean) ++ return drv->wait_fifo_clean(drv); ++ ++ return 0; ++} ++ + void hns_mac_reset(struct hns_mac_cb *mac_cb) + { + struct mac_driver *drv = 
hns_mac_get_drv(mac_cb); +@@ -999,6 +1029,20 @@ static int hns_mac_get_max_port_num(struct dsaf_device *dsaf_dev) + return DSAF_MAX_PORT_NUM; + } + ++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode) ++{ ++ struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb); ++ ++ mac_ctrl_drv->mac_enable(mac_cb->priv.mac, mode); ++} ++ ++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode) ++{ ++ struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb); ++ ++ mac_ctrl_drv->mac_disable(mac_cb->priv.mac, mode); ++} ++ + /** + * hns_mac_init - init mac + * @dsaf_dev: dsa fabric device struct pointer +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h +index bbc0a98e7ca3..fbc75341bef7 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h +@@ -356,6 +356,9 @@ struct mac_driver { + /*adjust mac mode of port,include speed and duplex*/ + int (*adjust_link)(void *mac_drv, enum mac_speed speed, + u32 full_duplex); ++ /* need adjust link */ ++ bool (*need_adjust_link)(void *mac_drv, enum mac_speed speed, ++ int duplex); + /* config autoegotaite mode of port*/ + void (*set_an_mode)(void *mac_drv, u8 enable); + /* config loopbank mode */ +@@ -394,6 +397,7 @@ struct mac_driver { + void (*get_info)(void *mac_drv, struct mac_info *mac_info); + + void (*update_stats)(void *mac_drv); ++ int (*wait_fifo_clean)(void *mac_drv); + + enum mac_mode mac_mode; + u8 mac_id; +@@ -427,6 +431,7 @@ void *hns_xgmac_config(struct hns_mac_cb *mac_cb, + + int hns_mac_init(struct dsaf_device *dsaf_dev); + void mac_adjust_link(struct net_device *net_dev); ++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex); + void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status); + int hns_mac_change_vf_addr(struct hns_mac_cb *mac_cb, u32 vmid, char *addr); + int hns_mac_set_multi(struct hns_mac_cb 
*mac_cb, +@@ -463,5 +468,8 @@ int hns_mac_add_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id, + int hns_mac_rm_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id, + const unsigned char *addr); + int hns_mac_clr_multicast(struct hns_mac_cb *mac_cb, int vfn); ++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode); ++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode); ++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb); + + #endif /* _HNS_DSAF_MAC_H */ +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c +index 0ce07f6eb1e6..0ef6d429308f 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c +@@ -2733,6 +2733,35 @@ void hns_dsaf_set_promisc_tcam(struct dsaf_device *dsaf_dev, + soft_mac_entry->index = enable ? entry_index : DSAF_INVALID_ENTRY_IDX; + } + ++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port) ++{ ++ u32 val, val_tmp; ++ int wait_cnt; ++ ++ if (port >= DSAF_SERVICE_NW_NUM) ++ return 0; ++ ++ wait_cnt = 0; ++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) { ++ val = dsaf_read_dev(dsaf_dev, DSAF_VOQ_IN_PKT_NUM_0_REG + ++ (port + DSAF_XGE_NUM) * 0x40); ++ val_tmp = dsaf_read_dev(dsaf_dev, DSAF_VOQ_OUT_PKT_NUM_0_REG + ++ (port + DSAF_XGE_NUM) * 0x40); ++ if (val == val_tmp) ++ break; ++ ++ usleep_range(100, 200); ++ } ++ ++ if (wait_cnt >= HNS_MAX_WAIT_CNT) { ++ dev_err(dsaf_dev->dev, "hns dsaf clean wait timeout(%u - %u).\n", ++ val, val_tmp); ++ return -EBUSY; ++ } ++ ++ return 0; ++} ++ + /** + * dsaf_probe - probo dsaf dev + * @pdev: dasf platform device +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h +index 4507e8222683..0e1cd99831a6 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h +@@ -44,6 +44,8 @@ struct hns_mac_cb; + #define 
DSAF_ROCE_CREDIT_CHN 8 + #define DSAF_ROCE_CHAN_MODE 3 + ++#define HNS_MAX_WAIT_CNT 10000 ++ + enum dsaf_roce_port_mode { + DSAF_ROCE_6PORT_MODE, + DSAF_ROCE_4PORT_MODE, +@@ -463,5 +465,6 @@ int hns_dsaf_rm_mac_addr( + + int hns_dsaf_clr_mac_mc_port(struct dsaf_device *dsaf_dev, + u8 mac_id, u8 port_num); ++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port); + + #endif /* __HNS_DSAF_MAIN_H__ */ +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c +index 93e71e27401b..a19932aeb9d7 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c +@@ -274,6 +274,29 @@ static void hns_ppe_exc_irq_en(struct hns_ppe_cb *ppe_cb, int en) + dsaf_write_dev(ppe_cb, PPE_INTEN_REG, msk_vlue & vld_msk); + } + ++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb) ++{ ++ int wait_cnt; ++ u32 val; ++ ++ wait_cnt = 0; ++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) { ++ val = dsaf_read_dev(ppe_cb, PPE_CURR_TX_FIFO0_REG) & 0x3ffU; ++ if (!val) ++ break; ++ ++ usleep_range(100, 200); ++ } ++ ++ if (wait_cnt >= HNS_MAX_WAIT_CNT) { ++ dev_err(ppe_cb->dev, "hns ppe tx fifo clean wait timeout, still has %u pkt.\n", ++ val); ++ return -EBUSY; ++ } ++ ++ return 0; ++} ++ + /** + * ppe_init_hw - init ppe + * @ppe_cb: ppe device +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h +index 9d8e643e8aa6..f670e63a5a01 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h +@@ -100,6 +100,7 @@ struct ppe_common_cb { + + }; + ++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb); + int hns_ppe_init(struct dsaf_device *dsaf_dev); + + void hns_ppe_uninit(struct dsaf_device *dsaf_dev); +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c +index e2e28532e4dc..1e43d7a3ca86 
100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c +@@ -66,6 +66,29 @@ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag) + "queue(%d) wait fbd(%d) clean fail!!\n", i, fbd_num); + } + ++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs) ++{ ++ u32 head, tail; ++ int wait_cnt; ++ ++ tail = dsaf_read_dev(&qs->tx_ring, RCB_REG_TAIL); ++ wait_cnt = 0; ++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) { ++ head = dsaf_read_dev(&qs->tx_ring, RCB_REG_HEAD); ++ if (tail == head) ++ break; ++ ++ usleep_range(100, 200); ++ } ++ ++ if (wait_cnt >= HNS_MAX_WAIT_CNT) { ++ dev_err(qs->dev->dev, "rcb wait timeout, head not equal to tail.\n"); ++ return -EBUSY; ++ } ++ ++ return 0; ++} ++ + /** + *hns_rcb_reset_ring_hw - ring reset + *@q: ring struct pointer +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h +index 602816498c8d..2319b772a271 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h +@@ -136,6 +136,7 @@ void hns_rcbv2_int_clr_hw(struct hnae_queue *q, u32 flag); + void hns_rcb_init_hw(struct ring_pair_cb *ring); + void hns_rcb_reset_ring_hw(struct hnae_queue *q); + void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag); ++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs); + u32 hns_rcb_get_rx_coalesced_frames( + struct rcb_common_cb *rcb_common, u32 port_idx); + u32 hns_rcb_get_tx_coalesced_frames( +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h +index 886cbbf25761..74d935d82cbc 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h +@@ -464,6 +464,7 @@ + #define RCB_RING_INTMSK_TX_OVERTIME_REG 0x000C4 + #define RCB_RING_INTSTS_TX_OVERTIME_REG 0x000C8 + ++#define GMAC_FIFO_STATE_REG 0x0000UL + #define 
GMAC_DUPLEX_TYPE_REG 0x0008UL + #define GMAC_FD_FC_TYPE_REG 0x000CUL + #define GMAC_TX_WATER_LINE_REG 0x0010UL +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c +index ef994a715f93..b4518f45f048 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c +@@ -1212,11 +1212,26 @@ static void hns_nic_adjust_link(struct net_device *ndev) + struct hnae_handle *h = priv->ae_handle; + int state = 1; + ++ /* If there is no phy, do not need adjust link */ + if (ndev->phydev) { +- h->dev->ops->adjust_link(h, ndev->phydev->speed, +- ndev->phydev->duplex); +- state = ndev->phydev->link; ++ /* When phy link down, do nothing */ ++ if (ndev->phydev->link == 0) ++ return; ++ ++ if (h->dev->ops->need_adjust_link(h, ndev->phydev->speed, ++ ndev->phydev->duplex)) { ++ /* because Hi161X chip don't support to change gmac ++ * speed and duplex with traffic. Delay 200ms to ++ * make sure there is no more data in chip FIFO. 
++ */ ++ netif_carrier_off(ndev); ++ msleep(200); ++ h->dev->ops->adjust_link(h, ndev->phydev->speed, ++ ndev->phydev->duplex); ++ netif_carrier_on(ndev); ++ } + } ++ + state = state && h->dev->ops->get_status(h); + + if (state != priv->link) { +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c +index 2e14a3ae1d8b..c1e947bb852f 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c +@@ -243,7 +243,9 @@ static int hns_nic_set_link_ksettings(struct net_device *net_dev, + } + + if (h->dev->ops->adjust_link) { ++ netif_carrier_off(net_dev); + h->dev->ops->adjust_link(h, (int)speed, cmd->base.duplex); ++ netif_carrier_on(net_dev); + return 0; + } + +diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c +index 354c0982847b..372664686309 100644 +--- a/drivers/net/ethernet/ibm/emac/core.c ++++ b/drivers/net/ethernet/ibm/emac/core.c +@@ -494,9 +494,6 @@ static u32 __emac_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_s + case 16384: + ret |= EMAC_MR1_RFS_16K; + break; +- case 8192: +- ret |= EMAC4_MR1_RFS_8K; +- break; + case 4096: + ret |= EMAC_MR1_RFS_4K; + break; +@@ -537,6 +534,9 @@ static u32 __emac4_calc_base_mr1(struct emac_instance *dev, int tx_size, int rx_ + case 16384: + ret |= EMAC4_MR1_RFS_16K; + break; ++ case 8192: ++ ret |= EMAC4_MR1_RFS_8K; ++ break; + case 4096: + ret |= EMAC4_MR1_RFS_4K; + break; +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c +index ffe7acbeaa22..d834308adf95 100644 +--- a/drivers/net/ethernet/ibm/ibmvnic.c ++++ b/drivers/net/ethernet/ibm/ibmvnic.c +@@ -1841,11 +1841,17 @@ static int do_reset(struct ibmvnic_adapter *adapter, + adapter->map_id = 1; + release_rx_pools(adapter); + release_tx_pools(adapter); +- init_rx_pools(netdev); +- init_tx_pools(netdev); ++ rc = init_rx_pools(netdev); ++ if (rc) ++ return rc; ++ rc = 
init_tx_pools(netdev); ++ if (rc) ++ return rc; + + release_napi(adapter); +- init_napi(adapter); ++ rc = init_napi(adapter); ++ if (rc) ++ return rc; + } else { + rc = reset_tx_pools(adapter); + if (rc) +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +index 62e57b05a0ae..56b31e903cc1 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +@@ -3196,11 +3196,13 @@ int ixgbe_poll(struct napi_struct *napi, int budget) + return budget; + + /* all work done, exit the polling mode */ +- napi_complete_done(napi, work_done); +- if (adapter->rx_itr_setting & 1) +- ixgbe_set_itr(q_vector); +- if (!test_bit(__IXGBE_DOWN, &adapter->state)) +- ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx)); ++ if (likely(napi_complete_done(napi, work_done))) { ++ if (adapter->rx_itr_setting & 1) ++ ixgbe_set_itr(q_vector); ++ if (!test_bit(__IXGBE_DOWN, &adapter->state)) ++ ixgbe_irq_enable_queues(adapter, ++ BIT_ULL(q_vector->v_idx)); ++ } + + return min(work_done, budget - 1); + } +diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +index 661fa5a38df2..b8bba64673e5 100644 +--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c ++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c +@@ -4685,6 +4685,7 @@ static int mvpp2_port_probe(struct platform_device *pdev, + dev->min_mtu = ETH_MIN_MTU; + /* 9704 == 9728 - 20 and rounding to 8 */ + dev->max_mtu = MVPP2_BM_JUMBO_PKT_SIZE; ++ dev->dev.of_node = port_node; + + /* Phylink isn't used w/ ACPI as of now */ + if (port_node) { +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c +index 922811fb66e7..37ba7c78859d 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c +@@ -396,16 +396,17 @@ void mlx5_remove_dev_by_protocol(struct mlx5_core_dev *dev, int 
protocol) + } + } + +-static u16 mlx5_gen_pci_id(struct mlx5_core_dev *dev) ++static u32 mlx5_gen_pci_id(struct mlx5_core_dev *dev) + { +- return (u16)((dev->pdev->bus->number << 8) | ++ return (u32)((pci_domain_nr(dev->pdev->bus) << 16) | ++ (dev->pdev->bus->number << 8) | + PCI_SLOT(dev->pdev->devfn)); + } + + /* Must be called with intf_mutex held */ + struct mlx5_core_dev *mlx5_get_next_phys_dev(struct mlx5_core_dev *dev) + { +- u16 pci_id = mlx5_gen_pci_id(dev); ++ u32 pci_id = mlx5_gen_pci_id(dev); + struct mlx5_core_dev *res = NULL; + struct mlx5_core_dev *tmp_dev; + struct mlx5_priv *priv; +diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c +index e5eb361b973c..1d1e66002232 100644 +--- a/drivers/net/ethernet/realtek/r8169.c ++++ b/drivers/net/ethernet/realtek/r8169.c +@@ -730,7 +730,7 @@ struct rtl8169_tc_offsets { + }; + + enum rtl_flag { +- RTL_FLAG_TASK_ENABLED, ++ RTL_FLAG_TASK_ENABLED = 0, + RTL_FLAG_TASK_SLOW_PENDING, + RTL_FLAG_TASK_RESET_PENDING, + RTL_FLAG_TASK_PHY_PENDING, +@@ -5150,13 +5150,13 @@ static void rtl_hw_start(struct rtl8169_private *tp) + + rtl_set_rx_max_size(tp); + rtl_set_rx_tx_desc_registers(tp); +- rtl_set_tx_config_registers(tp); + RTL_W8(tp, Cfg9346, Cfg9346_Lock); + + /* Initially a 10 us delay. Turned it into a PCI commit. 
- FR */ + RTL_R8(tp, IntrMask); + RTL_W8(tp, ChipCmd, CmdTxEnb | CmdRxEnb); + rtl_init_rxcfg(tp); ++ rtl_set_tx_config_registers(tp); + + rtl_set_rx_mode(tp->dev); + /* no early-rx interrupts */ +@@ -7125,7 +7125,8 @@ static int rtl8169_close(struct net_device *dev) + rtl8169_update_counters(tp); + + rtl_lock_work(tp); +- clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags); ++ /* Clear all task flags */ ++ bitmap_zero(tp->wk.flags, RTL_FLAG_MAX); + + rtl8169_down(dev); + rtl_unlock_work(tp); +@@ -7301,7 +7302,9 @@ static void rtl8169_net_suspend(struct net_device *dev) + + rtl_lock_work(tp); + napi_disable(&tp->napi); +- clear_bit(RTL_FLAG_TASK_ENABLED, tp->wk.flags); ++ /* Clear all task flags */ ++ bitmap_zero(tp->wk.flags, RTL_FLAG_MAX); ++ + rtl_unlock_work(tp); + + rtl_pll_power_down(tp); +diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c +index 5614fd231bbe..6520379b390e 100644 +--- a/drivers/net/ethernet/renesas/sh_eth.c ++++ b/drivers/net/ethernet/renesas/sh_eth.c +@@ -807,6 +807,41 @@ static struct sh_eth_cpu_data r8a77980_data = { + .magic = 1, + .cexcr = 1, + }; ++ ++/* R7S9210 */ ++static struct sh_eth_cpu_data r7s9210_data = { ++ .soft_reset = sh_eth_soft_reset, ++ ++ .set_duplex = sh_eth_set_duplex, ++ .set_rate = sh_eth_set_rate_rcar, ++ ++ .register_type = SH_ETH_REG_FAST_SH4, ++ ++ .edtrr_trns = EDTRR_TRNS_ETHER, ++ .ecsr_value = ECSR_ICD, ++ .ecsipr_value = ECSIPR_ICDIP, ++ .eesipr_value = EESIPR_TWBIP | EESIPR_TABTIP | EESIPR_RABTIP | ++ EESIPR_RFCOFIP | EESIPR_ECIIP | EESIPR_FTCIP | ++ EESIPR_TDEIP | EESIPR_TFUFIP | EESIPR_FRIP | ++ EESIPR_RDEIP | EESIPR_RFOFIP | EESIPR_CNDIP | ++ EESIPR_DLCIP | EESIPR_CDIP | EESIPR_TROIP | ++ EESIPR_RMAFIP | EESIPR_RRFIP | EESIPR_RTLFIP | ++ EESIPR_RTSFIP | EESIPR_PREIP | EESIPR_CERFIP, ++ ++ .tx_check = EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_TRO, ++ .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE | ++ EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE, ++ ++ 
.fdr_value = 0x0000070f, ++ ++ .apr = 1, ++ .mpr = 1, ++ .tpauser = 1, ++ .hw_swap = 1, ++ .rpadir = 1, ++ .no_ade = 1, ++ .xdfar_rw = 1, ++}; + #endif /* CONFIG_OF */ + + static void sh_eth_set_rate_sh7724(struct net_device *ndev) +@@ -3131,6 +3166,7 @@ static const struct of_device_id sh_eth_match_table[] = { + { .compatible = "renesas,ether-r8a7794", .data = &rcar_gen2_data }, + { .compatible = "renesas,gether-r8a77980", .data = &r8a77980_data }, + { .compatible = "renesas,ether-r7s72100", .data = &r7s72100_data }, ++ { .compatible = "renesas,ether-r7s9210", .data = &r7s9210_data }, + { .compatible = "renesas,rcar-gen1-ether", .data = &rcar_gen1_data }, + { .compatible = "renesas,rcar-gen2-ether", .data = &rcar_gen2_data }, + { } +diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c +index 6b0e1ec346cb..d46d57b989ae 100644 +--- a/drivers/net/wireless/broadcom/b43/dma.c ++++ b/drivers/net/wireless/broadcom/b43/dma.c +@@ -1518,13 +1518,15 @@ void b43_dma_handle_txstatus(struct b43_wldev *dev, + } + } else { + /* More than a single header/data pair were missed. +- * Report this error, and reset the controller to ++ * Report this error. If running with open-source ++ * firmware, then reset the controller to + * revive operation. + */ + b43dbg(dev->wl, + "Out of order TX status report on DMA ring %d. Expected %d, but got %d\n", + ring->index, firstused, slot); +- b43_controller_restart(dev, "Out of order TX"); ++ if (dev->fw.opensource) ++ b43_controller_restart(dev, "Out of order TX"); + return; + } + } +diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c +index b815ba38dbdb..88121548eb9f 100644 +--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c ++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c +@@ -877,15 +877,12 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, + const u8 *nvm_chan = cfg->nvm_type == IWL_NVM_EXT ? 
+ iwl_ext_nvm_channels : iwl_nvm_channels; + struct ieee80211_regdomain *regd, *copy_rd; +- int size_of_regd, regd_to_copy, wmms_to_copy; +- int size_of_wmms = 0; ++ int size_of_regd, regd_to_copy; + struct ieee80211_reg_rule *rule; +- struct ieee80211_wmm_rule *wmm_rule, *d_wmm, *s_wmm; + struct regdb_ptrs *regdb_ptrs; + enum nl80211_band band; + int center_freq, prev_center_freq = 0; +- int valid_rules = 0, n_wmms = 0; +- int i; ++ int valid_rules = 0; + bool new_rule; + int max_num_ch = cfg->nvm_type == IWL_NVM_EXT ? + IWL_NVM_NUM_CHANNELS_EXT : IWL_NVM_NUM_CHANNELS; +@@ -904,11 +901,7 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, + sizeof(struct ieee80211_regdomain) + + num_of_ch * sizeof(struct ieee80211_reg_rule); + +- if (geo_info & GEO_WMM_ETSI_5GHZ_INFO) +- size_of_wmms = +- num_of_ch * sizeof(struct ieee80211_wmm_rule); +- +- regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL); ++ regd = kzalloc(size_of_regd, GFP_KERNEL); + if (!regd) + return ERR_PTR(-ENOMEM); + +@@ -922,8 +915,6 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, + regd->alpha2[0] = fw_mcc >> 8; + regd->alpha2[1] = fw_mcc & 0xff; + +- wmm_rule = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd); +- + for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) { + ch_flags = (u16)__le32_to_cpup(channels + ch_idx); + band = (ch_idx < NUM_2GHZ_CHANNELS) ? 
+@@ -977,26 +968,10 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, + band == NL80211_BAND_2GHZ) + continue; + +- if (!reg_query_regdb_wmm(regd->alpha2, center_freq, +- ®db_ptrs[n_wmms].token, wmm_rule)) { +- /* Add only new rules */ +- for (i = 0; i < n_wmms; i++) { +- if (regdb_ptrs[i].token == +- regdb_ptrs[n_wmms].token) { +- rule->wmm_rule = regdb_ptrs[i].rule; +- break; +- } +- } +- if (i == n_wmms) { +- rule->wmm_rule = wmm_rule; +- regdb_ptrs[n_wmms++].rule = wmm_rule; +- wmm_rule++; +- } +- } ++ reg_query_regdb_wmm(regd->alpha2, center_freq, rule); + } + + regd->n_reg_rules = valid_rules; +- regd->n_wmm_rules = n_wmms; + + /* + * Narrow down regdom for unused regulatory rules to prevent hole +@@ -1005,28 +980,13 @@ iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, + regd_to_copy = sizeof(struct ieee80211_regdomain) + + valid_rules * sizeof(struct ieee80211_reg_rule); + +- wmms_to_copy = sizeof(struct ieee80211_wmm_rule) * n_wmms; +- +- copy_rd = kzalloc(regd_to_copy + wmms_to_copy, GFP_KERNEL); ++ copy_rd = kzalloc(regd_to_copy, GFP_KERNEL); + if (!copy_rd) { + copy_rd = ERR_PTR(-ENOMEM); + goto out; + } + + memcpy(copy_rd, regd, regd_to_copy); +- memcpy((u8 *)copy_rd + regd_to_copy, (u8 *)regd + size_of_regd, +- wmms_to_copy); +- +- d_wmm = (struct ieee80211_wmm_rule *)((u8 *)copy_rd + regd_to_copy); +- s_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd); +- +- for (i = 0; i < regd->n_reg_rules; i++) { +- if (!regd->reg_rules[i].wmm_rule) +- continue; +- +- copy_rd->reg_rules[i].wmm_rule = d_wmm + +- (regd->reg_rules[i].wmm_rule - s_wmm); +- } + + out: + kfree(regdb_ptrs); +diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c +index 18e819d964f1..80e2c8595c7c 100644 +--- a/drivers/net/wireless/mac80211_hwsim.c ++++ b/drivers/net/wireless/mac80211_hwsim.c +@@ -33,6 +33,7 @@ + #include + #include + #include ++#include + #include "mac80211_hwsim.h" + + #define 
WARN_QUEUE 100 +@@ -2699,9 +2700,6 @@ static int mac80211_hwsim_new_radio(struct genl_info *info, + IEEE80211_VHT_CAP_SHORT_GI_80 | + IEEE80211_VHT_CAP_SHORT_GI_160 | + IEEE80211_VHT_CAP_TXSTBC | +- IEEE80211_VHT_CAP_RXSTBC_1 | +- IEEE80211_VHT_CAP_RXSTBC_2 | +- IEEE80211_VHT_CAP_RXSTBC_3 | + IEEE80211_VHT_CAP_RXSTBC_4 | + IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK; + sband->vht_cap.vht_mcs.rx_mcs_map = +@@ -3194,6 +3192,11 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info) + if (info->attrs[HWSIM_ATTR_CHANNELS]) + param.channels = nla_get_u32(info->attrs[HWSIM_ATTR_CHANNELS]); + ++ if (param.channels < 1) { ++ GENL_SET_ERR_MSG(info, "must have at least one channel"); ++ return -EINVAL; ++ } ++ + if (param.channels > CFG80211_MAX_NUM_DIFFERENT_CHANNELS) { + GENL_SET_ERR_MSG(info, "too many channels specified"); + return -EINVAL; +@@ -3227,6 +3230,9 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info) + kfree(hwname); + return -EINVAL; + } ++ ++ idx = array_index_nospec(idx, ++ ARRAY_SIZE(hwsim_world_regdom_custom)); + param.regd = hwsim_world_regdom_custom[idx]; + } + +diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c +index 52e0c5d579a7..1d909e5ba657 100644 +--- a/drivers/nvme/target/rdma.c ++++ b/drivers/nvme/target/rdma.c +@@ -65,6 +65,7 @@ struct nvmet_rdma_rsp { + + struct nvmet_req req; + ++ bool allocated; + u8 n_rdma; + u32 flags; + u32 invalidate_rkey; +@@ -166,11 +167,19 @@ nvmet_rdma_get_rsp(struct nvmet_rdma_queue *queue) + unsigned long flags; + + spin_lock_irqsave(&queue->rsps_lock, flags); +- rsp = list_first_entry(&queue->free_rsps, ++ rsp = list_first_entry_or_null(&queue->free_rsps, + struct nvmet_rdma_rsp, free_list); +- list_del(&rsp->free_list); ++ if (likely(rsp)) ++ list_del(&rsp->free_list); + spin_unlock_irqrestore(&queue->rsps_lock, flags); + ++ if (unlikely(!rsp)) { ++ rsp = kmalloc(sizeof(*rsp), GFP_KERNEL); ++ if (unlikely(!rsp)) ++ return NULL; ++ 
rsp->allocated = true; ++ } ++ + return rsp; + } + +@@ -179,6 +188,11 @@ nvmet_rdma_put_rsp(struct nvmet_rdma_rsp *rsp) + { + unsigned long flags; + ++ if (rsp->allocated) { ++ kfree(rsp); ++ return; ++ } ++ + spin_lock_irqsave(&rsp->queue->rsps_lock, flags); + list_add_tail(&rsp->free_list, &rsp->queue->free_rsps); + spin_unlock_irqrestore(&rsp->queue->rsps_lock, flags); +@@ -702,6 +716,15 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc) + + cmd->queue = queue; + rsp = nvmet_rdma_get_rsp(queue); ++ if (unlikely(!rsp)) { ++ /* ++ * we get here only under memory pressure, ++ * silently drop and have the host retry ++ * as we can't even fail it. ++ */ ++ nvmet_rdma_post_recv(queue->dev, cmd); ++ return; ++ } + rsp->queue = queue; + rsp->cmd = cmd; + rsp->flags = 0; +diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c +index ffdb78421a25..b0f0d4e86f67 100644 +--- a/drivers/s390/net/qeth_core_main.c ++++ b/drivers/s390/net/qeth_core_main.c +@@ -25,6 +25,7 @@ + #include + #include + #include ++#include + + #include + #include +@@ -4738,7 +4739,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata) + + priv.buffer_len = oat_data.buffer_len; + priv.response_len = 0; +- priv.buffer = kzalloc(oat_data.buffer_len, GFP_KERNEL); ++ priv.buffer = vzalloc(oat_data.buffer_len); + if (!priv.buffer) { + rc = -ENOMEM; + goto out; +@@ -4779,7 +4780,7 @@ static int qeth_query_oat_command(struct qeth_card *card, char __user *udata) + rc = -EFAULT; + + out_free: +- kfree(priv.buffer); ++ vfree(priv.buffer); + out: + return rc; + } +diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c +index 2487f0aeb165..3bef60ae0480 100644 +--- a/drivers/s390/net/qeth_l2_main.c ++++ b/drivers/s390/net/qeth_l2_main.c +@@ -425,7 +425,7 @@ static int qeth_l2_process_inbound_buffer(struct qeth_card *card, + default: + dev_kfree_skb_any(skb); + QETH_CARD_TEXT(card, 3, "inbunkno"); +- 
QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN); ++ QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr)); + continue; + } + work_done++; +diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c +index 5905dc63e256..3ea840542767 100644 +--- a/drivers/s390/net/qeth_l3_main.c ++++ b/drivers/s390/net/qeth_l3_main.c +@@ -1390,7 +1390,7 @@ static int qeth_l3_process_inbound_buffer(struct qeth_card *card, + default: + dev_kfree_skb_any(skb); + QETH_CARD_TEXT(card, 3, "inbunkno"); +- QETH_DBF_HEX(CTRL, 3, hdr, QETH_DBF_CTRL_LEN); ++ QETH_DBF_HEX(CTRL, 3, hdr, sizeof(*hdr)); + continue; + } + work_done++; +diff --git a/drivers/scsi/aacraid/aacraid.h b/drivers/scsi/aacraid/aacraid.h +index 29bf1e60f542..39eb415987fc 100644 +--- a/drivers/scsi/aacraid/aacraid.h ++++ b/drivers/scsi/aacraid/aacraid.h +@@ -1346,7 +1346,7 @@ struct fib { + struct aac_hba_map_info { + __le32 rmw_nexus; /* nexus for native HBA devices */ + u8 devtype; /* device type */ +- u8 reset_state; /* 0 - no reset, 1..x - */ ++ s8 reset_state; /* 0 - no reset, 1..x - */ + /* after xth TM LUN reset */ + u16 qd_limit; + u32 scan_counter; +diff --git a/drivers/scsi/csiostor/csio_hw.c b/drivers/scsi/csiostor/csio_hw.c +index a10cf25ee7f9..e4baf04ec5ea 100644 +--- a/drivers/scsi/csiostor/csio_hw.c ++++ b/drivers/scsi/csiostor/csio_hw.c +@@ -1512,6 +1512,46 @@ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16) + return caps32; + } + ++/** ++ * fwcaps32_to_caps16 - convert 32-bit Port Capabilities to 16-bits ++ * @caps32: a 32-bit Port Capabilities value ++ * ++ * Returns the equivalent 16-bit Port Capabilities value. Note that ++ * not all 32-bit Port Capabilities can be represented in the 16-bit ++ * Port Capabilities and some fields/values may not make it. 
++ */ ++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32) ++{ ++ fw_port_cap16_t caps16 = 0; ++ ++ #define CAP32_TO_CAP16(__cap) \ ++ do { \ ++ if (caps32 & FW_PORT_CAP32_##__cap) \ ++ caps16 |= FW_PORT_CAP_##__cap; \ ++ } while (0) ++ ++ CAP32_TO_CAP16(SPEED_100M); ++ CAP32_TO_CAP16(SPEED_1G); ++ CAP32_TO_CAP16(SPEED_10G); ++ CAP32_TO_CAP16(SPEED_25G); ++ CAP32_TO_CAP16(SPEED_40G); ++ CAP32_TO_CAP16(SPEED_100G); ++ CAP32_TO_CAP16(FC_RX); ++ CAP32_TO_CAP16(FC_TX); ++ CAP32_TO_CAP16(802_3_PAUSE); ++ CAP32_TO_CAP16(802_3_ASM_DIR); ++ CAP32_TO_CAP16(ANEG); ++ CAP32_TO_CAP16(FORCE_PAUSE); ++ CAP32_TO_CAP16(MDIAUTO); ++ CAP32_TO_CAP16(MDISTRAIGHT); ++ CAP32_TO_CAP16(FEC_RS); ++ CAP32_TO_CAP16(FEC_BASER_RS); ++ ++ #undef CAP32_TO_CAP16 ++ ++ return caps16; ++} ++ + /** + * lstatus_to_fwcap - translate old lstatus to 32-bit Port Capabilities + * @lstatus: old FW_PORT_ACTION_GET_PORT_INFO lstatus value +@@ -1670,7 +1710,7 @@ csio_enable_ports(struct csio_hw *hw) + val = 1; + + csio_mb_params(hw, mbp, CSIO_MB_DEFAULT_TMO, +- hw->pfn, 0, 1, ¶m, &val, false, ++ hw->pfn, 0, 1, ¶m, &val, true, + NULL); + + if (csio_mb_issue(hw, mbp)) { +@@ -1680,16 +1720,9 @@ csio_enable_ports(struct csio_hw *hw) + return -EINVAL; + } + +- csio_mb_process_read_params_rsp(hw, mbp, &retval, 1, +- &val); +- if (retval != FW_SUCCESS) { +- csio_err(hw, "FW_PARAMS_CMD(r) port:%d failed: 0x%x\n", +- portid, retval); +- mempool_free(mbp, hw->mb_mempool); +- return -EINVAL; +- } +- +- fw_caps = val; ++ csio_mb_process_read_params_rsp(hw, mbp, &retval, ++ 0, NULL); ++ fw_caps = retval ? 
FW_CAPS16 : FW_CAPS32; + } + + /* Read PORT information */ +@@ -2275,8 +2308,8 @@ bye: + } + + /* +- * Returns -EINVAL if attempts to flash the firmware failed +- * else returns 0, ++ * Returns -EINVAL if attempts to flash the firmware failed, ++ * -ENOMEM if memory allocation failed else returns 0, + * if flashing was not attempted because the card had the + * latest firmware ECANCELED is returned + */ +@@ -2304,6 +2337,13 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset) + return -EINVAL; + } + ++ /* allocate memory to read the header of the firmware on the ++ * card ++ */ ++ card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL); ++ if (!card_fw) ++ return -ENOMEM; ++ + if (csio_is_t5(pci_dev->device & CSIO_HW_CHIP_MASK)) + fw_bin_file = FW_FNAME_T5; + else +@@ -2317,11 +2357,6 @@ csio_hw_flash_fw(struct csio_hw *hw, int *reset) + fw_size = fw->size; + } + +- /* allocate memory to read the header of the firmware on the +- * card +- */ +- card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL); +- + /* upgrade FW logic */ + ret = csio_hw_prep_fw(hw, fw_info, fw_data, fw_size, card_fw, + hw->fw_state, reset); +diff --git a/drivers/scsi/csiostor/csio_hw.h b/drivers/scsi/csiostor/csio_hw.h +index 9e73ef771eb7..e351af6e7c81 100644 +--- a/drivers/scsi/csiostor/csio_hw.h ++++ b/drivers/scsi/csiostor/csio_hw.h +@@ -639,6 +639,7 @@ int csio_handle_intr_status(struct csio_hw *, unsigned int, + + fw_port_cap32_t fwcap_to_fwspeed(fw_port_cap32_t acaps); + fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16); ++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32); + fw_port_cap32_t lstatus_to_fwcap(u32 lstatus); + + int csio_hw_start(struct csio_hw *); +diff --git a/drivers/scsi/csiostor/csio_mb.c b/drivers/scsi/csiostor/csio_mb.c +index c026417269c3..6f13673d6aa0 100644 +--- a/drivers/scsi/csiostor/csio_mb.c ++++ b/drivers/scsi/csiostor/csio_mb.c +@@ -368,7 +368,7 @@ csio_mb_port(struct csio_hw *hw, struct csio_mb *mbp, uint32_t tmo, + FW_CMD_LEN16_V(sizeof(*cmdp) / 16)); + 
+ if (fw_caps == FW_CAPS16) +- cmdp->u.l1cfg.rcap = cpu_to_be32(fc); ++ cmdp->u.l1cfg.rcap = cpu_to_be32(fwcaps32_to_caps16(fc)); + else + cmdp->u.l1cfg32.rcap32 = cpu_to_be32(fc); + } +@@ -395,8 +395,8 @@ csio_mb_process_read_port_rsp(struct csio_hw *hw, struct csio_mb *mbp, + *pcaps = fwcaps16_to_caps32(ntohs(rsp->u.info.pcap)); + *acaps = fwcaps16_to_caps32(ntohs(rsp->u.info.acap)); + } else { +- *pcaps = ntohs(rsp->u.info32.pcaps32); +- *acaps = ntohs(rsp->u.info32.acaps32); ++ *pcaps = be32_to_cpu(rsp->u.info32.pcaps32); ++ *acaps = be32_to_cpu(rsp->u.info32.acaps32); + } + } + } +diff --git a/drivers/scsi/qedi/qedi.h b/drivers/scsi/qedi/qedi.h +index fc3babc15fa3..a6f96b35e971 100644 +--- a/drivers/scsi/qedi/qedi.h ++++ b/drivers/scsi/qedi/qedi.h +@@ -77,6 +77,11 @@ enum qedi_nvm_tgts { + QEDI_NVM_TGT_SEC, + }; + ++struct qedi_nvm_iscsi_image { ++ struct nvm_iscsi_cfg iscsi_cfg; ++ u32 crc; ++}; ++ + struct qedi_uio_ctrl { + /* meta data */ + u32 uio_hsi_version; +@@ -294,7 +299,7 @@ struct qedi_ctx { + void *bdq_pbl_list; + dma_addr_t bdq_pbl_list_dma; + u8 bdq_pbl_list_num_entries; +- struct nvm_iscsi_cfg *iscsi_cfg; ++ struct qedi_nvm_iscsi_image *iscsi_image; + dma_addr_t nvm_buf_dma; + void __iomem *bdq_primary_prod; + void __iomem *bdq_secondary_prod; +diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c +index cff83b9457f7..3e18a68c2b03 100644 +--- a/drivers/scsi/qedi/qedi_main.c ++++ b/drivers/scsi/qedi/qedi_main.c +@@ -1346,23 +1346,26 @@ exit_setup_int: + + static void qedi_free_nvm_iscsi_cfg(struct qedi_ctx *qedi) + { +- if (qedi->iscsi_cfg) ++ if (qedi->iscsi_image) + dma_free_coherent(&qedi->pdev->dev, +- sizeof(struct nvm_iscsi_cfg), +- qedi->iscsi_cfg, qedi->nvm_buf_dma); ++ sizeof(struct qedi_nvm_iscsi_image), ++ qedi->iscsi_image, qedi->nvm_buf_dma); + } + + static int qedi_alloc_nvm_iscsi_cfg(struct qedi_ctx *qedi) + { +- qedi->iscsi_cfg = dma_zalloc_coherent(&qedi->pdev->dev, +- sizeof(struct nvm_iscsi_cfg), +- 
&qedi->nvm_buf_dma, GFP_KERNEL); +- if (!qedi->iscsi_cfg) { ++ struct qedi_nvm_iscsi_image nvm_image; ++ ++ qedi->iscsi_image = dma_zalloc_coherent(&qedi->pdev->dev, ++ sizeof(nvm_image), ++ &qedi->nvm_buf_dma, ++ GFP_KERNEL); ++ if (!qedi->iscsi_image) { + QEDI_ERR(&qedi->dbg_ctx, "Could not allocate NVM BUF.\n"); + return -ENOMEM; + } + QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, +- "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_cfg, ++ "NVM BUF addr=0x%p dma=0x%llx.\n", qedi->iscsi_image, + qedi->nvm_buf_dma); + + return 0; +@@ -1905,7 +1908,7 @@ qedi_get_nvram_block(struct qedi_ctx *qedi) + struct nvm_iscsi_block *block; + + pf = qedi->dev_info.common.abs_pf_id; +- block = &qedi->iscsi_cfg->block[0]; ++ block = &qedi->iscsi_image->iscsi_cfg.block[0]; + for (i = 0; i < NUM_OF_ISCSI_PF_SUPPORTED; i++, block++) { + flags = ((block->id) & NVM_ISCSI_CFG_BLK_CTRL_FLAG_MASK) >> + NVM_ISCSI_CFG_BLK_CTRL_FLAG_OFFSET; +@@ -2194,15 +2197,14 @@ static void qedi_boot_release(void *data) + static int qedi_get_boot_info(struct qedi_ctx *qedi) + { + int ret = 1; +- u16 len; +- +- len = sizeof(struct nvm_iscsi_cfg); ++ struct qedi_nvm_iscsi_image nvm_image; + + QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO, + "Get NVM iSCSI CFG image\n"); + ret = qedi_ops->common->nvm_get_image(qedi->cdev, + QED_NVM_IMAGE_ISCSI_CFG, +- (char *)qedi->iscsi_cfg, len); ++ (char *)qedi->iscsi_image, ++ sizeof(nvm_image)); + if (ret) + QEDI_ERR(&qedi->dbg_ctx, + "Could not get NVM image. 
ret = %d\n", ret); +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c +index 8e223799347a..a4ecc9d77624 100644 +--- a/drivers/target/iscsi/iscsi_target.c ++++ b/drivers/target/iscsi/iscsi_target.c +@@ -4211,22 +4211,15 @@ int iscsit_close_connection( + crypto_free_ahash(tfm); + } + +- free_cpumask_var(conn->conn_cpumask); +- +- kfree(conn->conn_ops); +- conn->conn_ops = NULL; +- + if (conn->sock) + sock_release(conn->sock); + + if (conn->conn_transport->iscsit_free_conn) + conn->conn_transport->iscsit_free_conn(conn); + +- iscsit_put_transport(conn->conn_transport); +- + pr_debug("Moving to TARG_CONN_STATE_FREE.\n"); + conn->conn_state = TARG_CONN_STATE_FREE; +- kfree(conn); ++ iscsit_free_conn(conn); + + spin_lock_bh(&sess->conn_lock); + atomic_dec(&sess->nconn); +diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c +index 68b3eb00a9d0..2fda5b0664fd 100644 +--- a/drivers/target/iscsi/iscsi_target_login.c ++++ b/drivers/target/iscsi/iscsi_target_login.c +@@ -67,45 +67,10 @@ static struct iscsi_login *iscsi_login_init_conn(struct iscsi_conn *conn) + goto out_req_buf; + } + +- conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL); +- if (!conn->conn_ops) { +- pr_err("Unable to allocate memory for" +- " struct iscsi_conn_ops.\n"); +- goto out_rsp_buf; +- } +- +- init_waitqueue_head(&conn->queues_wq); +- INIT_LIST_HEAD(&conn->conn_list); +- INIT_LIST_HEAD(&conn->conn_cmd_list); +- INIT_LIST_HEAD(&conn->immed_queue_list); +- INIT_LIST_HEAD(&conn->response_queue_list); +- init_completion(&conn->conn_post_wait_comp); +- init_completion(&conn->conn_wait_comp); +- init_completion(&conn->conn_wait_rcfr_comp); +- init_completion(&conn->conn_waiting_on_uc_comp); +- init_completion(&conn->conn_logout_comp); +- init_completion(&conn->rx_half_close_comp); +- init_completion(&conn->tx_half_close_comp); +- init_completion(&conn->rx_login_comp); +- spin_lock_init(&conn->cmd_lock); +- 
spin_lock_init(&conn->conn_usage_lock); +- spin_lock_init(&conn->immed_queue_lock); +- spin_lock_init(&conn->nopin_timer_lock); +- spin_lock_init(&conn->response_queue_lock); +- spin_lock_init(&conn->state_lock); +- +- if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) { +- pr_err("Unable to allocate conn->conn_cpumask\n"); +- goto out_conn_ops; +- } + conn->conn_login = login; + + return login; + +-out_conn_ops: +- kfree(conn->conn_ops); +-out_rsp_buf: +- kfree(login->rsp_buf); + out_req_buf: + kfree(login->req_buf); + out_login: +@@ -310,11 +275,9 @@ static int iscsi_login_zero_tsih_s1( + return -ENOMEM; + } + +- ret = iscsi_login_set_conn_values(sess, conn, pdu->cid); +- if (unlikely(ret)) { +- kfree(sess); +- return ret; +- } ++ if (iscsi_login_set_conn_values(sess, conn, pdu->cid)) ++ goto free_sess; ++ + sess->init_task_tag = pdu->itt; + memcpy(&sess->isid, pdu->isid, 6); + sess->exp_cmd_sn = be32_to_cpu(pdu->cmdsn); +@@ -1157,6 +1120,75 @@ iscsit_conn_set_transport(struct iscsi_conn *conn, struct iscsit_transport *t) + return 0; + } + ++static struct iscsi_conn *iscsit_alloc_conn(struct iscsi_np *np) ++{ ++ struct iscsi_conn *conn; ++ ++ conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL); ++ if (!conn) { ++ pr_err("Could not allocate memory for new connection\n"); ++ return NULL; ++ } ++ pr_debug("Moving to TARG_CONN_STATE_FREE.\n"); ++ conn->conn_state = TARG_CONN_STATE_FREE; ++ ++ init_waitqueue_head(&conn->queues_wq); ++ INIT_LIST_HEAD(&conn->conn_list); ++ INIT_LIST_HEAD(&conn->conn_cmd_list); ++ INIT_LIST_HEAD(&conn->immed_queue_list); ++ INIT_LIST_HEAD(&conn->response_queue_list); ++ init_completion(&conn->conn_post_wait_comp); ++ init_completion(&conn->conn_wait_comp); ++ init_completion(&conn->conn_wait_rcfr_comp); ++ init_completion(&conn->conn_waiting_on_uc_comp); ++ init_completion(&conn->conn_logout_comp); ++ init_completion(&conn->rx_half_close_comp); ++ init_completion(&conn->tx_half_close_comp); ++ 
init_completion(&conn->rx_login_comp); ++ spin_lock_init(&conn->cmd_lock); ++ spin_lock_init(&conn->conn_usage_lock); ++ spin_lock_init(&conn->immed_queue_lock); ++ spin_lock_init(&conn->nopin_timer_lock); ++ spin_lock_init(&conn->response_queue_lock); ++ spin_lock_init(&conn->state_lock); ++ ++ timer_setup(&conn->nopin_response_timer, ++ iscsit_handle_nopin_response_timeout, 0); ++ timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0); ++ ++ if (iscsit_conn_set_transport(conn, np->np_transport) < 0) ++ goto free_conn; ++ ++ conn->conn_ops = kzalloc(sizeof(struct iscsi_conn_ops), GFP_KERNEL); ++ if (!conn->conn_ops) { ++ pr_err("Unable to allocate memory for struct iscsi_conn_ops.\n"); ++ goto put_transport; ++ } ++ ++ if (!zalloc_cpumask_var(&conn->conn_cpumask, GFP_KERNEL)) { ++ pr_err("Unable to allocate conn->conn_cpumask\n"); ++ goto free_mask; ++ } ++ ++ return conn; ++ ++free_mask: ++ free_cpumask_var(conn->conn_cpumask); ++put_transport: ++ iscsit_put_transport(conn->conn_transport); ++free_conn: ++ kfree(conn); ++ return NULL; ++} ++ ++void iscsit_free_conn(struct iscsi_conn *conn) ++{ ++ free_cpumask_var(conn->conn_cpumask); ++ kfree(conn->conn_ops); ++ iscsit_put_transport(conn->conn_transport); ++ kfree(conn); ++} ++ + void iscsi_target_login_sess_out(struct iscsi_conn *conn, + struct iscsi_np *np, bool zero_tsih, bool new_sess) + { +@@ -1210,10 +1242,6 @@ old_sess_out: + crypto_free_ahash(tfm); + } + +- free_cpumask_var(conn->conn_cpumask); +- +- kfree(conn->conn_ops); +- + if (conn->param_list) { + iscsi_release_param_list(conn->param_list); + conn->param_list = NULL; +@@ -1231,8 +1259,7 @@ old_sess_out: + if (conn->conn_transport->iscsit_free_conn) + conn->conn_transport->iscsit_free_conn(conn); + +- iscsit_put_transport(conn->conn_transport); +- kfree(conn); ++ iscsit_free_conn(conn); + } + + static int __iscsi_target_login_thread(struct iscsi_np *np) +@@ -1262,31 +1289,16 @@ static int __iscsi_target_login_thread(struct iscsi_np *np) + } 
+ spin_unlock_bh(&np->np_thread_lock); + +- conn = kzalloc(sizeof(struct iscsi_conn), GFP_KERNEL); ++ conn = iscsit_alloc_conn(np); + if (!conn) { +- pr_err("Could not allocate memory for" +- " new connection\n"); + /* Get another socket */ + return 1; + } +- pr_debug("Moving to TARG_CONN_STATE_FREE.\n"); +- conn->conn_state = TARG_CONN_STATE_FREE; +- +- timer_setup(&conn->nopin_response_timer, +- iscsit_handle_nopin_response_timeout, 0); +- timer_setup(&conn->nopin_timer, iscsit_handle_nopin_timeout, 0); +- +- if (iscsit_conn_set_transport(conn, np->np_transport) < 0) { +- kfree(conn); +- return 1; +- } + + rc = np->np_transport->iscsit_accept_np(np, conn); + if (rc == -ENOSYS) { + complete(&np->np_restart_comp); +- iscsit_put_transport(conn->conn_transport); +- kfree(conn); +- conn = NULL; ++ iscsit_free_conn(conn); + goto exit; + } else if (rc < 0) { + spin_lock_bh(&np->np_thread_lock); +@@ -1294,17 +1306,13 @@ static int __iscsi_target_login_thread(struct iscsi_np *np) + np->np_thread_state = ISCSI_NP_THREAD_ACTIVE; + spin_unlock_bh(&np->np_thread_lock); + complete(&np->np_restart_comp); +- iscsit_put_transport(conn->conn_transport); +- kfree(conn); +- conn = NULL; ++ iscsit_free_conn(conn); + /* Get another socket */ + return 1; + } + spin_unlock_bh(&np->np_thread_lock); +- iscsit_put_transport(conn->conn_transport); +- kfree(conn); +- conn = NULL; +- goto out; ++ iscsit_free_conn(conn); ++ return 1; + } + /* + * Perform the remaining iSCSI connection initialization items.. 
+@@ -1454,7 +1462,6 @@ old_sess_out: + tpg_np = NULL; + } + +-out: + return 1; + + exit: +diff --git a/drivers/target/iscsi/iscsi_target_login.h b/drivers/target/iscsi/iscsi_target_login.h +index 74ac3abc44a0..3b8e3639ff5d 100644 +--- a/drivers/target/iscsi/iscsi_target_login.h ++++ b/drivers/target/iscsi/iscsi_target_login.h +@@ -19,7 +19,7 @@ extern int iscsi_target_setup_login_socket(struct iscsi_np *, + extern int iscsit_accept_np(struct iscsi_np *, struct iscsi_conn *); + extern int iscsit_get_login_rx(struct iscsi_conn *, struct iscsi_login *); + extern int iscsit_put_login_tx(struct iscsi_conn *, struct iscsi_login *, u32); +-extern void iscsit_free_conn(struct iscsi_np *, struct iscsi_conn *); ++extern void iscsit_free_conn(struct iscsi_conn *); + extern int iscsit_start_kthreads(struct iscsi_conn *); + extern void iscsi_post_login_handler(struct iscsi_np *, struct iscsi_conn *, u8); + extern void iscsi_target_login_sess_out(struct iscsi_conn *, struct iscsi_np *, +diff --git a/drivers/usb/gadget/udc/fotg210-udc.c b/drivers/usb/gadget/udc/fotg210-udc.c +index 53a48f561458..587c5037ff07 100644 +--- a/drivers/usb/gadget/udc/fotg210-udc.c ++++ b/drivers/usb/gadget/udc/fotg210-udc.c +@@ -1063,12 +1063,15 @@ static const struct usb_gadget_ops fotg210_gadget_ops = { + static int fotg210_udc_remove(struct platform_device *pdev) + { + struct fotg210_udc *fotg210 = platform_get_drvdata(pdev); ++ int i; + + usb_del_gadget_udc(&fotg210->gadget); + iounmap(fotg210->reg); + free_irq(platform_get_irq(pdev, 0), fotg210); + + fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req); ++ for (i = 0; i < FOTG210_MAX_NUM_EP; i++) ++ kfree(fotg210->ep[i]); + kfree(fotg210); + + return 0; +@@ -1099,7 +1102,7 @@ static int fotg210_udc_probe(struct platform_device *pdev) + /* initialize udc */ + fotg210 = kzalloc(sizeof(struct fotg210_udc), GFP_KERNEL); + if (fotg210 == NULL) +- goto err_alloc; ++ goto err; + + for (i = 0; i < FOTG210_MAX_NUM_EP; i++) { + _ep[i] = 
kzalloc(sizeof(struct fotg210_ep), GFP_KERNEL); +@@ -1111,7 +1114,7 @@ static int fotg210_udc_probe(struct platform_device *pdev) + fotg210->reg = ioremap(res->start, resource_size(res)); + if (fotg210->reg == NULL) { + pr_err("ioremap error.\n"); +- goto err_map; ++ goto err_alloc; + } + + spin_lock_init(&fotg210->lock); +@@ -1159,7 +1162,7 @@ static int fotg210_udc_probe(struct platform_device *pdev) + fotg210->ep0_req = fotg210_ep_alloc_request(&fotg210->ep[0]->ep, + GFP_KERNEL); + if (fotg210->ep0_req == NULL) +- goto err_req; ++ goto err_map; + + fotg210_init(fotg210); + +@@ -1187,12 +1190,14 @@ err_req: + fotg210_ep_free_request(&fotg210->ep[0]->ep, fotg210->ep0_req); + + err_map: +- if (fotg210->reg) +- iounmap(fotg210->reg); ++ iounmap(fotg210->reg); + + err_alloc: ++ for (i = 0; i < FOTG210_MAX_NUM_EP; i++) ++ kfree(fotg210->ep[i]); + kfree(fotg210); + ++err: + return ret; + } + +diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c +index c1b22fc64e38..b5a14caa9297 100644 +--- a/drivers/usb/host/xhci-plat.c ++++ b/drivers/usb/host/xhci-plat.c +@@ -152,7 +152,7 @@ static int xhci_plat_probe(struct platform_device *pdev) + { + const struct xhci_plat_priv *priv_match; + const struct hc_driver *driver; +- struct device *sysdev; ++ struct device *sysdev, *tmpdev; + struct xhci_hcd *xhci; + struct resource *res; + struct usb_hcd *hcd; +@@ -272,19 +272,24 @@ static int xhci_plat_probe(struct platform_device *pdev) + goto disable_clk; + } + +- if (device_property_read_bool(sysdev, "usb2-lpm-disable")) +- xhci->quirks |= XHCI_HW_LPM_DISABLE; ++ /* imod_interval is the interrupt moderation value in nanoseconds. 
*/ ++ xhci->imod_interval = 40000; + +- if (device_property_read_bool(sysdev, "usb3-lpm-capable")) +- xhci->quirks |= XHCI_LPM_SUPPORT; ++ /* Iterate over all parent nodes for finding quirks */ ++ for (tmpdev = &pdev->dev; tmpdev; tmpdev = tmpdev->parent) { + +- if (device_property_read_bool(&pdev->dev, "quirk-broken-port-ped")) +- xhci->quirks |= XHCI_BROKEN_PORT_PED; ++ if (device_property_read_bool(tmpdev, "usb2-lpm-disable")) ++ xhci->quirks |= XHCI_HW_LPM_DISABLE; + +- /* imod_interval is the interrupt moderation value in nanoseconds. */ +- xhci->imod_interval = 40000; +- device_property_read_u32(sysdev, "imod-interval-ns", +- &xhci->imod_interval); ++ if (device_property_read_bool(tmpdev, "usb3-lpm-capable")) ++ xhci->quirks |= XHCI_LPM_SUPPORT; ++ ++ if (device_property_read_bool(tmpdev, "quirk-broken-port-ped")) ++ xhci->quirks |= XHCI_BROKEN_PORT_PED; ++ ++ device_property_read_u32(tmpdev, "imod-interval-ns", ++ &xhci->imod_interval); ++ } + + hcd->usb_phy = devm_usb_get_phy_by_phandle(sysdev, "usb-phy", 0); + if (IS_ERR(hcd->usb_phy)) { +diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c +index 1232dd49556d..6d9fd5f64903 100644 +--- a/drivers/usb/misc/yurex.c ++++ b/drivers/usb/misc/yurex.c +@@ -413,6 +413,9 @@ static ssize_t yurex_read(struct file *file, char __user *buffer, size_t count, + spin_unlock_irqrestore(&dev->lock, flags); + mutex_unlock(&dev->io_mutex); + ++ if (WARN_ON_ONCE(len >= sizeof(in_buffer))) ++ return -EIO; ++ + return simple_read_from_buffer(buffer, count, ppos, in_buffer, len); + } + +diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c +index d4265c8ebb22..b1357aa4bc55 100644 +--- a/drivers/xen/cpu_hotplug.c ++++ b/drivers/xen/cpu_hotplug.c +@@ -19,15 +19,16 @@ static void enable_hotplug_cpu(int cpu) + + static void disable_hotplug_cpu(int cpu) + { +- if (cpu_online(cpu)) { +- lock_device_hotplug(); ++ if (!cpu_is_hotpluggable(cpu)) ++ return; ++ lock_device_hotplug(); ++ if (cpu_online(cpu)) + 
device_offline(get_cpu_device(cpu)); +- unlock_device_hotplug(); +- } +- if (cpu_present(cpu)) ++ if (!cpu_online(cpu) && cpu_present(cpu)) { + xen_arch_unregister_cpu(cpu); +- +- set_cpu_present(cpu, false); ++ set_cpu_present(cpu, false); ++ } ++ unlock_device_hotplug(); + } + + static int vcpu_online(unsigned int cpu) +diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c +index 08e4af04d6f2..e6c1934734b7 100644 +--- a/drivers/xen/events/events_base.c ++++ b/drivers/xen/events/events_base.c +@@ -138,7 +138,7 @@ static int set_evtchn_to_irq(unsigned evtchn, unsigned irq) + clear_evtchn_to_irq_row(row); + } + +- evtchn_to_irq[EVTCHN_ROW(evtchn)][EVTCHN_COL(evtchn)] = irq; ++ evtchn_to_irq[row][col] = irq; + return 0; + } + +diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c +index c93d8ef8df34..5bb01a62f214 100644 +--- a/drivers/xen/manage.c ++++ b/drivers/xen/manage.c +@@ -280,9 +280,11 @@ static void sysrq_handler(struct xenbus_watch *watch, const char *path, + /* + * The Xenstore watch fires directly after registering it and + * after a suspend/resume cycle. So ENOENT is no error but +- * might happen in those cases. ++ * might happen in those cases. ERANGE is observed when we get ++ * an empty value (''), this happens when we acknowledge the ++ * request by writing '\0' below. 
+ */ +- if (err != -ENOENT) ++ if (err != -ENOENT && err != -ERANGE) + pr_err("Error %d reading sysrq code in control/sysrq\n", + err); + xenbus_transaction_end(xbt, 1); +diff --git a/fs/afs/proc.c b/fs/afs/proc.c +index 0c3285c8db95..476dcbb79713 100644 +--- a/fs/afs/proc.c ++++ b/fs/afs/proc.c +@@ -98,13 +98,13 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size) + goto inval; + + args = strchr(name, ' '); +- if (!args) +- goto inval; +- do { +- *args++ = 0; +- } while(*args == ' '); +- if (!*args) +- goto inval; ++ if (args) { ++ do { ++ *args++ = 0; ++ } while(*args == ' '); ++ if (!*args) ++ goto inval; ++ } + + /* determine command to perform */ + _debug("cmd=%s name=%s args=%s", buf, name, args); +@@ -120,7 +120,6 @@ static int afs_proc_cells_write(struct file *file, char *buf, size_t size) + + if (test_and_set_bit(AFS_CELL_FL_NO_GC, &cell->flags)) + afs_put_cell(net, cell); +- printk("kAFS: Added new cell '%s'\n", name); + } else { + goto inval; + } +diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h +index 118346aceea9..663ce0518d27 100644 +--- a/fs/btrfs/ctree.h ++++ b/fs/btrfs/ctree.h +@@ -1277,6 +1277,7 @@ struct btrfs_root { + int send_in_progress; + struct btrfs_subvolume_writers *subv_writers; + atomic_t will_be_snapshotted; ++ atomic_t snapshot_force_cow; + + /* For qgroup metadata reserved space */ + spinlock_t qgroup_meta_rsv_lock; +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index dfed08e70ec1..891b1aab3480 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -1217,6 +1217,7 @@ static void __setup_root(struct btrfs_root *root, struct btrfs_fs_info *fs_info, + atomic_set(&root->log_batch, 0); + refcount_set(&root->refs, 1); + atomic_set(&root->will_be_snapshotted, 0); ++ atomic_set(&root->snapshot_force_cow, 0); + root->log_transid = 0; + root->log_transid_committed = -1; + root->last_log_commit = 0; +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 071d949f69ec..d3736fbf6774 100644 +--- 
a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -1275,7 +1275,7 @@ static noinline int run_delalloc_nocow(struct inode *inode, + u64 disk_num_bytes; + u64 ram_bytes; + int extent_type; +- int ret, err; ++ int ret; + int type; + int nocow; + int check_prev = 1; +@@ -1407,11 +1407,8 @@ next_slot: + * if there are pending snapshots for this root, + * we fall into common COW way. + */ +- if (!nolock) { +- err = btrfs_start_write_no_snapshotting(root); +- if (!err) +- goto out_check; +- } ++ if (!nolock && atomic_read(&root->snapshot_force_cow)) ++ goto out_check; + /* + * force cow if csum exists in the range. + * this ensure that csum for a given extent are +@@ -1420,9 +1417,6 @@ next_slot: + ret = csum_exist_in_range(fs_info, disk_bytenr, + num_bytes); + if (ret) { +- if (!nolock) +- btrfs_end_write_no_snapshotting(root); +- + /* + * ret could be -EIO if the above fails to read + * metadata. +@@ -1435,11 +1429,8 @@ next_slot: + WARN_ON_ONCE(nolock); + goto out_check; + } +- if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr)) { +- if (!nolock) +- btrfs_end_write_no_snapshotting(root); ++ if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr)) + goto out_check; +- } + nocow = 1; + } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) { + extent_end = found_key.offset + +@@ -1453,8 +1444,6 @@ next_slot: + out_check: + if (extent_end <= start) { + path->slots[0]++; +- if (!nolock && nocow) +- btrfs_end_write_no_snapshotting(root); + if (nocow) + btrfs_dec_nocow_writers(fs_info, disk_bytenr); + goto next_slot; +@@ -1476,8 +1465,6 @@ out_check: + end, page_started, nr_written, 1, + NULL); + if (ret) { +- if (!nolock && nocow) +- btrfs_end_write_no_snapshotting(root); + if (nocow) + btrfs_dec_nocow_writers(fs_info, + disk_bytenr); +@@ -1497,8 +1484,6 @@ out_check: + ram_bytes, BTRFS_COMPRESS_NONE, + BTRFS_ORDERED_PREALLOC); + if (IS_ERR(em)) { +- if (!nolock && nocow) +- btrfs_end_write_no_snapshotting(root); + if (nocow) + btrfs_dec_nocow_writers(fs_info, + disk_bytenr); +@@ 
-1537,8 +1522,6 @@ out_check: + EXTENT_CLEAR_DATA_RESV, + PAGE_UNLOCK | PAGE_SET_PRIVATE2); + +- if (!nolock && nocow) +- btrfs_end_write_no_snapshotting(root); + cur_offset = extent_end; + + /* +diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c +index f3d6be0c657b..ef7159646615 100644 +--- a/fs/btrfs/ioctl.c ++++ b/fs/btrfs/ioctl.c +@@ -761,6 +761,7 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir, + struct btrfs_pending_snapshot *pending_snapshot; + struct btrfs_trans_handle *trans; + int ret; ++ bool snapshot_force_cow = false; + + if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state)) + return -EINVAL; +@@ -777,6 +778,11 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir, + goto free_pending; + } + ++ /* ++ * Force new buffered writes to reserve space even when NOCOW is ++ * possible. This is to avoid later writeback (running dealloc) to ++ * fallback to COW mode and unexpectedly fail with ENOSPC. ++ */ + atomic_inc(&root->will_be_snapshotted); + smp_mb__after_atomic(); + /* wait for no snapshot writes */ +@@ -787,6 +793,14 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir, + if (ret) + goto dec_and_free; + ++ /* ++ * All previous writes have started writeback in NOCOW mode, so now ++ * we force future writes to fallback to COW mode during snapshot ++ * creation. 
++ */ ++ atomic_inc(&root->snapshot_force_cow); ++ snapshot_force_cow = true; ++ + btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1); + + btrfs_init_block_rsv(&pending_snapshot->block_rsv, +@@ -851,6 +865,8 @@ static int create_snapshot(struct btrfs_root *root, struct inode *dir, + fail: + btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv); + dec_and_free: ++ if (snapshot_force_cow) ++ atomic_dec(&root->snapshot_force_cow); + if (atomic_dec_and_test(&root->will_be_snapshotted)) + wake_up_var(&root->will_be_snapshotted); + free_pending: +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 5304b8d6ceb8..1a22c0ecaf67 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -4584,7 +4584,12 @@ again: + + /* Now btrfs_update_device() will change the on-disk size. */ + ret = btrfs_update_device(trans, device); +- btrfs_end_transaction(trans); ++ if (ret < 0) { ++ btrfs_abort_transaction(trans, ret); ++ btrfs_end_transaction(trans); ++ } else { ++ ret = btrfs_commit_transaction(trans); ++ } + done: + btrfs_free_path(path); + if (ret) { +diff --git a/fs/ceph/super.c b/fs/ceph/super.c +index 95a3b3ac9b6e..60f81ac369b5 100644 +--- a/fs/ceph/super.c ++++ b/fs/ceph/super.c +@@ -603,6 +603,8 @@ static int extra_mon_dispatch(struct ceph_client *client, struct ceph_msg *msg) + + /* + * create a new fs client ++ * ++ * Success or not, this function consumes @fsopt and @opt. 
+ */ + static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt, + struct ceph_options *opt) +@@ -610,17 +612,20 @@ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt, + struct ceph_fs_client *fsc; + int page_count; + size_t size; +- int err = -ENOMEM; ++ int err; + + fsc = kzalloc(sizeof(*fsc), GFP_KERNEL); +- if (!fsc) +- return ERR_PTR(-ENOMEM); ++ if (!fsc) { ++ err = -ENOMEM; ++ goto fail; ++ } + + fsc->client = ceph_create_client(opt, fsc); + if (IS_ERR(fsc->client)) { + err = PTR_ERR(fsc->client); + goto fail; + } ++ opt = NULL; /* fsc->client now owns this */ + + fsc->client->extra_mon_dispatch = extra_mon_dispatch; + fsc->client->osdc.abort_on_full = true; +@@ -678,6 +683,9 @@ fail_client: + ceph_destroy_client(fsc->client); + fail: + kfree(fsc); ++ if (opt) ++ ceph_destroy_options(opt); ++ destroy_mount_options(fsopt); + return ERR_PTR(err); + } + +@@ -1042,8 +1050,6 @@ static struct dentry *ceph_mount(struct file_system_type *fs_type, + fsc = create_fs_client(fsopt, opt); + if (IS_ERR(fsc)) { + res = ERR_CAST(fsc); +- destroy_mount_options(fsopt); +- ceph_destroy_options(opt); + goto out_final; + } + +diff --git a/fs/cifs/cifs_unicode.c b/fs/cifs/cifs_unicode.c +index b380e0871372..a2b2355e7f01 100644 +--- a/fs/cifs/cifs_unicode.c ++++ b/fs/cifs/cifs_unicode.c +@@ -105,9 +105,6 @@ convert_sfm_char(const __u16 src_char, char *target) + case SFM_LESSTHAN: + *target = '<'; + break; +- case SFM_SLASH: +- *target = '\\'; +- break; + case SFM_SPACE: + *target = ' '; + break; +diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c +index 93408eab92e7..f5baf777564c 100644 +--- a/fs/cifs/cifssmb.c ++++ b/fs/cifs/cifssmb.c +@@ -601,10 +601,15 @@ CIFSSMBNegotiate(const unsigned int xid, struct cifs_ses *ses) + } + + count = 0; ++ /* ++ * We know that all the name entries in the protocols array ++ * are short (< 16 bytes anyway) and are NUL terminated. 
++ */ + for (i = 0; i < CIFS_NUM_PROT; i++) { +- strncpy(pSMB->DialectsArray+count, protocols[i].name, 16); +- count += strlen(protocols[i].name) + 1; +- /* null at end of source and target buffers anyway */ ++ size_t len = strlen(protocols[i].name) + 1; ++ ++ memcpy(pSMB->DialectsArray+count, protocols[i].name, len); ++ count += len; + } + inc_rfc1001_len(pSMB, count); + pSMB->ByteCount = cpu_to_le16(count); +diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c +index 53e8362cbc4a..6737f54d9a34 100644 +--- a/fs/cifs/misc.c ++++ b/fs/cifs/misc.c +@@ -404,9 +404,17 @@ is_valid_oplock_break(char *buffer, struct TCP_Server_Info *srv) + (struct smb_com_transaction_change_notify_rsp *)buf; + struct file_notify_information *pnotify; + __u32 data_offset = 0; ++ size_t len = srv->total_read - sizeof(pSMBr->hdr.smb_buf_length); ++ + if (get_bcc(buf) > sizeof(struct file_notify_information)) { + data_offset = le32_to_cpu(pSMBr->DataOffset); + ++ if (data_offset > ++ len - sizeof(struct file_notify_information)) { ++ cifs_dbg(FYI, "invalid data_offset %u\n", ++ data_offset); ++ return true; ++ } + pnotify = (struct file_notify_information *) + ((char *)&pSMBr->hdr.Protocol + data_offset); + cifs_dbg(FYI, "dnotify on %s Action: 0x%x\n", +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index 5ecbc99f46e4..abb54b852bdc 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -1484,7 +1484,7 @@ smb2_query_dir_first(const unsigned int xid, struct cifs_tcon *tcon, + } + + srch_inf->entries_in_buffer = 0; +- srch_inf->index_of_last_entry = 0; ++ srch_inf->index_of_last_entry = 2; + + rc = SMB2_query_directory(xid, tcon, fid->persistent_fid, + fid->volatile_fid, 0, srch_inf); +diff --git a/fs/dcache.c b/fs/dcache.c +index d19a0dc46c04..baa89f092a2d 100644 +--- a/fs/dcache.c ++++ b/fs/dcache.c +@@ -1890,7 +1890,7 @@ void d_instantiate_new(struct dentry *entry, struct inode *inode) + spin_lock(&inode->i_lock); + __d_instantiate(entry, inode); + WARN_ON(!(inode->i_state & I_NEW)); +- 
inode->i_state &= ~I_NEW; ++ inode->i_state &= ~I_NEW & ~I_CREATING; + smp_mb(); + wake_up_bit(&inode->i_state, __I_NEW); + spin_unlock(&inode->i_lock); +diff --git a/fs/inode.c b/fs/inode.c +index 8c86c809ca17..a06de4454232 100644 +--- a/fs/inode.c ++++ b/fs/inode.c +@@ -804,6 +804,10 @@ repeat: + __wait_on_freeing_inode(inode); + goto repeat; + } ++ if (unlikely(inode->i_state & I_CREATING)) { ++ spin_unlock(&inode->i_lock); ++ return ERR_PTR(-ESTALE); ++ } + __iget(inode); + spin_unlock(&inode->i_lock); + return inode; +@@ -831,6 +835,10 @@ repeat: + __wait_on_freeing_inode(inode); + goto repeat; + } ++ if (unlikely(inode->i_state & I_CREATING)) { ++ spin_unlock(&inode->i_lock); ++ return ERR_PTR(-ESTALE); ++ } + __iget(inode); + spin_unlock(&inode->i_lock); + return inode; +@@ -961,13 +969,26 @@ void unlock_new_inode(struct inode *inode) + lockdep_annotate_inode_mutex_key(inode); + spin_lock(&inode->i_lock); + WARN_ON(!(inode->i_state & I_NEW)); +- inode->i_state &= ~I_NEW; ++ inode->i_state &= ~I_NEW & ~I_CREATING; + smp_mb(); + wake_up_bit(&inode->i_state, __I_NEW); + spin_unlock(&inode->i_lock); + } + EXPORT_SYMBOL(unlock_new_inode); + ++void discard_new_inode(struct inode *inode) ++{ ++ lockdep_annotate_inode_mutex_key(inode); ++ spin_lock(&inode->i_lock); ++ WARN_ON(!(inode->i_state & I_NEW)); ++ inode->i_state &= ~I_NEW; ++ smp_mb(); ++ wake_up_bit(&inode->i_state, __I_NEW); ++ spin_unlock(&inode->i_lock); ++ iput(inode); ++} ++EXPORT_SYMBOL(discard_new_inode); ++ + /** + * lock_two_nondirectories - take two i_mutexes on non-directory objects + * +@@ -1029,6 +1050,7 @@ struct inode *inode_insert5(struct inode *inode, unsigned long hashval, + { + struct hlist_head *head = inode_hashtable + hash(inode->i_sb, hashval); + struct inode *old; ++ bool creating = inode->i_state & I_CREATING; + + again: + spin_lock(&inode_hash_lock); +@@ -1039,6 +1061,8 @@ again: + * Use the old inode instead of the preallocated one. 
+ */ + spin_unlock(&inode_hash_lock); ++ if (IS_ERR(old)) ++ return NULL; + wait_on_inode(old); + if (unlikely(inode_unhashed(old))) { + iput(old); +@@ -1060,6 +1084,8 @@ again: + inode->i_state |= I_NEW; + hlist_add_head(&inode->i_hash, head); + spin_unlock(&inode->i_lock); ++ if (!creating) ++ inode_sb_list_add(inode); + unlock: + spin_unlock(&inode_hash_lock); + +@@ -1094,12 +1120,13 @@ struct inode *iget5_locked(struct super_block *sb, unsigned long hashval, + struct inode *inode = ilookup5(sb, hashval, test, data); + + if (!inode) { +- struct inode *new = new_inode(sb); ++ struct inode *new = alloc_inode(sb); + + if (new) { ++ new->i_state = 0; + inode = inode_insert5(new, hashval, test, set, data); + if (unlikely(inode != new)) +- iput(new); ++ destroy_inode(new); + } + } + return inode; +@@ -1128,6 +1155,8 @@ again: + inode = find_inode_fast(sb, head, ino); + spin_unlock(&inode_hash_lock); + if (inode) { ++ if (IS_ERR(inode)) ++ return NULL; + wait_on_inode(inode); + if (unlikely(inode_unhashed(inode))) { + iput(inode); +@@ -1165,6 +1194,8 @@ again: + */ + spin_unlock(&inode_hash_lock); + destroy_inode(inode); ++ if (IS_ERR(old)) ++ return NULL; + inode = old; + wait_on_inode(inode); + if (unlikely(inode_unhashed(inode))) { +@@ -1282,7 +1313,7 @@ struct inode *ilookup5_nowait(struct super_block *sb, unsigned long hashval, + inode = find_inode(sb, head, test, data); + spin_unlock(&inode_hash_lock); + +- return inode; ++ return IS_ERR(inode) ? 
NULL : inode; + } + EXPORT_SYMBOL(ilookup5_nowait); + +@@ -1338,6 +1369,8 @@ again: + spin_unlock(&inode_hash_lock); + + if (inode) { ++ if (IS_ERR(inode)) ++ return NULL; + wait_on_inode(inode); + if (unlikely(inode_unhashed(inode))) { + iput(inode); +@@ -1421,12 +1454,17 @@ int insert_inode_locked(struct inode *inode) + } + if (likely(!old)) { + spin_lock(&inode->i_lock); +- inode->i_state |= I_NEW; ++ inode->i_state |= I_NEW | I_CREATING; + hlist_add_head(&inode->i_hash, head); + spin_unlock(&inode->i_lock); + spin_unlock(&inode_hash_lock); + return 0; + } ++ if (unlikely(old->i_state & I_CREATING)) { ++ spin_unlock(&old->i_lock); ++ spin_unlock(&inode_hash_lock); ++ return -EBUSY; ++ } + __iget(old); + spin_unlock(&old->i_lock); + spin_unlock(&inode_hash_lock); +@@ -1443,7 +1481,10 @@ EXPORT_SYMBOL(insert_inode_locked); + int insert_inode_locked4(struct inode *inode, unsigned long hashval, + int (*test)(struct inode *, void *), void *data) + { +- struct inode *old = inode_insert5(inode, hashval, test, NULL, data); ++ struct inode *old; ++ ++ inode->i_state |= I_CREATING; ++ old = inode_insert5(inode, hashval, test, NULL, data); + + if (old != inode) { + iput(old); +diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c +index f174397b63a0..ababdbfab537 100644 +--- a/fs/notify/fsnotify.c ++++ b/fs/notify/fsnotify.c +@@ -351,16 +351,9 @@ int fsnotify(struct inode *to_tell, __u32 mask, const void *data, int data_is, + + iter_info.srcu_idx = srcu_read_lock(&fsnotify_mark_srcu); + +- if ((mask & FS_MODIFY) || +- (test_mask & to_tell->i_fsnotify_mask)) { +- iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] = +- fsnotify_first_mark(&to_tell->i_fsnotify_marks); +- } +- +- if (mnt && ((mask & FS_MODIFY) || +- (test_mask & mnt->mnt_fsnotify_mask))) { +- iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] = +- fsnotify_first_mark(&to_tell->i_fsnotify_marks); ++ iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] = ++ fsnotify_first_mark(&to_tell->i_fsnotify_marks); ++ if (mnt) { + 
iter_info.marks[FSNOTIFY_OBJ_TYPE_VFSMOUNT] = + fsnotify_first_mark(&mnt->mnt_fsnotify_marks); + } +diff --git a/fs/ocfs2/dlm/dlmmaster.c b/fs/ocfs2/dlm/dlmmaster.c +index aaca0949fe53..826f0567ec43 100644 +--- a/fs/ocfs2/dlm/dlmmaster.c ++++ b/fs/ocfs2/dlm/dlmmaster.c +@@ -584,9 +584,9 @@ static void dlm_init_lockres(struct dlm_ctxt *dlm, + + res->last_used = 0; + +- spin_lock(&dlm->spinlock); ++ spin_lock(&dlm->track_lock); + list_add_tail(&res->tracking, &dlm->tracking_list); +- spin_unlock(&dlm->spinlock); ++ spin_unlock(&dlm->track_lock); + + memset(res->lvb, 0, DLM_LVB_LEN); + memset(res->refmap, 0, sizeof(res->refmap)); +diff --git a/fs/overlayfs/dir.c b/fs/overlayfs/dir.c +index f480b1a2cd2e..da9b3ccfde23 100644 +--- a/fs/overlayfs/dir.c ++++ b/fs/overlayfs/dir.c +@@ -601,6 +601,10 @@ static int ovl_create_object(struct dentry *dentry, int mode, dev_t rdev, + if (!inode) + goto out_drop_write; + ++ spin_lock(&inode->i_lock); ++ inode->i_state |= I_CREATING; ++ spin_unlock(&inode->i_lock); ++ + inode_init_owner(inode, dentry->d_parent->d_inode, mode); + attr.mode = inode->i_mode; + +diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c +index c993dd8db739..c2229f02389b 100644 +--- a/fs/overlayfs/namei.c ++++ b/fs/overlayfs/namei.c +@@ -705,7 +705,7 @@ struct dentry *ovl_lookup_index(struct ovl_fs *ofs, struct dentry *upper, + index = NULL; + goto out; + } +- pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%*s, err=%i);\n" ++ pr_warn_ratelimited("overlayfs: failed inode index lookup (ino=%lu, key=%.*s, err=%i);\n" + "overlayfs: mount with '-o index=off' to disable inodes index.\n", + d_inode(origin)->i_ino, name.len, name.name, + err); +diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h +index 7538b9b56237..e789924e9833 100644 +--- a/fs/overlayfs/overlayfs.h ++++ b/fs/overlayfs/overlayfs.h +@@ -147,8 +147,8 @@ static inline int ovl_do_setxattr(struct dentry *dentry, const char *name, + const void *value, size_t size, 
int flags) + { + int err = vfs_setxattr(dentry, name, value, size, flags); +- pr_debug("setxattr(%pd2, \"%s\", \"%*s\", 0x%x) = %i\n", +- dentry, name, (int) size, (char *) value, flags, err); ++ pr_debug("setxattr(%pd2, \"%s\", \"%*pE\", %zu, 0x%x) = %i\n", ++ dentry, name, min((int)size, 48), value, size, flags, err); + return err; + } + +diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c +index 6f1078028c66..319a7eeb388f 100644 +--- a/fs/overlayfs/util.c ++++ b/fs/overlayfs/util.c +@@ -531,7 +531,7 @@ static void ovl_cleanup_index(struct dentry *dentry) + struct dentry *upperdentry = ovl_dentry_upper(dentry); + struct dentry *index = NULL; + struct inode *inode; +- struct qstr name; ++ struct qstr name = { }; + int err; + + err = ovl_get_index_name(lowerdentry, &name); +@@ -574,6 +574,7 @@ static void ovl_cleanup_index(struct dentry *dentry) + goto fail; + + out: ++ kfree(name.name); + dput(index); + return; + +diff --git a/fs/proc/base.c b/fs/proc/base.c +index aaffc0c30216..bbcad104505c 100644 +--- a/fs/proc/base.c ++++ b/fs/proc/base.c +@@ -407,6 +407,20 @@ static int proc_pid_stack(struct seq_file *m, struct pid_namespace *ns, + unsigned long *entries; + int err; + ++ /* ++ * The ability to racily run the kernel stack unwinder on a running task ++ * and then observe the unwinder output is scary; while it is useful for ++ * debugging kernel issues, it can also allow an attacker to leak kernel ++ * stack contents. ++ * Doing this in a manner that is at least safe from races would require ++ * some work to ensure that the remote task can not be scheduled; and ++ * even then, this would still expose the unwinder as local attack ++ * surface. ++ * Therefore, this interface is restricted to root. 
++ */ ++ if (!file_ns_capable(m->file, &init_user_ns, CAP_SYS_ADMIN)) ++ return -EACCES; ++ + entries = kmalloc_array(MAX_STACK_TRACE_DEPTH, sizeof(*entries), + GFP_KERNEL); + if (!entries) +diff --git a/fs/xattr.c b/fs/xattr.c +index 1bee74682513..c689fd5b5679 100644 +--- a/fs/xattr.c ++++ b/fs/xattr.c +@@ -949,17 +949,19 @@ ssize_t simple_xattr_list(struct inode *inode, struct simple_xattrs *xattrs, + int err = 0; + + #ifdef CONFIG_FS_POSIX_ACL +- if (inode->i_acl) { +- err = xattr_list_one(&buffer, &remaining_size, +- XATTR_NAME_POSIX_ACL_ACCESS); +- if (err) +- return err; +- } +- if (inode->i_default_acl) { +- err = xattr_list_one(&buffer, &remaining_size, +- XATTR_NAME_POSIX_ACL_DEFAULT); +- if (err) +- return err; ++ if (IS_POSIXACL(inode)) { ++ if (inode->i_acl) { ++ err = xattr_list_one(&buffer, &remaining_size, ++ XATTR_NAME_POSIX_ACL_ACCESS); ++ if (err) ++ return err; ++ } ++ if (inode->i_default_acl) { ++ err = xattr_list_one(&buffer, &remaining_size, ++ XATTR_NAME_POSIX_ACL_DEFAULT); ++ if (err) ++ return err; ++ } + } + #endif + +diff --git a/include/asm-generic/io.h b/include/asm-generic/io.h +index 66d1d45fa2e1..d356f802945a 100644 +--- a/include/asm-generic/io.h ++++ b/include/asm-generic/io.h +@@ -1026,7 +1026,8 @@ static inline void __iomem *ioremap_wt(phys_addr_t offset, size_t size) + #define ioport_map ioport_map + static inline void __iomem *ioport_map(unsigned long port, unsigned int nr) + { +- return PCI_IOBASE + (port & MMIO_UPPER_LIMIT); ++ port &= IO_SPACE_LIMIT; ++ return (port > MMIO_UPPER_LIMIT) ? 
NULL : PCI_IOBASE + port; + } + #endif + +diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h +index 0fce47d5acb1..5d46b83d4820 100644 +--- a/include/linux/blk-cgroup.h ++++ b/include/linux/blk-cgroup.h +@@ -88,7 +88,6 @@ struct blkg_policy_data { + /* the blkg and policy id this per-policy data belongs to */ + struct blkcg_gq *blkg; + int plid; +- bool offline; + }; + + /* +diff --git a/include/linux/fs.h b/include/linux/fs.h +index 805bf22898cf..a3afa50bb79f 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -2014,6 +2014,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp) + * I_OVL_INUSE Used by overlayfs to get exclusive ownership on upper + * and work dirs among overlayfs mounts. + * ++ * I_CREATING New object's inode in the middle of setting up. ++ * + * Q: What is the difference between I_WILL_FREE and I_FREEING? + */ + #define I_DIRTY_SYNC (1 << 0) +@@ -2034,7 +2036,8 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp) + #define __I_DIRTY_TIME_EXPIRED 12 + #define I_DIRTY_TIME_EXPIRED (1 << __I_DIRTY_TIME_EXPIRED) + #define I_WB_SWITCH (1 << 13) +-#define I_OVL_INUSE (1 << 14) ++#define I_OVL_INUSE (1 << 14) ++#define I_CREATING (1 << 15) + + #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC) + #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES) +@@ -2918,6 +2921,7 @@ extern void lockdep_annotate_inode_mutex_key(struct inode *inode); + static inline void lockdep_annotate_inode_mutex_key(struct inode *inode) { }; + #endif + extern void unlock_new_inode(struct inode *); ++extern void discard_new_inode(struct inode *); + extern unsigned int get_next_ino(void); + extern void evict_inodes(struct super_block *sb); + +diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h +index 1beb3ead0385..7229c186d199 100644 +--- a/include/net/cfg80211.h ++++ b/include/net/cfg80211.h +@@ -4763,8 +4763,8 @@ const char *reg_initiator_name(enum nl80211_reg_initiator initiator); + * + * 
Return: 0 on success. -ENODATA. + */ +-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *ptr, +- struct ieee80211_wmm_rule *rule); ++int reg_query_regdb_wmm(char *alpha2, int freq, ++ struct ieee80211_reg_rule *rule); + + /* + * callbacks for asynchronous cfg80211 methods, notification +diff --git a/include/net/regulatory.h b/include/net/regulatory.h +index 60f8cc86a447..3469750df0f4 100644 +--- a/include/net/regulatory.h ++++ b/include/net/regulatory.h +@@ -217,15 +217,15 @@ struct ieee80211_wmm_rule { + struct ieee80211_reg_rule { + struct ieee80211_freq_range freq_range; + struct ieee80211_power_rule power_rule; +- struct ieee80211_wmm_rule *wmm_rule; ++ struct ieee80211_wmm_rule wmm_rule; + u32 flags; + u32 dfs_cac_ms; ++ bool has_wmm; + }; + + struct ieee80211_regdomain { + struct rcu_head rcu_head; + u32 n_reg_rules; +- u32 n_wmm_rules; + char alpha2[3]; + enum nl80211_dfs_regions dfs_region; + struct ieee80211_reg_rule reg_rules[]; +diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c +index ed707b21d152..f833a60699ad 100644 +--- a/kernel/bpf/sockmap.c ++++ b/kernel/bpf/sockmap.c +@@ -236,7 +236,7 @@ static int bpf_tcp_init(struct sock *sk) + } + + static void smap_release_sock(struct smap_psock *psock, struct sock *sock); +-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md); ++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge); + + static void bpf_tcp_release(struct sock *sk) + { +@@ -248,7 +248,7 @@ static void bpf_tcp_release(struct sock *sk) + goto out; + + if (psock->cork) { +- free_start_sg(psock->sock, psock->cork); ++ free_start_sg(psock->sock, psock->cork, true); + kfree(psock->cork); + psock->cork = NULL; + } +@@ -330,14 +330,14 @@ static void bpf_tcp_close(struct sock *sk, long timeout) + close_fun = psock->save_close; + + if (psock->cork) { +- free_start_sg(psock->sock, psock->cork); ++ free_start_sg(psock->sock, psock->cork, true); + kfree(psock->cork); + psock->cork = NULL; + } + + 
list_for_each_entry_safe(md, mtmp, &psock->ingress, list) { + list_del(&md->list); +- free_start_sg(psock->sock, md); ++ free_start_sg(psock->sock, md, true); + kfree(md); + } + +@@ -369,7 +369,7 @@ static void bpf_tcp_close(struct sock *sk, long timeout) + /* If another thread deleted this object skip deletion. + * The refcnt on psock may or may not be zero. + */ +- if (l) { ++ if (l && l == link) { + hlist_del_rcu(&link->hash_node); + smap_release_sock(psock, link->sk); + free_htab_elem(htab, link); +@@ -570,14 +570,16 @@ static void free_bytes_sg(struct sock *sk, int bytes, + md->sg_start = i; + } + +-static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md) ++static int free_sg(struct sock *sk, int start, ++ struct sk_msg_buff *md, bool charge) + { + struct scatterlist *sg = md->sg_data; + int i = start, free = 0; + + while (sg[i].length) { + free += sg[i].length; +- sk_mem_uncharge(sk, sg[i].length); ++ if (charge) ++ sk_mem_uncharge(sk, sg[i].length); + if (!md->skb) + put_page(sg_page(&sg[i])); + sg[i].length = 0; +@@ -594,9 +596,9 @@ static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md) + return free; + } + +-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md) ++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge) + { +- int free = free_sg(sk, md->sg_start, md); ++ int free = free_sg(sk, md->sg_start, md, charge); + + md->sg_start = md->sg_end; + return free; +@@ -604,7 +606,7 @@ static int free_start_sg(struct sock *sk, struct sk_msg_buff *md) + + static int free_curr_sg(struct sock *sk, struct sk_msg_buff *md) + { +- return free_sg(sk, md->sg_curr, md); ++ return free_sg(sk, md->sg_curr, md, true); + } + + static int bpf_map_msg_verdict(int _rc, struct sk_msg_buff *md) +@@ -718,7 +720,7 @@ static int bpf_tcp_ingress(struct sock *sk, int apply_bytes, + list_add_tail(&r->list, &psock->ingress); + sk->sk_data_ready(sk); + } else { +- free_start_sg(sk, r); ++ free_start_sg(sk, r, true); + 
kfree(r); + } + +@@ -755,14 +757,10 @@ static int bpf_tcp_sendmsg_do_redirect(struct sock *sk, int send, + release_sock(sk); + } + smap_release_sock(psock, sk); +- if (unlikely(err)) +- goto out; +- return 0; ++ return err; + out_rcu: + rcu_read_unlock(); +-out: +- free_bytes_sg(NULL, send, md, false); +- return err; ++ return 0; + } + + static inline void bpf_md_init(struct smap_psock *psock) +@@ -825,7 +823,7 @@ more_data: + case __SK_PASS: + err = bpf_tcp_push(sk, send, m, flags, true); + if (unlikely(err)) { +- *copied -= free_start_sg(sk, m); ++ *copied -= free_start_sg(sk, m, true); + break; + } + +@@ -848,16 +846,17 @@ more_data: + lock_sock(sk); + + if (unlikely(err < 0)) { +- free_start_sg(sk, m); ++ int free = free_start_sg(sk, m, false); ++ + psock->sg_size = 0; + if (!cork) +- *copied -= send; ++ *copied -= free; + } else { + psock->sg_size -= send; + } + + if (cork) { +- free_start_sg(sk, m); ++ free_start_sg(sk, m, true); + psock->sg_size = 0; + kfree(m); + m = NULL; +@@ -915,6 +914,8 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, + + if (unlikely(flags & MSG_ERRQUEUE)) + return inet_recv_error(sk, msg, len, addr_len); ++ if (!skb_queue_empty(&sk->sk_receive_queue)) ++ return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len); + + rcu_read_lock(); + psock = smap_psock_sk(sk); +@@ -925,9 +926,6 @@ static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, + goto out; + rcu_read_unlock(); + +- if (!skb_queue_empty(&sk->sk_receive_queue)) +- return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len); +- + lock_sock(sk); + bytes_ready: + while (copied != len) { +@@ -1125,7 +1123,7 @@ wait_for_memory: + err = sk_stream_wait_memory(sk, &timeo); + if (err) { + if (m && m != psock->cork) +- free_start_sg(sk, m); ++ free_start_sg(sk, m, true); + goto out_err; + } + } +@@ -1467,10 +1465,16 @@ static void smap_destroy_psock(struct rcu_head *rcu) + schedule_work(&psock->gc_work); + } + ++static bool 
psock_is_smap_sk(struct sock *sk) ++{ ++ return inet_csk(sk)->icsk_ulp_ops == &bpf_tcp_ulp_ops; ++} ++ + static void smap_release_sock(struct smap_psock *psock, struct sock *sock) + { + if (refcount_dec_and_test(&psock->refcnt)) { +- tcp_cleanup_ulp(sock); ++ if (psock_is_smap_sk(sock)) ++ tcp_cleanup_ulp(sock); + write_lock_bh(&sock->sk_callback_lock); + smap_stop_sock(psock, sock); + write_unlock_bh(&sock->sk_callback_lock); +@@ -1584,13 +1588,13 @@ static void smap_gc_work(struct work_struct *w) + bpf_prog_put(psock->bpf_tx_msg); + + if (psock->cork) { +- free_start_sg(psock->sock, psock->cork); ++ free_start_sg(psock->sock, psock->cork, true); + kfree(psock->cork); + } + + list_for_each_entry_safe(md, mtmp, &psock->ingress, list) { + list_del(&md->list); +- free_start_sg(psock->sock, md); ++ free_start_sg(psock->sock, md, true); + kfree(md); + } + +@@ -1897,6 +1901,10 @@ static int __sock_map_ctx_update_elem(struct bpf_map *map, + * doesn't update user data. + */ + if (psock) { ++ if (!psock_is_smap_sk(sock)) { ++ err = -EBUSY; ++ goto out_progs; ++ } + if (READ_ONCE(psock->bpf_parse) && parse) { + err = -EBUSY; + goto out_progs; +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index adbe21c8876e..82e8edef6ea0 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -2865,6 +2865,15 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env, + u64 umin_val, umax_val; + u64 insn_bitness = (BPF_CLASS(insn->code) == BPF_ALU64) ? 64 : 32; + ++ if (insn_bitness == 32) { ++ /* Relevant for 32-bit RSH: Information can propagate towards ++ * LSB, so it isn't sufficient to only truncate the output to ++ * 32 bits. 
++ */ ++ coerce_reg_to_size(dst_reg, 4); ++ coerce_reg_to_size(&src_reg, 4); ++ } ++ + smin_val = src_reg.smin_value; + smax_val = src_reg.smax_value; + umin_val = src_reg.umin_value; +@@ -3100,7 +3109,6 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env, + if (BPF_CLASS(insn->code) != BPF_ALU64) { + /* 32-bit ALU ops are (32,32)->32 */ + coerce_reg_to_size(dst_reg, 4); +- coerce_reg_to_size(&src_reg, 4); + } + + __reg_deduce_bounds(dst_reg); +diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c +index 56a0fed30c0a..505a41c42b96 100644 +--- a/kernel/sched/topology.c ++++ b/kernel/sched/topology.c +@@ -1295,7 +1295,7 @@ static void init_numa_topology_type(void) + + n = sched_max_numa_distance; + +- if (sched_domains_numa_levels <= 1) { ++ if (sched_domains_numa_levels <= 2) { + sched_numa_topology_type = NUMA_DIRECT; + return; + } +@@ -1380,9 +1380,6 @@ void sched_init_numa(void) + break; + } + +- if (!level) +- return; +- + /* + * 'level' contains the number of unique distances + * +diff --git a/mm/madvise.c b/mm/madvise.c +index 4d3c922ea1a1..8534ea2978c5 100644 +--- a/mm/madvise.c ++++ b/mm/madvise.c +@@ -96,7 +96,7 @@ static long madvise_behavior(struct vm_area_struct *vma, + new_flags |= VM_DONTDUMP; + break; + case MADV_DODUMP: +- if (new_flags & VM_SPECIAL) { ++ if (!is_vm_hugetlb_page(vma) && new_flags & VM_SPECIAL) { + error = -EINVAL; + goto out; + } +diff --git a/net/core/filter.c b/net/core/filter.c +index 9dfd145eedcc..963ee2e88861 100644 +--- a/net/core/filter.c ++++ b/net/core/filter.c +@@ -2272,14 +2272,21 @@ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = { + .arg2_type = ARG_ANYTHING, + }; + ++#define sk_msg_iter_var(var) \ ++ do { \ ++ var++; \ ++ if (var == MAX_SKB_FRAGS) \ ++ var = 0; \ ++ } while (0) ++ + BPF_CALL_4(bpf_msg_pull_data, + struct sk_msg_buff *, msg, u32, start, u32, end, u64, flags) + { +- unsigned int len = 0, offset = 0, copy = 0; ++ unsigned int len = 0, offset = 0, copy = 0, poffset = 
0; ++ int bytes = end - start, bytes_sg_total; + struct scatterlist *sg = msg->sg_data; + int first_sg, last_sg, i, shift; + unsigned char *p, *to, *from; +- int bytes = end - start; + struct page *page; + + if (unlikely(flags || end <= start)) +@@ -2289,21 +2296,22 @@ BPF_CALL_4(bpf_msg_pull_data, + i = msg->sg_start; + do { + len = sg[i].length; +- offset += len; + if (start < offset + len) + break; +- i++; +- if (i == MAX_SKB_FRAGS) +- i = 0; ++ offset += len; ++ sk_msg_iter_var(i); + } while (i != msg->sg_end); + + if (unlikely(start >= offset + len)) + return -EINVAL; + +- if (!msg->sg_copy[i] && bytes <= len) +- goto out; +- + first_sg = i; ++ /* The start may point into the sg element so we need to also ++ * account for the headroom. ++ */ ++ bytes_sg_total = start - offset + bytes; ++ if (!msg->sg_copy[i] && bytes_sg_total <= len) ++ goto out; + + /* At this point we need to linearize multiple scatterlist + * elements or a single shared page. Either way we need to +@@ -2317,37 +2325,32 @@ BPF_CALL_4(bpf_msg_pull_data, + */ + do { + copy += sg[i].length; +- i++; +- if (i == MAX_SKB_FRAGS) +- i = 0; +- if (bytes < copy) ++ sk_msg_iter_var(i); ++ if (bytes_sg_total <= copy) + break; + } while (i != msg->sg_end); + last_sg = i; + +- if (unlikely(copy < end - start)) ++ if (unlikely(bytes_sg_total > copy)) + return -EINVAL; + + page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy)); + if (unlikely(!page)) + return -ENOMEM; + p = page_address(page); +- offset = 0; + + i = first_sg; + do { + from = sg_virt(&sg[i]); + len = sg[i].length; +- to = p + offset; ++ to = p + poffset; + + memcpy(to, from, len); +- offset += len; ++ poffset += len; + sg[i].length = 0; + put_page(sg_page(&sg[i])); + +- i++; +- if (i == MAX_SKB_FRAGS) +- i = 0; ++ sk_msg_iter_var(i); + } while (i != last_sg); + + sg[first_sg].length = copy; +@@ -2357,11 +2360,15 @@ BPF_CALL_4(bpf_msg_pull_data, + * had a single entry though we can just replace it and + * be done. 
Otherwise walk the ring and shift the entries. + */ +- shift = last_sg - first_sg - 1; ++ WARN_ON_ONCE(last_sg == first_sg); ++ shift = last_sg > first_sg ? ++ last_sg - first_sg - 1 : ++ MAX_SKB_FRAGS - first_sg + last_sg - 1; + if (!shift) + goto out; + +- i = first_sg + 1; ++ i = first_sg; ++ sk_msg_iter_var(i); + do { + int move_from; + +@@ -2378,15 +2385,13 @@ BPF_CALL_4(bpf_msg_pull_data, + sg[move_from].page_link = 0; + sg[move_from].offset = 0; + +- i++; +- if (i == MAX_SKB_FRAGS) +- i = 0; ++ sk_msg_iter_var(i); + } while (1); + msg->sg_end -= shift; + if (msg->sg_end < 0) + msg->sg_end += MAX_SKB_FRAGS; + out: +- msg->data = sg_virt(&sg[i]) + start - offset; ++ msg->data = sg_virt(&sg[first_sg]) + start - offset; + msg->data_end = msg->data + bytes; + + return 0; +diff --git a/net/ipv4/netfilter/Kconfig b/net/ipv4/netfilter/Kconfig +index bbfc356cb1b5..d7ecae5e93ea 100644 +--- a/net/ipv4/netfilter/Kconfig ++++ b/net/ipv4/netfilter/Kconfig +@@ -122,6 +122,10 @@ config NF_NAT_IPV4 + + if NF_NAT_IPV4 + ++config NF_NAT_MASQUERADE_IPV4 ++ bool ++ ++if NF_TABLES + config NFT_CHAIN_NAT_IPV4 + depends on NF_TABLES_IPV4 + tristate "IPv4 nf_tables nat chain support" +@@ -131,9 +135,6 @@ config NFT_CHAIN_NAT_IPV4 + packet transformations such as the source, destination address and + source and destination ports. + +-config NF_NAT_MASQUERADE_IPV4 +- bool +- + config NFT_MASQ_IPV4 + tristate "IPv4 masquerading support for nf_tables" + depends on NF_TABLES_IPV4 +@@ -151,6 +152,7 @@ config NFT_REDIR_IPV4 + help + This is the expression that provides IPv4 redirect support for + nf_tables. 
++endif # NF_TABLES + + config NF_NAT_SNMP_BASIC + tristate "Basic SNMP-ALG support" +diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c +index 6449a1c2283b..f0f5fedb8caa 100644 +--- a/net/mac80211/ibss.c ++++ b/net/mac80211/ibss.c +@@ -947,8 +947,8 @@ static void ieee80211_rx_mgmt_deauth_ibss(struct ieee80211_sub_if_data *sdata, + if (len < IEEE80211_DEAUTH_FRAME_LEN) + return; + +- ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM BSSID=%pM (reason: %d)\n", +- mgmt->sa, mgmt->da, mgmt->bssid, reason); ++ ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da); ++ ibss_dbg(sdata, "\tBSSID=%pM (reason: %d)\n", mgmt->bssid, reason); + sta_info_destroy_addr(sdata, mgmt->sa); + } + +@@ -966,9 +966,9 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata, + auth_alg = le16_to_cpu(mgmt->u.auth.auth_alg); + auth_transaction = le16_to_cpu(mgmt->u.auth.auth_transaction); + +- ibss_dbg(sdata, +- "RX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=%d)\n", +- mgmt->sa, mgmt->da, mgmt->bssid, auth_transaction); ++ ibss_dbg(sdata, "RX Auth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da); ++ ibss_dbg(sdata, "\tBSSID=%pM (auth_transaction=%d)\n", ++ mgmt->bssid, auth_transaction); + + if (auth_alg != WLAN_AUTH_OPEN || auth_transaction != 1) + return; +@@ -1175,10 +1175,10 @@ static void ieee80211_rx_bss_info(struct ieee80211_sub_if_data *sdata, + rx_timestamp = drv_get_tsf(local, sdata); + } + +- ibss_dbg(sdata, +- "RX beacon SA=%pM BSSID=%pM TSF=0x%llx BCN=0x%llx diff=%lld @%lu\n", ++ ibss_dbg(sdata, "RX beacon SA=%pM BSSID=%pM TSF=0x%llx\n", + mgmt->sa, mgmt->bssid, +- (unsigned long long)rx_timestamp, ++ (unsigned long long)rx_timestamp); ++ ibss_dbg(sdata, "\tBCN=0x%llx diff=%lld @%lu\n", + (unsigned long long)beacon_timestamp, + (unsigned long long)(rx_timestamp - beacon_timestamp), + jiffies); +@@ -1537,9 +1537,9 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata, + + tx_last_beacon = drv_tx_last_beacon(local); + +- 
ibss_dbg(sdata, +- "RX ProbeReq SA=%pM DA=%pM BSSID=%pM (tx_last_beacon=%d)\n", +- mgmt->sa, mgmt->da, mgmt->bssid, tx_last_beacon); ++ ibss_dbg(sdata, "RX ProbeReq SA=%pM DA=%pM\n", mgmt->sa, mgmt->da); ++ ibss_dbg(sdata, "\tBSSID=%pM (tx_last_beacon=%d)\n", ++ mgmt->bssid, tx_last_beacon); + + if (!tx_last_beacon && is_multicast_ether_addr(mgmt->da)) + return; +diff --git a/net/mac80211/main.c b/net/mac80211/main.c +index fb73451ed85e..66cbddd65b47 100644 +--- a/net/mac80211/main.c ++++ b/net/mac80211/main.c +@@ -255,8 +255,27 @@ static void ieee80211_restart_work(struct work_struct *work) + + flush_work(&local->radar_detected_work); + rtnl_lock(); +- list_for_each_entry(sdata, &local->interfaces, list) ++ list_for_each_entry(sdata, &local->interfaces, list) { ++ /* ++ * XXX: there may be more work for other vif types and even ++ * for station mode: a good thing would be to run most of ++ * the iface type's dependent _stop (ieee80211_mg_stop, ++ * ieee80211_ibss_stop) etc... ++ * For now, fix only the specific bug that was seen: race ++ * between csa_connection_drop_work and us. ++ */ ++ if (sdata->vif.type == NL80211_IFTYPE_STATION) { ++ /* ++ * This worker is scheduled from the iface worker that ++ * runs on mac80211's workqueue, so we can't be ++ * scheduling this worker after the cancel right here. ++ * The exception is ieee80211_chswitch_done. ++ * Then we can have a race... 
++ */ ++ cancel_work_sync(&sdata->u.mgd.csa_connection_drop_work); ++ } + flush_delayed_work(&sdata->dec_tailroom_needed_wk); ++ } + ieee80211_scan_cancel(local); + + /* make sure any new ROC will consider local->in_reconfig */ +@@ -470,10 +489,7 @@ static const struct ieee80211_vht_cap mac80211_vht_capa_mod_mask = { + cpu_to_le32(IEEE80211_VHT_CAP_RXLDPC | + IEEE80211_VHT_CAP_SHORT_GI_80 | + IEEE80211_VHT_CAP_SHORT_GI_160 | +- IEEE80211_VHT_CAP_RXSTBC_1 | +- IEEE80211_VHT_CAP_RXSTBC_2 | +- IEEE80211_VHT_CAP_RXSTBC_3 | +- IEEE80211_VHT_CAP_RXSTBC_4 | ++ IEEE80211_VHT_CAP_RXSTBC_MASK | + IEEE80211_VHT_CAP_TXSTBC | + IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE | + IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE | +@@ -1182,6 +1198,7 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw) + #if IS_ENABLED(CONFIG_IPV6) + unregister_inet6addr_notifier(&local->ifa6_notifier); + #endif ++ ieee80211_txq_teardown_flows(local); + + rtnl_lock(); + +@@ -1210,7 +1227,6 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw) + skb_queue_purge(&local->skb_queue); + skb_queue_purge(&local->skb_queue_unreliable); + skb_queue_purge(&local->skb_queue_tdls_chsw); +- ieee80211_txq_teardown_flows(local); + + destroy_workqueue(local->workqueue); + wiphy_unregister(local->hw.wiphy); +diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c +index 35ad3983ae4b..daf9db3c8f24 100644 +--- a/net/mac80211/mesh_hwmp.c ++++ b/net/mac80211/mesh_hwmp.c +@@ -572,6 +572,10 @@ static void hwmp_preq_frame_process(struct ieee80211_sub_if_data *sdata, + forward = false; + reply = true; + target_metric = 0; ++ ++ if (SN_GT(target_sn, ifmsh->sn)) ++ ifmsh->sn = target_sn; ++ + if (time_after(jiffies, ifmsh->last_sn_update + + net_traversal_jiffies(sdata)) || + time_before(jiffies, ifmsh->last_sn_update)) { +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c +index a59187c016e0..b046bf95eb3c 100644 +--- a/net/mac80211/mlme.c ++++ b/net/mac80211/mlme.c +@@ -978,6 +978,10 @@ static void 
ieee80211_chswitch_work(struct work_struct *work) + */ + + if (sdata->reserved_chanctx) { ++ struct ieee80211_supported_band *sband = NULL; ++ struct sta_info *mgd_sta = NULL; ++ enum ieee80211_sta_rx_bandwidth bw = IEEE80211_STA_RX_BW_20; ++ + /* + * with multi-vif csa driver may call ieee80211_csa_finish() + * many times while waiting for other interfaces to use their +@@ -986,6 +990,48 @@ static void ieee80211_chswitch_work(struct work_struct *work) + if (sdata->reserved_ready) + goto out; + ++ if (sdata->vif.bss_conf.chandef.width != ++ sdata->csa_chandef.width) { ++ /* ++ * For managed interface, we need to also update the AP ++ * station bandwidth and align the rate scale algorithm ++ * on the bandwidth change. Here we only consider the ++ * bandwidth of the new channel definition (as channel ++ * switch flow does not have the full HT/VHT/HE ++ * information), assuming that if additional changes are ++ * required they would be done as part of the processing ++ * of the next beacon from the AP. 
++ */ ++ switch (sdata->csa_chandef.width) { ++ case NL80211_CHAN_WIDTH_20_NOHT: ++ case NL80211_CHAN_WIDTH_20: ++ default: ++ bw = IEEE80211_STA_RX_BW_20; ++ break; ++ case NL80211_CHAN_WIDTH_40: ++ bw = IEEE80211_STA_RX_BW_40; ++ break; ++ case NL80211_CHAN_WIDTH_80: ++ bw = IEEE80211_STA_RX_BW_80; ++ break; ++ case NL80211_CHAN_WIDTH_80P80: ++ case NL80211_CHAN_WIDTH_160: ++ bw = IEEE80211_STA_RX_BW_160; ++ break; ++ } ++ ++ mgd_sta = sta_info_get(sdata, ifmgd->bssid); ++ sband = ++ local->hw.wiphy->bands[sdata->csa_chandef.chan->band]; ++ } ++ ++ if (sdata->vif.bss_conf.chandef.width > ++ sdata->csa_chandef.width) { ++ mgd_sta->sta.bandwidth = bw; ++ rate_control_rate_update(local, sband, mgd_sta, ++ IEEE80211_RC_BW_CHANGED); ++ } ++ + ret = ieee80211_vif_use_reserved_context(sdata); + if (ret) { + sdata_info(sdata, +@@ -996,6 +1042,13 @@ static void ieee80211_chswitch_work(struct work_struct *work) + goto out; + } + ++ if (sdata->vif.bss_conf.chandef.width < ++ sdata->csa_chandef.width) { ++ mgd_sta->sta.bandwidth = bw; ++ rate_control_rate_update(local, sband, mgd_sta, ++ IEEE80211_RC_BW_CHANGED); ++ } ++ + goto out; + } + +@@ -1217,6 +1270,16 @@ ieee80211_sta_process_chanswitch(struct ieee80211_sub_if_data *sdata, + cbss->beacon_interval)); + return; + drop_connection: ++ /* ++ * This is just so that the disconnect flow will know that ++ * we were trying to switch channel and failed. In case the ++ * mode is 1 (we are not allowed to Tx), we will know not to ++ * send a deauthentication frame. Those two fields will be ++ * reset when the disconnection worker runs. 
++ */ ++ sdata->vif.csa_active = true; ++ sdata->csa_block_tx = csa_ie.mode; ++ + ieee80211_queue_work(&local->hw, &ifmgd->csa_connection_drop_work); + mutex_unlock(&local->chanctx_mtx); + mutex_unlock(&local->mtx); +@@ -2400,6 +2463,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata) + struct ieee80211_local *local = sdata->local; + struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; + u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN]; ++ bool tx; + + sdata_lock(sdata); + if (!ifmgd->associated) { +@@ -2407,6 +2471,8 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata) + return; + } + ++ tx = !sdata->csa_block_tx; ++ + /* AP is probably out of range (or not reachable for another reason) so + * remove the bss struct for that AP. + */ +@@ -2414,7 +2480,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata) + + ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH, + WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY, +- true, frame_buf); ++ tx, frame_buf); + mutex_lock(&local->mtx); + sdata->vif.csa_active = false; + ifmgd->csa_waiting_bcn = false; +@@ -2425,7 +2491,7 @@ static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata) + } + mutex_unlock(&local->mtx); + +- ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), true, ++ ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), tx, + WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY); + + sdata_unlock(sdata); +diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c +index fa1f1e63a264..9b3b069e418a 100644 +--- a/net/mac80211/tx.c ++++ b/net/mac80211/tx.c +@@ -3073,27 +3073,18 @@ void ieee80211_clear_fast_xmit(struct sta_info *sta) + } + + static bool ieee80211_amsdu_realloc_pad(struct ieee80211_local *local, +- struct sk_buff *skb, int headroom, +- int *subframe_len) ++ struct sk_buff *skb, int headroom) + { +- int amsdu_len = *subframe_len + sizeof(struct ethhdr); +- int padding = (4 - amsdu_len) & 3; +- +- if (skb_headroom(skb) < headroom || 
skb_tailroom(skb) < padding) { ++ if (skb_headroom(skb) < headroom) { + I802_DEBUG_INC(local->tx_expand_skb_head); + +- if (pskb_expand_head(skb, headroom, padding, GFP_ATOMIC)) { ++ if (pskb_expand_head(skb, headroom, 0, GFP_ATOMIC)) { + wiphy_debug(local->hw.wiphy, + "failed to reallocate TX buffer\n"); + return false; + } + } + +- if (padding) { +- *subframe_len += padding; +- skb_put_zero(skb, padding); +- } +- + return true; + } + +@@ -3117,8 +3108,7 @@ static bool ieee80211_amsdu_prepare_head(struct ieee80211_sub_if_data *sdata, + if (info->control.flags & IEEE80211_TX_CTRL_AMSDU) + return true; + +- if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr), +- &subframe_len)) ++ if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr))) + return false; + + data = skb_push(skb, sizeof(*amsdu_hdr)); +@@ -3184,7 +3174,8 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata, + void *data; + bool ret = false; + unsigned int orig_len; +- int n = 1, nfrags; ++ int n = 2, nfrags, pad = 0; ++ u16 hdrlen; + + if (!ieee80211_hw_check(&local->hw, TX_AMSDU)) + return false; +@@ -3217,9 +3208,6 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata, + if (skb->len + head->len > max_amsdu_len) + goto out; + +- if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head)) +- goto out; +- + nfrags = 1 + skb_shinfo(skb)->nr_frags; + nfrags += 1 + skb_shinfo(head)->nr_frags; + frag_tail = &skb_shinfo(head)->frag_list; +@@ -3235,10 +3223,24 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata, + if (max_frags && nfrags > max_frags) + goto out; + +- if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) + 2, +- &subframe_len)) ++ if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head)) + goto out; + ++ /* ++ * Pad out the previous subframe to a multiple of 4 by adding the ++ * padding to the next one, that's being added. 
Note that head->len ++ * is the length of the full A-MSDU, but that works since each time ++ * we add a new subframe we pad out the previous one to a multiple ++ * of 4 and thus it no longer matters in the next round. ++ */ ++ hdrlen = fast_tx->hdr_len - sizeof(rfc1042_header); ++ if ((head->len - hdrlen) & 3) ++ pad = 4 - ((head->len - hdrlen) & 3); ++ ++ if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) + ++ 2 + pad)) ++ goto out_recalc; ++ + ret = true; + data = skb_push(skb, ETH_ALEN + 2); + memmove(data, data + ETH_ALEN + 2, 2 * ETH_ALEN); +@@ -3248,15 +3250,19 @@ static bool ieee80211_amsdu_aggregate(struct ieee80211_sub_if_data *sdata, + memcpy(data, &len, 2); + memcpy(data + 2, rfc1042_header, sizeof(rfc1042_header)); + ++ memset(skb_push(skb, pad), 0, pad); ++ + head->len += skb->len; + head->data_len += skb->len; + *frag_tail = skb; + +- flow->backlog += head->len - orig_len; +- tin->backlog_bytes += head->len - orig_len; +- +- fq_recalc_backlog(fq, tin, flow); ++out_recalc: ++ if (head->len != orig_len) { ++ flow->backlog += head->len - orig_len; ++ tin->backlog_bytes += head->len - orig_len; + ++ fq_recalc_backlog(fq, tin, flow); ++ } + out: + spin_unlock_bh(&fq->lock); + +diff --git a/net/mac80211/util.c b/net/mac80211/util.c +index d02fbfec3783..93b5bb849ad7 100644 +--- a/net/mac80211/util.c ++++ b/net/mac80211/util.c +@@ -1120,7 +1120,7 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata, + { + struct ieee80211_chanctx_conf *chanctx_conf; + const struct ieee80211_reg_rule *rrule; +- struct ieee80211_wmm_ac *wmm_ac; ++ const struct ieee80211_wmm_ac *wmm_ac; + u16 center_freq = 0; + + if (sdata->vif.type != NL80211_IFTYPE_AP && +@@ -1139,20 +1139,19 @@ void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata, + + rrule = freq_reg_info(sdata->wdev.wiphy, MHZ_TO_KHZ(center_freq)); + +- if (IS_ERR_OR_NULL(rrule) || !rrule->wmm_rule) { ++ if (IS_ERR_OR_NULL(rrule) || !rrule->has_wmm) { + 
rcu_read_unlock(); + return; + } + + if (sdata->vif.type == NL80211_IFTYPE_AP) +- wmm_ac = &rrule->wmm_rule->ap[ac]; ++ wmm_ac = &rrule->wmm_rule.ap[ac]; + else +- wmm_ac = &rrule->wmm_rule->client[ac]; ++ wmm_ac = &rrule->wmm_rule.client[ac]; + qparam->cw_min = max_t(u16, qparam->cw_min, wmm_ac->cw_min); + qparam->cw_max = max_t(u16, qparam->cw_max, wmm_ac->cw_max); + qparam->aifs = max_t(u8, qparam->aifs, wmm_ac->aifsn); +- qparam->txop = !qparam->txop ? wmm_ac->cot / 32 : +- min_t(u16, qparam->txop, wmm_ac->cot / 32); ++ qparam->txop = min_t(u16, qparam->txop, wmm_ac->cot / 32); + rcu_read_unlock(); + } + +diff --git a/net/netfilter/Kconfig b/net/netfilter/Kconfig +index f0a1c536ef15..e6d5c87f0d96 100644 +--- a/net/netfilter/Kconfig ++++ b/net/netfilter/Kconfig +@@ -740,13 +740,13 @@ config NETFILTER_XT_TARGET_CHECKSUM + depends on NETFILTER_ADVANCED + ---help--- + This option adds a `CHECKSUM' target, which can be used in the iptables mangle +- table. ++ table to work around buggy DHCP clients in virtualized environments. + +- You can use this target to compute and fill in the checksum in +- a packet that lacks a checksum. This is particularly useful, +- if you need to work around old applications such as dhcp clients, +- that do not work well with checksum offloads, but don't want to disable +- checksum offload in your device. ++ Some old DHCP clients drop packets because they are not aware ++ that the checksum would normally be offloaded to hardware and ++ thus should be considered valid. ++ This target can be used to fill in the checksum using iptables ++ when such packets are sent via a virtual network device. + + To compile it as a module, choose M here. If unsure, say N. 
+ +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index f5745e4c6513..77d690a87144 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -4582,6 +4582,7 @@ static int nft_flush_set(const struct nft_ctx *ctx, + } + set->ndeact++; + ++ nft_set_elem_deactivate(ctx->net, set, elem); + nft_trans_elem_set(trans) = set; + nft_trans_elem(trans) = *elem; + list_add_tail(&trans->list, &ctx->net->nft.commit_list); +diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c +index ea4ba551abb2..d33094f4ec41 100644 +--- a/net/netfilter/nfnetlink_queue.c ++++ b/net/netfilter/nfnetlink_queue.c +@@ -233,6 +233,7 @@ static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict) + int err; + + if (verdict == NF_ACCEPT || ++ verdict == NF_REPEAT || + verdict == NF_STOP) { + rcu_read_lock(); + ct_hook = rcu_dereference(nf_ct_hook); +diff --git a/net/netfilter/xt_CHECKSUM.c b/net/netfilter/xt_CHECKSUM.c +index 9f4151ec3e06..6c7aa6a0a0d2 100644 +--- a/net/netfilter/xt_CHECKSUM.c ++++ b/net/netfilter/xt_CHECKSUM.c +@@ -16,6 +16,9 @@ + #include + #include + ++#include ++#include ++ + MODULE_LICENSE("GPL"); + MODULE_AUTHOR("Michael S. 
Tsirkin "); + MODULE_DESCRIPTION("Xtables: checksum modification"); +@@ -25,7 +28,7 @@ MODULE_ALIAS("ip6t_CHECKSUM"); + static unsigned int + checksum_tg(struct sk_buff *skb, const struct xt_action_param *par) + { +- if (skb->ip_summed == CHECKSUM_PARTIAL) ++ if (skb->ip_summed == CHECKSUM_PARTIAL && !skb_is_gso(skb)) + skb_checksum_help(skb); + + return XT_CONTINUE; +@@ -34,6 +37,8 @@ checksum_tg(struct sk_buff *skb, const struct xt_action_param *par) + static int checksum_tg_check(const struct xt_tgchk_param *par) + { + const struct xt_CHECKSUM_info *einfo = par->targinfo; ++ const struct ip6t_ip6 *i6 = par->entryinfo; ++ const struct ipt_ip *i4 = par->entryinfo; + + if (einfo->operation & ~XT_CHECKSUM_OP_FILL) { + pr_info_ratelimited("unsupported CHECKSUM operation %x\n", +@@ -43,6 +48,21 @@ static int checksum_tg_check(const struct xt_tgchk_param *par) + if (!einfo->operation) + return -EINVAL; + ++ switch (par->family) { ++ case NFPROTO_IPV4: ++ if (i4->proto == IPPROTO_UDP && ++ (i4->invflags & XT_INV_PROTO) == 0) ++ return 0; ++ break; ++ case NFPROTO_IPV6: ++ if ((i6->flags & IP6T_F_PROTO) && ++ i6->proto == IPPROTO_UDP && ++ (i6->invflags & XT_INV_PROTO) == 0) ++ return 0; ++ break; ++ } ++ ++ pr_warn_once("CHECKSUM should be avoided. 
If really needed, restrict with \"-p udp\" and only use in OUTPUT\n"); + return 0; + } + +diff --git a/net/netfilter/xt_cluster.c b/net/netfilter/xt_cluster.c +index dfbdbb2fc0ed..51d0c257e7a5 100644 +--- a/net/netfilter/xt_cluster.c ++++ b/net/netfilter/xt_cluster.c +@@ -125,6 +125,7 @@ xt_cluster_mt(const struct sk_buff *skb, struct xt_action_param *par) + static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par) + { + struct xt_cluster_match_info *info = par->matchinfo; ++ int ret; + + if (info->total_nodes > XT_CLUSTER_NODES_MAX) { + pr_info_ratelimited("you have exceeded the maximum number of cluster nodes (%u > %u)\n", +@@ -135,7 +136,17 @@ static int xt_cluster_mt_checkentry(const struct xt_mtchk_param *par) + pr_info_ratelimited("node mask cannot exceed total number of nodes\n"); + return -EDOM; + } +- return 0; ++ ++ ret = nf_ct_netns_get(par->net, par->family); ++ if (ret < 0) ++ pr_info_ratelimited("cannot load conntrack support for proto=%u\n", ++ par->family); ++ return ret; ++} ++ ++static void xt_cluster_mt_destroy(const struct xt_mtdtor_param *par) ++{ ++ nf_ct_netns_put(par->net, par->family); + } + + static struct xt_match xt_cluster_match __read_mostly = { +@@ -144,6 +155,7 @@ static struct xt_match xt_cluster_match __read_mostly = { + .match = xt_cluster_mt, + .checkentry = xt_cluster_mt_checkentry, + .matchsize = sizeof(struct xt_cluster_match_info), ++ .destroy = xt_cluster_mt_destroy, + .me = THIS_MODULE, + }; + +diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c +index 9b16402f29af..3e7d259e5d8d 100644 +--- a/net/netfilter/xt_hashlimit.c ++++ b/net/netfilter/xt_hashlimit.c +@@ -1057,7 +1057,7 @@ static struct xt_match hashlimit_mt_reg[] __read_mostly = { + static void *dl_seq_start(struct seq_file *s, loff_t *pos) + __acquires(htable->lock) + { +- struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file)); + unsigned int 
*bucket; + + spin_lock_bh(&htable->lock); +@@ -1074,7 +1074,7 @@ static void *dl_seq_start(struct seq_file *s, loff_t *pos) + + static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos) + { +- struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file)); + unsigned int *bucket = v; + + *pos = ++(*bucket); +@@ -1088,7 +1088,7 @@ static void *dl_seq_next(struct seq_file *s, void *v, loff_t *pos) + static void dl_seq_stop(struct seq_file *s, void *v) + __releases(htable->lock) + { +- struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file)); + unsigned int *bucket = v; + + if (!IS_ERR(bucket)) +@@ -1130,7 +1130,7 @@ static void dl_seq_print(struct dsthash_ent *ent, u_int8_t family, + static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family, + struct seq_file *s) + { +- struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file)); + + spin_lock(&ent->lock); + /* recalculate to show accurate numbers */ +@@ -1145,7 +1145,7 @@ static int dl_seq_real_show_v2(struct dsthash_ent *ent, u_int8_t family, + static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family, + struct seq_file *s) + { +- struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file)); + + spin_lock(&ent->lock); + /* recalculate to show accurate numbers */ +@@ -1160,7 +1160,7 @@ static int dl_seq_real_show_v1(struct dsthash_ent *ent, u_int8_t family, + static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family, + struct seq_file *s) + { +- struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *ht = PDE_DATA(file_inode(s->file)); + + spin_lock(&ent->lock); + /* recalculate to show accurate numbers */ +@@ -1174,7 +1174,7 
@@ static int dl_seq_real_show(struct dsthash_ent *ent, u_int8_t family, + + static int dl_seq_show_v2(struct seq_file *s, void *v) + { +- struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file)); + unsigned int *bucket = (unsigned int *)v; + struct dsthash_ent *ent; + +@@ -1188,7 +1188,7 @@ static int dl_seq_show_v2(struct seq_file *s, void *v) + + static int dl_seq_show_v1(struct seq_file *s, void *v) + { +- struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file)); + unsigned int *bucket = v; + struct dsthash_ent *ent; + +@@ -1202,7 +1202,7 @@ static int dl_seq_show_v1(struct seq_file *s, void *v) + + static int dl_seq_show(struct seq_file *s, void *v) + { +- struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->private)); ++ struct xt_hashlimit_htable *htable = PDE_DATA(file_inode(s->file)); + unsigned int *bucket = v; + struct dsthash_ent *ent; + +diff --git a/net/tipc/diag.c b/net/tipc/diag.c +index aaabb0b776dd..73137f4aeb68 100644 +--- a/net/tipc/diag.c ++++ b/net/tipc/diag.c +@@ -84,7 +84,9 @@ static int tipc_sock_diag_handler_dump(struct sk_buff *skb, + + if (h->nlmsg_flags & NLM_F_DUMP) { + struct netlink_dump_control c = { ++ .start = tipc_dump_start, + .dump = tipc_diag_dump, ++ .done = tipc_dump_done, + }; + netlink_dump_start(net->diag_nlsk, skb, h, &c); + return 0; +diff --git a/net/tipc/netlink.c b/net/tipc/netlink.c +index 6ff2254088f6..99ee419210ba 100644 +--- a/net/tipc/netlink.c ++++ b/net/tipc/netlink.c +@@ -167,7 +167,9 @@ static const struct genl_ops tipc_genl_v2_ops[] = { + }, + { + .cmd = TIPC_NL_SOCK_GET, ++ .start = tipc_dump_start, + .dumpit = tipc_nl_sk_dump, ++ .done = tipc_dump_done, + .policy = tipc_nl_policy, + }, + { +diff --git a/net/tipc/socket.c b/net/tipc/socket.c +index ac8ca238c541..bdb4a9a5a83a 100644 +--- a/net/tipc/socket.c ++++ b/net/tipc/socket.c +@@ 
-3233,45 +3233,69 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb, + struct netlink_callback *cb, + struct tipc_sock *tsk)) + { +- struct net *net = sock_net(skb->sk); +- struct tipc_net *tn = tipc_net(net); +- const struct bucket_table *tbl; +- u32 prev_portid = cb->args[1]; +- u32 tbl_id = cb->args[0]; +- struct rhash_head *pos; ++ struct rhashtable_iter *iter = (void *)cb->args[0]; + struct tipc_sock *tsk; + int err; + +- rcu_read_lock(); +- tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht); +- for (; tbl_id < tbl->size; tbl_id++) { +- rht_for_each_entry_rcu(tsk, pos, tbl, tbl_id, node) { +- spin_lock_bh(&tsk->sk.sk_lock.slock); +- if (prev_portid && prev_portid != tsk->portid) { +- spin_unlock_bh(&tsk->sk.sk_lock.slock); ++ rhashtable_walk_start(iter); ++ while ((tsk = rhashtable_walk_next(iter)) != NULL) { ++ if (IS_ERR(tsk)) { ++ err = PTR_ERR(tsk); ++ if (err == -EAGAIN) { ++ err = 0; + continue; + } ++ break; ++ } + +- err = skb_handler(skb, cb, tsk); +- if (err) { +- prev_portid = tsk->portid; +- spin_unlock_bh(&tsk->sk.sk_lock.slock); +- goto out; +- } +- +- prev_portid = 0; +- spin_unlock_bh(&tsk->sk.sk_lock.slock); ++ sock_hold(&tsk->sk); ++ rhashtable_walk_stop(iter); ++ lock_sock(&tsk->sk); ++ err = skb_handler(skb, cb, tsk); ++ if (err) { ++ release_sock(&tsk->sk); ++ sock_put(&tsk->sk); ++ goto out; + } ++ release_sock(&tsk->sk); ++ rhashtable_walk_start(iter); ++ sock_put(&tsk->sk); + } ++ rhashtable_walk_stop(iter); + out: +- rcu_read_unlock(); +- cb->args[0] = tbl_id; +- cb->args[1] = prev_portid; +- + return skb->len; + } + EXPORT_SYMBOL(tipc_nl_sk_walk); + ++int tipc_dump_start(struct netlink_callback *cb) ++{ ++ struct rhashtable_iter *iter = (void *)cb->args[0]; ++ struct net *net = sock_net(cb->skb->sk); ++ struct tipc_net *tn = tipc_net(net); ++ ++ if (!iter) { ++ iter = kmalloc(sizeof(*iter), GFP_KERNEL); ++ if (!iter) ++ return -ENOMEM; ++ ++ cb->args[0] = (long)iter; ++ } ++ ++ 
rhashtable_walk_enter(&tn->sk_rht, iter); ++ return 0; ++} ++EXPORT_SYMBOL(tipc_dump_start); ++ ++int tipc_dump_done(struct netlink_callback *cb) ++{ ++ struct rhashtable_iter *hti = (void *)cb->args[0]; ++ ++ rhashtable_walk_exit(hti); ++ kfree(hti); ++ return 0; ++} ++EXPORT_SYMBOL(tipc_dump_done); ++ + int tipc_sk_fill_sock_diag(struct sk_buff *skb, struct netlink_callback *cb, + struct tipc_sock *tsk, u32 sk_filter_state, + u64 (*tipc_diag_gen_cookie)(struct sock *sk)) +diff --git a/net/tipc/socket.h b/net/tipc/socket.h +index aff9b2ae5a1f..d43032e26532 100644 +--- a/net/tipc/socket.h ++++ b/net/tipc/socket.h +@@ -68,4 +68,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb, struct netlink_callback *cb, + int (*skb_handler)(struct sk_buff *skb, + struct netlink_callback *cb, + struct tipc_sock *tsk)); ++int tipc_dump_start(struct netlink_callback *cb); ++int tipc_dump_done(struct netlink_callback *cb); + #endif +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index 80bc986c79e5..733ccf867972 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -667,13 +667,13 @@ static int nl80211_msg_put_wmm_rules(struct sk_buff *msg, + goto nla_put_failure; + + if (nla_put_u16(msg, NL80211_WMMR_CW_MIN, +- rule->wmm_rule->client[j].cw_min) || ++ rule->wmm_rule.client[j].cw_min) || + nla_put_u16(msg, NL80211_WMMR_CW_MAX, +- rule->wmm_rule->client[j].cw_max) || ++ rule->wmm_rule.client[j].cw_max) || + nla_put_u8(msg, NL80211_WMMR_AIFSN, +- rule->wmm_rule->client[j].aifsn) || +- nla_put_u8(msg, NL80211_WMMR_TXOP, +- rule->wmm_rule->client[j].cot)) ++ rule->wmm_rule.client[j].aifsn) || ++ nla_put_u16(msg, NL80211_WMMR_TXOP, ++ rule->wmm_rule.client[j].cot)) + goto nla_put_failure; + + nla_nest_end(msg, nl_wmm_rule); +@@ -764,9 +764,9 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy, + + if (large) { + const struct ieee80211_reg_rule *rule = +- freq_reg_info(wiphy, chan->center_freq); ++ freq_reg_info(wiphy, 
MHZ_TO_KHZ(chan->center_freq)); + +- if (!IS_ERR(rule) && rule->wmm_rule) { ++ if (!IS_ERR_OR_NULL(rule) && rule->has_wmm) { + if (nl80211_msg_put_wmm_rules(msg, rule)) + goto nla_put_failure; + } +@@ -12099,6 +12099,7 @@ static int nl80211_update_ft_ies(struct sk_buff *skb, struct genl_info *info) + return -EOPNOTSUPP; + + if (!info->attrs[NL80211_ATTR_MDID] || ++ !info->attrs[NL80211_ATTR_IE] || + !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE])) + return -EINVAL; + +diff --git a/net/wireless/reg.c b/net/wireless/reg.c +index 4fc66a117b7d..2f702adf2912 100644 +--- a/net/wireless/reg.c ++++ b/net/wireless/reg.c +@@ -425,36 +425,23 @@ static const struct ieee80211_regdomain * + reg_copy_regd(const struct ieee80211_regdomain *src_regd) + { + struct ieee80211_regdomain *regd; +- int size_of_regd, size_of_wmms; ++ int size_of_regd; + unsigned int i; +- struct ieee80211_wmm_rule *d_wmm, *s_wmm; + + size_of_regd = + sizeof(struct ieee80211_regdomain) + + src_regd->n_reg_rules * sizeof(struct ieee80211_reg_rule); +- size_of_wmms = src_regd->n_wmm_rules * +- sizeof(struct ieee80211_wmm_rule); + +- regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL); ++ regd = kzalloc(size_of_regd, GFP_KERNEL); + if (!regd) + return ERR_PTR(-ENOMEM); + + memcpy(regd, src_regd, sizeof(struct ieee80211_regdomain)); + +- d_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd); +- s_wmm = (struct ieee80211_wmm_rule *)((u8 *)src_regd + size_of_regd); +- memcpy(d_wmm, s_wmm, size_of_wmms); +- +- for (i = 0; i < src_regd->n_reg_rules; i++) { ++ for (i = 0; i < src_regd->n_reg_rules; i++) + memcpy(®d->reg_rules[i], &src_regd->reg_rules[i], + sizeof(struct ieee80211_reg_rule)); +- if (!src_regd->reg_rules[i].wmm_rule) +- continue; + +- regd->reg_rules[i].wmm_rule = d_wmm + +- (src_regd->reg_rules[i].wmm_rule - s_wmm) / +- sizeof(struct ieee80211_wmm_rule); +- } + return regd; + } + +@@ -860,9 +847,10 @@ static bool valid_regdb(const u8 *data, unsigned int size) + return true; + } + 
+-static void set_wmm_rule(struct ieee80211_wmm_rule *rule, ++static void set_wmm_rule(struct ieee80211_reg_rule *rrule, + struct fwdb_wmm_rule *wmm) + { ++ struct ieee80211_wmm_rule *rule = &rrule->wmm_rule; + unsigned int i; + + for (i = 0; i < IEEE80211_NUM_ACS; i++) { +@@ -876,11 +864,13 @@ static void set_wmm_rule(struct ieee80211_wmm_rule *rule, + rule->ap[i].aifsn = wmm->ap[i].aifsn; + rule->ap[i].cot = 1000 * be16_to_cpu(wmm->ap[i].cot); + } ++ ++ rrule->has_wmm = true; + } + + static int __regdb_query_wmm(const struct fwdb_header *db, + const struct fwdb_country *country, int freq, +- u32 *dbptr, struct ieee80211_wmm_rule *rule) ++ struct ieee80211_reg_rule *rule) + { + unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2; + struct fwdb_collection *coll = (void *)((u8 *)db + ptr); +@@ -901,8 +891,6 @@ static int __regdb_query_wmm(const struct fwdb_header *db, + wmm_ptr = be16_to_cpu(rrule->wmm_ptr) << 2; + wmm = (void *)((u8 *)db + wmm_ptr); + set_wmm_rule(rule, wmm); +- if (dbptr) +- *dbptr = wmm_ptr; + return 0; + } + } +@@ -910,8 +898,7 @@ static int __regdb_query_wmm(const struct fwdb_header *db, + return -ENODATA; + } + +-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr, +- struct ieee80211_wmm_rule *rule) ++int reg_query_regdb_wmm(char *alpha2, int freq, struct ieee80211_reg_rule *rule) + { + const struct fwdb_header *hdr = regdb; + const struct fwdb_country *country; +@@ -925,8 +912,7 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr, + country = &hdr->country[0]; + while (country->coll_ptr) { + if (alpha2_equal(alpha2, country->alpha2)) +- return __regdb_query_wmm(regdb, country, freq, dbptr, +- rule); ++ return __regdb_query_wmm(regdb, country, freq, rule); + + country++; + } +@@ -935,32 +921,13 @@ int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr, + } + EXPORT_SYMBOL(reg_query_regdb_wmm); + +-struct wmm_ptrs { +- struct ieee80211_wmm_rule *rule; +- u32 ptr; +-}; +- +-static struct ieee80211_wmm_rule 
*find_wmm_ptr(struct wmm_ptrs *wmm_ptrs, +- u32 wmm_ptr, int n_wmms) +-{ +- int i; +- +- for (i = 0; i < n_wmms; i++) { +- if (wmm_ptrs[i].ptr == wmm_ptr) +- return wmm_ptrs[i].rule; +- } +- return NULL; +-} +- + static int regdb_query_country(const struct fwdb_header *db, + const struct fwdb_country *country) + { + unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2; + struct fwdb_collection *coll = (void *)((u8 *)db + ptr); + struct ieee80211_regdomain *regdom; +- struct ieee80211_regdomain *tmp_rd; +- unsigned int size_of_regd, i, n_wmms = 0; +- struct wmm_ptrs *wmm_ptrs; ++ unsigned int size_of_regd, i; + + size_of_regd = sizeof(struct ieee80211_regdomain) + + coll->n_rules * sizeof(struct ieee80211_reg_rule); +@@ -969,12 +936,6 @@ static int regdb_query_country(const struct fwdb_header *db, + if (!regdom) + return -ENOMEM; + +- wmm_ptrs = kcalloc(coll->n_rules, sizeof(*wmm_ptrs), GFP_KERNEL); +- if (!wmm_ptrs) { +- kfree(regdom); +- return -ENOMEM; +- } +- + regdom->n_reg_rules = coll->n_rules; + regdom->alpha2[0] = country->alpha2[0]; + regdom->alpha2[1] = country->alpha2[1]; +@@ -1013,37 +974,11 @@ static int regdb_query_country(const struct fwdb_header *db, + 1000 * be16_to_cpu(rule->cac_timeout); + if (rule->len >= offsetofend(struct fwdb_rule, wmm_ptr)) { + u32 wmm_ptr = be16_to_cpu(rule->wmm_ptr) << 2; +- struct ieee80211_wmm_rule *wmm_pos = +- find_wmm_ptr(wmm_ptrs, wmm_ptr, n_wmms); +- struct fwdb_wmm_rule *wmm; +- struct ieee80211_wmm_rule *wmm_rule; +- +- if (wmm_pos) { +- rrule->wmm_rule = wmm_pos; +- continue; +- } +- wmm = (void *)((u8 *)db + wmm_ptr); +- tmp_rd = krealloc(regdom, size_of_regd + (n_wmms + 1) * +- sizeof(struct ieee80211_wmm_rule), +- GFP_KERNEL); +- +- if (!tmp_rd) { +- kfree(regdom); +- kfree(wmm_ptrs); +- return -ENOMEM; +- } +- regdom = tmp_rd; +- +- wmm_rule = (struct ieee80211_wmm_rule *) +- ((u8 *)regdom + size_of_regd + n_wmms * +- sizeof(struct ieee80211_wmm_rule)); ++ struct fwdb_wmm_rule *wmm = (void *)((u8 *)db + 
wmm_ptr); + +- set_wmm_rule(wmm_rule, wmm); +- wmm_ptrs[n_wmms].ptr = wmm_ptr; +- wmm_ptrs[n_wmms++].rule = wmm_rule; ++ set_wmm_rule(rrule, wmm); + } + } +- kfree(wmm_ptrs); + + return reg_schedule_apply(regdom); + } +diff --git a/net/wireless/util.c b/net/wireless/util.c +index 3c654cd7ba56..908bf5b6d89e 100644 +--- a/net/wireless/util.c ++++ b/net/wireless/util.c +@@ -1374,7 +1374,7 @@ bool ieee80211_chandef_to_operating_class(struct cfg80211_chan_def *chandef, + u8 *op_class) + { + u8 vht_opclass; +- u16 freq = chandef->center_freq1; ++ u32 freq = chandef->center_freq1; + + if (freq >= 2412 && freq <= 2472) { + if (chandef->width > NL80211_CHAN_WIDTH_40) +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index d14b05f68d6d..08b6369f930b 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -6455,6 +6455,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER), + SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE), + SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE), ++ SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME), + SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME), + SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3), + SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER), +diff --git a/tools/hv/hv_fcopy_daemon.c b/tools/hv/hv_fcopy_daemon.c +index d78aed86af09..8ff8cb1a11f4 100644 +--- a/tools/hv/hv_fcopy_daemon.c ++++ b/tools/hv/hv_fcopy_daemon.c +@@ -234,6 +234,7 @@ int main(int argc, char *argv[]) + break; + + default: ++ error = HV_E_FAIL; + syslog(LOG_ERR, "Unknown operation: %d", + buffer.hdr.operation); + +diff --git a/tools/kvm/kvm_stat/kvm_stat b/tools/kvm/kvm_stat/kvm_stat +index 
56c4b3f8a01b..7c92545931e3 100755 +--- a/tools/kvm/kvm_stat/kvm_stat ++++ b/tools/kvm/kvm_stat/kvm_stat +@@ -759,13 +759,20 @@ class DebugfsProvider(Provider): + if len(vms) == 0: + self.do_read = False + +- self.paths = filter(lambda x: "{}-".format(pid) in x, vms) ++ self.paths = list(filter(lambda x: "{}-".format(pid) in x, vms)) + + else: + self.paths = [] + self.do_read = True + self.reset() + ++ def _verify_paths(self): ++ """Remove invalid paths""" ++ for path in self.paths: ++ if not os.path.exists(os.path.join(PATH_DEBUGFS_KVM, path)): ++ self.paths.remove(path) ++ continue ++ + def read(self, reset=0, by_guest=0): + """Returns a dict with format:'file name / field -> current value'. + +@@ -780,6 +787,7 @@ class DebugfsProvider(Provider): + # If no debugfs filtering support is available, then don't read. + if not self.do_read: + return results ++ self._verify_paths() + + paths = self.paths + if self._pid == 0: +@@ -1162,6 +1170,9 @@ class Tui(object): + + return sorted_items + ++ if not self._is_running_guest(self.stats.pid_filter): ++ # leave final data on screen ++ return + row = 3 + self.screen.move(row, 0) + self.screen.clrtobot() +@@ -1219,10 +1230,10 @@ class Tui(object): + (x, term_width) = self.screen.getmaxyx() + row = 2 + for line in text: +- start = (term_width - len(line)) / 2 ++ start = (term_width - len(line)) // 2 + self.screen.addstr(row, start, line) + row += 1 +- self.screen.addstr(row + 1, (term_width - len(hint)) / 2, hint, ++ self.screen.addstr(row + 1, (term_width - len(hint)) // 2, hint, + curses.A_STANDOUT) + self.screen.getkey() + +@@ -1319,6 +1330,12 @@ class Tui(object): + msg = '"' + str(val) + '": Invalid value' + self._refresh_header() + ++ def _is_running_guest(self, pid): ++ """Check if pid is still a running process.""" ++ if not pid: ++ return True ++ return os.path.isdir(os.path.join('/proc/', str(pid))) ++ + def _show_vm_selection_by_guest(self): + """Draws guest selection mask. 
+ +@@ -1346,7 +1363,7 @@ class Tui(object): + if not guest or guest == '0': + break + if guest.isdigit(): +- if not os.path.isdir(os.path.join('/proc/', guest)): ++ if not self._is_running_guest(guest): + msg = '"' + guest + '": Not a running process' + continue + pid = int(guest) +diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c +index 20e7d74d86cd..10a44e946f77 100644 +--- a/tools/perf/arch/powerpc/util/sym-handling.c ++++ b/tools/perf/arch/powerpc/util/sym-handling.c +@@ -22,15 +22,16 @@ bool elf__needs_adjust_symbols(GElf_Ehdr ehdr) + + #endif + +-#if !defined(_CALL_ELF) || _CALL_ELF != 2 + int arch__choose_best_symbol(struct symbol *syma, + struct symbol *symb __maybe_unused) + { + char *sym = syma->name; + ++#if !defined(_CALL_ELF) || _CALL_ELF != 2 + /* Skip over any initial dot */ + if (*sym == '.') + sym++; ++#endif + + /* Avoid "SyS" kernel syscall aliases */ + if (strlen(sym) >= 3 && !strncmp(sym, "SyS", 3)) +@@ -41,6 +42,7 @@ int arch__choose_best_symbol(struct symbol *syma, + return SYMBOL_A; + } + ++#if !defined(_CALL_ELF) || _CALL_ELF != 2 + /* Allow matching against dot variants */ + int arch__compare_symbol_names(const char *namea, const char *nameb) + { +diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c +index f91775b4bc3c..3b05219c3ed7 100644 +--- a/tools/perf/util/annotate.c ++++ b/tools/perf/util/annotate.c +@@ -245,8 +245,14 @@ find_target: + + indirect_call: + tok = strchr(endptr, '*'); +- if (tok != NULL) +- ops->target.addr = strtoull(tok + 1, NULL, 16); ++ if (tok != NULL) { ++ endptr++; ++ ++ /* Indirect call can use a non-rip register and offset: callq *0x8(%rbx). ++ * Do not parse such instruction. 
*/ ++ if (strstr(endptr, "(%r") == NULL) ++ ops->target.addr = strtoull(endptr, NULL, 16); ++ } + goto find_target; + } + +@@ -275,7 +281,19 @@ bool ins__is_call(const struct ins *ins) + return ins->ops == &call_ops || ins->ops == &s390_call_ops; + } + +-static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *ops, struct map_symbol *ms) ++/* ++ * Prevents from matching commas in the comment section, e.g.: ++ * ffff200008446e70: b.cs ffff2000084470f4 // b.hs, b.nlast ++ */ ++static inline const char *validate_comma(const char *c, struct ins_operands *ops) ++{ ++ if (ops->raw_comment && c > ops->raw_comment) ++ return NULL; ++ ++ return c; ++} ++ ++static int jump__parse(struct arch *arch, struct ins_operands *ops, struct map_symbol *ms) + { + struct map *map = ms->map; + struct symbol *sym = ms->sym; +@@ -284,6 +302,10 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op + }; + const char *c = strchr(ops->raw, ','); + u64 start, end; ++ ++ ops->raw_comment = strchr(ops->raw, arch->objdump.comment_char); ++ c = validate_comma(c, ops); ++ + /* + * Examples of lines to parse for the _cpp_lex_token@@Base + * function: +@@ -303,6 +325,7 @@ static int jump__parse(struct arch *arch __maybe_unused, struct ins_operands *op + ops->target.addr = strtoull(c, NULL, 16); + if (!ops->target.addr) { + c = strchr(c, ','); ++ c = validate_comma(c, ops); + if (c++ != NULL) + ops->target.addr = strtoull(c, NULL, 16); + } +@@ -360,9 +383,12 @@ static int jump__scnprintf(struct ins *ins, char *bf, size_t size, + return scnprintf(bf, size, "%-6s %s", ins->name, ops->target.sym->name); + + c = strchr(ops->raw, ','); ++ c = validate_comma(c, ops); ++ + if (c != NULL) { + const char *c2 = strchr(c + 1, ','); + ++ c2 = validate_comma(c2, ops); + /* check for 3-op insn */ + if (c2 != NULL) + c = c2; +diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h +index a4c0d91907e6..61e0c7fd5efd 100644 +--- a/tools/perf/util/annotate.h 
++++ b/tools/perf/util/annotate.h +@@ -21,6 +21,7 @@ struct ins { + + struct ins_operands { + char *raw; ++ char *raw_comment; + struct { + char *raw; + char *name; +diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c +index 0d5504751cc5..6324afba8fdd 100644 +--- a/tools/perf/util/evsel.c ++++ b/tools/perf/util/evsel.c +@@ -251,8 +251,9 @@ struct perf_evsel *perf_evsel__new_idx(struct perf_event_attr *attr, int idx) + { + struct perf_evsel *evsel = zalloc(perf_evsel__object.size); + +- if (evsel != NULL) +- perf_evsel__init(evsel, attr, idx); ++ if (!evsel) ++ return NULL; ++ perf_evsel__init(evsel, attr, idx); + + if (perf_evsel__is_bpf_output(evsel)) { + evsel->attr.sample_type |= (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME | +diff --git a/tools/perf/util/trace-event-info.c b/tools/perf/util/trace-event-info.c +index c85d0d1a65ed..7b0ca7cbb7de 100644 +--- a/tools/perf/util/trace-event-info.c ++++ b/tools/perf/util/trace-event-info.c +@@ -377,7 +377,7 @@ out: + + static int record_saved_cmdline(void) + { +- unsigned int size; ++ unsigned long long size; + char *path; + struct stat st; + int ret, err = 0; +diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh +index f8cc38afffa2..32a194e3e07a 100755 +--- a/tools/testing/selftests/net/pmtu.sh ++++ b/tools/testing/selftests/net/pmtu.sh +@@ -46,6 +46,9 @@ + # Kselftest framework requirement - SKIP code is 4. 
+ ksft_skip=4 + ++# Some systems don't have a ping6 binary anymore ++which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping) ++ + tests=" + pmtu_vti6_exception vti6: PMTU exceptions + pmtu_vti4_exception vti4: PMTU exceptions +@@ -274,7 +277,7 @@ test_pmtu_vti6_exception() { + mtu "${ns_b}" veth_b 4000 + mtu "${ns_a}" vti6_a 5000 + mtu "${ns_b}" vti6_b 5000 +- ${ns_a} ping6 -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null ++ ${ns_a} ${ping6} -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null + + # Check that exception was created + if [ "$(route_get_dst_pmtu_from_exception "${ns_a}" ${vti6_b_addr})" = "" ]; then +@@ -334,7 +337,7 @@ test_pmtu_vti4_link_add_mtu() { + fail=0 + + min=68 +- max=$((65528 - 20)) ++ max=$((65535 - 20)) + # Check invalid values first + for v in $((min - 1)) $((max + 1)); do + ${ns_a} ip link add vti4_a mtu ${v} type vti local ${veth4_a_addr} remote ${veth4_b_addr} key 10 2>/dev/null +diff --git a/tools/testing/selftests/rseq/param_test.c b/tools/testing/selftests/rseq/param_test.c +index 615252331813..4bc071525bf7 100644 +--- a/tools/testing/selftests/rseq/param_test.c ++++ b/tools/testing/selftests/rseq/param_test.c +@@ -56,15 +56,13 @@ unsigned int yield_mod_cnt, nr_abort; + printf(fmt, ## __VA_ARGS__); \ + } while (0) + +-#if defined(__x86_64__) || defined(__i386__) ++#ifdef __i386__ + + #define INJECT_ASM_REG "eax" + + #define RSEQ_INJECT_CLOBBER \ + , INJECT_ASM_REG + +-#ifdef __i386__ +- + #define RSEQ_INJECT_ASM(n) \ + "mov asm_loop_cnt_" #n ", %%" INJECT_ASM_REG "\n\t" \ + "test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \ +@@ -76,9 +74,16 @@ unsigned int yield_mod_cnt, nr_abort; + + #elif defined(__x86_64__) + ++#define INJECT_ASM_REG_P "rax" ++#define INJECT_ASM_REG "eax" ++ ++#define RSEQ_INJECT_CLOBBER \ ++ , INJECT_ASM_REG_P \ ++ , INJECT_ASM_REG ++ + #define RSEQ_INJECT_ASM(n) \ +- "lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG "\n\t" \ +- "mov (%%" INJECT_ASM_REG "), %%" INJECT_ASM_REG "\n\t" \ 
++ "lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG_P "\n\t" \ ++ "mov (%%" INJECT_ASM_REG_P "), %%" INJECT_ASM_REG "\n\t" \ + "test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \ + "jz 333f\n\t" \ + "222:\n\t" \ +@@ -86,10 +91,6 @@ unsigned int yield_mod_cnt, nr_abort; + "jnz 222b\n\t" \ + "333:\n\t" + +-#else +-#error "Unsupported architecture" +-#endif +- + #elif defined(__ARMEL__) + + #define RSEQ_INJECT_INPUT \ +diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json +index f03763d81617..30f9b54bd666 100644 +--- a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json ++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json +@@ -312,6 +312,54 @@ + "$TC actions flush action police" + ] + }, ++ { ++ "id": "6aaf", ++ "name": "Add police actions with conform-exceed control pass/pipe [with numeric values]", ++ "category": [ ++ "actions", ++ "police" ++ ], ++ "setup": [ ++ [ ++ "$TC actions flush action police", ++ 0, ++ 1, ++ 255 ++ ] ++ ], ++ "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 0/3 index 1", ++ "expExitCode": "0", ++ "verifyCmd": "$TC actions get action police index 1", ++ "matchPattern": "action order [0-9]*: police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action pass/pipe", ++ "matchCount": "1", ++ "teardown": [ ++ "$TC actions flush action police" ++ ] ++ }, ++ { ++ "id": "29b1", ++ "name": "Add police actions with conform-exceed control /drop", ++ "category": [ ++ "actions", ++ "police" ++ ], ++ "setup": [ ++ [ ++ "$TC actions flush action police", ++ 0, ++ 1, ++ 255 ++ ] ++ ], ++ "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 10/drop index 1", ++ "expExitCode": "255", ++ "verifyCmd": "$TC actions ls action police", ++ "matchPattern": "action order [0-9]*: police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action ", ++ "matchCount": "0", ++ "teardown": [ ++ "$TC actions flush 
action police" ++ ] ++ }, + { + "id": "c26f", + "name": "Add police action with invalid peakrate value", +diff --git a/tools/vm/page-types.c b/tools/vm/page-types.c +index cce853dca691..a4c31fb2887b 100644 +--- a/tools/vm/page-types.c ++++ b/tools/vm/page-types.c +@@ -156,12 +156,6 @@ static const char * const page_flag_names[] = { + }; + + +-static const char * const debugfs_known_mountpoints[] = { +- "/sys/kernel/debug", +- "/debug", +- 0, +-}; +- + /* + * data structures + */ +diff --git a/tools/vm/slabinfo.c b/tools/vm/slabinfo.c +index f82c2eaa859d..334b16db0ebb 100644 +--- a/tools/vm/slabinfo.c ++++ b/tools/vm/slabinfo.c +@@ -30,8 +30,8 @@ struct slabinfo { + int alias; + int refs; + int aliases, align, cache_dma, cpu_slabs, destroy_by_rcu; +- int hwcache_align, object_size, objs_per_slab; +- int sanity_checks, slab_size, store_user, trace; ++ unsigned int hwcache_align, object_size, objs_per_slab; ++ unsigned int sanity_checks, slab_size, store_user, trace; + int order, poison, reclaim_account, red_zone; + unsigned long partial, objects, slabs, objects_partial, objects_total; + unsigned long alloc_fastpath, alloc_slowpath;