From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" <mpagano@gentoo.org>
Message-ID: <1520792803.c98b20cec7950610d68008ac00ffb1e0501b0d63.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.9 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1086_linux-4.9.87.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: c98b20cec7950610d68008ac00ffb1e0501b0d63
X-VCS-Branch: 4.9
Date: Sun, 11 Mar 2018 18:26:51 +0000 (UTC)

commit:     c98b20cec7950610d68008ac00ffb1e0501b0d63
Author:     Mike Pagano <mpagano@gentoo.org>
AuthorDate: Sun Mar 11 18:26:43 2018 +0000
Commit:     Mike Pagano <mpagano@gentoo.org>
CommitDate: Sun Mar 11 18:26:43 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c98b20ce

Linux patch 4.9.87

 0000_README             |    4 +
 1086_linux-4.9.87.patch | 2410 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2414 insertions(+)

diff --git a/0000_README b/0000_README
index f21a5fb..02091ce 100644
--- a/0000_README
+++ b/0000_README
@@ -387,6 +387,10 @@ Patch:  1085_linux-4.9.86.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.9.86
 
+Patch:  1086_linux-4.9.87.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.9.87
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1086_linux-4.9.87.patch b/1086_linux-4.9.87.patch new file mode 100644 index 0000000..47608e9 --- /dev/null +++ b/1086_linux-4.9.87.patch @@ -0,0 +1,2410 @@ +diff --git a/Makefile b/Makefile +index e918d25e95bb..3043937a65d1 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 4 + PATCHLEVEL = 9 +-SUBLEVEL = 86 ++SUBLEVEL = 87 + EXTRAVERSION = + NAME = Roaring Lionus + +diff --git a/arch/arm/boot/dts/logicpd-som-lv.dtsi b/arch/arm/boot/dts/logicpd-som-lv.dtsi +index 4f2c5ec75714..e262fa9ef334 100644 +--- a/arch/arm/boot/dts/logicpd-som-lv.dtsi ++++ b/arch/arm/boot/dts/logicpd-som-lv.dtsi +@@ -97,6 +97,8 @@ + }; + + &i2c1 { ++ pinctrl-names = "default"; ++ pinctrl-0 = <&i2c1_pins>; + clock-frequency = <2600000>; + + twl: twl@48 { +@@ -215,7 +217,12 @@ + >; + }; + +- ++ i2c1_pins: pinmux_i2c1_pins { ++ pinctrl-single,pins = < ++ OMAP3_CORE1_IOPAD(0x21ba, PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */ ++ OMAP3_CORE1_IOPAD(0x21bc, PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */ ++ >; ++ }; + }; + + &omap3_pmx_wkup { +diff --git a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi +index efe53998c961..08f0a35dc0d1 100644 +--- a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi ++++ b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi +@@ -100,6 +100,8 @@ + }; + + &i2c1 { ++ pinctrl-names = "default"; ++ pinctrl-0 = <&i2c1_pins>; + clock-frequency = <2600000>; + + twl: twl@48 { +@@ -207,6 +209,12 @@ + OMAP3_CORE1_IOPAD(0x21b8, PIN_INPUT | MUX_MODE0) /* hsusb0_data7.hsusb0_data7 */ + >; + }; ++ i2c1_pins: pinmux_i2c1_pins { ++ pinctrl-single,pins = < ++ OMAP3_CORE1_IOPAD(0x21ba, PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */ ++ OMAP3_CORE1_IOPAD(0x21bc, PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */ ++ >; ++ }; + }; + + &uart2 { +diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile +index 92eab1d51785..61049216e4d5 100644 +--- a/arch/arm/kvm/hyp/Makefile ++++ b/arch/arm/kvm/hyp/Makefile +@@ -6,6 +6,8 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING + + KVM=../../../../virt/kvm + ++CFLAGS_ARMV7VE :=$(call cc-option, -march=armv7ve) ++ + obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v2-sr.o + obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v3-sr.o + obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/timer-sr.o +@@ -14,7 +16,10 @@ obj-$(CONFIG_KVM_ARM_HOST) += tlb.o + obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o + obj-$(CONFIG_KVM_ARM_HOST) += vfp.o + obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o ++CFLAGS_banked-sr.o += $(CFLAGS_ARMV7VE) ++ + obj-$(CONFIG_KVM_ARM_HOST) += entry.o + obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o + obj-$(CONFIG_KVM_ARM_HOST) += switch.o ++CFLAGS_switch.o += $(CFLAGS_ARMV7VE) + obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o +diff --git a/arch/arm/kvm/hyp/banked-sr.c b/arch/arm/kvm/hyp/banked-sr.c +index 111bda8cdebd..be4b8b0a40ad 100644 +--- a/arch/arm/kvm/hyp/banked-sr.c ++++ b/arch/arm/kvm/hyp/banked-sr.c +@@ -20,6 +20,10 @@ + + #include + ++/* ++ * gcc before 4.9 doesn't understand -march=armv7ve, so we have to ++ * trick the assembler. 
++ */ + __asm__(".arch_extension virt"); + + void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt) +diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig +index 541647f57192..895c0746fe50 100644 +--- a/arch/arm/mach-mvebu/Kconfig ++++ b/arch/arm/mach-mvebu/Kconfig +@@ -42,7 +42,7 @@ config MACH_ARMADA_375 + depends on ARCH_MULTI_V7 + select ARMADA_370_XP_IRQ + select ARM_ERRATA_720789 +- select ARM_ERRATA_753970 ++ select PL310_ERRATA_753970 + select ARM_GIC + select ARMADA_375_CLK + select HAVE_ARM_SCU +@@ -58,7 +58,7 @@ config MACH_ARMADA_38X + bool "Marvell Armada 380/385 boards" + depends on ARCH_MULTI_V7 + select ARM_ERRATA_720789 +- select ARM_ERRATA_753970 ++ select PL310_ERRATA_753970 + select ARM_GIC + select ARMADA_370_XP_IRQ + select ARMADA_38X_CLK +diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c +index d8199e12fb6e..b47a26f4290c 100644 +--- a/arch/arm64/net/bpf_jit_comp.c ++++ b/arch/arm64/net/bpf_jit_comp.c +@@ -234,8 +234,9 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx) + off = offsetof(struct bpf_array, map.max_entries); + emit_a64_mov_i64(tmp, off, ctx); + emit(A64_LDR32(tmp, r2, tmp), ctx); ++ emit(A64_MOV(0, r3, r3), ctx); + emit(A64_CMP(0, r3, tmp), ctx); +- emit(A64_B_(A64_COND_GE, jmp_offset), ctx); ++ emit(A64_B_(A64_COND_CS, jmp_offset), ctx); + + /* if (tail_call_cnt > MAX_TAIL_CALL_CNT) + * goto out; +@@ -243,7 +244,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx) + */ + emit_a64_mov_i64(tmp, MAX_TAIL_CALL_CNT, ctx); + emit(A64_CMP(1, tcc, tmp), ctx); +- emit(A64_B_(A64_COND_GT, jmp_offset), ctx); ++ emit(A64_B_(A64_COND_HI, jmp_offset), ctx); + emit(A64_ADD_I(1, tcc, tcc, 1), ctx); + + /* prog = array->ptrs[index]; +diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h +index 1d8c24dc04d4..88290d32b956 100644 +--- a/arch/parisc/include/asm/cacheflush.h ++++ b/arch/parisc/include/asm/cacheflush.h +@@ -25,6 +25,7 @@ void flush_user_icache_range_asm(unsigned long, unsigned long); + void flush_kernel_icache_range_asm(unsigned long, unsigned long); + void flush_user_dcache_range_asm(unsigned long, unsigned long); + void flush_kernel_dcache_range_asm(unsigned long, unsigned long); ++void purge_kernel_dcache_range_asm(unsigned long, unsigned long); + void flush_kernel_dcache_page_asm(void *); + void flush_kernel_icache_page(void *); + void flush_user_dcache_range(unsigned long, unsigned long); +diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c +index df757c9675e6..025afe5f17a7 100644 +--- a/arch/parisc/kernel/cache.c ++++ b/arch/parisc/kernel/cache.c +@@ -464,10 +464,10 @@ EXPORT_SYMBOL(copy_user_page); + int __flush_tlb_range(unsigned long sid, unsigned long start, + unsigned long end) + { +- unsigned long flags, size; ++ unsigned long flags; + +- size = (end - start); +- if (size >= parisc_tlb_flush_threshold) { ++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) && ++ end - start >= parisc_tlb_flush_threshold) { + flush_tlb_all(); + return 1; + } +@@ -538,13 +538,11 @@ void flush_cache_mm(struct mm_struct *mm) + struct vm_area_struct *vma; + pgd_t *pgd; + +- /* Flush the TLB to avoid speculation if coherency is required. */ +- if (parisc_requires_coherency()) +- flush_tlb_all(); +- + /* Flushing the whole cache on each cpu takes forever on + rp3440, etc. So, avoid it if the mm isn't too big. 
*/ +- if (mm_total_size(mm) >= parisc_cache_flush_threshold) { ++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) && ++ mm_total_size(mm) >= parisc_cache_flush_threshold) { ++ flush_tlb_all(); + flush_cache_all(); + return; + } +@@ -552,9 +550,9 @@ void flush_cache_mm(struct mm_struct *mm) + if (mm->context == mfsp(3)) { + for (vma = mm->mmap; vma; vma = vma->vm_next) { + flush_user_dcache_range_asm(vma->vm_start, vma->vm_end); +- if ((vma->vm_flags & VM_EXEC) == 0) +- continue; +- flush_user_icache_range_asm(vma->vm_start, vma->vm_end); ++ if (vma->vm_flags & VM_EXEC) ++ flush_user_icache_range_asm(vma->vm_start, vma->vm_end); ++ flush_tlb_range(vma, vma->vm_start, vma->vm_end); + } + return; + } +@@ -598,14 +596,9 @@ flush_user_icache_range(unsigned long start, unsigned long end) + void flush_cache_range(struct vm_area_struct *vma, + unsigned long start, unsigned long end) + { +- BUG_ON(!vma->vm_mm->context); +- +- /* Flush the TLB to avoid speculation if coherency is required. */ +- if (parisc_requires_coherency()) ++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) && ++ end - start >= parisc_cache_flush_threshold) { + flush_tlb_range(vma, start, end); +- +- if ((end - start) >= parisc_cache_flush_threshold +- || vma->vm_mm->context != mfsp(3)) { + flush_cache_all(); + return; + } +@@ -613,6 +606,7 @@ void flush_cache_range(struct vm_area_struct *vma, + flush_user_dcache_range_asm(start, end); + if (vma->vm_flags & VM_EXEC) + flush_user_icache_range_asm(start, end); ++ flush_tlb_range(vma, start, end); + } + + void +@@ -621,8 +615,7 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long + BUG_ON(!vma->vm_mm->context); + + if (pfn_valid(pfn)) { +- if (parisc_requires_coherency()) +- flush_tlb_page(vma, vmaddr); ++ flush_tlb_page(vma, vmaddr); + __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn)); + } + } +@@ -630,21 +623,33 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long + void flush_kernel_vmap_range(void *vaddr, int size) + { + unsigned long start = (unsigned long)vaddr; ++ unsigned long end = start + size; + +- if ((unsigned long)size > parisc_cache_flush_threshold) ++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) && ++ (unsigned long)size >= parisc_cache_flush_threshold) { ++ flush_tlb_kernel_range(start, end); + flush_data_cache(); +- else +- flush_kernel_dcache_range_asm(start, start + size); ++ return; ++ } ++ ++ flush_kernel_dcache_range_asm(start, end); ++ flush_tlb_kernel_range(start, end); + } + EXPORT_SYMBOL(flush_kernel_vmap_range); + + void invalidate_kernel_vmap_range(void *vaddr, int size) + { + unsigned long start = (unsigned long)vaddr; ++ unsigned long end = start + size; + +- if ((unsigned long)size > parisc_cache_flush_threshold) ++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) && ++ (unsigned long)size >= parisc_cache_flush_threshold) { ++ flush_tlb_kernel_range(start, end); + flush_data_cache(); +- else +- flush_kernel_dcache_range_asm(start, start + size); ++ return; ++ } ++ ++ purge_kernel_dcache_range_asm(start, end); ++ flush_tlb_kernel_range(start, end); + } + EXPORT_SYMBOL(invalidate_kernel_vmap_range); +diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S +index 2d40c4ff3f69..67b0f7532e83 100644 +--- a/arch/parisc/kernel/pacache.S ++++ b/arch/parisc/kernel/pacache.S +@@ -1110,6 +1110,28 @@ ENTRY_CFI(flush_kernel_dcache_range_asm) + .procend + ENDPROC_CFI(flush_kernel_dcache_range_asm) + ++ENTRY_CFI(purge_kernel_dcache_range_asm) ++ .proc ++ 
.callinfo NO_CALLS ++ .entry ++ ++ ldil L%dcache_stride, %r1 ++ ldw R%dcache_stride(%r1), %r23 ++ ldo -1(%r23), %r21 ++ ANDCM %r26, %r21, %r26 ++ ++1: cmpb,COND(<<),n %r26, %r25,1b ++ pdc,m %r23(%r26) ++ ++ sync ++ syncdma ++ bv %r0(%r2) ++ nop ++ .exit ++ ++ .procend ++ENDPROC_CFI(purge_kernel_dcache_range_asm) ++ + ENTRY_CFI(flush_user_icache_range_asm) + .proc + .callinfo NO_CALLS +diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c +index 0fe98a567125..be9d968244ad 100644 +--- a/arch/powerpc/net/bpf_jit_comp64.c ++++ b/arch/powerpc/net/bpf_jit_comp64.c +@@ -245,6 +245,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32 + * goto out; + */ + PPC_LWZ(b2p[TMP_REG_1], b2p_bpf_array, offsetof(struct bpf_array, map.max_entries)); ++ PPC_RLWINM(b2p_index, b2p_index, 0, 0, 31); + PPC_CMPLW(b2p_index, b2p[TMP_REG_1]); + PPC_BCC(COND_GE, out); + +diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h +index 8b272a08d1a8..e2e09347ee3c 100644 +--- a/arch/x86/include/asm/mmu.h ++++ b/arch/x86/include/asm/mmu.h +@@ -3,12 +3,18 @@ + + #include + #include ++#include + + /* +- * The x86 doesn't have a mmu context, but +- * we put the segment information here. ++ * x86 has arch-specific MMU state beyond what lives in mm_struct. + */ + typedef struct { ++ /* ++ * ctx_id uniquely identifies this mm_struct. A ctx_id will never ++ * be reused, and zero is not a valid ctx_id. ++ */ ++ u64 ctx_id; ++ + #ifdef CONFIG_MODIFY_LDT_SYSCALL + struct ldt_struct *ldt; + #endif +@@ -33,6 +39,11 @@ typedef struct { + #endif + } mm_context_t; + ++#define INIT_MM_CONTEXT(mm) \ ++ .context = { \ ++ .ctx_id = 1, \ ++ } ++ + void leave_mm(int cpu); + + #endif /* _ASM_X86_MMU_H */ +diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h +index d23e35584f15..5a295bb97103 100644 +--- a/arch/x86/include/asm/mmu_context.h ++++ b/arch/x86/include/asm/mmu_context.h +@@ -12,6 +12,9 @@ + #include + #include + #include ++ ++extern atomic64_t last_mm_ctx_id; ++ + #ifndef CONFIG_PARAVIRT + static inline void paravirt_activate_mm(struct mm_struct *prev, + struct mm_struct *next) +@@ -106,6 +109,8 @@ static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk) + static inline int init_new_context(struct task_struct *tsk, + struct mm_struct *mm) + { ++ mm->context.ctx_id = atomic64_inc_return(&last_mm_ctx_id); ++ + #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS + if (cpu_feature_enabled(X86_FEATURE_OSPKE)) { + /* pkey 0 is the default and always allocated */ +diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h +index 76b058533e47..81a1be326571 100644 +--- a/arch/x86/include/asm/nospec-branch.h ++++ b/arch/x86/include/asm/nospec-branch.h +@@ -177,4 +177,41 @@ static inline void indirect_branch_prediction_barrier(void) + } + + #endif /* __ASSEMBLY__ */ ++ ++/* ++ * Below is used in the eBPF JIT compiler and emits the byte sequence ++ * for the following assembly: ++ * ++ * With retpolines configured: ++ * ++ * callq do_rop ++ * spec_trap: ++ * pause ++ * lfence ++ * jmp spec_trap ++ * do_rop: ++ * mov %rax,(%rsp) ++ * retq ++ * ++ * Without retpolines configured: ++ * ++ * jmp *%rax ++ */ ++#ifdef CONFIG_RETPOLINE ++# define RETPOLINE_RAX_BPF_JIT_SIZE 17 ++# define RETPOLINE_RAX_BPF_JIT() \ ++ EMIT1_off32(0xE8, 7); /* callq do_rop */ \ ++ /* spec_trap: */ \ ++ EMIT2(0xF3, 0x90); /* pause */ \ ++ EMIT3(0x0F, 0xAE, 0xE8); /* lfence */ \ ++ EMIT2(0xEB, 0xF9); /* jmp spec_trap */ \ ++ 
/* do_rop: */ \ ++ EMIT4(0x48, 0x89, 0x04, 0x24); /* mov %rax,(%rsp) */ \ ++ EMIT1(0xC3); /* retq */ ++#else ++# define RETPOLINE_RAX_BPF_JIT_SIZE 2 ++# define RETPOLINE_RAX_BPF_JIT() \ ++ EMIT2(0xFF, 0xE0); /* jmp *%rax */ ++#endif ++ + #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */ +diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h +index 94146f665a3c..99185a064978 100644 +--- a/arch/x86/include/asm/tlbflush.h ++++ b/arch/x86/include/asm/tlbflush.h +@@ -68,6 +68,8 @@ static inline void invpcid_flush_all_nonglobals(void) + struct tlb_state { + struct mm_struct *active_mm; + int state; ++ /* last user mm's ctx id */ ++ u64 last_ctx_id; + + /* + * Access to this CR4 shadow and to H/W CR4 is protected by +diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c +index b5229abd1629..4922ab66fd29 100644 +--- a/arch/x86/kernel/apic/vector.c ++++ b/arch/x86/kernel/apic/vector.c +@@ -93,8 +93,12 @@ static struct apic_chip_data *alloc_apic_chip_data(int node) + return NULL; + } + +-static void free_apic_chip_data(struct apic_chip_data *data) ++static void free_apic_chip_data(unsigned int virq, struct apic_chip_data *data) + { ++#ifdef CONFIG_X86_IO_APIC ++ if (virq < nr_legacy_irqs()) ++ legacy_irq_data[virq] = NULL; ++#endif + if (data) { + free_cpumask_var(data->domain); + free_cpumask_var(data->old_domain); +@@ -318,11 +322,7 @@ static void x86_vector_free_irqs(struct irq_domain *domain, + apic_data = irq_data->chip_data; + irq_domain_reset_irq_data(irq_data); + raw_spin_unlock_irqrestore(&vector_lock, flags); +- free_apic_chip_data(apic_data); +-#ifdef CONFIG_X86_IO_APIC +- if (virq + i < nr_legacy_irqs()) +- legacy_irq_data[virq + i] = NULL; +-#endif ++ free_apic_chip_data(virq + i, apic_data); + } + } + } +@@ -363,7 +363,7 @@ static int x86_vector_alloc_irqs(struct irq_domain *domain, unsigned int virq, + err = assign_irq_vector_policy(virq + i, node, data, info); + if (err) { + irq_data->chip_data = NULL; +- free_apic_chip_data(data); ++ free_apic_chip_data(virq + i, data); + goto error; + } + } +diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c +index be644afab1bb..24d2a3ee743f 100644 +--- a/arch/x86/kvm/svm.c ++++ b/arch/x86/kvm/svm.c +@@ -44,6 +44,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -4919,7 +4920,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu) + * being speculatively taken. + */ + if (svm->spec_ctrl) +- wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl); ++ native_wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl); + + asm volatile ( + "push %%" _ASM_BP "; \n\t" +@@ -5028,11 +5029,11 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu) + * If the L02 MSR bitmap does not intercept the MSR, then we need to + * save it. + */ +- if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)) +- rdmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl); ++ if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))) ++ svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL); + + if (svm->spec_ctrl) +- wrmsrl(MSR_IA32_SPEC_CTRL, 0); ++ native_wrmsrl(MSR_IA32_SPEC_CTRL, 0); + + /* Eliminate branch target predictions from guest mode */ + vmexit_fill_RSB(); +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index c51aaac953b4..0f3bb4632310 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -49,6 +49,7 @@ + #include + #include + #include ++#include + #include + + #include "trace.h" +@@ -8906,7 +8907,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) + * being speculatively taken. 
+ */ + if (vmx->spec_ctrl) +- wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl); ++ native_wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl); + + vmx->__launched = vmx->loaded_vmcs->launched; + asm( +@@ -9041,11 +9042,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu) + * If the L02 MSR bitmap does not intercept the MSR, then we need to + * save it. + */ +- if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)) +- rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl); ++ if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))) ++ vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL); + + if (vmx->spec_ctrl) +- wrmsrl(MSR_IA32_SPEC_CTRL, 0); ++ native_wrmsrl(MSR_IA32_SPEC_CTRL, 0); + + /* Eliminate branch target predictions from guest mode */ + vmexit_fill_RSB(); +diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c +index 578973ade71b..eac92e2d171b 100644 +--- a/arch/x86/mm/tlb.c ++++ b/arch/x86/mm/tlb.c +@@ -10,6 +10,7 @@ + + #include + #include ++#include + #include + #include + #include +@@ -29,6 +30,8 @@ + * Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi + */ + ++atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1); ++ + struct flush_tlb_info { + struct mm_struct *flush_mm; + unsigned long flush_start; +@@ -104,6 +107,28 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, + unsigned cpu = smp_processor_id(); + + if (likely(prev != next)) { ++ u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id); ++ ++ /* ++ * Avoid user/user BTB poisoning by flushing the branch ++ * predictor when switching between processes. This stops ++ * one process from doing Spectre-v2 attacks on another. ++ * ++ * As an optimization, flush indirect branches only when ++ * switching into processes that disable dumping. This ++ * protects high value processes like gpg, without having ++ * too high performance overhead. IBPB is *expensive*! ++ * ++ * This will not flush branches when switching into kernel ++ * threads. It will also not flush if we switch to idle ++ * thread and back to the same process. It will flush if we ++ * switch to a different non-dumpable process. ++ */ ++ if (tsk && tsk->mm && ++ tsk->mm->context.ctx_id != last_ctx_id && ++ get_dumpable(tsk->mm) != SUID_DUMP_USER) ++ indirect_branch_prediction_barrier(); ++ + if (IS_ENABLED(CONFIG_VMAP_STACK)) { + /* + * If our current stack is in vmalloc space and isn't +@@ -118,6 +143,14 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, + set_pgd(pgd, init_mm.pgd[stack_pgd_index]); + } + ++ /* ++ * Record last user mm's context id, so we can avoid ++ * flushing branch buffer with IBPB if we switch back ++ * to the same user. 
++ */ ++ if (next != &init_mm) ++ this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id); ++ + this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK); + this_cpu_write(cpu_tlbstate.active_mm, next); + +diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c +index 7840331d3056..1f7ed2ed6ff7 100644 +--- a/arch/x86/net/bpf_jit_comp.c ++++ b/arch/x86/net/bpf_jit_comp.c +@@ -12,6 +12,7 @@ + #include + #include + #include ++#include + #include + + int bpf_jit_enable __read_mostly; +@@ -281,7 +282,7 @@ static void emit_bpf_tail_call(u8 **pprog) + EMIT2(0x89, 0xD2); /* mov edx, edx */ + EMIT3(0x39, 0x56, /* cmp dword ptr [rsi + 16], edx */ + offsetof(struct bpf_array, map.max_entries)); +-#define OFFSET1 43 /* number of bytes to jump */ ++#define OFFSET1 (41 + RETPOLINE_RAX_BPF_JIT_SIZE) /* number of bytes to jump */ + EMIT2(X86_JBE, OFFSET1); /* jbe out */ + label1 = cnt; + +@@ -290,7 +291,7 @@ static void emit_bpf_tail_call(u8 **pprog) + */ + EMIT2_off32(0x8B, 0x85, -STACKSIZE + 36); /* mov eax, dword ptr [rbp - 516] */ + EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */ +-#define OFFSET2 32 ++#define OFFSET2 (30 + RETPOLINE_RAX_BPF_JIT_SIZE) + EMIT2(X86_JA, OFFSET2); /* ja out */ + label2 = cnt; + EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */ +@@ -304,7 +305,7 @@ static void emit_bpf_tail_call(u8 **pprog) + * goto out; + */ + EMIT3(0x48, 0x85, 0xC0); /* test rax,rax */ +-#define OFFSET3 10 ++#define OFFSET3 (8 + RETPOLINE_RAX_BPF_JIT_SIZE) + EMIT2(X86_JE, OFFSET3); /* je out */ + label3 = cnt; + +@@ -317,7 +318,7 @@ static void emit_bpf_tail_call(u8 **pprog) + * rdi == ctx (1st arg) + * rax == prog->bpf_func + prologue_size + */ +- EMIT2(0xFF, 0xE0); /* jmp rax */ ++ RETPOLINE_RAX_BPF_JIT(); + + /* out: */ + BUILD_BUG_ON(cnt - label1 != OFFSET1); +diff --git a/arch/x86/platform/intel-mid/intel-mid.c b/arch/x86/platform/intel-mid/intel-mid.c +index 7850128f0026..834783bc6752 100644 +--- a/arch/x86/platform/intel-mid/intel-mid.c ++++ b/arch/x86/platform/intel-mid/intel-mid.c +@@ -79,7 +79,7 @@ static void intel_mid_power_off(void) + + static void intel_mid_reboot(void) + { +- intel_scu_ipc_simple_command(IPCMSG_COLD_BOOT, 0); ++ intel_scu_ipc_simple_command(IPCMSG_COLD_RESET, 0); + } + + static unsigned long __init intel_mid_calibrate_tsc(void) +diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c +index 7f664c416faf..4ecd0de08557 100644 +--- a/arch/x86/xen/suspend.c ++++ b/arch/x86/xen/suspend.c +@@ -1,11 +1,14 @@ + #include + #include ++#include + + #include + #include + #include + #include + ++#include ++#include + #include + #include + #include +@@ -68,6 +71,8 @@ static void xen_pv_post_suspend(int suspend_cancelled) + xen_mm_unpin_all(); + } + ++static DEFINE_PER_CPU(u64, spec_ctrl); ++ + void xen_arch_pre_suspend(void) + { + if (xen_pv_domain()) +@@ -84,6 +89,9 @@ void xen_arch_post_suspend(int cancelled) + + static void xen_vcpu_notify_restore(void *data) + { ++ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL)) ++ wrmsrl(MSR_IA32_SPEC_CTRL, this_cpu_read(spec_ctrl)); ++ + /* Boot processor notified via generic timekeeping_resume() */ + if (smp_processor_id() == 0) + return; +@@ -93,7 +101,15 @@ static void xen_vcpu_notify_restore(void *data) + + static void xen_vcpu_notify_suspend(void *data) + { ++ u64 tmp; ++ + tick_suspend_local(); ++ ++ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL)) { ++ rdmsrl(MSR_IA32_SPEC_CTRL, tmp); ++ this_cpu_write(spec_ctrl, tmp); ++ wrmsrl(MSR_IA32_SPEC_CTRL, 0); ++ } + } + + void 
xen_arch_resume(void) +diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c +index 6f060c76217b..7205e6da16cd 100644 +--- a/drivers/char/tpm/st33zp24/st33zp24.c ++++ b/drivers/char/tpm/st33zp24/st33zp24.c +@@ -458,7 +458,7 @@ static int st33zp24_recv(struct tpm_chip *chip, unsigned char *buf, + size_t count) + { + int size = 0; +- int expected; ++ u32 expected; + + if (!chip) + return -EBUSY; +@@ -475,7 +475,7 @@ static int st33zp24_recv(struct tpm_chip *chip, unsigned char *buf, + } + + expected = be32_to_cpu(*(__be32 *)(buf + 2)); +- if (expected > count) { ++ if (expected > count || expected < TPM_HEADER_SIZE) { + size = -EIO; + goto out; + } +diff --git a/drivers/char/tpm/tpm-dev.c b/drivers/char/tpm/tpm-dev.c +index 912ad30be585..65b824954bdc 100644 +--- a/drivers/char/tpm/tpm-dev.c ++++ b/drivers/char/tpm/tpm-dev.c +@@ -136,6 +136,12 @@ static ssize_t tpm_write(struct file *file, const char __user *buf, + return -EFAULT; + } + ++ if (in_size < 6 || ++ in_size < be32_to_cpu(*((__be32 *) (priv->data_buffer + 2)))) { ++ mutex_unlock(&priv->buffer_mutex); ++ return -EINVAL; ++ } ++ + /* atomic tpm command send and result receive. We only hold the ops + * lock during this period so that the tpm can be unregistered even if + * the char dev is held open. +diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c +index 62ee44e57ddc..da69ddea56cf 100644 +--- a/drivers/char/tpm/tpm_i2c_infineon.c ++++ b/drivers/char/tpm/tpm_i2c_infineon.c +@@ -437,7 +437,8 @@ static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count) + static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count) + { + int size = 0; +- int expected, status; ++ int status; ++ u32 expected; + + if (count < TPM_HEADER_SIZE) { + size = -EIO; +@@ -452,7 +453,7 @@ static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count) + } + + expected = be32_to_cpu(*(__be32 *)(buf + 2)); +- if ((size_t) expected > count) { ++ if (((size_t) expected > count) || (expected < TPM_HEADER_SIZE)) { + size = -EIO; + goto out; + } +diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c +index c6428771841f..caa86b19c76d 100644 +--- a/drivers/char/tpm/tpm_i2c_nuvoton.c ++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c +@@ -281,7 +281,11 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count) + struct device *dev = chip->dev.parent; + struct i2c_client *client = to_i2c_client(dev); + s32 rc; +- int expected, status, burst_count, retries, size = 0; ++ int status; ++ int burst_count; ++ int retries; ++ int size = 0; ++ u32 expected; + + if (count < TPM_HEADER_SIZE) { + i2c_nuvoton_ready(chip); /* return to idle */ +@@ -323,7 +327,7 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count) + * to machine native + */ + expected = be32_to_cpu(*(__be32 *) (buf + 2)); +- if (expected > count) { ++ if (expected > count || expected < size) { + dev_err(dev, "%s() expected > count\n", __func__); + size = -EIO; + continue; +diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c +index 8022bea27fed..06173d2e316f 100644 +--- a/drivers/char/tpm/tpm_tis.c ++++ b/drivers/char/tpm/tpm_tis.c +@@ -98,7 +98,7 @@ static int tpm_tcg_read_bytes(struct tpm_tis_data *data, u32 addr, u16 len, + } + + static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len, +- u8 *value) ++ const u8 *value) + { + struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data); + +diff --git 
a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c +index 4d24ec3d7cd6..f9aa47ec7af7 100644 +--- a/drivers/char/tpm/tpm_tis_core.c ++++ b/drivers/char/tpm/tpm_tis_core.c +@@ -208,7 +208,8 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count) + { + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); + int size = 0; +- int expected, status; ++ int status; ++ u32 expected; + + if (count < TPM_HEADER_SIZE) { + size = -EIO; +@@ -223,7 +224,7 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count) + } + + expected = be32_to_cpu(*(__be32 *) (buf + 2)); +- if (expected > count) { ++ if (expected > count || expected < TPM_HEADER_SIZE) { + size = -EIO; + goto out; + } +@@ -256,7 +257,7 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count) + * tpm.c can skip polling for the data to be available as the interrupt is + * waited for here + */ +-static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len) ++static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len) + { + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); + int rc, status, burstcnt; +@@ -345,7 +346,7 @@ static void disable_interrupts(struct tpm_chip *chip) + * tpm.c can skip polling for the data to be available as the interrupt is + * waited for here + */ +-static int tpm_tis_send_main(struct tpm_chip *chip, u8 *buf, size_t len) ++static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len) + { + struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev); + int rc; +diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h +index 9191aabbf9c2..e1c2193f2ed3 100644 +--- a/drivers/char/tpm/tpm_tis_core.h ++++ b/drivers/char/tpm/tpm_tis_core.h +@@ -98,7 +98,7 @@ struct tpm_tis_phy_ops { + int (*read_bytes)(struct tpm_tis_data *data, u32 addr, u16 len, + u8 *result); + int (*write_bytes)(struct tpm_tis_data *data, u32 addr, u16 len, +- u8 *value); ++ const u8 *value); + int (*read16)(struct tpm_tis_data *data, u32 addr, u16 *result); + int (*read32)(struct tpm_tis_data *data, u32 addr, u32 *result); + int (*write32)(struct tpm_tis_data *data, u32 addr, u32 src); +@@ -128,7 +128,7 @@ static inline int tpm_tis_read32(struct tpm_tis_data *data, u32 addr, + } + + static inline int tpm_tis_write_bytes(struct tpm_tis_data *data, u32 addr, +- u16 len, u8 *value) ++ u16 len, const u8 *value) + { + return data->phy_ops->write_bytes(data, addr, len, value); + } +diff --git a/drivers/char/tpm/tpm_tis_spi.c b/drivers/char/tpm/tpm_tis_spi.c +index 3b97b14c3417..01eccb193b5a 100644 +--- a/drivers/char/tpm/tpm_tis_spi.c ++++ b/drivers/char/tpm/tpm_tis_spi.c +@@ -47,9 +47,7 @@ + struct tpm_tis_spi_phy { + struct tpm_tis_data priv; + struct spi_device *spi_device; +- +- u8 tx_buf[4]; +- u8 rx_buf[4]; ++ u8 *iobuf; + }; + + static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *data) +@@ -58,7 +56,7 @@ static inline struct tpm_tis_spi_phy *to_tpm_tis_spi_phy(struct tpm_tis_data *da + } + + static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, +- u8 *buffer, u8 direction) ++ u8 *in, const u8 *out) + { + struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data); + int ret = 0; +@@ -72,14 +70,14 @@ static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, + while (len) { + transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE); + +- phy->tx_buf[0] = direction | (transfer_len - 1); +- phy->tx_buf[1] = 0xd4; +- phy->tx_buf[2] = addr >> 8; +- phy->tx_buf[3] = 
addr; ++ phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1); ++ phy->iobuf[1] = 0xd4; ++ phy->iobuf[2] = addr >> 8; ++ phy->iobuf[3] = addr; + + memset(&spi_xfer, 0, sizeof(spi_xfer)); +- spi_xfer.tx_buf = phy->tx_buf; +- spi_xfer.rx_buf = phy->rx_buf; ++ spi_xfer.tx_buf = phy->iobuf; ++ spi_xfer.rx_buf = phy->iobuf; + spi_xfer.len = 4; + spi_xfer.cs_change = 1; + +@@ -89,9 +87,9 @@ static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, + if (ret < 0) + goto exit; + +- if ((phy->rx_buf[3] & 0x01) == 0) { ++ if ((phy->iobuf[3] & 0x01) == 0) { + // handle SPI wait states +- phy->tx_buf[0] = 0; ++ phy->iobuf[0] = 0; + + for (i = 0; i < TPM_RETRY; i++) { + spi_xfer.len = 1; +@@ -100,7 +98,7 @@ static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, + ret = spi_sync_locked(phy->spi_device, &m); + if (ret < 0) + goto exit; +- if (phy->rx_buf[0] & 0x01) ++ if (phy->iobuf[0] & 0x01) + break; + } + +@@ -114,12 +112,12 @@ static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, + spi_xfer.len = transfer_len; + spi_xfer.delay_usecs = 5; + +- if (direction) { ++ if (in) { + spi_xfer.tx_buf = NULL; +- spi_xfer.rx_buf = buffer; +- } else { +- spi_xfer.tx_buf = buffer; ++ } else if (out) { + spi_xfer.rx_buf = NULL; ++ memcpy(phy->iobuf, out, transfer_len); ++ out += transfer_len; + } + + spi_message_init(&m); +@@ -128,8 +126,12 @@ static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, + if (ret < 0) + goto exit; + ++ if (in) { ++ memcpy(in, phy->iobuf, transfer_len); ++ in += transfer_len; ++ } ++ + len -= transfer_len; +- buffer += transfer_len; + } + + exit: +@@ -140,13 +142,13 @@ static int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len, + static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr, + u16 len, u8 *result) + { +- return tpm_tis_spi_transfer(data, addr, len, result, 0x80); ++ return tpm_tis_spi_transfer(data, addr, len, result, NULL); + } + + static int tpm_tis_spi_write_bytes(struct tpm_tis_data *data, u32 addr, +- u16 len, u8 *value) ++ u16 len, const u8 *value) + { +- return tpm_tis_spi_transfer(data, addr, len, value, 0); ++ return tpm_tis_spi_transfer(data, addr, len, NULL, value); + } + + static int tpm_tis_spi_read16(struct tpm_tis_data *data, u32 addr, u16 *result) +@@ -195,6 +197,10 @@ static int tpm_tis_spi_probe(struct spi_device *dev) + + phy->spi_device = dev; + ++ phy->iobuf = devm_kmalloc(&dev->dev, MAX_SPI_FRAMESIZE, GFP_KERNEL); ++ if (!phy->iobuf) ++ return -ENOMEM; ++ + return tpm_tis_core_init(&dev->dev, &phy->priv, -1, &tpm_spi_phy_ops, + NULL); + } +diff --git a/drivers/cpufreq/s3c24xx-cpufreq.c b/drivers/cpufreq/s3c24xx-cpufreq.c +index 7b596fa38ad2..6bebc1f9f55a 100644 +--- a/drivers/cpufreq/s3c24xx-cpufreq.c ++++ b/drivers/cpufreq/s3c24xx-cpufreq.c +@@ -351,7 +351,13 @@ struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name) + static int s3c_cpufreq_init(struct cpufreq_policy *policy) + { + policy->clk = clk_arm; +- return cpufreq_generic_init(policy, ftab, cpu_cur.info->latency); ++ ++ policy->cpuinfo.transition_latency = cpu_cur.info->latency; ++ ++ if (ftab) ++ return cpufreq_table_validate_and_show(policy, ftab); ++ ++ return 0; + } + + static int __init s3c_cpufreq_initclks(void) +diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c +index 0bf1a12e35fe..ee6045d6c0bb 100644 +--- a/drivers/md/dm-io.c ++++ b/drivers/md/dm-io.c +@@ -302,6 +302,7 @@ static void do_region(int op, int op_flags, unsigned region, + 
special_cmd_max_sectors = q->limits.max_write_same_sectors; + if ((op == REQ_OP_DISCARD || op == REQ_OP_WRITE_SAME) && + special_cmd_max_sectors == 0) { ++ atomic_inc(&io->count); + dec_count(io, region, -EOPNOTSUPP); + return; + } +diff --git a/drivers/md/md.c b/drivers/md/md.c +index 8ebf1b97e1d2..27d8bb21e04f 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -8224,6 +8224,10 @@ static int remove_and_add_spares(struct mddev *mddev, + int removed = 0; + bool remove_some = false; + ++ if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery)) ++ /* Mustn't remove devices when resync thread is running */ ++ return 0; ++ + rdev_for_each(rdev, mddev) { + if ((this == NULL || rdev == this) && + rdev->raid_disk >= 0 && +diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c +index e0fe5bc9dbce..31f16105184c 100644 +--- a/drivers/media/dvb-frontends/m88ds3103.c ++++ b/drivers/media/dvb-frontends/m88ds3103.c +@@ -1262,11 +1262,12 @@ static int m88ds3103_select(struct i2c_mux_core *muxc, u32 chan) + * New users must use I2C client binding directly! + */ + struct dvb_frontend *m88ds3103_attach(const struct m88ds3103_config *cfg, +- struct i2c_adapter *i2c, struct i2c_adapter **tuner_i2c_adapter) ++ struct i2c_adapter *i2c, ++ struct i2c_adapter **tuner_i2c_adapter) + { + struct i2c_client *client; + struct i2c_board_info board_info; +- struct m88ds3103_platform_data pdata; ++ struct m88ds3103_platform_data pdata = {}; + + pdata.clk = cfg->clock; + pdata.i2c_wr_max = cfg->i2c_wr_max; +@@ -1409,6 +1410,8 @@ static int m88ds3103_probe(struct i2c_client *client, + case M88DS3103_CHIP_ID: + break; + default: ++ ret = -ENODEV; ++ dev_err(&client->dev, "Unknown device. Chip_id=%02x\n", dev->chip_id); + goto err_kfree; + } + +diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +index 1e2c8eca3af1..bea9ae31a769 100644 +--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c ++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c +@@ -809,6 +809,7 @@ static int __mlxsw_sp_port_fdb_uc_op(struct mlxsw_sp *mlxsw_sp, u8 local_port, + bool dynamic) + { + char *sfd_pl; ++ u8 num_rec; + int err; + + sfd_pl = kmalloc(MLXSW_REG_SFD_LEN, GFP_KERNEL); +@@ -818,9 +819,16 @@ static int __mlxsw_sp_port_fdb_uc_op(struct mlxsw_sp *mlxsw_sp, u8 local_port, + mlxsw_reg_sfd_pack(sfd_pl, mlxsw_sp_sfd_op(adding), 0); + mlxsw_reg_sfd_uc_pack(sfd_pl, 0, mlxsw_sp_sfd_rec_policy(dynamic), + mac, fid, action, local_port); ++ num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl); + err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl); +- kfree(sfd_pl); ++ if (err) ++ goto out; ++ ++ if (num_rec != mlxsw_reg_sfd_num_rec_get(sfd_pl)) ++ err = -EBUSY; + ++out: ++ kfree(sfd_pl); + return err; + } + +@@ -845,6 +853,7 @@ static int mlxsw_sp_port_fdb_uc_lag_op(struct mlxsw_sp *mlxsw_sp, u16 lag_id, + bool adding, bool dynamic) + { + char *sfd_pl; ++ u8 num_rec; + int err; + + sfd_pl = kmalloc(MLXSW_REG_SFD_LEN, GFP_KERNEL); +@@ -855,9 +864,16 @@ static int mlxsw_sp_port_fdb_uc_lag_op(struct mlxsw_sp *mlxsw_sp, u16 lag_id, + mlxsw_reg_sfd_uc_lag_pack(sfd_pl, 0, mlxsw_sp_sfd_rec_policy(dynamic), + mac, fid, MLXSW_REG_SFD_REC_ACTION_NOP, + lag_vid, lag_id); ++ num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl); + err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl); +- kfree(sfd_pl); ++ if (err) ++ goto out; ++ ++ if (num_rec != mlxsw_reg_sfd_num_rec_get(sfd_pl)) ++ err = -EBUSY; + ++out: ++ 
kfree(sfd_pl); + return err; + } + +@@ -891,6 +907,7 @@ static int mlxsw_sp_port_mdb_op(struct mlxsw_sp *mlxsw_sp, const char *addr, + u16 fid, u16 mid, bool adding) + { + char *sfd_pl; ++ u8 num_rec; + int err; + + sfd_pl = kmalloc(MLXSW_REG_SFD_LEN, GFP_KERNEL); +@@ -900,7 +917,15 @@ static int mlxsw_sp_port_mdb_op(struct mlxsw_sp *mlxsw_sp, const char *addr, + mlxsw_reg_sfd_pack(sfd_pl, mlxsw_sp_sfd_op(adding), 0); + mlxsw_reg_sfd_mc_pack(sfd_pl, 0, addr, fid, + MLXSW_REG_SFD_REC_ACTION_NOP, mid); ++ num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl); + err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl); ++ if (err) ++ goto out; ++ ++ if (num_rec != mlxsw_reg_sfd_num_rec_get(sfd_pl)) ++ err = -EBUSY; ++ ++out: + kfree(sfd_pl); + return err; + } +diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c +index 6e12401b5102..e2d9ca60e467 100644 +--- a/drivers/net/phy/phy.c ++++ b/drivers/net/phy/phy.c +@@ -925,7 +925,7 @@ void phy_start(struct phy_device *phydev) + break; + case PHY_HALTED: + /* make sure interrupts are re-enabled for the PHY */ +- if (phydev->irq != PHY_POLL) { ++ if (phy_interrupt_is_valid(phydev)) { + err = phy_enable_interrupts(phydev); + if (err < 0) + break; +diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c +index fc4c2ccc3d22..114457921890 100644 +--- a/drivers/net/ppp/ppp_generic.c ++++ b/drivers/net/ppp/ppp_generic.c +@@ -3157,6 +3157,15 @@ ppp_connect_channel(struct channel *pch, int unit) + goto outl; + + ppp_lock(ppp); ++ spin_lock_bh(&pch->downl); ++ if (!pch->chan) { ++ /* Don't connect unregistered channels */ ++ spin_unlock_bh(&pch->downl); ++ ppp_unlock(ppp); ++ ret = -ENOTCONN; ++ goto outl; ++ } ++ spin_unlock_bh(&pch->downl); + if (pch->file.hdrlen > ppp->file.hdrlen) + ppp->file.hdrlen = pch->file.hdrlen; + hdrlen = pch->file.hdrlen + 2; /* for protocol bytes */ +diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c +index 47fdb87d3567..8a9aced850be 100644 +--- a/drivers/net/wan/hdlc_ppp.c ++++ b/drivers/net/wan/hdlc_ppp.c +@@ -574,7 +574,10 @@ static void ppp_timer(unsigned long arg) + ppp_cp_event(proto->dev, proto->pid, TO_GOOD, 0, 0, + 0, NULL); + proto->restart_counter--; +- } else ++ } else if (netif_carrier_ok(proto->dev)) ++ ppp_cp_event(proto->dev, proto->pid, TO_GOOD, 0, 0, ++ 0, NULL); ++ else + ppp_cp_event(proto->dev, proto->pid, TO_BAD, 0, 0, + 0, NULL); + break; +diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c +index b0916b126923..6643a7bc381c 100644 +--- a/drivers/pci/pcie/aspm.c ++++ b/drivers/pci/pcie/aspm.c +@@ -526,10 +526,14 @@ static struct pcie_link_state *alloc_pcie_link_state(struct pci_dev *pdev) + + /* + * Root Ports and PCI/PCI-X to PCIe Bridges are roots of PCIe +- * hierarchies. ++ * hierarchies. Note that some PCIe host implementations omit ++ * the root ports entirely, in which case a downstream port on ++ * a switch may become the root of the link state chain for all ++ * its subordinate endpoints. 
+ */ + if (pci_pcie_type(pdev) == PCI_EXP_TYPE_ROOT_PORT || +- pci_pcie_type(pdev) == PCI_EXP_TYPE_PCIE_BRIDGE) { ++ pci_pcie_type(pdev) == PCI_EXP_TYPE_PCIE_BRIDGE || ++ !pdev->bus->parent->self) { + link->root = link; + } else { + struct pcie_link_state *parent; +diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h +index 9b5fc502f6a1..403712bf1ddf 100644 +--- a/drivers/s390/net/qeth_core.h ++++ b/drivers/s390/net/qeth_core.h +@@ -592,6 +592,11 @@ struct qeth_cmd_buffer { + void (*callback) (struct qeth_channel *, struct qeth_cmd_buffer *); + }; + ++static inline struct qeth_ipa_cmd *__ipa_cmd(struct qeth_cmd_buffer *iob) ++{ ++ return (struct qeth_ipa_cmd *)(iob->data + IPA_PDU_HEADER_SIZE); ++} ++ + /** + * definition of a qeth channel, used for read and write + */ +@@ -849,7 +854,7 @@ struct qeth_trap_id { + */ + static inline int qeth_get_elements_for_range(addr_t start, addr_t end) + { +- return PFN_UP(end - 1) - PFN_DOWN(start); ++ return PFN_UP(end) - PFN_DOWN(start); + } + + static inline int qeth_get_micros(void) +diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c +index df8f74cb1406..cc28dda322b5 100644 +--- a/drivers/s390/net/qeth_core_main.c ++++ b/drivers/s390/net/qeth_core_main.c +@@ -2050,7 +2050,7 @@ int qeth_send_control_data(struct qeth_card *card, int len, + unsigned long flags; + struct qeth_reply *reply = NULL; + unsigned long timeout, event_timeout; +- struct qeth_ipa_cmd *cmd; ++ struct qeth_ipa_cmd *cmd = NULL; + + QETH_CARD_TEXT(card, 2, "sendctl"); + +@@ -2064,23 +2064,27 @@ int qeth_send_control_data(struct qeth_card *card, int len, + } + reply->callback = reply_cb; + reply->param = reply_param; +- if (card->state == CARD_STATE_DOWN) +- reply->seqno = QETH_IDX_COMMAND_SEQNO; +- else +- reply->seqno = card->seqno.ipa++; ++ + init_waitqueue_head(&reply->wait_q); +- spin_lock_irqsave(&card->lock, flags); +- list_add_tail(&reply->list, &card->cmd_waiter_list); +- spin_unlock_irqrestore(&card->lock, flags); + QETH_DBF_HEX(CTRL, 2, iob->data, QETH_DBF_CTRL_LEN); + + while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ; +- qeth_prepare_control_data(card, len, iob); + +- if (IS_IPA(iob->data)) ++ if (IS_IPA(iob->data)) { ++ cmd = __ipa_cmd(iob); ++ cmd->hdr.seqno = card->seqno.ipa++; ++ reply->seqno = cmd->hdr.seqno; + event_timeout = QETH_IPA_TIMEOUT; +- else ++ } else { ++ reply->seqno = QETH_IDX_COMMAND_SEQNO; + event_timeout = QETH_TIMEOUT; ++ } ++ qeth_prepare_control_data(card, len, iob); ++ ++ spin_lock_irqsave(&card->lock, flags); ++ list_add_tail(&reply->list, &card->cmd_waiter_list); ++ spin_unlock_irqrestore(&card->lock, flags); ++ + timeout = jiffies + event_timeout; + + QETH_CARD_TEXT(card, 6, "noirqpnd"); +@@ -2105,9 +2109,8 @@ int qeth_send_control_data(struct qeth_card *card, int len, + + /* we have only one long running ipassist, since we can ensure + process context of this command we can sleep */ +- cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); +- if ((cmd->hdr.command == IPA_CMD_SETIP) && +- (cmd->hdr.prot_version == QETH_PROT_IPV4)) { ++ if (cmd && cmd->hdr.command == IPA_CMD_SETIP && ++ cmd->hdr.prot_version == QETH_PROT_IPV4) { + if (!wait_event_timeout(reply->wait_q, + atomic_read(&reply->received), event_timeout)) + goto time_err; +@@ -2871,7 +2874,7 @@ static void qeth_fill_ipacmd_header(struct qeth_card *card, + memset(cmd, 0, sizeof(struct qeth_ipa_cmd)); + cmd->hdr.command = command; + cmd->hdr.initiator = IPA_CMD_INITIATOR_HOST; +- cmd->hdr.seqno = card->seqno.ipa; ++ 
/* cmd->hdr.seqno is set by qeth_send_control_data() */ + cmd->hdr.adapter_type = qeth_get_ipa_adp_type(card->info.link_type); + cmd->hdr.rel_adapter_no = (__u8) card->info.portno; + if (card->options.layer2) +@@ -3852,10 +3855,12 @@ EXPORT_SYMBOL_GPL(qeth_get_elements_for_frags); + int qeth_get_elements_no(struct qeth_card *card, + struct sk_buff *skb, int extra_elems, int data_offset) + { +- int elements = qeth_get_elements_for_range( +- (addr_t)skb->data + data_offset, +- (addr_t)skb->data + skb_headlen(skb)) + +- qeth_get_elements_for_frags(skb); ++ addr_t end = (addr_t)skb->data + skb_headlen(skb); ++ int elements = qeth_get_elements_for_frags(skb); ++ addr_t start = (addr_t)skb->data + data_offset; ++ ++ if (start != end) ++ elements += qeth_get_elements_for_range(start, end); + + if ((elements + extra_elems) > QETH_MAX_BUFFER_ELEMENTS(card)) { + QETH_DBF_MESSAGE(2, "Invalid size of IP packet " +diff --git a/drivers/s390/net/qeth_l3.h b/drivers/s390/net/qeth_l3.h +index eedf9b01a496..573569474e44 100644 +--- a/drivers/s390/net/qeth_l3.h ++++ b/drivers/s390/net/qeth_l3.h +@@ -39,8 +39,40 @@ struct qeth_ipaddr { + unsigned int pfxlen; + } a6; + } u; +- + }; ++ ++static inline bool qeth_l3_addr_match_ip(struct qeth_ipaddr *a1, ++ struct qeth_ipaddr *a2) ++{ ++ if (a1->proto != a2->proto) ++ return false; ++ if (a1->proto == QETH_PROT_IPV6) ++ return ipv6_addr_equal(&a1->u.a6.addr, &a2->u.a6.addr); ++ return a1->u.a4.addr == a2->u.a4.addr; ++} ++ ++static inline bool qeth_l3_addr_match_all(struct qeth_ipaddr *a1, ++ struct qeth_ipaddr *a2) ++{ ++ /* Assumes that the pair was obtained via qeth_l3_addr_find_by_ip(), ++ * so 'proto' and 'addr' match for sure. ++ * ++ * For ucast: ++ * - 'mac' is always 0. ++ * - 'mask'/'pfxlen' for RXIP/VIPA is always 0. For NORMAL, matching ++ * values are required to avoid mixups in takeover eligibility. ++ * ++ * For mcast, ++ * - 'mac' is mapped from the IP, and thus always matches. ++ * - 'mask'/'pfxlen' is always 0. 
++ */ ++ if (a1->type != a2->type) ++ return false; ++ if (a1->proto == QETH_PROT_IPV6) ++ return a1->u.a6.pfxlen == a2->u.a6.pfxlen; ++ return a1->u.a4.mask == a2->u.a4.mask; ++} ++ + static inline u64 qeth_l3_ipaddr_hash(struct qeth_ipaddr *addr) + { + u64 ret = 0; +diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c +index 1487f8a0c575..a668e6b71a29 100644 +--- a/drivers/s390/net/qeth_l3_main.c ++++ b/drivers/s390/net/qeth_l3_main.c +@@ -154,6 +154,24 @@ int qeth_l3_string_to_ipaddr(const char *buf, enum qeth_prot_versions proto, + return -EINVAL; + } + ++static struct qeth_ipaddr *qeth_l3_find_addr_by_ip(struct qeth_card *card, ++ struct qeth_ipaddr *query) ++{ ++ u64 key = qeth_l3_ipaddr_hash(query); ++ struct qeth_ipaddr *addr; ++ ++ if (query->is_multicast) { ++ hash_for_each_possible(card->ip_mc_htable, addr, hnode, key) ++ if (qeth_l3_addr_match_ip(addr, query)) ++ return addr; ++ } else { ++ hash_for_each_possible(card->ip_htable, addr, hnode, key) ++ if (qeth_l3_addr_match_ip(addr, query)) ++ return addr; ++ } ++ return NULL; ++} ++ + static void qeth_l3_convert_addr_to_bits(u8 *addr, u8 *bits, int len) + { + int i, j; +@@ -207,34 +225,6 @@ static bool qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card, + return rc; + } + +-inline int +-qeth_l3_ipaddrs_is_equal(struct qeth_ipaddr *addr1, struct qeth_ipaddr *addr2) +-{ +- return addr1->proto == addr2->proto && +- !memcmp(&addr1->u, &addr2->u, sizeof(addr1->u)) && +- !memcmp(&addr1->mac, &addr2->mac, sizeof(addr1->mac)); +-} +- +-static struct qeth_ipaddr * +-qeth_l3_ip_from_hash(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) +-{ +- struct qeth_ipaddr *addr; +- +- if (tmp_addr->is_multicast) { +- hash_for_each_possible(card->ip_mc_htable, addr, +- hnode, qeth_l3_ipaddr_hash(tmp_addr)) +- if (qeth_l3_ipaddrs_is_equal(tmp_addr, addr)) +- return addr; +- } else { +- hash_for_each_possible(card->ip_htable, addr, +- hnode, qeth_l3_ipaddr_hash(tmp_addr)) +- if (qeth_l3_ipaddrs_is_equal(tmp_addr, addr)) +- return addr; +- } +- +- return NULL; +-} +- + int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) + { + int rc = 0; +@@ -249,8 +239,8 @@ int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) + QETH_CARD_HEX(card, 4, ((char *)&tmp_addr->u.a6.addr) + 8, 8); + } + +- addr = qeth_l3_ip_from_hash(card, tmp_addr); +- if (!addr) ++ addr = qeth_l3_find_addr_by_ip(card, tmp_addr); ++ if (!addr || !qeth_l3_addr_match_all(addr, tmp_addr)) + return -ENOENT; + + addr->ref_counter--; +@@ -259,12 +249,8 @@ int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) + if (addr->in_progress) + return -EINPROGRESS; + +- if (!qeth_card_hw_is_reachable(card)) { +- addr->disp_flag = QETH_DISP_ADDR_DELETE; +- return 0; +- } +- +- rc = qeth_l3_deregister_addr_entry(card, addr); ++ if (qeth_card_hw_is_reachable(card)) ++ rc = qeth_l3_deregister_addr_entry(card, addr); + + hash_del(&addr->hnode); + kfree(addr); +@@ -276,6 +262,7 @@ int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) + { + int rc = 0; + struct qeth_ipaddr *addr; ++ char buf[40]; + + QETH_CARD_TEXT(card, 4, "addip"); + +@@ -286,8 +273,20 @@ int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) + QETH_CARD_HEX(card, 4, ((char *)&tmp_addr->u.a6.addr) + 8, 8); + } + +- addr = qeth_l3_ip_from_hash(card, tmp_addr); +- if (!addr) { ++ addr = qeth_l3_find_addr_by_ip(card, tmp_addr); ++ if (addr) { ++ if (tmp_addr->type != QETH_IP_TYPE_NORMAL) ++ return 
-EADDRINUSE; ++ if (qeth_l3_addr_match_all(addr, tmp_addr)) { ++ addr->ref_counter++; ++ return 0; ++ } ++ qeth_l3_ipaddr_to_string(tmp_addr->proto, (u8 *)&tmp_addr->u, ++ buf); ++ dev_warn(&card->gdev->dev, ++ "Registering IP address %s failed\n", buf); ++ return -EADDRINUSE; ++ } else { + addr = qeth_l3_get_addr_buffer(tmp_addr->proto); + if (!addr) + return -ENOMEM; +@@ -327,18 +326,15 @@ int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr) + (rc == IPA_RC_LAN_OFFLINE)) { + addr->disp_flag = QETH_DISP_ADDR_DO_NOTHING; + if (addr->ref_counter < 1) { +- qeth_l3_delete_ip(card, addr); ++ qeth_l3_deregister_addr_entry(card, addr); ++ hash_del(&addr->hnode); + kfree(addr); + } + } else { + hash_del(&addr->hnode); + kfree(addr); + } +- } else { +- if (addr->type == QETH_IP_TYPE_NORMAL) +- addr->ref_counter++; + } +- + return rc; + } + +@@ -406,11 +402,7 @@ static void qeth_l3_recover_ip(struct qeth_card *card) + spin_lock_bh(&card->ip_lock); + + hash_for_each_safe(card->ip_htable, i, tmp, addr, hnode) { +- if (addr->disp_flag == QETH_DISP_ADDR_DELETE) { +- qeth_l3_deregister_addr_entry(card, addr); +- hash_del(&addr->hnode); +- kfree(addr); +- } else if (addr->disp_flag == QETH_DISP_ADDR_ADD) { ++ if (addr->disp_flag == QETH_DISP_ADDR_ADD) { + if (addr->proto == QETH_PROT_IPV4) { + addr->in_progress = 1; + spin_unlock_bh(&card->ip_lock); +@@ -726,12 +718,7 @@ int qeth_l3_add_vipa(struct qeth_card *card, enum qeth_prot_versions proto, + return -ENOMEM; + + spin_lock_bh(&card->ip_lock); +- +- if (qeth_l3_ip_from_hash(card, ipaddr)) +- rc = -EEXIST; +- else +- qeth_l3_add_ip(card, ipaddr); +- ++ rc = qeth_l3_add_ip(card, ipaddr); + spin_unlock_bh(&card->ip_lock); + + kfree(ipaddr); +@@ -794,12 +781,7 @@ int qeth_l3_add_rxip(struct qeth_card *card, enum qeth_prot_versions proto, + return -ENOMEM; + + spin_lock_bh(&card->ip_lock); +- +- if (qeth_l3_ip_from_hash(card, ipaddr)) +- rc = -EEXIST; +- else +- qeth_l3_add_ip(card, ipaddr); +- ++ rc = qeth_l3_add_ip(card, ipaddr); + spin_unlock_bh(&card->ip_lock); + + kfree(ipaddr); +@@ -1444,8 +1426,9 @@ qeth_l3_add_mc_to_hash(struct qeth_card *card, struct in_device *in4_dev) + memcpy(tmp->mac, buf, sizeof(tmp->mac)); + tmp->is_multicast = 1; + +- ipm = qeth_l3_ip_from_hash(card, tmp); ++ ipm = qeth_l3_find_addr_by_ip(card, tmp); + if (ipm) { ++ /* for mcast, by-IP match means full match */ + ipm->disp_flag = QETH_DISP_ADDR_DO_NOTHING; + } else { + ipm = qeth_l3_get_addr_buffer(QETH_PROT_IPV4); +@@ -1528,8 +1511,9 @@ qeth_l3_add_mc6_to_hash(struct qeth_card *card, struct inet6_dev *in6_dev) + sizeof(struct in6_addr)); + tmp->is_multicast = 1; + +- ipm = qeth_l3_ip_from_hash(card, tmp); ++ ipm = qeth_l3_find_addr_by_ip(card, tmp); + if (ipm) { ++ /* for mcast, by-IP match means full match */ + ipm->disp_flag = QETH_DISP_ADDR_DO_NOTHING; + continue; + } +@@ -2784,11 +2768,12 @@ static void qeth_tso_fill_header(struct qeth_card *card, + static int qeth_l3_get_elements_no_tso(struct qeth_card *card, + struct sk_buff *skb, int extra_elems) + { +- addr_t tcpdptr = (addr_t)tcp_hdr(skb) + tcp_hdrlen(skb); +- int elements = qeth_get_elements_for_range( +- tcpdptr, +- (addr_t)skb->data + skb_headlen(skb)) + +- qeth_get_elements_for_frags(skb); ++ addr_t start = (addr_t)tcp_hdr(skb) + tcp_hdrlen(skb); ++ addr_t end = (addr_t)skb->data + skb_headlen(skb); ++ int elements = qeth_get_elements_for_frags(skb); ++ ++ if (start != end) ++ elements += qeth_get_elements_for_range(start, end); + + if ((elements + extra_elems) > 
QETH_MAX_BUFFER_ELEMENTS(card)) { + QETH_DBF_MESSAGE(2, +diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c +index 8d8370ddb6b2..1ba49ebe67da 100644 +--- a/fs/btrfs/acl.c ++++ b/fs/btrfs/acl.c +@@ -114,13 +114,17 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans, + int btrfs_set_acl(struct inode *inode, struct posix_acl *acl, int type) + { + int ret; ++ umode_t old_mode = inode->i_mode; + + if (type == ACL_TYPE_ACCESS && acl) { + ret = posix_acl_update_mode(inode, &inode->i_mode, &acl); + if (ret) + return ret; + } +- return __btrfs_set_acl(NULL, inode, acl, type); ++ ret = __btrfs_set_acl(NULL, inode, acl, type); ++ if (ret) ++ inode->i_mode = old_mode; ++ return ret; + } + + /* +diff --git a/include/linux/fs.h b/include/linux/fs.h +index 745ea1b2e02c..18552189560b 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -3048,7 +3048,7 @@ static inline bool vma_is_fsdax(struct vm_area_struct *vma) + if (!vma_is_dax(vma)) + return false; + inode = file_inode(vma->vm_file); +- if (inode->i_mode == S_IFCHR) ++ if (S_ISCHR(inode->i_mode)) + return false; /* device-dax */ + return true; + } +diff --git a/include/linux/nospec.h b/include/linux/nospec.h +index fbc98e2c8228..132e3f5a2e0d 100644 +--- a/include/linux/nospec.h ++++ b/include/linux/nospec.h +@@ -72,7 +72,6 @@ static inline unsigned long array_index_mask_nospec(unsigned long index, + BUILD_BUG_ON(sizeof(_i) > sizeof(long)); \ + BUILD_BUG_ON(sizeof(_s) > sizeof(long)); \ + \ +- _i &= _mask; \ +- _i; \ ++ (typeof(_i)) (_i & _mask); \ + }) + #endif /* _LINUX_NOSPEC_H */ +diff --git a/include/net/udplite.h b/include/net/udplite.h +index 80761938b9a7..8228155b305e 100644 +--- a/include/net/udplite.h ++++ b/include/net/udplite.h +@@ -62,6 +62,7 @@ static inline int udplite_checksum_init(struct sk_buff *skb, struct udphdr *uh) + UDP_SKB_CB(skb)->cscov = cscov; + if (skb->ip_summed == CHECKSUM_COMPLETE) + skb->ip_summed = CHECKSUM_NONE; ++ skb->csum_valid = 0; + } + + return 0; +diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c +index 9a1e6ed7babc..a38119e4a427 100644 +--- a/kernel/bpf/arraymap.c ++++ b/kernel/bpf/arraymap.c +@@ -20,8 +20,10 @@ static void bpf_array_free_percpu(struct bpf_array *array) + { + int i; + +- for (i = 0; i < array->map.max_entries; i++) ++ for (i = 0; i < array->map.max_entries; i++) { + free_percpu(array->pptrs[i]); ++ cond_resched(); ++ } + } + + static int bpf_array_alloc_percpu(struct bpf_array *array) +@@ -37,6 +39,7 @@ static int bpf_array_alloc_percpu(struct bpf_array *array) + return -ENOMEM; + } + array->pptrs[i] = ptr; ++ cond_resched(); + } + + return 0; +@@ -48,8 +51,9 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr) + bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY; + u32 elem_size, index_mask, max_entries; + bool unpriv = !capable(CAP_SYS_ADMIN); ++ u64 cost, array_size, mask64; + struct bpf_array *array; +- u64 array_size, mask64; ++ int ret; + + /* check sanity of attributes */ + if (attr->max_entries == 0 || attr->key_size != 4 || +@@ -92,8 +96,19 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr) + array_size += (u64) max_entries * elem_size; + + /* make sure there is no u32 overflow later in round_up() */ +- if (array_size >= U32_MAX - PAGE_SIZE) ++ cost = array_size; ++ if (cost >= U32_MAX - PAGE_SIZE) + return ERR_PTR(-ENOMEM); ++ if (percpu) { ++ cost += (u64)attr->max_entries * elem_size * num_possible_cpus(); ++ if (cost >= U32_MAX - PAGE_SIZE) ++ return ERR_PTR(-ENOMEM); ++ } ++ cost = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT; ++ 
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 9a1e6ed7babc..a38119e4a427 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -20,8 +20,10 @@ static void bpf_array_free_percpu(struct bpf_array *array)
+ {
+ int i;
+
+- for (i = 0; i < array->map.max_entries; i++)
++ for (i = 0; i < array->map.max_entries; i++) {
+ free_percpu(array->pptrs[i]);
++ cond_resched();
++ }
+ }
+
+ static int bpf_array_alloc_percpu(struct bpf_array *array)
+@@ -37,6 +39,7 @@ static int bpf_array_alloc_percpu(struct bpf_array *array)
+ return -ENOMEM;
+ }
+ array->pptrs[i] = ptr;
++ cond_resched();
+ }
+
+ return 0;
+@@ -48,8 +51,9 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
+ u32 elem_size, index_mask, max_entries;
+ bool unpriv = !capable(CAP_SYS_ADMIN);
++ u64 cost, array_size, mask64;
+ struct bpf_array *array;
+- u64 array_size, mask64;
++ int ret;
+
+ /* check sanity of attributes */
+ if (attr->max_entries == 0 || attr->key_size != 4 ||
+@@ -92,8 +96,19 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ array_size += (u64) max_entries * elem_size;
+
+ /* make sure there is no u32 overflow later in round_up() */
+- if (array_size >= U32_MAX - PAGE_SIZE)
++ cost = array_size;
++ if (cost >= U32_MAX - PAGE_SIZE)
+ return ERR_PTR(-ENOMEM);
++ if (percpu) {
++ cost += (u64)attr->max_entries * elem_size * num_possible_cpus();
++ if (cost >= U32_MAX - PAGE_SIZE)
++ return ERR_PTR(-ENOMEM);
++ }
++ cost = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT;
++
++ ret = bpf_map_precharge_memlock(cost);
++ if (ret < 0)
++ return ERR_PTR(ret);
+
+ /* allocate all map elements and zero-initialize them */
+ array = bpf_map_area_alloc(array_size);
+@@ -107,20 +122,16 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ array->map.key_size = attr->key_size;
+ array->map.value_size = attr->value_size;
+ array->map.max_entries = attr->max_entries;
++ array->map.map_flags = attr->map_flags;
++ array->map.pages = cost;
+ array->elem_size = elem_size;
+
+- if (!percpu)
+- goto out;
+-
+- array_size += (u64) attr->max_entries * elem_size * num_possible_cpus();
+-
+- if (array_size >= U32_MAX - PAGE_SIZE ||
+- elem_size > PCPU_MIN_UNIT_SIZE || bpf_array_alloc_percpu(array)) {
++ if (percpu &&
++ (elem_size > PCPU_MIN_UNIT_SIZE ||
++ bpf_array_alloc_percpu(array))) {
+ bpf_map_area_free(array);
+ return ERR_PTR(-ENOMEM);
+ }
+-out:
+- array->map.pages = round_up(array_size, PAGE_SIZE) >> PAGE_SHIFT;
+
+ return &array->map;
+ }
+diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
+index be8519148c25..a2a232dec236 100644
+--- a/kernel/bpf/stackmap.c
++++ b/kernel/bpf/stackmap.c
+@@ -88,6 +88,7 @@ static struct bpf_map *stack_map_alloc(union bpf_attr *attr)
+ smap->map.key_size = attr->key_size;
+ smap->map.value_size = value_size;
+ smap->map.max_entries = attr->max_entries;
++ smap->map.map_flags = attr->map_flags;
+ smap->n_buckets = n_buckets;
+ smap->map.pages = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT;
+
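The arraymap hunk above computes the full allocation cost, including the per-CPU part, up front in a u64 and charges it against the memlock budget before any memory is touched. A small sketch of the same overflow-safe accounting; the limit and the precharge helper are made-up stand-ins for bpf_map_precharge_memlock(), only the shape of the checks follows the hunk:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)

/* Stand-in for the memlock rlimit check; the real kernel charges
 * the calling user's locked-memory budget. */
static int precharge_pages(uint64_t pages)
{
    const uint64_t memlock_limit_pages = 16384; /* hypothetical */
    return pages <= memlock_limit_pages ? 0 : -1;
}

static int map_cost_pages(uint32_t max_entries, uint32_t elem_size,
                          uint32_t ncpus, int percpu, uint64_t *pages)
{
    uint64_t cost = (uint64_t)max_entries * elem_size;

    if (cost >= UINT32_MAX - PAGE_SIZE)      /* round_up() must not wrap */
        return -1;
    if (percpu) {
        cost += (uint64_t)max_entries * elem_size * ncpus;
        if (cost >= UINT32_MAX - PAGE_SIZE)
            return -1;
    }
    *pages = (cost + PAGE_SIZE - 1) >> PAGE_SHIFT;
    return precharge_pages(*pages);
}

int main(void)
{
    uint64_t pages;

    if (map_cost_pages(1 << 20, 64, 8, 1, &pages) == 0)
        printf("charge %llu pages before allocating\n",
               (unsigned long long)pages);
    else
        printf("rejected up front, nothing allocated\n");
    return 0;
}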
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 2d5cc7dfee14..7c477912f36d 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1884,6 +1884,12 @@ int timers_dead_cpu(unsigned int cpu)
+ spin_lock_irq(&new_base->lock);
+ spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
+
++ /*
++ * The current CPUs base clock might be stale. Update it
++ * before moving the timers over.
++ */
++ forward_timer_base(new_base);
++
+ BUG_ON(old_base->running_timer);
+
+ for (i = 0; i < WHEEL_SIZE; i++)
+diff --git a/net/bridge/br_sysfs_if.c b/net/bridge/br_sysfs_if.c
+index 8bd569695e76..abf711112418 100644
+--- a/net/bridge/br_sysfs_if.c
++++ b/net/bridge/br_sysfs_if.c
+@@ -230,6 +230,9 @@ static ssize_t brport_show(struct kobject *kobj,
+ struct brport_attribute *brport_attr = to_brport_attr(attr);
+ struct net_bridge_port *p = to_brport(kobj);
+
++ if (!brport_attr->show)
++ return -EINVAL;
++
+ return brport_attr->show(p, buf);
+ }
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 8898618bf341..272f84ad16e0 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2199,8 +2199,11 @@ EXPORT_SYMBOL(netif_set_xps_queue);
+ */
+ int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
+ {
++ bool disabling;
+ int rc;
+
++ disabling = txq < dev->real_num_tx_queues;
++
+ if (txq < 1 || txq > dev->num_tx_queues)
+ return -EINVAL;
+
+@@ -2216,15 +2219,19 @@ int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
+ if (dev->num_tc)
+ netif_setup_tc(dev, txq);
+
+- if (txq < dev->real_num_tx_queues) {
++ dev->real_num_tx_queues = txq;
++
++ if (disabling) {
++ synchronize_net();
+ qdisc_reset_all_tx_gt(dev, txq);
+ #ifdef CONFIG_XPS
+ netif_reset_xps_queues_gt(dev, txq);
+ #endif
+ }
++ } else {
++ dev->real_num_tx_queues = txq;
+ }
+
+- dev->real_num_tx_queues = txq;
+ return 0;
+ }
+ EXPORT_SYMBOL(netif_set_real_num_tx_queues);
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 38c1c979ecb1..7e7b7a3efa99 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -640,6 +640,11 @@ int fib_nh_match(struct fib_config *cfg, struct fib_info *fi)
+ fi->fib_nh, cfg))
+ return 1;
+ }
++#ifdef CONFIG_IP_ROUTE_CLASSID
++ if (cfg->fc_flow &&
++ cfg->fc_flow != fi->fib_nh->nh_tclassid)
++ return 1;
++#endif
+ if ((!cfg->fc_oif || cfg->fc_oif == fi->fib_nh->nh_oif) &&
+ (!cfg->fc_gw || cfg->fc_gw == fi->fib_nh->nh_gw))
+ return 0;
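The fib_semantics.c hunk above makes route deletion honour the realm (tclassid): a delete request that specifies a classid now only matches a nexthop carrying that same classid. The matching rule, reduced to plain C with simplified structures; all field and type names here are illustrative:

#include <stdio.h>
#include <stdint.h>

struct nh {                 /* simplified nexthop */
    int      oif;
    uint32_t gw;
    uint32_t tclassid;      /* "realm" */
};

struct req {                /* simplified delete request; 0 = wildcard */
    int      oif;
    uint32_t gw;
    uint32_t flow;          /* requested realm */
};

/* 1 = mismatch (skip this route), 0 = match, mirroring fib_nh_match(). */
static int nh_mismatch(const struct req *cfg, const struct nh *nh)
{
    if (cfg->flow && cfg->flow != nh->tclassid)
        return 1;
    if ((!cfg->oif || cfg->oif == nh->oif) &&
        (!cfg->gw || cfg->gw == nh->gw))
        return 0;
    return 1;
}

int main(void)
{
    struct nh route = { .oif = 2, .gw = 0x0a000001, .tclassid = 7 };
    struct req del  = { .oif = 2, .gw = 0x0a000001, .flow = 5 };

    /* Without the classid check this request would match and delete
     * the wrong route; with it the realms must agree. */
    printf("%s\n", nh_mismatch(&del, &route) ? "no match" : "match");
    return 0;
}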
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 7ac319222558..4c9fbf4f5905 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -126,10 +126,13 @@ static int ip_rt_redirect_silence __read_mostly = ((HZ / 50) << (9 + 1));
+ static int ip_rt_error_cost __read_mostly = HZ;
+ static int ip_rt_error_burst __read_mostly = 5 * HZ;
+ static int ip_rt_mtu_expires __read_mostly = 10 * 60 * HZ;
+-static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20;
++static u32 ip_rt_min_pmtu __read_mostly = 512 + 20 + 20;
+ static int ip_rt_min_advmss __read_mostly = 256;
+
+ static int ip_rt_gc_timeout __read_mostly = RT_GC_TIMEOUT;
++
++static int ip_min_valid_pmtu __read_mostly = IPV4_MIN_MTU;
++
+ /*
+ * Interface to generic destination cache.
+ */
+@@ -2772,7 +2775,8 @@ static struct ctl_table ipv4_route_table[] = {
+ .data = &ip_rt_min_pmtu,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = &ip_min_valid_pmtu,
+ },
+ {
+ .procname = "min_adv_mss",
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 3d7b59ecc76c..a69606031e5f 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1580,7 +1580,7 @@ u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+ */
+ segs = max_t(u32, bytes / mss_now, min_tso_segs);
+
+- return min_t(u32, segs, sk->sk_gso_max_segs);
++ return segs;
+ }
+ EXPORT_SYMBOL(tcp_tso_autosize);
+
+@@ -1592,8 +1592,10 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
+ const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
+ u32 tso_segs = ca_ops->tso_segs_goal ? ca_ops->tso_segs_goal(sk) : 0;
+
+- return tso_segs ? :
+- tcp_tso_autosize(sk, mss_now, sysctl_tcp_min_tso_segs);
++ if (!tso_segs)
++ tso_segs = tcp_tso_autosize(sk, mss_now,
++ sysctl_tcp_min_tso_segs);
++ return min_t(u32, tso_segs, sk->sk_gso_max_segs);
+ }
+
+ /* Returns the portion of skb which can be sent right away */
+@@ -1907,6 +1909,24 @@ static inline void tcp_mtu_check_reprobe(struct sock *sk)
+ }
+ }
+
++static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)
++{
++ struct sk_buff *skb, *next;
++
++ skb = tcp_send_head(sk);
++ tcp_for_write_queue_from_safe(skb, next, sk) {
++ if (len <= skb->len)
++ break;
++
++ if (unlikely(TCP_SKB_CB(skb)->eor))
++ return false;
++
++ len -= skb->len;
++ }
++
++ return true;
++}
++
+ /* Create a new MTU probe if we are ready.
+ * MTU probe is regularly attempting to increase the path MTU by
+ * deliberately sending larger packets. This discovers routing
+@@ -1979,6 +1999,9 @@ static int tcp_mtu_probe(struct sock *sk)
+ return 0;
+ }
+
++ if (!tcp_can_coalesce_send_queue_head(sk, probe_size))
++ return -1;
++
+ /* We're allowed to probe. Build it now. */
+ nskb = sk_stream_alloc_skb(sk, probe_size, GFP_ATOMIC, false);
+ if (!nskb)
+@@ -2014,6 +2037,10 @@ static int tcp_mtu_probe(struct sock *sk)
+ /* We've eaten all the data from this skb.
+ * Throw it away. */
+ TCP_SKB_CB(nskb)->tcp_flags |= TCP_SKB_CB(skb)->tcp_flags;
++ /* If this is the last SKB we copy and eor is set
++ * we need to propagate it to the new skb.
++ */
++ TCP_SKB_CB(nskb)->eor = TCP_SKB_CB(skb)->eor;
+ tcp_unlink_write_queue(skb, sk);
+ sk_wmem_free_skb(sk, skb);
+ } else {
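In the tcp_output.c hunk above, the sk_gso_max_segs clamp moves from tcp_tso_autosize() into tcp_tso_segs(), so a congestion-control module's tso_segs_goal() result is clamped too rather than trusted verbatim. The control flow, modelled in plain userspace C; the function names mirror the kernel's, the numbers are invented:

#include <stdio.h>

static unsigned int min_u32(unsigned int a, unsigned int b)
{
    return a < b ? a : b;
}

/* What a congestion-control module might ask for; 0 = no opinion. */
static unsigned int ca_tso_segs_goal(void) { return 512; }

static unsigned int tso_autosize(unsigned int bytes_per_ms,
                                 unsigned int mss, unsigned int min_segs)
{
    unsigned int segs = bytes_per_ms / mss;
    return segs > min_segs ? segs : min_segs;
}

static unsigned int tso_segs(unsigned int mss, unsigned int gso_max_segs)
{
    unsigned int segs = ca_tso_segs_goal();

    if (!segs)
        segs = tso_autosize(64000, mss, 2);
    /* The fix: clamp *after* consulting the CC module, so its goal
     * can never exceed what the device can actually segment. */
    return min_u32(segs, gso_max_segs);
}

int main(void)
{
    printf("segs = %u\n", tso_segs(1448, 64)); /* 512 clamped to 64 */
    return 0;
}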
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index bef4a94ce1a0..4cd943096afa 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1713,6 +1713,11 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ err = udplite_checksum_init(skb, uh);
+ if (err)
+ return err;
++
++ if (UDP_SKB_CB(skb)->partial_cov) {
++ skb->csum = inet_compute_pseudo(skb, proto);
++ return 0;
++ }
+ }
+
+ /* Note, we are only interested in != 0 or == 0, thus the
+diff --git a/net/ipv6/ip6_checksum.c b/net/ipv6/ip6_checksum.c
+index c0cbcb259f5a..1dc023ca98fd 100644
+--- a/net/ipv6/ip6_checksum.c
++++ b/net/ipv6/ip6_checksum.c
+@@ -72,6 +72,11 @@ int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, int proto)
+ err = udplite_checksum_init(skb, uh);
+ if (err)
+ return err;
++
++ if (UDP_SKB_CB(skb)->partial_cov) {
++ skb->csum = ip6_compute_pseudo(skb, proto);
++ return 0;
++ }
+ }
+
+ /* To support RFC 6936 (allow zero checksum in UDP/IPV6 for tunnels)
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index db6d437002a6..d4d84da28672 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -176,7 +176,7 @@ static void ipip6_tunnel_clone_6rd(struct net_device *dev, struct sit_net *sitn)
+ #ifdef CONFIG_IPV6_SIT_6RD
+ struct ip_tunnel *t = netdev_priv(dev);
+
+- if (t->dev == sitn->fb_tunnel_dev) {
++ if (dev == sitn->fb_tunnel_dev) {
+ ipv6_addr_set(&t->ip6rd.prefix, htonl(0x20020000), 0, 0, 0);
+ t->ip6rd.relay_prefix = 0;
+ t->ip6rd.prefixlen = 16;
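The udp.c and ip6_checksum.c hunks above stop partial-coverage UDP-Lite packets from taking the full-coverage fast path: with partial coverage the checksum must be recomputed over exactly cscov bytes, seeded with the pseudo-header, never taken from hardware. The RFC 3828 coverage rule in miniature, using a standard RFC 1071 checksum over illustrative data (the pseudo-header seed is a made-up constant):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* RFC 1071 ones'-complement sum, folded to 16 bits and inverted. */
static uint16_t csum(const uint8_t *data, size_t len, uint32_t seed)
{
    uint32_t sum = seed;
    size_t i;

    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)data[i] << 8 | data[i + 1];
    if (len & 1)
        sum += (uint32_t)data[len - 1] << 8;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

int main(void)
{
    uint8_t pkt[32];
    uint32_t pseudo = 0x1234;   /* fake pseudo-header sum  */
    size_t cscov = 16;          /* checksum coverage       */

    memset(pkt, 0xab, sizeof(pkt));

    /* Partial coverage: only the first cscov bytes are protected,
     * so a checksum over the whole packet would be wrong. */
    printf("covered:  %04x\n", csum(pkt, cscov, pseudo));
    printf("full pkt: %04x\n", csum(pkt, sizeof(pkt), pseudo));
    return 0;
}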
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index c5a5a6959c1b..ffab94d61e1d 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -7,6 +7,7 @@
+ #include
+ #include
+ #include
++#include
+ #include
+ #include
+ #include
+@@ -756,6 +757,22 @@ static int mpls_nh_build_multi(struct mpls_route_config *cfg,
+ return err;
+ }
+
++static bool mpls_label_ok(struct net *net, unsigned int *index)
++{
++ bool is_ok = true;
++
++ /* Reserved labels may not be set */
++ if (*index < MPLS_LABEL_FIRST_UNRESERVED)
++ is_ok = false;
++
++ /* The full 20 bit range may not be supported. */
++ if (is_ok && *index >= net->mpls.platform_labels)
++ is_ok = false;
++
++ *index = array_index_nospec(*index, net->mpls.platform_labels);
++ return is_ok;
++}
++
+ static int mpls_route_add(struct mpls_route_config *cfg)
+ {
+ struct mpls_route __rcu **platform_label;
+@@ -774,12 +791,7 @@ static int mpls_route_add(struct mpls_route_config *cfg)
+ index = find_free_label(net);
+ }
+
+- /* Reserved labels may not be set */
+- if (index < MPLS_LABEL_FIRST_UNRESERVED)
+- goto errout;
+-
+- /* The full 20 bit range may not be supported. */
+- if (index >= net->mpls.platform_labels)
++ if (!mpls_label_ok(net, &index))
+ goto errout;
+
+ /* Append makes no sense with mpls */
+@@ -840,12 +852,7 @@ static int mpls_route_del(struct mpls_route_config *cfg)
+
+ index = cfg->rc_label;
+
+- /* Reserved labels may not be removed */
+- if (index < MPLS_LABEL_FIRST_UNRESERVED)
+- goto errout;
+-
+- /* The full 20 bit range may not be supported */
+- if (index >= net->mpls.platform_labels)
++ if (!mpls_label_ok(net, &index))
+ goto errout;
+
+ mpls_route_update(net, index, NULL, &cfg->rc_nlinfo);
+@@ -1279,10 +1286,9 @@ static int rtm_to_route_config(struct sk_buff *skb, struct nlmsghdr *nlh,
+ &cfg->rc_label))
+ goto errout;
+
+- /* Reserved labels may not be set */
+- if (cfg->rc_label < MPLS_LABEL_FIRST_UNRESERVED)
++ if (!mpls_label_ok(cfg->rc_nlinfo.nl_net,
++ &cfg->rc_label))
+ goto errout;
+-
+ break;
+ }
+ case RTA_VIA:
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index e1c123d4cdda..c1f59a06da6f 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2258,7 +2258,7 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+ if (cb->start) {
+ ret = cb->start(cb);
+ if (ret)
+- goto error_unlock;
++ goto error_put;
+ }
+
+ nlk->cb_running = true;
+@@ -2278,6 +2278,8 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+ */
+ return -EINTR;
+
++error_put:
++ module_put(control->module);
+ error_unlock:
+ sock_put(sk);
+ mutex_unlock(nlk->cb_mutex);
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 49c28e8ef01b..11702016c900 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1103,6 +1103,7 @@ static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group,
+ {
+ struct sk_buff *tmp;
+ struct net *net, *prev = NULL;
++ bool delivered = false;
+ int err;
+
+ for_each_net_rcu(net) {
+@@ -1114,14 +1115,21 @@ static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group,
+ }
+ err = nlmsg_multicast(prev->genl_sock, tmp,
+ portid, group, flags);
+- if (err)
++ if (!err)
++ delivered = true;
++ else if (err != -ESRCH)
+ goto error;
+ }
+
+ prev = net;
+ }
+
+- return nlmsg_multicast(prev->genl_sock, skb, portid, group, flags);
++ err = nlmsg_multicast(prev->genl_sock, skb, portid, group, flags);
++ if (!err)
++ delivered = true;
++ else if (err != -ESRCH)
++ goto error;
++ return delivered ? 0 : -ESRCH;
+ error:
+ kfree_skb(skb);
+ return err;
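The genetlink.c hunk above changes genlmsg_mcast() to report success if the message reached at least one namespace, treating -ESRCH ("no listeners") as non-fatal while any other error still aborts. The aggregation pattern, reduced to plain C; the delivery helper and its targets are stand-ins for nlmsg_multicast() and the per-namespace sockets:

#include <stdio.h>
#include <stdbool.h>

#define ESRCH 3

/* Stand-in for nlmsg_multicast(): 0 = delivered,
 * -ESRCH = nobody listening, anything else = hard error. */
static int deliver(int target)
{
    return (target % 2) ? 0 : -ESRCH;   /* odd targets have listeners */
}

static int mcast_all(const int *targets, int n)
{
    bool delivered = false;
    int i, err;

    for (i = 0; i < n; i++) {
        err = deliver(targets[i]);
        if (!err)
            delivered = true;
        else if (err != -ESRCH)
            return err;          /* a real failure aborts the loop */
    }
    return delivered ? 0 : -ESRCH;
}

int main(void)
{
    int targets[] = { 1, 2, 3 };

    printf("rc = %d\n", mcast_all(targets, 3)); /* 0: someone got it */
    return 0;
}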
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 5dab1ff3a6c2..59d328603312 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -391,7 +391,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ (char *)&opt, sizeof(opt));
+ if (ret == 0) {
+ ret = kernel_sendmsg(conn->params.local->socket, &msg,
+- iov, 1, iov[0].iov_len);
++ iov, 2, len);
+
+ opt = IPV6_PMTUDISC_DO;
+ kernel_setsockopt(conn->params.local->socket,
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index 5d015270e454..11f69d4c5619 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -324,8 +324,10 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
+ bdst = ip6_dst_lookup_flow(sk, fl6, final_p);
+
+- if (!IS_ERR(bdst) &&
+- ipv6_chk_addr(dev_net(bdst->dev),
++ if (IS_ERR(bdst))
++ continue;
++
++ if (ipv6_chk_addr(dev_net(bdst->dev),
+ &laddr->a.v6.sin6_addr, bdst->dev, 1)) {
+ if (!IS_ERR_OR_NULL(dst))
+ dst_release(dst);
+@@ -334,8 +336,10 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ }
+
+ bmatchlen = sctp_v6_addr_match_len(daddr, &laddr->a);
+- if (matchlen > bmatchlen)
++ if (matchlen > bmatchlen) {
++ dst_release(bdst);
+ continue;
++ }
+
+ if (!IS_ERR_OR_NULL(dst))
+ dst_release(dst);
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 7b523e3f551f..fb7b7632316a 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -510,22 +510,20 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ if (IS_ERR(rt))
+ continue;
+
+- if (!dst)
+- dst = &rt->dst;
+-
+ /* Ensure the src address belongs to the output
+ * interface.
+ */
+ odev = __ip_dev_find(sock_net(sk), laddr->a.v4.sin_addr.s_addr,
+ false);
+ if (!odev || odev->ifindex != fl4->flowi4_oif) {
+- if (&rt->dst != dst)
++ if (!dst)
++ dst = &rt->dst;
++ else
+ dst_release(&rt->dst);
+ continue;
+ }
+
+- if (dst != &rt->dst)
+- dst_release(dst);
++ dst_release(dst);
+ dst = &rt->dst;
+ break;
+ }
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index 9e9690b7afe1..fc67d356b5fa 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -1373,9 +1373,14 @@ static struct sctp_chunk *_sctp_make_chunk(const struct sctp_association *asoc,
+ sctp_chunkhdr_t *chunk_hdr;
+ struct sk_buff *skb;
+ struct sock *sk;
++ int chunklen;
++
++ chunklen = SCTP_PAD4(sizeof(*chunk_hdr) + paylen);
++ if (chunklen > SCTP_MAX_CHUNK_LEN)
++ goto nodata;
+
+ /* No need to allocate LL here, as this is only a chunk. */
+- skb = alloc_skb(SCTP_PAD4(sizeof(sctp_chunkhdr_t) + paylen), gfp);
++ skb = alloc_skb(chunklen, gfp);
+ if (!skb)
+ goto nodata;
+
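Both sctp hunks above fix reference leaks on early continue paths: every route lookup that wins a reference must drop it on every way out of the loop iteration that does not keep it. The discipline in a toy refcounted selection loop; the types and scoring are illustrative, only the release-on-every-exit shape mirrors the fix:

#include <stdio.h>
#include <stdlib.h>

struct dst {
    int refcnt;
    int score;
};

static struct dst *dst_lookup(int score)
{
    struct dst *d = malloc(sizeof(*d));

    if (d) {
        d->refcnt = 1;    /* a lookup returns a counted reference */
        d->score = score;
    }
    return d;
}

static void dst_release(struct dst *d)
{
    if (d && --d->refcnt == 0)
        free(d);
}

int main(void)
{
    struct dst *best = NULL;
    int scores[] = { 3, 9, 5 };
    int i;

    for (i = 0; i < 3; i++) {
        struct dst *cand = dst_lookup(scores[i]);

        if (!cand)
            continue;
        if (best && cand->score <= best->score) {
            dst_release(cand);   /* the fix: don't leak the loser */
            continue;
        }
        dst_release(best);       /* release the previous winner   */
        best = cand;
    }
    if (best)
        printf("best score %d\n", best->score);
    dst_release(best);
    return 0;
}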
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 293f3f213776..ceb162a9dcfd 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -180,7 +180,7 @@ static const struct kernel_param_ops param_ops_xint = {
+ };
+ #define param_check_xint param_check_int
+
+-static int power_save = CONFIG_SND_HDA_POWER_SAVE_DEFAULT;
++static int power_save = -1;
+ module_param(power_save, xint, 0644);
+ MODULE_PARM_DESC(power_save, "Automatic power-saving timeout "
+ "(in second, 0 = disable).");
+@@ -2042,6 +2042,24 @@ static int azx_probe(struct pci_dev *pci,
+ return err;
+ }
+
++#ifdef CONFIG_PM
++/* On some boards setting power_save to a non 0 value leads to clicking /
++ * popping sounds when ever we enter/leave powersaving mode. Ideally we would
++ * figure out how to avoid these sounds, but that is not always feasible.
++ * So we keep a list of devices where we disable powersaving as its known
++ * to causes problems on these devices.
++ */
++static struct snd_pci_quirk power_save_blacklist[] = {
++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++ SND_PCI_QUIRK(0x1849, 0x0c0c, "Asrock B85M-ITX", 0),
++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++ SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0),
++ /* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
++ SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
++ {}
++};
++#endif /* CONFIG_PM */
++
+ /* number of codec slots for each chipset: 0 = default slots (i.e. 4) */
+ static unsigned int azx_max_codecs[AZX_NUM_DRIVERS] = {
+ [AZX_DRIVER_NVIDIA] = 8,
+@@ -2054,6 +2072,7 @@ static int azx_probe_continue(struct azx *chip)
+ struct hdac_bus *bus = azx_bus(chip);
+ struct pci_dev *pci = chip->pci;
+ int dev = chip->dev_index;
++ int val;
+ int err;
+
+ hda->probe_continued = 1;
+@@ -2129,7 +2148,22 @@ static int azx_probe_continue(struct azx *chip)
+
+ chip->running = 1;
+ azx_add_card_list(chip);
+- snd_hda_set_power_save(&chip->bus, power_save * 1000);
++
++ val = power_save;
++#ifdef CONFIG_PM
++ if (val == -1) {
++ const struct snd_pci_quirk *q;
++
++ val = CONFIG_SND_HDA_POWER_SAVE_DEFAULT;
++ q = snd_pci_quirk_lookup(chip->pci, power_save_blacklist);
++ if (q && val) {
++ dev_info(chip->card->dev, "device %04x:%04x is on the power_save blacklist, forcing power_save to 0\n",
++ q->subvendor, q->subdevice);
++ val = 0;
++ }
++ }
++#endif /* CONFIG_PM */
++ snd_hda_set_power_save(&chip->bus, val * 1000);
+ if (azx_has_pm_runtime(chip) || hda->use_vga_switcheroo)
+ pm_runtime_put_autosuspend(&pci->dev);
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 89c166b97e81..974b74e91ef0 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4480,13 +4480,14 @@ static void alc_fixup_tpt470_dock(struct hda_codec *codec,
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
++ snd_hda_apply_pincfgs(codec, pincfgs);
++ } else if (action == HDA_FIXUP_ACT_INIT) {
+ /* Enable DOCK device */
+ snd_hda_codec_write(codec, 0x17, 0,
+ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
+ /* Enable DOCK device */
+ snd_hda_codec_write(codec, 0x19, 0,
+ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
+- snd_hda_apply_pincfgs(codec, pincfgs);
+ }
+ }
+
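The hda_intel.c hunk above resolves power_save = -1 at probe time by consulting a PCI subsystem-ID quirk table. The lookup idiom, modelled in userspace; the IDs are copied from the hunk, but the table type and lookup helper are stand-ins for snd_pci_quirk and snd_pci_quirk_lookup(), and the default value is invented:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct pci_quirk {
    uint16_t subvendor, subdevice;
    const char *name;
    int value;
};

static const struct pci_quirk power_save_denylist[] = {
    { 0x1849, 0x0c0c, "Asrock B85M-ITX", 0 },
    { 0x1043, 0x8733, "Asus Prime X370-Pro", 0 },
    { 0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0 },
    { 0, 0, NULL, 0 }
};

static const struct pci_quirk *quirk_lookup(uint16_t sv, uint16_t sd,
                                            const struct pci_quirk *tbl)
{
    for (; tbl->name; tbl++)
        if (tbl->subvendor == sv && tbl->subdevice == sd)
            return tbl;
    return NULL;
}

int main(void)
{
    int power_save = -1;                  /* -1: "use the default"   */
    const struct pci_quirk *q;

    if (power_save == -1) {
        power_save = 10;                  /* build-time default, say */
        q = quirk_lookup(0x17aa, 0x2227, power_save_denylist);
        if (q && power_save) {
            printf("%s is denylisted, forcing power_save to %d\n",
                   q->name, q->value);
            power_save = q->value;
        }
    }
    printf("power_save = %d\n", power_save);
    return 0;
}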
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 8a59d4782a0f..69bf5cf1e91e 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3277,4 +3277,51 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ },
+
++{
++ /*
++ * Bower's & Wilkins PX headphones only support the 48 kHz sample rate
++ * even though it advertises more. The capture interface doesn't work
++ * even on windows.
++ */
++ USB_DEVICE(0x19b5, 0x0021),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .ifnum = QUIRK_ANY_INTERFACE,
++ .type = QUIRK_COMPOSITE,
++ .data = (const struct snd_usb_audio_quirk[]) {
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_STANDARD_MIXER,
++ },
++ /* Capture */
++ {
++ .ifnum = 1,
++ .type = QUIRK_IGNORE_INTERFACE,
++ },
++ /* Playback */
++ {
++ .ifnum = 2,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S16_LE,
++ .channels = 2,
++ .iface = 2,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .attributes = UAC_EP_CS_ATTR_FILL_MAX |
++ UAC_EP_CS_ATTR_SAMPLE_RATE,
++ .endpoint = 0x03,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC,
++ .rates = SNDRV_PCM_RATE_48000,
++ .rate_min = 48000,
++ .rate_max = 48000,
++ .nr_rates = 1,
++ .rate_table = (unsigned int[]) {
++ 48000
++ }
++ }
++ },
++ }
++ }
++},
++
+ #undef USB_DEVICE_VENDOR_SPEC
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 1b20768e781d..eaae7252f60c 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -976,8 +976,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
+ /* Check for overlaps */
+ r = -EEXIST;
+ kvm_for_each_memslot(slot, __kvm_memslots(kvm, as_id)) {
+- if ((slot->id >= KVM_USER_MEM_SLOTS) ||
+- (slot->id == id))
++ if (slot->id == id)
+ continue;
+ if (!((base_gfn + npages <= slot->base_gfn) ||
+ (base_gfn >= slot->base_gfn + slot->npages)))