From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 03F98138334
	for ; Sat, 23 Mar 2019 14:19:54 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 3524EE09A7;
	Sat, 23 Mar 2019 14:19:53 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id BABD6E09A7
	for ; Sat, 23 Mar 2019 14:19:52 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id E775E335D03
	for ; Sat, 23 Mar 2019 14:19:50 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 6E3E0566
	for ; Sat, 23 Mar 2019 14:19:49 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1553350769.273a15e0e066043f45374fdffe52018c9b507a6b.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.14 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1107_linux-4.14.108.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 273a15e0e066043f45374fdffe52018c9b507a6b
X-VCS-Branch: 4.14
Date: Sat, 23 Mar 2019 14:19:49 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: 
DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: dcc28007-fd21-4797-8eab-07262ed1ab61
X-Archives-Hash: 93e29d5c2a7bff4b0ef42b3617f4e188

commit:     273a15e0e066043f45374fdffe52018c9b507a6b
Author:     Mike Pagano  gentoo org>
AuthorDate: Sat Mar 23 14:19:29 2019 +0000
Commit:     Mike Pagano  gentoo org>
CommitDate: Sat Mar 23 14:19:29 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=273a15e0

proj/linux-patches: Linux patch 4.14.108

Signed-off-by: Mike Pagano  gentoo.org>

 0000_README               |    4 +
 1107_linux-4.14.108.patch | 7555 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7559 insertions(+)

diff --git a/0000_README b/0000_README
index a3295c6..19f9dbd 100644
--- a/0000_README
+++ b/0000_README
@@ -471,6 +471,10 @@ Patch: 1106_4.14.107.patch
 From: http://www.kernel.org
 Desc: Linux 4.14.107
 
+Patch: 1107_4.14.108.patch
+From: http://www.kernel.org
+Desc: Linux 4.14.108
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.

diff --git a/1107_linux-4.14.108.patch b/1107_linux-4.14.108.patch
new file mode 100644
index 0000000..2d5a734
--- /dev/null
+++ b/1107_linux-4.14.108.patch
@@ -0,0 +1,7555 @@
+diff --git a/Makefile b/Makefile
+index e3e2121718a8..170411b62525 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 14
+-SUBLEVEL = 107
++SUBLEVEL = 108
+ EXTRAVERSION =
+ NAME = Petit Gorille
+ 
+diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
+index 9d06c9478a0d..82050893d0b3 100644
+--- a/arch/arc/Kconfig
++++ b/arch/arc/Kconfig
+@@ -417,6 +417,14 @@ config ARC_HAS_ACCL_REGS
+ 	  (also referred to as r58:r59). These can also be used by gcc as GPR so
+ 	  kernel needs to save/restore per process
++config ARC_IRQ_NO_AUTOSAVE
++	bool "Disable hardware autosave regfile on interrupts"
++	default n
++	help
++	  On HS cores, taken interrupt auto saves the regfile on stack.
++ This is programmable and can be optionally disabled in which case ++ software INTERRUPT_PROLOGUE/EPILGUE do the needed work ++ + endif # ISA_ARCV2 + + endmenu # "ARC CPU Configuration" +diff --git a/arch/arc/include/asm/entry-arcv2.h b/arch/arc/include/asm/entry-arcv2.h +index 257a68f3c2fe..9f581553dcc3 100644 +--- a/arch/arc/include/asm/entry-arcv2.h ++++ b/arch/arc/include/asm/entry-arcv2.h +@@ -17,6 +17,33 @@ + ; + ; Now manually save: r12, sp, fp, gp, r25 + ++#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE ++.ifnc \called_from, exception ++ st.as r9, [sp, -10] ; save r9 in it's final stack slot ++ sub sp, sp, 12 ; skip JLI, LDI, EI ++ ++ PUSH lp_count ++ PUSHAX lp_start ++ PUSHAX lp_end ++ PUSH blink ++ ++ PUSH r11 ++ PUSH r10 ++ ++ sub sp, sp, 4 ; skip r9 ++ ++ PUSH r8 ++ PUSH r7 ++ PUSH r6 ++ PUSH r5 ++ PUSH r4 ++ PUSH r3 ++ PUSH r2 ++ PUSH r1 ++ PUSH r0 ++.endif ++#endif ++ + #ifdef CONFIG_ARC_HAS_ACCL_REGS + PUSH r59 + PUSH r58 +@@ -86,6 +113,33 @@ + POP r59 + #endif + ++#ifdef CONFIG_ARC_IRQ_NO_AUTOSAVE ++.ifnc \called_from, exception ++ POP r0 ++ POP r1 ++ POP r2 ++ POP r3 ++ POP r4 ++ POP r5 ++ POP r6 ++ POP r7 ++ POP r8 ++ POP r9 ++ POP r10 ++ POP r11 ++ ++ POP blink ++ POPAX lp_end ++ POPAX lp_start ++ ++ POP r9 ++ mov lp_count, r9 ++ ++ add sp, sp, 12 ; skip JLI, LDI, EI ++ ld.as r9, [sp, -10] ; reload r9 which got clobbered ++.endif ++#endif ++ + .endm + + /*------------------------------------------------------------------------*/ +diff --git a/arch/arc/include/asm/uaccess.h b/arch/arc/include/asm/uaccess.h +index c9173c02081c..eabc3efa6c6d 100644 +--- a/arch/arc/include/asm/uaccess.h ++++ b/arch/arc/include/asm/uaccess.h +@@ -207,7 +207,7 @@ raw_copy_from_user(void *to, const void __user *from, unsigned long n) + */ + "=&r" (tmp), "+r" (to), "+r" (from) + : +- : "lp_count", "lp_start", "lp_end", "memory"); ++ : "lp_count", "memory"); + + return n; + } +@@ -433,7 +433,7 @@ raw_copy_to_user(void __user *to, const void *from, unsigned long n) + */ + "=&r" (tmp), 
"+r" (to), "+r" (from) + : +- : "lp_count", "lp_start", "lp_end", "memory"); ++ : "lp_count", "memory"); + + return n; + } +@@ -653,7 +653,7 @@ static inline unsigned long __arc_clear_user(void __user *to, unsigned long n) + " .previous \n" + : "+r"(d_char), "+r"(res) + : "i"(0) +- : "lp_count", "lp_start", "lp_end", "memory"); ++ : "lp_count", "memory"); + + return res; + } +@@ -686,7 +686,7 @@ __arc_strncpy_from_user(char *dst, const char __user *src, long count) + " .previous \n" + : "+r"(res), "+r"(dst), "+r"(src), "=r"(val) + : "g"(-EFAULT), "r"(count) +- : "lp_count", "lp_start", "lp_end", "memory"); ++ : "lp_count", "memory"); + + return res; + } +diff --git a/arch/arc/kernel/entry-arcv2.S b/arch/arc/kernel/entry-arcv2.S +index cc558a25b8fa..562089d62d9d 100644 +--- a/arch/arc/kernel/entry-arcv2.S ++++ b/arch/arc/kernel/entry-arcv2.S +@@ -209,7 +209,9 @@ restore_regs: + ;####### Return from Intr ####### + + debug_marker_l1: +- bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot ++ ; bbit1.nt r0, STATUS_DE_BIT, .Lintr_ret_to_delay_slot ++ btst r0, STATUS_DE_BIT ; Z flag set if bit clear ++ bnz .Lintr_ret_to_delay_slot ; branch if STATUS_DE_BIT set + + .Lisr_ret_fast_path: + ; Handle special case #1: (Entry via Exception, Return via IRQ) +diff --git a/arch/arc/kernel/intc-arcv2.c b/arch/arc/kernel/intc-arcv2.c +index 067ea362fb3e..cf18b3e5a934 100644 +--- a/arch/arc/kernel/intc-arcv2.c ++++ b/arch/arc/kernel/intc-arcv2.c +@@ -49,11 +49,13 @@ void arc_init_IRQ(void) + + *(unsigned int *)&ictrl = 0; + ++#ifndef CONFIG_ARC_IRQ_NO_AUTOSAVE + ictrl.save_nr_gpr_pairs = 6; /* r0 to r11 (r12 saved manually) */ + ictrl.save_blink = 1; + ictrl.save_lp_regs = 1; /* LP_COUNT, LP_START, LP_END */ + ictrl.save_u_to_u = 0; /* user ctxt saved on kernel stack */ + ictrl.save_idx_regs = 1; /* JLI, LDI, EI */ ++#endif + + WRITE_AUX(AUX_IRQ_CTRL, ictrl); + +diff --git a/arch/arc/lib/memcpy-archs.S b/arch/arc/lib/memcpy-archs.S +index d61044dd8b58..ea14b0bf3116 100644 +--- 
a/arch/arc/lib/memcpy-archs.S ++++ b/arch/arc/lib/memcpy-archs.S +@@ -25,15 +25,11 @@ + #endif + + #ifdef CONFIG_ARC_HAS_LL64 +-# define PREFETCH_READ(RX) prefetch [RX, 56] +-# define PREFETCH_WRITE(RX) prefetchw [RX, 64] + # define LOADX(DST,RX) ldd.ab DST, [RX, 8] + # define STOREX(SRC,RX) std.ab SRC, [RX, 8] + # define ZOLSHFT 5 + # define ZOLAND 0x1F + #else +-# define PREFETCH_READ(RX) prefetch [RX, 28] +-# define PREFETCH_WRITE(RX) prefetchw [RX, 32] + # define LOADX(DST,RX) ld.ab DST, [RX, 4] + # define STOREX(SRC,RX) st.ab SRC, [RX, 4] + # define ZOLSHFT 4 +@@ -41,8 +37,6 @@ + #endif + + ENTRY_CFI(memcpy) +- prefetch [r1] ; Prefetch the read location +- prefetchw [r0] ; Prefetch the write location + mov.f 0, r2 + ;;; if size is zero + jz.d [blink] +@@ -72,8 +66,6 @@ ENTRY_CFI(memcpy) + lpnz @.Lcopy32_64bytes + ;; LOOP START + LOADX (r6, r1) +- PREFETCH_READ (r1) +- PREFETCH_WRITE (r3) + LOADX (r8, r1) + LOADX (r10, r1) + LOADX (r4, r1) +@@ -117,9 +109,7 @@ ENTRY_CFI(memcpy) + lpnz @.Lcopy8bytes_1 + ;; LOOP START + ld.ab r6, [r1, 4] +- prefetch [r1, 28] ;Prefetch the next read location + ld.ab r8, [r1,4] +- prefetchw [r3, 32] ;Prefetch the next write location + + SHIFT_1 (r7, r6, 24) + or r7, r7, r5 +@@ -162,9 +152,7 @@ ENTRY_CFI(memcpy) + lpnz @.Lcopy8bytes_2 + ;; LOOP START + ld.ab r6, [r1, 4] +- prefetch [r1, 28] ;Prefetch the next read location + ld.ab r8, [r1,4] +- prefetchw [r3, 32] ;Prefetch the next write location + + SHIFT_1 (r7, r6, 16) + or r7, r7, r5 +@@ -204,9 +192,7 @@ ENTRY_CFI(memcpy) + lpnz @.Lcopy8bytes_3 + ;; LOOP START + ld.ab r6, [r1, 4] +- prefetch [r1, 28] ;Prefetch the next read location + ld.ab r8, [r1,4] +- prefetchw [r3, 32] ;Prefetch the next write location + + SHIFT_1 (r7, r6, 8) + or r7, r7, r5 +diff --git a/arch/arc/plat-hsdk/Kconfig b/arch/arc/plat-hsdk/Kconfig +index fcc9a9e27e9c..8fb1600b29b7 100644 +--- a/arch/arc/plat-hsdk/Kconfig ++++ b/arch/arc/plat-hsdk/Kconfig +@@ -9,5 +9,6 @@ menuconfig ARC_SOC_HSDK + bool "ARC HS 
Development Kit SOC" + depends on ISA_ARCV2 + select ARC_HAS_ACCL_REGS ++ select ARC_IRQ_NO_AUTOSAVE + select CLK_HSDK + select RESET_HSDK +diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig +index d1346a160760..cf69aab648fb 100644 +--- a/arch/arm/Kconfig ++++ b/arch/arm/Kconfig +@@ -1447,6 +1447,7 @@ config NR_CPUS + config HOTPLUG_CPU + bool "Support for hot-pluggable CPUs" + depends on SMP ++ select GENERIC_IRQ_MIGRATION + help + Say Y here to experiment with turning CPUs off and on. CPUs + can be controlled through /sys/devices/system/cpu. +diff --git a/arch/arm/crypto/crct10dif-ce-core.S b/arch/arm/crypto/crct10dif-ce-core.S +index ce45ba0c0687..16019b5961e7 100644 +--- a/arch/arm/crypto/crct10dif-ce-core.S ++++ b/arch/arm/crypto/crct10dif-ce-core.S +@@ -124,10 +124,10 @@ ENTRY(crc_t10dif_pmull) + vext.8 q10, qzr, q0, #4 + + // receive the initial 64B data, xor the initial crc value +- vld1.64 {q0-q1}, [arg2, :128]! +- vld1.64 {q2-q3}, [arg2, :128]! +- vld1.64 {q4-q5}, [arg2, :128]! +- vld1.64 {q6-q7}, [arg2, :128]! ++ vld1.64 {q0-q1}, [arg2]! ++ vld1.64 {q2-q3}, [arg2]! ++ vld1.64 {q4-q5}, [arg2]! ++ vld1.64 {q6-q7}, [arg2]! + CPU_LE( vrev64.8 q0, q0 ) + CPU_LE( vrev64.8 q1, q1 ) + CPU_LE( vrev64.8 q2, q2 ) +@@ -167,7 +167,7 @@ CPU_LE( vrev64.8 q7, q7 ) + _fold_64_B_loop: + + .macro fold64, reg1, reg2 +- vld1.64 {q11-q12}, [arg2, :128]! ++ vld1.64 {q11-q12}, [arg2]! + + vmull.p64 q8, \reg1\()h, d21 + vmull.p64 \reg1, \reg1\()l, d20 +@@ -238,7 +238,7 @@ _16B_reduction_loop: + vmull.p64 q7, d15, d21 + veor.8 q7, q7, q8 + +- vld1.64 {q0}, [arg2, :128]! ++ vld1.64 {q0}, [arg2]! + CPU_LE( vrev64.8 q0, q0 ) + vswp d0, d1 + veor.8 q7, q7, q0 +@@ -335,7 +335,7 @@ _less_than_128: + vmov.i8 q0, #0 + vmov s3, arg1_low32 // get the initial crc value + +- vld1.64 {q7}, [arg2, :128]! ++ vld1.64 {q7}, [arg2]! 
+ CPU_LE( vrev64.8 q7, q7 ) + vswp d14, d15 + veor.8 q7, q7, q0 +diff --git a/arch/arm/crypto/crct10dif-ce-glue.c b/arch/arm/crypto/crct10dif-ce-glue.c +index d428355cf38d..14c19c70a841 100644 +--- a/arch/arm/crypto/crct10dif-ce-glue.c ++++ b/arch/arm/crypto/crct10dif-ce-glue.c +@@ -35,26 +35,15 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data, + unsigned int length) + { + u16 *crc = shash_desc_ctx(desc); +- unsigned int l; + +- if (!may_use_simd()) { +- *crc = crc_t10dif_generic(*crc, data, length); ++ if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) { ++ kernel_neon_begin(); ++ *crc = crc_t10dif_pmull(*crc, data, length); ++ kernel_neon_end(); + } else { +- if (unlikely((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) { +- l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE - +- ((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)); +- +- *crc = crc_t10dif_generic(*crc, data, l); +- +- length -= l; +- data += l; +- } +- if (length > 0) { +- kernel_neon_begin(); +- *crc = crc_t10dif_pmull(*crc, data, length); +- kernel_neon_end(); +- } ++ *crc = crc_t10dif_generic(*crc, data, length); + } ++ + return 0; + } + +diff --git a/arch/arm/include/asm/irq.h b/arch/arm/include/asm/irq.h +index b6f319606e30..2de321e89b94 100644 +--- a/arch/arm/include/asm/irq.h ++++ b/arch/arm/include/asm/irq.h +@@ -25,7 +25,6 @@ + #ifndef __ASSEMBLY__ + struct irqaction; + struct pt_regs; +-extern void migrate_irqs(void); + + extern void asm_do_IRQ(unsigned int, struct pt_regs *); + void handle_IRQ(unsigned int, struct pt_regs *); +diff --git a/arch/arm/kernel/irq.c b/arch/arm/kernel/irq.c +index ece04a457486..5b07c7a31c31 100644 +--- a/arch/arm/kernel/irq.c ++++ b/arch/arm/kernel/irq.c +@@ -31,7 +31,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -119,64 +118,3 @@ int __init arch_probe_nr_irqs(void) + return nr_irqs; + } + #endif +- +-#ifdef CONFIG_HOTPLUG_CPU +-static bool migrate_one_irq(struct irq_desc *desc) +-{ +- struct irq_data 
*d = irq_desc_get_irq_data(desc); +- const struct cpumask *affinity = irq_data_get_affinity_mask(d); +- struct irq_chip *c; +- bool ret = false; +- +- /* +- * If this is a per-CPU interrupt, or the affinity does not +- * include this CPU, then we have nothing to do. +- */ +- if (irqd_is_per_cpu(d) || !cpumask_test_cpu(smp_processor_id(), affinity)) +- return false; +- +- if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) { +- affinity = cpu_online_mask; +- ret = true; +- } +- +- c = irq_data_get_irq_chip(d); +- if (!c->irq_set_affinity) +- pr_debug("IRQ%u: unable to set affinity\n", d->irq); +- else if (c->irq_set_affinity(d, affinity, false) == IRQ_SET_MASK_OK && ret) +- cpumask_copy(irq_data_get_affinity_mask(d), affinity); +- +- return ret; +-} +- +-/* +- * The current CPU has been marked offline. Migrate IRQs off this CPU. +- * If the affinity settings do not allow other CPUs, force them onto any +- * available CPU. +- * +- * Note: we must iterate over all IRQs, whether they have an attached +- * action structure or not, as we need to get chained interrupts too. 
+- */ +-void migrate_irqs(void) +-{ +- unsigned int i; +- struct irq_desc *desc; +- unsigned long flags; +- +- local_irq_save(flags); +- +- for_each_irq_desc(i, desc) { +- bool affinity_broken; +- +- raw_spin_lock(&desc->lock); +- affinity_broken = migrate_one_irq(desc); +- raw_spin_unlock(&desc->lock); +- +- if (affinity_broken) +- pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n", +- i, smp_processor_id()); +- } +- +- local_irq_restore(flags); +-} +-#endif /* CONFIG_HOTPLUG_CPU */ +diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c +index f57333f46242..65f85737c6a2 100644 +--- a/arch/arm/kernel/smp.c ++++ b/arch/arm/kernel/smp.c +@@ -254,7 +254,7 @@ int __cpu_disable(void) + /* + * OK - migrate IRQs away from this CPU + */ +- migrate_irqs(); ++ irq_migrate_all_off_this_cpu(); + + /* + * Flush user cache and TLB mappings, and then remove this CPU +diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle44xx.c +index a8b291f00109..dae514c8276a 100644 +--- a/arch/arm/mach-omap2/cpuidle44xx.c ++++ b/arch/arm/mach-omap2/cpuidle44xx.c +@@ -152,6 +152,10 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev, + mpuss_can_lose_context = (cx->mpu_state == PWRDM_POWER_RET) && + (cx->mpu_logic_state == PWRDM_POWER_OFF); + ++ /* Enter broadcast mode for periodic timers */ ++ tick_broadcast_enable(); ++ ++ /* Enter broadcast mode for one-shot timers */ + tick_broadcast_enter(); + + /* +@@ -218,15 +222,6 @@ fail: + return index; + } + +-/* +- * For each cpu, setup the broadcast timer because local timers +- * stops for the states above C1. 
+- */ +-static void omap_setup_broadcast_timer(void *arg) +-{ +- tick_broadcast_enable(); +-} +- + static struct cpuidle_driver omap4_idle_driver = { + .name = "omap4_idle", + .owner = THIS_MODULE, +@@ -319,8 +314,5 @@ int __init omap4_idle_init(void) + if (!cpu_clkdm[0] || !cpu_clkdm[1]) + return -ENODEV; + +- /* Configure the broadcast timer on each cpu */ +- on_each_cpu(omap_setup_broadcast_timer, NULL, 1); +- + return cpuidle_register(idle_driver, cpu_online_mask); + } +diff --git a/arch/arm/mach-omap2/display.c b/arch/arm/mach-omap2/display.c +index b3f6eb5d04a2..6e7440ef503a 100644 +--- a/arch/arm/mach-omap2/display.c ++++ b/arch/arm/mach-omap2/display.c +@@ -84,6 +84,7 @@ static int omap4_dsi_mux_pads(int dsi_id, unsigned lanes) + u32 enable_mask, enable_shift; + u32 pipd_mask, pipd_shift; + u32 reg; ++ int ret; + + if (dsi_id == 0) { + enable_mask = OMAP4_DSI1_LANEENABLE_MASK; +@@ -99,7 +100,11 @@ static int omap4_dsi_mux_pads(int dsi_id, unsigned lanes) + return -ENODEV; + } + +- regmap_read(omap4_dsi_mux_syscon, OMAP4_DSIPHY_SYSCON_OFFSET, ®); ++ ret = regmap_read(omap4_dsi_mux_syscon, ++ OMAP4_DSIPHY_SYSCON_OFFSET, ++ ®); ++ if (ret) ++ return ret; + + reg &= ~enable_mask; + reg &= ~pipd_mask; +diff --git a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c +index 6cac7da15e2b..2e8ad83beda8 100644 +--- a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c ++++ b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c +@@ -70,16 +70,16 @@ static int osiris_dvs_notify(struct notifier_block *nb, + + switch (val) { + case CPUFREQ_PRECHANGE: +- if (old_dvs & !new_dvs || +- cur_dvs & !new_dvs) { ++ if ((old_dvs && !new_dvs) || ++ (cur_dvs && !new_dvs)) { + pr_debug("%s: exiting dvs\n", __func__); + cur_dvs = false; + gpio_set_value(OSIRIS_GPIO_DVS, 1); + } + break; + case CPUFREQ_POSTCHANGE: +- if (!old_dvs & new_dvs || +- !cur_dvs & new_dvs) { ++ if ((!old_dvs && new_dvs) || ++ (!cur_dvs && new_dvs)) { + pr_debug("entering dvs\n"); + cur_dvs = true; + 
gpio_set_value(OSIRIS_GPIO_DVS, 0); +diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S +index e3a375c4cb83..1b151442dac1 100644 +--- a/arch/arm64/crypto/aes-ce-ccm-core.S ++++ b/arch/arm64/crypto/aes-ce-ccm-core.S +@@ -74,12 +74,13 @@ ENTRY(ce_aes_ccm_auth_data) + beq 10f + ext v0.16b, v0.16b, v0.16b, #1 /* rotate out the mac bytes */ + b 7b +-8: mov w7, w8 ++8: cbz w8, 91f ++ mov w7, w8 + add w8, w8, #16 + 9: ext v1.16b, v1.16b, v1.16b, #1 + adds w7, w7, #1 + bne 9b +- eor v0.16b, v0.16b, v1.16b ++91: eor v0.16b, v0.16b, v1.16b + st1 {v0.16b}, [x0] + 10: str w8, [x3] + ret +diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c +index a1254036f2b1..ae0d26705851 100644 +--- a/arch/arm64/crypto/aes-ce-ccm-glue.c ++++ b/arch/arm64/crypto/aes-ce-ccm-glue.c +@@ -123,7 +123,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[], + abytes -= added; + } + +- while (abytes > AES_BLOCK_SIZE) { ++ while (abytes >= AES_BLOCK_SIZE) { + __aes_arm64_encrypt(key->key_enc, mac, mac, + num_rounds(key)); + crypto_xor(mac, in, AES_BLOCK_SIZE); +@@ -137,8 +137,6 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[], + num_rounds(key)); + crypto_xor(mac, in, abytes); + *macp = abytes; +- } else { +- *macp = 0; + } + } + } +diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S +index ca0472500433..3b18e3e79531 100644 +--- a/arch/arm64/crypto/aes-neonbs-core.S ++++ b/arch/arm64/crypto/aes-neonbs-core.S +@@ -940,7 +940,7 @@ CPU_LE( rev x8, x8 ) + 8: next_ctr v0 + cbnz x4, 99b + +-0: st1 {v0.16b}, [x5] ++ st1 {v0.16b}, [x5] + ldp x29, x30, [sp], #16 + ret + +@@ -948,6 +948,9 @@ CPU_LE( rev x8, x8 ) + * If we are handling the tail of the input (x6 != NULL), return the + * final keystream block back to the caller. 
+ */ ++0: cbz x6, 8b ++ st1 {v0.16b}, [x6] ++ b 8b + 1: cbz x6, 8b + st1 {v1.16b}, [x6] + b 8b +diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c +index 96f0cae4a022..617bcfc1b080 100644 +--- a/arch/arm64/crypto/crct10dif-ce-glue.c ++++ b/arch/arm64/crypto/crct10dif-ce-glue.c +@@ -36,26 +36,13 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data, + unsigned int length) + { + u16 *crc = shash_desc_ctx(desc); +- unsigned int l; + +- if (unlikely((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) { +- l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE - +- ((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)); +- +- *crc = crc_t10dif_generic(*crc, data, l); +- +- length -= l; +- data += l; +- } +- +- if (length > 0) { +- if (may_use_simd()) { +- kernel_neon_begin(); +- *crc = crc_t10dif_pmull(*crc, data, length); +- kernel_neon_end(); +- } else { +- *crc = crc_t10dif_generic(*crc, data, length); +- } ++ if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) { ++ kernel_neon_begin(); ++ *crc = crc_t10dif_pmull(*crc, data, length); ++ kernel_neon_end(); ++ } else { ++ *crc = crc_t10dif_generic(*crc, data, length); + } + + return 0; +diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h +index 1473fc2f7ab7..89691c86640a 100644 +--- a/arch/arm64/include/asm/hardirq.h ++++ b/arch/arm64/include/asm/hardirq.h +@@ -17,8 +17,12 @@ + #define __ASM_HARDIRQ_H + + #include ++#include + #include ++#include + #include ++#include ++#include + + #define NR_IPI 7 + +@@ -37,6 +41,33 @@ u64 smp_irq_stat_cpu(unsigned int cpu); + + #define __ARCH_IRQ_EXIT_IRQS_DISABLED 1 + ++struct nmi_ctx { ++ u64 hcr; ++}; ++ ++DECLARE_PER_CPU(struct nmi_ctx, nmi_contexts); ++ ++#define arch_nmi_enter() \ ++ do { \ ++ if (is_kernel_in_hyp_mode()) { \ ++ struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts); \ ++ nmi_ctx->hcr = read_sysreg(hcr_el2); \ ++ if (!(nmi_ctx->hcr & HCR_TGE)) { \ ++ write_sysreg(nmi_ctx->hcr | HCR_TGE, 
hcr_el2); \ ++ isb(); \ ++ } \ ++ } \ ++ } while (0) ++ ++#define arch_nmi_exit() \ ++ do { \ ++ if (is_kernel_in_hyp_mode()) { \ ++ struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts); \ ++ if (!(nmi_ctx->hcr & HCR_TGE)) \ ++ write_sysreg(nmi_ctx->hcr, hcr_el2); \ ++ } \ ++ } while (0) ++ + static inline void ack_bad_irq(unsigned int irq) + { + extern unsigned long irq_err_count; +diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S +index ec393275ba04..1371542de0d3 100644 +--- a/arch/arm64/kernel/head.S ++++ b/arch/arm64/kernel/head.S +@@ -442,8 +442,7 @@ set_hcr: + /* GICv3 system register access */ + mrs x0, id_aa64pfr0_el1 + ubfx x0, x0, #24, #4 +- cmp x0, #1 +- b.ne 3f ++ cbz x0, 3f + + mrs_s x0, SYS_ICC_SRE_EL2 + orr x0, x0, #ICC_SRE_EL2_SRE // Set ICC_SRE_EL2.SRE==1 +diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c +index 713561e5bcab..b91abb8f7cd4 100644 +--- a/arch/arm64/kernel/irq.c ++++ b/arch/arm64/kernel/irq.c +@@ -32,6 +32,9 @@ + + unsigned long irq_err_count; + ++/* Only access this in an NMI enter/exit */ ++DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts); ++ + DEFINE_PER_CPU(unsigned long *, irq_stack_ptr); + + int arch_show_interrupts(struct seq_file *p, int prec) +diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c +index 2122cd187f19..470afb3a04ca 100644 +--- a/arch/arm64/kernel/kgdb.c ++++ b/arch/arm64/kernel/kgdb.c +@@ -233,27 +233,33 @@ int kgdb_arch_handle_exception(int exception_vector, int signo, + + static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr) + { ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + kgdb_handle_exception(1, SIGTRAP, 0, regs); +- return 0; ++ return DBG_HOOK_HANDLED; + } + NOKPROBE_SYMBOL(kgdb_brk_fn) + + static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr) + { ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + compiled_break = 1; + kgdb_handle_exception(1, SIGTRAP, 0, regs); + +- return 0; ++ return DBG_HOOK_HANDLED; + } + 
NOKPROBE_SYMBOL(kgdb_compiled_brk_fn); + + static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr) + { +- if (!kgdb_single_step) ++ if (user_mode(regs) || !kgdb_single_step) + return DBG_HOOK_ERROR; + + kgdb_handle_exception(1, SIGTRAP, 0, regs); +- return 0; ++ return DBG_HOOK_HANDLED; + } + NOKPROBE_SYMBOL(kgdb_step_brk_fn); + +diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c +index 7d8c33279e9f..6a6d661f38fb 100644 +--- a/arch/arm64/kernel/probes/kprobes.c ++++ b/arch/arm64/kernel/probes/kprobes.c +@@ -458,6 +458,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr) + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); + int retval; + ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + /* return error if this is not our step */ + retval = kprobe_ss_hit(kcb, instruction_pointer(regs)); + +@@ -474,6 +477,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr) + int __kprobes + kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr) + { ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + kprobe_handler(regs); + return DBG_HOOK_HANDLED; + } +diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c +index a74311beda35..c1c5a57249d2 100644 +--- a/arch/arm64/kvm/reset.c ++++ b/arch/arm64/kvm/reset.c +@@ -95,16 +95,33 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext) + * This function finds the right table above and sets the registers on + * the virtual CPU struct to their architecturally defined reset + * values. ++ * ++ * Note: This function can be called from two paths: The KVM_ARM_VCPU_INIT ++ * ioctl or as part of handling a request issued by another VCPU in the PSCI ++ * handling code. In the first case, the VCPU will not be loaded, and in the ++ * second case the VCPU will be loaded. 
Because this function operates purely ++ * on the memory-backed valus of system registers, we want to do a full put if ++ * we were loaded (handling a request) and load the values back at the end of ++ * the function. Otherwise we leave the state alone. In both cases, we ++ * disable preemption around the vcpu reset as we would otherwise race with ++ * preempt notifiers which also call put/load. + */ + int kvm_reset_vcpu(struct kvm_vcpu *vcpu) + { + const struct kvm_regs *cpu_reset; ++ int ret = -EINVAL; ++ bool loaded; ++ ++ preempt_disable(); ++ loaded = (vcpu->cpu != -1); ++ if (loaded) ++ kvm_arch_vcpu_put(vcpu); + + switch (vcpu->arch.target) { + default: + if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) { + if (!cpu_has_32bit_el1()) +- return -EINVAL; ++ goto out; + cpu_reset = &default_regs_reset32; + } else { + cpu_reset = &default_regs_reset; +@@ -127,5 +144,10 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu) + vcpu->arch.workaround_flags |= VCPU_WORKAROUND_2_FLAG; + + /* Reset timer */ +- return kvm_timer_vcpu_reset(vcpu); ++ ret = kvm_timer_vcpu_reset(vcpu); ++out: ++ if (loaded) ++ kvm_arch_vcpu_load(vcpu, smp_processor_id()); ++ preempt_enable(); ++ return ret; + } +diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c +index 2e070d3baf9f..cfbf7bd0dfba 100644 +--- a/arch/arm64/kvm/sys_regs.c ++++ b/arch/arm64/kvm/sys_regs.c +@@ -1079,7 +1079,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { + + { SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 }, + { SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 }, +- { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 }, ++ { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 }, + }; + + static bool trap_dbgidr(struct kvm_vcpu *vcpu, +diff --git a/arch/m68k/Makefile b/arch/m68k/Makefile +index f0dd9fc84002..a229d28e14cc 100644 +--- a/arch/m68k/Makefile ++++ b/arch/m68k/Makefile +@@ -58,7 +58,10 @@ cpuflags-$(CONFIG_M5206e) := $(call 
cc-option,-mcpu=5206e,-m5200) + cpuflags-$(CONFIG_M5206) := $(call cc-option,-mcpu=5206,-m5200) + + KBUILD_AFLAGS += $(cpuflags-y) +-KBUILD_CFLAGS += $(cpuflags-y) -pipe ++KBUILD_CFLAGS += $(cpuflags-y) ++ ++KBUILD_CFLAGS += -pipe -ffreestanding ++ + ifdef CONFIG_MMU + # without -fno-strength-reduce the 53c7xx.c driver fails ;-( + KBUILD_CFLAGS += -fno-strength-reduce -ffixed-a2 +diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h +index a9af1d2dcd69..673049bf29b6 100644 +--- a/arch/mips/include/asm/kvm_host.h ++++ b/arch/mips/include/asm/kvm_host.h +@@ -1132,7 +1132,7 @@ static inline void kvm_arch_hardware_unsetup(void) {} + static inline void kvm_arch_sync_events(struct kvm *kvm) {} + static inline void kvm_arch_free_memslot(struct kvm *kvm, + struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {} +-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {} ++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} + static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} + static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {} + static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {} +diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h +index c459f937d484..8438df443540 100644 +--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h ++++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h +@@ -55,6 +55,14 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma, + #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE + static inline bool gigantic_page_supported(void) + { ++ /* ++ * We used gigantic page reservation with hypervisor assist in some case. ++ * We cannot use runtime allocation of gigantic pages in those platforms ++ * This is hash translation mode LPARs. 
++ */ ++ if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled()) ++ return false; ++ + return true; + } + #endif +diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h +index e372ed871c51..e3ba58f64c3d 100644 +--- a/arch/powerpc/include/asm/kvm_host.h ++++ b/arch/powerpc/include/asm/kvm_host.h +@@ -809,7 +809,7 @@ struct kvm_vcpu_arch { + static inline void kvm_arch_hardware_disable(void) {} + static inline void kvm_arch_hardware_unsetup(void) {} + static inline void kvm_arch_sync_events(struct kvm *kvm) {} +-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {} ++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} + static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {} + static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} + static inline void kvm_arch_exit(void) {} +diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S +index e780e1fbf6c2..4ae464b9d490 100644 +--- a/arch/powerpc/kernel/entry_32.S ++++ b/arch/powerpc/kernel/entry_32.S +@@ -726,6 +726,9 @@ fast_exception_return: + mtcr r10 + lwz r10,_LINK(r11) + mtlr r10 ++ /* Clear the exception_marker on the stack to avoid confusing stacktrace */ ++ li r10, 0 ++ stw r10, 8(r11) + REST_GPR(10, r11) + #ifdef CONFIG_PPC_8xx_PERF_EVENT + mtspr SPRN_NRI, r0 +@@ -963,6 +966,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX) + mtcrf 0xFF,r10 + mtlr r11 + ++ /* Clear the exception_marker on the stack to avoid confusing stacktrace */ ++ li r10, 0 ++ stw r10, 8(r1) + /* + * Once we put values in SRR0 and SRR1, we are in a state + * where exceptions are not recoverable, since taking an +@@ -1002,6 +1008,9 @@ exc_exit_restart_end: + mtlr r11 + lwz r10,_CCR(r1) + mtcrf 0xff,r10 ++ /* Clear the exception_marker on the stack to avoid confusing stacktrace */ ++ li r10, 0 ++ stw r10, 8(r1) + REST_2GPRS(9, r1) + .globl exc_exit_restart + exc_exit_restart: +diff --git 
a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c +index a0c74bbf3454..b10531372d7f 100644 +--- a/arch/powerpc/kernel/process.c ++++ b/arch/powerpc/kernel/process.c +@@ -156,7 +156,7 @@ void __giveup_fpu(struct task_struct *tsk) + + save_fpu(tsk); + msr = tsk->thread.regs->msr; +- msr &= ~MSR_FP; ++ msr &= ~(MSR_FP|MSR_FE0|MSR_FE1); + #ifdef CONFIG_VSX + if (cpu_has_feature(CPU_FTR_VSX)) + msr &= ~MSR_VSX; +diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c +index 81750d9624ab..bfc5f59d9f1b 100644 +--- a/arch/powerpc/kernel/ptrace.c ++++ b/arch/powerpc/kernel/ptrace.c +@@ -547,6 +547,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset, + /* + * Copy out only the low-order word of vrsave. + */ ++ int start, end; + union { + elf_vrreg_t reg; + u32 word; +@@ -555,8 +556,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset, + + vrsave.word = target->thread.vrsave; + ++ start = 33 * sizeof(vector128); ++ end = start + sizeof(vrsave); + ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave, +- 33 * sizeof(vector128), -1); ++ start, end); + } + + return ret; +@@ -594,6 +597,7 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset, + /* + * We use only the first word of vrsave. 
+ */ ++ int start, end; + union { + elf_vrreg_t reg; + u32 word; +@@ -602,8 +606,10 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset, + + vrsave.word = target->thread.vrsave; + ++ start = 33 * sizeof(vector128); ++ end = start + sizeof(vrsave); + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave, +- 33 * sizeof(vector128), -1); ++ start, end); + if (!ret) + target->thread.vrsave = vrsave.word; + } +diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c +index a5f2b7593976..3c9457420aee 100644 +--- a/arch/powerpc/kernel/traps.c ++++ b/arch/powerpc/kernel/traps.c +@@ -694,15 +694,15 @@ void machine_check_exception(struct pt_regs *regs) + if (check_io_access(regs)) + goto bail; + +- /* Must die if the interrupt is not recoverable */ +- if (!(regs->msr & MSR_RI)) +- nmi_panic(regs, "Unrecoverable Machine check"); +- + if (!nested) + nmi_exit(); + + die("Machine check", regs, SIGBUS); + ++ /* Must die if the interrupt is not recoverable */ ++ if (!(regs->msr & MSR_RI)) ++ nmi_panic(regs, "Unrecoverable Machine check"); ++ + return; + + bail: +@@ -1292,8 +1292,8 @@ void slb_miss_bad_addr(struct pt_regs *regs) + + void StackOverflow(struct pt_regs *regs) + { +- printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n", +- current, regs->gpr[1]); ++ pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n", ++ current->comm, task_pid_nr(current), regs->gpr[1]); + debugger(regs); + show_regs(regs); + panic("kernel stack overflow"); +diff --git a/arch/powerpc/platforms/83xx/suspend-asm.S b/arch/powerpc/platforms/83xx/suspend-asm.S +index 3d1ecd211776..8137f77abad5 100644 +--- a/arch/powerpc/platforms/83xx/suspend-asm.S ++++ b/arch/powerpc/platforms/83xx/suspend-asm.S +@@ -26,13 +26,13 @@ + #define SS_MSR 0x74 + #define SS_SDR1 0x78 + #define SS_LR 0x7c +-#define SS_SPRG 0x80 /* 4 SPRGs */ +-#define SS_DBAT 0x90 /* 8 DBATs */ +-#define SS_IBAT 0xd0 /* 8 IBATs */ +-#define SS_TB 0x110 +-#define SS_CR 0x118 
+-#define SS_GPREG 0x11c /* r12-r31 */ +-#define STATE_SAVE_SIZE 0x16c ++#define SS_SPRG 0x80 /* 8 SPRGs */ ++#define SS_DBAT 0xa0 /* 8 DBATs */ ++#define SS_IBAT 0xe0 /* 8 IBATs */ ++#define SS_TB 0x120 ++#define SS_CR 0x128 ++#define SS_GPREG 0x12c /* r12-r31 */ ++#define STATE_SAVE_SIZE 0x17c + + .section .data + .align 5 +@@ -103,6 +103,16 @@ _GLOBAL(mpc83xx_enter_deep_sleep) + stw r7, SS_SPRG+12(r3) + stw r8, SS_SDR1(r3) + ++ mfspr r4, SPRN_SPRG4 ++ mfspr r5, SPRN_SPRG5 ++ mfspr r6, SPRN_SPRG6 ++ mfspr r7, SPRN_SPRG7 ++ ++ stw r4, SS_SPRG+16(r3) ++ stw r5, SS_SPRG+20(r3) ++ stw r6, SS_SPRG+24(r3) ++ stw r7, SS_SPRG+28(r3) ++ + mfspr r4, SPRN_DBAT0U + mfspr r5, SPRN_DBAT0L + mfspr r6, SPRN_DBAT1U +@@ -493,6 +503,16 @@ mpc83xx_deep_resume: + mtspr SPRN_IBAT7U, r6 + mtspr SPRN_IBAT7L, r7 + ++ lwz r4, SS_SPRG+16(r3) ++ lwz r5, SS_SPRG+20(r3) ++ lwz r6, SS_SPRG+24(r3) ++ lwz r7, SS_SPRG+28(r3) ++ ++ mtspr SPRN_SPRG4, r4 ++ mtspr SPRN_SPRG5, r5 ++ mtspr SPRN_SPRG6, r6 ++ mtspr SPRN_SPRG7, r7 ++ + lwz r4, SS_SPRG+0(r3) + lwz r5, SS_SPRG+4(r3) + lwz r6, SS_SPRG+8(r3) +diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c +index 3fd683e40bc9..2914529c0695 100644 +--- a/arch/powerpc/platforms/embedded6xx/wii.c ++++ b/arch/powerpc/platforms/embedded6xx/wii.c +@@ -104,6 +104,10 @@ unsigned long __init wii_mmu_mapin_mem2(unsigned long top) + /* MEM2 64MB@0x10000000 */ + delta = wii_hole_start + wii_hole_size; + size = top - delta; ++ ++ if (__map_without_bats) ++ return delta; ++ + for (bl = 128<<10; bl < max_size; bl <<= 1) { + if (bl * 2 > size) + break; +diff --git a/arch/powerpc/platforms/powernv/opal-msglog.c b/arch/powerpc/platforms/powernv/opal-msglog.c +index 7a9cde0cfbd1..2ee7af22138e 100644 +--- a/arch/powerpc/platforms/powernv/opal-msglog.c ++++ b/arch/powerpc/platforms/powernv/opal-msglog.c +@@ -98,7 +98,7 @@ static ssize_t opal_msglog_read(struct file *file, struct kobject *kobj, + } + + static struct bin_attribute 
opal_msglog_attr = { +- .attr = {.name = "msglog", .mode = 0444}, ++ .attr = {.name = "msglog", .mode = 0400}, + .read = opal_msglog_read + }; + +diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h +index d660e784e445..3fdc0bb974d9 100644 +--- a/arch/s390/include/asm/kvm_host.h ++++ b/arch/s390/include/asm/kvm_host.h +@@ -784,7 +784,7 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} + static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} + static inline void kvm_arch_free_memslot(struct kvm *kvm, + struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {} +-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {} ++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} + static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {} + static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm, + struct kvm_memory_slot *slot) {} +diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c +index 3cb71fc94995..5c2558cc6977 100644 +--- a/arch/s390/kernel/setup.c ++++ b/arch/s390/kernel/setup.c +@@ -300,7 +300,7 @@ early_param("vmalloc", parse_vmalloc); + + void *restart_stack __section(.data); + +-static void __init setup_lowcore(void) ++static void __init setup_lowcore_dat_off(void) + { + struct lowcore *lc; + +@@ -311,19 +311,16 @@ static void __init setup_lowcore(void) + lc = memblock_virt_alloc_low(sizeof(*lc), sizeof(*lc)); + lc->restart_psw.mask = PSW_KERNEL_BITS; + lc->restart_psw.addr = (unsigned long) restart_int_handler; +- lc->external_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_MCHECK; ++ lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK; + lc->external_new_psw.addr = (unsigned long) ext_int_handler; + lc->svc_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK; ++ PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK; + lc->svc_new_psw.addr = (unsigned 
long) system_call; +- lc->program_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_MCHECK; ++ lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK; + lc->program_new_psw.addr = (unsigned long) pgm_check_handler; + lc->mcck_new_psw.mask = PSW_KERNEL_BITS; + lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler; +- lc->io_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_MCHECK; ++ lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK; + lc->io_new_psw.addr = (unsigned long) io_int_handler; + lc->clock_comparator = clock_comparator_max; + lc->kernel_stack = ((unsigned long) &init_thread_union) +@@ -391,6 +388,16 @@ static void __init setup_lowcore(void) + lowcore_ptr[0] = lc; + } + ++static void __init setup_lowcore_dat_on(void) ++{ ++ __ctl_clear_bit(0, 28); ++ S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT; ++ S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT; ++ S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT; ++ S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT; ++ __ctl_set_bit(0, 28); ++} ++ + static struct resource code_resource = { + .name = "Kernel code", + .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM, +@@ -948,7 +955,7 @@ void __init setup_arch(char **cmdline_p) + #endif + + setup_resources(); +- setup_lowcore(); ++ setup_lowcore_dat_off(); + smp_fill_possible_mask(); + cpu_detect_mhz_feature(); + cpu_init(); +@@ -961,6 +968,12 @@ void __init setup_arch(char **cmdline_p) + */ + paging_init(); + ++ /* ++ * After paging_init created the kernel page table, the new PSWs ++ * in lowcore can now run with DAT enabled. 
++ */ ++ setup_lowcore_dat_on(); ++ + /* Setup default console */ + conmode_default(); + set_preferred_console(); +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h +index 72fac8646e9b..d2ae93faafe8 100644 +--- a/arch/x86/include/asm/kvm_host.h ++++ b/arch/x86/include/asm/kvm_host.h +@@ -1121,7 +1121,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, + struct kvm_memory_slot *slot, + gfn_t gfn_offset, unsigned long mask); + void kvm_mmu_zap_all(struct kvm *kvm); +-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots); ++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen); + unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm); + void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages); + +diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c +index 3668f28cf5fc..f4e4db0cbd59 100644 +--- a/arch/x86/kernel/kprobes/opt.c ++++ b/arch/x86/kernel/kprobes/opt.c +@@ -141,6 +141,11 @@ asm ( + + void optprobe_template_func(void); + STACK_FRAME_NON_STANDARD(optprobe_template_func); ++NOKPROBE_SYMBOL(optprobe_template_func); ++NOKPROBE_SYMBOL(optprobe_template_entry); ++NOKPROBE_SYMBOL(optprobe_template_val); ++NOKPROBE_SYMBOL(optprobe_template_call); ++NOKPROBE_SYMBOL(optprobe_template_end); + + #define TMPL_MOVE_IDX \ + ((long)&optprobe_template_val - (long)&optprobe_template_entry) +diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c +index 364d9895dd56..f97b533bc6e6 100644 +--- a/arch/x86/kvm/mmu.c ++++ b/arch/x86/kvm/mmu.c +@@ -5418,13 +5418,30 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm) + return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages)); + } + +-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots) ++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen) + { ++ gen &= MMIO_GEN_MASK; ++ ++ /* ++ * Shift to eliminate the "update in-progress" flag, which isn't ++ * included in the spte's 
generation number. ++ */ ++ gen >>= 1; ++ ++ /* ++ * Generation numbers are incremented in multiples of the number of ++ * address spaces in order to provide unique generations across all ++ * address spaces. Strip what is effectively the address space ++ * modifier prior to checking for a wrap of the MMIO generation so ++ * that a wrap in any address space is detected. ++ */ ++ gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1); ++ + /* +- * The very rare case: if the generation-number is round, ++ * The very rare case: if the MMIO generation number has wrapped, + * zap all shadow pages. + */ +- if (unlikely((slots->generation & MMIO_GEN_MASK) == 0)) { ++ if (unlikely(gen == 0)) { + kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n"); + kvm_mmu_invalidate_zap_all_pages(kvm); + } +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index 8e5a977bf50e..229d5e39f5c0 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -7446,25 +7446,50 @@ static int get_vmx_mem_address(struct kvm_vcpu *vcpu, + /* Addr = segment_base + offset */ + /* offset = base + [index * scale] + displacement */ + off = exit_qualification; /* holds the displacement */ ++ if (addr_size == 1) ++ off = (gva_t)sign_extend64(off, 31); ++ else if (addr_size == 0) ++ off = (gva_t)sign_extend64(off, 15); + if (base_is_valid) + off += kvm_register_read(vcpu, base_reg); + if (index_is_valid) + off += kvm_register_read(vcpu, index_reg)< s.limit); ++ if (!(s.base == 0 && s.limit == 0xffffffff && ++ ((s.type & 8) || !(s.type & 4)))) ++ exn = exn || (off + sizeof(u64) > s.limit); + } + if (exn) { + kvm_queue_exception_e(vcpu, +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index b0e7621ddf01..ce5b3dc348ce 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -8524,13 +8524,13 @@ out_free: + return -ENOMEM; + } + +-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) ++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) + { + /* + * 
memslots->generation has been incremented. + * mmio generation may have reached its maximum value. + */ +- kvm_mmu_invalidate_mmio_sptes(kvm, slots); ++ kvm_mmu_invalidate_mmio_sptes(kvm, gen); + } + + int kvm_arch_prepare_memory_region(struct kvm *kvm, +diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h +index d4b59cf0dc51..c88305d997b0 100644 +--- a/arch/x86/kvm/x86.h ++++ b/arch/x86/kvm/x86.h +@@ -136,6 +136,11 @@ static inline bool emul_is_noncanonical_address(u64 la, + static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu, + gva_t gva, gfn_t gfn, unsigned access) + { ++ u64 gen = kvm_memslots(vcpu->kvm)->generation; ++ ++ if (unlikely(gen & 1)) ++ return; ++ + /* + * If this is a shadow nested page table, the "GVA" is + * actually a nGPA. +@@ -143,7 +148,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu, + vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK; + vcpu->arch.access = access; + vcpu->arch.mmio_gfn = gfn; +- vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation; ++ vcpu->arch.mmio_gen = gen; + } + + static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu) +diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c +index 7631e6130d44..44b1f1334ef8 100644 +--- a/arch/x86/xen/mmu_pv.c ++++ b/arch/x86/xen/mmu_pv.c +@@ -2080,10 +2080,10 @@ void __init xen_relocate_p2m(void) + pt = early_memremap(pt_phys, PAGE_SIZE); + clear_page(pt); + for (idx_pte = 0; +- idx_pte < min(n_pte, PTRS_PER_PTE); +- idx_pte++) { +- set_pte(pt + idx_pte, +- pfn_pte(p2m_pfn, PAGE_KERNEL)); ++ idx_pte < min(n_pte, PTRS_PER_PTE); ++ idx_pte++) { ++ pt[idx_pte] = pfn_pte(p2m_pfn, ++ PAGE_KERNEL); + p2m_pfn++; + } + n_pte -= PTRS_PER_PTE; +@@ -2091,8 +2091,7 @@ void __init xen_relocate_p2m(void) + make_lowmem_page_readonly(__va(pt_phys)); + pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, + PFN_DOWN(pt_phys)); +- set_pmd(pmd + idx_pt, +- __pmd(_PAGE_TABLE | pt_phys)); ++ pmd[idx_pt] = __pmd(_PAGE_TABLE | pt_phys); + pt_phys += PAGE_SIZE; + } + 
n_pt -= PTRS_PER_PMD; +@@ -2100,7 +2099,7 @@ void __init xen_relocate_p2m(void) + make_lowmem_page_readonly(__va(pmd_phys)); + pin_pagetable_pfn(MMUEXT_PIN_L2_TABLE, + PFN_DOWN(pmd_phys)); +- set_pud(pud + idx_pmd, __pud(_PAGE_TABLE | pmd_phys)); ++ pud[idx_pmd] = __pud(_PAGE_TABLE | pmd_phys); + pmd_phys += PAGE_SIZE; + } + n_pmd -= PTRS_PER_PUD; +diff --git a/crypto/ahash.c b/crypto/ahash.c +index 3980e9e45289..5a9fa1a867f9 100644 +--- a/crypto/ahash.c ++++ b/crypto/ahash.c +@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk) + int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err) + { + unsigned int alignmask = walk->alignmask; +- unsigned int nbytes = walk->entrylen; + + walk->data -= walk->offset; + +- if (nbytes && walk->offset & alignmask && !err) { +- walk->offset = ALIGN(walk->offset, alignmask + 1); +- nbytes = min(nbytes, +- ((unsigned int)(PAGE_SIZE)) - walk->offset); +- walk->entrylen -= nbytes; ++ if (walk->entrylen && (walk->offset & alignmask) && !err) { ++ unsigned int nbytes; + ++ walk->offset = ALIGN(walk->offset, alignmask + 1); ++ nbytes = min(walk->entrylen, ++ (unsigned int)(PAGE_SIZE - walk->offset)); + if (nbytes) { ++ walk->entrylen -= nbytes; + walk->data += walk->offset; + return nbytes; + } +@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err) + if (err) + return err; + +- if (nbytes) { ++ if (walk->entrylen) { + walk->offset = 0; + walk->pg++; + return hash_walk_next(walk); +@@ -190,6 +190,21 @@ static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key, + return ret; + } + ++static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key, ++ unsigned int keylen) ++{ ++ return -ENOSYS; ++} ++ ++static void ahash_set_needkey(struct crypto_ahash *tfm) ++{ ++ const struct hash_alg_common *alg = crypto_hash_alg_common(tfm); ++ ++ if (tfm->setkey != ahash_nosetkey && ++ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) ++ crypto_ahash_set_flags(tfm, 
CRYPTO_TFM_NEED_KEY); ++} ++ + int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, + unsigned int keylen) + { +@@ -201,20 +216,16 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, + else + err = tfm->setkey(tfm, key, keylen); + +- if (err) ++ if (unlikely(err)) { ++ ahash_set_needkey(tfm); + return err; ++ } + + crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; + } + EXPORT_SYMBOL_GPL(crypto_ahash_setkey); + +-static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key, +- unsigned int keylen) +-{ +- return -ENOSYS; +-} +- + static inline unsigned int ahash_align_buffer_size(unsigned len, + unsigned long mask) + { +@@ -483,8 +494,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) + + if (alg->setkey) { + hash->setkey = alg->setkey; +- if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) +- crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY); ++ ahash_set_needkey(hash); + } + if (alg->export) + hash->export = alg->export; +diff --git a/crypto/pcbc.c b/crypto/pcbc.c +index d9e45a958720..67009a532201 100644 +--- a/crypto/pcbc.c ++++ b/crypto/pcbc.c +@@ -50,7 +50,7 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req, + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; + u8 *dst = walk->dst.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + + do { + crypto_xor(iv, src, bsize); +@@ -71,7 +71,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req, + int bsize = crypto_cipher_blocksize(tfm); + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + u8 tmpbuf[bsize]; + + do { +@@ -83,8 +83,6 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req, + src += bsize; + } while ((nbytes -= bsize) >= bsize); + +- memcpy(walk->iv, iv, bsize); +- + return nbytes; + } + +@@ -120,7 +118,7 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req, + unsigned 
int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; + u8 *dst = walk->dst.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + + do { + crypto_cipher_decrypt_one(tfm, dst, src); +@@ -131,8 +129,6 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req, + dst += bsize; + } while ((nbytes -= bsize) >= bsize); + +- memcpy(walk->iv, iv, bsize); +- + return nbytes; + } + +@@ -143,7 +139,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req, + int bsize = crypto_cipher_blocksize(tfm); + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + u8 tmpbuf[bsize] __aligned(__alignof__(u32)); + + do { +@@ -155,8 +151,6 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req, + src += bsize; + } while ((nbytes -= bsize) >= bsize); + +- memcpy(walk->iv, iv, bsize); +- + return nbytes; + } + +diff --git a/crypto/shash.c b/crypto/shash.c +index 5d732c6bb4b2..a04145e5306a 100644 +--- a/crypto/shash.c ++++ b/crypto/shash.c +@@ -53,6 +53,13 @@ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key, + return err; + } + ++static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg) ++{ ++ if (crypto_shash_alg_has_setkey(alg) && ++ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) ++ crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY); ++} ++ + int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key, + unsigned int keylen) + { +@@ -65,8 +72,10 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key, + else + err = shash->setkey(tfm, key, keylen); + +- if (err) ++ if (unlikely(err)) { ++ shash_set_needkey(tfm, shash); + return err; ++ } + + crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; +@@ -368,7 +377,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm) + crt->final = shash_async_final; + crt->finup = shash_async_finup; + crt->digest = shash_async_digest; +- crt->setkey = 
shash_async_setkey; ++ if (crypto_shash_alg_has_setkey(alg)) ++ crt->setkey = shash_async_setkey; + + crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) & + CRYPTO_TFM_NEED_KEY); +@@ -390,9 +400,7 @@ static int crypto_shash_init_tfm(struct crypto_tfm *tfm) + + hash->descsize = alg->descsize; + +- if (crypto_shash_alg_has_setkey(alg) && +- !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) +- crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY); ++ shash_set_needkey(hash, alg); + + return 0; + } +diff --git a/crypto/testmgr.c b/crypto/testmgr.c +index 7125ba3880af..d91278c01ea8 100644 +--- a/crypto/testmgr.c ++++ b/crypto/testmgr.c +@@ -1839,14 +1839,21 @@ static int alg_test_crc32c(const struct alg_test_desc *desc, + + err = alg_test_hash(desc, driver, type, mask); + if (err) +- goto out; ++ return err; + + tfm = crypto_alloc_shash(driver, type, mask); + if (IS_ERR(tfm)) { ++ if (PTR_ERR(tfm) == -ENOENT) { ++ /* ++ * This crc32c implementation is only available through ++ * ahash API, not the shash API, so the remaining part ++ * of the test is not applicable to it. 
++ */ ++ return 0; ++ } + printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: " + "%ld\n", driver, PTR_ERR(tfm)); +- err = PTR_ERR(tfm); +- goto out; ++ return PTR_ERR(tfm); + } + + do { +@@ -1873,7 +1880,6 @@ static int alg_test_crc32c(const struct alg_test_desc *desc, + + crypto_free_shash(tfm); + +-out: + return err; + } + +diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c +index a041689e5701..012aa86d4b16 100644 +--- a/drivers/acpi/device_sysfs.c ++++ b/drivers/acpi/device_sysfs.c +@@ -202,11 +202,15 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias, + { + struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER }; + const union acpi_object *of_compatible, *obj; ++ acpi_status status; + int len, count; + int i, nval; + char *c; + +- acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf); ++ status = acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf); ++ if (ACPI_FAILURE(status)) ++ return -ENODEV; ++ + /* DT strings are all in lower case */ + for (c = buf.pointer; *c != '\0'; c++) + *c = tolower(*c); +diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c +index 4a6c5e7b6835..05fb821c2558 100644 +--- a/drivers/acpi/nfit/core.c ++++ b/drivers/acpi/nfit/core.c +@@ -329,6 +329,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, + return -EINVAL; + } + ++ if (out_obj->type != ACPI_TYPE_BUFFER) { ++ dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n", ++ dimm_name, cmd_name, out_obj->type); ++ rc = -EINVAL; ++ goto out; ++ } ++ + if (call_pkg) { + call_pkg->nd_fw_size = out_obj->buffer.length; + memcpy(call_pkg->nd_payload + call_pkg->nd_size_in, +@@ -347,13 +354,6 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, + return 0; + } + +- if (out_obj->package.type != ACPI_TYPE_BUFFER) { +- dev_dbg(dev, "%s:%s unexpected output object type cmd: %s type: %d\n", +- __func__, dimm_name, cmd_name, out_obj->type); +- rc = 
-EINVAL; +- goto out; +- } +- + dev_dbg(dev, "%s:%s cmd: %s output length: %d\n", __func__, dimm_name, + cmd_name, out_obj->buffer.length); + print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4, +diff --git a/drivers/auxdisplay/ht16k33.c b/drivers/auxdisplay/ht16k33.c +index fbfa5b4cc567..a93ded300740 100644 +--- a/drivers/auxdisplay/ht16k33.c ++++ b/drivers/auxdisplay/ht16k33.c +@@ -517,7 +517,7 @@ static int ht16k33_remove(struct i2c_client *client) + struct ht16k33_priv *priv = i2c_get_clientdata(client); + struct ht16k33_fbdev *fbdev = &priv->fbdev; + +- cancel_delayed_work(&fbdev->work); ++ cancel_delayed_work_sync(&fbdev->work); + unregister_framebuffer(fbdev->info); + framebuffer_release(fbdev->info); + free_page((unsigned long) fbdev->buffer); +diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c +index cdd6f256da59..df53e2b3296b 100644 +--- a/drivers/base/power/wakeup.c ++++ b/drivers/base/power/wakeup.c +@@ -113,7 +113,6 @@ void wakeup_source_drop(struct wakeup_source *ws) + if (!ws) + return; + +- del_timer_sync(&ws->timer); + __pm_relax(ws); + } + EXPORT_SYMBOL_GPL(wakeup_source_drop); +@@ -201,6 +200,13 @@ void wakeup_source_remove(struct wakeup_source *ws) + list_del_rcu(&ws->entry); + spin_unlock_irqrestore(&events_lock, flags); + synchronize_srcu(&wakeup_srcu); ++ ++ del_timer_sync(&ws->timer); ++ /* ++ * Clear timer.function to make wakeup_source_not_registered() treat ++ * this wakeup source as not registered. 
++ */ ++ ws->timer.function = NULL; + } + EXPORT_SYMBOL_GPL(wakeup_source_remove); + +diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c +index a7f212ea17bf..3ea9c3e9acb3 100644 +--- a/drivers/block/floppy.c ++++ b/drivers/block/floppy.c +@@ -4079,7 +4079,7 @@ static unsigned int floppy_check_events(struct gendisk *disk, + + if (time_after(jiffies, UDRS->last_checked + UDP->checkfreq)) { + if (lock_fdc(drive)) +- return -EINTR; ++ return 0; + poll_drive(false, 0); + process_fd_request(); + } +diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c +index f95b9c75175b..77f3fa10db12 100644 +--- a/drivers/char/tpm/st33zp24/st33zp24.c ++++ b/drivers/char/tpm/st33zp24/st33zp24.c +@@ -438,7 +438,7 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf, + goto out_err; + } + +- return len; ++ return 0; + out_err: + st33zp24_cancel(chip); + release_locality(chip); +diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c +index 038b91bcbd31..e3beeb2a93dc 100644 +--- a/drivers/char/tpm/tpm-interface.c ++++ b/drivers/char/tpm/tpm-interface.c +@@ -497,10 +497,19 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip, + if (rc < 0) { + if (rc != -EPIPE) + dev_err(&chip->dev, +- "%s: tpm_send: error %d\n", __func__, rc); ++ "%s: send(): error %d\n", __func__, rc); + goto out; + } + ++ /* A sanity check. send() should just return zero on success e.g. ++ * not the command length. 
++ */ ++ if (rc > 0) { ++ dev_warn(&chip->dev, ++ "%s: send(): invalid value %d\n", __func__, rc); ++ rc = 0; ++ } ++ + if (chip->flags & TPM_CHIP_FLAG_IRQ) + goto out_recv; + +diff --git a/drivers/char/tpm/tpm_atmel.c b/drivers/char/tpm/tpm_atmel.c +index 66a14526aaf4..a290b30a0c35 100644 +--- a/drivers/char/tpm/tpm_atmel.c ++++ b/drivers/char/tpm/tpm_atmel.c +@@ -105,7 +105,7 @@ static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count) + iowrite8(buf[i], priv->iobase); + } + +- return count; ++ return 0; + } + + static void tpm_atml_cancel(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c +index b4ad169836e9..f978738554d5 100644 +--- a/drivers/char/tpm/tpm_crb.c ++++ b/drivers/char/tpm/tpm_crb.c +@@ -288,19 +288,29 @@ static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count) + struct crb_priv *priv = dev_get_drvdata(&chip->dev); + unsigned int expected; + +- /* sanity check */ +- if (count < 6) ++ /* A sanity check that the upper layer wants to get at least the header ++ * as that is the minimum size for any TPM response. ++ */ ++ if (count < TPM_HEADER_SIZE) + return -EIO; + ++ /* If this bit is set, according to the spec, the TPM is in ++ * unrecoverable condition. ++ */ + if (ioread32(&priv->regs_t->ctrl_sts) & CRB_CTRL_STS_ERROR) + return -EIO; + +- memcpy_fromio(buf, priv->rsp, 6); +- expected = be32_to_cpup((__be32 *) &buf[2]); +- if (expected > count || expected < 6) ++ /* Read the first 8 bytes in order to get the length of the response. ++ * We read exactly a quad word in order to make sure that the remaining ++ * reads will be aligned. 
++ */ ++ memcpy_fromio(buf, priv->rsp, 8); ++ ++ expected = be32_to_cpup((__be32 *)&buf[2]); ++ if (expected > count || expected < TPM_HEADER_SIZE) + return -EIO; + +- memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6); ++ memcpy_fromio(&buf[8], &priv->rsp[8], expected - 8); + + return expected; + } +diff --git a/drivers/char/tpm/tpm_i2c_atmel.c b/drivers/char/tpm/tpm_i2c_atmel.c +index 95ce2e9ccdc6..32a8e27c5382 100644 +--- a/drivers/char/tpm/tpm_i2c_atmel.c ++++ b/drivers/char/tpm/tpm_i2c_atmel.c +@@ -65,7 +65,11 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len) + dev_dbg(&chip->dev, + "%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__, + (int)min_t(size_t, 64, len), buf, len, status); +- return status; ++ ++ if (status < 0) ++ return status; ++ ++ return 0; + } + + static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count) +diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c +index c619e76ce827..94bdb8ec372e 100644 +--- a/drivers/char/tpm/tpm_i2c_infineon.c ++++ b/drivers/char/tpm/tpm_i2c_infineon.c +@@ -587,7 +587,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len) + /* go and do it */ + iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1); + +- return len; ++ return 0; + out_err: + tpm_tis_i2c_ready(chip); + /* The TPM needs some time to clean up here, +diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c +index f74f451baf6a..b8defdfdf2dc 100644 +--- a/drivers/char/tpm/tpm_i2c_nuvoton.c ++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c +@@ -469,7 +469,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len) + } + + dev_dbg(dev, "%s() -> %zd\n", __func__, len); +- return len; ++ return 0; + } + + static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status) +diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c +index 25f6e2665385..77e47dc5aacc 100644 +--- a/drivers/char/tpm/tpm_ibmvtpm.c ++++ 
b/drivers/char/tpm/tpm_ibmvtpm.c +@@ -141,14 +141,14 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count) + } + + /** +- * tpm_ibmvtpm_send - Send tpm request +- * ++ * tpm_ibmvtpm_send() - Send a TPM command + * @chip: tpm chip struct + * @buf: buffer contains data to send + * @count: size of buffer + * + * Return: +- * Number of bytes sent or < 0 on error. ++ * 0 on success, ++ * -errno on error + */ + static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + { +@@ -194,7 +194,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + rc = 0; + ibmvtpm->tpm_processing_cmd = false; + } else +- rc = count; ++ rc = 0; + + spin_unlock(&ibmvtpm->rtce_lock); + return rc; +diff --git a/drivers/char/tpm/tpm_infineon.c b/drivers/char/tpm/tpm_infineon.c +index d8f10047fbba..97f6d4fe0aee 100644 +--- a/drivers/char/tpm/tpm_infineon.c ++++ b/drivers/char/tpm/tpm_infineon.c +@@ -354,7 +354,7 @@ static int tpm_inf_send(struct tpm_chip *chip, u8 * buf, size_t count) + for (i = 0; i < count; i++) { + wait_and_send(chip, buf[i]); + } +- return count; ++ return 0; + } + + static void tpm_inf_cancel(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/tpm_nsc.c b/drivers/char/tpm/tpm_nsc.c +index 5d6cce74cd3f..9bee3c5eb4bf 100644 +--- a/drivers/char/tpm/tpm_nsc.c ++++ b/drivers/char/tpm/tpm_nsc.c +@@ -226,7 +226,7 @@ static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count) + } + outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND); + +- return count; ++ return 0; + } + + static void tpm_nsc_cancel(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c +index 58123df6b5f6..a7d9c0c53fcd 100644 +--- a/drivers/char/tpm/tpm_tis_core.c ++++ b/drivers/char/tpm/tpm_tis_core.c +@@ -379,7 +379,7 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len) + goto out_err; + } + } +- return len; ++ return 0; + out_err: + tpm_tis_ready(chip); + return rc; 
+diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c +index 1d877cc9af97..94a539384619 100644 +--- a/drivers/char/tpm/tpm_vtpm_proxy.c ++++ b/drivers/char/tpm/tpm_vtpm_proxy.c +@@ -335,7 +335,6 @@ static int vtpm_proxy_is_driver_command(struct tpm_chip *chip, + static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count) + { + struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev); +- int rc = 0; + + if (count > sizeof(proxy_dev->buffer)) { + dev_err(&chip->dev, +@@ -366,7 +365,7 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count) + + wake_up_interruptible(&proxy_dev->wq); + +- return rc; ++ return 0; + } + + static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c +index 2cffaf567d99..538c9297dee1 100644 +--- a/drivers/char/tpm/xen-tpmfront.c ++++ b/drivers/char/tpm/xen-tpmfront.c +@@ -112,7 +112,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + return -ETIME; + } + +- return count; ++ return 0; + } + + static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count) +diff --git a/drivers/clk/clk-twl6040.c b/drivers/clk/clk-twl6040.c +index 7b222a5db931..82d615fe2947 100644 +--- a/drivers/clk/clk-twl6040.c ++++ b/drivers/clk/clk-twl6040.c +@@ -41,6 +41,43 @@ static int twl6040_pdmclk_is_prepared(struct clk_hw *hw) + return pdmclk->enabled; + } + ++static int twl6040_pdmclk_reset_one_clock(struct twl6040_pdmclk *pdmclk, ++ unsigned int reg) ++{ ++ const u8 reset_mask = TWL6040_HPLLRST; /* Same for HPPLL and LPPLL */ ++ int ret; ++ ++ ret = twl6040_set_bits(pdmclk->twl6040, reg, reset_mask); ++ if (ret < 0) ++ return ret; ++ ++ ret = twl6040_clear_bits(pdmclk->twl6040, reg, reset_mask); ++ if (ret < 0) ++ return ret; ++ ++ return 0; ++} ++ ++/* ++ * TWL6040A2 Phoenix Audio IC erratum #6: "PDM Clock Generation Issue At ++ * Cold Temperature". 
This affects cold boot and deeper idle states it ++ * seems. The workaround consists of resetting HPPLL and LPPLL. ++ */ ++static int twl6040_pdmclk_quirk_reset_clocks(struct twl6040_pdmclk *pdmclk) ++{ ++ int ret; ++ ++ ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_HPPLLCTL); ++ if (ret) ++ return ret; ++ ++ ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_LPPLLCTL); ++ if (ret) ++ return ret; ++ ++ return 0; ++} ++ + static int twl6040_pdmclk_prepare(struct clk_hw *hw) + { + struct twl6040_pdmclk *pdmclk = container_of(hw, struct twl6040_pdmclk, +@@ -48,8 +85,20 @@ static int twl6040_pdmclk_prepare(struct clk_hw *hw) + int ret; + + ret = twl6040_power(pdmclk->twl6040, 1); +- if (!ret) +- pdmclk->enabled = 1; ++ if (ret) ++ return ret; ++ ++ ret = twl6040_pdmclk_quirk_reset_clocks(pdmclk); ++ if (ret) ++ goto out_err; ++ ++ pdmclk->enabled = 1; ++ ++ return 0; ++ ++out_err: ++ dev_err(pdmclk->dev, "%s: error %i\n", __func__, ret); ++ twl6040_power(pdmclk->twl6040, 0); + + return ret; + } +diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c +index ab393637f7b0..a6b4b90ff227 100644 +--- a/drivers/clk/ingenic/cgu.c ++++ b/drivers/clk/ingenic/cgu.c +@@ -364,16 +364,16 @@ ingenic_clk_round_rate(struct clk_hw *hw, unsigned long req_rate, + struct ingenic_clk *ingenic_clk = to_ingenic_clk(hw); + struct ingenic_cgu *cgu = ingenic_clk->cgu; + const struct ingenic_cgu_clk_info *clk_info; +- long rate = *parent_rate; ++ unsigned int div = 1; + + clk_info = &cgu->clock_info[ingenic_clk->idx]; + + if (clk_info->type & CGU_CLK_DIV) +- rate /= ingenic_clk_calc_div(clk_info, *parent_rate, req_rate); ++ div = ingenic_clk_calc_div(clk_info, *parent_rate, req_rate); + else if (clk_info->type & CGU_CLK_FIXDIV) +- rate /= clk_info->fixdiv.div; ++ div = clk_info->fixdiv.div; + +- return rate; ++ return DIV_ROUND_UP(*parent_rate, div); + } + + static int +@@ -393,7 +393,7 @@ ingenic_clk_set_rate(struct clk_hw *hw, unsigned long req_rate, + + if 
(clk_info->type & CGU_CLK_DIV) { + div = ingenic_clk_calc_div(clk_info, parent_rate, req_rate); +- rate = parent_rate / div; ++ rate = DIV_ROUND_UP(parent_rate, div); + + if (rate != req_rate) + return -EINVAL; +diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h +index e78b586536ea..74ed385309a5 100644 +--- a/drivers/clk/ingenic/cgu.h ++++ b/drivers/clk/ingenic/cgu.h +@@ -78,7 +78,7 @@ struct ingenic_cgu_mux_info { + * @reg: offset of the divider control register within the CGU + * @shift: number of bits to left shift the divide value by (ie. the index of + * the lowest bit of the divide value within its control register) +- * @div: number of bits to divide the divider value by (i.e. if the ++ * @div: number to divide the divider value by (i.e. if the + * effective divider value is the value written to the register + * multiplied by some constant) + * @bits: the size of the divide value in bits +diff --git a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c +index 40d5f74cb2ac..d93b4815e65c 100644 +--- a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c ++++ b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c +@@ -252,9 +252,9 @@ static SUNXI_CCU_GATE(ahb1_mmc1_clk, "ahb1-mmc1", "ahb1", + static SUNXI_CCU_GATE(ahb1_mmc2_clk, "ahb1-mmc2", "ahb1", + 0x060, BIT(10), 0); + static SUNXI_CCU_GATE(ahb1_mmc3_clk, "ahb1-mmc3", "ahb1", +- 0x060, BIT(12), 0); ++ 0x060, BIT(11), 0); + static SUNXI_CCU_GATE(ahb1_nand1_clk, "ahb1-nand1", "ahb1", +- 0x060, BIT(13), 0); ++ 0x060, BIT(12), 0); + static SUNXI_CCU_GATE(ahb1_nand0_clk, "ahb1-nand0", "ahb1", + 0x060, BIT(13), 0); + static SUNXI_CCU_GATE(ahb1_sdram_clk, "ahb1-sdram", "ahb1", +diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c +index 621b1cd996db..ac12f261f8ca 100644 +--- a/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c ++++ b/drivers/clk/sunxi-ng/ccu-sun8i-v3s.c +@@ -542,7 +542,7 @@ static struct ccu_reset_map sun8i_v3s_ccu_resets[] = { + [RST_BUS_OHCI0] = { 0x2c0, BIT(29) }, 
+ + [RST_BUS_VE] = { 0x2c4, BIT(0) }, +- [RST_BUS_TCON0] = { 0x2c4, BIT(3) }, ++ [RST_BUS_TCON0] = { 0x2c4, BIT(4) }, + [RST_BUS_CSI] = { 0x2c4, BIT(8) }, + [RST_BUS_DE] = { 0x2c4, BIT(12) }, + [RST_BUS_DBG] = { 0x2c4, BIT(31) }, +diff --git a/drivers/clk/uniphier/clk-uniphier-cpugear.c b/drivers/clk/uniphier/clk-uniphier-cpugear.c +index ec11f55594ad..5d2d42b7e182 100644 +--- a/drivers/clk/uniphier/clk-uniphier-cpugear.c ++++ b/drivers/clk/uniphier/clk-uniphier-cpugear.c +@@ -47,7 +47,7 @@ static int uniphier_clk_cpugear_set_parent(struct clk_hw *hw, u8 index) + return ret; + + ret = regmap_write_bits(gear->regmap, +- gear->regbase + UNIPHIER_CLK_CPUGEAR_SET, ++ gear->regbase + UNIPHIER_CLK_CPUGEAR_UPD, + UNIPHIER_CLK_CPUGEAR_UPD_BIT, + UNIPHIER_CLK_CPUGEAR_UPD_BIT); + if (ret) +diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c +index 7a244b681876..d55c30f6981d 100644 +--- a/drivers/clocksource/exynos_mct.c ++++ b/drivers/clocksource/exynos_mct.c +@@ -388,6 +388,13 @@ static void exynos4_mct_tick_start(unsigned long cycles, + exynos4_mct_write(tmp, mevt->base + MCT_L_TCON_OFFSET); + } + ++static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt) ++{ ++ /* Clear the MCT tick interrupt */ ++ if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1) ++ exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET); ++} ++ + static int exynos4_tick_set_next_event(unsigned long cycles, + struct clock_event_device *evt) + { +@@ -404,6 +411,7 @@ static int set_state_shutdown(struct clock_event_device *evt) + + mevt = container_of(evt, struct mct_clock_event_device, evt); + exynos4_mct_tick_stop(mevt); ++ exynos4_mct_tick_clear(mevt); + return 0; + } + +@@ -420,8 +428,11 @@ static int set_state_periodic(struct clock_event_device *evt) + return 0; + } + +-static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt) ++static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id) + { ++ struct 
mct_clock_event_device *mevt = dev_id; ++ struct clock_event_device *evt = &mevt->evt; ++ + /* + * This is for supporting oneshot mode. + * Mct would generate interrupt periodically +@@ -430,16 +441,6 @@ static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt) + if (!clockevent_state_periodic(&mevt->evt)) + exynos4_mct_tick_stop(mevt); + +- /* Clear the MCT tick interrupt */ +- if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1) +- exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET); +-} +- +-static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id) +-{ +- struct mct_clock_event_device *mevt = dev_id; +- struct clock_event_device *evt = &mevt->evt; +- + exynos4_mct_tick_clear(mevt); + + evt->event_handler(evt); +diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c +index ce345bf34d5d..a24e9f037865 100644 +--- a/drivers/cpufreq/pxa2xx-cpufreq.c ++++ b/drivers/cpufreq/pxa2xx-cpufreq.c +@@ -192,7 +192,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq) + return ret; + } + +-static void __init pxa_cpufreq_init_voltages(void) ++static void pxa_cpufreq_init_voltages(void) + { + vcc_core = regulator_get(NULL, "vcc_core"); + if (IS_ERR(vcc_core)) { +@@ -208,7 +208,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq) + return 0; + } + +-static void __init pxa_cpufreq_init_voltages(void) { } ++static void pxa_cpufreq_init_voltages(void) { } + #endif + + static void find_freq_tables(struct cpufreq_frequency_table **freq_table, +diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c +index 43530254201a..4bb154f6c54c 100644 +--- a/drivers/cpufreq/tegra124-cpufreq.c ++++ b/drivers/cpufreq/tegra124-cpufreq.c +@@ -134,6 +134,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev) + + platform_set_drvdata(pdev, priv); + ++ of_node_put(np); ++ + return 0; + + out_switch_to_pllx: +diff --git 
a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c +index 43fe195f6dca..63a21a6fc6cf 100644 +--- a/drivers/crypto/caam/caamalg.c ++++ b/drivers/crypto/caam/caamalg.c +@@ -1097,6 +1097,7 @@ static void init_ablkcipher_job(u32 *sh_desc, dma_addr_t ptr, + } else { + if (edesc->dst_nents == 1) { + dst_dma = sg_dma_address(req->dst); ++ out_options = 0; + } else { + dst_dma = edesc->sec4_sg_dma + (edesc->src_nents + 1) * + sizeof(struct sec4_sg_entry); +diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c +index c9d622abd90c..0ce4a65b95f5 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto.c ++++ b/drivers/crypto/rockchip/rk3288_crypto.c +@@ -119,7 +119,7 @@ static int rk_load_data(struct rk_crypto_info *dev, + count = (dev->left_bytes > PAGE_SIZE) ? + PAGE_SIZE : dev->left_bytes; + +- if (!sg_pcopy_to_buffer(dev->first, dev->nents, ++ if (!sg_pcopy_to_buffer(dev->first, dev->src_nents, + dev->addr_vir, count, + dev->total - dev->left_bytes)) { + dev_err(dev->dev, "[%s:%d] pcopy err\n", +diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h +index d5fb4013fb42..54ee5b3ed9db 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto.h ++++ b/drivers/crypto/rockchip/rk3288_crypto.h +@@ -207,7 +207,8 @@ struct rk_crypto_info { + void *addr_vir; + int aligned; + int align_size; +- size_t nents; ++ size_t src_nents; ++ size_t dst_nents; + unsigned int total; + unsigned int count; + dma_addr_t addr_in; +@@ -244,6 +245,7 @@ struct rk_cipher_ctx { + struct rk_crypto_info *dev; + unsigned int keylen; + u32 mode; ++ u8 iv[AES_BLOCK_SIZE]; + }; + + enum alg_type { +diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c +index 639c15c5364b..23305f22072f 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c ++++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c +@@ -242,6 +242,17 @@ static void crypto_dma_start(struct 
rk_crypto_info *dev) + static int rk_set_data_start(struct rk_crypto_info *dev) + { + int err; ++ struct ablkcipher_request *req = ++ ablkcipher_request_cast(dev->async_req); ++ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); ++ struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); ++ u32 ivsize = crypto_ablkcipher_ivsize(tfm); ++ u8 *src_last_blk = page_address(sg_page(dev->sg_src)) + ++ dev->sg_src->offset + dev->sg_src->length - ivsize; ++ ++ /* store the iv that need to be updated in chain mode */ ++ if (ctx->mode & RK_CRYPTO_DEC) ++ memcpy(ctx->iv, src_last_blk, ivsize); + + err = dev->load_data(dev, dev->sg_src, dev->sg_dst); + if (!err) +@@ -260,8 +271,9 @@ static int rk_ablk_start(struct rk_crypto_info *dev) + dev->total = req->nbytes; + dev->sg_src = req->src; + dev->first = req->src; +- dev->nents = sg_nents(req->src); ++ dev->src_nents = sg_nents(req->src); + dev->sg_dst = req->dst; ++ dev->dst_nents = sg_nents(req->dst); + dev->aligned = 1; + + spin_lock_irqsave(&dev->lock, flags); +@@ -285,6 +297,28 @@ static void rk_iv_copyback(struct rk_crypto_info *dev) + memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize); + } + ++static void rk_update_iv(struct rk_crypto_info *dev) ++{ ++ struct ablkcipher_request *req = ++ ablkcipher_request_cast(dev->async_req); ++ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); ++ struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); ++ u32 ivsize = crypto_ablkcipher_ivsize(tfm); ++ u8 *new_iv = NULL; ++ ++ if (ctx->mode & RK_CRYPTO_DEC) { ++ new_iv = ctx->iv; ++ } else { ++ new_iv = page_address(sg_page(dev->sg_dst)) + ++ dev->sg_dst->offset + dev->sg_dst->length - ivsize; ++ } ++ ++ if (ivsize == DES_BLOCK_SIZE) ++ memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize); ++ else if (ivsize == AES_BLOCK_SIZE) ++ memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize); ++} ++ + /* return: + * true some err was occurred + * fault no err, continue +@@ -297,7 +331,7 @@ static 
int rk_ablk_rx(struct rk_crypto_info *dev) + + dev->unload_data(dev); + if (!dev->aligned) { +- if (!sg_pcopy_from_buffer(req->dst, dev->nents, ++ if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents, + dev->addr_vir, dev->count, + dev->total - dev->left_bytes - + dev->count)) { +@@ -306,6 +340,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev) + } + } + if (dev->left_bytes) { ++ rk_update_iv(dev); + if (dev->aligned) { + if (sg_is_last(dev->sg_src)) { + dev_err(dev->dev, "[%s:%d] Lack of data\n", +diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c +index 821a506b9e17..c336ae75e361 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c ++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c +@@ -206,7 +206,7 @@ static int rk_ahash_start(struct rk_crypto_info *dev) + dev->sg_dst = NULL; + dev->sg_src = req->src; + dev->first = req->src; +- dev->nents = sg_nents(req->src); ++ dev->src_nents = sg_nents(req->src); + rctx = ahash_request_ctx(req); + rctx->mode = 0; + +diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c +index 5d8a67c65141..727018a16cca 100644 +--- a/drivers/gpu/drm/i915/i915_gem.c ++++ b/drivers/gpu/drm/i915/i915_gem.c +@@ -1640,7 +1640,8 @@ __vma_matches(struct vm_area_struct *vma, struct file *filp, + if (vma->vm_file != filp) + return false; + +- return vma->vm_start == addr && (vma->vm_end - vma->vm_start) == size; ++ return vma->vm_start == addr && ++ (vma->vm_end - vma->vm_start) == PAGE_ALIGN(size); + } + + /** +diff --git a/drivers/gpu/drm/imx/imx-ldb.c b/drivers/gpu/drm/imx/imx-ldb.c +index dd5312b02a8d..4f2e6c7e04c1 100644 +--- a/drivers/gpu/drm/imx/imx-ldb.c ++++ b/drivers/gpu/drm/imx/imx-ldb.c +@@ -652,8 +652,10 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) + int bus_format; + + ret = of_property_read_u32(child, "reg", &i); +- if (ret || i < 0 || i > 1) +- return -EINVAL; ++ if (ret || i < 0 || i > 1) { ++ ret = -EINVAL; 
++ goto free_child; ++ } + + if (!of_device_is_available(child)) + continue; +@@ -666,7 +668,6 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) + channel = &imx_ldb->channel[i]; + channel->ldb = imx_ldb; + channel->chno = i; +- channel->child = child; + + /* + * The output port is port@4 with an external 4-port mux or +@@ -676,13 +677,13 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) + imx_ldb->lvds_mux ? 4 : 2, 0, + &channel->panel, &channel->bridge); + if (ret && ret != -ENODEV) +- return ret; ++ goto free_child; + + /* panel ddc only if there is no bridge */ + if (!channel->bridge) { + ret = imx_ldb_panel_ddc(dev, channel, child); + if (ret) +- return ret; ++ goto free_child; + } + + bus_format = of_get_bus_format(dev, child); +@@ -698,18 +699,26 @@ static int imx_ldb_bind(struct device *dev, struct device *master, void *data) + if (bus_format < 0) { + dev_err(dev, "could not determine data mapping: %d\n", + bus_format); +- return bus_format; ++ ret = bus_format; ++ goto free_child; + } + channel->bus_format = bus_format; ++ channel->child = child; + + ret = imx_ldb_register(drm, channel); +- if (ret) +- return ret; ++ if (ret) { ++ channel->child = NULL; ++ goto free_child; ++ } + } + + dev_set_drvdata(dev, imx_ldb); + + return 0; ++ ++free_child: ++ of_node_put(child); ++ return ret; + } + + static void imx_ldb_unbind(struct device *dev, struct device *master, +diff --git a/drivers/gpu/drm/imx/ipuv3-plane.c b/drivers/gpu/drm/imx/ipuv3-plane.c +index cf98596c7ce1..d0d7f6adbc89 100644 +--- a/drivers/gpu/drm/imx/ipuv3-plane.c ++++ b/drivers/gpu/drm/imx/ipuv3-plane.c +@@ -348,9 +348,9 @@ static int ipu_plane_atomic_check(struct drm_plane *plane, + if (ret) + return ret; + +- /* CRTC should be enabled */ ++ /* nothing to check when disabling or disabled */ + if (!crtc_state->enable) +- return -EINVAL; ++ return 0; + + switch (plane->type) { + case DRM_PLANE_TYPE_PRIMARY: +diff --git 
a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c +index 54324330b91f..2f0a5bd50174 100644 +--- a/drivers/gpu/drm/radeon/evergreen_cs.c ++++ b/drivers/gpu/drm/radeon/evergreen_cs.c +@@ -1299,6 +1299,7 @@ static int evergreen_cs_handle_reg(struct radeon_cs_parser *p, u32 reg, u32 idx) + return -EINVAL; + } + ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff); ++ break; + case CB_TARGET_MASK: + track->cb_target_mask = radeon_get_ib_value(p, idx); + track->cb_dirty = true; +diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c +index 2c8411b8d050..f3a57c0500f3 100644 +--- a/drivers/gpu/ipu-v3/ipu-common.c ++++ b/drivers/gpu/ipu-v3/ipu-common.c +@@ -894,8 +894,8 @@ static struct ipu_devtype ipu_type_imx51 = { + .cpmem_ofs = 0x1f000000, + .srm_ofs = 0x1f040000, + .tpm_ofs = 0x1f060000, +- .csi0_ofs = 0x1f030000, +- .csi1_ofs = 0x1f038000, ++ .csi0_ofs = 0x1e030000, ++ .csi1_ofs = 0x1e038000, + .ic_ofs = 0x1e020000, + .disp0_ofs = 0x1e040000, + .disp1_ofs = 0x1e048000, +@@ -910,8 +910,8 @@ static struct ipu_devtype ipu_type_imx53 = { + .cpmem_ofs = 0x07000000, + .srm_ofs = 0x07040000, + .tpm_ofs = 0x07060000, +- .csi0_ofs = 0x07030000, +- .csi1_ofs = 0x07038000, ++ .csi0_ofs = 0x06030000, ++ .csi1_ofs = 0x06038000, + .ic_ofs = 0x06020000, + .disp0_ofs = 0x06040000, + .disp1_ofs = 0x06048000, +diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c +index 018678ec3c13..bb27a3150563 100644 +--- a/drivers/hwtracing/intel_th/gth.c ++++ b/drivers/hwtracing/intel_th/gth.c +@@ -615,6 +615,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev, + { + struct gth_device *gth = dev_get_drvdata(&thdev->dev); + int port = othdev->output.port; ++ int master; + + if (thdev->host_mode) + return; +@@ -623,6 +624,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev, + othdev->output.port = -1; + othdev->output.active = false; + gth->output[port].output = NULL; ++ for (master 
= 0; master < TH_CONFIGURABLE_MASTERS; master++) ++ if (gth->master[master] == port) ++ gth->master[master] = -1; + spin_unlock(>h->gth_lock); + } + +diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c +index 736862967e32..41724d18e712 100644 +--- a/drivers/hwtracing/stm/core.c ++++ b/drivers/hwtracing/stm/core.c +@@ -252,6 +252,9 @@ static int find_free_channels(unsigned long *bitmap, unsigned int start, + ; + if (i == width) + return pos; ++ ++ /* step over [pos..pos+i) to continue search */ ++ pos += i; + } + + return -1; +@@ -558,7 +561,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg) + { + struct stm_device *stm = stmf->stm; + struct stp_policy_id *id; +- int ret = -EINVAL; ++ int ret = -EINVAL, wlimit = 1; + u32 size; + + if (stmf->output.nr_chans) +@@ -586,8 +589,10 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg) + if (id->__reserved_0 || id->__reserved_1) + goto err_free; + +- if (id->width < 1 || +- id->width > PAGE_SIZE / stm->data->sw_mmiosz) ++ if (stm->data->sw_mmiosz) ++ wlimit = PAGE_SIZE / stm->data->sw_mmiosz; ++ ++ if (id->width < 1 || id->width > wlimit) + goto err_free; + + ret = stm_file_assign(stmf, id->id, id->width); +diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c +index 44deae78913e..4d19254f78c8 100644 +--- a/drivers/i2c/busses/i2c-bcm2835.c ++++ b/drivers/i2c/busses/i2c-bcm2835.c +@@ -191,6 +191,15 @@ static void bcm2835_i2c_start_transfer(struct bcm2835_i2c_dev *i2c_dev) + bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_C, c); + } + ++static void bcm2835_i2c_finish_transfer(struct bcm2835_i2c_dev *i2c_dev) ++{ ++ i2c_dev->curr_msg = NULL; ++ i2c_dev->num_msgs = 0; ++ ++ i2c_dev->msg_buf = NULL; ++ i2c_dev->msg_buf_remaining = 0; ++} ++ + /* + * Note about I2C_C_CLEAR on error: + * The I2C_C_CLEAR on errors will take some time to resolve -- if you were in +@@ -291,6 +300,9 @@ static int bcm2835_i2c_xfer(struct i2c_adapter 
*adap, struct i2c_msg msgs[], + + time_left = wait_for_completion_timeout(&i2c_dev->completion, + adap->timeout); ++ ++ bcm2835_i2c_finish_transfer(i2c_dev); ++ + if (!time_left) { + bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_C, + BCM2835_I2C_C_CLEAR); +diff --git a/drivers/i2c/busses/i2c-cadence.c b/drivers/i2c/busses/i2c-cadence.c +index b13605718291..d917cefc5a19 100644 +--- a/drivers/i2c/busses/i2c-cadence.c ++++ b/drivers/i2c/busses/i2c-cadence.c +@@ -382,8 +382,10 @@ static void cdns_i2c_mrecv(struct cdns_i2c *id) + * Check for the message size against FIFO depth and set the + * 'hold bus' bit if it is greater than FIFO depth. + */ +- if (id->recv_count > CDNS_I2C_FIFO_DEPTH) ++ if ((id->recv_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag) + ctrl_reg |= CDNS_I2C_CR_HOLD; ++ else ++ ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD; + + cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET); + +@@ -440,8 +442,11 @@ static void cdns_i2c_msend(struct cdns_i2c *id) + * Check for the message size against FIFO depth and set the + * 'hold bus' bit if it is greater than FIFO depth. + */ +- if (id->send_count > CDNS_I2C_FIFO_DEPTH) ++ if ((id->send_count > CDNS_I2C_FIFO_DEPTH) || id->bus_hold_flag) + ctrl_reg |= CDNS_I2C_CR_HOLD; ++ else ++ ctrl_reg = ctrl_reg & ~CDNS_I2C_CR_HOLD; ++ + cdns_i2c_writereg(ctrl_reg, CDNS_I2C_CR_OFFSET); + + /* Clear the interrupts in interrupt status register. 
*/ +diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c +index ec2d11af6c78..b90f1512f59c 100644 +--- a/drivers/i2c/busses/i2c-tegra.c ++++ b/drivers/i2c/busses/i2c-tegra.c +@@ -794,7 +794,7 @@ static const struct i2c_algorithm tegra_i2c_algo = { + /* payload size is only 12 bit */ + static const struct i2c_adapter_quirks tegra_i2c_quirks = { + .max_read_len = 4096, +- .max_write_len = 4096, ++ .max_write_len = 4096 - 12, + }; + + static const struct tegra_i2c_hw_feature tegra20_i2c_hw = { +diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c +index 6c5a7be9f8c1..019153882e70 100644 +--- a/drivers/iio/adc/exynos_adc.c ++++ b/drivers/iio/adc/exynos_adc.c +@@ -916,7 +916,7 @@ static int exynos_adc_remove(struct platform_device *pdev) + struct iio_dev *indio_dev = platform_get_drvdata(pdev); + struct exynos_adc *info = iio_priv(indio_dev); + +- if (IS_REACHABLE(CONFIG_INPUT)) { ++ if (IS_REACHABLE(CONFIG_INPUT) && info->input) { + free_irq(info->tsirq, info); + input_unregister_device(info->input); + } +diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h +index ee2859dcceab..af550c1767e3 100644 +--- a/drivers/infiniband/hw/hfi1/hfi.h ++++ b/drivers/infiniband/hw/hfi1/hfi.h +@@ -1398,7 +1398,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd, + struct hfi1_devdata *dd, u8 hw_pidx, u8 port); + void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd); + int hfi1_rcd_put(struct hfi1_ctxtdata *rcd); +-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd); ++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd); + struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt); + int handle_receive_interrupt(struct hfi1_ctxtdata *rcd, int thread); + int handle_receive_interrupt_nodma_rtail(struct hfi1_ctxtdata *rcd, int thread); +diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c +index ee5cbdfeb3ab..b7481701542e 100644 +--- 
a/drivers/infiniband/hw/hfi1/init.c ++++ b/drivers/infiniband/hw/hfi1/init.c +@@ -215,12 +215,12 @@ static void hfi1_rcd_free(struct kref *kref) + struct hfi1_ctxtdata *rcd = + container_of(kref, struct hfi1_ctxtdata, kref); + +- hfi1_free_ctxtdata(rcd->dd, rcd); +- + spin_lock_irqsave(&rcd->dd->uctxt_lock, flags); + rcd->dd->rcd[rcd->ctxt] = NULL; + spin_unlock_irqrestore(&rcd->dd->uctxt_lock, flags); + ++ hfi1_free_ctxtdata(rcd->dd, rcd); ++ + kfree(rcd); + } + +@@ -243,10 +243,13 @@ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd) + * @rcd: pointer to an initialized rcd data structure + * + * Use this to get a reference after the init. ++ * ++ * Return : reflect kref_get_unless_zero(), which returns non-zero on ++ * increment, otherwise 0. + */ +-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd) ++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd) + { +- kref_get(&rcd->kref); ++ return kref_get_unless_zero(&rcd->kref); + } + + /** +@@ -305,7 +308,8 @@ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt) + spin_lock_irqsave(&dd->uctxt_lock, flags); + if (dd->rcd[ctxt]) { + rcd = dd->rcd[ctxt]; +- hfi1_rcd_get(rcd); ++ if (!hfi1_rcd_get(rcd)) ++ rcd = NULL; + } + spin_unlock_irqrestore(&dd->uctxt_lock, flags); + +diff --git a/drivers/input/keyboard/cap11xx.c b/drivers/input/keyboard/cap11xx.c +index 1a1eacae3ea1..87fb48143859 100644 +--- a/drivers/input/keyboard/cap11xx.c ++++ b/drivers/input/keyboard/cap11xx.c +@@ -75,9 +75,7 @@ + struct cap11xx_led { + struct cap11xx_priv *priv; + struct led_classdev cdev; +- struct work_struct work; + u32 reg; +- enum led_brightness new_brightness; + }; + #endif + +@@ -233,30 +231,21 @@ static void cap11xx_input_close(struct input_dev *idev) + } + + #ifdef CONFIG_LEDS_CLASS +-static void cap11xx_led_work(struct work_struct *work) ++static int cap11xx_led_set(struct led_classdev *cdev, ++ enum led_brightness value) + { +- struct cap11xx_led *led = container_of(work, struct cap11xx_led, work); ++ struct cap11xx_led *led 
= container_of(cdev, struct cap11xx_led, cdev); + struct cap11xx_priv *priv = led->priv; +- int value = led->new_brightness; + + /* +- * All LEDs share the same duty cycle as this is a HW limitation. +- * Brightness levels per LED are either 0 (OFF) and 1 (ON). ++ * All LEDs share the same duty cycle as this is a HW ++ * limitation. Brightness levels per LED are either ++ * 0 (OFF) and 1 (ON). + */ +- regmap_update_bits(priv->regmap, CAP11XX_REG_LED_OUTPUT_CONTROL, +- BIT(led->reg), value ? BIT(led->reg) : 0); +-} +- +-static void cap11xx_led_set(struct led_classdev *cdev, +- enum led_brightness value) +-{ +- struct cap11xx_led *led = container_of(cdev, struct cap11xx_led, cdev); +- +- if (led->new_brightness == value) +- return; +- +- led->new_brightness = value; +- schedule_work(&led->work); ++ return regmap_update_bits(priv->regmap, ++ CAP11XX_REG_LED_OUTPUT_CONTROL, ++ BIT(led->reg), ++ value ? BIT(led->reg) : 0); + } + + static int cap11xx_init_leds(struct device *dev, +@@ -299,7 +288,7 @@ static int cap11xx_init_leds(struct device *dev, + led->cdev.default_trigger = + of_get_property(child, "linux,default-trigger", NULL); + led->cdev.flags = 0; +- led->cdev.brightness_set = cap11xx_led_set; ++ led->cdev.brightness_set_blocking = cap11xx_led_set; + led->cdev.max_brightness = 1; + led->cdev.brightness = LED_OFF; + +@@ -312,8 +301,6 @@ static int cap11xx_init_leds(struct device *dev, + led->reg = reg; + led->priv = priv; + +- INIT_WORK(&led->work, cap11xx_led_work); +- + error = devm_led_classdev_register(dev, &led->cdev); + if (error) { + of_node_put(child); +diff --git a/drivers/input/keyboard/matrix_keypad.c b/drivers/input/keyboard/matrix_keypad.c +index 782dda68d93a..c04559a232f7 100644 +--- a/drivers/input/keyboard/matrix_keypad.c ++++ b/drivers/input/keyboard/matrix_keypad.c +@@ -222,7 +222,7 @@ static void matrix_keypad_stop(struct input_dev *dev) + keypad->stopped = true; + spin_unlock_irq(&keypad->lock); + +- flush_work(&keypad->work.work); ++ 
flush_delayed_work(&keypad->work); + /* + * matrix_keypad_scan() will leave IRQs enabled; + * we should disable them now. +diff --git a/drivers/input/keyboard/st-keyscan.c b/drivers/input/keyboard/st-keyscan.c +index babcfb165e4f..3b85631fde91 100644 +--- a/drivers/input/keyboard/st-keyscan.c ++++ b/drivers/input/keyboard/st-keyscan.c +@@ -153,6 +153,8 @@ static int keyscan_probe(struct platform_device *pdev) + + input_dev->id.bustype = BUS_HOST; + ++ keypad_data->input_dev = input_dev; ++ + error = keypad_matrix_key_parse_dt(keypad_data); + if (error) + return error; +@@ -168,8 +170,6 @@ static int keyscan_probe(struct platform_device *pdev) + + input_set_drvdata(input_dev, keypad_data); + +- keypad_data->input_dev = input_dev; +- + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + keypad_data->base = devm_ioremap_resource(&pdev->dev, res); + if (IS_ERR(keypad_data->base)) +diff --git a/drivers/input/misc/pwm-vibra.c b/drivers/input/misc/pwm-vibra.c +index 55da191ae550..dbb6d9e1b947 100644 +--- a/drivers/input/misc/pwm-vibra.c ++++ b/drivers/input/misc/pwm-vibra.c +@@ -34,6 +34,7 @@ struct pwm_vibrator { + struct work_struct play_work; + u16 level; + u32 direction_duty_cycle; ++ bool vcc_on; + }; + + static int pwm_vibrator_start(struct pwm_vibrator *vibrator) +@@ -42,10 +43,13 @@ static int pwm_vibrator_start(struct pwm_vibrator *vibrator) + struct pwm_state state; + int err; + +- err = regulator_enable(vibrator->vcc); +- if (err) { +- dev_err(pdev, "failed to enable regulator: %d", err); +- return err; ++ if (!vibrator->vcc_on) { ++ err = regulator_enable(vibrator->vcc); ++ if (err) { ++ dev_err(pdev, "failed to enable regulator: %d", err); ++ return err; ++ } ++ vibrator->vcc_on = true; + } + + pwm_get_state(vibrator->pwm, &state); +@@ -76,11 +80,14 @@ static int pwm_vibrator_start(struct pwm_vibrator *vibrator) + + static void pwm_vibrator_stop(struct pwm_vibrator *vibrator) + { +- regulator_disable(vibrator->vcc); +- + if (vibrator->pwm_dir) + 
pwm_disable(vibrator->pwm_dir); + pwm_disable(vibrator->pwm); ++ ++ if (vibrator->vcc_on) { ++ regulator_disable(vibrator->vcc); ++ vibrator->vcc_on = false; ++ } + } + + static void pwm_vibrator_play_work(struct work_struct *work) +diff --git a/drivers/input/serio/ps2-gpio.c b/drivers/input/serio/ps2-gpio.c +index b50e3817f3c4..4a64ab30589c 100644 +--- a/drivers/input/serio/ps2-gpio.c ++++ b/drivers/input/serio/ps2-gpio.c +@@ -76,6 +76,7 @@ static void ps2_gpio_close(struct serio *serio) + { + struct ps2_gpio_data *drvdata = serio->port_data; + ++ flush_delayed_work(&drvdata->tx_work); + disable_irq(drvdata->irq); + } + +diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c +index d8ecc90ed1b5..121fb552f873 100644 +--- a/drivers/irqchip/irq-gic-v3-its.c ++++ b/drivers/irqchip/irq-gic-v3-its.c +@@ -1708,6 +1708,8 @@ static int its_alloc_tables(struct its_node *its) + indirect = its_parse_indirect_baser(its, baser, + psz, &order, + its->device_ids); ++ break; ++ + case GITS_BASER_TYPE_VCPU: + indirect = its_parse_indirect_baser(its, baser, + psz, &order, +diff --git a/drivers/mailbox/bcm-flexrm-mailbox.c b/drivers/mailbox/bcm-flexrm-mailbox.c +index f052a3eb2098..7e3ed2714630 100644 +--- a/drivers/mailbox/bcm-flexrm-mailbox.c ++++ b/drivers/mailbox/bcm-flexrm-mailbox.c +@@ -1381,9 +1381,9 @@ static void flexrm_shutdown(struct mbox_chan *chan) + + /* Clear ring flush state */ + timeout = 1000; /* timeout of 1s */ +- writel_relaxed(0x0, ring + RING_CONTROL); ++ writel_relaxed(0x0, ring->regs + RING_CONTROL); + do { +- if (!(readl_relaxed(ring + RING_FLUSH_DONE) & ++ if (!(readl_relaxed(ring->regs + RING_FLUSH_DONE) & + FLUSH_DONE_MASK)) + break; + mdelay(1); +diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h +index 151544740148..973847e027a8 100644 +--- a/drivers/md/bcache/writeback.h ++++ b/drivers/md/bcache/writeback.h +@@ -69,6 +69,9 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio, + 
in_use > CUTOFF_WRITEBACK_SYNC) + return false; + ++ if (bio_op(bio) == REQ_OP_DISCARD) ++ return false; ++ + if (dc->partial_stripes_expensive && + bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector, + bio_sectors(bio))) +diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c +index b10e4c5641ea..da4baea9cf83 100644 +--- a/drivers/md/dm-integrity.c ++++ b/drivers/md/dm-integrity.c +@@ -1276,8 +1276,8 @@ again: + checksums_ptr - checksums, !dio->write ? TAG_CMP : TAG_WRITE); + if (unlikely(r)) { + if (r > 0) { +- DMERR("Checksum failed at sector 0x%llx", +- (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size))); ++ DMERR_LIMIT("Checksum failed at sector 0x%llx", ++ (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size))); + r = -EILSEQ; + atomic64_inc(&ic->number_of_mismatches); + } +@@ -1469,8 +1469,8 @@ retry_kmap: + + integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack); + if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) { +- DMERR("Checksum failed when reading from journal, at sector 0x%llx", +- (unsigned long long)logical_sector); ++ DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx", ++ (unsigned long long)logical_sector); + } + } + #endif +diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c +index ed1b7bf1ec0e..433e78f453da 100644 +--- a/drivers/md/raid10.c ++++ b/drivers/md/raid10.c +@@ -3821,6 +3821,8 @@ static int raid10_run(struct mddev *mddev) + set_bit(MD_RECOVERY_RUNNING, &mddev->recovery); + mddev->sync_thread = md_register_thread(md_do_sync, mddev, + "reshape"); ++ if (!mddev->sync_thread) ++ goto out_free_conf; + } + + return 0; +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index 7dbb74cd506a..77a482c6eeda 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -7370,6 +7370,8 @@ static int raid5_run(struct mddev *mddev) + set_bit(MD_RECOVERY_RUNNING, &mddev->recovery); + mddev->sync_thread = 
md_register_thread(md_do_sync, mddev, + "reshape"); ++ if (!mddev->sync_thread) ++ goto abort; + } + + /* Ok, everything is just fine now */ +diff --git a/drivers/media/platform/vimc/Makefile b/drivers/media/platform/vimc/Makefile +index 4b2e3de7856e..c4fc8e7d365a 100644 +--- a/drivers/media/platform/vimc/Makefile ++++ b/drivers/media/platform/vimc/Makefile +@@ -5,6 +5,7 @@ vimc_common-objs := vimc-common.o + vimc_debayer-objs := vimc-debayer.o + vimc_scaler-objs := vimc-scaler.o + vimc_sensor-objs := vimc-sensor.o ++vimc_streamer-objs := vimc-streamer.o + + obj-$(CONFIG_VIDEO_VIMC) += vimc.o vimc_capture.o vimc_common.o vimc-debayer.o \ +- vimc_scaler.o vimc_sensor.o ++ vimc_scaler.o vimc_sensor.o vimc_streamer.o +diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c +index 88a1e5670c72..a078ad18909a 100644 +--- a/drivers/media/platform/vimc/vimc-capture.c ++++ b/drivers/media/platform/vimc/vimc-capture.c +@@ -23,6 +23,7 @@ + #include + + #include "vimc-common.h" ++#include "vimc-streamer.h" + + #define VIMC_CAP_DRV_NAME "vimc-capture" + +@@ -43,7 +44,7 @@ struct vimc_cap_device { + spinlock_t qlock; + struct mutex lock; + u32 sequence; +- struct media_pipeline pipe; ++ struct vimc_stream stream; + }; + + static const struct v4l2_pix_format fmt_default = { +@@ -247,14 +248,13 @@ static int vimc_cap_start_streaming(struct vb2_queue *vq, unsigned int count) + vcap->sequence = 0; + + /* Start the media pipeline */ +- ret = media_pipeline_start(entity, &vcap->pipe); ++ ret = media_pipeline_start(entity, &vcap->stream.pipe); + if (ret) { + vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED); + return ret; + } + +- /* Enable streaming from the pipe */ +- ret = vimc_pipeline_s_stream(&vcap->vdev.entity, 1); ++ ret = vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 1); + if (ret) { + media_pipeline_stop(entity); + vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED); +@@ -272,8 +272,7 @@ static void 
vimc_cap_stop_streaming(struct vb2_queue *vq) + { + struct vimc_cap_device *vcap = vb2_get_drv_priv(vq); + +- /* Disable streaming from the pipe */ +- vimc_pipeline_s_stream(&vcap->vdev.entity, 0); ++ vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 0); + + /* Stop the media pipeline */ + media_pipeline_stop(&vcap->vdev.entity); +@@ -354,8 +353,8 @@ static void vimc_cap_comp_unbind(struct device *comp, struct device *master, + kfree(vcap); + } + +-static void vimc_cap_process_frame(struct vimc_ent_device *ved, +- struct media_pad *sink, const void *frame) ++static void *vimc_cap_process_frame(struct vimc_ent_device *ved, ++ const void *frame) + { + struct vimc_cap_device *vcap = container_of(ved, struct vimc_cap_device, + ved); +@@ -369,7 +368,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved, + typeof(*vimc_buf), list); + if (!vimc_buf) { + spin_unlock(&vcap->qlock); +- return; ++ return ERR_PTR(-EAGAIN); + } + + /* Remove this entry from the list */ +@@ -390,6 +389,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved, + vb2_set_plane_payload(&vimc_buf->vb2.vb2_buf, 0, + vcap->format.sizeimage); + vb2_buffer_done(&vimc_buf->vb2.vb2_buf, VB2_BUF_STATE_DONE); ++ return NULL; + } + + static int vimc_cap_comp_bind(struct device *comp, struct device *master, +diff --git a/drivers/media/platform/vimc/vimc-common.c b/drivers/media/platform/vimc/vimc-common.c +index 9d63c84a9876..743554de724d 100644 +--- a/drivers/media/platform/vimc/vimc-common.c ++++ b/drivers/media/platform/vimc/vimc-common.c +@@ -207,41 +207,6 @@ const struct vimc_pix_map *vimc_pix_map_by_pixelformat(u32 pixelformat) + } + EXPORT_SYMBOL_GPL(vimc_pix_map_by_pixelformat); + +-int vimc_propagate_frame(struct media_pad *src, const void *frame) +-{ +- struct media_link *link; +- +- if (!(src->flags & MEDIA_PAD_FL_SOURCE)) +- return -EINVAL; +- +- /* Send this frame to all sink pads that are direct linked */ +- list_for_each_entry(link, &src->entity->links, list) { +- if 
(link->source == src && +- (link->flags & MEDIA_LNK_FL_ENABLED)) { +- struct vimc_ent_device *ved = NULL; +- struct media_entity *entity = link->sink->entity; +- +- if (is_media_entity_v4l2_subdev(entity)) { +- struct v4l2_subdev *sd = +- container_of(entity, struct v4l2_subdev, +- entity); +- ved = v4l2_get_subdevdata(sd); +- } else if (is_media_entity_v4l2_video_device(entity)) { +- struct video_device *vdev = +- container_of(entity, +- struct video_device, +- entity); +- ved = video_get_drvdata(vdev); +- } +- if (ved && ved->process_frame) +- ved->process_frame(ved, link->sink, frame); +- } +- } +- +- return 0; +-} +-EXPORT_SYMBOL_GPL(vimc_propagate_frame); +- + /* Helper function to allocate and initialize pads */ + struct media_pad *vimc_pads_init(u16 num_pads, const unsigned long *pads_flag) + { +diff --git a/drivers/media/platform/vimc/vimc-common.h b/drivers/media/platform/vimc/vimc-common.h +index dca528a316e7..d7c5f4616abb 100644 +--- a/drivers/media/platform/vimc/vimc-common.h ++++ b/drivers/media/platform/vimc/vimc-common.h +@@ -108,23 +108,12 @@ struct vimc_pix_map { + struct vimc_ent_device { + struct media_entity *ent; + struct media_pad *pads; +- void (*process_frame)(struct vimc_ent_device *ved, +- struct media_pad *sink, const void *frame); ++ void * (*process_frame)(struct vimc_ent_device *ved, ++ const void *frame); + void (*vdev_get_format)(struct vimc_ent_device *ved, + struct v4l2_pix_format *fmt); + }; + +-/** +- * vimc_propagate_frame - propagate a frame through the topology +- * +- * @src: the source pad where the frame is being originated +- * @frame: the frame to be propagated +- * +- * This function will call the process_frame callback from the vimc_ent_device +- * struct of the nodes directly connected to the @src pad +- */ +-int vimc_propagate_frame(struct media_pad *src, const void *frame); +- + /** + * vimc_pads_init - initialize pads + * +diff --git a/drivers/media/platform/vimc/vimc-debayer.c 
b/drivers/media/platform/vimc/vimc-debayer.c +index 4d663e89d33f..c4e674f665b2 100644 +--- a/drivers/media/platform/vimc/vimc-debayer.c ++++ b/drivers/media/platform/vimc/vimc-debayer.c +@@ -320,7 +320,6 @@ static void vimc_deb_set_rgb_mbus_fmt_rgb888_1x24(struct vimc_deb_device *vdeb, + static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable) + { + struct vimc_deb_device *vdeb = v4l2_get_subdevdata(sd); +- int ret; + + if (enable) { + const struct vimc_pix_map *vpix; +@@ -350,22 +349,10 @@ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable) + if (!vdeb->src_frame) + return -ENOMEM; + +- /* Turn the stream on in the subdevices directly connected */ +- ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 1); +- if (ret) { +- vfree(vdeb->src_frame); +- vdeb->src_frame = NULL; +- return ret; +- } + } else { + if (!vdeb->src_frame) + return 0; + +- /* Disable streaming from the pipe */ +- ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 0); +- if (ret) +- return ret; +- + vfree(vdeb->src_frame); + vdeb->src_frame = NULL; + } +@@ -479,9 +466,8 @@ static void vimc_deb_calc_rgb_sink(struct vimc_deb_device *vdeb, + } + } + +-static void vimc_deb_process_frame(struct vimc_ent_device *ved, +- struct media_pad *sink, +- const void *sink_frame) ++static void *vimc_deb_process_frame(struct vimc_ent_device *ved, ++ const void *sink_frame) + { + struct vimc_deb_device *vdeb = container_of(ved, struct vimc_deb_device, + ved); +@@ -490,7 +476,7 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved, + + /* If the stream in this node is not active, just return */ + if (!vdeb->src_frame) +- return; ++ return ERR_PTR(-EINVAL); + + for (i = 0; i < vdeb->sink_fmt.height; i++) + for (j = 0; j < vdeb->sink_fmt.width; j++) { +@@ -498,12 +484,8 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved, + vdeb->set_rgb_src(vdeb, i, j, rgb); + } + +- /* Propagate the frame through all source pads */ +- for (i = 1; i < vdeb->sd.entity.num_pads; i++) { +- struct 
media_pad *pad = &vdeb->sd.entity.pads[i]; ++ return vdeb->src_frame; + +- vimc_propagate_frame(pad, vdeb->src_frame); +- } + } + + static void vimc_deb_comp_unbind(struct device *comp, struct device *master, +diff --git a/drivers/media/platform/vimc/vimc-scaler.c b/drivers/media/platform/vimc/vimc-scaler.c +index e1602e0bc230..b763d87f4b4b 100644 +--- a/drivers/media/platform/vimc/vimc-scaler.c ++++ b/drivers/media/platform/vimc/vimc-scaler.c +@@ -216,7 +216,6 @@ static const struct v4l2_subdev_pad_ops vimc_sca_pad_ops = { + static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable) + { + struct vimc_sca_device *vsca = v4l2_get_subdevdata(sd); +- int ret; + + if (enable) { + const struct vimc_pix_map *vpix; +@@ -244,22 +243,10 @@ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable) + if (!vsca->src_frame) + return -ENOMEM; + +- /* Turn the stream on in the subdevices directly connected */ +- ret = vimc_pipeline_s_stream(&vsca->sd.entity, 1); +- if (ret) { +- vfree(vsca->src_frame); +- vsca->src_frame = NULL; +- return ret; +- } + } else { + if (!vsca->src_frame) + return 0; + +- /* Disable streaming from the pipe */ +- ret = vimc_pipeline_s_stream(&vsca->sd.entity, 0); +- if (ret) +- return ret; +- + vfree(vsca->src_frame); + vsca->src_frame = NULL; + } +@@ -345,26 +332,19 @@ static void vimc_sca_fill_src_frame(const struct vimc_sca_device *const vsca, + vimc_sca_scale_pix(vsca, i, j, sink_frame); + } + +-static void vimc_sca_process_frame(struct vimc_ent_device *ved, +- struct media_pad *sink, +- const void *sink_frame) ++static void *vimc_sca_process_frame(struct vimc_ent_device *ved, ++ const void *sink_frame) + { + struct vimc_sca_device *vsca = container_of(ved, struct vimc_sca_device, + ved); +- unsigned int i; + + /* If the stream in this node is not active, just return */ + if (!vsca->src_frame) +- return; ++ return ERR_PTR(-EINVAL); + + vimc_sca_fill_src_frame(vsca, sink_frame); + +- /* Propagate the frame through all source pads */ +- 
for (i = 1; i < vsca->sd.entity.num_pads; i++) { +- struct media_pad *pad = &vsca->sd.entity.pads[i]; +- +- vimc_propagate_frame(pad, vsca->src_frame); +- } ++ return vsca->src_frame; + }; + + static void vimc_sca_comp_unbind(struct device *comp, struct device *master, +diff --git a/drivers/media/platform/vimc/vimc-sensor.c b/drivers/media/platform/vimc/vimc-sensor.c +index 02e68c8fc02b..70cee5c0c89a 100644 +--- a/drivers/media/platform/vimc/vimc-sensor.c ++++ b/drivers/media/platform/vimc/vimc-sensor.c +@@ -16,8 +16,6 @@ + */ + + #include +-#include +-#include + #include + #include + #include +@@ -197,38 +195,27 @@ static const struct v4l2_subdev_pad_ops vimc_sen_pad_ops = { + .set_fmt = vimc_sen_set_fmt, + }; + +-static int vimc_sen_tpg_thread(void *data) ++static void *vimc_sen_process_frame(struct vimc_ent_device *ved, ++ const void *sink_frame) + { +- struct vimc_sen_device *vsen = data; +- unsigned int i; +- +- set_freezable(); +- set_current_state(TASK_UNINTERRUPTIBLE); +- +- for (;;) { +- try_to_freeze(); +- if (kthread_should_stop()) +- break; +- +- tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame); ++ struct vimc_sen_device *vsen = container_of(ved, struct vimc_sen_device, ++ ved); ++ const struct vimc_pix_map *vpix; ++ unsigned int frame_size; + +- /* Send the frame to all source pads */ +- for (i = 0; i < vsen->sd.entity.num_pads; i++) +- vimc_propagate_frame(&vsen->sd.entity.pads[i], +- vsen->frame); ++ /* Calculate the frame size */ ++ vpix = vimc_pix_map_by_code(vsen->mbus_format.code); ++ frame_size = vsen->mbus_format.width * vpix->bpp * ++ vsen->mbus_format.height; + +- /* 60 frames per second */ +- schedule_timeout(HZ/60); +- } +- +- return 0; ++ tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame); ++ return vsen->frame; + } + + static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable) + { + struct vimc_sen_device *vsen = + container_of(sd, struct vimc_sen_device, sd); +- int ret; + + if (enable) { + const struct vimc_pix_map *vpix; 
+@@ -254,26 +241,8 @@ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable) + /* configure the test pattern generator */ + vimc_sen_tpg_s_format(vsen); + +- /* Initialize the image generator thread */ +- vsen->kthread_sen = kthread_run(vimc_sen_tpg_thread, vsen, +- "%s-sen", vsen->sd.v4l2_dev->name); +- if (IS_ERR(vsen->kthread_sen)) { +- dev_err(vsen->dev, "%s: kernel_thread() failed\n", +- vsen->sd.name); +- vfree(vsen->frame); +- vsen->frame = NULL; +- return PTR_ERR(vsen->kthread_sen); +- } + } else { +- if (!vsen->kthread_sen) +- return 0; +- +- /* Stop image generator */ +- ret = kthread_stop(vsen->kthread_sen); +- if (ret) +- return ret; + +- vsen->kthread_sen = NULL; + vfree(vsen->frame); + vsen->frame = NULL; + return 0; +@@ -325,6 +294,7 @@ static int vimc_sen_comp_bind(struct device *comp, struct device *master, + if (ret) + goto err_free_vsen; + ++ vsen->ved.process_frame = vimc_sen_process_frame; + dev_set_drvdata(comp, &vsen->ved); + vsen->dev = comp; + +diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c +new file mode 100644 +index 000000000000..fcc897fb247b +--- /dev/null ++++ b/drivers/media/platform/vimc/vimc-streamer.c +@@ -0,0 +1,188 @@ ++// SPDX-License-Identifier: GPL-2.0+ ++/* ++ * vimc-streamer.c Virtual Media Controller Driver ++ * ++ * Copyright (C) 2018 Lucas A. M. Magalhães ++ * ++ */ ++ ++#include ++#include ++#include ++#include ++ ++#include "vimc-streamer.h" ++ ++/** ++ * vimc_get_source_entity - get the entity connected with the first sink pad ++ * ++ * @ent: reference media_entity ++ * ++ * Helper function that returns the media entity containing the source pad ++ * linked with the first sink pad from the given media entity pad list. 
++ */ ++static struct media_entity *vimc_get_source_entity(struct media_entity *ent) ++{ ++ struct media_pad *pad; ++ int i; ++ ++ for (i = 0; i < ent->num_pads; i++) { ++ if (ent->pads[i].flags & MEDIA_PAD_FL_SOURCE) ++ continue; ++ pad = media_entity_remote_pad(&ent->pads[i]); ++ return pad ? pad->entity : NULL; ++ } ++ return NULL; ++} ++ ++/* ++ * vimc_streamer_pipeline_terminate - Disable stream in all ved in stream ++ * ++ * @stream: the pointer to the stream structure with the pipeline to be ++ * disabled. ++ * ++ * Calls s_stream to disable the stream in each entity of the pipeline ++ * ++ */ ++static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream) ++{ ++ struct media_entity *entity; ++ struct v4l2_subdev *sd; ++ ++ while (stream->pipe_size) { ++ stream->pipe_size--; ++ entity = stream->ved_pipeline[stream->pipe_size]->ent; ++ entity = vimc_get_source_entity(entity); ++ stream->ved_pipeline[stream->pipe_size] = NULL; ++ ++ if (!is_media_entity_v4l2_subdev(entity)) ++ continue; ++ ++ sd = media_entity_to_v4l2_subdev(entity); ++ v4l2_subdev_call(sd, video, s_stream, 0); ++ } ++} ++ ++/* ++ * vimc_streamer_pipeline_init - initializes the stream structure ++ * ++ * @stream: the pointer to the stream structure to be initialized ++ * @ved: the pointer to the vimc entity initializing the stream ++ * ++ * Initializes the stream structure. Walks through the entity graph to ++ * construct the pipeline used later on the streamer thread. ++ * Calls s_stream to enable stream in all entities of the pipeline. 
++ */ ++static int vimc_streamer_pipeline_init(struct vimc_stream *stream, ++ struct vimc_ent_device *ved) ++{ ++ struct media_entity *entity; ++ struct video_device *vdev; ++ struct v4l2_subdev *sd; ++ int ret = 0; ++ ++ stream->pipe_size = 0; ++ while (stream->pipe_size < VIMC_STREAMER_PIPELINE_MAX_SIZE) { ++ if (!ved) { ++ vimc_streamer_pipeline_terminate(stream); ++ return -EINVAL; ++ } ++ stream->ved_pipeline[stream->pipe_size++] = ved; ++ ++ entity = vimc_get_source_entity(ved->ent); ++ /* Check if the end of the pipeline was reached*/ ++ if (!entity) ++ return 0; ++ ++ if (is_media_entity_v4l2_subdev(entity)) { ++ sd = media_entity_to_v4l2_subdev(entity); ++ ret = v4l2_subdev_call(sd, video, s_stream, 1); ++ if (ret && ret != -ENOIOCTLCMD) { ++ vimc_streamer_pipeline_terminate(stream); ++ return ret; ++ } ++ ved = v4l2_get_subdevdata(sd); ++ } else { ++ vdev = container_of(entity, ++ struct video_device, ++ entity); ++ ved = video_get_drvdata(vdev); ++ } ++ } ++ ++ vimc_streamer_pipeline_terminate(stream); ++ return -EINVAL; ++} ++ ++static int vimc_streamer_thread(void *data) ++{ ++ struct vimc_stream *stream = data; ++ int i; ++ ++ set_freezable(); ++ set_current_state(TASK_UNINTERRUPTIBLE); ++ ++ for (;;) { ++ try_to_freeze(); ++ if (kthread_should_stop()) ++ break; ++ ++ for (i = stream->pipe_size - 1; i >= 0; i--) { ++ stream->frame = stream->ved_pipeline[i]->process_frame( ++ stream->ved_pipeline[i], ++ stream->frame); ++ if (!stream->frame) ++ break; ++ if (IS_ERR(stream->frame)) ++ break; ++ } ++ //wait for 60hz ++ schedule_timeout(HZ / 60); ++ } ++ ++ return 0; ++} ++ ++int vimc_streamer_s_stream(struct vimc_stream *stream, ++ struct vimc_ent_device *ved, ++ int enable) ++{ ++ int ret; ++ ++ if (!stream || !ved) ++ return -EINVAL; ++ ++ if (enable) { ++ if (stream->kthread) ++ return 0; ++ ++ ret = vimc_streamer_pipeline_init(stream, ved); ++ if (ret) ++ return ret; ++ ++ stream->kthread = kthread_run(vimc_streamer_thread, stream, ++ "vimc-streamer 
thread"); ++ ++ if (IS_ERR(stream->kthread)) ++ return PTR_ERR(stream->kthread); ++ ++ } else { ++ if (!stream->kthread) ++ return 0; ++ ++ ret = kthread_stop(stream->kthread); ++ if (ret) ++ return ret; ++ ++ stream->kthread = NULL; ++ ++ vimc_streamer_pipeline_terminate(stream); ++ } ++ ++ return 0; ++} ++EXPORT_SYMBOL_GPL(vimc_streamer_s_stream); ++ ++MODULE_DESCRIPTION("Virtual Media Controller Driver (VIMC) Streamer"); ++MODULE_AUTHOR("Lucas A. M. Magalhães "); ++MODULE_LICENSE("GPL"); +diff --git a/drivers/media/platform/vimc/vimc-streamer.h b/drivers/media/platform/vimc/vimc-streamer.h +new file mode 100644 +index 000000000000..752af2e2d5a2 +--- /dev/null ++++ b/drivers/media/platform/vimc/vimc-streamer.h +@@ -0,0 +1,38 @@ ++/* SPDX-License-Identifier: GPL-2.0+ */ ++/* ++ * vimc-streamer.h Virtual Media Controller Driver ++ * ++ * Copyright (C) 2018 Lucas A. M. Magalhães ++ * ++ */ ++ ++#ifndef _VIMC_STREAMER_H_ ++#define _VIMC_STREAMER_H_ ++ ++#include ++ ++#include "vimc-common.h" ++ ++#define VIMC_STREAMER_PIPELINE_MAX_SIZE 16 ++ ++struct vimc_stream { ++ struct media_pipeline pipe; ++ struct vimc_ent_device *ved_pipeline[VIMC_STREAMER_PIPELINE_MAX_SIZE]; ++ unsigned int pipe_size; ++ u8 *frame; ++ struct task_struct *kthread; ++}; ++ ++/** ++ * vimc_streamer_s_streamer - start/stop the stream ++ * ++ * @stream: the pointer to the stream to start or stop ++ * @ved: The last entity of the streamer pipeline ++ * @enable: any non-zero number start the stream, zero stop ++ * ++ */ ++int vimc_streamer_s_stream(struct vimc_stream *stream, ++ struct vimc_ent_device *ved, ++ int enable); ++ ++#endif //_VIMC_STREAMER_H_ +diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c +index a6d800291883..393371916381 100644 +--- a/drivers/media/usb/uvc/uvc_video.c ++++ b/drivers/media/usb/uvc/uvc_video.c +@@ -638,6 +638,14 @@ void uvc_video_clock_update(struct uvc_streaming *stream, + if (!uvc_hw_timestamps_param) + return; + ++ /* ++ * We will 
get called from __vb2_queue_cancel() if there are buffers ++ * done but not dequeued by the user, but the sample array has already ++ * been released at that time. Just bail out in that case. ++ */ ++ if (!clock->samples) ++ return; ++ + spin_lock_irqsave(&clock->lock, flags); + + if (clock->count < clock->size) +diff --git a/drivers/media/v4l2-core/videobuf2-v4l2.c b/drivers/media/v4l2-core/videobuf2-v4l2.c +index 0c0669976bdc..69ca8debb711 100644 +--- a/drivers/media/v4l2-core/videobuf2-v4l2.c ++++ b/drivers/media/v4l2-core/videobuf2-v4l2.c +@@ -145,7 +145,6 @@ static void vb2_warn_zero_bytesused(struct vb2_buffer *vb) + return; + + check_once = true; +- WARN_ON(1); + + pr_warn("use of bytesused == 0 is deprecated and will be removed in the future,\n"); + if (vb->vb2_queue->allow_zero_bytesused) +diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c +index f58b4b6c79f2..1a64eb185cfd 100644 +--- a/drivers/misc/cxl/guest.c ++++ b/drivers/misc/cxl/guest.c +@@ -267,6 +267,7 @@ static int guest_reset(struct cxl *adapter) + int i, rc; + + pr_devel("Adapter reset request\n"); ++ spin_lock(&adapter->afu_list_lock); + for (i = 0; i < adapter->slices; i++) { + if ((afu = adapter->afu[i])) { + pci_error_handlers(afu, CXL_ERROR_DETECTED_EVENT, +@@ -283,6 +284,7 @@ static int guest_reset(struct cxl *adapter) + pci_error_handlers(afu, CXL_RESUME_EVENT, 0); + } + } ++ spin_unlock(&adapter->afu_list_lock); + return rc; + } + +diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c +index 2b3fd0a51701..cf069e11d2d2 100644 +--- a/drivers/misc/cxl/pci.c ++++ b/drivers/misc/cxl/pci.c +@@ -2050,7 +2050,7 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu, + /* There should only be one entry, but go through the list + * anyway + */ +- if (afu->phb == NULL) ++ if (afu == NULL || afu->phb == NULL) + return result; + + list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { +@@ -2077,7 +2077,8 @@ static pci_ers_result_t 
cxl_pci_error_detected(struct pci_dev *pdev, + { + struct cxl *adapter = pci_get_drvdata(pdev); + struct cxl_afu *afu; +- pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET, afu_result; ++ pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET; ++ pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET; + int i; + + /* At this point, we could still have an interrupt pending. +@@ -2088,6 +2089,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev, + + /* If we're permanently dead, give up. */ + if (state == pci_channel_io_perm_failure) { ++ spin_lock(&adapter->afu_list_lock); + for (i = 0; i < adapter->slices; i++) { + afu = adapter->afu[i]; + /* +@@ -2096,6 +2098,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev, + */ + cxl_vphb_error_detected(afu, state); + } ++ spin_unlock(&adapter->afu_list_lock); + return PCI_ERS_RESULT_DISCONNECT; + } + +@@ -2177,11 +2180,17 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev, + * * In slot_reset, free the old resources and allocate new ones. + * * In resume, clear the flag to allow things to start. 
+ */ ++ ++ /* Make sure no one else changes the afu list */ ++ spin_lock(&adapter->afu_list_lock); ++ + for (i = 0; i < adapter->slices; i++) { + afu = adapter->afu[i]; + +- afu_result = cxl_vphb_error_detected(afu, state); ++ if (afu == NULL) ++ continue; + ++ afu_result = cxl_vphb_error_detected(afu, state); + cxl_context_detach_all(afu); + cxl_ops->afu_deactivate_mode(afu, afu->current_mode); + pci_deconfigure_afu(afu); +@@ -2193,6 +2202,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev, + (result == PCI_ERS_RESULT_NEED_RESET)) + result = PCI_ERS_RESULT_NONE; + } ++ spin_unlock(&adapter->afu_list_lock); + + /* should take the context lock here */ + if (cxl_adapter_context_lock(adapter) != 0) +@@ -2225,14 +2235,18 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev) + */ + cxl_adapter_context_unlock(adapter); + ++ spin_lock(&adapter->afu_list_lock); + for (i = 0; i < adapter->slices; i++) { + afu = adapter->afu[i]; + ++ if (afu == NULL) ++ continue; ++ + if (pci_configure_afu(afu, adapter, pdev)) +- goto err; ++ goto err_unlock; + + if (cxl_afu_select_best_mode(afu)) +- goto err; ++ goto err_unlock; + + if (afu->phb == NULL) + continue; +@@ -2244,16 +2258,16 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev) + ctx = cxl_get_context(afu_dev); + + if (ctx && cxl_release_context(ctx)) +- goto err; ++ goto err_unlock; + + ctx = cxl_dev_context_init(afu_dev); + if (IS_ERR(ctx)) +- goto err; ++ goto err_unlock; + + afu_dev->dev.archdata.cxl_ctx = ctx; + + if (cxl_ops->afu_check_and_enable(afu)) +- goto err; ++ goto err_unlock; + + afu_dev->error_state = pci_channel_io_normal; + +@@ -2274,8 +2288,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev) + result = PCI_ERS_RESULT_DISCONNECT; + } + } ++ ++ spin_unlock(&adapter->afu_list_lock); + return result; + ++err_unlock: ++ spin_unlock(&adapter->afu_list_lock); ++ + err: + /* All the bits that happen in both error_detected and cxl_remove + * should 
be idempotent, so we don't need to worry about leaving a mix +@@ -2296,10 +2315,11 @@ static void cxl_pci_resume(struct pci_dev *pdev) + * This is not the place to be checking if everything came back up + * properly, because there's no return value: do that in slot_reset. + */ ++ spin_lock(&adapter->afu_list_lock); + for (i = 0; i < adapter->slices; i++) { + afu = adapter->afu[i]; + +- if (afu->phb == NULL) ++ if (afu == NULL || afu->phb == NULL) + continue; + + list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) { +@@ -2308,6 +2328,7 @@ static void cxl_pci_resume(struct pci_dev *pdev) + afu_dev->driver->err_handler->resume(afu_dev); + } + } ++ spin_unlock(&adapter->afu_list_lock); + } + + static const struct pci_error_handlers cxl_err_handler = { +diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c +index 59041f07b53c..ff5c4ad37a3a 100644 +--- a/drivers/mmc/host/sdhci-esdhc-imx.c ++++ b/drivers/mmc/host/sdhci-esdhc-imx.c +@@ -961,6 +961,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing) + case MMC_TIMING_UHS_SDR25: + case MMC_TIMING_UHS_SDR50: + case MMC_TIMING_UHS_SDR104: ++ case MMC_TIMING_MMC_HS: + case MMC_TIMING_MMC_HS200: + writel(m, host->ioaddr + ESDHC_MIX_CTRL); + break; +diff --git a/drivers/net/ethernet/atheros/atlx/atl2.c b/drivers/net/ethernet/atheros/atlx/atl2.c +index 77a1c03255de..225b4d452e0e 100644 +--- a/drivers/net/ethernet/atheros/atlx/atl2.c ++++ b/drivers/net/ethernet/atheros/atlx/atl2.c +@@ -1334,13 +1334,11 @@ static int atl2_probe(struct pci_dev *pdev, const struct pci_device_id *ent) + { + struct net_device *netdev; + struct atl2_adapter *adapter; +- static int cards_found; ++ static int cards_found = 0; + unsigned long mmio_start; + int mmio_len; + int err; + +- cards_found = 0; +- + err = pci_enable_device(pdev); + if (err) + return err; +diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c +index 
ed3edb17fd09..79018fea7be2 100644 +--- a/drivers/net/ethernet/broadcom/bcmsysport.c ++++ b/drivers/net/ethernet/broadcom/bcmsysport.c +@@ -134,6 +134,10 @@ static int bcm_sysport_set_rx_csum(struct net_device *dev, + + priv->rx_chk_en = !!(wanted & NETIF_F_RXCSUM); + reg = rxchk_readl(priv, RXCHK_CONTROL); ++ /* Clear L2 header checks, which would prevent BPDUs ++ * from being received. ++ */ ++ reg &= ~RXCHK_L2_HDR_DIS; + if (priv->rx_chk_en) + reg |= RXCHK_EN; + else +diff --git a/drivers/net/ethernet/cavium/thunder/nic_main.c b/drivers/net/ethernet/cavium/thunder/nic_main.c +index d89ec4724efd..819f38a3225d 100644 +--- a/drivers/net/ethernet/cavium/thunder/nic_main.c ++++ b/drivers/net/ethernet/cavium/thunder/nic_main.c +@@ -1030,7 +1030,7 @@ static void nic_handle_mbx_intr(struct nicpf *nic, int vf) + case NIC_MBOX_MSG_CFG_DONE: + /* Last message of VF config msg sequence */ + nic_enable_vf(nic, vf, true); +- goto unlock; ++ break; + case NIC_MBOX_MSG_SHUTDOWN: + /* First msg in VF teardown sequence */ + if (vf >= nic->num_vf_en) +diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c +index f13256af8031..59b62b49ad48 100644 +--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c ++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c +@@ -166,6 +166,17 @@ static int nicvf_check_pf_ready(struct nicvf *nic) + return 1; + } + ++static void nicvf_send_cfg_done(struct nicvf *nic) ++{ ++ union nic_mbx mbx = {}; ++ ++ mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE; ++ if (nicvf_send_msg_to_pf(nic, &mbx)) { ++ netdev_err(nic->netdev, ++ "PF didn't respond to CFG DONE msg\n"); ++ } ++} ++ + static void nicvf_read_bgx_stats(struct nicvf *nic, struct bgx_stats_msg *bgx) + { + if (bgx->rx) +@@ -1329,7 +1340,6 @@ int nicvf_open(struct net_device *netdev) + struct nicvf *nic = netdev_priv(netdev); + struct queue_set *qs = nic->qs; + struct nicvf_cq_poll *cq_poll = NULL; +- union nic_mbx mbx = {}; + + netif_carrier_off(netdev); 
+ +@@ -1419,8 +1429,7 @@ int nicvf_open(struct net_device *netdev) + nicvf_enable_intr(nic, NICVF_INTR_RBDR, qidx); + + /* Send VF config done msg to PF */ +- mbx.msg.msg = NIC_MBOX_MSG_CFG_DONE; +- nicvf_write_to_mbx(nic, &mbx); ++ nicvf_send_cfg_done(nic); + + return 0; + cleanup: +diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c +index 51d42d7f6074..7e82dfbb4340 100644 +--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c ++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c +@@ -3074,6 +3074,7 @@ int hns_dsaf_roce_reset(struct fwnode_handle *dsaf_fwnode, bool dereset) + dsaf_dev = dev_get_drvdata(&pdev->dev); + if (!dsaf_dev) { + dev_err(&pdev->dev, "dsaf_dev is NULL\n"); ++ put_device(&pdev->dev); + return -ENODEV; + } + +@@ -3081,6 +3082,7 @@ int hns_dsaf_roce_reset(struct fwnode_handle *dsaf_fwnode, bool dereset) + if (AE_IS_VER1(dsaf_dev->dsaf_ver)) { + dev_err(dsaf_dev->dev, "%s v1 chip doesn't support RoCE!\n", + dsaf_dev->ae_dev.name); ++ put_device(&pdev->dev); + return -ENODEV; + } + +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +index 42183a8b649c..01c120d656c5 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +@@ -3827,8 +3827,11 @@ static void ixgbe_setup_mrqc(struct ixgbe_adapter *adapter) + else + mrqc = IXGBE_MRQC_VMDQRSS64EN; + +- /* Enable L3/L4 for Tx Switched packets */ +- mrqc |= IXGBE_MRQC_L3L4TXSWEN; ++ /* Enable L3/L4 for Tx Switched packets only for X550, ++ * older devices do not support this feature ++ */ ++ if (hw->mac.type >= ixgbe_mac_X550) ++ mrqc |= IXGBE_MRQC_L3L4TXSWEN; + } else { + if (tcs > 4) + mrqc = IXGBE_MRQC_RTRSS8TCEN; +diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c +index 81c1fac00d33..2434409f84b2 100644 +--- a/drivers/net/ethernet/marvell/mv643xx_eth.c ++++ 
b/drivers/net/ethernet/marvell/mv643xx_eth.c +@@ -2886,7 +2886,7 @@ static int mv643xx_eth_shared_probe(struct platform_device *pdev) + + ret = mv643xx_eth_shared_of_probe(pdev); + if (ret) +- return ret; ++ goto err_put_clk; + pd = dev_get_platdata(&pdev->dev); + + msp->tx_csum_limit = (pd != NULL && pd->tx_csum_limit) ? +@@ -2894,6 +2894,11 @@ static int mv643xx_eth_shared_probe(struct platform_device *pdev) + infer_hw_params(msp); + + return 0; ++ ++err_put_clk: ++ if (!IS_ERR(msp->clk)) ++ clk_disable_unprepare(msp->clk); ++ return ret; + } + + static int mv643xx_eth_shared_remove(struct platform_device *pdev) +diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c +index 074a5b79d691..f76cbefeb3c7 100644 +--- a/drivers/net/ethernet/marvell/mvneta.c ++++ b/drivers/net/ethernet/marvell/mvneta.c +@@ -2102,7 +2102,7 @@ err_drop_frame: + if (unlikely(!skb)) + goto err_drop_frame_ret_pool; + +- dma_sync_single_range_for_cpu(dev->dev.parent, ++ dma_sync_single_range_for_cpu(&pp->bm_priv->pdev->dev, + rx_desc->buf_phys_addr, + MVNETA_MH_SIZE + NET_SKB_PAD, + rx_bytes, +diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c +index 239dfbe8a0a1..c1ffec85817a 100644 +--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c ++++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c +@@ -756,15 +756,10 @@ wrp_alu64_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, + + static int + wrp_alu32_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta, +- enum alu_op alu_op, bool skip) ++ enum alu_op alu_op) + { + const struct bpf_insn *insn = &meta->insn; + +- if (skip) { +- meta->skip = true; +- return 0; +- } +- + wrp_alu_imm(nfp_prog, insn->dst_reg * 2, alu_op, insn->imm); + wrp_immed(nfp_prog, reg_both(insn->dst_reg * 2 + 1), 0); + +@@ -1017,7 +1012,7 @@ static int xor_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + + static int xor_imm(struct nfp_prog *nfp_prog, struct 
nfp_insn_meta *meta) + { +- return wrp_alu32_imm(nfp_prog, meta, ALU_OP_XOR, !~meta->insn.imm); ++ return wrp_alu32_imm(nfp_prog, meta, ALU_OP_XOR); + } + + static int and_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +@@ -1027,7 +1022,7 @@ static int and_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + + static int and_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + { +- return wrp_alu32_imm(nfp_prog, meta, ALU_OP_AND, !~meta->insn.imm); ++ return wrp_alu32_imm(nfp_prog, meta, ALU_OP_AND); + } + + static int or_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +@@ -1037,7 +1032,7 @@ static int or_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + + static int or_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + { +- return wrp_alu32_imm(nfp_prog, meta, ALU_OP_OR, !meta->insn.imm); ++ return wrp_alu32_imm(nfp_prog, meta, ALU_OP_OR); + } + + static int add_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +@@ -1047,7 +1042,7 @@ static int add_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + + static int add_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + { +- return wrp_alu32_imm(nfp_prog, meta, ALU_OP_ADD, !meta->insn.imm); ++ return wrp_alu32_imm(nfp_prog, meta, ALU_OP_ADD); + } + + static int sub_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +@@ -1057,7 +1052,7 @@ static int sub_reg(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + + static int sub_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) + { +- return wrp_alu32_imm(nfp_prog, meta, ALU_OP_SUB, !meta->insn.imm); ++ return wrp_alu32_imm(nfp_prog, meta, ALU_OP_SUB); + } + + static int shl_imm(struct nfp_prog *nfp_prog, struct nfp_insn_meta *meta) +diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c +index eb666877d1aa..bb09f5a9846f 100644 +--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c ++++ 
b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c +@@ -1651,6 +1651,15 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn, + + eth_hlen = ETH_HLEN + (vlan_valid ? sizeof(u32) : 0); + ++ if (!ether_addr_equal(ethh->h_dest, ++ p_hwfn->p_rdma_info->iwarp.mac_addr)) { ++ DP_VERBOSE(p_hwfn, ++ QED_MSG_RDMA, ++ "Got unexpected mac %pM instead of %pM\n", ++ ethh->h_dest, p_hwfn->p_rdma_info->iwarp.mac_addr); ++ return -EINVAL; ++ } ++ + ether_addr_copy(remote_mac_addr, ethh->h_source); + ether_addr_copy(local_mac_addr, ethh->h_dest); + +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index 25204d2c9e89..65e47cc52d14 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -1193,8 +1193,8 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ + {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */ + {QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */ +- {QMI_FIXED_INTF(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC7304/MC7354 */ +- {QMI_FIXED_INTF(0x1199, 0x68c0, 10)}, /* Sierra Wireless MC7304/MC7354 */ ++ {QMI_QUIRK_SET_DTR(0x1199, 0x68c0, 8)}, /* Sierra Wireless MC7304/MC7354, WP76xx */ ++ {QMI_QUIRK_SET_DTR(0x1199, 0x68c0, 10)},/* Sierra Wireless MC7304/MC7354 */ + {QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */ + {QMI_FIXED_INTF(0x1199, 0x901f, 8)}, /* Sierra Wireless EM7355 */ + {QMI_FIXED_INTF(0x1199, 0x9041, 8)}, /* Sierra Wireless MC7305/MC7355 */ +diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c +index 8f57ca969c9f..27224dc26413 100644 +--- a/drivers/net/wireless/mac80211_hwsim.c ++++ b/drivers/net/wireless/mac80211_hwsim.c +@@ -3241,7 +3241,7 @@ static int hwsim_get_radio_nl(struct sk_buff *msg, struct genl_info *info) + goto out_err; + } + +- genlmsg_reply(skb, info); ++ res = genlmsg_reply(skb, info); + break; + } + +diff --git 
a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c +index e9104eca327b..cae95362efd5 100644 +--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c ++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c +@@ -433,8 +433,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp, + skb_tail_pointer(skb), + MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn, cardp); + +- cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET; +- + lbtf_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", + cardp->rx_urb); + ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC); +diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c +index de66c02f6140..184149a49b02 100644 +--- a/drivers/nvdimm/label.c ++++ b/drivers/nvdimm/label.c +@@ -616,7 +616,7 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class, + + static int __pmem_label_update(struct nd_region *nd_region, + struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm, +- int pos) ++ int pos, unsigned long flags) + { + struct nd_namespace_common *ndns = &nspm->nsio.common; + struct nd_interleave_set *nd_set = nd_region->nd_set; +@@ -657,7 +657,7 @@ static int __pmem_label_update(struct nd_region *nd_region, + memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN); + if (nspm->alt_name) + memcpy(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN); +- nd_label->flags = __cpu_to_le32(NSLABEL_FLAG_UPDATING); ++ nd_label->flags = __cpu_to_le32(flags); + nd_label->nlabel = __cpu_to_le16(nd_region->ndr_mappings); + nd_label->position = __cpu_to_le16(pos); + nd_label->isetcookie = __cpu_to_le64(cookie); +@@ -1111,13 +1111,13 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid) + int nd_pmem_namespace_label_update(struct nd_region *nd_region, + struct nd_namespace_pmem *nspm, resource_size_t size) + { +- int i; ++ int i, rc; + + for (i = 0; i < nd_region->ndr_mappings; i++) { + struct nd_mapping *nd_mapping = &nd_region->mapping[i]; + struct 
nvdimm_drvdata *ndd = to_ndd(nd_mapping); + struct resource *res; +- int rc, count = 0; ++ int count = 0; + + if (size == 0) { + rc = del_labels(nd_mapping, nspm->uuid); +@@ -1135,7 +1135,20 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region, + if (rc < 0) + return rc; + +- rc = __pmem_label_update(nd_region, nd_mapping, nspm, i); ++ rc = __pmem_label_update(nd_region, nd_mapping, nspm, i, ++ NSLABEL_FLAG_UPDATING); ++ if (rc) ++ return rc; ++ } ++ ++ if (size == 0) ++ return 0; ++ ++ /* Clear the UPDATING flag per UEFI 2.7 expectations */ ++ for (i = 0; i < nd_region->ndr_mappings; i++) { ++ struct nd_mapping *nd_mapping = &nd_region->mapping[i]; ++ ++ rc = __pmem_label_update(nd_region, nd_mapping, nspm, i, 0); + if (rc) + return rc; + } +diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c +index 228bafa4d322..50b01d3eadd9 100644 +--- a/drivers/nvdimm/namespace_devs.c ++++ b/drivers/nvdimm/namespace_devs.c +@@ -138,6 +138,7 @@ bool nd_is_uuid_unique(struct device *dev, u8 *uuid) + bool pmem_should_map_pages(struct device *dev) + { + struct nd_region *nd_region = to_nd_region(dev->parent); ++ struct nd_namespace_common *ndns = to_ndns(dev); + struct nd_namespace_io *nsio; + + if (!IS_ENABLED(CONFIG_ZONE_DEVICE)) +@@ -149,6 +150,9 @@ bool pmem_should_map_pages(struct device *dev) + if (is_nd_pfn(dev) || is_nd_btt(dev)) + return false; + ++ if (ndns->force_raw) ++ return false; ++ + nsio = to_nd_namespace_io(dev); + if (region_intersects(nsio->res.start, resource_size(&nsio->res), + IORESOURCE_SYSTEM_RAM, +diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c +index 6d38191ff0da..b9dad88b8ea3 100644 +--- a/drivers/nvdimm/pfn_devs.c ++++ b/drivers/nvdimm/pfn_devs.c +@@ -535,7 +535,7 @@ static unsigned long init_altmap_base(resource_size_t base) + + static unsigned long init_altmap_reserve(resource_size_t base) + { +- unsigned long reserve = PHYS_PFN(SZ_8K); ++ unsigned long reserve = PFN_UP(SZ_8K); + unsigned 
long base_pfn = PHYS_PFN(base); + + reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn); +@@ -618,7 +618,7 @@ static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trun + if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM, + IORES_DESC_NONE) == REGION_MIXED + || !IS_ALIGNED(end, nd_pfn->align) +- || nd_region_conflict(nd_region, start, size + adjust)) ++ || nd_region_conflict(nd_region, start, size)) + *end_trunc = end - phys_pmem_align_down(nd_pfn, end); + } + +diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c +index 380916bff9e0..dee5b9e35ffd 100644 +--- a/drivers/parport/parport_pc.c ++++ b/drivers/parport/parport_pc.c +@@ -1377,7 +1377,7 @@ static struct superio_struct *find_superio(struct parport *p) + { + int i; + for (i = 0; i < NR_SUPERIOS; i++) +- if (superios[i].io != p->base) ++ if (superios[i].io == p->base) + return &superios[i]; + return NULL; + } +diff --git a/drivers/pinctrl/meson/pinctrl-meson8b.c b/drivers/pinctrl/meson/pinctrl-meson8b.c +index a6fff215e60f..aafd39eba64f 100644 +--- a/drivers/pinctrl/meson/pinctrl-meson8b.c ++++ b/drivers/pinctrl/meson/pinctrl-meson8b.c +@@ -668,7 +668,7 @@ static const char * const sd_a_groups[] = { + + static const char * const sdxc_a_groups[] = { + "sdxc_d0_0_a", "sdxc_d13_0_a", "sdxc_d47_a", "sdxc_clk_a", +- "sdxc_cmd_a", "sdxc_d0_1_a", "sdxc_d0_13_1_a" ++ "sdxc_cmd_a", "sdxc_d0_1_a", "sdxc_d13_1_a" + }; + + static const char * const pcm_a_groups[] = { +diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c +index 11a07633de6c..aa469ccc3b14 100644 +--- a/drivers/power/supply/cpcap-charger.c ++++ b/drivers/power/supply/cpcap-charger.c +@@ -458,6 +458,7 @@ static void cpcap_usb_detect(struct work_struct *work) + goto out_err; + } + ++ power_supply_changed(ddata->usb); + return; + + out_err: +diff --git a/drivers/regulator/max77620-regulator.c b/drivers/regulator/max77620-regulator.c +index b94e3a721721..cd93cf53e23c 100644 
+--- a/drivers/regulator/max77620-regulator.c ++++ b/drivers/regulator/max77620-regulator.c +@@ -1,7 +1,7 @@ + /* + * Maxim MAX77620 Regulator driver + * +- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. ++ * Copyright (c) 2016-2018, NVIDIA CORPORATION. All rights reserved. + * + * Author: Mallikarjun Kasoju + * Laxman Dewangan +@@ -803,6 +803,14 @@ static int max77620_regulator_probe(struct platform_device *pdev) + rdesc = &rinfo[id].desc; + pmic->rinfo[id] = &max77620_regs_info[id]; + pmic->enable_power_mode[id] = MAX77620_POWER_MODE_NORMAL; ++ pmic->reg_pdata[id].active_fps_src = -1; ++ pmic->reg_pdata[id].active_fps_pd_slot = -1; ++ pmic->reg_pdata[id].active_fps_pu_slot = -1; ++ pmic->reg_pdata[id].suspend_fps_src = -1; ++ pmic->reg_pdata[id].suspend_fps_pd_slot = -1; ++ pmic->reg_pdata[id].suspend_fps_pu_slot = -1; ++ pmic->reg_pdata[id].power_ok = -1; ++ pmic->reg_pdata[id].ramp_rate_setting = -1; + + ret = max77620_read_slew_rate(pmic, id); + if (ret < 0) +diff --git a/drivers/regulator/s2mpa01.c b/drivers/regulator/s2mpa01.c +index 48f0ca90743c..076735a3c85a 100644 +--- a/drivers/regulator/s2mpa01.c ++++ b/drivers/regulator/s2mpa01.c +@@ -304,13 +304,13 @@ static const struct regulator_desc regulators[] = { + regulator_desc_ldo(2, STEP_50_MV), + regulator_desc_ldo(3, STEP_50_MV), + regulator_desc_ldo(4, STEP_50_MV), +- regulator_desc_ldo(5, STEP_50_MV), ++ regulator_desc_ldo(5, STEP_25_MV), + regulator_desc_ldo(6, STEP_25_MV), + regulator_desc_ldo(7, STEP_50_MV), + regulator_desc_ldo(8, STEP_50_MV), + regulator_desc_ldo(9, STEP_50_MV), + regulator_desc_ldo(10, STEP_50_MV), +- regulator_desc_ldo(11, STEP_25_MV), ++ regulator_desc_ldo(11, STEP_50_MV), + regulator_desc_ldo(12, STEP_50_MV), + regulator_desc_ldo(13, STEP_50_MV), + regulator_desc_ldo(14, STEP_50_MV), +@@ -321,11 +321,11 @@ static const struct regulator_desc regulators[] = { + regulator_desc_ldo(19, STEP_50_MV), + regulator_desc_ldo(20, STEP_50_MV), + regulator_desc_ldo(21, 
STEP_50_MV), +- regulator_desc_ldo(22, STEP_25_MV), +- regulator_desc_ldo(23, STEP_25_MV), ++ regulator_desc_ldo(22, STEP_50_MV), ++ regulator_desc_ldo(23, STEP_50_MV), + regulator_desc_ldo(24, STEP_50_MV), + regulator_desc_ldo(25, STEP_50_MV), +- regulator_desc_ldo(26, STEP_50_MV), ++ regulator_desc_ldo(26, STEP_25_MV), + regulator_desc_buck1_4(1), + regulator_desc_buck1_4(2), + regulator_desc_buck1_4(3), +diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c +index 7726b874e539..17a816656b92 100644 +--- a/drivers/regulator/s2mps11.c ++++ b/drivers/regulator/s2mps11.c +@@ -376,7 +376,7 @@ static const struct regulator_desc s2mps11_regulators[] = { + regulator_desc_s2mps11_ldo(32, STEP_50_MV), + regulator_desc_s2mps11_ldo(33, STEP_50_MV), + regulator_desc_s2mps11_ldo(34, STEP_50_MV), +- regulator_desc_s2mps11_ldo(35, STEP_50_MV), ++ regulator_desc_s2mps11_ldo(35, STEP_25_MV), + regulator_desc_s2mps11_ldo(36, STEP_50_MV), + regulator_desc_s2mps11_ldo(37, STEP_50_MV), + regulator_desc_s2mps11_ldo(38, STEP_50_MV), +@@ -386,8 +386,8 @@ static const struct regulator_desc s2mps11_regulators[] = { + regulator_desc_s2mps11_buck1_4(4), + regulator_desc_s2mps11_buck5, + regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV), +- regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_6_25_MV), +- regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_6_25_MV), ++ regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV), ++ regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV), + regulator_desc_s2mps11_buck9, + regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV), + }; +diff --git a/drivers/s390/block/dasd_eckd.c b/drivers/s390/block/dasd_eckd.c +index 4c7c8455da96..0a1e7f9b5239 100644 +--- a/drivers/s390/block/dasd_eckd.c ++++ b/drivers/s390/block/dasd_eckd.c +@@ -4463,6 +4463,14 @@ static int dasd_symm_io(struct dasd_device *device, void __user *argp) + usrparm.psf_data &= 0x7fffffffULL; + usrparm.rssd_result &= 0x7fffffffULL; + } ++ 
/* at least 2 bytes are accessed and should be allocated */ ++ if (usrparm.psf_data_len < 2) { ++ DBF_DEV_EVENT(DBF_WARNING, device, ++ "Symmetrix ioctl invalid data length %d", ++ usrparm.psf_data_len); ++ rc = -EINVAL; ++ goto out; ++ } + /* alloc I/O data area */ + psf_data = kzalloc(usrparm.psf_data_len, GFP_KERNEL | GFP_DMA); + rssd_result = kzalloc(usrparm.rssd_result_len, GFP_KERNEL | GFP_DMA); +diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c +index 0847d05e138b..f9cf676a0469 100644 +--- a/drivers/s390/virtio/virtio_ccw.c ++++ b/drivers/s390/virtio/virtio_ccw.c +@@ -275,6 +275,8 @@ static void virtio_ccw_drop_indicators(struct virtio_ccw_device *vcdev) + { + struct virtio_ccw_vq_info *info; + ++ if (!vcdev->airq_info) ++ return; + list_for_each_entry(info, &vcdev->virtqueues, node) + drop_airq_indicator(info->vq, vcdev->airq_info); + } +@@ -416,7 +418,7 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev, + ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF); + if (ret) + return ret; +- return vcdev->config_block->num; ++ return vcdev->config_block->num ?: -ENOENT; + } + + static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw) +diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c +index 4917649cacd5..053a31c5485f 100644 +--- a/drivers/scsi/aacraid/linit.c ++++ b/drivers/scsi/aacraid/linit.c +@@ -413,13 +413,16 @@ static int aac_slave_configure(struct scsi_device *sdev) + if (chn < AAC_MAX_BUSES && tid < AAC_MAX_TARGETS && aac->sa_firmware) { + devtype = aac->hba_map[chn][tid].devtype; + +- if (devtype == AAC_DEVTYPE_NATIVE_RAW) ++ if (devtype == AAC_DEVTYPE_NATIVE_RAW) { + depth = aac->hba_map[chn][tid].qd_limit; +- else if (devtype == AAC_DEVTYPE_ARC_RAW) ++ set_timeout = 1; ++ goto common_config; ++ } ++ if (devtype == AAC_DEVTYPE_ARC_RAW) { + set_qd_dev_type = true; +- +- set_timeout = 1; +- goto common_config; ++ set_timeout = 1; ++ goto common_config; ++ } + 
} + + if (aac->jbod && (sdev->type == TYPE_DISK)) +diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c +index 3ff536b350a1..5ea5d42bac76 100644 +--- a/drivers/scsi/libiscsi.c ++++ b/drivers/scsi/libiscsi.c +@@ -1449,7 +1449,13 @@ static int iscsi_xmit_task(struct iscsi_conn *conn) + if (test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx)) + return -ENODATA; + ++ spin_lock_bh(&conn->session->back_lock); ++ if (conn->task == NULL) { ++ spin_unlock_bh(&conn->session->back_lock); ++ return -ENODATA; ++ } + __iscsi_get_task(task); ++ spin_unlock_bh(&conn->session->back_lock); + spin_unlock_bh(&conn->session->frwd_lock); + rc = conn->session->tt->xmit_task(task); + spin_lock_bh(&conn->session->frwd_lock); +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c +index 048fccc72e03..d0cc8fb40f63 100644 +--- a/drivers/scsi/sd.c ++++ b/drivers/scsi/sd.c +@@ -3077,6 +3077,55 @@ static void sd_read_security(struct scsi_disk *sdkp, unsigned char *buffer) + sdkp->security = 1; + } + ++/* ++ * Determine the device's preferred I/O size for reads and writes ++ * unless the reported value is unreasonably small, large, not a ++ * multiple of the physical block size, or simply garbage. 
++ */ ++static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp, ++ unsigned int dev_max) ++{ ++ struct scsi_device *sdp = sdkp->device; ++ unsigned int opt_xfer_bytes = ++ logical_to_bytes(sdp, sdkp->opt_xfer_blocks); ++ ++ if (sdkp->opt_xfer_blocks > dev_max) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u logical blocks " \ ++ "> dev_max (%u logical blocks)\n", ++ sdkp->opt_xfer_blocks, dev_max); ++ return false; ++ } ++ ++ if (sdkp->opt_xfer_blocks > SD_DEF_XFER_BLOCKS) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u logical blocks " \ ++ "> sd driver limit (%u logical blocks)\n", ++ sdkp->opt_xfer_blocks, SD_DEF_XFER_BLOCKS); ++ return false; ++ } ++ ++ if (opt_xfer_bytes < PAGE_SIZE) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u bytes < " \ ++ "PAGE_SIZE (%u bytes)\n", ++ opt_xfer_bytes, (unsigned int)PAGE_SIZE); ++ return false; ++ } ++ ++ if (opt_xfer_bytes & (sdkp->physical_block_size - 1)) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u bytes not a " \ ++ "multiple of physical block size (%u bytes)\n", ++ opt_xfer_bytes, sdkp->physical_block_size); ++ return false; ++ } ++ ++ sd_first_printk(KERN_INFO, sdkp, "Optimal transfer size %u bytes\n", ++ opt_xfer_bytes); ++ return true; ++} ++ + /** + * sd_revalidate_disk - called the first time a new disk is seen, + * performs disk spin up, read_capacity, etc. +@@ -3146,15 +3195,7 @@ static int sd_revalidate_disk(struct gendisk *disk) + dev_max = min_not_zero(dev_max, sdkp->max_xfer_blocks); + q->limits.max_dev_sectors = logical_to_sectors(sdp, dev_max); + +- /* +- * Determine the device's preferred I/O size for reads and writes +- * unless the reported value is unreasonably small, large, or +- * garbage. 
+- */ +- if (sdkp->opt_xfer_blocks && +- sdkp->opt_xfer_blocks <= dev_max && +- sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS && +- logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) { ++ if (sd_validate_opt_xfer_size(sdkp, dev_max)) { + q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks); + rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks); + } else +diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c +index 54e3a0f6844c..1f4bd7d0154d 100644 +--- a/drivers/scsi/virtio_scsi.c ++++ b/drivers/scsi/virtio_scsi.c +@@ -638,7 +638,6 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc) + return FAILED; + + memset(cmd, 0, sizeof(*cmd)); +- cmd->sc = sc; + cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){ + .type = VIRTIO_SCSI_T_TMF, + .subtype = cpu_to_virtio32(vscsi->vdev, +@@ -697,7 +696,6 @@ static int virtscsi_abort(struct scsi_cmnd *sc) + return FAILED; + + memset(cmd, 0, sizeof(*cmd)); +- cmd->sc = sc; + cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){ + .type = VIRTIO_SCSI_T_TMF, + .subtype = VIRTIO_SCSI_T_TMF_ABORT_TASK, +diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c +index 3a2e46e49405..c0e915d8da5d 100644 +--- a/drivers/spi/spi-pxa2xx.c ++++ b/drivers/spi/spi-pxa2xx.c +@@ -1698,6 +1698,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev) + platform_info->enable_dma = false; + } else { + master->can_dma = pxa2xx_spi_can_dma; ++ master->max_dma_len = MAX_DMA_LEN; + } + } + +diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c +index c24d9b45a27c..d0ea62d151c0 100644 +--- a/drivers/spi/spi-ti-qspi.c ++++ b/drivers/spi/spi-ti-qspi.c +@@ -490,8 +490,8 @@ static void ti_qspi_enable_memory_map(struct spi_device *spi) + ti_qspi_write(qspi, MM_SWITCH, QSPI_SPI_SWITCH_REG); + if (qspi->ctrl_base) { + regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg, +- MEM_CS_EN(spi->chip_select), +- MEM_CS_MASK); ++ MEM_CS_MASK, ++ MEM_CS_EN(spi->chip_select)); + } + qspi->mmap_enabled = true; + } 
+@@ -503,7 +503,7 @@ static void ti_qspi_disable_memory_map(struct spi_device *spi) + ti_qspi_write(qspi, 0, QSPI_SPI_SWITCH_REG); + if (qspi->ctrl_base) + regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg, +- 0, MEM_CS_MASK); ++ MEM_CS_MASK, 0); + qspi->mmap_enabled = false; + } + +diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c +index 111afd34aa3c..22149957afd0 100644 +--- a/drivers/staging/media/imx/imx-ic-prpencvf.c ++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c +@@ -676,12 +676,23 @@ static int prp_start(struct prp_priv *priv) + goto out_free_nfb4eof_irq; + } + ++ /* start upstream */ ++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1); ++ ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0; ++ if (ret) { ++ v4l2_err(&ic_priv->sd, ++ "upstream stream on failed: %d\n", ret); ++ goto out_free_eof_irq; ++ } ++ + /* start the EOF timeout timer */ + mod_timer(&priv->eof_timeout_timer, + jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT)); + + return 0; + ++out_free_eof_irq: ++ devm_free_irq(ic_priv->dev, priv->eof_irq, priv); + out_free_nfb4eof_irq: + devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv); + out_unsetup: +@@ -713,6 +724,12 @@ static void prp_stop(struct prp_priv *priv) + if (ret == 0) + v4l2_warn(&ic_priv->sd, "wait last EOF timeout\n"); + ++ /* stop upstream */ ++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 0); ++ if (ret && ret != -ENOIOCTLCMD) ++ v4l2_warn(&ic_priv->sd, ++ "upstream stream off failed: %d\n", ret); ++ + devm_free_irq(ic_priv->dev, priv->eof_irq, priv); + devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv); + +@@ -1144,15 +1161,6 @@ static int prp_s_stream(struct v4l2_subdev *sd, int enable) + if (ret) + goto out; + +- /* start/stop upstream */ +- ret = v4l2_subdev_call(priv->src_sd, video, s_stream, enable); +- ret = (ret && ret != -ENOIOCTLCMD) ? 
ret : 0; +- if (ret) { +- if (enable) +- prp_stop(priv); +- goto out; +- } +- + update_count: + priv->stream_count += enable ? 1 : -1; + if (priv->stream_count < 0) +diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c +index 83ecb5b2fb9e..69df8b23227a 100644 +--- a/drivers/staging/media/imx/imx-media-csi.c ++++ b/drivers/staging/media/imx/imx-media-csi.c +@@ -538,7 +538,7 @@ out_put_ipu: + return ret; + } + +-static void csi_idmac_stop(struct csi_priv *priv) ++static void csi_idmac_wait_last_eof(struct csi_priv *priv) + { + unsigned long flags; + int ret; +@@ -555,7 +555,10 @@ static void csi_idmac_stop(struct csi_priv *priv) + &priv->last_eof_comp, msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT)); + if (ret == 0) + v4l2_warn(&priv->sd, "wait last EOF timeout\n"); ++} + ++static void csi_idmac_stop(struct csi_priv *priv) ++{ + devm_free_irq(priv->dev, priv->eof_irq, priv); + devm_free_irq(priv->dev, priv->nfb4eof_irq, priv); + +@@ -645,10 +648,16 @@ static int csi_start(struct csi_priv *priv) + usleep_range(delay_usec, delay_usec + 1000); + } + ++ /* start upstream */ ++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1); ++ ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0; ++ if (ret) ++ return ret; ++ + if (priv->dest == IPU_CSI_DEST_IDMAC) { + ret = csi_idmac_start(priv); + if (ret) +- return ret; ++ goto stop_upstream; + } + + ret = csi_setup(priv); +@@ -676,11 +685,26 @@ fim_off: + idmac_stop: + if (priv->dest == IPU_CSI_DEST_IDMAC) + csi_idmac_stop(priv); ++stop_upstream: ++ v4l2_subdev_call(priv->src_sd, video, s_stream, 0); + return ret; + } + + static void csi_stop(struct csi_priv *priv) + { ++ if (priv->dest == IPU_CSI_DEST_IDMAC) ++ csi_idmac_wait_last_eof(priv); ++ ++ /* ++ * Disable the CSI asap, after syncing with the last EOF. ++ * Doing so after the IDMA channel is disabled has shown to ++ * create hard system-wide hangs. 
++ */ ++ ipu_csi_disable(priv->csi); ++ ++ /* stop upstream */ ++ v4l2_subdev_call(priv->src_sd, video, s_stream, 0); ++ + if (priv->dest == IPU_CSI_DEST_IDMAC) { + csi_idmac_stop(priv); + +@@ -688,8 +712,6 @@ static void csi_stop(struct csi_priv *priv) + if (priv->fim) + imx_media_fim_set_stream(priv->fim, NULL, false); + } +- +- ipu_csi_disable(priv->csi); + } + + static const struct csi_skip_desc csi_skip[12] = { +@@ -850,23 +872,13 @@ static int csi_s_stream(struct v4l2_subdev *sd, int enable) + goto update_count; + + if (enable) { +- /* upstream must be started first, before starting CSI */ +- ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1); +- ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0; +- if (ret) +- goto out; +- + dev_dbg(priv->dev, "stream ON\n"); + ret = csi_start(priv); +- if (ret) { +- v4l2_subdev_call(priv->src_sd, video, s_stream, 0); ++ if (ret) + goto out; +- } + } else { + dev_dbg(priv->dev, "stream OFF\n"); +- /* CSI must be stopped first, then stop upstream */ + csi_stop(priv); +- v4l2_subdev_call(priv->src_sd, video, s_stream, 0); + } + + update_count: +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c +index d2cafdae8317..fb7bd422e2e1 100644 +--- a/drivers/target/iscsi/iscsi_target.c ++++ b/drivers/target/iscsi/iscsi_target.c +@@ -4077,9 +4077,9 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn) + struct se_cmd *se_cmd = &cmd->se_cmd; + + if (se_cmd->se_tfo != NULL) { +- spin_lock(&se_cmd->t_state_lock); ++ spin_lock_irq(&se_cmd->t_state_lock); + se_cmd->transport_state |= CMD_T_FABRIC_STOP; +- spin_unlock(&se_cmd->t_state_lock); ++ spin_unlock_irq(&se_cmd->t_state_lock); + } + } + spin_unlock_bh(&conn->cmd_lock); +diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c +index 3613a6aabfb3..ec510e342e06 100644 +--- a/drivers/tty/serial/8250/8250_of.c ++++ b/drivers/tty/serial/8250/8250_of.c +@@ -105,6 +105,10 @@ static int 
of_platform_serial_setup(struct platform_device *ofdev, + if (of_property_read_u32(np, "reg-offset", &prop) == 0) + port->mapbase += prop; + ++ /* Compatibility with the deprecated pxa driver and 8250_pxa drivers. */ ++ if (of_device_is_compatible(np, "mrvl,mmp-uart")) ++ port->regshift = 2; ++ + /* Check for registers offset within the devices address range */ + if (of_property_read_u32(np, "reg-shift", &prop) == 0) + port->regshift = prop; +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index 790375b5eeb2..b31fed7f1679 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -2033,6 +2033,111 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = { + .setup = pci_default_setup, + .exit = pci_plx9050_exit, + }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4S, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4SM, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = 
PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, + /* + * SBS Technologies, Inc., PMC-OCTALPRO 232 + */ +@@ -4580,10 +4685,10 @@ static const struct pci_device_id serial_pci_tbl[] = { + */ + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S, + PCI_ANY_ID, PCI_ANY_ID, 
0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4592,10 +4697,10 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_2DB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4604,10 +4709,10 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SMDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4616,13 +4721,13 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_1, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7951 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4631,16 +4736,16 @@ 
static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2S, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4649,13 +4754,13 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_8, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7958 }, +@@ -4664,19 +4769,19 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7958 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_8, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7958 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ 
pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_8SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7958 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + /* + * Topic TP560 Data/Fax/Voice 56k modem (reported by Evan Clarke) + */ +diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c +index 217686cb4cd3..f438a2158006 100644 +--- a/drivers/tty/serial/xilinx_uartps.c ++++ b/drivers/tty/serial/xilinx_uartps.c +@@ -366,7 +366,13 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id) + cdns_uart_handle_tx(dev_id); + isrstatus &= ~CDNS_UART_IXR_TXEMPTY; + } +- if (isrstatus & CDNS_UART_IXR_RXMASK) ++ ++ /* ++ * Skip RX processing if RX is disabled as RXEMPTY will never be set ++ * as read bytes will not be removed from the FIFO. ++ */ ++ if (isrstatus & CDNS_UART_IXR_RXMASK && ++ !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS)) + cdns_uart_handle_rx(dev_id, isrstatus); + + spin_unlock(&port->lock); +diff --git a/drivers/usb/chipidea/ci_hdrc_tegra.c b/drivers/usb/chipidea/ci_hdrc_tegra.c +index bfcee2702d50..5cf62fa33762 100644 +--- a/drivers/usb/chipidea/ci_hdrc_tegra.c ++++ b/drivers/usb/chipidea/ci_hdrc_tegra.c +@@ -133,6 +133,7 @@ static int tegra_udc_remove(struct platform_device *pdev) + { + struct tegra_udc *udc = platform_get_drvdata(pdev); + ++ ci_hdrc_remove_device(udc->dev); + usb_phy_set_suspend(udc->phy, 1); + clk_disable_unprepare(udc->clk); + +diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h +index 5a0db6dec8d1..aaee1e6584e6 100644 +--- a/fs/9p/v9fs_vfs.h ++++ b/fs/9p/v9fs_vfs.h +@@ -40,6 +40,9 @@ + */ + #define P9_LOCK_TIMEOUT (30*HZ) + ++/* flags for v9fs_stat2inode() & v9fs_stat2inode_dotl() */ ++#define V9FS_STAT2INODE_KEEP_ISIZE 1 ++ + extern struct file_system_type v9fs_fs_type; + extern const struct address_space_operations v9fs_addr_operations; + extern 
const struct file_operations v9fs_file_operations; +@@ -61,8 +64,10 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses, + struct inode *inode, umode_t mode, dev_t); + void v9fs_evict_inode(struct inode *inode); + ino_t v9fs_qid2ino(struct p9_qid *qid); +-void v9fs_stat2inode(struct p9_wstat *, struct inode *, struct super_block *); +-void v9fs_stat2inode_dotl(struct p9_stat_dotl *, struct inode *); ++void v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode, ++ struct super_block *sb, unsigned int flags); ++void v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode, ++ unsigned int flags); + int v9fs_dir_release(struct inode *inode, struct file *filp); + int v9fs_file_open(struct inode *inode, struct file *file); + void v9fs_inode2stat(struct inode *inode, struct p9_wstat *stat); +@@ -83,4 +88,18 @@ static inline void v9fs_invalidate_inode_attr(struct inode *inode) + } + + int v9fs_open_to_dotl_flags(int flags); ++ ++static inline void v9fs_i_size_write(struct inode *inode, loff_t i_size) ++{ ++ /* ++ * 32-bit need the lock, concurrent updates could break the ++ * sequences and make i_size_read() loop forever. ++ * 64-bit updates are atomic and can skip the locking. 
++ */ ++ if (sizeof(i_size) > sizeof(long)) ++ spin_lock(&inode->i_lock); ++ i_size_write(inode, i_size); ++ if (sizeof(i_size) > sizeof(long)) ++ spin_unlock(&inode->i_lock); ++} + #endif +diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c +index 3a2f37ad1f89..af8cac975a74 100644 +--- a/fs/9p/vfs_file.c ++++ b/fs/9p/vfs_file.c +@@ -442,7 +442,11 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) + i_size = i_size_read(inode); + if (iocb->ki_pos > i_size) { + inode_add_bytes(inode, iocb->ki_pos - i_size); +- i_size_write(inode, iocb->ki_pos); ++ /* ++ * Need to serialize against i_size_write() in ++ * v9fs_stat2inode() ++ */ ++ v9fs_i_size_write(inode, iocb->ki_pos); + } + return retval; + } +diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c +index bdabb2765d1b..e88cb25176dc 100644 +--- a/fs/9p/vfs_inode.c ++++ b/fs/9p/vfs_inode.c +@@ -538,7 +538,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb, + if (retval) + goto error; + +- v9fs_stat2inode(st, inode, sb); ++ v9fs_stat2inode(st, inode, sb, 0); + v9fs_cache_inode_get_cookie(inode); + unlock_new_inode(inode); + return inode; +@@ -1080,7 +1080,7 @@ v9fs_vfs_getattr(const struct path *path, struct kstat *stat, + if (IS_ERR(st)) + return PTR_ERR(st); + +- v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb); ++ v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb, 0); + generic_fillattr(d_inode(dentry), stat); + + p9stat_free(st); +@@ -1158,12 +1158,13 @@ static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr) + * @stat: Plan 9 metadata (mistat) structure + * @inode: inode to populate + * @sb: superblock of filesystem ++ * @flags: control flags (e.g. 
V9FS_STAT2INODE_KEEP_ISIZE) + * + */ + + void + v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode, +- struct super_block *sb) ++ struct super_block *sb, unsigned int flags) + { + umode_t mode; + char ext[32]; +@@ -1204,10 +1205,11 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode, + mode = p9mode2perm(v9ses, stat); + mode |= inode->i_mode & ~S_IALLUGO; + inode->i_mode = mode; +- i_size_write(inode, stat->length); + ++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE)) ++ v9fs_i_size_write(inode, stat->length); + /* not real number of blocks, but 512 byte ones ... */ +- inode->i_blocks = (i_size_read(inode) + 512 - 1) >> 9; ++ inode->i_blocks = (stat->length + 512 - 1) >> 9; + v9inode->cache_validity &= ~V9FS_INO_INVALID_ATTR; + } + +@@ -1404,9 +1406,9 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode) + { + int umode; + dev_t rdev; +- loff_t i_size; + struct p9_wstat *st; + struct v9fs_session_info *v9ses; ++ unsigned int flags; + + v9ses = v9fs_inode2v9ses(inode); + st = p9_client_stat(fid); +@@ -1419,16 +1421,13 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode) + if ((inode->i_mode & S_IFMT) != (umode & S_IFMT)) + goto out; + +- spin_lock(&inode->i_lock); + /* + * We don't want to refresh inode->i_size, + * because we may have cached data + */ +- i_size = inode->i_size; +- v9fs_stat2inode(st, inode, inode->i_sb); +- if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) +- inode->i_size = i_size; +- spin_unlock(&inode->i_lock); ++ flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ? 
++ V9FS_STAT2INODE_KEEP_ISIZE : 0; ++ v9fs_stat2inode(st, inode, inode->i_sb, flags); + out: + p9stat_free(st); + kfree(st); +diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c +index 7f6ae21a27b3..3446ab1f44e7 100644 +--- a/fs/9p/vfs_inode_dotl.c ++++ b/fs/9p/vfs_inode_dotl.c +@@ -143,7 +143,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb, + if (retval) + goto error; + +- v9fs_stat2inode_dotl(st, inode); ++ v9fs_stat2inode_dotl(st, inode, 0); + v9fs_cache_inode_get_cookie(inode); + retval = v9fs_get_acl(inode, fid); + if (retval) +@@ -497,7 +497,7 @@ v9fs_vfs_getattr_dotl(const struct path *path, struct kstat *stat, + if (IS_ERR(st)) + return PTR_ERR(st); + +- v9fs_stat2inode_dotl(st, d_inode(dentry)); ++ v9fs_stat2inode_dotl(st, d_inode(dentry), 0); + generic_fillattr(d_inode(dentry), stat); + /* Change block size to what the server returned */ + stat->blksize = st->st_blksize; +@@ -608,11 +608,13 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr) + * v9fs_stat2inode_dotl - populate an inode structure with stat info + * @stat: stat structure + * @inode: inode to populate ++ * @flags: ctrl flags (e.g. 
V9FS_STAT2INODE_KEEP_ISIZE) + * + */ + + void +-v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode) ++v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode, ++ unsigned int flags) + { + umode_t mode; + struct v9fs_inode *v9inode = V9FS_I(inode); +@@ -632,7 +634,8 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode) + mode |= inode->i_mode & ~S_IALLUGO; + inode->i_mode = mode; + +- i_size_write(inode, stat->st_size); ++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE)) ++ v9fs_i_size_write(inode, stat->st_size); + inode->i_blocks = stat->st_blocks; + } else { + if (stat->st_result_mask & P9_STATS_ATIME) { +@@ -662,8 +665,9 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode) + } + if (stat->st_result_mask & P9_STATS_RDEV) + inode->i_rdev = new_decode_dev(stat->st_rdev); +- if (stat->st_result_mask & P9_STATS_SIZE) +- i_size_write(inode, stat->st_size); ++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) && ++ stat->st_result_mask & P9_STATS_SIZE) ++ v9fs_i_size_write(inode, stat->st_size); + if (stat->st_result_mask & P9_STATS_BLOCKS) + inode->i_blocks = stat->st_blocks; + } +@@ -929,9 +933,9 @@ v9fs_vfs_get_link_dotl(struct dentry *dentry, + + int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode) + { +- loff_t i_size; + struct p9_stat_dotl *st; + struct v9fs_session_info *v9ses; ++ unsigned int flags; + + v9ses = v9fs_inode2v9ses(inode); + st = p9_client_getattr_dotl(fid, P9_STATS_ALL); +@@ -943,16 +947,13 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode) + if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT)) + goto out; + +- spin_lock(&inode->i_lock); + /* + * We don't want to refresh inode->i_size, + * because we may have cached data + */ +- i_size = inode->i_size; +- v9fs_stat2inode_dotl(st, inode); +- if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) +- inode->i_size = i_size; +- spin_unlock(&inode->i_lock); ++ flags = (v9ses->cache == CACHE_LOOSE || 
v9ses->cache == CACHE_FSCACHE) ? ++ V9FS_STAT2INODE_KEEP_ISIZE : 0; ++ v9fs_stat2inode_dotl(st, inode, flags); + out: + kfree(st); + return 0; +diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c +index 8b75463cb211..d4400779f6d9 100644 +--- a/fs/9p/vfs_super.c ++++ b/fs/9p/vfs_super.c +@@ -172,7 +172,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags, + goto release_sb; + } + d_inode(root)->i_ino = v9fs_qid2ino(&st->qid); +- v9fs_stat2inode_dotl(st, d_inode(root)); ++ v9fs_stat2inode_dotl(st, d_inode(root), 0); + kfree(st); + } else { + struct p9_wstat *st = NULL; +@@ -183,7 +183,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags, + } + + d_inode(root)->i_ino = v9fs_qid2ino(&st->qid); +- v9fs_stat2inode(st, d_inode(root), sb); ++ v9fs_stat2inode(st, d_inode(root), sb, 0); + + p9stat_free(st); + kfree(st); +diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c +index 1ba49ebe67da..1c42d9f1d6c8 100644 +--- a/fs/btrfs/acl.c ++++ b/fs/btrfs/acl.c +@@ -22,6 +22,7 @@ + #include + #include + #include ++#include + #include + + #include "ctree.h" +@@ -89,8 +90,16 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans, + } + + if (acl) { ++ unsigned int nofs_flag; ++ + size = posix_acl_xattr_size(acl->a_count); ++ /* ++ * We're holding a transaction handle, so use a NOFS memory ++ * allocation context to avoid deadlock if reclaim happens. 
++ */ ++ nofs_flag = memalloc_nofs_save(); + value = kmalloc(size, GFP_KERNEL); ++ memalloc_nofs_restore(nofs_flag); + if (!value) { + ret = -ENOMEM; + goto out; +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 5b62e06567a3..4cc534584665 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -3014,11 +3014,11 @@ static int __do_readpage(struct extent_io_tree *tree, + */ + if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) && + prev_em_start && *prev_em_start != (u64)-1 && +- *prev_em_start != em->orig_start) ++ *prev_em_start != em->start) + force_bio_submit = true; + + if (prev_em_start) +- *prev_em_start = em->orig_start; ++ *prev_em_start = em->start; + + free_extent_map(em); + em = NULL; +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 9663b6aa2a56..38ed8e259e00 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -6420,10 +6420,10 @@ static int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info, + } + + if ((type & BTRFS_BLOCK_GROUP_RAID10 && sub_stripes != 2) || +- (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < 1) || ++ (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes != 2) || + (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < 2) || + (type & BTRFS_BLOCK_GROUP_RAID6 && num_stripes < 3) || +- (type & BTRFS_BLOCK_GROUP_DUP && num_stripes > 2) || ++ (type & BTRFS_BLOCK_GROUP_DUP && num_stripes != 2) || + ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 && + num_stripes != 1)) { + btrfs_err(fs_info, +diff --git a/fs/cifs/file.c b/fs/cifs/file.c +index 852d7d1dcbbd..72d6f4db9bdc 100644 +--- a/fs/cifs/file.c ++++ b/fs/cifs/file.c +@@ -2889,14 +2889,16 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) + * these pages but not on the region from pos to ppos+len-1. 
+ */ + written = cifs_user_writev(iocb, from); +- if (written > 0 && CIFS_CACHE_READ(cinode)) { ++ if (CIFS_CACHE_READ(cinode)) { + /* +- * Windows 7 server can delay breaking level2 oplock if a write +- * request comes - break it on the client to prevent reading +- * an old data. ++ * We have read level caching and we have just sent a write ++ * request to the server thus making data in the cache stale. ++ * Zap the cache and set oplock/lease level to NONE to avoid ++ * reading stale data from the cache. All subsequent read ++ * operations will read new data from the server. + */ + cifs_zap_mapping(inode); +- cifs_dbg(FYI, "Set no oplock for inode=%p after a write operation\n", ++ cifs_dbg(FYI, "Set Oplock/Lease to NONE for inode=%p after write\n", + inode); + cinode->oplock = 0; + } +diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c +index efdfdb47a7dd..a97a0e0b1a74 100644 +--- a/fs/cifs/smb2misc.c ++++ b/fs/cifs/smb2misc.c +@@ -479,7 +479,6 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp, + __u8 lease_state; + struct list_head *tmp; + struct cifsFileInfo *cfile; +- struct TCP_Server_Info *server = tcon->ses->server; + struct cifs_pending_open *open; + struct cifsInodeInfo *cinode; + int ack_req = le32_to_cpu(rsp->Flags & +@@ -499,13 +498,25 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp, + cifs_dbg(FYI, "lease key match, lease break 0x%x\n", + le32_to_cpu(rsp->NewLeaseState)); + +- server->ops->set_oplock_level(cinode, lease_state, 0, NULL); +- + if (ack_req) + cfile->oplock_break_cancelled = false; + else + cfile->oplock_break_cancelled = true; + ++ set_bit(CIFS_INODE_PENDING_OPLOCK_BREAK, &cinode->flags); ++ ++ /* ++ * Set or clear flags depending on the lease state being READ. ++ * HANDLE caching flag should be added when the client starts ++ * to defer closing remote file handles with HANDLE leases. 
++ */ ++ if (lease_state & SMB2_LEASE_READ_CACHING_HE) ++ set_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2, ++ &cinode->flags); ++ else ++ clear_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2, ++ &cinode->flags); ++ + queue_work(cifsoplockd_wq, &cfile->oplock_break); + kfree(lw); + return true; +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index fb1c65f93114..418062c7f040 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -1932,6 +1932,15 @@ smb2_downgrade_oplock(struct TCP_Server_Info *server, + server->ops->set_oplock_level(cinode, 0, 0, NULL); + } + ++static void ++smb21_downgrade_oplock(struct TCP_Server_Info *server, ++ struct cifsInodeInfo *cinode, bool set_level2) ++{ ++ server->ops->set_oplock_level(cinode, ++ set_level2 ? SMB2_LEASE_READ_CACHING_HE : ++ 0, 0, NULL); ++} ++ + static void + smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock, + unsigned int epoch, bool *purge_cache) +@@ -2917,7 +2926,7 @@ struct smb_version_operations smb21_operations = { + .print_stats = smb2_print_stats, + .is_oplock_break = smb2_is_valid_oplock_break, + .handle_cancelled_mid = smb2_handle_cancelled_mid, +- .downgrade_oplock = smb2_downgrade_oplock, ++ .downgrade_oplock = smb21_downgrade_oplock, + .need_neg = smb2_need_neg, + .negotiate = smb2_negotiate, + .negotiate_wsize = smb2_negotiate_wsize, +@@ -3012,7 +3021,7 @@ struct smb_version_operations smb30_operations = { + .dump_share_caps = smb2_dump_share_caps, + .is_oplock_break = smb2_is_valid_oplock_break, + .handle_cancelled_mid = smb2_handle_cancelled_mid, +- .downgrade_oplock = smb2_downgrade_oplock, ++ .downgrade_oplock = smb21_downgrade_oplock, + .need_neg = smb2_need_neg, + .negotiate = smb2_negotiate, + .negotiate_wsize = smb2_negotiate_wsize, +@@ -3117,7 +3126,7 @@ struct smb_version_operations smb311_operations = { + .dump_share_caps = smb2_dump_share_caps, + .is_oplock_break = smb2_is_valid_oplock_break, + .handle_cancelled_mid = smb2_handle_cancelled_mid, +- .downgrade_oplock = 
smb2_downgrade_oplock, ++ .downgrade_oplock = smb21_downgrade_oplock, + .need_neg = smb2_need_neg, + .negotiate = smb2_negotiate, + .negotiate_wsize = smb2_negotiate_wsize, +diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c +index 542364bf923e..32f6f1c683d9 100644 +--- a/fs/devpts/inode.c ++++ b/fs/devpts/inode.c +@@ -439,6 +439,7 @@ devpts_fill_super(struct super_block *s, void *data, int silent) + s->s_blocksize_bits = 10; + s->s_magic = DEVPTS_SUPER_MAGIC; + s->s_op = &devpts_sops; ++ s->s_d_op = &simple_dentry_operations; + s->s_time_gran = 1; + + error = -ENOMEM; +diff --git a/fs/ext2/super.c b/fs/ext2/super.c +index 726e680a3368..13f470636672 100644 +--- a/fs/ext2/super.c ++++ b/fs/ext2/super.c +@@ -754,7 +754,8 @@ static loff_t ext2_max_size(int bits) + { + loff_t res = EXT2_NDIR_BLOCKS; + int meta_blocks; +- loff_t upper_limit; ++ unsigned int upper_limit; ++ unsigned int ppb = 1 << (bits-2); + + /* This is calculated to be the largest file size for a + * dense, file such that the total number of +@@ -768,24 +769,34 @@ static loff_t ext2_max_size(int bits) + /* total blocks in file system block size */ + upper_limit >>= (bits - 9); + ++ /* Compute how many blocks we can address by block tree */ ++ res += 1LL << (bits-2); ++ res += 1LL << (2*(bits-2)); ++ res += 1LL << (3*(bits-2)); ++ /* Does block tree limit file size? */ ++ if (res < upper_limit) ++ goto check_lfs; + ++ res = upper_limit; ++ /* How many metadata blocks are needed for addressing upper_limit? 
*/ ++ upper_limit -= EXT2_NDIR_BLOCKS; + /* indirect blocks */ + meta_blocks = 1; ++ upper_limit -= ppb; + /* double indirect blocks */ +- meta_blocks += 1 + (1LL << (bits-2)); +- /* tripple indirect blocks */ +- meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2))); +- +- upper_limit -= meta_blocks; +- upper_limit <<= bits; +- +- res += 1LL << (bits-2); +- res += 1LL << (2*(bits-2)); +- res += 1LL << (3*(bits-2)); ++ if (upper_limit < ppb * ppb) { ++ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb); ++ res -= meta_blocks; ++ goto check_lfs; ++ } ++ meta_blocks += 1 + ppb; ++ upper_limit -= ppb * ppb; ++ /* tripple indirect blocks for the rest */ ++ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) + ++ DIV_ROUND_UP(upper_limit, ppb*ppb); ++ res -= meta_blocks; ++check_lfs: + res <<= bits; +- if (res > upper_limit) +- res = upper_limit; +- + if (res > MAX_LFS_FILESIZE) + res = MAX_LFS_FILESIZE; + +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h +index 02970a2e86a3..95ef26b39e69 100644 +--- a/fs/ext4/ext4.h ++++ b/fs/ext4/ext4.h +@@ -426,6 +426,9 @@ struct flex_groups { + /* Flags that are appropriate for non-directories/regular files. */ + #define EXT4_OTHER_FLMASK (EXT4_NODUMP_FL | EXT4_NOATIME_FL) + ++/* The only flags that should be swapped */ ++#define EXT4_FL_SHOULD_SWAP (EXT4_HUGE_FILE_FL | EXT4_EXTENTS_FL) ++ + /* Mask out flags that are inappropriate for the given type of inode. 
*/ + static inline __u32 ext4_mask_flags(umode_t mode, __u32 flags) + { +diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c +index b2a47058e04c..7917cc89ab21 100644 +--- a/fs/ext4/ioctl.c ++++ b/fs/ext4/ioctl.c +@@ -61,6 +61,7 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2) + loff_t isize; + struct ext4_inode_info *ei1; + struct ext4_inode_info *ei2; ++ unsigned long tmp; + + ei1 = EXT4_I(inode1); + ei2 = EXT4_I(inode2); +@@ -73,7 +74,10 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2) + swap(inode1->i_mtime, inode2->i_mtime); + + memswap(ei1->i_data, ei2->i_data, sizeof(ei1->i_data)); +- swap(ei1->i_flags, ei2->i_flags); ++ tmp = ei1->i_flags & EXT4_FL_SHOULD_SWAP; ++ ei1->i_flags = (ei2->i_flags & EXT4_FL_SHOULD_SWAP) | ++ (ei1->i_flags & ~EXT4_FL_SHOULD_SWAP); ++ ei2->i_flags = tmp | (ei2->i_flags & ~EXT4_FL_SHOULD_SWAP); + swap(ei1->i_disksize, ei2->i_disksize); + ext4_es_remove_extent(inode1, 0, EXT_MAX_BLOCKS); + ext4_es_remove_extent(inode2, 0, EXT_MAX_BLOCKS); +diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c +index 703b516366fd..6f0acfe31418 100644 +--- a/fs/ext4/resize.c ++++ b/fs/ext4/resize.c +@@ -1930,7 +1930,8 @@ retry: + le16_to_cpu(es->s_reserved_gdt_blocks); + n_group = n_desc_blocks * EXT4_DESC_PER_BLOCK(sb); + n_blocks_count = (ext4_fsblk_t)n_group * +- EXT4_BLOCKS_PER_GROUP(sb); ++ EXT4_BLOCKS_PER_GROUP(sb) + ++ le32_to_cpu(es->s_first_data_block); + n_group--; /* set to last group number */ + } + +diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c +index e42736c1fdc8..650927f0a2dc 100644 +--- a/fs/jbd2/transaction.c ++++ b/fs/jbd2/transaction.c +@@ -1224,11 +1224,12 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh) + struct journal_head *jh; + char *committed_data = NULL; + +- JBUFFER_TRACE(jh, "entry"); + if (jbd2_write_access_granted(handle, bh, true)) + return 0; + + jh = jbd2_journal_add_journal_head(bh); ++ JBUFFER_TRACE(jh, "entry"); ++ + /* + * Do this 
first --- it can drop the journal lock, so we want to + * make sure that obtaining the committed_data is done +@@ -1339,15 +1340,17 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh) + + if (is_handle_aborted(handle)) + return -EROFS; +- if (!buffer_jbd(bh)) { +- ret = -EUCLEAN; +- goto out; +- } ++ if (!buffer_jbd(bh)) ++ return -EUCLEAN; ++ + /* + * We don't grab jh reference here since the buffer must be part + * of the running transaction. + */ + jh = bh2jh(bh); ++ jbd_debug(5, "journal_head %p\n", jh); ++ JBUFFER_TRACE(jh, "entry"); ++ + /* + * This and the following assertions are unreliable since we may see jh + * in inconsistent state unless we grab bh_state lock. But this is +@@ -1381,9 +1384,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh) + } + + journal = transaction->t_journal; +- jbd_debug(5, "journal_head %p\n", jh); +- JBUFFER_TRACE(jh, "entry"); +- + jbd_lock_bh_state(bh); + + if (jh->b_modified == 0) { +@@ -1581,14 +1581,21 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh) + /* However, if the buffer is still owned by a prior + * (committing) transaction, we can't drop it yet... */ + JBUFFER_TRACE(jh, "belongs to older transaction"); +- /* ... but we CAN drop it from the new transaction if we +- * have also modified it since the original commit. */ ++ /* ... but we CAN drop it from the new transaction through ++ * marking the buffer as freed and set j_next_transaction to ++ * the new transaction, so that not only the commit code ++ * knows it should clear dirty bits when it is done with the ++ * buffer, but also the buffer can be checkpointed only ++ * after the new transaction commits. 
*/ + +- if (jh->b_next_transaction) { +- J_ASSERT(jh->b_next_transaction == transaction); ++ set_buffer_freed(bh); ++ ++ if (!jh->b_next_transaction) { + spin_lock(&journal->j_list_lock); +- jh->b_next_transaction = NULL; ++ jh->b_next_transaction = transaction; + spin_unlock(&journal->j_list_lock); ++ } else { ++ J_ASSERT(jh->b_next_transaction == transaction); + + /* + * only drop a reference if this transaction modified +diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c +index 95a7c88baed9..5019058e0f6a 100644 +--- a/fs/kernfs/mount.c ++++ b/fs/kernfs/mount.c +@@ -196,8 +196,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn, + return dentry; + + knparent = find_next_ancestor(kn, NULL); +- if (WARN_ON(!knparent)) ++ if (WARN_ON(!knparent)) { ++ dput(dentry); + return ERR_PTR(-EINVAL); ++ } + + do { + struct dentry *dtmp; +@@ -206,8 +208,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn, + if (kn == knparent) + return dentry; + kntmp = find_next_ancestor(kn, knparent); +- if (WARN_ON(!kntmp)) ++ if (WARN_ON(!kntmp)) { ++ dput(dentry); + return ERR_PTR(-EINVAL); ++ } + dtmp = lookup_one_len_unlocked(kntmp->name, dentry, + strlen(kntmp->name)); + dput(dentry); +diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c +index b6f9d84ba19b..ae2d6f220627 100644 +--- a/fs/nfs/nfs4idmap.c ++++ b/fs/nfs/nfs4idmap.c +@@ -44,6 +44,7 @@ + #include + #include + #include ++#include + #include + + #include "internal.h" +@@ -59,7 +60,7 @@ static struct key_type key_type_id_resolver_legacy; + struct idmap_legacy_upcalldata { + struct rpc_pipe_msg pipe_msg; + struct idmap_msg idmap_msg; +- struct key_construction *key_cons; ++ struct key *authkey; + struct idmap *idmap; + }; + +@@ -384,7 +385,7 @@ static const match_table_t nfs_idmap_tokens = { + { Opt_find_err, NULL } + }; + +-static int nfs_idmap_legacy_upcall(struct key_construction *, const char *, void *); ++static int nfs_idmap_legacy_upcall(struct key *, void *); + static ssize_t 
idmap_pipe_downcall(struct file *, const char __user *, + size_t); + static void idmap_release_pipe(struct inode *); +@@ -545,11 +546,12 @@ nfs_idmap_prepare_pipe_upcall(struct idmap *idmap, + static void + nfs_idmap_complete_pipe_upcall_locked(struct idmap *idmap, int ret) + { +- struct key_construction *cons = idmap->idmap_upcall_data->key_cons; ++ struct key *authkey = idmap->idmap_upcall_data->authkey; + + kfree(idmap->idmap_upcall_data); + idmap->idmap_upcall_data = NULL; +- complete_request_key(cons, ret); ++ complete_request_key(authkey, ret); ++ key_put(authkey); + } + + static void +@@ -559,15 +561,14 @@ nfs_idmap_abort_pipe_upcall(struct idmap *idmap, int ret) + nfs_idmap_complete_pipe_upcall_locked(idmap, ret); + } + +-static int nfs_idmap_legacy_upcall(struct key_construction *cons, +- const char *op, +- void *aux) ++static int nfs_idmap_legacy_upcall(struct key *authkey, void *aux) + { + struct idmap_legacy_upcalldata *data; ++ struct request_key_auth *rka = get_request_key_auth(authkey); + struct rpc_pipe_msg *msg; + struct idmap_msg *im; + struct idmap *idmap = (struct idmap *)aux; +- struct key *key = cons->key; ++ struct key *key = rka->target_key; + int ret = -ENOKEY; + + if (!aux) +@@ -582,7 +583,7 @@ static int nfs_idmap_legacy_upcall(struct key_construction *cons, + msg = &data->pipe_msg; + im = &data->idmap_msg; + data->idmap = idmap; +- data->key_cons = cons; ++ data->authkey = key_get(authkey); + + ret = nfs_idmap_prepare_message(key->description, idmap, im, msg); + if (ret < 0) +@@ -600,7 +601,7 @@ static int nfs_idmap_legacy_upcall(struct key_construction *cons, + out2: + kfree(data); + out1: +- complete_request_key(cons, ret); ++ complete_request_key(authkey, ret); + return ret; + } + +@@ -647,9 +648,10 @@ out: + static ssize_t + idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen) + { ++ struct request_key_auth *rka; + struct rpc_inode *rpci = RPC_I(file_inode(filp)); + struct idmap *idmap = (struct idmap 
*)rpci->private; +- struct key_construction *cons; ++ struct key *authkey; + struct idmap_msg im; + size_t namelen_in; + int ret = -ENOKEY; +@@ -661,7 +663,8 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen) + if (idmap->idmap_upcall_data == NULL) + goto out_noupcall; + +- cons = idmap->idmap_upcall_data->key_cons; ++ authkey = idmap->idmap_upcall_data->authkey; ++ rka = get_request_key_auth(authkey); + + if (mlen != sizeof(im)) { + ret = -ENOSPC; +@@ -686,9 +689,9 @@ idmap_pipe_downcall(struct file *filp, const char __user *src, size_t mlen) + + ret = nfs_idmap_read_and_verify_message(&im, + &idmap->idmap_upcall_data->idmap_msg, +- cons->key, cons->authkey); ++ rka->target_key, authkey); + if (ret >= 0) { +- key_set_timeout(cons->key, nfs_idmap_cache_timeout); ++ key_set_timeout(rka->target_key, nfs_idmap_cache_timeout); + ret = mlen; + } + +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index a3b67d3b1dfb..9041a892701f 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -914,6 +914,13 @@ nfs4_sequence_process_interrupted(struct nfs_client *client, + + #endif /* !CONFIG_NFS_V4_1 */ + ++static void nfs41_sequence_res_init(struct nfs4_sequence_res *res) ++{ ++ res->sr_timestamp = jiffies; ++ res->sr_status_flags = 0; ++ res->sr_status = 1; ++} ++ + static + void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args, + struct nfs4_sequence_res *res, +@@ -925,10 +932,6 @@ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args, + args->sa_slot = slot; + + res->sr_slot = slot; +- res->sr_timestamp = jiffies; +- res->sr_status_flags = 0; +- res->sr_status = 1; +- + } + + int nfs4_setup_sequence(struct nfs_client *client, +@@ -974,6 +977,7 @@ int nfs4_setup_sequence(struct nfs_client *client, + + trace_nfs4_setup_sequence(session, args); + out_start: ++ nfs41_sequence_res_init(res); + rpc_call_start(task); + return 0; + +diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c +index 37f20d7a26ed..28b013d1d44a 100644 +--- 
a/fs/nfs/pagelist.c ++++ b/fs/nfs/pagelist.c +@@ -988,6 +988,17 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc) + } + } + ++static void ++nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc, ++ struct nfs_page *req) ++{ ++ LIST_HEAD(head); ++ ++ nfs_list_remove_request(req); ++ nfs_list_add_request(req, &head); ++ desc->pg_completion_ops->error_cleanup(&head); ++} ++ + /** + * nfs_pageio_add_request - Attempt to coalesce a request into a page list. + * @desc: destination io descriptor +@@ -1025,10 +1036,8 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc, + nfs_page_group_unlock(req); + desc->pg_moreio = 1; + nfs_pageio_doio(desc); +- if (desc->pg_error < 0) +- return 0; +- if (mirror->pg_recoalesce) +- return 0; ++ if (desc->pg_error < 0 || mirror->pg_recoalesce) ++ goto out_cleanup_subreq; + /* retry add_request for this subreq */ + nfs_page_group_lock(req); + continue; +@@ -1061,6 +1070,10 @@ err_ptr: + desc->pg_error = PTR_ERR(subreq); + nfs_page_group_unlock(req); + return 0; ++out_cleanup_subreq: ++ if (req != subreq) ++ nfs_pageio_cleanup_request(desc, subreq); ++ return 0; + } + + static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc) +@@ -1079,7 +1092,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc) + struct nfs_page *req; + + req = list_first_entry(&head, struct nfs_page, wb_list); +- nfs_list_remove_request(req); + if (__nfs_pageio_add_request(desc, req)) + continue; + if (desc->pg_error < 0) { +@@ -1168,11 +1180,14 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc, + if (nfs_pgio_has_mirroring(desc)) + desc->pg_mirror_idx = midx; + if (!nfs_pageio_add_request_mirror(desc, dupreq)) +- goto out_failed; ++ goto out_cleanup_subreq; + } + + return 1; + ++out_cleanup_subreq: ++ if (req != dupreq) ++ nfs_pageio_cleanup_request(desc, dupreq); + out_failed: + /* remember fatal errors */ + if (nfs_error_is_fatal(desc->pg_error)) +@@ -1198,7 +1213,7 @@ static void 
nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc, + desc->pg_mirror_idx = mirror_idx; + for (;;) { + nfs_pageio_doio(desc); +- if (!mirror->pg_recoalesce) ++ if (desc->pg_error < 0 || !mirror->pg_recoalesce) + break; + if (!nfs_do_recoalesce(desc)) + break; +diff --git a/fs/nfs/write.c b/fs/nfs/write.c +index 2d956a7d5378..50ed3944d183 100644 +--- a/fs/nfs/write.c ++++ b/fs/nfs/write.c +@@ -236,9 +236,9 @@ out: + } + + /* A writeback failed: mark the page as bad, and invalidate the page cache */ +-static void nfs_set_pageerror(struct page *page) ++static void nfs_set_pageerror(struct address_space *mapping) + { +- nfs_zap_mapping(page_file_mapping(page)->host, page_file_mapping(page)); ++ nfs_zap_mapping(mapping->host, mapping); + } + + /* +@@ -994,7 +994,7 @@ static void nfs_write_completion(struct nfs_pgio_header *hdr) + nfs_list_remove_request(req); + if (test_bit(NFS_IOHDR_ERROR, &hdr->flags) && + (hdr->good_bytes < bytes)) { +- nfs_set_pageerror(req->wb_page); ++ nfs_set_pageerror(page_file_mapping(req->wb_page)); + nfs_context_set_write_error(req->wb_context, hdr->error); + goto remove_req; + } +@@ -1330,7 +1330,8 @@ int nfs_updatepage(struct file *file, struct page *page, + unsigned int offset, unsigned int count) + { + struct nfs_open_context *ctx = nfs_file_open_context(file); +- struct inode *inode = page_file_mapping(page)->host; ++ struct address_space *mapping = page_file_mapping(page); ++ struct inode *inode = mapping->host; + int status = 0; + + nfs_inc_stats(inode, NFSIOS_VFSUPDATEPAGE); +@@ -1348,7 +1349,7 @@ int nfs_updatepage(struct file *file, struct page *page, + + status = nfs_writepage_setup(ctx, page, offset, count); + if (status < 0) +- nfs_set_pageerror(page); ++ nfs_set_pageerror(mapping); + else + __set_page_dirty_nobuffers(page); + out: +diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c +index 1d0ce3c57d93..c0de4d6cd857 100644 +--- a/fs/nfsd/nfs3proc.c ++++ b/fs/nfsd/nfs3proc.c +@@ -446,8 +446,19 @@ 
nfsd3_proc_readdir(struct svc_rqst *rqstp) + &resp->common, nfs3svc_encode_entry); + memcpy(resp->verf, argp->verf, 8); + resp->count = resp->buffer - argp->buffer; +- if (resp->offset) +- xdr_encode_hyper(resp->offset, argp->cookie); ++ if (resp->offset) { ++ loff_t offset = argp->cookie; ++ ++ if (unlikely(resp->offset1)) { ++ /* we ended up with offset on a page boundary */ ++ *resp->offset = htonl(offset >> 32); ++ *resp->offset1 = htonl(offset & 0xffffffff); ++ resp->offset1 = NULL; ++ } else { ++ xdr_encode_hyper(resp->offset, offset); ++ } ++ resp->offset = NULL; ++ } + + RETURN_STATUS(nfserr); + } +@@ -516,6 +527,7 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp) + } else { + xdr_encode_hyper(resp->offset, offset); + } ++ resp->offset = NULL; + } + + RETURN_STATUS(nfserr); +diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c +index f38acd905441..ef3e7878456c 100644 +--- a/fs/nfsd/nfs3xdr.c ++++ b/fs/nfsd/nfs3xdr.c +@@ -922,6 +922,7 @@ encode_entry(struct readdir_cd *ccd, const char *name, int namlen, + } else { + xdr_encode_hyper(cd->offset, offset64); + } ++ cd->offset = NULL; + } + + /* +diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c +index 4b8ebcc6b183..d44402241d9e 100644 +--- a/fs/nfsd/nfsctl.c ++++ b/fs/nfsd/nfsctl.c +@@ -1126,7 +1126,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size) + case 'Y': + case 'y': + case '1': +- if (nn->nfsd_serv) ++ if (!nn->nfsd_serv) + return -EBUSY; + nfsd4_end_grace(nn); + break; +diff --git a/fs/pipe.c b/fs/pipe.c +index 8ef7d7bef775..8f9628494981 100644 +--- a/fs/pipe.c ++++ b/fs/pipe.c +@@ -239,6 +239,14 @@ static const struct pipe_buf_operations anon_pipe_buf_ops = { + .get = generic_pipe_buf_get, + }; + ++static const struct pipe_buf_operations anon_pipe_buf_nomerge_ops = { ++ .can_merge = 0, ++ .confirm = generic_pipe_buf_confirm, ++ .release = anon_pipe_buf_release, ++ .steal = anon_pipe_buf_steal, ++ .get = generic_pipe_buf_get, ++}; ++ + static const struct pipe_buf_operations 
packet_pipe_buf_ops = { + .can_merge = 0, + .confirm = generic_pipe_buf_confirm, +@@ -247,6 +255,12 @@ static const struct pipe_buf_operations packet_pipe_buf_ops = { + .get = generic_pipe_buf_get, + }; + ++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf) ++{ ++ if (buf->ops == &anon_pipe_buf_ops) ++ buf->ops = &anon_pipe_buf_nomerge_ops; ++} ++ + static ssize_t + pipe_read(struct kiocb *iocb, struct iov_iter *to) + { +diff --git a/fs/splice.c b/fs/splice.c +index f3084cce0ea6..00d2f142dcf9 100644 +--- a/fs/splice.c ++++ b/fs/splice.c +@@ -1580,6 +1580,8 @@ retry: + */ + obuf->flags &= ~PIPE_BUF_FLAG_GIFT; + ++ pipe_buf_mark_unmergeable(obuf); ++ + obuf->len = len; + opipe->nrbufs++; + ibuf->offset += obuf->len; +@@ -1654,6 +1656,8 @@ static int link_pipe(struct pipe_inode_info *ipipe, + */ + obuf->flags &= ~PIPE_BUF_FLAG_GIFT; + ++ pipe_buf_mark_unmergeable(obuf); ++ + if (obuf->len > len) + obuf->len = len; + +diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h +index fcec26d60d8c..c229ffbed6d4 100644 +--- a/include/asm-generic/vmlinux.lds.h ++++ b/include/asm-generic/vmlinux.lds.h +@@ -696,7 +696,7 @@ + KEEP(*(.orc_unwind_ip)) \ + VMLINUX_SYMBOL(__stop_orc_unwind_ip) = .; \ + } \ +- . = ALIGN(6); \ ++ . = ALIGN(2); \ + .orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) { \ + VMLINUX_SYMBOL(__start_orc_unwind) = .; \ + KEEP(*(.orc_unwind)) \ +diff --git a/include/keys/request_key_auth-type.h b/include/keys/request_key_auth-type.h +new file mode 100644 +index 000000000000..a726dd3f1dc6 +--- /dev/null ++++ b/include/keys/request_key_auth-type.h +@@ -0,0 +1,36 @@ ++/* request_key authorisation token key type ++ * ++ * Copyright (C) 2005 Red Hat, Inc. All Rights Reserved. 
++ * Written by David Howells (dhowells@redhat.com) ++ * ++ * This program is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU General Public Licence ++ * as published by the Free Software Foundation; either version ++ * 2 of the Licence, or (at your option) any later version. ++ */ ++ ++#ifndef _KEYS_REQUEST_KEY_AUTH_TYPE_H ++#define _KEYS_REQUEST_KEY_AUTH_TYPE_H ++ ++#include ++ ++/* ++ * Authorisation record for request_key(). ++ */ ++struct request_key_auth { ++ struct key *target_key; ++ struct key *dest_keyring; ++ const struct cred *cred; ++ void *callout_info; ++ size_t callout_len; ++ pid_t pid; ++ char op[8]; ++} __randomize_layout; ++ ++static inline struct request_key_auth *get_request_key_auth(const struct key *key) ++{ ++ return key->payload.data[0]; ++} ++ ++ ++#endif /* _KEYS_REQUEST_KEY_AUTH_TYPE_H */ +diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h +index a5538433c927..91a063a1f3b3 100644 +--- a/include/linux/device-mapper.h ++++ b/include/linux/device-mapper.h +@@ -630,7 +630,7 @@ do { \ + */ + #define dm_target_offset(ti, sector) ((sector) - (ti)->begin) + +-static inline sector_t to_sector(unsigned long n) ++static inline sector_t to_sector(unsigned long long n) + { + return (n >> SECTOR_SHIFT); + } +diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h +index 0fbbcdf0c178..da0af631ded5 100644 +--- a/include/linux/hardirq.h ++++ b/include/linux/hardirq.h +@@ -60,8 +60,14 @@ extern void irq_enter(void); + */ + extern void irq_exit(void); + ++#ifndef arch_nmi_enter ++#define arch_nmi_enter() do { } while (0) ++#define arch_nmi_exit() do { } while (0) ++#endif ++ + #define nmi_enter() \ + do { \ ++ arch_nmi_enter(); \ + printk_nmi_enter(); \ + lockdep_off(); \ + ftrace_nmi_enter(); \ +@@ -80,6 +86,7 @@ extern void irq_exit(void); + ftrace_nmi_exit(); \ + lockdep_on(); \ + printk_nmi_exit(); \ ++ arch_nmi_exit(); \ + } while (0) + + #endif /* LINUX_HARDIRQ_H */ +diff --git 
a/include/linux/key-type.h b/include/linux/key-type.h +index 9520fc3c3b9a..dfb3ba782d2c 100644 +--- a/include/linux/key-type.h ++++ b/include/linux/key-type.h +@@ -17,15 +17,6 @@ + + #ifdef CONFIG_KEYS + +-/* +- * key under-construction record +- * - passed to the request_key actor if supplied +- */ +-struct key_construction { +- struct key *key; /* key being constructed */ +- struct key *authkey;/* authorisation for key being constructed */ +-}; +- + /* + * Pre-parsed payload, used by key add, update and instantiate. + * +@@ -47,8 +38,7 @@ struct key_preparsed_payload { + time_t expiry; /* Expiry time of key */ + } __randomize_layout; + +-typedef int (*request_key_actor_t)(struct key_construction *key, +- const char *op, void *aux); ++typedef int (*request_key_actor_t)(struct key *auth_key, void *aux); + + /* + * Preparsed matching criterion. +@@ -170,20 +160,20 @@ extern int key_instantiate_and_link(struct key *key, + const void *data, + size_t datalen, + struct key *keyring, +- struct key *instkey); ++ struct key *authkey); + extern int key_reject_and_link(struct key *key, + unsigned timeout, + unsigned error, + struct key *keyring, +- struct key *instkey); +-extern void complete_request_key(struct key_construction *cons, int error); ++ struct key *authkey); ++extern void complete_request_key(struct key *authkey, int error); + + static inline int key_negate_and_link(struct key *key, + unsigned timeout, + struct key *keyring, +- struct key *instkey) ++ struct key *authkey) + { +- return key_reject_and_link(key, timeout, ENOKEY, keyring, instkey); ++ return key_reject_and_link(key, timeout, ENOKEY, keyring, authkey); + } + + extern int generic_key_instantiate(struct key *key, struct key_preparsed_payload *prep); +diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h +index 4f7f19c1dc0a..753c16633bac 100644 +--- a/include/linux/kvm_host.h ++++ b/include/linux/kvm_host.h +@@ -625,7 +625,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct 
kvm_memory_slot *free, + struct kvm_memory_slot *dont); + int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, + unsigned long npages); +-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots); ++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen); + int kvm_arch_prepare_memory_region(struct kvm *kvm, + struct kvm_memory_slot *memslot, + const struct kvm_userspace_memory_region *mem, +diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h +index 6a80cfc63e0c..befdcd304b3d 100644 +--- a/include/linux/pipe_fs_i.h ++++ b/include/linux/pipe_fs_i.h +@@ -183,6 +183,7 @@ void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *); + int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *); + int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *); + void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *); ++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf); + + extern const struct pipe_buf_operations nosteal_pipe_buf_ops; + +diff --git a/include/linux/property.h b/include/linux/property.h +index 89d94b349912..45777ec1c524 100644 +--- a/include/linux/property.h ++++ b/include/linux/property.h +@@ -252,7 +252,7 @@ struct property_entry { + #define PROPERTY_ENTRY_STRING(_name_, _val_) \ + (struct property_entry) { \ + .name = _name_, \ +- .length = sizeof(_val_), \ ++ .length = sizeof(const char *), \ + .is_string = true, \ + { .value = { .str = _val_ } }, \ + } +diff --git a/include/net/phonet/pep.h b/include/net/phonet/pep.h +index b669fe6dbc3b..98f31c7ea23d 100644 +--- a/include/net/phonet/pep.h ++++ b/include/net/phonet/pep.h +@@ -63,10 +63,11 @@ struct pnpipehdr { + u8 state_after_reset; /* reset request */ + u8 error_code; /* any response */ + u8 pep_type; /* status indication */ +- u8 data[1]; ++ u8 data0; /* anything else */ + }; ++ u8 data[]; + }; +-#define other_pep_type data[1] ++#define other_pep_type data[0] + + static 
inline struct pnpipehdr *pnp_hdr(struct sk_buff *skb) + { +diff --git a/init/main.c b/init/main.c +index c4a45145e102..3d3d79c5a232 100644 +--- a/init/main.c ++++ b/init/main.c +@@ -663,7 +663,6 @@ asmlinkage __visible void __init start_kernel(void) + initrd_start = 0; + } + #endif +- page_ext_init(); + kmemleak_init(); + debug_objects_mem_init(); + setup_per_cpu_pageset(); +@@ -1069,6 +1068,8 @@ static noinline void __init kernel_init_freeable(void) + sched_init_smp(); + + page_alloc_init_late(); ++ /* Initialize page ext after all struct pages are initialized. */ ++ page_ext_init(); + + do_basic_setup(); + +diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c +index 21bbfc09e395..7e79358b4473 100644 +--- a/kernel/cgroup/cgroup.c ++++ b/kernel/cgroup/cgroup.c +@@ -1942,7 +1942,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags, + struct cgroup_namespace *ns) + { + struct dentry *dentry; +- bool new_sb; ++ bool new_sb = false; + + dentry = kernfs_mount(fs_type, flags, root->kf_root, magic, &new_sb); + +@@ -1952,6 +1952,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags, + */ + if (!IS_ERR(dentry) && ns != &init_cgroup_ns) { + struct dentry *nsdentry; ++ struct super_block *sb = dentry->d_sb; + struct cgroup *cgrp; + + mutex_lock(&cgroup_mutex); +@@ -1962,12 +1963,14 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags, + spin_unlock_irq(&css_set_lock); + mutex_unlock(&cgroup_mutex); + +- nsdentry = kernfs_node_dentry(cgrp->kn, dentry->d_sb); ++ nsdentry = kernfs_node_dentry(cgrp->kn, sb); + dput(dentry); ++ if (IS_ERR(nsdentry)) ++ deactivate_locked_super(sb); + dentry = nsdentry; + } + +- if (IS_ERR(dentry) || !new_sb) ++ if (!new_sb) + cgroup_put(&root->cgrp); + + return dentry; +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index 710ce1d6b982..fb051fa99b67 100644 +--- a/kernel/rcu/tree.c ++++ b/kernel/rcu/tree.c +@@ -1789,15 +1789,23 @@ static int 
rcu_future_gp_cleanup(struct rcu_state *rsp, struct rcu_node *rnp) + } + + /* +- * Awaken the grace-period kthread for the specified flavor of RCU. +- * Don't do a self-awaken, and don't bother awakening when there is +- * nothing for the grace-period kthread to do (as in several CPUs +- * raced to awaken, and we lost), and finally don't try to awaken +- * a kthread that has not yet been created. ++ * Awaken the grace-period kthread. Don't do a self-awaken (unless in ++ * an interrupt or softirq handler), and don't bother awakening when there ++ * is nothing for the grace-period kthread to do (as in several CPUs raced ++ * to awaken, and we lost), and finally don't try to awaken a kthread that ++ * has not yet been created. If all those checks are passed, track some ++ * debug information and awaken. ++ * ++ * So why do the self-wakeup when in an interrupt or softirq handler ++ * in the grace-period kthread's context? Because the kthread might have ++ * been interrupted just as it was going to sleep, and just after the final ++ * pre-sleep check of the awaken condition. In this case, a wakeup really ++ * is required, and is therefore supplied. + */ + static void rcu_gp_kthread_wake(struct rcu_state *rsp) + { +- if (current == rsp->gp_kthread || ++ if ((current == rsp->gp_kthread && ++ !in_interrupt() && !in_serving_softirq()) || + !READ_ONCE(rsp->gp_flags) || + !rsp->gp_kthread) + return; +diff --git a/kernel/sysctl.c b/kernel/sysctl.c +index 3ad00bf90b3d..a7acb058b776 100644 +--- a/kernel/sysctl.c ++++ b/kernel/sysctl.c +@@ -2530,7 +2530,16 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp, + { + struct do_proc_dointvec_minmax_conv_param *param = data; + if (write) { +- int val = *negp ? 
-*lvalp : *lvalp; ++ int val; ++ if (*negp) { ++ if (*lvalp > (unsigned long) INT_MAX + 1) ++ return -EINVAL; ++ val = -*lvalp; ++ } else { ++ if (*lvalp > (unsigned long) INT_MAX) ++ return -EINVAL; ++ val = *lvalp; ++ } + if ((param->min && *param->min > val) || + (param->max && *param->max < val)) + return -EINVAL; +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index bd6e6142473f..287e61aba57c 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -5604,7 +5604,6 @@ out: + return ret; + + fail: +- kfree(iter->trace); + kfree(iter); + __trace_array_put(tr); + mutex_unlock(&trace_types_lock); +diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c +index 7eb975a2d0e1..e8c9eba9b1e7 100644 +--- a/kernel/trace/trace_events_hist.c ++++ b/kernel/trace/trace_events_hist.c +@@ -872,9 +872,10 @@ static inline void add_to_key(char *compound_key, void *key, + /* ensure NULL-termination */ + if (size > key_field->size - 1) + size = key_field->size - 1; +- } + +- memcpy(compound_key + key_field->offset, key, size); ++ strncpy(compound_key + key_field->offset, (char *)key, size); ++ } else ++ memcpy(compound_key + key_field->offset, key, size); + } + + static void event_hist_trigger(struct event_trigger_data *data, void *rec) +diff --git a/lib/assoc_array.c b/lib/assoc_array.c +index 4e53be8bc590..9463d3445ccd 100644 +--- a/lib/assoc_array.c ++++ b/lib/assoc_array.c +@@ -781,9 +781,11 @@ all_leaves_cluster_together: + new_s0->index_key[i] = + ops->get_key_chunk(index_key, i * ASSOC_ARRAY_KEY_CHUNK_SIZE); + +- blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK); +- pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank); +- new_s0->index_key[keylen - 1] &= ~blank; ++ if (level & ASSOC_ARRAY_KEY_CHUNK_MASK) { ++ blank = ULONG_MAX << (level & ASSOC_ARRAY_KEY_CHUNK_MASK); ++ pr_devel("blank off [%zu] %d: %lx\n", keylen - 1, level, blank); ++ new_s0->index_key[keylen - 1] &= ~blank; ++ } + + /* This now reduces to a 
node splitting exercise for which we'll need + * to regenerate the disparity table. +diff --git a/mm/gup.c b/mm/gup.c +index 4cc8a6ff0f56..7c0e5b1bbcd4 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1643,7 +1643,8 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end, + if (!pmd_present(pmd)) + return 0; + +- if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd))) { ++ if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) || ++ pmd_devmap(pmd))) { + /* + * NUMA hinting faults need to be handled in the GUP + * slowpath for accounting purposes and so that they +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index ef080fa682a6..001b6bfccbfb 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1701,19 +1701,17 @@ static int soft_offline_in_use_page(struct page *page, int flags) + struct page *hpage = compound_head(page); + + if (!PageHuge(page) && PageTransHuge(hpage)) { +- lock_page(hpage); +- if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) { +- unlock_page(hpage); +- if (!PageAnon(hpage)) ++ lock_page(page); ++ if (!PageAnon(page) || unlikely(split_huge_page(page))) { ++ unlock_page(page); ++ if (!PageAnon(page)) + pr_info("soft offline: %#lx: non anonymous thp\n", page_to_pfn(page)); + else + pr_info("soft offline: %#lx: thp split failed\n", page_to_pfn(page)); +- put_hwpoison_page(hpage); ++ put_hwpoison_page(page); + return -EBUSY; + } +- unlock_page(hpage); +- get_hwpoison_page(page); +- put_hwpoison_page(hpage); ++ unlock_page(page); + } + + if (PageHuge(page)) +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index a2f365f40433..40075c1946b3 100644 +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -4325,11 +4325,11 @@ refill: + /* Even if we own the page, we do not use atomic_set(). + * This would break get_page_unless_zero() users. 
+ */ +- page_ref_add(page, size - 1); ++ page_ref_add(page, size); + + /* reset page count bias and offset to start of new frag */ + nc->pfmemalloc = page_is_pfmemalloc(page); +- nc->pagecnt_bias = size; ++ nc->pagecnt_bias = size + 1; + nc->offset = size; + } + +@@ -4345,10 +4345,10 @@ refill: + size = nc->size; + #endif + /* OK, page count is 0, we can safely set it */ +- set_page_count(page, size); ++ set_page_count(page, size + 1); + + /* reset page count bias and offset to start of new frag */ +- nc->pagecnt_bias = size; ++ nc->pagecnt_bias = size + 1; + offset = size - fragsz; + } + +diff --git a/mm/page_ext.c b/mm/page_ext.c +index 2c16216c29b6..2c44f5b78435 100644 +--- a/mm/page_ext.c ++++ b/mm/page_ext.c +@@ -396,10 +396,8 @@ void __init page_ext_init(void) + * We know some arch can have a nodes layout such as + * -------------pfn--------------> + * N0 | N1 | N2 | N0 | N1 | N2|.... +- * +- * Take into account DEFERRED_STRUCT_PAGE_INIT. + */ +- if (early_pfn_to_nid(pfn) != nid) ++ if (pfn_to_nid(pfn) != nid) + continue; + if (init_section_page_ext(pfn, nid)) + goto oom; +diff --git a/mm/shmem.c b/mm/shmem.c +index 6c10f1d92251..037e2ee9ccac 100644 +--- a/mm/shmem.c ++++ b/mm/shmem.c +@@ -3096,16 +3096,20 @@ static int shmem_create(struct inode *dir, struct dentry *dentry, umode_t mode, + static int shmem_link(struct dentry *old_dentry, struct inode *dir, struct dentry *dentry) + { + struct inode *inode = d_inode(old_dentry); +- int ret; ++ int ret = 0; + + /* + * No ordinary (disk based) filesystem counts links as inodes; + * but each new link needs a new dentry, pinning lowmem, and + * tmpfs dentries cannot be pruned until they are unlinked. ++ * But if an O_TMPFILE file is linked into the tmpfs, the ++ * first link must skip that, to get the accounting right. 
+ */ +- ret = shmem_reserve_inode(inode->i_sb); +- if (ret) +- goto out; ++ if (inode->i_nlink) { ++ ret = shmem_reserve_inode(inode->i_sb); ++ if (ret) ++ goto out; ++ } + + dir->i_size += BOGO_DIRENT_SIZE; + inode->i_ctime = dir->i_ctime = dir->i_mtime = current_time(inode); +diff --git a/mm/vmalloc.c b/mm/vmalloc.c +index 9ff21a12ea00..8d9f636d0c98 100644 +--- a/mm/vmalloc.c ++++ b/mm/vmalloc.c +@@ -2262,7 +2262,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr, + if (!(area->flags & VM_USERMAP)) + return -EINVAL; + +- if (kaddr + size > area->addr + area->size) ++ if (kaddr + size > area->addr + get_vm_area_size(area)) + return -EINVAL; + + do { +diff --git a/net/9p/client.c b/net/9p/client.c +index ef0f8fe3ac08..6a6b290574a1 100644 +--- a/net/9p/client.c ++++ b/net/9p/client.c +@@ -1082,7 +1082,7 @@ struct p9_client *p9_client_create(const char *dev_name, char *options) + p9_debug(P9_DEBUG_ERROR, + "Please specify a msize of at least 4k\n"); + err = -EINVAL; +- goto free_client; ++ goto close_trans; + } + + err = p9_client_version(clnt); +diff --git a/net/ipv4/esp4.c b/net/ipv4/esp4.c +index b00e4a43b4dc..d30285c5d52d 100644 +--- a/net/ipv4/esp4.c ++++ b/net/ipv4/esp4.c +@@ -307,7 +307,7 @@ int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info * + skb->len += tailen; + skb->data_len += tailen; + skb->truesize += tailen; +- if (sk) ++ if (sk && sk_fullsock(sk)) + refcount_add(tailen, &sk->sk_wmem_alloc); + + goto out; +diff --git a/net/ipv6/esp6.c b/net/ipv6/esp6.c +index f112fef79216..ef7822fad0fd 100644 +--- a/net/ipv6/esp6.c ++++ b/net/ipv6/esp6.c +@@ -275,7 +275,7 @@ int esp6_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info + skb->len += tailen; + skb->data_len += tailen; + skb->truesize += tailen; +- if (sk) ++ if (sk && sk_fullsock(sk)) + refcount_add(tailen, &sk->sk_wmem_alloc); + + goto out; +diff --git a/net/key/af_key.c b/net/key/af_key.c +index 
3b209cbfe1df..b095551a5773 100644 +--- a/net/key/af_key.c ++++ b/net/key/af_key.c +@@ -196,30 +196,22 @@ static int pfkey_release(struct socket *sock) + return 0; + } + +-static int pfkey_broadcast_one(struct sk_buff *skb, struct sk_buff **skb2, +- gfp_t allocation, struct sock *sk) ++static int pfkey_broadcast_one(struct sk_buff *skb, gfp_t allocation, ++ struct sock *sk) + { + int err = -ENOBUFS; + +- sock_hold(sk); +- if (*skb2 == NULL) { +- if (refcount_read(&skb->users) != 1) { +- *skb2 = skb_clone(skb, allocation); +- } else { +- *skb2 = skb; +- refcount_inc(&skb->users); +- } +- } +- if (*skb2 != NULL) { +- if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf) { +- skb_set_owner_r(*skb2, sk); +- skb_queue_tail(&sk->sk_receive_queue, *skb2); +- sk->sk_data_ready(sk); +- *skb2 = NULL; +- err = 0; +- } ++ if (atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf) ++ return err; ++ ++ skb = skb_clone(skb, allocation); ++ ++ if (skb) { ++ skb_set_owner_r(skb, sk); ++ skb_queue_tail(&sk->sk_receive_queue, skb); ++ sk->sk_data_ready(sk); ++ err = 0; + } +- sock_put(sk); + return err; + } + +@@ -234,7 +226,6 @@ static int pfkey_broadcast(struct sk_buff *skb, gfp_t allocation, + { + struct netns_pfkey *net_pfkey = net_generic(net, pfkey_net_id); + struct sock *sk; +- struct sk_buff *skb2 = NULL; + int err = -ESRCH; + + /* XXX Do we need something like netlink_overrun? I think +@@ -253,7 +244,7 @@ static int pfkey_broadcast(struct sk_buff *skb, gfp_t allocation, + * socket. 
+ */ + if (pfk->promisc) +- pfkey_broadcast_one(skb, &skb2, GFP_ATOMIC, sk); ++ pfkey_broadcast_one(skb, GFP_ATOMIC, sk); + + /* the exact target will be processed later */ + if (sk == one_sk) +@@ -268,7 +259,7 @@ static int pfkey_broadcast(struct sk_buff *skb, gfp_t allocation, + continue; + } + +- err2 = pfkey_broadcast_one(skb, &skb2, GFP_ATOMIC, sk); ++ err2 = pfkey_broadcast_one(skb, GFP_ATOMIC, sk); + + /* Error is cleared after successful sending to at least one + * registered KM */ +@@ -278,9 +269,8 @@ static int pfkey_broadcast(struct sk_buff *skb, gfp_t allocation, + rcu_read_unlock(); + + if (one_sk != NULL) +- err = pfkey_broadcast_one(skb, &skb2, allocation, one_sk); ++ err = pfkey_broadcast_one(skb, allocation, one_sk); + +- kfree_skb(skb2); + kfree_skb(skb); + return err; + } +diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c +index 197947a07f83..ed57db9b6086 100644 +--- a/net/mac80211/agg-tx.c ++++ b/net/mac80211/agg-tx.c +@@ -8,7 +8,7 @@ + * Copyright 2007, Michael Wu + * Copyright 2007-2010, Intel Corporation + * Copyright(c) 2015-2017 Intel Deutschland GmbH +- * Copyright (C) 2018 Intel Corporation ++ * Copyright (C) 2018 - 2019 Intel Corporation + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as +@@ -361,6 +361,8 @@ int ___ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid, + + set_bit(HT_AGG_STATE_STOPPING, &tid_tx->state); + ++ ieee80211_agg_stop_txq(sta, tid); ++ + spin_unlock_bh(&sta->lock); + + ht_dbg(sta->sdata, "Tx BA session stop requested for %pM tid %u\n", +diff --git a/net/netfilter/ipvs/Kconfig b/net/netfilter/ipvs/Kconfig +index b32fb0dbe237..3f8e490d1133 100644 +--- a/net/netfilter/ipvs/Kconfig ++++ b/net/netfilter/ipvs/Kconfig +@@ -29,6 +29,7 @@ config IP_VS_IPV6 + bool "IPv6 support for IPVS" + depends on IPV6 = y || IP_VS = IPV6 + select IP6_NF_IPTABLES ++ select NF_DEFRAG_IPV6 + ---help--- + Add IPv6 support to IPVS. 
+ +diff --git a/net/netfilter/ipvs/ip_vs_core.c b/net/netfilter/ipvs/ip_vs_core.c +index 1bd53b1e7672..4278f5c947ab 100644 +--- a/net/netfilter/ipvs/ip_vs_core.c ++++ b/net/netfilter/ipvs/ip_vs_core.c +@@ -1524,14 +1524,12 @@ ip_vs_try_to_schedule(struct netns_ipvs *ipvs, int af, struct sk_buff *skb, + /* sorry, all this trouble for a no-hit :) */ + IP_VS_DBG_PKT(12, af, pp, skb, iph->off, + "ip_vs_in: packet continues traversal as normal"); +- if (iph->fragoffs) { +- /* Fragment that couldn't be mapped to a conn entry +- * is missing module nf_defrag_ipv6 +- */ +- IP_VS_DBG_RL("Unhandled frag, load nf_defrag_ipv6\n"); ++ ++ /* Fragment couldn't be mapped to a conn entry */ ++ if (iph->fragoffs) + IP_VS_DBG_PKT(7, af, pp, skb, iph->off, + "unhandled fragment"); +- } ++ + *verdict = NF_ACCEPT; + return 0; + } +diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c +index dff4ead3d117..56dd5ce6274f 100644 +--- a/net/netfilter/ipvs/ip_vs_ctl.c ++++ b/net/netfilter/ipvs/ip_vs_ctl.c +@@ -43,6 +43,7 @@ + #ifdef CONFIG_IP_VS_IPV6 + #include + #include ++#include + #endif + #include + #include +@@ -888,6 +889,7 @@ ip_vs_new_dest(struct ip_vs_service *svc, struct ip_vs_dest_user_kern *udest, + { + struct ip_vs_dest *dest; + unsigned int atype, i; ++ int ret = 0; + + EnterFunction(2); + +@@ -898,6 +900,10 @@ ip_vs_new_dest(struct ip_vs_service *svc, struct ip_vs_dest_user_kern *udest, + atype & IPV6_ADDR_LINKLOCAL) && + !__ip_vs_addr_is_local_v6(svc->ipvs->net, &udest->addr.in6)) + return -EINVAL; ++ ++ ret = nf_defrag_ipv6_enable(svc->ipvs->net); ++ if (ret) ++ return ret; + } else + #endif + { +@@ -1221,6 +1227,10 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u, + ret = -EINVAL; + goto out_err; + } ++ ++ ret = nf_defrag_ipv6_enable(ipvs->net); ++ if (ret) ++ goto out_err; + } + #endif + +diff --git a/net/phonet/pep.c b/net/phonet/pep.c +index e81537991ddf..bffcef58ebf5 100644 +--- a/net/phonet/pep.c ++++ 
b/net/phonet/pep.c +@@ -132,7 +132,7 @@ static int pep_indicate(struct sock *sk, u8 id, u8 code, + ph->utid = 0; + ph->message_id = id; + ph->pipe_handle = pn->pipe_handle; +- ph->data[0] = code; ++ ph->error_code = code; + return pn_skb_send(sk, skb, NULL); + } + +@@ -153,7 +153,7 @@ static int pipe_handler_request(struct sock *sk, u8 id, u8 code, + ph->utid = id; /* whatever */ + ph->message_id = id; + ph->pipe_handle = pn->pipe_handle; +- ph->data[0] = code; ++ ph->error_code = code; + return pn_skb_send(sk, skb, NULL); + } + +@@ -208,7 +208,7 @@ static int pep_ctrlreq_error(struct sock *sk, struct sk_buff *oskb, u8 code, + struct pnpipehdr *ph; + struct sockaddr_pn dst; + u8 data[4] = { +- oph->data[0], /* PEP type */ ++ oph->pep_type, /* PEP type */ + code, /* error code, at an unusual offset */ + PAD, PAD, + }; +@@ -221,7 +221,7 @@ static int pep_ctrlreq_error(struct sock *sk, struct sk_buff *oskb, u8 code, + ph->utid = oph->utid; + ph->message_id = PNS_PEP_CTRL_RESP; + ph->pipe_handle = oph->pipe_handle; +- ph->data[0] = oph->data[1]; /* CTRL id */ ++ ph->data0 = oph->data[0]; /* CTRL id */ + + pn_skb_get_src_sockaddr(oskb, &dst); + return pn_skb_send(sk, skb, &dst); +@@ -272,17 +272,17 @@ static int pipe_rcv_status(struct sock *sk, struct sk_buff *skb) + return -EINVAL; + + hdr = pnp_hdr(skb); +- if (hdr->data[0] != PN_PEP_TYPE_COMMON) { ++ if (hdr->pep_type != PN_PEP_TYPE_COMMON) { + net_dbg_ratelimited("Phonet unknown PEP type: %u\n", +- (unsigned int)hdr->data[0]); ++ (unsigned int)hdr->pep_type); + return -EOPNOTSUPP; + } + +- switch (hdr->data[1]) { ++ switch (hdr->data[0]) { + case PN_PEP_IND_FLOW_CONTROL: + switch (pn->tx_fc) { + case PN_LEGACY_FLOW_CONTROL: +- switch (hdr->data[4]) { ++ switch (hdr->data[3]) { + case PEP_IND_BUSY: + atomic_set(&pn->tx_credits, 0); + break; +@@ -292,7 +292,7 @@ static int pipe_rcv_status(struct sock *sk, struct sk_buff *skb) + } + break; + case PN_ONE_CREDIT_FLOW_CONTROL: +- if (hdr->data[4] == PEP_IND_READY) ++ if 
(hdr->data[3] == PEP_IND_READY) + atomic_set(&pn->tx_credits, wake = 1); + break; + } +@@ -301,12 +301,12 @@ static int pipe_rcv_status(struct sock *sk, struct sk_buff *skb) + case PN_PEP_IND_ID_MCFC_GRANT_CREDITS: + if (pn->tx_fc != PN_MULTI_CREDIT_FLOW_CONTROL) + break; +- atomic_add(wake = hdr->data[4], &pn->tx_credits); ++ atomic_add(wake = hdr->data[3], &pn->tx_credits); + break; + + default: + net_dbg_ratelimited("Phonet unknown PEP indication: %u\n", +- (unsigned int)hdr->data[1]); ++ (unsigned int)hdr->data[0]); + return -EOPNOTSUPP; + } + if (wake) +@@ -318,7 +318,7 @@ static int pipe_rcv_created(struct sock *sk, struct sk_buff *skb) + { + struct pep_sock *pn = pep_sk(sk); + struct pnpipehdr *hdr = pnp_hdr(skb); +- u8 n_sb = hdr->data[0]; ++ u8 n_sb = hdr->data0; + + pn->rx_fc = pn->tx_fc = PN_LEGACY_FLOW_CONTROL; + __skb_pull(skb, sizeof(*hdr)); +@@ -506,7 +506,7 @@ static int pep_connresp_rcv(struct sock *sk, struct sk_buff *skb) + return -ECONNREFUSED; + + /* Parse sub-blocks */ +- n_sb = hdr->data[4]; ++ n_sb = hdr->data[3]; + while (n_sb > 0) { + u8 type, buf[6], len = sizeof(buf); + const u8 *data = pep_get_sb(skb, &type, &len, buf); +@@ -739,7 +739,7 @@ static int pipe_do_remove(struct sock *sk) + ph->utid = 0; + ph->message_id = PNS_PIPE_REMOVE_REQ; + ph->pipe_handle = pn->pipe_handle; +- ph->data[0] = PAD; ++ ph->data0 = PAD; + return pn_skb_send(sk, skb, NULL); + } + +@@ -817,7 +817,7 @@ static struct sock *pep_sock_accept(struct sock *sk, int flags, int *errp, + peer_type = hdr->other_pep_type << 8; + + /* Parse sub-blocks (options) */ +- n_sb = hdr->data[4]; ++ n_sb = hdr->data[3]; + while (n_sb > 0) { + u8 type, buf[1], len = sizeof(buf); + const u8 *data = pep_get_sb(skb, &type, &len, buf); +@@ -1109,7 +1109,7 @@ static int pipe_skb_send(struct sock *sk, struct sk_buff *skb) + ph->utid = 0; + if (pn->aligned) { + ph->message_id = PNS_PIPE_ALIGNED_DATA; +- ph->data[0] = 0; /* padding */ ++ ph->data0 = 0; /* padding */ + } else + ph->message_id 
= PNS_PIPE_DATA; + ph->pipe_handle = pn->pipe_handle; +diff --git a/security/keys/internal.h b/security/keys/internal.h +index 503adbae7b0d..e3a573840186 100644 +--- a/security/keys/internal.h ++++ b/security/keys/internal.h +@@ -188,20 +188,9 @@ static inline int key_permission(const key_ref_t key_ref, unsigned perm) + return key_task_permission(key_ref, current_cred(), perm); + } + +-/* +- * Authorisation record for request_key(). +- */ +-struct request_key_auth { +- struct key *target_key; +- struct key *dest_keyring; +- const struct cred *cred; +- void *callout_info; +- size_t callout_len; +- pid_t pid; +-} __randomize_layout; +- + extern struct key_type key_type_request_key_auth; + extern struct key *request_key_auth_new(struct key *target, ++ const char *op, + const void *callout_info, + size_t callout_len, + struct key *dest_keyring); +diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c +index 1ffe60bb2845..ca31af186abd 100644 +--- a/security/keys/keyctl.c ++++ b/security/keys/keyctl.c +@@ -26,6 +26,7 @@ + #include + #include + #include ++#include + #include "internal.h" + + #define KEY_MAX_DESC_SIZE 4096 +diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c +index 740affd65ee9..5f2993ab2d50 100644 +--- a/security/keys/process_keys.c ++++ b/security/keys/process_keys.c +@@ -20,6 +20,7 @@ + #include + #include + #include ++#include + #include "internal.h" + + /* Session keyring create vs join semaphore */ +diff --git a/security/keys/request_key.c b/security/keys/request_key.c +index c707fdbb3429..2ecd67221476 100644 +--- a/security/keys/request_key.c ++++ b/security/keys/request_key.c +@@ -18,31 +18,30 @@ + #include + #include + #include "internal.h" ++#include + + #define key_negative_timeout 60 /* default timeout on a negative key's existence */ + + /** + * complete_request_key - Complete the construction of a key. +- * @cons: The key construction record. ++ * @auth_key: The authorisation key. 
+ * @error: The success or failute of the construction. + * + * Complete the attempt to construct a key. The key will be negated + * if an error is indicated. The authorisation key will be revoked + * unconditionally. + */ +-void complete_request_key(struct key_construction *cons, int error) ++void complete_request_key(struct key *authkey, int error) + { +- kenter("{%d,%d},%d", cons->key->serial, cons->authkey->serial, error); ++ struct request_key_auth *rka = get_request_key_auth(authkey); ++ struct key *key = rka->target_key; ++ ++ kenter("%d{%d},%d", authkey->serial, key->serial, error); + + if (error < 0) +- key_negate_and_link(cons->key, key_negative_timeout, NULL, +- cons->authkey); ++ key_negate_and_link(key, key_negative_timeout, NULL, authkey); + else +- key_revoke(cons->authkey); +- +- key_put(cons->key); +- key_put(cons->authkey); +- kfree(cons); ++ key_revoke(authkey); + } + EXPORT_SYMBOL(complete_request_key); + +@@ -91,21 +90,19 @@ static int call_usermodehelper_keys(const char *path, char **argv, char **envp, + * Request userspace finish the construction of a key + * - execute "/sbin/request-key " + */ +-static int call_sbin_request_key(struct key_construction *cons, +- const char *op, +- void *aux) ++static int call_sbin_request_key(struct key *authkey, void *aux) + { + static char const request_key[] = "/sbin/request-key"; ++ struct request_key_auth *rka = get_request_key_auth(authkey); + const struct cred *cred = current_cred(); + key_serial_t prkey, sskey; +- struct key *key = cons->key, *authkey = cons->authkey, *keyring, +- *session; ++ struct key *key = rka->target_key, *keyring, *session; + char *argv[9], *envp[3], uid_str[12], gid_str[12]; + char key_str[12], keyring_str[3][12]; + char desc[20]; + int ret, i; + +- kenter("{%d},{%d},%s", key->serial, authkey->serial, op); ++ kenter("{%d},{%d},%s", key->serial, authkey->serial, rka->op); + + ret = install_user_keyrings(); + if (ret < 0) +@@ -163,7 +160,7 @@ static int 
call_sbin_request_key(struct key_construction *cons, + /* set up the argument list */ + i = 0; + argv[i++] = (char *)request_key; +- argv[i++] = (char *) op; ++ argv[i++] = (char *)rka->op; + argv[i++] = key_str; + argv[i++] = uid_str; + argv[i++] = gid_str; +@@ -191,7 +188,7 @@ error_link: + key_put(keyring); + + error_alloc: +- complete_request_key(cons, ret); ++ complete_request_key(authkey, ret); + kleave(" = %d", ret); + return ret; + } +@@ -205,42 +202,31 @@ static int construct_key(struct key *key, const void *callout_info, + size_t callout_len, void *aux, + struct key *dest_keyring) + { +- struct key_construction *cons; + request_key_actor_t actor; + struct key *authkey; + int ret; + + kenter("%d,%p,%zu,%p", key->serial, callout_info, callout_len, aux); + +- cons = kmalloc(sizeof(*cons), GFP_KERNEL); +- if (!cons) +- return -ENOMEM; +- + /* allocate an authorisation key */ +- authkey = request_key_auth_new(key, callout_info, callout_len, ++ authkey = request_key_auth_new(key, "create", callout_info, callout_len, + dest_keyring); +- if (IS_ERR(authkey)) { +- kfree(cons); +- ret = PTR_ERR(authkey); +- authkey = NULL; +- } else { +- cons->authkey = key_get(authkey); +- cons->key = key_get(key); ++ if (IS_ERR(authkey)) ++ return PTR_ERR(authkey); + +- /* make the call */ +- actor = call_sbin_request_key; +- if (key->type->request_key) +- actor = key->type->request_key; ++ /* Make the call */ ++ actor = call_sbin_request_key; ++ if (key->type->request_key) ++ actor = key->type->request_key; + +- ret = actor(cons, "create", aux); ++ ret = actor(authkey, aux); + +- /* check that the actor called complete_request_key() prior to +- * returning an error */ +- WARN_ON(ret < 0 && +- !test_bit(KEY_FLAG_REVOKED, &authkey->flags)); +- key_put(authkey); +- } ++ /* check that the actor called complete_request_key() prior to ++ * returning an error */ ++ WARN_ON(ret < 0 && ++ !test_bit(KEY_FLAG_REVOKED, &authkey->flags)); + ++ key_put(authkey); + kleave(" = %d", ret); + 
return ret; + } +@@ -275,7 +261,7 @@ static int construct_get_dest_keyring(struct key **_dest_keyring) + if (cred->request_key_auth) { + authkey = cred->request_key_auth; + down_read(&authkey->sem); +- rka = authkey->payload.data[0]; ++ rka = get_request_key_auth(authkey); + if (!test_bit(KEY_FLAG_REVOKED, + &authkey->flags)) + dest_keyring = +diff --git a/security/keys/request_key_auth.c b/security/keys/request_key_auth.c +index 6797843154f0..5e515791ccd1 100644 +--- a/security/keys/request_key_auth.c ++++ b/security/keys/request_key_auth.c +@@ -18,7 +18,7 @@ + #include + #include + #include "internal.h" +-#include ++#include + + static int request_key_auth_preparse(struct key_preparsed_payload *); + static void request_key_auth_free_preparse(struct key_preparsed_payload *); +@@ -69,7 +69,7 @@ static int request_key_auth_instantiate(struct key *key, + static void request_key_auth_describe(const struct key *key, + struct seq_file *m) + { +- struct request_key_auth *rka = key->payload.data[0]; ++ struct request_key_auth *rka = get_request_key_auth(key); + + seq_puts(m, "key:"); + seq_puts(m, key->description); +@@ -84,7 +84,7 @@ static void request_key_auth_describe(const struct key *key, + static long request_key_auth_read(const struct key *key, + char __user *buffer, size_t buflen) + { +- struct request_key_auth *rka = key->payload.data[0]; ++ struct request_key_auth *rka = get_request_key_auth(key); + size_t datalen; + long ret; + +@@ -110,7 +110,7 @@ static long request_key_auth_read(const struct key *key, + */ + static void request_key_auth_revoke(struct key *key) + { +- struct request_key_auth *rka = key->payload.data[0]; ++ struct request_key_auth *rka = get_request_key_auth(key); + + kenter("{%d}", key->serial); + +@@ -137,7 +137,7 @@ static void free_request_key_auth(struct request_key_auth *rka) + */ + static void request_key_auth_destroy(struct key *key) + { +- struct request_key_auth *rka = key->payload.data[0]; ++ struct request_key_auth *rka = 
get_request_key_auth(key); + + kenter("{%d}", key->serial); + +@@ -148,8 +148,9 @@ static void request_key_auth_destroy(struct key *key) + * Create an authorisation token for /sbin/request-key or whoever to gain + * access to the caller's security data. + */ +-struct key *request_key_auth_new(struct key *target, const void *callout_info, +- size_t callout_len, struct key *dest_keyring) ++struct key *request_key_auth_new(struct key *target, const char *op, ++ const void *callout_info, size_t callout_len, ++ struct key *dest_keyring) + { + struct request_key_auth *rka, *irka; + const struct cred *cred = current->cred; +@@ -167,6 +168,7 @@ struct key *request_key_auth_new(struct key *target, const void *callout_info, + if (!rka->callout_info) + goto error_free_rka; + rka->callout_len = callout_len; ++ strlcpy(rka->op, op, sizeof(rka->op)); + + /* see if the calling process is already servicing the key request of + * another process */ +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c +index d6b9ed34ceae..a5d9c0146ac3 100644 +--- a/security/selinux/hooks.c ++++ b/security/selinux/hooks.c +@@ -1000,8 +1000,11 @@ static int selinux_sb_clone_mnt_opts(const struct super_block *oldsb, + BUG_ON(!(oldsbsec->flags & SE_SBINITIALIZED)); + + /* if fs is reusing a sb, make sure that the contexts match */ +- if (newsbsec->flags & SE_SBINITIALIZED) ++ if (newsbsec->flags & SE_SBINITIALIZED) { ++ if ((kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context) ++ *set_kern_flags |= SECURITY_LSM_NATIVE_LABELS; + return selinux_cmp_sb_context(oldsb, newsb); ++ } + + mutex_lock(&newsbsec->lock); + +diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c +index 81268760b7a9..a23d6a821ff3 100644 +--- a/sound/soc/fsl/fsl_esai.c ++++ b/sound/soc/fsl/fsl_esai.c +@@ -395,7 +395,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt) + break; + case SND_SOC_DAIFMT_RIGHT_J: + /* Data on rising edge of bclk, frame high, right aligned */ +- xccr |= 
ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA; ++ xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP; ++ xcr |= ESAI_xCR_xWA; + break; + case SND_SOC_DAIFMT_DSP_A: + /* Data on rising edge of bclk, frame high, 1clk before data */ +@@ -452,12 +453,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt) + return -EINVAL; + } + +- mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR; ++ mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA; + regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr); + regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr); + + mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP | +- ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA; ++ ESAI_xCCR_xFSD | ESAI_xCCR_xCKD; + regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr); + regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr); + +diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c +index 0db2791f7035..60cc550c5a4c 100644 +--- a/sound/soc/sh/rcar/ssi.c ++++ b/sound/soc/sh/rcar/ssi.c +@@ -280,7 +280,7 @@ static int rsnd_ssi_master_clk_start(struct rsnd_mod *mod, + if (rsnd_ssi_is_multi_slave(mod, io)) + return 0; + +- if (ssi->usrcnt > 1) { ++ if (ssi->usrcnt > 0) { + if (ssi->rate != rate) { + dev_err(dev, "SSI parent/child should use same rate\n"); + return -EINVAL; +diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c +index bba6a917cd02..e9f7c6287376 100644 +--- a/sound/soc/soc-dapm.c ++++ b/sound/soc/soc-dapm.c +@@ -75,12 +75,16 @@ static int dapm_up_seq[] = { + [snd_soc_dapm_clock_supply] = 1, + [snd_soc_dapm_supply] = 2, + [snd_soc_dapm_micbias] = 3, ++ [snd_soc_dapm_vmid] = 3, + [snd_soc_dapm_dai_link] = 2, + [snd_soc_dapm_dai_in] = 4, + [snd_soc_dapm_dai_out] = 4, + [snd_soc_dapm_aif_in] = 4, + [snd_soc_dapm_aif_out] = 4, + [snd_soc_dapm_mic] = 5, ++ [snd_soc_dapm_siggen] = 5, ++ [snd_soc_dapm_input] = 5, ++ [snd_soc_dapm_output] = 5, + [snd_soc_dapm_mux] = 6, + [snd_soc_dapm_demux] = 6, + [snd_soc_dapm_dac] = 7, +@@ -88,11 
+92,19 @@ static int dapm_up_seq[] = { + [snd_soc_dapm_mixer] = 8, + [snd_soc_dapm_mixer_named_ctl] = 8, + [snd_soc_dapm_pga] = 9, ++ [snd_soc_dapm_buffer] = 9, ++ [snd_soc_dapm_scheduler] = 9, ++ [snd_soc_dapm_effect] = 9, ++ [snd_soc_dapm_src] = 9, ++ [snd_soc_dapm_asrc] = 9, ++ [snd_soc_dapm_encoder] = 9, ++ [snd_soc_dapm_decoder] = 9, + [snd_soc_dapm_adc] = 10, + [snd_soc_dapm_out_drv] = 11, + [snd_soc_dapm_hp] = 11, + [snd_soc_dapm_spk] = 11, + [snd_soc_dapm_line] = 11, ++ [snd_soc_dapm_sink] = 11, + [snd_soc_dapm_kcontrol] = 12, + [snd_soc_dapm_post] = 13, + }; +@@ -105,13 +117,25 @@ static int dapm_down_seq[] = { + [snd_soc_dapm_spk] = 3, + [snd_soc_dapm_line] = 3, + [snd_soc_dapm_out_drv] = 3, ++ [snd_soc_dapm_sink] = 3, + [snd_soc_dapm_pga] = 4, ++ [snd_soc_dapm_buffer] = 4, ++ [snd_soc_dapm_scheduler] = 4, ++ [snd_soc_dapm_effect] = 4, ++ [snd_soc_dapm_src] = 4, ++ [snd_soc_dapm_asrc] = 4, ++ [snd_soc_dapm_encoder] = 4, ++ [snd_soc_dapm_decoder] = 4, + [snd_soc_dapm_switch] = 5, + [snd_soc_dapm_mixer_named_ctl] = 5, + [snd_soc_dapm_mixer] = 5, + [snd_soc_dapm_dac] = 6, + [snd_soc_dapm_mic] = 7, ++ [snd_soc_dapm_siggen] = 7, ++ [snd_soc_dapm_input] = 7, ++ [snd_soc_dapm_output] = 7, + [snd_soc_dapm_micbias] = 8, ++ [snd_soc_dapm_vmid] = 8, + [snd_soc_dapm_mux] = 9, + [snd_soc_dapm_demux] = 9, + [snd_soc_dapm_aif_in] = 10, +diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c +index c1619860a5de..2d5cf263515b 100644 +--- a/sound/soc/soc-topology.c ++++ b/sound/soc/soc-topology.c +@@ -2513,6 +2513,7 @@ int snd_soc_tplg_component_load(struct snd_soc_component *comp, + struct snd_soc_tplg_ops *ops, const struct firmware *fw, u32 id) + { + struct soc_tplg tplg; ++ int ret; + + /* setup parsing context */ + memset(&tplg, 0, sizeof(tplg)); +@@ -2526,7 +2527,12 @@ int snd_soc_tplg_component_load(struct snd_soc_component *comp, + tplg.bytes_ext_ops = ops->bytes_ext_ops; + tplg.bytes_ext_ops_count = ops->bytes_ext_ops_count; + +- return 
soc_tplg_load(&tplg); ++ ret = soc_tplg_load(&tplg); ++ /* free the created components if fail to load topology */ ++ if (ret) ++ snd_soc_tplg_component_remove(comp, SND_SOC_TPLG_INDEX_ALL); ++ ++ return ret; + } + EXPORT_SYMBOL_GPL(snd_soc_tplg_component_load); + +diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c +index bbb9823e93b9..44c8bcefe224 100644 +--- a/tools/perf/util/auxtrace.c ++++ b/tools/perf/util/auxtrace.c +@@ -1264,9 +1264,9 @@ static int __auxtrace_mmap__read(struct auxtrace_mmap *mm, + } + + /* padding must be written by fn() e.g. record__process_auxtrace() */ +- padding = size & 7; ++ padding = size & (PERF_AUXTRACE_RECORD_ALIGNMENT - 1); + if (padding) +- padding = 8 - padding; ++ padding = PERF_AUXTRACE_RECORD_ALIGNMENT - padding; + + memset(&ev, 0, sizeof(ev)); + ev.auxtrace.header.type = PERF_RECORD_AUXTRACE; +diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h +index 33b5e6cdf38c..d273bb47b3e3 100644 +--- a/tools/perf/util/auxtrace.h ++++ b/tools/perf/util/auxtrace.h +@@ -38,6 +38,9 @@ struct record_opts; + struct auxtrace_info_event; + struct events_stats; + ++/* Auxtrace records must have the same alignment as perf event records */ ++#define PERF_AUXTRACE_RECORD_ALIGNMENT 8 ++ + enum auxtrace_type { + PERF_AUXTRACE_UNKNOWN, + PERF_AUXTRACE_INTEL_PT, +diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +index d404bed7003a..f3db68abbd9a 100644 +--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c ++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c +@@ -26,6 +26,7 @@ + + #include "../cache.h" + #include "../util.h" ++#include "../auxtrace.h" + + #include "intel-pt-insn-decoder.h" + #include "intel-pt-pkt-decoder.h" +@@ -1389,7 +1390,6 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder) + { + intel_pt_log("ERROR: Buffer overflow\n"); + intel_pt_clear_tx_flags(decoder); +- decoder->cbr = 0; + 
decoder->timestamp_insn_cnt = 0; + decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC; + decoder->overflow = true; +@@ -2559,6 +2559,34 @@ static int intel_pt_tsc_cmp(uint64_t tsc1, uint64_t tsc2) + } + } + ++#define MAX_PADDING (PERF_AUXTRACE_RECORD_ALIGNMENT - 1) ++ ++/** ++ * adj_for_padding - adjust overlap to account for padding. ++ * @buf_b: second buffer ++ * @buf_a: first buffer ++ * @len_a: size of first buffer ++ * ++ * @buf_a might have up to 7 bytes of padding appended. Adjust the overlap ++ * accordingly. ++ * ++ * Return: A pointer into @buf_b from where non-overlapped data starts ++ */ ++static unsigned char *adj_for_padding(unsigned char *buf_b, ++ unsigned char *buf_a, size_t len_a) ++{ ++ unsigned char *p = buf_b - MAX_PADDING; ++ unsigned char *q = buf_a + len_a - MAX_PADDING; ++ int i; ++ ++ for (i = MAX_PADDING; i; i--, p++, q++) { ++ if (*p != *q) ++ break; ++ } ++ ++ return p; ++} ++ + /** + * intel_pt_find_overlap_tsc - determine start of non-overlapped trace data + * using TSC. 
+@@ -2609,8 +2637,11 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a, + + /* Same TSC, so buffers are consecutive */ + if (!cmp && rem_b >= rem_a) { ++ unsigned char *start; ++ + *consecutive = true; +- return buf_b + len_b - (rem_b - rem_a); ++ start = buf_b + len_b - (rem_b - rem_a); ++ return adj_for_padding(start, buf_a, len_a); + } + if (cmp < 0) + return buf_b; /* tsc_a < tsc_b => no overlap */ +@@ -2673,7 +2704,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a, + found = memmem(buf_a, len_a, buf_b, len_a); + if (found) { + *consecutive = true; +- return buf_b + len_a; ++ return adj_for_padding(buf_b + len_a, buf_a, len_a); + } + + /* Try again at next PSB in buffer 'a' */ +diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c +index 3b118fa9da89..e8e05e7838b2 100644 +--- a/tools/perf/util/intel-pt.c ++++ b/tools/perf/util/intel-pt.c +@@ -2545,6 +2545,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event, + } + + pt->timeless_decoding = intel_pt_timeless_decoding(pt); ++ if (pt->timeless_decoding && !pt->tc.time_mult) ++ pt->tc.time_mult = 1; + pt->have_tsc = intel_pt_have_tsc(pt); + pt->sampling_mode = false; + pt->est_tsc = !pt->timeless_decoding; +diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c +index ec275b8472a9..225dc671ae31 100644 +--- a/virt/kvm/arm/mmu.c ++++ b/virt/kvm/arm/mmu.c +@@ -1955,7 +1955,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, + return 0; + } + +-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) ++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) + { + } + +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index 9b79818758dc..66cc315efa6d 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -856,6 +856,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm, + int as_id, struct kvm_memslots *slots) + { + struct kvm_memslots *old_memslots = 
__kvm_memslots(kvm, as_id); ++ u64 gen; + + /* + * Set the low bit in the generation, which disables SPTE caching +@@ -878,9 +879,11 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm, + * space 0 will use generations 0, 4, 8, ... while * address space 1 will + * use generations 2, 6, 10, 14, ... + */ +- slots->generation += KVM_ADDRESS_SPACE_NUM * 2 - 1; ++ gen = slots->generation + KVM_ADDRESS_SPACE_NUM * 2 - 1; + +- kvm_arch_memslots_updated(kvm, slots); ++ kvm_arch_memslots_updated(kvm, gen); ++ ++ slots->generation = gen; + + return old_memslots; + }