From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 2A055138334 for ; Sat, 23 Mar 2019 20:25:23 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 09F4DE0AC1; Sat, 23 Mar 2019 20:25:22 +0000 (UTC) Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 89289E0AC1 for ; Sat, 23 Mar 2019 20:25:21 +0000 (UTC) Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 19AA0335CE9 for ; Sat, 23 Mar 2019 20:25:06 +0000 (UTC) Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id A5E97326 for ; Sat, 23 Mar 2019 20:25:03 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1553372678.e32412028254c50ce52ba2ee81c7ed4d13c8843d.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:5.0 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1003_linux-5.0.4.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: e32412028254c50ce52ba2ee81c7ed4d13c8843d X-VCS-Branch: 5.0 Date: Sat, 23 Mar 2019 20:25:03 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply X-Archives-Salt: b15cc6a4-87da-4681-b34e-0a8cc59de8d2 X-Archives-Hash: da3e9c56ec435beb86eae93a4c939162 commit: e32412028254c50ce52ba2ee81c7ed4d13c8843d Author: Mike Pagano gentoo org> AuthorDate: Sat Mar 23 20:24:38 2019 +0000 Commit: Mike Pagano gentoo org> CommitDate: Sat Mar 23 20:24:38 2019 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e3241202 proj/linux-patches: Linux patch 5.0.4 Signed-off-by: Mike Pagano gentoo.org> 0000_README | 4 + 1003_linux-5.0.4.patch | 11152 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 11156 insertions(+) diff --git a/0000_README b/0000_README index 4989a60..1974ef5 100644 --- a/0000_README +++ b/0000_README @@ -55,6 +55,10 @@ Patch: 1002_linux-5.0.3.patch From: http://www.kernel.org Desc: Linux 5.0.3 +Patch: 1003_linux-5.0.4.patch +From: http://www.kernel.org +Desc: Linux 5.0.4 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1003_linux-5.0.4.patch b/1003_linux-5.0.4.patch new file mode 100644 index 0000000..4bb590f --- /dev/null +++ b/1003_linux-5.0.4.patch @@ -0,0 +1,11152 @@ +diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt +index e133ccd60228..acfe3d0f78d1 100644 +--- a/Documentation/DMA-API.txt ++++ b/Documentation/DMA-API.txt +@@ -195,6 +195,14 @@ Requesting the required mask does not alter the current mask. 
If you + wish to take advantage of it, you should issue a dma_set_mask() + call to set the mask to the value returned. + ++:: ++ ++ size_t ++ dma_direct_max_mapping_size(struct device *dev); ++ ++Returns the maximum size of a mapping for the device. The size parameter ++of the mapping functions like dma_map_single(), dma_map_page() and ++others should not be larger than the returned value. + + Part Id - Streaming DMA mappings + -------------------------------- +diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt +index 1f09d043d086..ddb8ce5333ba 100644 +--- a/Documentation/arm64/silicon-errata.txt ++++ b/Documentation/arm64/silicon-errata.txt +@@ -44,6 +44,8 @@ stable kernels. + + | Implementor | Component | Erratum ID | Kconfig | + +----------------+-----------------+-----------------+-----------------------------+ ++| Allwinner | A64/R18 | UNKNOWN1 | SUN50I_ERRATUM_UNKNOWN1 | ++| | | | | + | ARM | Cortex-A53 | #826319 | ARM64_ERRATUM_826319 | + | ARM | Cortex-A53 | #827319 | ARM64_ERRATUM_827319 | + | ARM | Cortex-A53 | #824069 | ARM64_ERRATUM_824069 | +diff --git a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt +index a10c1f89037d..e1fe02f3e3e9 100644 +--- a/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt ++++ b/Documentation/devicetree/bindings/iio/adc/samsung,exynos-adc.txt +@@ -11,11 +11,13 @@ New driver handles the following + + Required properties: + - compatible: Must be "samsung,exynos-adc-v1" +- for exynos4412/5250 controllers. ++ for Exynos5250 controllers. + Must be "samsung,exynos-adc-v2" for + future controllers. + Must be "samsung,exynos3250-adc" for + controllers compatible with ADC of Exynos3250. ++ Must be "samsung,exynos4212-adc" for ++ controllers compatible with ADC of Exynos4212 and Exynos4412. + Must be "samsung,exynos7-adc" for + the ADC in Exynos7 and compatibles + Must be "samsung,s3c2410-adc" for +diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst +index 0de6f6145cc6..7ba8cd567f84 100644 +--- a/Documentation/process/stable-kernel-rules.rst ++++ b/Documentation/process/stable-kernel-rules.rst +@@ -38,6 +38,9 @@ Procedure for submitting patches to the -stable tree + - If the patch covers files in net/ or drivers/net please follow netdev stable + submission guidelines as described in + :ref:`Documentation/networking/netdev-FAQ.rst ` ++ after first checking the stable networking queue at ++ https://patchwork.ozlabs.org/bundle/davem/stable/?series=&submitter=&state=*&q=&archive= ++ to ensure the requested patch is not already queued up. + - Security patches should not be handled (solely) by the -stable review + process but should follow the procedures in + :ref:`Documentation/admin-guide/security-bugs.rst `. +diff --git a/Makefile b/Makefile +index fb888787e7d1..06fda21614bc 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 5 + PATCHLEVEL = 0 +-SUBLEVEL = 3 ++SUBLEVEL = 4 + EXTRAVERSION = + NAME = Shy Crocodile + +diff --git a/arch/arm/crypto/crct10dif-ce-core.S b/arch/arm/crypto/crct10dif-ce-core.S +index ce45ba0c0687..16019b5961e7 100644 +--- a/arch/arm/crypto/crct10dif-ce-core.S ++++ b/arch/arm/crypto/crct10dif-ce-core.S +@@ -124,10 +124,10 @@ ENTRY(crc_t10dif_pmull) + vext.8 q10, qzr, q0, #4 + + // receive the initial 64B data, xor the initial crc value +- vld1.64 {q0-q1}, [arg2, :128]! 
+- vld1.64 {q2-q3}, [arg2, :128]! +- vld1.64 {q4-q5}, [arg2, :128]! +- vld1.64 {q6-q7}, [arg2, :128]! ++ vld1.64 {q0-q1}, [arg2]! ++ vld1.64 {q2-q3}, [arg2]! ++ vld1.64 {q4-q5}, [arg2]! ++ vld1.64 {q6-q7}, [arg2]! + CPU_LE( vrev64.8 q0, q0 ) + CPU_LE( vrev64.8 q1, q1 ) + CPU_LE( vrev64.8 q2, q2 ) +@@ -167,7 +167,7 @@ CPU_LE( vrev64.8 q7, q7 ) + _fold_64_B_loop: + + .macro fold64, reg1, reg2 +- vld1.64 {q11-q12}, [arg2, :128]! ++ vld1.64 {q11-q12}, [arg2]! + + vmull.p64 q8, \reg1\()h, d21 + vmull.p64 \reg1, \reg1\()l, d20 +@@ -238,7 +238,7 @@ _16B_reduction_loop: + vmull.p64 q7, d15, d21 + veor.8 q7, q7, q8 + +- vld1.64 {q0}, [arg2, :128]! ++ vld1.64 {q0}, [arg2]! + CPU_LE( vrev64.8 q0, q0 ) + vswp d0, d1 + veor.8 q7, q7, q0 +@@ -335,7 +335,7 @@ _less_than_128: + vmov.i8 q0, #0 + vmov s3, arg1_low32 // get the initial crc value + +- vld1.64 {q7}, [arg2, :128]! ++ vld1.64 {q7}, [arg2]! + CPU_LE( vrev64.8 q7, q7 ) + vswp d14, d15 + veor.8 q7, q7, q0 +diff --git a/arch/arm/crypto/crct10dif-ce-glue.c b/arch/arm/crypto/crct10dif-ce-glue.c +index d428355cf38d..14c19c70a841 100644 +--- a/arch/arm/crypto/crct10dif-ce-glue.c ++++ b/arch/arm/crypto/crct10dif-ce-glue.c +@@ -35,26 +35,15 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data, + unsigned int length) + { + u16 *crc = shash_desc_ctx(desc); +- unsigned int l; + +- if (!may_use_simd()) { +- *crc = crc_t10dif_generic(*crc, data, length); ++ if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) { ++ kernel_neon_begin(); ++ *crc = crc_t10dif_pmull(*crc, data, length); ++ kernel_neon_end(); + } else { +- if (unlikely((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) { +- l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE - +- ((u32)data % CRC_T10DIF_PMULL_CHUNK_SIZE)); +- +- *crc = crc_t10dif_generic(*crc, data, l); +- +- length -= l; +- data += l; +- } +- if (length > 0) { +- kernel_neon_begin(); +- *crc = crc_t10dif_pmull(*crc, data, length); +- kernel_neon_end(); +- } ++ *crc = crc_t10dif_generic(*crc, data, length); + } ++ + return 0; + } + +diff --git a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c +index 058ce73137e8..5d819b6ea428 100644 +--- a/arch/arm/mach-s3c24xx/mach-osiris-dvs.c ++++ b/arch/arm/mach-s3c24xx/mach-osiris-dvs.c +@@ -65,16 +65,16 @@ static int osiris_dvs_notify(struct notifier_block *nb, + + switch (val) { + case CPUFREQ_PRECHANGE: +- if (old_dvs & !new_dvs || +- cur_dvs & !new_dvs) { ++ if ((old_dvs && !new_dvs) || ++ (cur_dvs && !new_dvs)) { + pr_debug("%s: exiting dvs\n", __func__); + cur_dvs = false; + gpio_set_value(OSIRIS_GPIO_DVS, 1); + } + break; + case CPUFREQ_POSTCHANGE: +- if (!old_dvs & new_dvs || +- !cur_dvs & new_dvs) { ++ if ((!old_dvs && new_dvs) || ++ (!cur_dvs && new_dvs)) { + pr_debug("entering dvs\n"); + cur_dvs = true; + gpio_set_value(OSIRIS_GPIO_DVS, 0); +diff --git a/arch/arm64/crypto/aes-ce-ccm-core.S b/arch/arm64/crypto/aes-ce-ccm-core.S +index e3a375c4cb83..1b151442dac1 100644 +--- a/arch/arm64/crypto/aes-ce-ccm-core.S ++++ b/arch/arm64/crypto/aes-ce-ccm-core.S +@@ -74,12 +74,13 @@ ENTRY(ce_aes_ccm_auth_data) + beq 10f + ext v0.16b, v0.16b, v0.16b, #1 /* rotate out the mac bytes */ + b 7b +-8: mov w7, w8 ++8: cbz w8, 91f ++ mov w7, w8 + add w8, w8, #16 + 9: ext v1.16b, v1.16b, v1.16b, #1 + adds w7, w7, #1 + bne 9b +- eor v0.16b, v0.16b, v1.16b ++91: eor v0.16b, v0.16b, v1.16b + st1 {v0.16b}, [x0] + 10: str w8, [x3] + ret +diff --git a/arch/arm64/crypto/aes-ce-ccm-glue.c b/arch/arm64/crypto/aes-ce-ccm-glue.c +index 68b11aa690e4..986191e8c058 
100644 +--- a/arch/arm64/crypto/aes-ce-ccm-glue.c ++++ b/arch/arm64/crypto/aes-ce-ccm-glue.c +@@ -125,7 +125,7 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[], + abytes -= added; + } + +- while (abytes > AES_BLOCK_SIZE) { ++ while (abytes >= AES_BLOCK_SIZE) { + __aes_arm64_encrypt(key->key_enc, mac, mac, + num_rounds(key)); + crypto_xor(mac, in, AES_BLOCK_SIZE); +@@ -139,8 +139,6 @@ static void ccm_update_mac(struct crypto_aes_ctx *key, u8 mac[], u8 const in[], + num_rounds(key)); + crypto_xor(mac, in, abytes); + *macp = abytes; +- } else { +- *macp = 0; + } + } + } +diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S +index e613a87f8b53..8432c8d0dea6 100644 +--- a/arch/arm64/crypto/aes-neonbs-core.S ++++ b/arch/arm64/crypto/aes-neonbs-core.S +@@ -971,18 +971,22 @@ CPU_LE( rev x8, x8 ) + + 8: next_ctr v0 + st1 {v0.16b}, [x24] +- cbz x23, 0f ++ cbz x23, .Lctr_done + + cond_yield_neon 98b + b 99b + +-0: frame_pop ++.Lctr_done: ++ frame_pop + ret + + /* + * If we are handling the tail of the input (x6 != NULL), return the + * final keystream block back to the caller. + */ ++0: cbz x25, 8b ++ st1 {v0.16b}, [x25] ++ b 8b + 1: cbz x25, 8b + st1 {v1.16b}, [x25] + b 8b +diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c +index b461d62023f2..567c24f3d224 100644 +--- a/arch/arm64/crypto/crct10dif-ce-glue.c ++++ b/arch/arm64/crypto/crct10dif-ce-glue.c +@@ -39,26 +39,13 @@ static int crct10dif_update(struct shash_desc *desc, const u8 *data, + unsigned int length) + { + u16 *crc = shash_desc_ctx(desc); +- unsigned int l; + +- if (unlikely((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)) { +- l = min_t(u32, length, CRC_T10DIF_PMULL_CHUNK_SIZE - +- ((u64)data % CRC_T10DIF_PMULL_CHUNK_SIZE)); +- +- *crc = crc_t10dif_generic(*crc, data, l); +- +- length -= l; +- data += l; +- } +- +- if (length > 0) { +- if (may_use_simd()) { +- kernel_neon_begin(); +- *crc = crc_t10dif_pmull(*crc, data, length); +- kernel_neon_end(); +- } else { +- *crc = crc_t10dif_generic(*crc, data, length); +- } ++ if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) { ++ kernel_neon_begin(); ++ *crc = crc_t10dif_pmull(*crc, data, length); ++ kernel_neon_end(); ++ } else { ++ *crc = crc_t10dif_generic(*crc, data, length); + } + + return 0; +diff --git a/arch/arm64/include/asm/hardirq.h b/arch/arm64/include/asm/hardirq.h +index 1473fc2f7ab7..89691c86640a 100644 +--- a/arch/arm64/include/asm/hardirq.h ++++ b/arch/arm64/include/asm/hardirq.h +@@ -17,8 +17,12 @@ + #define __ASM_HARDIRQ_H + + #include ++#include + #include ++#include + #include ++#include ++#include + + #define NR_IPI 7 + +@@ -37,6 +41,33 @@ u64 smp_irq_stat_cpu(unsigned int cpu); + + #define __ARCH_IRQ_EXIT_IRQS_DISABLED 1 + ++struct nmi_ctx { ++ u64 hcr; ++}; ++ ++DECLARE_PER_CPU(struct nmi_ctx, nmi_contexts); ++ ++#define arch_nmi_enter() \ ++ do { \ ++ if (is_kernel_in_hyp_mode()) { \ ++ struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts); \ ++ nmi_ctx->hcr = read_sysreg(hcr_el2); \ ++ if (!(nmi_ctx->hcr & HCR_TGE)) { \ ++ write_sysreg(nmi_ctx->hcr | HCR_TGE, hcr_el2); \ ++ isb(); \ ++ } \ ++ } \ ++ } while (0) ++ ++#define arch_nmi_exit() \ ++ do { \ ++ if (is_kernel_in_hyp_mode()) { \ ++ struct nmi_ctx *nmi_ctx = this_cpu_ptr(&nmi_contexts); \ ++ if (!(nmi_ctx->hcr & HCR_TGE)) \ ++ write_sysreg(nmi_ctx->hcr, hcr_el2); \ ++ } \ ++ } while (0) ++ + static inline void ack_bad_irq(unsigned int irq) + { + extern unsigned long irq_err_count; +diff --git 
a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c +index 780a12f59a8f..92fa81798fb9 100644 +--- a/arch/arm64/kernel/irq.c ++++ b/arch/arm64/kernel/irq.c +@@ -33,6 +33,9 @@ + + unsigned long irq_err_count; + ++/* Only access this in an NMI enter/exit */ ++DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts); ++ + DEFINE_PER_CPU(unsigned long *, irq_stack_ptr); + + int arch_show_interrupts(struct seq_file *p, int prec) +diff --git a/arch/arm64/kernel/kgdb.c b/arch/arm64/kernel/kgdb.c +index ce46c4cdf368..691854b77c7f 100644 +--- a/arch/arm64/kernel/kgdb.c ++++ b/arch/arm64/kernel/kgdb.c +@@ -244,27 +244,33 @@ int kgdb_arch_handle_exception(int exception_vector, int signo, + + static int kgdb_brk_fn(struct pt_regs *regs, unsigned int esr) + { ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + kgdb_handle_exception(1, SIGTRAP, 0, regs); +- return 0; ++ return DBG_HOOK_HANDLED; + } + NOKPROBE_SYMBOL(kgdb_brk_fn) + + static int kgdb_compiled_brk_fn(struct pt_regs *regs, unsigned int esr) + { ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + compiled_break = 1; + kgdb_handle_exception(1, SIGTRAP, 0, regs); + +- return 0; ++ return DBG_HOOK_HANDLED; + } + NOKPROBE_SYMBOL(kgdb_compiled_brk_fn); + + static int kgdb_step_brk_fn(struct pt_regs *regs, unsigned int esr) + { +- if (!kgdb_single_step) ++ if (user_mode(regs) || !kgdb_single_step) + return DBG_HOOK_ERROR; + + kgdb_handle_exception(1, SIGTRAP, 0, regs); +- return 0; ++ return DBG_HOOK_HANDLED; + } + NOKPROBE_SYMBOL(kgdb_step_brk_fn); + +diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c +index f17afb99890c..7fb6f3aa5ceb 100644 +--- a/arch/arm64/kernel/probes/kprobes.c ++++ b/arch/arm64/kernel/probes/kprobes.c +@@ -450,6 +450,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr) + struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); + int retval; + ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + /* return error if this is not our step */ + retval = kprobe_ss_hit(kcb, instruction_pointer(regs)); + +@@ -466,6 +469,9 @@ kprobe_single_step_handler(struct pt_regs *regs, unsigned int esr) + int __kprobes + kprobe_breakpoint_handler(struct pt_regs *regs, unsigned int esr) + { ++ if (user_mode(regs)) ++ return DBG_HOOK_ERROR; ++ + kprobe_handler(regs); + return DBG_HOOK_HANDLED; + } +diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c +index c936aa40c3f4..b6dac3a68508 100644 +--- a/arch/arm64/kvm/sys_regs.c ++++ b/arch/arm64/kvm/sys_regs.c +@@ -1476,7 +1476,7 @@ static const struct sys_reg_desc sys_reg_descs[] = { + + { SYS_DESC(SYS_DACR32_EL2), NULL, reset_unknown, DACR32_EL2 }, + { SYS_DESC(SYS_IFSR32_EL2), NULL, reset_unknown, IFSR32_EL2 }, +- { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x70 }, ++ { SYS_DESC(SYS_FPEXC32_EL2), NULL, reset_val, FPEXC32_EL2, 0x700 }, + }; + + static bool trap_dbgidr(struct kvm_vcpu *vcpu, +diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c +index efb7b2cbead5..ef46925096f0 100644 +--- a/arch/arm64/mm/fault.c ++++ b/arch/arm64/mm/fault.c +@@ -824,11 +824,12 @@ void __init hook_debug_fault_code(int nr, + debug_fault_info[nr].name = name; + } + +-asmlinkage int __exception do_debug_exception(unsigned long addr, ++asmlinkage int __exception do_debug_exception(unsigned long addr_if_watchpoint, + unsigned int esr, + struct pt_regs *regs) + { + const struct fault_info *inf = esr_to_debug_fault_info(esr); ++ unsigned long pc = instruction_pointer(regs); + int rv; + + /* +@@ -838,14 +839,14 @@ asmlinkage int __exception 
do_debug_exception(unsigned long addr, + if (interrupts_enabled(regs)) + trace_hardirqs_off(); + +- if (user_mode(regs) && !is_ttbr0_addr(instruction_pointer(regs))) ++ if (user_mode(regs) && !is_ttbr0_addr(pc)) + arm64_apply_bp_hardening(); + +- if (!inf->fn(addr, esr, regs)) { ++ if (!inf->fn(addr_if_watchpoint, esr, regs)) { + rv = 1; + } else { + arm64_notify_die(inf->name, regs, +- inf->sig, inf->code, (void __user *)addr, esr); ++ inf->sig, inf->code, (void __user *)pc, esr); + rv = 0; + } + +diff --git a/arch/m68k/Makefile b/arch/m68k/Makefile +index f00ca53f8c14..482513b9af2c 100644 +--- a/arch/m68k/Makefile ++++ b/arch/m68k/Makefile +@@ -58,7 +58,10 @@ cpuflags-$(CONFIG_M5206e) := $(call cc-option,-mcpu=5206e,-m5200) + cpuflags-$(CONFIG_M5206) := $(call cc-option,-mcpu=5206,-m5200) + + KBUILD_AFLAGS += $(cpuflags-y) +-KBUILD_CFLAGS += $(cpuflags-y) -pipe ++KBUILD_CFLAGS += $(cpuflags-y) ++ ++KBUILD_CFLAGS += -pipe -ffreestanding ++ + ifdef CONFIG_MMU + # without -fno-strength-reduce the 53c7xx.c driver fails ;-( + KBUILD_CFLAGS += -fno-strength-reduce -ffixed-a2 +diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h +index d2abd98471e8..41204a49cf95 100644 +--- a/arch/mips/include/asm/kvm_host.h ++++ b/arch/mips/include/asm/kvm_host.h +@@ -1134,7 +1134,7 @@ static inline void kvm_arch_hardware_unsetup(void) {} + static inline void kvm_arch_sync_events(struct kvm *kvm) {} + static inline void kvm_arch_free_memslot(struct kvm *kvm, + struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {} +-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {} ++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} + static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} + static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {} + static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {} +diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb.h b/arch/powerpc/include/asm/book3s/64/hugetlb.h +index 5b0177733994..46130ef4941c 100644 +--- a/arch/powerpc/include/asm/book3s/64/hugetlb.h ++++ b/arch/powerpc/include/asm/book3s/64/hugetlb.h +@@ -35,6 +35,14 @@ static inline int hstate_get_psize(struct hstate *hstate) + #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE + static inline bool gigantic_page_supported(void) + { ++ /* ++ * We used gigantic page reservation with hypervisor assist in some case. ++ * We cannot use runtime allocation of gigantic pages in those platforms ++ * This is hash translation mode LPARs. 
++ */ ++ if (firmware_has_feature(FW_FEATURE_LPAR) && !radix_enabled()) ++ return false; ++ + return true; + } + #endif +diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h +index 0f98f00da2ea..19693b8add93 100644 +--- a/arch/powerpc/include/asm/kvm_host.h ++++ b/arch/powerpc/include/asm/kvm_host.h +@@ -837,7 +837,7 @@ struct kvm_vcpu_arch { + static inline void kvm_arch_hardware_disable(void) {} + static inline void kvm_arch_hardware_unsetup(void) {} + static inline void kvm_arch_sync_events(struct kvm *kvm) {} +-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {} ++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} + static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {} + static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} + static inline void kvm_arch_exit(void) {} +diff --git a/arch/powerpc/include/asm/powernv.h b/arch/powerpc/include/asm/powernv.h +index 2f3ff7a27881..d85fcfea32ca 100644 +--- a/arch/powerpc/include/asm/powernv.h ++++ b/arch/powerpc/include/asm/powernv.h +@@ -23,6 +23,8 @@ extern int pnv_npu2_handle_fault(struct npu_context *context, uintptr_t *ea, + unsigned long *flags, unsigned long *status, + int count); + ++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val); ++ + void pnv_tm_init(void); + #else + static inline void powernv_set_nmmu_ptcr(unsigned long ptcr) { } +diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S +index 0768dfd8a64e..fdd528cdb2ee 100644 +--- a/arch/powerpc/kernel/entry_32.S ++++ b/arch/powerpc/kernel/entry_32.S +@@ -745,6 +745,9 @@ fast_exception_return: + mtcr r10 + lwz r10,_LINK(r11) + mtlr r10 ++ /* Clear the exception_marker on the stack to avoid confusing stacktrace */ ++ li r10, 0 ++ stw r10, 8(r11) + REST_GPR(10, r11) + #if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS) + mtspr SPRN_NRI, r0 +@@ -982,6 +985,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX) + mtcrf 0xFF,r10 + mtlr r11 + ++ /* Clear the exception_marker on the stack to avoid confusing stacktrace */ ++ li r10, 0 ++ stw r10, 8(r1) + /* + * Once we put values in SRR0 and SRR1, we are in a state + * where exceptions are not recoverable, since taking an +@@ -1021,6 +1027,9 @@ exc_exit_restart_end: + mtlr r11 + lwz r10,_CCR(r1) + mtcrf 0xff,r10 ++ /* Clear the exception_marker on the stack to avoid confusing stacktrace */ ++ li r10, 0 ++ stw r10, 8(r1) + REST_2GPRS(9, r1) + .globl exc_exit_restart + exc_exit_restart: +diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c +index ce393df243aa..71bad4b6f80d 100644 +--- a/arch/powerpc/kernel/process.c ++++ b/arch/powerpc/kernel/process.c +@@ -176,7 +176,7 @@ static void __giveup_fpu(struct task_struct *tsk) + + save_fpu(tsk); + msr = tsk->thread.regs->msr; +- msr &= ~MSR_FP; ++ msr &= ~(MSR_FP|MSR_FE0|MSR_FE1); + #ifdef CONFIG_VSX + if (cpu_has_feature(CPU_FTR_VSX)) + msr &= ~MSR_VSX; +diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c +index cdd5d1d3ae41..53151698bfe0 100644 +--- a/arch/powerpc/kernel/ptrace.c ++++ b/arch/powerpc/kernel/ptrace.c +@@ -561,6 +561,7 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset, + /* + * Copy out only the low-order word of vrsave. 
+ */ ++ int start, end; + union { + elf_vrreg_t reg; + u32 word; +@@ -569,8 +570,10 @@ static int vr_get(struct task_struct *target, const struct user_regset *regset, + + vrsave.word = target->thread.vrsave; + ++ start = 33 * sizeof(vector128); ++ end = start + sizeof(vrsave); + ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &vrsave, +- 33 * sizeof(vector128), -1); ++ start, end); + } + + return ret; +@@ -608,6 +611,7 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset, + /* + * We use only the first word of vrsave. + */ ++ int start, end; + union { + elf_vrreg_t reg; + u32 word; +@@ -616,8 +620,10 @@ static int vr_set(struct task_struct *target, const struct user_regset *regset, + + vrsave.word = target->thread.vrsave; + ++ start = 33 * sizeof(vector128); ++ end = start + sizeof(vrsave); + ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &vrsave, +- 33 * sizeof(vector128), -1); ++ start, end); + if (!ret) + target->thread.vrsave = vrsave.word; + } +diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c +index 3f15edf25a0d..6e521a3f67ca 100644 +--- a/arch/powerpc/kernel/smp.c ++++ b/arch/powerpc/kernel/smp.c +@@ -358,13 +358,12 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask) + * NMI IPIs may not be recoverable, so should not be used as ongoing part of + * a running system. They can be used for crash, debug, halt/reboot, etc. + * +- * NMI IPIs are globally single threaded. No more than one in progress at +- * any time. +- * + * The IPI call waits with interrupts disabled until all targets enter the +- * NMI handler, then the call returns. ++ * NMI handler, then returns. Subsequent IPIs can be issued before targets ++ * have returned from their handlers, so there is no guarantee about ++ * concurrency or re-entrancy. + * +- * No new NMI can be initiated until targets exit the handler. ++ * A new NMI can be issued before all targets exit the handler. + * + * The IPI call may time out without all targets entering the NMI handler. + * In that case, there is some logic to recover (and ignore subsequent +@@ -375,7 +374,7 @@ void arch_send_call_function_ipi_mask(const struct cpumask *mask) + + static atomic_t __nmi_ipi_lock = ATOMIC_INIT(0); + static struct cpumask nmi_ipi_pending_mask; +-static int nmi_ipi_busy_count = 0; ++static bool nmi_ipi_busy = false; + static void (*nmi_ipi_function)(struct pt_regs *) = NULL; + + static void nmi_ipi_lock_start(unsigned long *flags) +@@ -414,7 +413,7 @@ static void nmi_ipi_unlock_end(unsigned long *flags) + */ + int smp_handle_nmi_ipi(struct pt_regs *regs) + { +- void (*fn)(struct pt_regs *); ++ void (*fn)(struct pt_regs *) = NULL; + unsigned long flags; + int me = raw_smp_processor_id(); + int ret = 0; +@@ -425,29 +424,17 @@ int smp_handle_nmi_ipi(struct pt_regs *regs) + * because the caller may have timed out. 
+ */ + nmi_ipi_lock_start(&flags); +- if (!nmi_ipi_busy_count) +- goto out; +- if (!cpumask_test_cpu(me, &nmi_ipi_pending_mask)) +- goto out; +- +- fn = nmi_ipi_function; +- if (!fn) +- goto out; +- +- cpumask_clear_cpu(me, &nmi_ipi_pending_mask); +- nmi_ipi_busy_count++; +- nmi_ipi_unlock(); +- +- ret = 1; +- +- fn(regs); +- +- nmi_ipi_lock(); +- if (nmi_ipi_busy_count > 1) /* Can race with caller time-out */ +- nmi_ipi_busy_count--; +-out: ++ if (cpumask_test_cpu(me, &nmi_ipi_pending_mask)) { ++ cpumask_clear_cpu(me, &nmi_ipi_pending_mask); ++ fn = READ_ONCE(nmi_ipi_function); ++ WARN_ON_ONCE(!fn); ++ ret = 1; ++ } + nmi_ipi_unlock_end(&flags); + ++ if (fn) ++ fn(regs); ++ + return ret; + } + +@@ -473,7 +460,7 @@ static void do_smp_send_nmi_ipi(int cpu, bool safe) + * - cpu is the target CPU (must not be this CPU), or NMI_IPI_ALL_OTHERS. + * - fn is the target callback function. + * - delay_us > 0 is the delay before giving up waiting for targets to +- * complete executing the handler, == 0 specifies indefinite delay. ++ * begin executing the handler, == 0 specifies indefinite delay. + */ + int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool safe) + { +@@ -487,31 +474,33 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool + if (unlikely(!smp_ops)) + return 0; + +- /* Take the nmi_ipi_busy count/lock with interrupts hard disabled */ + nmi_ipi_lock_start(&flags); +- while (nmi_ipi_busy_count) { ++ while (nmi_ipi_busy) { + nmi_ipi_unlock_end(&flags); +- spin_until_cond(nmi_ipi_busy_count == 0); ++ spin_until_cond(!nmi_ipi_busy); + nmi_ipi_lock_start(&flags); + } +- ++ nmi_ipi_busy = true; + nmi_ipi_function = fn; + ++ WARN_ON_ONCE(!cpumask_empty(&nmi_ipi_pending_mask)); ++ + if (cpu < 0) { + /* ALL_OTHERS */ + cpumask_copy(&nmi_ipi_pending_mask, cpu_online_mask); + cpumask_clear_cpu(me, &nmi_ipi_pending_mask); + } else { +- /* cpumask starts clear */ + cpumask_set_cpu(cpu, &nmi_ipi_pending_mask); + } +- nmi_ipi_busy_count++; ++ + nmi_ipi_unlock(); + ++ /* Interrupts remain hard disabled */ ++ + do_smp_send_nmi_ipi(cpu, safe); + + nmi_ipi_lock(); +- /* nmi_ipi_busy_count is held here, so unlock/lock is okay */ ++ /* nmi_ipi_busy is set here, so unlock/lock is okay */ + while (!cpumask_empty(&nmi_ipi_pending_mask)) { + nmi_ipi_unlock(); + udelay(1); +@@ -523,29 +512,15 @@ int __smp_send_nmi_ipi(int cpu, void (*fn)(struct pt_regs *), u64 delay_us, bool + } + } + +- while (nmi_ipi_busy_count > 1) { +- nmi_ipi_unlock(); +- udelay(1); +- nmi_ipi_lock(); +- if (delay_us) { +- delay_us--; +- if (!delay_us) +- break; +- } +- } +- + if (!cpumask_empty(&nmi_ipi_pending_mask)) { + /* Timeout waiting for CPUs to call smp_handle_nmi_ipi */ + ret = 0; + cpumask_clear(&nmi_ipi_pending_mask); + } +- if (nmi_ipi_busy_count > 1) { +- /* Timeout waiting for CPUs to execute fn */ +- ret = 0; +- nmi_ipi_busy_count = 1; +- } + +- nmi_ipi_busy_count--; ++ nmi_ipi_function = NULL; ++ nmi_ipi_busy = false; ++ + nmi_ipi_unlock_end(&flags); + + return ret; +@@ -613,17 +588,8 @@ void crash_send_ipi(void (*crash_ipi_callback)(struct pt_regs *)) + static void nmi_stop_this_cpu(struct pt_regs *regs) + { + /* +- * This is a special case because it never returns, so the NMI IPI +- * handling would never mark it as done, which makes any later +- * smp_send_nmi_ipi() call spin forever. Mark it done now. +- * + * IRQs are already hard disabled by the smp_handle_nmi_ipi. 
+ */ +- nmi_ipi_lock(); +- if (nmi_ipi_busy_count > 1) +- nmi_ipi_busy_count--; +- nmi_ipi_unlock(); +- + spin_begin(); + while (1) + spin_cpu_relax(); +diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c +index 64936b60d521..7a1de34f38c8 100644 +--- a/arch/powerpc/kernel/traps.c ++++ b/arch/powerpc/kernel/traps.c +@@ -763,15 +763,15 @@ void machine_check_exception(struct pt_regs *regs) + if (check_io_access(regs)) + goto bail; + +- /* Must die if the interrupt is not recoverable */ +- if (!(regs->msr & MSR_RI)) +- nmi_panic(regs, "Unrecoverable Machine check"); +- + if (!nested) + nmi_exit(); + + die("Machine check", regs, SIGBUS); + ++ /* Must die if the interrupt is not recoverable */ ++ if (!(regs->msr & MSR_RI)) ++ nmi_panic(regs, "Unrecoverable Machine check"); ++ + return; + + bail: +@@ -1542,8 +1542,8 @@ bail: + + void StackOverflow(struct pt_regs *regs) + { +- printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n", +- current, regs->gpr[1]); ++ pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n", ++ current->comm, task_pid_nr(current), regs->gpr[1]); + debugger(regs); + show_regs(regs); + panic("kernel stack overflow"); +diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +index 9b8d50a7cbaf..45b06e239d1f 100644 +--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S ++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +@@ -58,6 +58,8 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) + #define STACK_SLOT_DAWR (SFS-56) + #define STACK_SLOT_DAWRX (SFS-64) + #define STACK_SLOT_HFSCR (SFS-72) ++#define STACK_SLOT_AMR (SFS-80) ++#define STACK_SLOT_UAMOR (SFS-88) + /* the following is used by the P9 short path */ + #define STACK_SLOT_NVGPRS (SFS-152) /* 18 gprs */ + +@@ -726,11 +728,9 @@ BEGIN_FTR_SECTION + mfspr r5, SPRN_TIDR + mfspr r6, SPRN_PSSCR + mfspr r7, SPRN_PID +- mfspr r8, SPRN_IAMR + std r5, STACK_SLOT_TID(r1) + std r6, STACK_SLOT_PSSCR(r1) + std r7, STACK_SLOT_PID(r1) +- std r8, STACK_SLOT_IAMR(r1) + mfspr r5, SPRN_HFSCR + std r5, STACK_SLOT_HFSCR(r1) + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) +@@ -738,11 +738,18 @@ BEGIN_FTR_SECTION + mfspr r5, SPRN_CIABR + mfspr r6, SPRN_DAWR + mfspr r7, SPRN_DAWRX ++ mfspr r8, SPRN_IAMR + std r5, STACK_SLOT_CIABR(r1) + std r6, STACK_SLOT_DAWR(r1) + std r7, STACK_SLOT_DAWRX(r1) ++ std r8, STACK_SLOT_IAMR(r1) + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) + ++ mfspr r5, SPRN_AMR ++ std r5, STACK_SLOT_AMR(r1) ++ mfspr r6, SPRN_UAMOR ++ std r6, STACK_SLOT_UAMOR(r1) ++ + BEGIN_FTR_SECTION + /* Set partition DABR */ + /* Do this before re-enabling PMU to avoid P7 DABR corruption bug */ +@@ -1631,22 +1638,25 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_300) + mtspr SPRN_PSPB, r0 + mtspr SPRN_WORT, r0 + BEGIN_FTR_SECTION +- mtspr SPRN_IAMR, r0 + mtspr SPRN_TCSCR, r0 + /* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */ + li r0, 1 + sldi r0, r0, 31 + mtspr SPRN_MMCRS, r0 + END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_300) +-8: + +- /* Save and reset AMR and UAMOR before turning on the MMU */ ++ /* Save and restore AMR, IAMR and UAMOR before turning on the MMU */ ++ ld r8, STACK_SLOT_IAMR(r1) ++ mtspr SPRN_IAMR, r8 ++ ++8: /* Power7 jumps back in here */ + mfspr r5,SPRN_AMR + mfspr r6,SPRN_UAMOR + std r5,VCPU_AMR(r9) + std r6,VCPU_UAMOR(r9) +- li r6,0 +- mtspr SPRN_AMR,r6 ++ ld r5,STACK_SLOT_AMR(r1) ++ ld r6,STACK_SLOT_UAMOR(r1) ++ mtspr SPRN_AMR, r5 + mtspr SPRN_UAMOR, r6 + + /* Switch DSCR back to host value */ +@@ -1746,11 +1756,9 @@ BEGIN_FTR_SECTION + ld r5, STACK_SLOT_TID(r1) + ld r6, 
STACK_SLOT_PSSCR(r1) + ld r7, STACK_SLOT_PID(r1) +- ld r8, STACK_SLOT_IAMR(r1) + mtspr SPRN_TIDR, r5 + mtspr SPRN_PSSCR, r6 + mtspr SPRN_PID, r7 +- mtspr SPRN_IAMR, r8 + END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300) + + #ifdef CONFIG_PPC_RADIX_MMU +diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c +index bc3914d54e26..5986df48359b 100644 +--- a/arch/powerpc/mm/slb.c ++++ b/arch/powerpc/mm/slb.c +@@ -69,6 +69,11 @@ static void assert_slb_presence(bool present, unsigned long ea) + if (!cpu_has_feature(CPU_FTR_ARCH_206)) + return; + ++ /* ++ * slbfee. requires bit 24 (PPC bit 39) be clear in RB. Hardware ++ * ignores all other bits from 0-27, so just clear them all. ++ */ ++ ea &= ~((1UL << 28) - 1); + asm volatile(__PPC_SLBFEE_DOT(%0, %1) : "=r"(tmp) : "r"(ea) : "cr0"); + + WARN_ON(present == (tmp == 0)); +diff --git a/arch/powerpc/platforms/83xx/suspend-asm.S b/arch/powerpc/platforms/83xx/suspend-asm.S +index 3d1ecd211776..8137f77abad5 100644 +--- a/arch/powerpc/platforms/83xx/suspend-asm.S ++++ b/arch/powerpc/platforms/83xx/suspend-asm.S +@@ -26,13 +26,13 @@ + #define SS_MSR 0x74 + #define SS_SDR1 0x78 + #define SS_LR 0x7c +-#define SS_SPRG 0x80 /* 4 SPRGs */ +-#define SS_DBAT 0x90 /* 8 DBATs */ +-#define SS_IBAT 0xd0 /* 8 IBATs */ +-#define SS_TB 0x110 +-#define SS_CR 0x118 +-#define SS_GPREG 0x11c /* r12-r31 */ +-#define STATE_SAVE_SIZE 0x16c ++#define SS_SPRG 0x80 /* 8 SPRGs */ ++#define SS_DBAT 0xa0 /* 8 DBATs */ ++#define SS_IBAT 0xe0 /* 8 IBATs */ ++#define SS_TB 0x120 ++#define SS_CR 0x128 ++#define SS_GPREG 0x12c /* r12-r31 */ ++#define STATE_SAVE_SIZE 0x17c + + .section .data + .align 5 +@@ -103,6 +103,16 @@ _GLOBAL(mpc83xx_enter_deep_sleep) + stw r7, SS_SPRG+12(r3) + stw r8, SS_SDR1(r3) + ++ mfspr r4, SPRN_SPRG4 ++ mfspr r5, SPRN_SPRG5 ++ mfspr r6, SPRN_SPRG6 ++ mfspr r7, SPRN_SPRG7 ++ ++ stw r4, SS_SPRG+16(r3) ++ stw r5, SS_SPRG+20(r3) ++ stw r6, SS_SPRG+24(r3) ++ stw r7, SS_SPRG+28(r3) ++ + mfspr r4, SPRN_DBAT0U + mfspr r5, SPRN_DBAT0L + mfspr r6, SPRN_DBAT1U +@@ -493,6 +503,16 @@ mpc83xx_deep_resume: + mtspr SPRN_IBAT7U, r6 + mtspr SPRN_IBAT7L, r7 + ++ lwz r4, SS_SPRG+16(r3) ++ lwz r5, SS_SPRG+20(r3) ++ lwz r6, SS_SPRG+24(r3) ++ lwz r7, SS_SPRG+28(r3) ++ ++ mtspr SPRN_SPRG4, r4 ++ mtspr SPRN_SPRG5, r5 ++ mtspr SPRN_SPRG6, r6 ++ mtspr SPRN_SPRG7, r7 ++ + lwz r4, SS_SPRG+0(r3) + lwz r5, SS_SPRG+4(r3) + lwz r6, SS_SPRG+8(r3) +diff --git a/arch/powerpc/platforms/embedded6xx/wii.c b/arch/powerpc/platforms/embedded6xx/wii.c +index ecf703ee3a76..ac4ee88efc80 100644 +--- a/arch/powerpc/platforms/embedded6xx/wii.c ++++ b/arch/powerpc/platforms/embedded6xx/wii.c +@@ -83,6 +83,10 @@ unsigned long __init wii_mmu_mapin_mem2(unsigned long top) + /* MEM2 64MB@0x10000000 */ + delta = wii_hole_start + wii_hole_size; + size = top - delta; ++ ++ if (__map_without_bats) ++ return delta; ++ + for (bl = 128<<10; bl < max_size; bl <<= 1) { + if (bl * 2 > size) + break; +diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c +index 35f699ebb662..e52f9b06dd9c 100644 +--- a/arch/powerpc/platforms/powernv/idle.c ++++ b/arch/powerpc/platforms/powernv/idle.c +@@ -458,7 +458,8 @@ EXPORT_SYMBOL_GPL(pnv_power9_force_smt4_release); + #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */ + + #ifdef CONFIG_HOTPLUG_CPU +-static void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val) ++ ++void pnv_program_cpu_hotplug_lpcr(unsigned int cpu, u64 lpcr_val) + { + u64 pir = get_hard_smp_processor_id(cpu); + +@@ -481,20 +482,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu) + { 
+ unsigned long srr1; + u32 idle_states = pnv_get_supported_cpuidle_states(); +- u64 lpcr_val; +- +- /* +- * We don't want to take decrementer interrupts while we are +- * offline, so clear LPCR:PECE1. We keep PECE2 (and +- * LPCR_PECE_HVEE on P9) enabled as to let IPIs in. +- * +- * If the CPU gets woken up by a special wakeup, ensure that +- * the SLW engine sets LPCR with decrementer bit cleared, else +- * the CPU will come back to the kernel due to a spurious +- * wakeup. +- */ +- lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1; +- pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val); + + __ppc64_runlatch_off(); + +@@ -526,16 +513,6 @@ unsigned long pnv_cpu_offline(unsigned int cpu) + + __ppc64_runlatch_on(); + +- /* +- * Re-enable decrementer interrupts in LPCR. +- * +- * Further, we want stop states to be woken up by decrementer +- * for non-hotplug cases. So program the LPCR via stop api as +- * well. +- */ +- lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1; +- pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val); +- + return srr1; + } + #endif +diff --git a/arch/powerpc/platforms/powernv/opal-msglog.c b/arch/powerpc/platforms/powernv/opal-msglog.c +index acd3206dfae3..06628c71cef6 100644 +--- a/arch/powerpc/platforms/powernv/opal-msglog.c ++++ b/arch/powerpc/platforms/powernv/opal-msglog.c +@@ -98,7 +98,7 @@ static ssize_t opal_msglog_read(struct file *file, struct kobject *kobj, + } + + static struct bin_attribute opal_msglog_attr = { +- .attr = {.name = "msglog", .mode = 0444}, ++ .attr = {.name = "msglog", .mode = 0400}, + .read = opal_msglog_read + }; + +diff --git a/arch/powerpc/platforms/powernv/smp.c b/arch/powerpc/platforms/powernv/smp.c +index 0d354e19ef92..db09c7022635 100644 +--- a/arch/powerpc/platforms/powernv/smp.c ++++ b/arch/powerpc/platforms/powernv/smp.c +@@ -39,6 +39,7 @@ + #include + #include + #include ++#include + + #include "powernv.h" + +@@ -153,6 +154,7 @@ static void pnv_smp_cpu_kill_self(void) + { + unsigned int cpu; + unsigned long srr1, wmask; ++ u64 lpcr_val; + + /* Standard hot unplug procedure */ + /* +@@ -174,6 +176,19 @@ static void pnv_smp_cpu_kill_self(void) + if (cpu_has_feature(CPU_FTR_ARCH_207S)) + wmask = SRR1_WAKEMASK_P8; + ++ /* ++ * We don't want to take decrementer interrupts while we are ++ * offline, so clear LPCR:PECE1. We keep PECE2 (and ++ * LPCR_PECE_HVEE on P9) enabled so as to let IPIs in. ++ * ++ * If the CPU gets woken up by a special wakeup, ensure that ++ * the SLW engine sets LPCR with decrementer bit cleared, else ++ * the CPU will come back to the kernel due to a spurious ++ * wakeup. ++ */ ++ lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1; ++ pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val); ++ + while (!generic_check_cpu_restart(cpu)) { + /* + * Clear IPI flag, since we don't handle IPIs while +@@ -246,6 +261,16 @@ static void pnv_smp_cpu_kill_self(void) + + } + ++ /* ++ * Re-enable decrementer interrupts in LPCR. ++ * ++ * Further, we want stop states to be woken up by decrementer ++ * for non-hotplug cases. So program the LPCR via stop api as ++ * well. 
++ */ ++ lpcr_val = mfspr(SPRN_LPCR) | (u64)LPCR_PECE1; ++ pnv_program_cpu_hotplug_lpcr(cpu, lpcr_val); ++ + DBG("CPU%d coming online...\n", cpu); + } + +diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h +index d5d24889c3bc..c2b8c8c6c9be 100644 +--- a/arch/s390/include/asm/kvm_host.h ++++ b/arch/s390/include/asm/kvm_host.h +@@ -878,7 +878,7 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {} + static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {} + static inline void kvm_arch_free_memslot(struct kvm *kvm, + struct kvm_memory_slot *free, struct kvm_memory_slot *dont) {} +-static inline void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) {} ++static inline void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) {} + static inline void kvm_arch_flush_shadow_all(struct kvm *kvm) {} + static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm, + struct kvm_memory_slot *slot) {} +diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c +index 7ed90a759135..01a3f4964d57 100644 +--- a/arch/s390/kernel/setup.c ++++ b/arch/s390/kernel/setup.c +@@ -369,7 +369,7 @@ void __init arch_call_rest_init(void) + : : [_frame] "a" (frame)); + } + +-static void __init setup_lowcore(void) ++static void __init setup_lowcore_dat_off(void) + { + struct lowcore *lc; + +@@ -380,19 +380,16 @@ static void __init setup_lowcore(void) + lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc)); + lc->restart_psw.mask = PSW_KERNEL_BITS; + lc->restart_psw.addr = (unsigned long) restart_int_handler; +- lc->external_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_MCHECK; ++ lc->external_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK; + lc->external_new_psw.addr = (unsigned long) ext_int_handler; + lc->svc_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK; ++ PSW_MASK_IO | PSW_MASK_EXT | PSW_MASK_MCHECK; + lc->svc_new_psw.addr = (unsigned long) system_call; +- lc->program_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_MCHECK; ++ lc->program_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK; + lc->program_new_psw.addr = (unsigned long) pgm_check_handler; + lc->mcck_new_psw.mask = PSW_KERNEL_BITS; + lc->mcck_new_psw.addr = (unsigned long) mcck_int_handler; +- lc->io_new_psw.mask = PSW_KERNEL_BITS | +- PSW_MASK_DAT | PSW_MASK_MCHECK; ++ lc->io_new_psw.mask = PSW_KERNEL_BITS | PSW_MASK_MCHECK; + lc->io_new_psw.addr = (unsigned long) io_int_handler; + lc->clock_comparator = clock_comparator_max; + lc->nodat_stack = ((unsigned long) &init_thread_union) +@@ -452,6 +449,16 @@ static void __init setup_lowcore(void) + lowcore_ptr[0] = lc; + } + ++static void __init setup_lowcore_dat_on(void) ++{ ++ __ctl_clear_bit(0, 28); ++ S390_lowcore.external_new_psw.mask |= PSW_MASK_DAT; ++ S390_lowcore.svc_new_psw.mask |= PSW_MASK_DAT; ++ S390_lowcore.program_new_psw.mask |= PSW_MASK_DAT; ++ S390_lowcore.io_new_psw.mask |= PSW_MASK_DAT; ++ __ctl_set_bit(0, 28); ++} ++ + static struct resource code_resource = { + .name = "Kernel code", + .flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM, +@@ -1072,7 +1079,7 @@ void __init setup_arch(char **cmdline_p) + #endif + + setup_resources(); +- setup_lowcore(); ++ setup_lowcore_dat_off(); + smp_fill_possible_mask(); + cpu_detect_mhz_feature(); + cpu_init(); +@@ -1085,6 +1092,12 @@ void __init setup_arch(char **cmdline_p) + */ + paging_init(); + ++ /* ++ * After paging_init created the kernel page table, the new PSWs ++ * in lowcore can now run 
with DAT enabled. ++ */ ++ setup_lowcore_dat_on(); ++ + /* Setup default console */ + conmode_default(); + set_preferred_console(); +diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c +index 2a356b948720..3ea71b871813 100644 +--- a/arch/x86/crypto/aegis128-aesni-glue.c ++++ b/arch/x86/crypto/aegis128-aesni-glue.c +@@ -119,31 +119,20 @@ static void crypto_aegis128_aesni_process_ad( + } + + static void crypto_aegis128_aesni_process_crypt( +- struct aegis_state *state, struct aead_request *req, ++ struct aegis_state *state, struct skcipher_walk *walk, + const struct aegis_crypt_ops *ops) + { +- struct skcipher_walk walk; +- u8 *src, *dst; +- unsigned int chunksize, base; +- +- ops->skcipher_walk_init(&walk, req, false); +- +- while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; +- +- ops->crypt_blocks(state, chunksize, src, dst); +- +- base = chunksize & ~(AEGIS128_BLOCK_SIZE - 1); +- src += base; +- dst += base; +- chunksize &= AEGIS128_BLOCK_SIZE - 1; +- +- if (chunksize > 0) +- ops->crypt_tail(state, chunksize, src, dst); ++ while (walk->nbytes >= AEGIS128_BLOCK_SIZE) { ++ ops->crypt_blocks(state, ++ round_down(walk->nbytes, AEGIS128_BLOCK_SIZE), ++ walk->src.virt.addr, walk->dst.virt.addr); ++ skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE); ++ } + +- skcipher_walk_done(&walk, 0); ++ if (walk->nbytes) { ++ ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr, ++ walk->dst.virt.addr); ++ skcipher_walk_done(walk, 0); + } + } + +@@ -186,13 +175,16 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req, + { + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm); ++ struct skcipher_walk walk; + struct aegis_state state; + ++ ops->skcipher_walk_init(&walk, req, true); ++ + kernel_fpu_begin(); + + crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv); + crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen); +- crypto_aegis128_aesni_process_crypt(&state, req, ops); ++ crypto_aegis128_aesni_process_crypt(&state, &walk, ops); + crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen); + + kernel_fpu_end(); +diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c +index dbe8bb980da1..1b1b39c66c5e 100644 +--- a/arch/x86/crypto/aegis128l-aesni-glue.c ++++ b/arch/x86/crypto/aegis128l-aesni-glue.c +@@ -119,31 +119,20 @@ static void crypto_aegis128l_aesni_process_ad( + } + + static void crypto_aegis128l_aesni_process_crypt( +- struct aegis_state *state, struct aead_request *req, ++ struct aegis_state *state, struct skcipher_walk *walk, + const struct aegis_crypt_ops *ops) + { +- struct skcipher_walk walk; +- u8 *src, *dst; +- unsigned int chunksize, base; +- +- ops->skcipher_walk_init(&walk, req, false); +- +- while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; +- +- ops->crypt_blocks(state, chunksize, src, dst); +- +- base = chunksize & ~(AEGIS128L_BLOCK_SIZE - 1); +- src += base; +- dst += base; +- chunksize &= AEGIS128L_BLOCK_SIZE - 1; +- +- if (chunksize > 0) +- ops->crypt_tail(state, chunksize, src, dst); ++ while (walk->nbytes >= AEGIS128L_BLOCK_SIZE) { ++ ops->crypt_blocks(state, round_down(walk->nbytes, ++ AEGIS128L_BLOCK_SIZE), ++ walk->src.virt.addr, walk->dst.virt.addr); ++ skcipher_walk_done(walk, walk->nbytes % AEGIS128L_BLOCK_SIZE); ++ } + +- skcipher_walk_done(&walk, 0); ++ if (walk->nbytes) 
{ ++ ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr, ++ walk->dst.virt.addr); ++ skcipher_walk_done(walk, 0); + } + } + +@@ -186,13 +175,16 @@ static void crypto_aegis128l_aesni_crypt(struct aead_request *req, + { + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct aegis_ctx *ctx = crypto_aegis128l_aesni_ctx(tfm); ++ struct skcipher_walk walk; + struct aegis_state state; + ++ ops->skcipher_walk_init(&walk, req, true); ++ + kernel_fpu_begin(); + + crypto_aegis128l_aesni_init(&state, ctx->key.bytes, req->iv); + crypto_aegis128l_aesni_process_ad(&state, req->src, req->assoclen); +- crypto_aegis128l_aesni_process_crypt(&state, req, ops); ++ crypto_aegis128l_aesni_process_crypt(&state, &walk, ops); + crypto_aegis128l_aesni_final(&state, tag_xor, req->assoclen, cryptlen); + + kernel_fpu_end(); +diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c +index 8bebda2de92f..6227ca3220a0 100644 +--- a/arch/x86/crypto/aegis256-aesni-glue.c ++++ b/arch/x86/crypto/aegis256-aesni-glue.c +@@ -119,31 +119,20 @@ static void crypto_aegis256_aesni_process_ad( + } + + static void crypto_aegis256_aesni_process_crypt( +- struct aegis_state *state, struct aead_request *req, ++ struct aegis_state *state, struct skcipher_walk *walk, + const struct aegis_crypt_ops *ops) + { +- struct skcipher_walk walk; +- u8 *src, *dst; +- unsigned int chunksize, base; +- +- ops->skcipher_walk_init(&walk, req, false); +- +- while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; +- +- ops->crypt_blocks(state, chunksize, src, dst); +- +- base = chunksize & ~(AEGIS256_BLOCK_SIZE - 1); +- src += base; +- dst += base; +- chunksize &= AEGIS256_BLOCK_SIZE - 1; +- +- if (chunksize > 0) +- ops->crypt_tail(state, chunksize, src, dst); ++ while (walk->nbytes >= AEGIS256_BLOCK_SIZE) { ++ ops->crypt_blocks(state, ++ round_down(walk->nbytes, AEGIS256_BLOCK_SIZE), ++ walk->src.virt.addr, walk->dst.virt.addr); ++ skcipher_walk_done(walk, walk->nbytes % AEGIS256_BLOCK_SIZE); ++ } + +- skcipher_walk_done(&walk, 0); ++ if (walk->nbytes) { ++ ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr, ++ walk->dst.virt.addr); ++ skcipher_walk_done(walk, 0); + } + } + +@@ -186,13 +175,16 @@ static void crypto_aegis256_aesni_crypt(struct aead_request *req, + { + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct aegis_ctx *ctx = crypto_aegis256_aesni_ctx(tfm); ++ struct skcipher_walk walk; + struct aegis_state state; + ++ ops->skcipher_walk_init(&walk, req, true); ++ + kernel_fpu_begin(); + + crypto_aegis256_aesni_init(&state, ctx->key, req->iv); + crypto_aegis256_aesni_process_ad(&state, req->src, req->assoclen); +- crypto_aegis256_aesni_process_crypt(&state, req, ops); ++ crypto_aegis256_aesni_process_crypt(&state, &walk, ops); + crypto_aegis256_aesni_final(&state, tag_xor, req->assoclen, cryptlen); + + kernel_fpu_end(); +diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c +index 1321700d6647..ae30c8b6ec4d 100644 +--- a/arch/x86/crypto/aesni-intel_glue.c ++++ b/arch/x86/crypto/aesni-intel_glue.c +@@ -821,11 +821,14 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req, + scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0); + } + +- src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen); +- scatterwalk_start(&src_sg_walk, src_sg); +- if (req->src != req->dst) { +- dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen); +- scatterwalk_start(&dst_sg_walk, dst_sg); ++ if 
(left) { ++ src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen); ++ scatterwalk_start(&src_sg_walk, src_sg); ++ if (req->src != req->dst) { ++ dst_sg = scatterwalk_ffwd(dst_start, req->dst, ++ req->assoclen); ++ scatterwalk_start(&dst_sg_walk, dst_sg); ++ } + } + + kernel_fpu_begin(); +diff --git a/arch/x86/crypto/morus1280_glue.c b/arch/x86/crypto/morus1280_glue.c +index 0dccdda1eb3a..7e600f8bcdad 100644 +--- a/arch/x86/crypto/morus1280_glue.c ++++ b/arch/x86/crypto/morus1280_glue.c +@@ -85,31 +85,20 @@ static void crypto_morus1280_glue_process_ad( + + static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state, + struct morus1280_ops ops, +- struct aead_request *req) ++ struct skcipher_walk *walk) + { +- struct skcipher_walk walk; +- u8 *cursor_src, *cursor_dst; +- unsigned int chunksize, base; +- +- ops.skcipher_walk_init(&walk, req, false); +- +- while (walk.nbytes) { +- cursor_src = walk.src.virt.addr; +- cursor_dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; +- +- ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize); +- +- base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1); +- cursor_src += base; +- cursor_dst += base; +- chunksize &= MORUS1280_BLOCK_SIZE - 1; +- +- if (chunksize > 0) +- ops.crypt_tail(state, cursor_src, cursor_dst, +- chunksize); ++ while (walk->nbytes >= MORUS1280_BLOCK_SIZE) { ++ ops.crypt_blocks(state, walk->src.virt.addr, ++ walk->dst.virt.addr, ++ round_down(walk->nbytes, ++ MORUS1280_BLOCK_SIZE)); ++ skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE); ++ } + +- skcipher_walk_done(&walk, 0); ++ if (walk->nbytes) { ++ ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr, ++ walk->nbytes); ++ skcipher_walk_done(walk, 0); + } + } + +@@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(struct aead_request *req, + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct morus1280_ctx *ctx = crypto_aead_ctx(tfm); + struct morus1280_state state; ++ struct skcipher_walk walk; ++ ++ ops.skcipher_walk_init(&walk, req, true); + + kernel_fpu_begin(); + + ctx->ops->init(&state, &ctx->key, req->iv); + crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src, req->assoclen); +- crypto_morus1280_glue_process_crypt(&state, ops, req); ++ crypto_morus1280_glue_process_crypt(&state, ops, &walk); + ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen); + + kernel_fpu_end(); +diff --git a/arch/x86/crypto/morus640_glue.c b/arch/x86/crypto/morus640_glue.c +index 7b58fe4d9bd1..cb3a81732016 100644 +--- a/arch/x86/crypto/morus640_glue.c ++++ b/arch/x86/crypto/morus640_glue.c +@@ -85,31 +85,19 @@ static void crypto_morus640_glue_process_ad( + + static void crypto_morus640_glue_process_crypt(struct morus640_state *state, + struct morus640_ops ops, +- struct aead_request *req) ++ struct skcipher_walk *walk) + { +- struct skcipher_walk walk; +- u8 *cursor_src, *cursor_dst; +- unsigned int chunksize, base; +- +- ops.skcipher_walk_init(&walk, req, false); +- +- while (walk.nbytes) { +- cursor_src = walk.src.virt.addr; +- cursor_dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; +- +- ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize); +- +- base = chunksize & ~(MORUS640_BLOCK_SIZE - 1); +- cursor_src += base; +- cursor_dst += base; +- chunksize &= MORUS640_BLOCK_SIZE - 1; +- +- if (chunksize > 0) +- ops.crypt_tail(state, cursor_src, cursor_dst, +- chunksize); ++ while (walk->nbytes >= MORUS640_BLOCK_SIZE) { ++ ops.crypt_blocks(state, walk->src.virt.addr, ++ walk->dst.virt.addr, ++ round_down(walk->nbytes, 
MORUS640_BLOCK_SIZE)); ++ skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE); ++ } + +- skcipher_walk_done(&walk, 0); ++ if (walk->nbytes) { ++ ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr, ++ walk->nbytes); ++ skcipher_walk_done(walk, 0); + } + } + +@@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(struct aead_request *req, + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct morus640_ctx *ctx = crypto_aead_ctx(tfm); + struct morus640_state state; ++ struct skcipher_walk walk; ++ ++ ops.skcipher_walk_init(&walk, req, true); + + kernel_fpu_begin(); + + ctx->ops->init(&state, &ctx->key, req->iv); + crypto_morus640_glue_process_ad(&state, ctx->ops, req->src, req->assoclen); +- crypto_morus640_glue_process_crypt(&state, ops, req); ++ crypto_morus640_glue_process_crypt(&state, ops, &walk); + ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen); + + kernel_fpu_end(); +diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c +index 27a461414b30..2690135bf83f 100644 +--- a/arch/x86/events/intel/uncore.c ++++ b/arch/x86/events/intel/uncore.c +@@ -740,6 +740,7 @@ static int uncore_pmu_event_init(struct perf_event *event) + /* fixed counters have event field hardcoded to zero */ + hwc->config = 0ULL; + } else if (is_freerunning_event(event)) { ++ hwc->config = event->attr.config; + if (!check_valid_freerunning_event(box, event)) + return -EINVAL; + event->hw.idx = UNCORE_PMC_IDX_FREERUNNING; +diff --git a/arch/x86/events/intel/uncore.h b/arch/x86/events/intel/uncore.h +index cb46d602a6b8..853a49a8ccf6 100644 +--- a/arch/x86/events/intel/uncore.h ++++ b/arch/x86/events/intel/uncore.h +@@ -292,8 +292,8 @@ static inline + unsigned int uncore_freerunning_counter(struct intel_uncore_box *box, + struct perf_event *event) + { +- unsigned int type = uncore_freerunning_type(event->attr.config); +- unsigned int idx = uncore_freerunning_idx(event->attr.config); ++ unsigned int type = uncore_freerunning_type(event->hw.config); ++ unsigned int idx = uncore_freerunning_idx(event->hw.config); + struct intel_uncore_pmu *pmu = box->pmu; + + return pmu->type->freerunning[type].counter_base + +@@ -377,7 +377,7 @@ static inline + unsigned int uncore_freerunning_bits(struct intel_uncore_box *box, + struct perf_event *event) + { +- unsigned int type = uncore_freerunning_type(event->attr.config); ++ unsigned int type = uncore_freerunning_type(event->hw.config); + + return box->pmu->type->freerunning[type].bits; + } +@@ -385,7 +385,7 @@ unsigned int uncore_freerunning_bits(struct intel_uncore_box *box, + static inline int uncore_num_freerunning(struct intel_uncore_box *box, + struct perf_event *event) + { +- unsigned int type = uncore_freerunning_type(event->attr.config); ++ unsigned int type = uncore_freerunning_type(event->hw.config); + + return box->pmu->type->freerunning[type].num_counters; + } +@@ -399,8 +399,8 @@ static inline int uncore_num_freerunning_types(struct intel_uncore_box *box, + static inline bool check_valid_freerunning_event(struct intel_uncore_box *box, + struct perf_event *event) + { +- unsigned int type = uncore_freerunning_type(event->attr.config); +- unsigned int idx = uncore_freerunning_idx(event->attr.config); ++ unsigned int type = uncore_freerunning_type(event->hw.config); ++ unsigned int idx = uncore_freerunning_idx(event->hw.config); + + return (type < uncore_num_freerunning_types(box, event)) && + (idx < uncore_num_freerunning(box, event)); +diff --git a/arch/x86/events/intel/uncore_snb.c 
b/arch/x86/events/intel/uncore_snb.c +index 2593b0d7aeee..ef7faf486a1a 100644 +--- a/arch/x86/events/intel/uncore_snb.c ++++ b/arch/x86/events/intel/uncore_snb.c +@@ -448,9 +448,11 @@ static int snb_uncore_imc_event_init(struct perf_event *event) + + /* must be done before validate_group */ + event->hw.event_base = base; +- event->hw.config = cfg; + event->hw.idx = idx; + ++ /* Convert to standard encoding format for freerunning counters */ ++ event->hw.config = ((cfg - 1) << 8) | 0x10ff; ++ + /* no group validation needed, we have free running counters */ + + return 0; +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h +index 180373360e34..e40be168c73c 100644 +--- a/arch/x86/include/asm/kvm_host.h ++++ b/arch/x86/include/asm/kvm_host.h +@@ -1255,7 +1255,7 @@ void kvm_mmu_clear_dirty_pt_masked(struct kvm *kvm, + struct kvm_memory_slot *slot, + gfn_t gfn_offset, unsigned long mask); + void kvm_mmu_zap_all(struct kvm *kvm); +-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots); ++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen); + unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm); + void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages); + +diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c +index 8257a59704ae..763d4264d16a 100644 +--- a/arch/x86/kernel/ftrace.c ++++ b/arch/x86/kernel/ftrace.c +@@ -49,7 +49,7 @@ int ftrace_arch_code_modify_post_process(void) + union ftrace_code_union { + char code[MCOUNT_INSN_SIZE]; + struct { +- unsigned char e8; ++ unsigned char op; + int offset; + } __attribute__((packed)); + }; +@@ -59,20 +59,23 @@ static int ftrace_calc_offset(long ip, long addr) + return (int)(addr - ip); + } + +-static unsigned char *ftrace_call_replace(unsigned long ip, unsigned long addr) ++static unsigned char * ++ftrace_text_replace(unsigned char op, unsigned long ip, unsigned long addr) + { + static union ftrace_code_union calc; + +- calc.e8 = 0xe8; ++ calc.op = op; + calc.offset = ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr); + +- /* +- * No locking needed, this must be called via kstop_machine +- * which in essence is like running on a uniprocessor machine. +- */ + return calc.code; + } + ++static unsigned char * ++ftrace_call_replace(unsigned long ip, unsigned long addr) ++{ ++ return ftrace_text_replace(0xe8, ip, addr); ++} ++ + static inline int + within(unsigned long addr, unsigned long start, unsigned long end) + { +@@ -664,22 +667,6 @@ int __init ftrace_dyn_arch_init(void) + return 0; + } + +-#if defined(CONFIG_X86_64) || defined(CONFIG_FUNCTION_GRAPH_TRACER) +-static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr) +-{ +- static union ftrace_code_union calc; +- +- /* Jmp not a call (ignore the .e8) */ +- calc.e8 = 0xe9; +- calc.offset = ftrace_calc_offset(ip + MCOUNT_INSN_SIZE, addr); +- +- /* +- * ftrace external locks synchronize the access to the static variable. 
+- */ +- return calc.code; +-} +-#endif +- + /* Currently only x86_64 supports dynamic trampolines */ + #ifdef CONFIG_X86_64 + +@@ -891,8 +878,8 @@ static void *addr_from_call(void *ptr) + return NULL; + + /* Make sure this is a call */ +- if (WARN_ON_ONCE(calc.e8 != 0xe8)) { +- pr_warn("Expected e8, got %x\n", calc.e8); ++ if (WARN_ON_ONCE(calc.op != 0xe8)) { ++ pr_warn("Expected e8, got %x\n", calc.op); + return NULL; + } + +@@ -963,6 +950,11 @@ void arch_ftrace_trampoline_free(struct ftrace_ops *ops) + #ifdef CONFIG_DYNAMIC_FTRACE + extern void ftrace_graph_call(void); + ++static unsigned char *ftrace_jmp_replace(unsigned long ip, unsigned long addr) ++{ ++ return ftrace_text_replace(0xe9, ip, addr); ++} ++ + static int ftrace_mod_jmp(unsigned long ip, void *func) + { + unsigned char *new; +diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c +index 6adf6e6c2933..544bd41a514c 100644 +--- a/arch/x86/kernel/kprobes/opt.c ++++ b/arch/x86/kernel/kprobes/opt.c +@@ -141,6 +141,11 @@ asm ( + + void optprobe_template_func(void); + STACK_FRAME_NON_STANDARD(optprobe_template_func); ++NOKPROBE_SYMBOL(optprobe_template_func); ++NOKPROBE_SYMBOL(optprobe_template_entry); ++NOKPROBE_SYMBOL(optprobe_template_val); ++NOKPROBE_SYMBOL(optprobe_template_call); ++NOKPROBE_SYMBOL(optprobe_template_end); + + #define TMPL_MOVE_IDX \ + ((long)optprobe_template_val - (long)optprobe_template_entry) +diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c +index e811d4d1c824..d908a37bf3f3 100644 +--- a/arch/x86/kernel/kvmclock.c ++++ b/arch/x86/kernel/kvmclock.c +@@ -104,12 +104,8 @@ static u64 kvm_sched_clock_read(void) + + static inline void kvm_sched_clock_init(bool stable) + { +- if (!stable) { +- pv_ops.time.sched_clock = kvm_clock_read; ++ if (!stable) + clear_sched_clock_stable(); +- return; +- } +- + kvm_sched_clock_offset = kvm_clock_read(); + pv_ops.time.sched_clock = kvm_sched_clock_read; + +diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c +index f2d1d230d5b8..9ab33cab9486 100644 +--- a/arch/x86/kvm/mmu.c ++++ b/arch/x86/kvm/mmu.c +@@ -5635,13 +5635,8 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) + { + struct kvm_memslots *slots; + struct kvm_memory_slot *memslot; +- bool flush_tlb = true; +- bool flush = false; + int i; + +- if (kvm_available_flush_tlb_with_range()) +- flush_tlb = false; +- + spin_lock(&kvm->mmu_lock); + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); +@@ -5653,17 +5648,12 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) + if (start >= end) + continue; + +- flush |= slot_handle_level_range(kvm, memslot, +- kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL, +- PT_MAX_HUGEPAGE_LEVEL, start, +- end - 1, flush_tlb); ++ slot_handle_level_range(kvm, memslot, kvm_zap_rmapp, ++ PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL, ++ start, end - 1, true); + } + } + +- if (flush) +- kvm_flush_remote_tlbs_with_address(kvm, gfn_start, +- gfn_end - gfn_start + 1); +- + spin_unlock(&kvm->mmu_lock); + } + +@@ -5901,13 +5891,30 @@ static bool kvm_has_zapped_obsolete_pages(struct kvm *kvm) + return unlikely(!list_empty_careful(&kvm->arch.zapped_obsolete_pages)); + } + +-void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, struct kvm_memslots *slots) ++void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm, u64 gen) + { ++ gen &= MMIO_GEN_MASK; ++ ++ /* ++ * Shift to eliminate the "update in-progress" flag, which isn't ++ * included in the spte's generation number. 
++ */ ++ gen >>= 1; ++ ++ /* ++ * Generation numbers are incremented in multiples of the number of ++ * address spaces in order to provide unique generations across all ++ * address spaces. Strip what is effectively the address space ++ * modifier prior to checking for a wrap of the MMIO generation so ++ * that a wrap in any address space is detected. ++ */ ++ gen &= ~((u64)KVM_ADDRESS_SPACE_NUM - 1); ++ + /* +- * The very rare case: if the generation-number is round, ++ * The very rare case: if the MMIO generation number has wrapped, + * zap all shadow pages. + */ +- if (unlikely((slots->generation & MMIO_GEN_MASK) == 0)) { ++ if (unlikely(gen == 0)) { + kvm_debug_ratelimited("kvm: zapping shadow pages for mmio generation wraparound\n"); + kvm_mmu_invalidate_zap_all_pages(kvm); + } +diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c +index d737a51a53ca..f014e1aeee96 100644 +--- a/arch/x86/kvm/vmx/nested.c ++++ b/arch/x86/kvm/vmx/nested.c +@@ -2765,7 +2765,7 @@ static int nested_vmx_check_vmentry_hw(struct kvm_vcpu *vcpu) + "add $%c[wordsize], %%" _ASM_SP "\n\t" /* un-adjust RSP */ + + /* Check if vmlaunch or vmresume is needed */ +- "cmpl $0, %c[launched](%% " _ASM_CX")\n\t" ++ "cmpb $0, %c[launched](%% " _ASM_CX")\n\t" + + "call vmx_vmenter\n\t" + +@@ -4035,25 +4035,50 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification, + /* Addr = segment_base + offset */ + /* offset = base + [index * scale] + displacement */ + off = exit_qualification; /* holds the displacement */ ++ if (addr_size == 1) ++ off = (gva_t)sign_extend64(off, 31); ++ else if (addr_size == 0) ++ off = (gva_t)sign_extend64(off, 15); + if (base_is_valid) + off += kvm_register_read(vcpu, base_reg); + if (index_is_valid) + off += kvm_register_read(vcpu, index_reg)< s.limit); ++ if (!(s.base == 0 && s.limit == 0xffffffff && ++ ((s.type & 8) || !(s.type & 4)))) ++ exn = exn || (off + sizeof(u64) > s.limit); + } + if (exn) { + kvm_queue_exception_e(vcpu, +diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c +index 30a6bcd735ec..d86eee07d327 100644 +--- a/arch/x86/kvm/vmx/vmx.c ++++ b/arch/x86/kvm/vmx/vmx.c +@@ -6399,7 +6399,7 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) + "mov %%" _ASM_AX", %%cr2 \n\t" + "3: \n\t" + /* Check if vmlaunch or vmresume is needed */ +- "cmpl $0, %c[launched](%%" _ASM_CX ") \n\t" ++ "cmpb $0, %c[launched](%%" _ASM_CX ") \n\t" + /* Load guest registers. Don't clobber flags. */ + "mov %c[rax](%%" _ASM_CX "), %%" _ASM_AX " \n\t" + "mov %c[rbx](%%" _ASM_CX "), %%" _ASM_BX " \n\t" +@@ -6449,10 +6449,15 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) + "mov %%r13, %c[r13](%%" _ASM_CX ") \n\t" + "mov %%r14, %c[r14](%%" _ASM_CX ") \n\t" + "mov %%r15, %c[r15](%%" _ASM_CX ") \n\t" ++ + /* +- * Clear host registers marked as clobbered to prevent +- * speculative use. +- */ ++ * Clear all general purpose registers (except RSP, which is loaded by ++ * the CPU during VM-Exit) to prevent speculative use of the guest's ++ * values, even those that are saved/loaded via the stack. In theory, ++ * an L1 cache miss when restoring registers could lead to speculative ++ * execution with the guest's values. Zeroing XORs are dirt cheap, ++ * i.e. the extra paranoia is essentially free. 
++ */ + "xor %%r8d, %%r8d \n\t" + "xor %%r9d, %%r9d \n\t" + "xor %%r10d, %%r10d \n\t" +@@ -6467,8 +6472,11 @@ static void __vmx_vcpu_run(struct kvm_vcpu *vcpu, struct vcpu_vmx *vmx) + + "xor %%eax, %%eax \n\t" + "xor %%ebx, %%ebx \n\t" ++ "xor %%ecx, %%ecx \n\t" ++ "xor %%edx, %%edx \n\t" + "xor %%esi, %%esi \n\t" + "xor %%edi, %%edi \n\t" ++ "xor %%ebp, %%ebp \n\t" + "pop %%" _ASM_BP "; pop %%" _ASM_DX " \n\t" + : ASM_CALL_CONSTRAINT + : "c"(vmx), "d"((unsigned long)HOST_RSP), "S"(evmcs_rsp), +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 941f932373d0..2bcef72a7c40 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -9348,13 +9348,13 @@ out_free: + return -ENOMEM; + } + +-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots) ++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) + { + /* + * memslots->generation has been incremented. + * mmio generation may have reached its maximum value. + */ +- kvm_mmu_invalidate_mmio_sptes(kvm, slots); ++ kvm_mmu_invalidate_mmio_sptes(kvm, gen); + } + + int kvm_arch_prepare_memory_region(struct kvm *kvm, +diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h +index 224cd0a47568..20ede17202bf 100644 +--- a/arch/x86/kvm/x86.h ++++ b/arch/x86/kvm/x86.h +@@ -181,6 +181,11 @@ static inline bool emul_is_noncanonical_address(u64 la, + static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu, + gva_t gva, gfn_t gfn, unsigned access) + { ++ u64 gen = kvm_memslots(vcpu->kvm)->generation; ++ ++ if (unlikely(gen & 1)) ++ return; ++ + /* + * If this is a shadow nested page table, the "GVA" is + * actually a nGPA. +@@ -188,7 +193,7 @@ static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu, + vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK; + vcpu->arch.access = access; + vcpu->arch.mmio_gfn = gfn; +- vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation; ++ vcpu->arch.mmio_gen = gen; + } + + static inline bool vcpu_match_mmio_gen(struct kvm_vcpu *vcpu) +diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c +index 0f4fe206dcc2..20701977e6c0 100644 +--- a/arch/x86/xen/mmu_pv.c ++++ b/arch/x86/xen/mmu_pv.c +@@ -2114,10 +2114,10 @@ void __init xen_relocate_p2m(void) + pt = early_memremap(pt_phys, PAGE_SIZE); + clear_page(pt); + for (idx_pte = 0; +- idx_pte < min(n_pte, PTRS_PER_PTE); +- idx_pte++) { +- set_pte(pt + idx_pte, +- pfn_pte(p2m_pfn, PAGE_KERNEL)); ++ idx_pte < min(n_pte, PTRS_PER_PTE); ++ idx_pte++) { ++ pt[idx_pte] = pfn_pte(p2m_pfn, ++ PAGE_KERNEL); + p2m_pfn++; + } + n_pte -= PTRS_PER_PTE; +@@ -2125,8 +2125,7 @@ void __init xen_relocate_p2m(void) + make_lowmem_page_readonly(__va(pt_phys)); + pin_pagetable_pfn(MMUEXT_PIN_L1_TABLE, + PFN_DOWN(pt_phys)); +- set_pmd(pmd + idx_pt, +- __pmd(_PAGE_TABLE | pt_phys)); ++ pmd[idx_pt] = __pmd(_PAGE_TABLE | pt_phys); + pt_phys += PAGE_SIZE; + } + n_pt -= PTRS_PER_PMD; +@@ -2134,7 +2133,7 @@ void __init xen_relocate_p2m(void) + make_lowmem_page_readonly(__va(pmd_phys)); + pin_pagetable_pfn(MMUEXT_PIN_L2_TABLE, + PFN_DOWN(pmd_phys)); +- set_pud(pud + idx_pmd, __pud(_PAGE_TABLE | pmd_phys)); ++ pud[idx_pmd] = __pud(_PAGE_TABLE | pmd_phys); + pmd_phys += PAGE_SIZE; + } + n_pmd -= PTRS_PER_PUD; +diff --git a/crypto/aead.c b/crypto/aead.c +index 189c52d1f63a..4908b5e846f0 100644 +--- a/crypto/aead.c ++++ b/crypto/aead.c +@@ -61,8 +61,10 @@ int crypto_aead_setkey(struct crypto_aead *tfm, + else + err = crypto_aead_alg(tfm)->setkey(tfm, key, keylen); + +- if (err) ++ if (unlikely(err)) { ++ crypto_aead_set_flags(tfm, CRYPTO_TFM_NEED_KEY); + 
return err; ++ } + + crypto_aead_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; +diff --git a/crypto/aegis128.c b/crypto/aegis128.c +index c22f4414856d..789716f92e4c 100644 +--- a/crypto/aegis128.c ++++ b/crypto/aegis128.c +@@ -290,19 +290,19 @@ static void crypto_aegis128_process_crypt(struct aegis_state *state, + const struct aegis128_ops *ops) + { + struct skcipher_walk walk; +- u8 *src, *dst; +- unsigned int chunksize; + + ops->skcipher_walk_init(&walk, req, false); + + while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; ++ unsigned int nbytes = walk.nbytes; + +- ops->crypt_chunk(state, dst, src, chunksize); ++ if (nbytes < walk.total) ++ nbytes = round_down(nbytes, walk.stride); + +- skcipher_walk_done(&walk, 0); ++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr, ++ nbytes); ++ ++ skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + } + +diff --git a/crypto/aegis128l.c b/crypto/aegis128l.c +index b6fb21ebdc3e..73811448cb6b 100644 +--- a/crypto/aegis128l.c ++++ b/crypto/aegis128l.c +@@ -353,19 +353,19 @@ static void crypto_aegis128l_process_crypt(struct aegis_state *state, + const struct aegis128l_ops *ops) + { + struct skcipher_walk walk; +- u8 *src, *dst; +- unsigned int chunksize; + + ops->skcipher_walk_init(&walk, req, false); + + while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; ++ unsigned int nbytes = walk.nbytes; + +- ops->crypt_chunk(state, dst, src, chunksize); ++ if (nbytes < walk.total) ++ nbytes = round_down(nbytes, walk.stride); + +- skcipher_walk_done(&walk, 0); ++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr, ++ nbytes); ++ ++ skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + } + +diff --git a/crypto/aegis256.c b/crypto/aegis256.c +index 11f0f8ec9c7c..8a71e9c06193 100644 +--- a/crypto/aegis256.c ++++ b/crypto/aegis256.c +@@ -303,19 +303,19 @@ static void crypto_aegis256_process_crypt(struct aegis_state *state, + const struct aegis256_ops *ops) + { + struct skcipher_walk walk; +- u8 *src, *dst; +- unsigned int chunksize; + + ops->skcipher_walk_init(&walk, req, false); + + while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; +- chunksize = walk.nbytes; ++ unsigned int nbytes = walk.nbytes; + +- ops->crypt_chunk(state, dst, src, chunksize); ++ if (nbytes < walk.total) ++ nbytes = round_down(nbytes, walk.stride); + +- skcipher_walk_done(&walk, 0); ++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr, ++ nbytes); ++ ++ skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + } + +diff --git a/crypto/ahash.c b/crypto/ahash.c +index 5d320a811f75..81e2767e2164 100644 +--- a/crypto/ahash.c ++++ b/crypto/ahash.c +@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk) + int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err) + { + unsigned int alignmask = walk->alignmask; +- unsigned int nbytes = walk->entrylen; + + walk->data -= walk->offset; + +- if (nbytes && walk->offset & alignmask && !err) { +- walk->offset = ALIGN(walk->offset, alignmask + 1); +- nbytes = min(nbytes, +- ((unsigned int)(PAGE_SIZE)) - walk->offset); +- walk->entrylen -= nbytes; ++ if (walk->entrylen && (walk->offset & alignmask) && !err) { ++ unsigned int nbytes; + ++ walk->offset = ALIGN(walk->offset, alignmask + 1); ++ nbytes = min(walk->entrylen, ++ (unsigned int)(PAGE_SIZE - walk->offset)); + if (nbytes) { ++ walk->entrylen -= nbytes; + walk->data += walk->offset; + return 
nbytes; + } +@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err) + if (err) + return err; + +- if (nbytes) { ++ if (walk->entrylen) { + walk->offset = 0; + walk->pg++; + return hash_walk_next(walk); +@@ -190,6 +190,21 @@ static int ahash_setkey_unaligned(struct crypto_ahash *tfm, const u8 *key, + return ret; + } + ++static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key, ++ unsigned int keylen) ++{ ++ return -ENOSYS; ++} ++ ++static void ahash_set_needkey(struct crypto_ahash *tfm) ++{ ++ const struct hash_alg_common *alg = crypto_hash_alg_common(tfm); ++ ++ if (tfm->setkey != ahash_nosetkey && ++ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) ++ crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY); ++} ++ + int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, + unsigned int keylen) + { +@@ -201,20 +216,16 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key, + else + err = tfm->setkey(tfm, key, keylen); + +- if (err) ++ if (unlikely(err)) { ++ ahash_set_needkey(tfm); + return err; ++ } + + crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; + } + EXPORT_SYMBOL_GPL(crypto_ahash_setkey); + +-static int ahash_nosetkey(struct crypto_ahash *tfm, const u8 *key, +- unsigned int keylen) +-{ +- return -ENOSYS; +-} +- + static inline unsigned int ahash_align_buffer_size(unsigned len, + unsigned long mask) + { +@@ -489,8 +500,7 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm) + + if (alg->setkey) { + hash->setkey = alg->setkey; +- if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) +- crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY); ++ ahash_set_needkey(hash); + } + + return 0; +diff --git a/crypto/cfb.c b/crypto/cfb.c +index e81e45673498..4abfe32ff845 100644 +--- a/crypto/cfb.c ++++ b/crypto/cfb.c +@@ -77,12 +77,14 @@ static int crypto_cfb_encrypt_segment(struct skcipher_walk *walk, + do { + crypto_cfb_encrypt_one(tfm, iv, dst); + crypto_xor(dst, src, bsize); +- memcpy(iv, dst, bsize); ++ iv = dst; + + src += bsize; + dst += bsize; + } while ((nbytes -= bsize) >= bsize); + ++ memcpy(walk->iv, iv, bsize); ++ + return nbytes; + } + +@@ -162,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk, + const unsigned int bsize = crypto_cfb_bsize(tfm); + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + u8 tmp[MAX_CIPHER_BLOCKSIZE]; + + do { +@@ -172,8 +174,6 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk, + src += bsize; + } while ((nbytes -= bsize) >= bsize); + +- memcpy(walk->iv, iv, bsize); +- + return nbytes; + } + +@@ -298,6 +298,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb) + inst->alg.base.cra_blocksize = 1; + inst->alg.base.cra_alignmask = alg->cra_alignmask; + ++ /* ++ * To simplify the implementation, configure the skcipher walk to only ++ * give a partial block at the very end, never earlier. 
++ */ ++ inst->alg.chunksize = alg->cra_blocksize; ++ + inst->alg.ivsize = alg->cra_blocksize; + inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize; + inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize; +diff --git a/crypto/morus1280.c b/crypto/morus1280.c +index 3889c188f266..b83576b4eb55 100644 +--- a/crypto/morus1280.c ++++ b/crypto/morus1280.c +@@ -366,18 +366,19 @@ static void crypto_morus1280_process_crypt(struct morus1280_state *state, + const struct morus1280_ops *ops) + { + struct skcipher_walk walk; +- u8 *dst; +- const u8 *src; + + ops->skcipher_walk_init(&walk, req, false); + + while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; ++ unsigned int nbytes = walk.nbytes; + +- ops->crypt_chunk(state, dst, src, walk.nbytes); ++ if (nbytes < walk.total) ++ nbytes = round_down(nbytes, walk.stride); + +- skcipher_walk_done(&walk, 0); ++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr, ++ nbytes); ++ ++ skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + } + +diff --git a/crypto/morus640.c b/crypto/morus640.c +index da06ec2f6a80..b6a477444f6d 100644 +--- a/crypto/morus640.c ++++ b/crypto/morus640.c +@@ -365,18 +365,19 @@ static void crypto_morus640_process_crypt(struct morus640_state *state, + const struct morus640_ops *ops) + { + struct skcipher_walk walk; +- u8 *dst; +- const u8 *src; + + ops->skcipher_walk_init(&walk, req, false); + + while (walk.nbytes) { +- src = walk.src.virt.addr; +- dst = walk.dst.virt.addr; ++ unsigned int nbytes = walk.nbytes; + +- ops->crypt_chunk(state, dst, src, walk.nbytes); ++ if (nbytes < walk.total) ++ nbytes = round_down(nbytes, walk.stride); + +- skcipher_walk_done(&walk, 0); ++ ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr, ++ nbytes); ++ ++ skcipher_walk_done(&walk, walk.nbytes - nbytes); + } + } + +diff --git a/crypto/ofb.c b/crypto/ofb.c +index 886631708c5e..cab0b80953fe 100644 +--- a/crypto/ofb.c ++++ b/crypto/ofb.c +@@ -5,9 +5,6 @@ + * + * Copyright (C) 2018 ARM Limited or its affiliates. + * All rights reserved. +- * +- * Based loosely on public domain code gleaned from libtomcrypt +- * (https://github.com/libtom/libtomcrypt). 
+ */ + + #include +@@ -21,7 +18,6 @@ + + struct crypto_ofb_ctx { + struct crypto_cipher *child; +- int cnt; + }; + + +@@ -41,58 +37,40 @@ static int crypto_ofb_setkey(struct crypto_skcipher *parent, const u8 *key, + return err; + } + +-static int crypto_ofb_encrypt_segment(struct crypto_ofb_ctx *ctx, +- struct skcipher_walk *walk, +- struct crypto_cipher *tfm) ++static int crypto_ofb_crypt(struct skcipher_request *req) + { +- int bsize = crypto_cipher_blocksize(tfm); +- int nbytes = walk->nbytes; +- +- u8 *src = walk->src.virt.addr; +- u8 *dst = walk->dst.virt.addr; +- u8 *iv = walk->iv; +- +- do { +- if (ctx->cnt == bsize) { +- if (nbytes < bsize) +- break; +- crypto_cipher_encrypt_one(tfm, iv, iv); +- ctx->cnt = 0; +- } +- *dst = *src ^ iv[ctx->cnt]; +- src++; +- dst++; +- ctx->cnt++; +- } while (--nbytes); +- return nbytes; +-} +- +-static int crypto_ofb_encrypt(struct skcipher_request *req) +-{ +- struct skcipher_walk walk; + struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); +- unsigned int bsize; + struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm); +- struct crypto_cipher *child = ctx->child; +- int ret = 0; ++ struct crypto_cipher *cipher = ctx->child; ++ const unsigned int bsize = crypto_cipher_blocksize(cipher); ++ struct skcipher_walk walk; ++ int err; + +- bsize = crypto_cipher_blocksize(child); +- ctx->cnt = bsize; ++ err = skcipher_walk_virt(&walk, req, false); + +- ret = skcipher_walk_virt(&walk, req, false); ++ while (walk.nbytes >= bsize) { ++ const u8 *src = walk.src.virt.addr; ++ u8 *dst = walk.dst.virt.addr; ++ u8 * const iv = walk.iv; ++ unsigned int nbytes = walk.nbytes; + +- while (walk.nbytes) { +- ret = crypto_ofb_encrypt_segment(ctx, &walk, child); +- ret = skcipher_walk_done(&walk, ret); +- } ++ do { ++ crypto_cipher_encrypt_one(cipher, iv, iv); ++ crypto_xor_cpy(dst, src, iv, bsize); ++ dst += bsize; ++ src += bsize; ++ } while ((nbytes -= bsize) >= bsize); + +- return ret; +-} ++ err = skcipher_walk_done(&walk, nbytes); ++ } + +-/* OFB encrypt and decrypt are identical */ +-static int crypto_ofb_decrypt(struct skcipher_request *req) +-{ +- return crypto_ofb_encrypt(req); ++ if (walk.nbytes) { ++ crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv); ++ crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, walk.iv, ++ walk.nbytes); ++ err = skcipher_walk_done(&walk, 0); ++ } ++ return err; + } + + static int crypto_ofb_init_tfm(struct crypto_skcipher *tfm) +@@ -165,13 +143,18 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb) + if (err) + goto err_drop_spawn; + ++ /* OFB mode is a stream cipher. */ ++ inst->alg.base.cra_blocksize = 1; ++ ++ /* ++ * To simplify the implementation, configure the skcipher walk to only ++ * give a partial block at the very end, never earlier. ++ */ ++ inst->alg.chunksize = alg->cra_blocksize; ++ + inst->alg.base.cra_priority = alg->cra_priority; +- inst->alg.base.cra_blocksize = alg->cra_blocksize; + inst->alg.base.cra_alignmask = alg->cra_alignmask; + +- /* We access the data as u32s when xoring. 
*/ +- inst->alg.base.cra_alignmask |= __alignof__(u32) - 1; +- + inst->alg.ivsize = alg->cra_blocksize; + inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize; + inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize; +@@ -182,8 +165,8 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb) + inst->alg.exit = crypto_ofb_exit_tfm; + + inst->alg.setkey = crypto_ofb_setkey; +- inst->alg.encrypt = crypto_ofb_encrypt; +- inst->alg.decrypt = crypto_ofb_decrypt; ++ inst->alg.encrypt = crypto_ofb_crypt; ++ inst->alg.decrypt = crypto_ofb_crypt; + + inst->free = crypto_ofb_free; + +diff --git a/crypto/pcbc.c b/crypto/pcbc.c +index 8aa10144407c..1b182dfedc94 100644 +--- a/crypto/pcbc.c ++++ b/crypto/pcbc.c +@@ -51,7 +51,7 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req, + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; + u8 *dst = walk->dst.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + + do { + crypto_xor(iv, src, bsize); +@@ -72,7 +72,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req, + int bsize = crypto_cipher_blocksize(tfm); + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + u8 tmpbuf[MAX_CIPHER_BLOCKSIZE]; + + do { +@@ -84,8 +84,6 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req, + src += bsize; + } while ((nbytes -= bsize) >= bsize); + +- memcpy(walk->iv, iv, bsize); +- + return nbytes; + } + +@@ -121,7 +119,7 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req, + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; + u8 *dst = walk->dst.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + + do { + crypto_cipher_decrypt_one(tfm, dst, src); +@@ -132,8 +130,6 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req, + dst += bsize; + } while ((nbytes -= bsize) >= bsize); + +- memcpy(walk->iv, iv, bsize); +- + return nbytes; + } + +@@ -144,7 +140,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req, + int bsize = crypto_cipher_blocksize(tfm); + unsigned int nbytes = walk->nbytes; + u8 *src = walk->src.virt.addr; +- u8 *iv = walk->iv; ++ u8 * const iv = walk->iv; + u8 tmpbuf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(u32)); + + do { +@@ -156,8 +152,6 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req, + src += bsize; + } while ((nbytes -= bsize) >= bsize); + +- memcpy(walk->iv, iv, bsize); +- + return nbytes; + } + +diff --git a/crypto/shash.c b/crypto/shash.c +index 44d297b82a8f..40311ccad3fa 100644 +--- a/crypto/shash.c ++++ b/crypto/shash.c +@@ -53,6 +53,13 @@ static int shash_setkey_unaligned(struct crypto_shash *tfm, const u8 *key, + return err; + } + ++static void shash_set_needkey(struct crypto_shash *tfm, struct shash_alg *alg) ++{ ++ if (crypto_shash_alg_has_setkey(alg) && ++ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) ++ crypto_shash_set_flags(tfm, CRYPTO_TFM_NEED_KEY); ++} ++ + int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key, + unsigned int keylen) + { +@@ -65,8 +72,10 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key, + else + err = shash->setkey(tfm, key, keylen); + +- if (err) ++ if (unlikely(err)) { ++ shash_set_needkey(tfm, shash); + return err; ++ } + + crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; +@@ -373,7 +382,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm) + crt->final = shash_async_final; + crt->finup 
= shash_async_finup; + crt->digest = shash_async_digest; +- crt->setkey = shash_async_setkey; ++ if (crypto_shash_alg_has_setkey(alg)) ++ crt->setkey = shash_async_setkey; + + crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) & + CRYPTO_TFM_NEED_KEY); +@@ -395,9 +405,7 @@ static int crypto_shash_init_tfm(struct crypto_tfm *tfm) + + hash->descsize = alg->descsize; + +- if (crypto_shash_alg_has_setkey(alg) && +- !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY)) +- crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY); ++ shash_set_needkey(hash, alg); + + return 0; + } +diff --git a/crypto/skcipher.c b/crypto/skcipher.c +index 2a969296bc24..de09ff60991e 100644 +--- a/crypto/skcipher.c ++++ b/crypto/skcipher.c +@@ -585,6 +585,12 @@ static unsigned int crypto_skcipher_extsize(struct crypto_alg *alg) + return crypto_alg_extsize(alg); + } + ++static void skcipher_set_needkey(struct crypto_skcipher *tfm) ++{ ++ if (tfm->keysize) ++ crypto_skcipher_set_flags(tfm, CRYPTO_TFM_NEED_KEY); ++} ++ + static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm, + const u8 *key, unsigned int keylen) + { +@@ -598,8 +604,10 @@ static int skcipher_setkey_blkcipher(struct crypto_skcipher *tfm, + err = crypto_blkcipher_setkey(blkcipher, key, keylen); + crypto_skcipher_set_flags(tfm, crypto_blkcipher_get_flags(blkcipher) & + CRYPTO_TFM_RES_MASK); +- if (err) ++ if (unlikely(err)) { ++ skcipher_set_needkey(tfm); + return err; ++ } + + crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; +@@ -677,8 +685,7 @@ static int crypto_init_skcipher_ops_blkcipher(struct crypto_tfm *tfm) + skcipher->ivsize = crypto_blkcipher_ivsize(blkcipher); + skcipher->keysize = calg->cra_blkcipher.max_keysize; + +- if (skcipher->keysize) +- crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY); ++ skcipher_set_needkey(skcipher); + + return 0; + } +@@ -698,8 +705,10 @@ static int skcipher_setkey_ablkcipher(struct crypto_skcipher *tfm, + crypto_skcipher_set_flags(tfm, + crypto_ablkcipher_get_flags(ablkcipher) & + CRYPTO_TFM_RES_MASK); +- if (err) ++ if (unlikely(err)) { ++ skcipher_set_needkey(tfm); + return err; ++ } + + crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; +@@ -776,8 +785,7 @@ static int crypto_init_skcipher_ops_ablkcipher(struct crypto_tfm *tfm) + sizeof(struct ablkcipher_request); + skcipher->keysize = calg->cra_ablkcipher.max_keysize; + +- if (skcipher->keysize) +- crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY); ++ skcipher_set_needkey(skcipher); + + return 0; + } +@@ -820,8 +828,10 @@ static int skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key, + else + err = cipher->setkey(tfm, key, keylen); + +- if (err) ++ if (unlikely(err)) { ++ skcipher_set_needkey(tfm); + return err; ++ } + + crypto_skcipher_clear_flags(tfm, CRYPTO_TFM_NEED_KEY); + return 0; +@@ -852,8 +862,7 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm) + skcipher->ivsize = alg->ivsize; + skcipher->keysize = alg->max_keysize; + +- if (skcipher->keysize) +- crypto_skcipher_set_flags(skcipher, CRYPTO_TFM_NEED_KEY); ++ skcipher_set_needkey(skcipher); + + if (alg->exit) + skcipher->base.exit = crypto_skcipher_exit_tfm; +diff --git a/crypto/testmgr.c b/crypto/testmgr.c +index 0f684a414acb..b8e4a3ccbfe0 100644 +--- a/crypto/testmgr.c ++++ b/crypto/testmgr.c +@@ -1894,14 +1894,21 @@ static int alg_test_crc32c(const struct alg_test_desc *desc, + + err = alg_test_hash(desc, driver, type, mask); + if (err) +- goto out; ++ return err; + + tfm = crypto_alloc_shash(driver, type, mask); + if 
(IS_ERR(tfm)) { ++ if (PTR_ERR(tfm) == -ENOENT) { ++ /* ++ * This crc32c implementation is only available through ++ * ahash API, not the shash API, so the remaining part ++ * of the test is not applicable to it. ++ */ ++ return 0; ++ } + printk(KERN_ERR "alg: crc32c: Failed to load transform for %s: " + "%ld\n", driver, PTR_ERR(tfm)); +- err = PTR_ERR(tfm); +- goto out; ++ return PTR_ERR(tfm); + } + + do { +@@ -1928,7 +1935,6 @@ static int alg_test_crc32c(const struct alg_test_desc *desc, + + crypto_free_shash(tfm); + +-out: + return err; + } + +diff --git a/crypto/testmgr.h b/crypto/testmgr.h +index e8f47d7b92cd..ca8e8ebef309 100644 +--- a/crypto/testmgr.h ++++ b/crypto/testmgr.h +@@ -12870,6 +12870,31 @@ static const struct cipher_testvec aes_cfb_tv_template[] = { + "\x75\xa3\x85\x74\x1a\xb9\xce\xf8" + "\x20\x31\x62\x3d\x55\xb1\xe4\x71", + .len = 64, ++ .also_non_np = 1, ++ .np = 2, ++ .tap = { 31, 33 }, ++ }, { /* > 16 bytes, not a multiple of 16 bytes */ ++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6" ++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c", ++ .klen = 16, ++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" ++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", ++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" ++ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" ++ "\xae", ++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20" ++ "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a" ++ "\xc8", ++ .len = 17, ++ }, { /* < 16 bytes */ ++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6" ++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c", ++ .klen = 16, ++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" ++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", ++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f", ++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad", ++ .len = 7, + }, + }; + +@@ -16656,8 +16681,7 @@ static const struct cipher_testvec aes_ctr_rfc3686_tv_template[] = { + }; + + static const struct cipher_testvec aes_ofb_tv_template[] = { +- /* From NIST Special Publication 800-38A, Appendix F.5 */ +- { ++ { /* From NIST Special Publication 800-38A, Appendix F.5 */ + .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6" + "\xab\xf7\x15\x88\x09\xcf\x4f\x3c", + .klen = 16, +@@ -16680,6 +16704,31 @@ static const struct cipher_testvec aes_ofb_tv_template[] = { + "\x30\x4c\x65\x28\xf6\x59\xc7\x78" + "\x66\xa5\x10\xd9\xc1\xd6\xae\x5e", + .len = 64, ++ .also_non_np = 1, ++ .np = 2, ++ .tap = { 31, 33 }, ++ }, { /* > 16 bytes, not a multiple of 16 bytes */ ++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6" ++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c", ++ .klen = 16, ++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" ++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", ++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96" ++ "\xe9\x3d\x7e\x11\x73\x93\x17\x2a" ++ "\xae", ++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20" ++ "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a" ++ "\x77", ++ .len = 17, ++ }, { /* < 16 bytes */ ++ .key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6" ++ "\xab\xf7\x15\x88\x09\xcf\x4f\x3c", ++ .klen = 16, ++ .iv = "\x00\x01\x02\x03\x04\x05\x06\x07" ++ "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f", ++ .ptext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f", ++ .ctext = "\x3b\x3f\xd9\x2e\xb7\x2d\xad", ++ .len = 7, + } + }; + +diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c +index 545e91420cde..8940054d6250 100644 +--- a/drivers/acpi/device_sysfs.c ++++ b/drivers/acpi/device_sysfs.c +@@ -202,11 +202,15 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias, + { + struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER }; + const union acpi_object *of_compatible, *obj; ++ acpi_status status; + int len, count; + int i, nval; + char *c; + +- acpi_get_name(acpi_dev->handle, 
ACPI_SINGLE_NAME, &buf); ++ status = acpi_get_name(acpi_dev->handle, ACPI_SINGLE_NAME, &buf); ++ if (ACPI_FAILURE(status)) ++ return -ENODEV; ++ + /* DT strings are all in lower case */ + for (c = buf.pointer; *c != '\0'; c++) + *c = tolower(*c); +diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c +index e18ade5d74e9..f75f8f870ce3 100644 +--- a/drivers/acpi/nfit/core.c ++++ b/drivers/acpi/nfit/core.c +@@ -415,7 +415,7 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd, + if (call_pkg) { + int i; + +- if (nfit_mem->family != call_pkg->nd_family) ++ if (nfit_mem && nfit_mem->family != call_pkg->nd_family) + return -ENOTTY; + + for (i = 0; i < ARRAY_SIZE(call_pkg->nd_reserved2); i++) +@@ -424,6 +424,10 @@ static int cmd_to_func(struct nfit_mem *nfit_mem, unsigned int cmd, + return call_pkg->nd_command; + } + ++ /* In the !call_pkg case, bus commands == bus functions */ ++ if (!nfit_mem) ++ return cmd; ++ + /* Linux ND commands == NVDIMM_FAMILY_INTEL function numbers */ + if (nfit_mem->family == NVDIMM_FAMILY_INTEL) + return cmd; +@@ -454,17 +458,18 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, + if (cmd_rc) + *cmd_rc = -EINVAL; + ++ if (cmd == ND_CMD_CALL) ++ call_pkg = buf; ++ func = cmd_to_func(nfit_mem, cmd, call_pkg); ++ if (func < 0) ++ return func; ++ + if (nvdimm) { + struct acpi_device *adev = nfit_mem->adev; + + if (!adev) + return -ENOTTY; + +- if (cmd == ND_CMD_CALL) +- call_pkg = buf; +- func = cmd_to_func(nfit_mem, cmd, call_pkg); +- if (func < 0) +- return func; + dimm_name = nvdimm_name(nvdimm); + cmd_name = nvdimm_cmd_name(cmd); + cmd_mask = nvdimm_cmd_mask(nvdimm); +@@ -475,12 +480,9 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, + } else { + struct acpi_device *adev = to_acpi_dev(acpi_desc); + +- func = cmd; + cmd_name = nvdimm_bus_cmd_name(cmd); + cmd_mask = nd_desc->cmd_mask; +- dsm_mask = cmd_mask; +- if (cmd == ND_CMD_CALL) +- dsm_mask = nd_desc->bus_dsm_mask; ++ dsm_mask = nd_desc->bus_dsm_mask; + desc = nd_cmd_bus_desc(cmd); + guid = to_nfit_uuid(NFIT_DEV_BUS); + handle = adev->handle; +@@ -554,6 +556,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, + return -EINVAL; + } + ++ if (out_obj->type != ACPI_TYPE_BUFFER) { ++ dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n", ++ dimm_name, cmd_name, out_obj->type); ++ rc = -EINVAL; ++ goto out; ++ } ++ + if (call_pkg) { + call_pkg->nd_fw_size = out_obj->buffer.length; + memcpy(call_pkg->nd_payload + call_pkg->nd_size_in, +@@ -572,13 +581,6 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm, + return 0; + } + +- if (out_obj->package.type != ACPI_TYPE_BUFFER) { +- dev_dbg(dev, "%s unexpected output object type cmd: %s type: %d\n", +- dimm_name, cmd_name, out_obj->type); +- rc = -EINVAL; +- goto out; +- } +- + dev_dbg(dev, "%s cmd: %s output length: %d\n", dimm_name, + cmd_name, out_obj->buffer.length); + print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4, 4, +@@ -1759,14 +1761,14 @@ static bool acpi_nvdimm_has_method(struct acpi_device *adev, char *method) + + __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem) + { ++ struct device *dev = &nfit_mem->adev->dev; + struct nd_intel_smart smart = { 0 }; + union acpi_object in_buf = { +- .type = ACPI_TYPE_BUFFER, +- .buffer.pointer = (char *) &smart, +- .buffer.length = sizeof(smart), ++ .buffer.type = ACPI_TYPE_BUFFER, ++ .buffer.length = 0, + }; + union acpi_object in_obj = { +- 
.type = ACPI_TYPE_PACKAGE, ++ .package.type = ACPI_TYPE_PACKAGE, + .package.count = 1, + .package.elements = &in_buf, + }; +@@ -1781,8 +1783,15 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem) + return; + + out_obj = acpi_evaluate_dsm(handle, guid, revid, func, &in_obj); +- if (!out_obj) ++ if (!out_obj || out_obj->type != ACPI_TYPE_BUFFER ++ || out_obj->buffer.length < sizeof(smart)) { ++ dev_dbg(dev->parent, "%s: failed to retrieve initial health\n", ++ dev_name(dev)); ++ ACPI_FREE(out_obj); + return; ++ } ++ memcpy(&smart, out_obj->buffer.pointer, sizeof(smart)); ++ ACPI_FREE(out_obj); + + if (smart.flags & ND_INTEL_SMART_SHUTDOWN_VALID) { + if (smart.shutdown_state) +@@ -1793,7 +1802,6 @@ __weak void nfit_intel_shutdown_status(struct nfit_mem *nfit_mem) + set_bit(NFIT_MEM_DIRTY_COUNT, &nfit_mem->flags); + nfit_mem->dirty_shutdown = smart.shutdown_count; + } +- ACPI_FREE(out_obj); + } + + static void populate_shutdown_status(struct nfit_mem *nfit_mem) +@@ -1915,18 +1923,19 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc, + | 1 << ND_CMD_SET_CONFIG_DATA; + if (family == NVDIMM_FAMILY_INTEL + && (dsm_mask & label_mask) == label_mask) +- return 0; +- +- if (acpi_nvdimm_has_method(adev_dimm, "_LSI") +- && acpi_nvdimm_has_method(adev_dimm, "_LSR")) { +- dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev)); +- set_bit(NFIT_MEM_LSR, &nfit_mem->flags); +- } ++ /* skip _LS{I,R,W} enabling */; ++ else { ++ if (acpi_nvdimm_has_method(adev_dimm, "_LSI") ++ && acpi_nvdimm_has_method(adev_dimm, "_LSR")) { ++ dev_dbg(dev, "%s: has _LSR\n", dev_name(&adev_dimm->dev)); ++ set_bit(NFIT_MEM_LSR, &nfit_mem->flags); ++ } + +- if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags) +- && acpi_nvdimm_has_method(adev_dimm, "_LSW")) { +- dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev)); +- set_bit(NFIT_MEM_LSW, &nfit_mem->flags); ++ if (test_bit(NFIT_MEM_LSR, &nfit_mem->flags) ++ && acpi_nvdimm_has_method(adev_dimm, "_LSW")) { ++ dev_dbg(dev, "%s: has _LSW\n", dev_name(&adev_dimm->dev)); ++ set_bit(NFIT_MEM_LSW, &nfit_mem->flags); ++ } + } + + populate_shutdown_status(nfit_mem); +@@ -3004,14 +3013,16 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc, + { + int rc; + +- if (no_init_ars || test_bit(ARS_FAILED, &nfit_spa->ars_state)) ++ if (test_bit(ARS_FAILED, &nfit_spa->ars_state)) + return acpi_nfit_register_region(acpi_desc, nfit_spa); + + set_bit(ARS_REQ_SHORT, &nfit_spa->ars_state); +- set_bit(ARS_REQ_LONG, &nfit_spa->ars_state); ++ if (!no_init_ars) ++ set_bit(ARS_REQ_LONG, &nfit_spa->ars_state); + + switch (acpi_nfit_query_poison(acpi_desc)) { + case 0: ++ case -ENOSPC: + case -EAGAIN: + rc = ars_start(acpi_desc, nfit_spa, ARS_REQ_SHORT); + /* shouldn't happen, try again later */ +@@ -3036,7 +3047,6 @@ static int ars_register(struct acpi_nfit_desc *acpi_desc, + break; + case -EBUSY: + case -ENOMEM: +- case -ENOSPC: + /* + * BIOS was using ARS, wait for it to complete (or + * resources to become available) and then perform our +diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c +index 5fa1898755a3..7c84f64c74f7 100644 +--- a/drivers/base/power/wakeup.c ++++ b/drivers/base/power/wakeup.c +@@ -118,7 +118,6 @@ void wakeup_source_drop(struct wakeup_source *ws) + if (!ws) + return; + +- del_timer_sync(&ws->timer); + __pm_relax(ws); + } + EXPORT_SYMBOL_GPL(wakeup_source_drop); +@@ -205,6 +204,13 @@ void wakeup_source_remove(struct wakeup_source *ws) + list_del_rcu(&ws->entry); + raw_spin_unlock_irqrestore(&events_lock, flags); + 
synchronize_srcu(&wakeup_srcu); ++ ++ del_timer_sync(&ws->timer); ++ /* ++ * Clear timer.function to make wakeup_source_not_registered() treat ++ * this wakeup source as not registered. ++ */ ++ ws->timer.function = NULL; + } + EXPORT_SYMBOL_GPL(wakeup_source_remove); + +diff --git a/drivers/char/ipmi/ipmi_si.h b/drivers/char/ipmi/ipmi_si.h +index 52f6152d1fcb..7ae52c17618e 100644 +--- a/drivers/char/ipmi/ipmi_si.h ++++ b/drivers/char/ipmi/ipmi_si.h +@@ -25,7 +25,9 @@ void ipmi_irq_finish_setup(struct si_sm_io *io); + int ipmi_si_remove_by_dev(struct device *dev); + void ipmi_si_remove_by_data(int addr_space, enum si_type si_type, + unsigned long addr); +-int ipmi_si_hardcode_find_bmc(void); ++void ipmi_hardcode_init(void); ++void ipmi_si_hardcode_exit(void); ++int ipmi_si_hardcode_match(int addr_type, unsigned long addr); + void ipmi_si_platform_init(void); + void ipmi_si_platform_shutdown(void); + +diff --git a/drivers/char/ipmi/ipmi_si_hardcode.c b/drivers/char/ipmi/ipmi_si_hardcode.c +index 487642809c58..1e5783961b0d 100644 +--- a/drivers/char/ipmi/ipmi_si_hardcode.c ++++ b/drivers/char/ipmi/ipmi_si_hardcode.c +@@ -3,6 +3,7 @@ + #define pr_fmt(fmt) "ipmi_hardcode: " fmt + + #include ++#include + #include "ipmi_si.h" + + /* +@@ -12,23 +13,22 @@ + + #define SI_MAX_PARMS 4 + +-static char *si_type[SI_MAX_PARMS]; + #define MAX_SI_TYPE_STR 30 +-static char si_type_str[MAX_SI_TYPE_STR]; ++static char si_type_str[MAX_SI_TYPE_STR] __initdata; + static unsigned long addrs[SI_MAX_PARMS]; + static unsigned int num_addrs; + static unsigned int ports[SI_MAX_PARMS]; + static unsigned int num_ports; +-static int irqs[SI_MAX_PARMS]; +-static unsigned int num_irqs; +-static int regspacings[SI_MAX_PARMS]; +-static unsigned int num_regspacings; +-static int regsizes[SI_MAX_PARMS]; +-static unsigned int num_regsizes; +-static int regshifts[SI_MAX_PARMS]; +-static unsigned int num_regshifts; +-static int slave_addrs[SI_MAX_PARMS]; /* Leaving 0 chooses the default value */ +-static unsigned int num_slave_addrs; ++static int irqs[SI_MAX_PARMS] __initdata; ++static unsigned int num_irqs __initdata; ++static int regspacings[SI_MAX_PARMS] __initdata; ++static unsigned int num_regspacings __initdata; ++static int regsizes[SI_MAX_PARMS] __initdata; ++static unsigned int num_regsizes __initdata; ++static int regshifts[SI_MAX_PARMS] __initdata; ++static unsigned int num_regshifts __initdata; ++static int slave_addrs[SI_MAX_PARMS] __initdata; ++static unsigned int num_slave_addrs __initdata; + + module_param_string(type, si_type_str, MAX_SI_TYPE_STR, 0); + MODULE_PARM_DESC(type, "Defines the type of each interface, each" +@@ -73,12 +73,133 @@ MODULE_PARM_DESC(slave_addrs, "Set the default IPMB slave address for" + " overridden by this parm. 
This is an array indexed" + " by interface number."); + +-int ipmi_si_hardcode_find_bmc(void) ++static struct platform_device *ipmi_hc_pdevs[SI_MAX_PARMS]; ++ ++static void __init ipmi_hardcode_init_one(const char *si_type_str, ++ unsigned int i, ++ unsigned long addr, ++ unsigned int flags) + { +- int ret = -ENODEV; +- int i; +- struct si_sm_io io; ++ struct platform_device *pdev; ++ unsigned int num_r = 1, size; ++ struct resource r[4]; ++ struct property_entry p[6]; ++ enum si_type si_type; ++ unsigned int regspacing, regsize; ++ int rv; ++ ++ memset(p, 0, sizeof(p)); ++ memset(r, 0, sizeof(r)); ++ ++ if (!si_type_str || !*si_type_str || strcmp(si_type_str, "kcs") == 0) { ++ size = 2; ++ si_type = SI_KCS; ++ } else if (strcmp(si_type_str, "smic") == 0) { ++ size = 2; ++ si_type = SI_SMIC; ++ } else if (strcmp(si_type_str, "bt") == 0) { ++ size = 3; ++ si_type = SI_BT; ++ } else if (strcmp(si_type_str, "invalid") == 0) { ++ /* ++ * Allow a firmware-specified interface to be ++ * disabled. ++ */ ++ size = 1; ++ si_type = SI_TYPE_INVALID; ++ } else { ++ pr_warn("Interface type specified for interface %d, was invalid: %s\n", ++ i, si_type_str); ++ return; ++ } ++ ++ regsize = regsizes[i]; ++ if (regsize == 0) ++ regsize = DEFAULT_REGSIZE; ++ ++ p[0] = PROPERTY_ENTRY_U8("ipmi-type", si_type); ++ p[1] = PROPERTY_ENTRY_U8("slave-addr", slave_addrs[i]); ++ p[2] = PROPERTY_ENTRY_U8("addr-source", SI_HARDCODED); ++ p[3] = PROPERTY_ENTRY_U8("reg-shift", regshifts[i]); ++ p[4] = PROPERTY_ENTRY_U8("reg-size", regsize); ++ /* Last entry must be left NULL to terminate it. */ ++ ++ /* ++ * Register spacing is derived from the resources in ++ * the IPMI platform code. ++ */ ++ regspacing = regspacings[i]; ++ if (regspacing == 0) ++ regspacing = regsize; ++ ++ r[0].start = addr; ++ r[0].end = r[0].start + regsize - 1; ++ r[0].name = "IPMI Address 1"; ++ r[0].flags = flags; ++ ++ if (size > 1) { ++ r[1].start = r[0].start + regspacing; ++ r[1].end = r[1].start + regsize - 1; ++ r[1].name = "IPMI Address 2"; ++ r[1].flags = flags; ++ num_r++; ++ } ++ ++ if (size > 2) { ++ r[2].start = r[1].start + regspacing; ++ r[2].end = r[2].start + regsize - 1; ++ r[2].name = "IPMI Address 3"; ++ r[2].flags = flags; ++ num_r++; ++ } ++ ++ if (irqs[i]) { ++ r[num_r].start = irqs[i]; ++ r[num_r].end = irqs[i]; ++ r[num_r].name = "IPMI IRQ"; ++ r[num_r].flags = IORESOURCE_IRQ; ++ num_r++; ++ } ++ ++ pdev = platform_device_alloc("hardcode-ipmi-si", i); ++ if (!pdev) { ++ pr_err("Error allocating IPMI platform device %d\n", i); ++ return; ++ } ++ ++ rv = platform_device_add_resources(pdev, r, num_r); ++ if (rv) { ++ dev_err(&pdev->dev, ++ "Unable to add hard-code resources: %d\n", rv); ++ goto err; ++ } ++ ++ rv = platform_device_add_properties(pdev, p); ++ if (rv) { ++ dev_err(&pdev->dev, ++ "Unable to add hard-code properties: %d\n", rv); ++ goto err; ++ } ++ ++ rv = platform_device_add(pdev); ++ if (rv) { ++ dev_err(&pdev->dev, ++ "Unable to add hard-code device: %d\n", rv); ++ goto err; ++ } ++ ++ ipmi_hc_pdevs[i] = pdev; ++ return; ++ ++err: ++ platform_device_put(pdev); ++} ++ ++void __init ipmi_hardcode_init(void) ++{ ++ unsigned int i; + char *str; ++ char *si_type[SI_MAX_PARMS]; + + /* Parse out the si_type string into its components. 
*/ + str = si_type_str; +@@ -95,54 +216,45 @@ int ipmi_si_hardcode_find_bmc(void) + } + } + +- memset(&io, 0, sizeof(io)); + for (i = 0; i < SI_MAX_PARMS; i++) { +- if (!ports[i] && !addrs[i]) +- continue; +- +- io.addr_source = SI_HARDCODED; +- pr_info("probing via hardcoded address\n"); +- +- if (!si_type[i] || strcmp(si_type[i], "kcs") == 0) { +- io.si_type = SI_KCS; +- } else if (strcmp(si_type[i], "smic") == 0) { +- io.si_type = SI_SMIC; +- } else if (strcmp(si_type[i], "bt") == 0) { +- io.si_type = SI_BT; +- } else { +- pr_warn("Interface type specified for interface %d, was invalid: %s\n", +- i, si_type[i]); +- continue; +- } ++ if (i < num_ports && ports[i]) ++ ipmi_hardcode_init_one(si_type[i], i, ports[i], ++ IORESOURCE_IO); ++ if (i < num_addrs && addrs[i]) ++ ipmi_hardcode_init_one(si_type[i], i, addrs[i], ++ IORESOURCE_MEM); ++ } ++} + +- if (ports[i]) { +- /* An I/O port */ +- io.addr_data = ports[i]; +- io.addr_type = IPMI_IO_ADDR_SPACE; +- } else if (addrs[i]) { +- /* A memory port */ +- io.addr_data = addrs[i]; +- io.addr_type = IPMI_MEM_ADDR_SPACE; +- } else { +- pr_warn("Interface type specified for interface %d, but port and address were not set or set to zero\n", +- i); +- continue; +- } ++void ipmi_si_hardcode_exit(void) ++{ ++ unsigned int i; + +- io.addr = NULL; +- io.regspacing = regspacings[i]; +- if (!io.regspacing) +- io.regspacing = DEFAULT_REGSPACING; +- io.regsize = regsizes[i]; +- if (!io.regsize) +- io.regsize = DEFAULT_REGSIZE; +- io.regshift = regshifts[i]; +- io.irq = irqs[i]; +- if (io.irq) +- io.irq_setup = ipmi_std_irq_setup; +- io.slave_addr = slave_addrs[i]; +- +- ret = ipmi_si_add_smi(&io); ++ for (i = 0; i < SI_MAX_PARMS; i++) { ++ if (ipmi_hc_pdevs[i]) ++ platform_device_unregister(ipmi_hc_pdevs[i]); + } +- return ret; ++} ++ ++/* ++ * Returns true of the given address exists as a hardcoded address, ++ * false if not. ++ */ ++int ipmi_si_hardcode_match(int addr_type, unsigned long addr) ++{ ++ unsigned int i; ++ ++ if (addr_type == IPMI_IO_ADDR_SPACE) { ++ for (i = 0; i < num_ports; i++) { ++ if (ports[i] == addr) ++ return 1; ++ } ++ } else { ++ for (i = 0; i < num_addrs; i++) { ++ if (addrs[i] == addr) ++ return 1; ++ } ++ } ++ ++ return 0; + } +diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c +index dc8603d34320..5294abc4c96c 100644 +--- a/drivers/char/ipmi/ipmi_si_intf.c ++++ b/drivers/char/ipmi/ipmi_si_intf.c +@@ -1862,6 +1862,18 @@ int ipmi_si_add_smi(struct si_sm_io *io) + int rv = 0; + struct smi_info *new_smi, *dup; + ++ /* ++ * If the user gave us a hard-coded device at the same ++ * address, they presumably want us to use it and not what is ++ * in the firmware. 
++ */ ++ if (io->addr_source != SI_HARDCODED && ++ ipmi_si_hardcode_match(io->addr_type, io->addr_data)) { ++ dev_info(io->dev, ++ "Hard-coded device at this address already exists"); ++ return -ENODEV; ++ } ++ + if (!io->io_setup) { + if (io->addr_type == IPMI_IO_ADDR_SPACE) { + io->io_setup = ipmi_si_port_setup; +@@ -2085,11 +2097,16 @@ static int try_smi_init(struct smi_info *new_smi) + WARN_ON(new_smi->io.dev->init_name != NULL); + + out_err: ++ if (rv && new_smi->io.io_cleanup) { ++ new_smi->io.io_cleanup(&new_smi->io); ++ new_smi->io.io_cleanup = NULL; ++ } ++ + kfree(init_name); + return rv; + } + +-static int init_ipmi_si(void) ++static int __init init_ipmi_si(void) + { + struct smi_info *e; + enum ipmi_addr_src type = SI_INVALID; +@@ -2097,11 +2114,9 @@ static int init_ipmi_si(void) + if (initialized) + return 0; + +- pr_info("IPMI System Interface driver\n"); ++ ipmi_hardcode_init(); + +- /* If the user gave us a device, they presumably want us to use it */ +- if (!ipmi_si_hardcode_find_bmc()) +- goto do_scan; ++ pr_info("IPMI System Interface driver\n"); + + ipmi_si_platform_init(); + +@@ -2113,7 +2128,6 @@ static int init_ipmi_si(void) + with multiple BMCs we assume that there will be several instances + of a given type so if we succeed in registering a type then also + try to register everything else of the same type */ +-do_scan: + mutex_lock(&smi_infos_lock); + list_for_each_entry(e, &smi_infos, link) { + /* Try to register a device if it has an IRQ and we either +@@ -2299,6 +2313,8 @@ static void cleanup_ipmi_si(void) + list_for_each_entry_safe(e, tmp_e, &smi_infos, link) + cleanup_one_si(e); + mutex_unlock(&smi_infos_lock); ++ ++ ipmi_si_hardcode_exit(); + } + module_exit(cleanup_ipmi_si); + +diff --git a/drivers/char/ipmi/ipmi_si_mem_io.c b/drivers/char/ipmi/ipmi_si_mem_io.c +index fd0ec8d6bf0e..75583612ab10 100644 +--- a/drivers/char/ipmi/ipmi_si_mem_io.c ++++ b/drivers/char/ipmi/ipmi_si_mem_io.c +@@ -81,8 +81,6 @@ int ipmi_si_mem_setup(struct si_sm_io *io) + if (!addr) + return -ENODEV; + +- io->io_cleanup = mem_cleanup; +- + /* + * Figure out the actual readb/readw/readl/etc routine to use based + * upon the register size. 
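
The two ipmi_si_mem_io.c hunks here make one coordinated change: the hunk above removes the early io_cleanup assignment, and the hunk just below re-adds it after the last point that can fail, so the callback is published only once setup has fully succeeded. Together with the try_smi_init() hunk earlier, which invokes and then NULLs io_cleanup on its error path, this guarantees cleanup code never runs against a half-initialized io. A minimal sketch of the idiom, assuming hypothetical helpers (claim_region/release_region_cb are illustrative names, not the driver's own functions):

    /* illustrative helpers -- not the driver's real ones */
    static int claim_region(struct si_sm_io *io);
    static void release_region_cb(struct si_sm_io *io);

    static int example_mem_setup(struct si_sm_io *io)
    {
            if (claim_region(io))
                    return -EIO;    /* nothing published, nothing to undo */

            io->io_cleanup = release_region_cb;  /* publish only on success */
            return 0;
    }
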
+@@ -141,5 +139,8 @@ int ipmi_si_mem_setup(struct si_sm_io *io) + mem_region_cleanup(io, io->io_size); + return -EIO; + } ++ ++ io->io_cleanup = mem_cleanup; ++ + return 0; + } +diff --git a/drivers/char/ipmi/ipmi_si_platform.c b/drivers/char/ipmi/ipmi_si_platform.c +index 15cf819f884f..8158d03542f4 100644 +--- a/drivers/char/ipmi/ipmi_si_platform.c ++++ b/drivers/char/ipmi/ipmi_si_platform.c +@@ -128,8 +128,6 @@ ipmi_get_info_from_resources(struct platform_device *pdev, + if (res_second->start > io->addr_data) + io->regspacing = res_second->start - io->addr_data; + } +- io->regsize = DEFAULT_REGSIZE; +- io->regshift = 0; + + return res; + } +@@ -137,7 +135,7 @@ ipmi_get_info_from_resources(struct platform_device *pdev, + static int platform_ipmi_probe(struct platform_device *pdev) + { + struct si_sm_io io; +- u8 type, slave_addr, addr_source; ++ u8 type, slave_addr, addr_source, regsize, regshift; + int rv; + + rv = device_property_read_u8(&pdev->dev, "addr-source", &addr_source); +@@ -149,7 +147,7 @@ static int platform_ipmi_probe(struct platform_device *pdev) + if (addr_source == SI_SMBIOS) { + if (!si_trydmi) + return -ENODEV; +- } else { ++ } else if (addr_source != SI_HARDCODED) { + if (!si_tryplatform) + return -ENODEV; + } +@@ -169,11 +167,23 @@ static int platform_ipmi_probe(struct platform_device *pdev) + case SI_BT: + io.si_type = type; + break; ++ case SI_TYPE_INVALID: /* User disabled this in hardcode. */ ++ return -ENODEV; + default: + dev_err(&pdev->dev, "ipmi-type property is invalid\n"); + return -EINVAL; + } + ++ io.regsize = DEFAULT_REGSIZE; ++ rv = device_property_read_u8(&pdev->dev, "reg-size", ®size); ++ if (!rv) ++ io.regsize = regsize; ++ ++ io.regshift = 0; ++ rv = device_property_read_u8(&pdev->dev, "reg-shift", ®shift); ++ if (!rv) ++ io.regshift = regshift; ++ + if (!ipmi_get_info_from_resources(pdev, &io)) + return -EINVAL; + +@@ -193,7 +203,8 @@ static int platform_ipmi_probe(struct platform_device *pdev) + + io.dev = &pdev->dev; + +- pr_info("ipmi_si: SMBIOS: %s %#lx regsize %d spacing %d irq %d\n", ++ pr_info("ipmi_si: %s: %s %#lx regsize %d spacing %d irq %d\n", ++ ipmi_addr_src_to_str(addr_source), + (io.addr_type == IPMI_IO_ADDR_SPACE) ? "io" : "mem", + io.addr_data, io.regsize, io.regspacing, io.irq); + +@@ -358,6 +369,9 @@ static int acpi_ipmi_probe(struct platform_device *pdev) + goto err_free; + } + ++ io.regsize = DEFAULT_REGSIZE; ++ io.regshift = 0; ++ + res = ipmi_get_info_from_resources(pdev, &io); + if (!res) { + rv = -EINVAL; +@@ -420,8 +434,9 @@ static int ipmi_remove(struct platform_device *pdev) + } + + static const struct platform_device_id si_plat_ids[] = { +- { "dmi-ipmi-si", 0 }, +- { } ++ { "dmi-ipmi-si", 0 }, ++ { "hardcode-ipmi-si", 0 }, ++ { } + }; + + struct platform_driver ipmi_platform_driver = { +diff --git a/drivers/char/ipmi/ipmi_si_port_io.c b/drivers/char/ipmi/ipmi_si_port_io.c +index ef6dffcea9fa..03924c32b6e9 100644 +--- a/drivers/char/ipmi/ipmi_si_port_io.c ++++ b/drivers/char/ipmi/ipmi_si_port_io.c +@@ -68,8 +68,6 @@ int ipmi_si_port_setup(struct si_sm_io *io) + if (!addr) + return -ENODEV; + +- io->io_cleanup = port_cleanup; +- + /* + * Figure out the actual inb/inw/inl/etc routine to use based + * upon the register size. 
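
The ipmi_si_platform.c hunks above replace platform_ipmi_probe()'s hard-wired register geometry with optional device properties, using a preload-then-override pattern: the default is set first and only replaced when the property read succeeds (device_property_read_u8() returns 0 on success). This is needed because the reworked hardcode module now supplies exactly these "reg-size"/"reg-shift" properties via PROPERTY_ENTRY_U8(). Condensed from the patch:

    u8 regsize, regshift;

    io.regsize = DEFAULT_REGSIZE;           /* fallback first */
    if (!device_property_read_u8(&pdev->dev, "reg-size", &regsize))
            io.regsize = regsize;           /* property wins when present */

    io.regshift = 0;
    if (!device_property_read_u8(&pdev->dev, "reg-shift", &regshift))
            io.regshift = regshift;
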
+@@ -109,5 +107,8 @@ int ipmi_si_port_setup(struct si_sm_io *io)
+ return -EIO;
+ }
+ }
++
++ io->io_cleanup = port_cleanup;
++
+ return 0;
+ }
+diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c
+index 64dc560859f2..13dc614b7ebc 100644
+--- a/drivers/char/tpm/st33zp24/st33zp24.c
++++ b/drivers/char/tpm/st33zp24/st33zp24.c
+@@ -436,7 +436,7 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,
+ goto out_err;
+ }
+
+- return len;
++ return 0;
+ out_err:
+ st33zp24_cancel(chip);
+ release_locality(chip);
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index d9439f9abe78..88d2e01a651d 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -230,10 +230,19 @@ static ssize_t tpm_try_transmit(struct tpm_chip *chip,
+ if (rc < 0) {
+ if (rc != -EPIPE)
+ dev_err(&chip->dev,
+- "%s: tpm_send: error %d\n", __func__, rc);
++ "%s: send(): error %d\n", __func__, rc);
+ goto out;
+ }
+
++ /* A sanity check. send() should just return zero on success e.g.
++ * not the command length.
++ */
++ if (rc > 0) {
++ dev_warn(&chip->dev,
++ "%s: send(): invalid value %d\n", __func__, rc);
++ rc = 0;
++ }
++
+ if (chip->flags & TPM_CHIP_FLAG_IRQ)
+ goto out_recv;
+
+diff --git a/drivers/char/tpm/tpm_atmel.c b/drivers/char/tpm/tpm_atmel.c
+index 66a14526aaf4..a290b30a0c35 100644
+--- a/drivers/char/tpm/tpm_atmel.c
++++ b/drivers/char/tpm/tpm_atmel.c
+@@ -105,7 +105,7 @@ static int tpm_atml_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ iowrite8(buf[i], priv->iobase);
+ }
+
+- return count;
++ return 0;
+ }
+
+ static void tpm_atml_cancel(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index 36952ef98f90..763fc7e6c005 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -287,19 +287,29 @@ static int crb_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ struct crb_priv *priv = dev_get_drvdata(&chip->dev);
+ unsigned int expected;
+
+- /* sanity check */
+- if (count < 6)
++ /* A sanity check that the upper layer wants to get at least the header
++ * as that is the minimum size for any TPM response.
++ */
++ if (count < TPM_HEADER_SIZE)
+ return -EIO;
+
++ /* If this bit is set, according to the spec, the TPM is in
++ * unrecoverable condition.
++ */
+ if (ioread32(&priv->regs_t->ctrl_sts) & CRB_CTRL_STS_ERROR)
+ return -EIO;
+
+- memcpy_fromio(buf, priv->rsp, 6);
+- expected = be32_to_cpup((__be32 *) &buf[2]);
+- if (expected > count || expected < 6)
++ /* Read the first 8 bytes in order to get the length of the response.
++ * We read exactly a quad word in order to make sure that the remaining
++ * reads will be aligned.
++ */ ++ memcpy_fromio(buf, priv->rsp, 8); ++ ++ expected = be32_to_cpup((__be32 *)&buf[2]); ++ if (expected > count || expected < TPM_HEADER_SIZE) + return -EIO; + +- memcpy_fromio(&buf[6], &priv->rsp[6], expected - 6); ++ memcpy_fromio(&buf[8], &priv->rsp[8], expected - 8); + + return expected; + } +diff --git a/drivers/char/tpm/tpm_i2c_atmel.c b/drivers/char/tpm/tpm_i2c_atmel.c +index 95ce2e9ccdc6..32a8e27c5382 100644 +--- a/drivers/char/tpm/tpm_i2c_atmel.c ++++ b/drivers/char/tpm/tpm_i2c_atmel.c +@@ -65,7 +65,11 @@ static int i2c_atmel_send(struct tpm_chip *chip, u8 *buf, size_t len) + dev_dbg(&chip->dev, + "%s(buf=%*ph len=%0zx) -> sts=%d\n", __func__, + (int)min_t(size_t, 64, len), buf, len, status); +- return status; ++ ++ if (status < 0) ++ return status; ++ ++ return 0; + } + + static int i2c_atmel_recv(struct tpm_chip *chip, u8 *buf, size_t count) +diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c +index 9086edc9066b..977fd42daa1b 100644 +--- a/drivers/char/tpm/tpm_i2c_infineon.c ++++ b/drivers/char/tpm/tpm_i2c_infineon.c +@@ -587,7 +587,7 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len) + /* go and do it */ + iic_tpm_write(TPM_STS(tpm_dev.locality), &sts, 1); + +- return len; ++ return 0; + out_err: + tpm_tis_i2c_ready(chip); + /* The TPM needs some time to clean up here, +diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c +index 217f7f1cbde8..058220edb8b3 100644 +--- a/drivers/char/tpm/tpm_i2c_nuvoton.c ++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c +@@ -467,7 +467,7 @@ static int i2c_nuvoton_send(struct tpm_chip *chip, u8 *buf, size_t len) + } + + dev_dbg(dev, "%s() -> %zd\n", __func__, len); +- return len; ++ return 0; + } + + static bool i2c_nuvoton_req_canceled(struct tpm_chip *chip, u8 status) +diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c +index 07b5a487d0c8..757ca45b39b8 100644 +--- a/drivers/char/tpm/tpm_ibmvtpm.c ++++ b/drivers/char/tpm/tpm_ibmvtpm.c +@@ -139,14 +139,14 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count) + } + + /** +- * tpm_ibmvtpm_send - Send tpm request +- * ++ * tpm_ibmvtpm_send() - Send a TPM command + * @chip: tpm chip struct + * @buf: buffer contains data to send + * @count: size of buffer + * + * Return: +- * Number of bytes sent or < 0 on error. 
++ * 0 on success, ++ * -errno on error + */ + static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + { +@@ -192,7 +192,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + rc = 0; + ibmvtpm->tpm_processing_cmd = false; + } else +- rc = count; ++ rc = 0; + + spin_unlock(&ibmvtpm->rtce_lock); + return rc; +diff --git a/drivers/char/tpm/tpm_infineon.c b/drivers/char/tpm/tpm_infineon.c +index d8f10047fbba..97f6d4fe0aee 100644 +--- a/drivers/char/tpm/tpm_infineon.c ++++ b/drivers/char/tpm/tpm_infineon.c +@@ -354,7 +354,7 @@ static int tpm_inf_send(struct tpm_chip *chip, u8 * buf, size_t count) + for (i = 0; i < count; i++) { + wait_and_send(chip, buf[i]); + } +- return count; ++ return 0; + } + + static void tpm_inf_cancel(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/tpm_nsc.c b/drivers/char/tpm/tpm_nsc.c +index 5d6cce74cd3f..9bee3c5eb4bf 100644 +--- a/drivers/char/tpm/tpm_nsc.c ++++ b/drivers/char/tpm/tpm_nsc.c +@@ -226,7 +226,7 @@ static int tpm_nsc_send(struct tpm_chip *chip, u8 * buf, size_t count) + } + outb(NSC_COMMAND_EOC, priv->base + NSC_COMMAND); + +- return count; ++ return 0; + } + + static void tpm_nsc_cancel(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c +index bf7e49cfa643..bb0c2e160562 100644 +--- a/drivers/char/tpm/tpm_tis_core.c ++++ b/drivers/char/tpm/tpm_tis_core.c +@@ -481,7 +481,7 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len) + goto out_err; + } + } +- return len; ++ return 0; + out_err: + tpm_tis_ready(chip); + return rc; +diff --git a/drivers/char/tpm/tpm_vtpm_proxy.c b/drivers/char/tpm/tpm_vtpm_proxy.c +index 87a0ce47f201..ecbb63f8d231 100644 +--- a/drivers/char/tpm/tpm_vtpm_proxy.c ++++ b/drivers/char/tpm/tpm_vtpm_proxy.c +@@ -335,7 +335,6 @@ static int vtpm_proxy_is_driver_command(struct tpm_chip *chip, + static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count) + { + struct proxy_dev *proxy_dev = dev_get_drvdata(&chip->dev); +- int rc = 0; + + if (count > sizeof(proxy_dev->buffer)) { + dev_err(&chip->dev, +@@ -366,7 +365,7 @@ static int vtpm_proxy_tpm_op_send(struct tpm_chip *chip, u8 *buf, size_t count) + + wake_up_interruptible(&proxy_dev->wq); + +- return rc; ++ return 0; + } + + static void vtpm_proxy_tpm_op_cancel(struct tpm_chip *chip) +diff --git a/drivers/char/tpm/xen-tpmfront.c b/drivers/char/tpm/xen-tpmfront.c +index b150f87f38f5..5a327eb7f63a 100644 +--- a/drivers/char/tpm/xen-tpmfront.c ++++ b/drivers/char/tpm/xen-tpmfront.c +@@ -173,7 +173,7 @@ static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + return -ETIME; + } + +- return count; ++ return 0; + } + + static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count) +diff --git a/drivers/clk/clk-twl6040.c b/drivers/clk/clk-twl6040.c +index ea846f77750b..0cad5748bf0e 100644 +--- a/drivers/clk/clk-twl6040.c ++++ b/drivers/clk/clk-twl6040.c +@@ -41,6 +41,43 @@ static int twl6040_pdmclk_is_prepared(struct clk_hw *hw) + return pdmclk->enabled; + } + ++static int twl6040_pdmclk_reset_one_clock(struct twl6040_pdmclk *pdmclk, ++ unsigned int reg) ++{ ++ const u8 reset_mask = TWL6040_HPLLRST; /* Same for HPPLL and LPPLL */ ++ int ret; ++ ++ ret = twl6040_set_bits(pdmclk->twl6040, reg, reset_mask); ++ if (ret < 0) ++ return ret; ++ ++ ret = twl6040_clear_bits(pdmclk->twl6040, reg, reset_mask); ++ if (ret < 0) ++ return ret; ++ ++ return 0; ++} ++ ++/* ++ * TWL6040A2 Phoenix Audio IC erratum #6: "PDM Clock Generation Issue At 
++ * Cold Temperature". This affects cold boot and deeper idle states it ++ * seems. The workaround consists of resetting HPPLL and LPPLL. ++ */ ++static int twl6040_pdmclk_quirk_reset_clocks(struct twl6040_pdmclk *pdmclk) ++{ ++ int ret; ++ ++ ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_HPPLLCTL); ++ if (ret) ++ return ret; ++ ++ ret = twl6040_pdmclk_reset_one_clock(pdmclk, TWL6040_REG_LPPLLCTL); ++ if (ret) ++ return ret; ++ ++ return 0; ++} ++ + static int twl6040_pdmclk_prepare(struct clk_hw *hw) + { + struct twl6040_pdmclk *pdmclk = container_of(hw, struct twl6040_pdmclk, +@@ -48,8 +85,20 @@ static int twl6040_pdmclk_prepare(struct clk_hw *hw) + int ret; + + ret = twl6040_power(pdmclk->twl6040, 1); +- if (!ret) +- pdmclk->enabled = 1; ++ if (ret) ++ return ret; ++ ++ ret = twl6040_pdmclk_quirk_reset_clocks(pdmclk); ++ if (ret) ++ goto out_err; ++ ++ pdmclk->enabled = 1; ++ ++ return 0; ++ ++out_err: ++ dev_err(pdmclk->dev, "%s: error %i\n", __func__, ret); ++ twl6040_power(pdmclk->twl6040, 0); + + return ret; + } +diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c +index 5ef7d9ba2195..b40160eb3372 100644 +--- a/drivers/clk/ingenic/cgu.c ++++ b/drivers/clk/ingenic/cgu.c +@@ -426,16 +426,16 @@ ingenic_clk_round_rate(struct clk_hw *hw, unsigned long req_rate, + struct ingenic_clk *ingenic_clk = to_ingenic_clk(hw); + struct ingenic_cgu *cgu = ingenic_clk->cgu; + const struct ingenic_cgu_clk_info *clk_info; +- long rate = *parent_rate; ++ unsigned int div = 1; + + clk_info = &cgu->clock_info[ingenic_clk->idx]; + + if (clk_info->type & CGU_CLK_DIV) +- rate /= ingenic_clk_calc_div(clk_info, *parent_rate, req_rate); ++ div = ingenic_clk_calc_div(clk_info, *parent_rate, req_rate); + else if (clk_info->type & CGU_CLK_FIXDIV) +- rate /= clk_info->fixdiv.div; ++ div = clk_info->fixdiv.div; + +- return rate; ++ return DIV_ROUND_UP(*parent_rate, div); + } + + static int +@@ -455,7 +455,7 @@ ingenic_clk_set_rate(struct clk_hw *hw, unsigned long req_rate, + + if (clk_info->type & CGU_CLK_DIV) { + div = ingenic_clk_calc_div(clk_info, parent_rate, req_rate); +- rate = parent_rate / div; ++ rate = DIV_ROUND_UP(parent_rate, div); + + if (rate != req_rate) + return -EINVAL; +diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h +index 502bcbb61b04..e12716d8ce3c 100644 +--- a/drivers/clk/ingenic/cgu.h ++++ b/drivers/clk/ingenic/cgu.h +@@ -80,7 +80,7 @@ struct ingenic_cgu_mux_info { + * @reg: offset of the divider control register within the CGU + * @shift: number of bits to left shift the divide value by (ie. the index of + * the lowest bit of the divide value within its control register) +- * @div: number of bits to divide the divider value by (i.e. if the ++ * @div: number to divide the divider value by (i.e. 
if the + * effective divider value is the value written to the register + * multiplied by some constant) + * @bits: the size of the divide value in bits +diff --git a/drivers/clk/samsung/clk-exynos5-subcmu.c b/drivers/clk/samsung/clk-exynos5-subcmu.c +index 93306283d764..8ae44b5db4c2 100644 +--- a/drivers/clk/samsung/clk-exynos5-subcmu.c ++++ b/drivers/clk/samsung/clk-exynos5-subcmu.c +@@ -136,15 +136,20 @@ static int __init exynos5_clk_register_subcmu(struct device *parent, + { + struct of_phandle_args genpdspec = { .np = pd_node }; + struct platform_device *pdev; ++ int ret; ++ ++ pdev = platform_device_alloc("exynos5-subcmu", PLATFORM_DEVID_AUTO); ++ if (!pdev) ++ return -ENOMEM; + +- pdev = platform_device_alloc(info->pd_name, -1); + pdev->dev.parent = parent; +- pdev->driver_override = "exynos5-subcmu"; + platform_set_drvdata(pdev, (void *)info); + of_genpd_add_device(&genpdspec, &pdev->dev); +- platform_device_add(pdev); ++ ret = platform_device_add(pdev); ++ if (ret) ++ platform_device_put(pdev); + +- return 0; ++ return ret; + } + + static int __init exynos5_clk_probe(struct platform_device *pdev) +diff --git a/drivers/clk/uniphier/clk-uniphier-cpugear.c b/drivers/clk/uniphier/clk-uniphier-cpugear.c +index ec11f55594ad..5d2d42b7e182 100644 +--- a/drivers/clk/uniphier/clk-uniphier-cpugear.c ++++ b/drivers/clk/uniphier/clk-uniphier-cpugear.c +@@ -47,7 +47,7 @@ static int uniphier_clk_cpugear_set_parent(struct clk_hw *hw, u8 index) + return ret; + + ret = regmap_write_bits(gear->regmap, +- gear->regbase + UNIPHIER_CLK_CPUGEAR_SET, ++ gear->regbase + UNIPHIER_CLK_CPUGEAR_UPD, + UNIPHIER_CLK_CPUGEAR_UPD_BIT, + UNIPHIER_CLK_CPUGEAR_UPD_BIT); + if (ret) +diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig +index a9e26f6a81a1..8dfd3bc448d0 100644 +--- a/drivers/clocksource/Kconfig ++++ b/drivers/clocksource/Kconfig +@@ -360,6 +360,16 @@ config ARM64_ERRATUM_858921 + The workaround will be dynamically enabled when an affected + core is detected. + ++config SUN50I_ERRATUM_UNKNOWN1 ++ bool "Workaround for Allwinner A64 erratum UNKNOWN1" ++ default y ++ depends on ARM_ARCH_TIMER && ARM64 && ARCH_SUNXI ++ select ARM_ARCH_TIMER_OOL_WORKAROUND ++ help ++ This option enables a workaround for instability in the timer on ++ the Allwinner A64 SoC. The workaround will only be active if the ++ allwinner,erratum-unknown1 property is found in the timer node. ++ + config ARM_GLOBAL_TIMER + bool "Support for the ARM global timer" if COMPILE_TEST + select TIMER_OF if OF +diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c +index 9a7d4dc00b6e..a8b20b65bd4b 100644 +--- a/drivers/clocksource/arm_arch_timer.c ++++ b/drivers/clocksource/arm_arch_timer.c +@@ -326,6 +326,48 @@ static u64 notrace arm64_1188873_read_cntvct_el0(void) + } + #endif + ++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1 ++/* ++ * The low bits of the counter registers are indeterminate while bit 10 or ++ * greater is rolling over. Since the counter value can jump both backward ++ * (7ff -> 000 -> 800) and forward (7ff -> fff -> 800), ignore register values ++ * with all ones or all zeros in the low bits. Bound the loop by the maximum ++ * number of CPU cycles in 3 consecutive 24 MHz counter periods. 
++ */ ++#define __sun50i_a64_read_reg(reg) ({ \ ++ u64 _val; \ ++ int _retries = 150; \ ++ \ ++ do { \ ++ _val = read_sysreg(reg); \ ++ _retries--; \ ++ } while (((_val + 1) & GENMASK(9, 0)) <= 1 && _retries); \ ++ \ ++ WARN_ON_ONCE(!_retries); \ ++ _val; \ ++}) ++ ++static u64 notrace sun50i_a64_read_cntpct_el0(void) ++{ ++ return __sun50i_a64_read_reg(cntpct_el0); ++} ++ ++static u64 notrace sun50i_a64_read_cntvct_el0(void) ++{ ++ return __sun50i_a64_read_reg(cntvct_el0); ++} ++ ++static u32 notrace sun50i_a64_read_cntp_tval_el0(void) ++{ ++ return read_sysreg(cntp_cval_el0) - sun50i_a64_read_cntpct_el0(); ++} ++ ++static u32 notrace sun50i_a64_read_cntv_tval_el0(void) ++{ ++ return read_sysreg(cntv_cval_el0) - sun50i_a64_read_cntvct_el0(); ++} ++#endif ++ + #ifdef CONFIG_ARM_ARCH_TIMER_OOL_WORKAROUND + DEFINE_PER_CPU(const struct arch_timer_erratum_workaround *, timer_unstable_counter_workaround); + EXPORT_SYMBOL_GPL(timer_unstable_counter_workaround); +@@ -423,6 +465,19 @@ static const struct arch_timer_erratum_workaround ool_workarounds[] = { + .read_cntvct_el0 = arm64_1188873_read_cntvct_el0, + }, + #endif ++#ifdef CONFIG_SUN50I_ERRATUM_UNKNOWN1 ++ { ++ .match_type = ate_match_dt, ++ .id = "allwinner,erratum-unknown1", ++ .desc = "Allwinner erratum UNKNOWN1", ++ .read_cntp_tval_el0 = sun50i_a64_read_cntp_tval_el0, ++ .read_cntv_tval_el0 = sun50i_a64_read_cntv_tval_el0, ++ .read_cntpct_el0 = sun50i_a64_read_cntpct_el0, ++ .read_cntvct_el0 = sun50i_a64_read_cntvct_el0, ++ .set_next_event_phys = erratum_set_next_event_tval_phys, ++ .set_next_event_virt = erratum_set_next_event_tval_virt, ++ }, ++#endif + }; + + typedef bool (*ate_match_fn_t)(const struct arch_timer_erratum_workaround *, +diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c +index 7a244b681876..d55c30f6981d 100644 +--- a/drivers/clocksource/exynos_mct.c ++++ b/drivers/clocksource/exynos_mct.c +@@ -388,6 +388,13 @@ static void exynos4_mct_tick_start(unsigned long cycles, + exynos4_mct_write(tmp, mevt->base + MCT_L_TCON_OFFSET); + } + ++static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt) ++{ ++ /* Clear the MCT tick interrupt */ ++ if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1) ++ exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET); ++} ++ + static int exynos4_tick_set_next_event(unsigned long cycles, + struct clock_event_device *evt) + { +@@ -404,6 +411,7 @@ static int set_state_shutdown(struct clock_event_device *evt) + + mevt = container_of(evt, struct mct_clock_event_device, evt); + exynos4_mct_tick_stop(mevt); ++ exynos4_mct_tick_clear(mevt); + return 0; + } + +@@ -420,8 +428,11 @@ static int set_state_periodic(struct clock_event_device *evt) + return 0; + } + +-static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt) ++static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id) + { ++ struct mct_clock_event_device *mevt = dev_id; ++ struct clock_event_device *evt = &mevt->evt; ++ + /* + * This is for supporting oneshot mode. 
+ * Mct would generate interrupt periodically +@@ -430,16 +441,6 @@ static void exynos4_mct_tick_clear(struct mct_clock_event_device *mevt) + if (!clockevent_state_periodic(&mevt->evt)) + exynos4_mct_tick_stop(mevt); + +- /* Clear the MCT tick interrupt */ +- if (readl_relaxed(reg_base + mevt->base + MCT_L_INT_CSTAT_OFFSET) & 1) +- exynos4_mct_write(0x1, mevt->base + MCT_L_INT_CSTAT_OFFSET); +-} +- +-static irqreturn_t exynos4_mct_tick_isr(int irq, void *dev_id) +-{ +- struct mct_clock_event_device *mevt = dev_id; +- struct clock_event_device *evt = &mevt->evt; +- + exynos4_mct_tick_clear(mevt); + + evt->event_handler(evt); +diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c +index 46254e583982..74e0e0c20c46 100644 +--- a/drivers/cpufreq/pxa2xx-cpufreq.c ++++ b/drivers/cpufreq/pxa2xx-cpufreq.c +@@ -143,7 +143,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq) + return ret; + } + +-static void __init pxa_cpufreq_init_voltages(void) ++static void pxa_cpufreq_init_voltages(void) + { + vcc_core = regulator_get(NULL, "vcc_core"); + if (IS_ERR(vcc_core)) { +@@ -159,7 +159,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq) + return 0; + } + +-static void __init pxa_cpufreq_init_voltages(void) { } ++static void pxa_cpufreq_init_voltages(void) { } + #endif + + static void find_freq_tables(struct cpufreq_frequency_table **freq_table, +diff --git a/drivers/cpufreq/qcom-cpufreq-kryo.c b/drivers/cpufreq/qcom-cpufreq-kryo.c +index 2a3675c24032..a472b814058f 100644 +--- a/drivers/cpufreq/qcom-cpufreq-kryo.c ++++ b/drivers/cpufreq/qcom-cpufreq-kryo.c +@@ -75,7 +75,7 @@ static enum _msm8996_version qcom_cpufreq_kryo_get_msm_id(void) + + static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) + { +- struct opp_table *opp_tables[NR_CPUS] = {0}; ++ struct opp_table **opp_tables; + enum _msm8996_version msm8996_version; + struct nvmem_cell *speedbin_nvmem; + struct device_node *np; +@@ -133,6 +133,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) + } + kfree(speedbin); + ++ opp_tables = kcalloc(num_possible_cpus(), sizeof(*opp_tables), GFP_KERNEL); ++ if (!opp_tables) ++ return -ENOMEM; ++ + for_each_possible_cpu(cpu) { + cpu_dev = get_cpu_device(cpu); + if (NULL == cpu_dev) { +@@ -151,8 +155,10 @@ static int qcom_cpufreq_kryo_probe(struct platform_device *pdev) + + cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1, + NULL, 0); +- if (!IS_ERR(cpufreq_dt_pdev)) ++ if (!IS_ERR(cpufreq_dt_pdev)) { ++ platform_set_drvdata(pdev, opp_tables); + return 0; ++ } + + ret = PTR_ERR(cpufreq_dt_pdev); + dev_err(cpu_dev, "Failed to register platform device\n"); +@@ -163,13 +169,23 @@ free_opp: + break; + dev_pm_opp_put_supported_hw(opp_tables[cpu]); + } ++ kfree(opp_tables); + + return ret; + } + + static int qcom_cpufreq_kryo_remove(struct platform_device *pdev) + { ++ struct opp_table **opp_tables = platform_get_drvdata(pdev); ++ unsigned int cpu; ++ + platform_device_unregister(cpufreq_dt_pdev); ++ ++ for_each_possible_cpu(cpu) ++ dev_pm_opp_put_supported_hw(opp_tables[cpu]); ++ ++ kfree(opp_tables); ++ + return 0; + } + +diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c +index 43530254201a..4bb154f6c54c 100644 +--- a/drivers/cpufreq/tegra124-cpufreq.c ++++ b/drivers/cpufreq/tegra124-cpufreq.c +@@ -134,6 +134,8 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev) + + platform_set_drvdata(pdev, priv); + ++ of_node_put(np); ++ + return 0; + + 
out_switch_to_pllx: +diff --git a/drivers/cpuidle/governor.c b/drivers/cpuidle/governor.c +index bb93e5cf6a4a..9fddf828a76f 100644 +--- a/drivers/cpuidle/governor.c ++++ b/drivers/cpuidle/governor.c +@@ -89,6 +89,7 @@ int cpuidle_register_governor(struct cpuidle_governor *gov) + mutex_lock(&cpuidle_lock); + if (__cpuidle_find_governor(gov->name) == NULL) { + ret = 0; ++ list_add_tail(&gov->governor_list, &cpuidle_governors); + if (!cpuidle_curr_governor || + !strncasecmp(param_governor, gov->name, CPUIDLE_NAME_LEN) || + (cpuidle_curr_governor->rating < gov->rating && +diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c +index 80ae69f906fb..1c4f3a046dc5 100644 +--- a/drivers/crypto/caam/caamalg.c ++++ b/drivers/crypto/caam/caamalg.c +@@ -1040,6 +1040,7 @@ static void init_aead_job(struct aead_request *req, + if (unlikely(req->src != req->dst)) { + if (edesc->dst_nents == 1) { + dst_dma = sg_dma_address(req->dst); ++ out_options = 0; + } else { + dst_dma = edesc->sec4_sg_dma + + sec4_sg_index * +diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c +index bb1a2cdf1951..0f11811a3585 100644 +--- a/drivers/crypto/caam/caamhash.c ++++ b/drivers/crypto/caam/caamhash.c +@@ -113,6 +113,7 @@ struct caam_hash_ctx { + struct caam_hash_state { + dma_addr_t buf_dma; + dma_addr_t ctx_dma; ++ int ctx_dma_len; + u8 buf_0[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned; + int buflen_0; + u8 buf_1[CAAM_MAX_HASH_BLOCK_SIZE] ____cacheline_aligned; +@@ -165,6 +166,7 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev, + struct caam_hash_state *state, + int ctx_len) + { ++ state->ctx_dma_len = ctx_len; + state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, + ctx_len, DMA_FROM_DEVICE); + if (dma_mapping_error(jrdev, state->ctx_dma)) { +@@ -178,18 +180,6 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev, + return 0; + } + +-/* Map req->result, and append seq_out_ptr command that points to it */ +-static inline dma_addr_t map_seq_out_ptr_result(u32 *desc, struct device *jrdev, +- u8 *result, int digestsize) +-{ +- dma_addr_t dst_dma; +- +- dst_dma = dma_map_single(jrdev, result, digestsize, DMA_FROM_DEVICE); +- append_seq_out_ptr(desc, dst_dma, digestsize, 0); +- +- return dst_dma; +-} +- + /* Map current buffer in state (if length > 0) and put it in link table */ + static inline int buf_map_to_sec4_sg(struct device *jrdev, + struct sec4_sg_entry *sec4_sg, +@@ -218,6 +208,7 @@ static inline int ctx_map_to_sec4_sg(struct device *jrdev, + struct caam_hash_state *state, int ctx_len, + struct sec4_sg_entry *sec4_sg, u32 flag) + { ++ state->ctx_dma_len = ctx_len; + state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len, flag); + if (dma_mapping_error(jrdev, state->ctx_dma)) { + dev_err(jrdev, "unable to map ctx\n"); +@@ -426,7 +417,6 @@ static int ahash_setkey(struct crypto_ahash *ahash, + + /* + * ahash_edesc - s/w-extended ahash descriptor +- * @dst_dma: physical mapped address of req->result + * @sec4_sg_dma: physical mapped address of h/w link table + * @src_nents: number of segments in input scatterlist + * @sec4_sg_bytes: length of dma mapped sec4_sg space +@@ -434,7 +424,6 @@ static int ahash_setkey(struct crypto_ahash *ahash, + * @sec4_sg: h/w link table + */ + struct ahash_edesc { +- dma_addr_t dst_dma; + dma_addr_t sec4_sg_dma; + int src_nents; + int sec4_sg_bytes; +@@ -450,8 +439,6 @@ static inline void ahash_unmap(struct device *dev, + + if (edesc->src_nents) + dma_unmap_sg(dev, req->src, edesc->src_nents, 
DMA_TO_DEVICE); +- if (edesc->dst_dma) +- dma_unmap_single(dev, edesc->dst_dma, dst_len, DMA_FROM_DEVICE); + + if (edesc->sec4_sg_bytes) + dma_unmap_single(dev, edesc->sec4_sg_dma, +@@ -468,12 +455,10 @@ static inline void ahash_unmap_ctx(struct device *dev, + struct ahash_edesc *edesc, + struct ahash_request *req, int dst_len, u32 flag) + { +- struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); +- struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); + struct caam_hash_state *state = ahash_request_ctx(req); + + if (state->ctx_dma) { +- dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag); ++ dma_unmap_single(dev, state->ctx_dma, state->ctx_dma_len, flag); + state->ctx_dma = 0; + } + ahash_unmap(dev, edesc, req, dst_len); +@@ -486,9 +471,9 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err, + struct ahash_edesc *edesc; + struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); + int digestsize = crypto_ahash_digestsize(ahash); ++ struct caam_hash_state *state = ahash_request_ctx(req); + #ifdef DEBUG + struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); +- struct caam_hash_state *state = ahash_request_ctx(req); + + dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); + #endif +@@ -497,17 +482,14 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err, + if (err) + caam_jr_strstatus(jrdev, err); + +- ahash_unmap(jrdev, edesc, req, digestsize); ++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); ++ memcpy(req->result, state->caam_ctx, digestsize); + kfree(edesc); + + #ifdef DEBUG + print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ", + DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, + ctx->ctx_len, 1); +- if (req->result) +- print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ", +- DUMP_PREFIX_ADDRESS, 16, 4, req->result, +- digestsize, 1); + #endif + + req->base.complete(&req->base, err); +@@ -555,9 +537,9 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err, + struct ahash_edesc *edesc; + struct crypto_ahash *ahash = crypto_ahash_reqtfm(req); + int digestsize = crypto_ahash_digestsize(ahash); ++ struct caam_hash_state *state = ahash_request_ctx(req); + #ifdef DEBUG + struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash); +- struct caam_hash_state *state = ahash_request_ctx(req); + + dev_err(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err); + #endif +@@ -566,17 +548,14 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err, + if (err) + caam_jr_strstatus(jrdev, err); + +- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_TO_DEVICE); ++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); ++ memcpy(req->result, state->caam_ctx, digestsize); + kfree(edesc); + + #ifdef DEBUG + print_hex_dump(KERN_ERR, "ctx@"__stringify(__LINE__)": ", + DUMP_PREFIX_ADDRESS, 16, 4, state->caam_ctx, + ctx->ctx_len, 1); +- if (req->result) +- print_hex_dump(KERN_ERR, "result@"__stringify(__LINE__)": ", +- DUMP_PREFIX_ADDRESS, 16, 4, req->result, +- digestsize, 1); + #endif + + req->base.complete(&req->base, err); +@@ -837,7 +816,7 @@ static int ahash_final_ctx(struct ahash_request *req) + edesc->sec4_sg_bytes = sec4_sg_bytes; + + ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len, +- edesc->sec4_sg, DMA_TO_DEVICE); ++ edesc->sec4_sg, DMA_BIDIRECTIONAL); + if (ret) + goto unmap_ctx; + +@@ -857,14 +836,7 @@ static int ahash_final_ctx(struct ahash_request *req) + + append_seq_in_ptr(desc, edesc->sec4_sg_dma, ctx->ctx_len + buflen, + LDST_SGF); +- +- edesc->dst_dma = map_seq_out_ptr_result(desc, 
jrdev, req->result, +- digestsize); +- if (dma_mapping_error(jrdev, edesc->dst_dma)) { +- dev_err(jrdev, "unable to map dst\n"); +- ret = -ENOMEM; +- goto unmap_ctx; +- } ++ append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0); + + #ifdef DEBUG + print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ", +@@ -877,7 +849,7 @@ static int ahash_final_ctx(struct ahash_request *req) + + return -EINPROGRESS; + unmap_ctx: +- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); ++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); + kfree(edesc); + return ret; + } +@@ -931,7 +903,7 @@ static int ahash_finup_ctx(struct ahash_request *req) + edesc->src_nents = src_nents; + + ret = ctx_map_to_sec4_sg(jrdev, state, ctx->ctx_len, +- edesc->sec4_sg, DMA_TO_DEVICE); ++ edesc->sec4_sg, DMA_BIDIRECTIONAL); + if (ret) + goto unmap_ctx; + +@@ -945,13 +917,7 @@ static int ahash_finup_ctx(struct ahash_request *req) + if (ret) + goto unmap_ctx; + +- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result, +- digestsize); +- if (dma_mapping_error(jrdev, edesc->dst_dma)) { +- dev_err(jrdev, "unable to map dst\n"); +- ret = -ENOMEM; +- goto unmap_ctx; +- } ++ append_seq_out_ptr(desc, state->ctx_dma, digestsize, 0); + + #ifdef DEBUG + print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ", +@@ -964,7 +930,7 @@ static int ahash_finup_ctx(struct ahash_request *req) + + return -EINPROGRESS; + unmap_ctx: +- ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); ++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_BIDIRECTIONAL); + kfree(edesc); + return ret; + } +@@ -1023,10 +989,8 @@ static int ahash_digest(struct ahash_request *req) + + desc = edesc->hw_desc; + +- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result, +- digestsize); +- if (dma_mapping_error(jrdev, edesc->dst_dma)) { +- dev_err(jrdev, "unable to map dst\n"); ++ ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize); ++ if (ret) { + ahash_unmap(jrdev, edesc, req, digestsize); + kfree(edesc); + return -ENOMEM; +@@ -1041,7 +1005,7 @@ static int ahash_digest(struct ahash_request *req) + if (!ret) { + ret = -EINPROGRESS; + } else { +- ahash_unmap(jrdev, edesc, req, digestsize); ++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); + kfree(edesc); + } + +@@ -1083,12 +1047,9 @@ static int ahash_final_no_ctx(struct ahash_request *req) + append_seq_in_ptr(desc, state->buf_dma, buflen, 0); + } + +- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result, +- digestsize); +- if (dma_mapping_error(jrdev, edesc->dst_dma)) { +- dev_err(jrdev, "unable to map dst\n"); ++ ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize); ++ if (ret) + goto unmap; +- } + + #ifdef DEBUG + print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ", +@@ -1099,7 +1060,7 @@ static int ahash_final_no_ctx(struct ahash_request *req) + if (!ret) { + ret = -EINPROGRESS; + } else { +- ahash_unmap(jrdev, edesc, req, digestsize); ++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); + kfree(edesc); + } + +@@ -1298,12 +1259,9 @@ static int ahash_finup_no_ctx(struct ahash_request *req) + goto unmap; + } + +- edesc->dst_dma = map_seq_out_ptr_result(desc, jrdev, req->result, +- digestsize); +- if (dma_mapping_error(jrdev, edesc->dst_dma)) { +- dev_err(jrdev, "unable to map dst\n"); ++ ret = map_seq_out_ptr_ctx(desc, jrdev, state, digestsize); ++ if (ret) + goto unmap; +- } + + #ifdef DEBUG + print_hex_dump(KERN_ERR, "jobdesc@"__stringify(__LINE__)": ", +@@ -1314,7 +1272,7 @@ static 
int ahash_finup_no_ctx(struct ahash_request *req) + if (!ret) { + ret = -EINPROGRESS; + } else { +- ahash_unmap(jrdev, edesc, req, digestsize); ++ ahash_unmap_ctx(jrdev, edesc, req, digestsize, DMA_FROM_DEVICE); + kfree(edesc); + } + +@@ -1446,6 +1404,7 @@ static int ahash_init(struct ahash_request *req) + state->final = ahash_final_no_ctx; + + state->ctx_dma = 0; ++ state->ctx_dma_len = 0; + state->current_buf = 0; + state->buf_dma = 0; + state->buflen_0 = 0; +diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c +index dd948e1df9e5..3bcb6bce666e 100644 +--- a/drivers/crypto/ccree/cc_buffer_mgr.c ++++ b/drivers/crypto/ccree/cc_buffer_mgr.c +@@ -614,10 +614,10 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req) + hw_iv_size, DMA_BIDIRECTIONAL); + } + +- /*In case a pool was set, a table was +- *allocated and should be released +- */ +- if (areq_ctx->mlli_params.curr_pool) { ++ /* Release pool */ ++ if ((areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI || ++ areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) && ++ (areq_ctx->mlli_params.mlli_virt_addr)) { + dev_dbg(dev, "free MLLI buffer: dma=%pad virt=%pK\n", + &areq_ctx->mlli_params.mlli_dma_addr, + areq_ctx->mlli_params.mlli_virt_addr); +diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c +index cc92b031fad1..4ec93079daaf 100644 +--- a/drivers/crypto/ccree/cc_cipher.c ++++ b/drivers/crypto/ccree/cc_cipher.c +@@ -80,6 +80,7 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size) + default: + break; + } ++ break; + case S_DIN_to_DES: + if (size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE) + return 0; +@@ -652,6 +653,8 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err) + unsigned int ivsize = crypto_skcipher_ivsize(sk_tfm); + unsigned int len; + ++ cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst); ++ + switch (ctx_p->cipher_mode) { + case DRV_CIPHER_CBC: + /* +@@ -681,7 +684,6 @@ static void cc_cipher_complete(struct device *dev, void *cc_req, int err) + break; + } + +- cc_unmap_cipher_request(dev, req_ctx, ivsize, src, dst); + kzfree(req_ctx->iv); + + skcipher_request_complete(req, err); +@@ -799,7 +801,8 @@ static int cc_cipher_decrypt(struct skcipher_request *req) + + memset(req_ctx, 0, sizeof(*req_ctx)); + +- if (ctx_p->cipher_mode == DRV_CIPHER_CBC) { ++ if ((ctx_p->cipher_mode == DRV_CIPHER_CBC) && ++ (req->cryptlen >= ivsize)) { + + /* Allocate and save the last IV sized bytes of the source, + * which will be lost in case of in-place decryption. +diff --git a/drivers/crypto/rockchip/rk3288_crypto.c b/drivers/crypto/rockchip/rk3288_crypto.c +index c9d622abd90c..0ce4a65b95f5 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto.c ++++ b/drivers/crypto/rockchip/rk3288_crypto.c +@@ -119,7 +119,7 @@ static int rk_load_data(struct rk_crypto_info *dev, + count = (dev->left_bytes > PAGE_SIZE) ? 
+ PAGE_SIZE : dev->left_bytes; + +- if (!sg_pcopy_to_buffer(dev->first, dev->nents, ++ if (!sg_pcopy_to_buffer(dev->first, dev->src_nents, + dev->addr_vir, count, + dev->total - dev->left_bytes)) { + dev_err(dev->dev, "[%s:%d] pcopy err\n", +diff --git a/drivers/crypto/rockchip/rk3288_crypto.h b/drivers/crypto/rockchip/rk3288_crypto.h +index d5fb4013fb42..54ee5b3ed9db 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto.h ++++ b/drivers/crypto/rockchip/rk3288_crypto.h +@@ -207,7 +207,8 @@ struct rk_crypto_info { + void *addr_vir; + int aligned; + int align_size; +- size_t nents; ++ size_t src_nents; ++ size_t dst_nents; + unsigned int total; + unsigned int count; + dma_addr_t addr_in; +@@ -244,6 +245,7 @@ struct rk_cipher_ctx { + struct rk_crypto_info *dev; + unsigned int keylen; + u32 mode; ++ u8 iv[AES_BLOCK_SIZE]; + }; + + enum alg_type { +diff --git a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c +index 639c15c5364b..23305f22072f 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c ++++ b/drivers/crypto/rockchip/rk3288_crypto_ablkcipher.c +@@ -242,6 +242,17 @@ static void crypto_dma_start(struct rk_crypto_info *dev) + static int rk_set_data_start(struct rk_crypto_info *dev) + { + int err; ++ struct ablkcipher_request *req = ++ ablkcipher_request_cast(dev->async_req); ++ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); ++ struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); ++ u32 ivsize = crypto_ablkcipher_ivsize(tfm); ++ u8 *src_last_blk = page_address(sg_page(dev->sg_src)) + ++ dev->sg_src->offset + dev->sg_src->length - ivsize; ++ ++ /* store the iv that need to be updated in chain mode */ ++ if (ctx->mode & RK_CRYPTO_DEC) ++ memcpy(ctx->iv, src_last_blk, ivsize); + + err = dev->load_data(dev, dev->sg_src, dev->sg_dst); + if (!err) +@@ -260,8 +271,9 @@ static int rk_ablk_start(struct rk_crypto_info *dev) + dev->total = req->nbytes; + dev->sg_src = req->src; + dev->first = req->src; +- dev->nents = sg_nents(req->src); ++ dev->src_nents = sg_nents(req->src); + dev->sg_dst = req->dst; ++ dev->dst_nents = sg_nents(req->dst); + dev->aligned = 1; + + spin_lock_irqsave(&dev->lock, flags); +@@ -285,6 +297,28 @@ static void rk_iv_copyback(struct rk_crypto_info *dev) + memcpy_fromio(req->info, dev->reg + RK_CRYPTO_AES_IV_0, ivsize); + } + ++static void rk_update_iv(struct rk_crypto_info *dev) ++{ ++ struct ablkcipher_request *req = ++ ablkcipher_request_cast(dev->async_req); ++ struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(req); ++ struct rk_cipher_ctx *ctx = crypto_ablkcipher_ctx(tfm); ++ u32 ivsize = crypto_ablkcipher_ivsize(tfm); ++ u8 *new_iv = NULL; ++ ++ if (ctx->mode & RK_CRYPTO_DEC) { ++ new_iv = ctx->iv; ++ } else { ++ new_iv = page_address(sg_page(dev->sg_dst)) + ++ dev->sg_dst->offset + dev->sg_dst->length - ivsize; ++ } ++ ++ if (ivsize == DES_BLOCK_SIZE) ++ memcpy_toio(dev->reg + RK_CRYPTO_TDES_IV_0, new_iv, ivsize); ++ else if (ivsize == AES_BLOCK_SIZE) ++ memcpy_toio(dev->reg + RK_CRYPTO_AES_IV_0, new_iv, ivsize); ++} ++ + /* return: + * true some err was occurred + * fault no err, continue +@@ -297,7 +331,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev) + + dev->unload_data(dev); + if (!dev->aligned) { +- if (!sg_pcopy_from_buffer(req->dst, dev->nents, ++ if (!sg_pcopy_from_buffer(req->dst, dev->dst_nents, + dev->addr_vir, dev->count, + dev->total - dev->left_bytes - + dev->count)) { +@@ -306,6 +340,7 @@ static int rk_ablk_rx(struct rk_crypto_info *dev) + } + } + if 
(dev->left_bytes) { ++ rk_update_iv(dev); + if (dev->aligned) { + if (sg_is_last(dev->sg_src)) { + dev_err(dev->dev, "[%s:%d] Lack of data\n", +diff --git a/drivers/crypto/rockchip/rk3288_crypto_ahash.c b/drivers/crypto/rockchip/rk3288_crypto_ahash.c +index 821a506b9e17..c336ae75e361 100644 +--- a/drivers/crypto/rockchip/rk3288_crypto_ahash.c ++++ b/drivers/crypto/rockchip/rk3288_crypto_ahash.c +@@ -206,7 +206,7 @@ static int rk_ahash_start(struct rk_crypto_info *dev) + dev->sg_dst = NULL; + dev->sg_src = req->src; + dev->first = req->src; +- dev->nents = sg_nents(req->src); ++ dev->src_nents = sg_nents(req->src); + rctx = ahash_request_ctx(req); + rctx->mode = 0; + +diff --git a/drivers/dma/sh/usb-dmac.c b/drivers/dma/sh/usb-dmac.c +index 7f7184c3cf95..59403f6d008a 100644 +--- a/drivers/dma/sh/usb-dmac.c ++++ b/drivers/dma/sh/usb-dmac.c +@@ -694,6 +694,8 @@ static int usb_dmac_runtime_resume(struct device *dev) + #endif /* CONFIG_PM */ + + static const struct dev_pm_ops usb_dmac_pm = { ++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, ++ pm_runtime_force_resume) + SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume, + NULL) + }; +diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c +index 0dc96419efe3..d8a985fc6a5d 100644 +--- a/drivers/gpio/gpio-pca953x.c ++++ b/drivers/gpio/gpio-pca953x.c +@@ -587,7 +587,8 @@ static int pca953x_irq_set_type(struct irq_data *d, unsigned int type) + + static void pca953x_irq_shutdown(struct irq_data *d) + { +- struct pca953x_chip *chip = irq_data_get_irq_chip_data(d); ++ struct gpio_chip *gc = irq_data_get_irq_chip_data(d); ++ struct pca953x_chip *chip = gpiochip_get_data(gc); + u8 mask = 1 << (d->hwirq % BANK_SZ); + + chip->irq_trig_raise[d->hwirq / BANK_SZ] &= ~mask; +diff --git a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c +index 43e4a2be0fa6..57cc11d0e9a5 100644 +--- a/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c ++++ b/drivers/gpu/drm/amd/display/dc/calcs/dcn_calcs.c +@@ -1355,12 +1355,12 @@ void dcn_bw_update_from_pplib(struct dc *dc) + struct dm_pp_clock_levels_with_voltage fclks = {0}, dcfclks = {0}; + bool res; + +- kernel_fpu_begin(); +- + /* TODO: This is not the proper way to obtain fabric_and_dram_bandwidth, should be min(fclk, memclk) */ + res = dm_pp_get_clock_levels_by_type_with_voltage( + ctx, DM_PP_CLOCK_TYPE_FCLK, &fclks); + ++ kernel_fpu_begin(); ++ + if (res) + res = verify_clock_values(&fclks); + +@@ -1379,9 +1379,13 @@ void dcn_bw_update_from_pplib(struct dc *dc) + } else + BREAK_TO_DEBUGGER(); + ++ kernel_fpu_end(); ++ + res = dm_pp_get_clock_levels_by_type_with_voltage( + ctx, DM_PP_CLOCK_TYPE_DCFCLK, &dcfclks); + ++ kernel_fpu_begin(); ++ + if (res) + res = verify_clock_values(&dcfclks); + +diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c +index c8f5c00dd1e7..86e3fb27c125 100644 +--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c ++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c +@@ -3491,14 +3491,14 @@ static int smu7_get_gpu_power(struct pp_hwmgr *hwmgr, u32 *query) + + smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogStart); + cgs_write_ind_register(hwmgr->device, CGS_IND_REG__SMC, +- ixSMU_PM_STATUS_94, 0); ++ ixSMU_PM_STATUS_95, 0); + + for (i = 0; i < 10; i++) { +- mdelay(1); ++ mdelay(500); + smum_send_msg_to_smc(hwmgr, PPSMC_MSG_PmStatusLogSample); + tmp = cgs_read_ind_register(hwmgr->device, + CGS_IND_REG__SMC, +- ixSMU_PM_STATUS_94); ++ 
ixSMU_PM_STATUS_95); + if (tmp != 0) + break; + } +diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c +index d73703a695e8..70fc8e356b18 100644 +--- a/drivers/gpu/drm/drm_fb_helper.c ++++ b/drivers/gpu/drm/drm_fb_helper.c +@@ -3170,9 +3170,7 @@ static void drm_fbdev_client_unregister(struct drm_client_dev *client) + + static int drm_fbdev_client_restore(struct drm_client_dev *client) + { +- struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client); +- +- drm_fb_helper_restore_fbdev_mode_unlocked(fb_helper); ++ drm_fb_helper_lastclose(client->dev); + + return 0; + } +diff --git a/drivers/gpu/drm/radeon/evergreen_cs.c b/drivers/gpu/drm/radeon/evergreen_cs.c +index f471537c852f..1e14c6921454 100644 +--- a/drivers/gpu/drm/radeon/evergreen_cs.c ++++ b/drivers/gpu/drm/radeon/evergreen_cs.c +@@ -1299,6 +1299,7 @@ static int evergreen_cs_handle_reg(struct radeon_cs_parser *p, u32 reg, u32 idx) + return -EINVAL; + } + ib[idx] += (u32)((reloc->gpu_offset >> 8) & 0xffffffff); ++ break; + case CB_TARGET_MASK: + track->cb_target_mask = radeon_get_ib_value(p, idx); + track->cb_dirty = true; +diff --git a/drivers/hwtracing/intel_th/gth.c b/drivers/hwtracing/intel_th/gth.c +index 8426b7970c14..cc287cf6eb29 100644 +--- a/drivers/hwtracing/intel_th/gth.c ++++ b/drivers/hwtracing/intel_th/gth.c +@@ -607,6 +607,7 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev, + { + struct gth_device *gth = dev_get_drvdata(&thdev->dev); + int port = othdev->output.port; ++ int master; + + if (thdev->host_mode) + return; +@@ -615,6 +616,9 @@ static void intel_th_gth_unassign(struct intel_th_device *thdev, + othdev->output.port = -1; + othdev->output.active = false; + gth->output[port].output = NULL; ++ for (master = 0; master < TH_CONFIGURABLE_MASTERS; master++) ++ if (gth->master[master] == port) ++ gth->master[master] = -1; + spin_unlock(>h->gth_lock); + } + +diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c +index 93ce3aa740a9..c7ba8acfd4d5 100644 +--- a/drivers/hwtracing/stm/core.c ++++ b/drivers/hwtracing/stm/core.c +@@ -244,6 +244,9 @@ static int find_free_channels(unsigned long *bitmap, unsigned int start, + ; + if (i == width) + return pos; ++ ++ /* step over [pos..pos+i) to continue search */ ++ pos += i; + } + + return -1; +@@ -732,7 +735,7 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg) + struct stm_device *stm = stmf->stm; + struct stp_policy_id *id; + char *ids[] = { NULL, NULL }; +- int ret = -EINVAL; ++ int ret = -EINVAL, wlimit = 1; + u32 size; + + if (stmf->output.nr_chans) +@@ -760,8 +763,10 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg) + if (id->__reserved_0 || id->__reserved_1) + goto err_free; + +- if (id->width < 1 || +- id->width > PAGE_SIZE / stm->data->sw_mmiosz) ++ if (stm->data->sw_mmiosz) ++ wlimit = PAGE_SIZE / stm->data->sw_mmiosz; ++ ++ if (id->width < 1 || id->width > wlimit) + goto err_free; + + ids[0] = id->id; +diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c +index c77adbbea0c7..e85dc8583896 100644 +--- a/drivers/i2c/busses/i2c-tegra.c ++++ b/drivers/i2c/busses/i2c-tegra.c +@@ -118,6 +118,9 @@ + #define I2C_MST_FIFO_STATUS_TX_MASK 0xff0000 + #define I2C_MST_FIFO_STATUS_TX_SHIFT 16 + ++/* Packet header size in bytes */ ++#define I2C_PACKET_HEADER_SIZE 12 ++ + /* + * msg_end_type: The bus control which need to be send at end of transfer. + * @MSG_END_STOP: Send stop pulse at end of transfer. 
+@@ -836,12 +839,13 @@ static const struct i2c_algorithm tegra_i2c_algo = { + /* payload size is only 12 bit */ + static const struct i2c_adapter_quirks tegra_i2c_quirks = { + .flags = I2C_AQ_NO_ZERO_LEN, +- .max_read_len = 4096, +- .max_write_len = 4096, ++ .max_read_len = SZ_4K, ++ .max_write_len = SZ_4K - I2C_PACKET_HEADER_SIZE, + }; + + static const struct i2c_adapter_quirks tegra194_i2c_quirks = { + .flags = I2C_AQ_NO_ZERO_LEN, ++ .max_write_len = SZ_64K - I2C_PACKET_HEADER_SIZE, + }; + + static const struct tegra_i2c_hw_feature tegra20_i2c_hw = { +diff --git a/drivers/iio/adc/exynos_adc.c b/drivers/iio/adc/exynos_adc.c +index fa2d2b5767f3..1ca2c4d39f87 100644 +--- a/drivers/iio/adc/exynos_adc.c ++++ b/drivers/iio/adc/exynos_adc.c +@@ -115,6 +115,7 @@ + #define MAX_ADC_V2_CHANNELS 10 + #define MAX_ADC_V1_CHANNELS 8 + #define MAX_EXYNOS3250_ADC_CHANNELS 2 ++#define MAX_EXYNOS4212_ADC_CHANNELS 4 + #define MAX_S5PV210_ADC_CHANNELS 10 + + /* Bit definitions common for ADC_V1 and ADC_V2 */ +@@ -271,6 +272,19 @@ static void exynos_adc_v1_start_conv(struct exynos_adc *info, + writel(con1 | ADC_CON_EN_START, ADC_V1_CON(info->regs)); + } + ++/* Exynos4212 and 4412 is like ADCv1 but with four channels only */ ++static const struct exynos_adc_data exynos4212_adc_data = { ++ .num_channels = MAX_EXYNOS4212_ADC_CHANNELS, ++ .mask = ADC_DATX_MASK, /* 12 bit ADC resolution */ ++ .needs_adc_phy = true, ++ .phy_offset = EXYNOS_ADCV1_PHY_OFFSET, ++ ++ .init_hw = exynos_adc_v1_init_hw, ++ .exit_hw = exynos_adc_v1_exit_hw, ++ .clear_irq = exynos_adc_v1_clear_irq, ++ .start_conv = exynos_adc_v1_start_conv, ++}; ++ + static const struct exynos_adc_data exynos_adc_v1_data = { + .num_channels = MAX_ADC_V1_CHANNELS, + .mask = ADC_DATX_MASK, /* 12 bit ADC resolution */ +@@ -492,6 +506,9 @@ static const struct of_device_id exynos_adc_match[] = { + }, { + .compatible = "samsung,s5pv210-adc", + .data = &exynos_adc_s5pv210_data, ++ }, { ++ .compatible = "samsung,exynos4212-adc", ++ .data = &exynos4212_adc_data, + }, { + .compatible = "samsung,exynos-adc-v1", + .data = &exynos_adc_v1_data, +@@ -929,7 +946,7 @@ static int exynos_adc_remove(struct platform_device *pdev) + struct iio_dev *indio_dev = platform_get_drvdata(pdev); + struct exynos_adc *info = iio_priv(indio_dev); + +- if (IS_REACHABLE(CONFIG_INPUT)) { ++ if (IS_REACHABLE(CONFIG_INPUT) && info->input) { + free_irq(info->tsirq, info); + input_unregister_device(info->input); + } +diff --git a/drivers/infiniband/hw/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h +index 6db2276f5c13..15ec3e1feb09 100644 +--- a/drivers/infiniband/hw/hfi1/hfi.h ++++ b/drivers/infiniband/hw/hfi1/hfi.h +@@ -1435,7 +1435,7 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd, + struct hfi1_devdata *dd, u8 hw_pidx, u8 port); + void hfi1_free_ctxtdata(struct hfi1_devdata *dd, struct hfi1_ctxtdata *rcd); + int hfi1_rcd_put(struct hfi1_ctxtdata *rcd); +-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd); ++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd); + struct hfi1_ctxtdata *hfi1_rcd_get_by_index_safe(struct hfi1_devdata *dd, + u16 ctxt); + struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt); +diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c +index 7835eb52e7c5..c532ceb0bb9a 100644 +--- a/drivers/infiniband/hw/hfi1/init.c ++++ b/drivers/infiniband/hw/hfi1/init.c +@@ -215,12 +215,12 @@ static void hfi1_rcd_free(struct kref *kref) + struct hfi1_ctxtdata *rcd = + container_of(kref, struct hfi1_ctxtdata, kref); + +- 
hfi1_free_ctxtdata(rcd->dd, rcd); +- + spin_lock_irqsave(&rcd->dd->uctxt_lock, flags); + rcd->dd->rcd[rcd->ctxt] = NULL; + spin_unlock_irqrestore(&rcd->dd->uctxt_lock, flags); + ++ hfi1_free_ctxtdata(rcd->dd, rcd); ++ + kfree(rcd); + } + +@@ -243,10 +243,13 @@ int hfi1_rcd_put(struct hfi1_ctxtdata *rcd) + * @rcd: pointer to an initialized rcd data structure + * + * Use this to get a reference after the init. ++ * ++ * Return : reflect kref_get_unless_zero(), which returns non-zero on ++ * increment, otherwise 0. + */ +-void hfi1_rcd_get(struct hfi1_ctxtdata *rcd) ++int hfi1_rcd_get(struct hfi1_ctxtdata *rcd) + { +- kref_get(&rcd->kref); ++ return kref_get_unless_zero(&rcd->kref); + } + + /** +@@ -326,7 +329,8 @@ struct hfi1_ctxtdata *hfi1_rcd_get_by_index(struct hfi1_devdata *dd, u16 ctxt) + spin_lock_irqsave(&dd->uctxt_lock, flags); + if (dd->rcd[ctxt]) { + rcd = dd->rcd[ctxt]; +- hfi1_rcd_get(rcd); ++ if (!hfi1_rcd_get(rcd)) ++ rcd = NULL; + } + spin_unlock_irqrestore(&dd->uctxt_lock, flags); + +diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c +index c6cc3e4ab71d..c45b8359b389 100644 +--- a/drivers/infiniband/sw/rdmavt/qp.c ++++ b/drivers/infiniband/sw/rdmavt/qp.c +@@ -2785,6 +2785,18 @@ again: + } + EXPORT_SYMBOL(rvt_copy_sge); + ++static enum ib_wc_status loopback_qp_drop(struct rvt_ibport *rvp, ++ struct rvt_qp *sqp) ++{ ++ rvp->n_pkt_drops++; ++ /* ++ * For RC, the requester would timeout and retry so ++ * shortcut the timeouts and just signal too many retries. ++ */ ++ return sqp->ibqp.qp_type == IB_QPT_RC ? ++ IB_WC_RETRY_EXC_ERR : IB_WC_SUCCESS; ++} ++ + /** + * ruc_loopback - handle UC and RC loopback requests + * @sqp: the sending QP +@@ -2857,17 +2869,14 @@ again: + } + spin_unlock_irqrestore(&sqp->s_lock, flags); + +- if (!qp || !(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) || ++ if (!qp) { ++ send_status = loopback_qp_drop(rvp, sqp); ++ goto serr_no_r_lock; ++ } ++ spin_lock_irqsave(&qp->r_lock, flags); ++ if (!(ib_rvt_state_ops[qp->state] & RVT_PROCESS_RECV_OK) || + qp->ibqp.qp_type != sqp->ibqp.qp_type) { +- rvp->n_pkt_drops++; +- /* +- * For RC, the requester would timeout and retry so +- * shortcut the timeouts and just signal too many retries. 
+- */ +- if (sqp->ibqp.qp_type == IB_QPT_RC) +- send_status = IB_WC_RETRY_EXC_ERR; +- else +- send_status = IB_WC_SUCCESS; ++ send_status = loopback_qp_drop(rvp, sqp); + goto serr; + } + +@@ -2893,18 +2902,8 @@ again: + goto send_comp; + + case IB_WR_SEND_WITH_INV: +- if (!rvt_invalidate_rkey(qp, wqe->wr.ex.invalidate_rkey)) { +- wc.wc_flags = IB_WC_WITH_INVALIDATE; +- wc.ex.invalidate_rkey = wqe->wr.ex.invalidate_rkey; +- } +- goto send; +- + case IB_WR_SEND_WITH_IMM: +- wc.wc_flags = IB_WC_WITH_IMM; +- wc.ex.imm_data = wqe->wr.ex.imm_data; +- /* FALLTHROUGH */ + case IB_WR_SEND: +-send: + ret = rvt_get_rwqe(qp, false); + if (ret < 0) + goto op_err; +@@ -2912,6 +2911,22 @@ send: + goto rnr_nak; + if (wqe->length > qp->r_len) + goto inv_err; ++ switch (wqe->wr.opcode) { ++ case IB_WR_SEND_WITH_INV: ++ if (!rvt_invalidate_rkey(qp, ++ wqe->wr.ex.invalidate_rkey)) { ++ wc.wc_flags = IB_WC_WITH_INVALIDATE; ++ wc.ex.invalidate_rkey = ++ wqe->wr.ex.invalidate_rkey; ++ } ++ break; ++ case IB_WR_SEND_WITH_IMM: ++ wc.wc_flags = IB_WC_WITH_IMM; ++ wc.ex.imm_data = wqe->wr.ex.imm_data; ++ break; ++ default: ++ break; ++ } + break; + + case IB_WR_RDMA_WRITE_WITH_IMM: +@@ -3041,6 +3056,7 @@ do_write: + wqe->wr.send_flags & IB_SEND_SOLICITED); + + send_comp: ++ spin_unlock_irqrestore(&qp->r_lock, flags); + spin_lock_irqsave(&sqp->s_lock, flags); + rvp->n_loop_pkts++; + flush_send: +@@ -3067,6 +3083,7 @@ rnr_nak: + } + if (sqp->s_rnr_retry_cnt < 7) + sqp->s_rnr_retry--; ++ spin_unlock_irqrestore(&qp->r_lock, flags); + spin_lock_irqsave(&sqp->s_lock, flags); + if (!(ib_rvt_state_ops[sqp->state] & RVT_PROCESS_RECV_OK)) + goto clr_busy; +@@ -3095,6 +3112,8 @@ err: + rvt_rc_error(qp, wc.status); + + serr: ++ spin_unlock_irqrestore(&qp->r_lock, flags); ++serr_no_r_lock: + spin_lock_irqsave(&sqp->s_lock, flags); + rvt_send_complete(sqp, wqe, send_status); + if (sqp->ibqp.qp_type == IB_QPT_RC) { +diff --git a/drivers/irqchip/irq-brcmstb-l2.c b/drivers/irqchip/irq-brcmstb-l2.c +index 0e65f609352e..83364fedbf0a 100644 +--- a/drivers/irqchip/irq-brcmstb-l2.c ++++ b/drivers/irqchip/irq-brcmstb-l2.c +@@ -129,8 +129,9 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d) + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); + struct irq_chip_type *ct = irq_data_get_chip_type(d); + struct brcmstb_l2_intc_data *b = gc->private; ++ unsigned long flags; + +- irq_gc_lock(gc); ++ irq_gc_lock_irqsave(gc, flags); + /* Save the current mask */ + b->saved_mask = irq_reg_readl(gc, ct->regs.mask); + +@@ -139,7 +140,7 @@ static void brcmstb_l2_intc_suspend(struct irq_data *d) + irq_reg_writel(gc, ~gc->wake_active, ct->regs.disable); + irq_reg_writel(gc, gc->wake_active, ct->regs.enable); + } +- irq_gc_unlock(gc); ++ irq_gc_unlock_irqrestore(gc, flags); + } + + static void brcmstb_l2_intc_resume(struct irq_data *d) +@@ -147,8 +148,9 @@ static void brcmstb_l2_intc_resume(struct irq_data *d) + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); + struct irq_chip_type *ct = irq_data_get_chip_type(d); + struct brcmstb_l2_intc_data *b = gc->private; ++ unsigned long flags; + +- irq_gc_lock(gc); ++ irq_gc_lock_irqsave(gc, flags); + if (ct->chip.irq_ack) { + /* Clear unmasked non-wakeup interrupts */ + irq_reg_writel(gc, ~b->saved_mask & ~gc->wake_active, +@@ -158,7 +160,7 @@ static void brcmstb_l2_intc_resume(struct irq_data *d) + /* Restore the saved mask */ + irq_reg_writel(gc, b->saved_mask, ct->regs.disable); + irq_reg_writel(gc, ~b->saved_mask, ct->regs.enable); +- irq_gc_unlock(gc); ++ 
irq_gc_unlock_irqrestore(gc, flags); + } + + static int __init brcmstb_l2_intc_of_init(struct device_node *np, +diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c +index c3aba3fc818d..f867d41b0aa1 100644 +--- a/drivers/irqchip/irq-gic-v3-its.c ++++ b/drivers/irqchip/irq-gic-v3-its.c +@@ -1955,6 +1955,8 @@ static int its_alloc_tables(struct its_node *its) + indirect = its_parse_indirect_baser(its, baser, + psz, &order, + its->device_ids); ++ break; ++ + case GITS_BASER_TYPE_VCPU: + indirect = its_parse_indirect_baser(its, baser, + psz, &order, +diff --git a/drivers/md/bcache/extents.c b/drivers/md/bcache/extents.c +index 956004366699..886710043025 100644 +--- a/drivers/md/bcache/extents.c ++++ b/drivers/md/bcache/extents.c +@@ -538,6 +538,7 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k) + { + struct btree *b = container_of(bk, struct btree, keys); + unsigned int i, stale; ++ char buf[80]; + + if (!KEY_PTRS(k) || + bch_extent_invalid(bk, k)) +@@ -547,19 +548,19 @@ static bool bch_extent_bad(struct btree_keys *bk, const struct bkey *k) + if (!ptr_available(b->c, k, i)) + return true; + +- if (!expensive_debug_checks(b->c) && KEY_DIRTY(k)) +- return false; +- + for (i = 0; i < KEY_PTRS(k); i++) { + stale = ptr_stale(b->c, k, i); + ++ if (stale && KEY_DIRTY(k)) { ++ bch_extent_to_text(buf, sizeof(buf), k); ++ pr_info("stale dirty pointer, stale %u, key: %s", ++ stale, buf); ++ } ++ + btree_bug_on(stale > BUCKET_GC_GEN_MAX, b, + "key too stale: %i, need_gc %u", + stale, b->c->need_gc); + +- btree_bug_on(stale && KEY_DIRTY(k) && KEY_SIZE(k), +- b, "stale dirty pointer"); +- + if (stale) + return true; + +diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c +index 15070412a32e..f101bfe8657a 100644 +--- a/drivers/md/bcache/request.c ++++ b/drivers/md/bcache/request.c +@@ -392,10 +392,11 @@ static bool check_should_bypass(struct cached_dev *dc, struct bio *bio) + + /* + * Flag for bypass if the IO is for read-ahead or background, +- * unless the read-ahead request is for metadata (eg, for gfs2). ++ * unless the read-ahead request is for metadata ++ * (eg, for gfs2 or xfs). + */ + if (bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND) && +- !(bio->bi_opf & REQ_PRIO)) ++ !(bio->bi_opf & (REQ_META|REQ_PRIO))) + goto skip; + + if (bio->bi_iter.bi_sector & (c->sb.block_size - 1) || +@@ -877,7 +878,7 @@ static int cached_dev_cache_miss(struct btree *b, struct search *s, + } + + if (!(bio->bi_opf & REQ_RAHEAD) && +- !(bio->bi_opf & REQ_PRIO) && ++ !(bio->bi_opf & (REQ_META|REQ_PRIO)) && + s->iop.c->gc_stats.in_use < CUTOFF_CACHE_READA) + reada = min_t(sector_t, dc->readahead >> 9, + get_capacity(bio->bi_disk) - bio_end_sector(bio)); +diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h +index 6a743d3bb338..4e4c6810dc3c 100644 +--- a/drivers/md/bcache/writeback.h ++++ b/drivers/md/bcache/writeback.h +@@ -71,6 +71,9 @@ static inline bool should_writeback(struct cached_dev *dc, struct bio *bio, + in_use > bch_cutoff_writeback_sync) + return false; + ++ if (bio_op(bio) == REQ_OP_DISCARD) ++ return false; ++ + if (dc->partial_stripes_expensive && + bcache_dev_stripe_dirty(dc, bio->bi_iter.bi_sector, + bio_sectors(bio))) +diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c +index 457200ca6287..2e823252d797 100644 +--- a/drivers/md/dm-integrity.c ++++ b/drivers/md/dm-integrity.c +@@ -1368,8 +1368,8 @@ again: + checksums_ptr - checksums, !dio->write ? 
+ 			if (unlikely(r)) {
+ 				if (r > 0) {
+-					DMERR("Checksum failed at sector 0x%llx",
+-					      (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
++					DMERR_LIMIT("Checksum failed at sector 0x%llx",
++						    (unsigned long long)(sector - ((r + ic->tag_size - 1) / ic->tag_size)));
+ 					r = -EILSEQ;
+ 					atomic64_inc(&ic->number_of_mismatches);
+ 				}
+@@ -1561,8 +1561,8 @@ retry_kmap:
+ 
+ 				integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
+ 				if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
+-					DMERR("Checksum failed when reading from journal, at sector 0x%llx",
+-					      (unsigned long long)logical_sector);
++					DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx",
++						    (unsigned long long)logical_sector);
+ 				}
+ 			}
+ #endif
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index ecef42bfe19d..3b6880dd648d 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3939,6 +3939,8 @@ static int raid10_run(struct mddev *mddev)
+ 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ 							"reshape");
++		if (!mddev->sync_thread)
++			goto out_free_conf;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index cecea901ab8c..5b68f2d0da60 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7402,6 +7402,8 @@ static int raid5_run(struct mddev *mddev)
+ 		set_bit(MD_RECOVERY_RUNNING, &mddev->recovery);
+ 		mddev->sync_thread = md_register_thread(md_do_sync, mddev,
+ 							"reshape");
++		if (!mddev->sync_thread)
++			goto abort;
+ 	}
+ 
+ 	/* Ok, everything is just fine now */
+diff --git a/drivers/media/dvb-frontends/lgdt330x.c b/drivers/media/dvb-frontends/lgdt330x.c
+index 96807e134886..8abb1a510a81 100644
+--- a/drivers/media/dvb-frontends/lgdt330x.c
++++ b/drivers/media/dvb-frontends/lgdt330x.c
+@@ -783,7 +783,7 @@ static int lgdt3303_read_status(struct dvb_frontend *fe,
+ 
+ 		if ((buf[0] & 0x02) == 0x00)
+ 			*status |= FE_HAS_SYNC;
+-		if ((buf[0] & 0xfd) == 0x01)
++		if ((buf[0] & 0x01) == 0x01)
+ 			*status |= FE_HAS_VITERBI | FE_HAS_LOCK;
+ 		break;
+ 	default:
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.c b/drivers/media/i2c/cx25840/cx25840-core.c
+index b168bf3635b6..8b0b8b5aa531 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.c
++++ b/drivers/media/i2c/cx25840/cx25840-core.c
+@@ -5216,8 +5216,9 @@ static int cx25840_probe(struct i2c_client *client,
+ 	 * those extra inputs. So, let's add it only when needed.
+ 	 */
+ 	state->pads[CX25840_PAD_INPUT].flags = MEDIA_PAD_FL_SINK;
++	state->pads[CX25840_PAD_INPUT].sig_type = PAD_SIGNAL_ANALOG;
+ 	state->pads[CX25840_PAD_VID_OUT].flags = MEDIA_PAD_FL_SOURCE;
+-	state->pads[CX25840_PAD_VBI_OUT].flags = MEDIA_PAD_FL_SOURCE;
++	state->pads[CX25840_PAD_VID_OUT].sig_type = PAD_SIGNAL_DV;
+ 	sd->entity.function = MEDIA_ENT_F_ATV_DECODER;
+ 
+ 	ret = media_entity_pads_init(&sd->entity, ARRAY_SIZE(state->pads),
+diff --git a/drivers/media/i2c/cx25840/cx25840-core.h b/drivers/media/i2c/cx25840/cx25840-core.h
+index c323b1af1f83..9efefa15d090 100644
+--- a/drivers/media/i2c/cx25840/cx25840-core.h
++++ b/drivers/media/i2c/cx25840/cx25840-core.h
+@@ -40,7 +40,6 @@ enum cx25840_model {
+ enum cx25840_media_pads {
+ 	CX25840_PAD_INPUT,
+ 	CX25840_PAD_VID_OUT,
+-	CX25840_PAD_VBI_OUT,
+ 
+ 	CX25840_NUM_PADS
+ };
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index bef3f3aae0ed..9f8fc1ad9b1a 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1893,7 +1893,7 @@ static void ov5640_reset(struct ov5640_dev *sensor)
+ 	usleep_range(1000, 2000);
+ 
+ 	gpiod_set_value_cansleep(sensor->reset_gpio, 0);
+-	usleep_range(5000, 10000);
++	usleep_range(20000, 25000);
+ }
+ 
+ static int ov5640_set_power_on(struct ov5640_dev *sensor)
+diff --git a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+index 6950585edb5a..d16f54cdc3b0 100644
+--- a/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
++++ b/drivers/media/platform/sunxi/sun6i-csi/sun6i_csi.c
+@@ -793,7 +793,7 @@ static const struct regmap_config sun6i_csi_regmap_config = {
+ 	.reg_bits = 32,
+ 	.reg_stride = 4,
+ 	.val_bits = 32,
+-	.max_register = 0x1000,
++	.max_register = 0x9c,
+ };
+ 
+ static int sun6i_csi_resource_request(struct sun6i_csi_dev *sdev,
+diff --git a/drivers/media/platform/vimc/Makefile b/drivers/media/platform/vimc/Makefile
+index 4b2e3de7856e..c4fc8e7d365a 100644
+--- a/drivers/media/platform/vimc/Makefile
++++ b/drivers/media/platform/vimc/Makefile
+@@ -5,6 +5,7 @@ vimc_common-objs := vimc-common.o
+ vimc_debayer-objs := vimc-debayer.o
+ vimc_scaler-objs := vimc-scaler.o
+ vimc_sensor-objs := vimc-sensor.o
++vimc_streamer-objs := vimc-streamer.o
+ 
+ obj-$(CONFIG_VIDEO_VIMC) += vimc.o vimc_capture.o vimc_common.o vimc-debayer.o \
+-			vimc_scaler.o vimc_sensor.o
++			vimc_scaler.o vimc_sensor.o vimc_streamer.o
+diff --git a/drivers/media/platform/vimc/vimc-capture.c b/drivers/media/platform/vimc/vimc-capture.c
+index 3f7e9ed56633..80d7515ec420 100644
+--- a/drivers/media/platform/vimc/vimc-capture.c
++++ b/drivers/media/platform/vimc/vimc-capture.c
+@@ -24,6 +24,7 @@
+ #include
+ 
+ #include "vimc-common.h"
++#include "vimc-streamer.h"
+ 
+ #define VIMC_CAP_DRV_NAME "vimc-capture"
+ 
+@@ -44,7 +45,7 @@ struct vimc_cap_device {
+ 	spinlock_t qlock;
+ 	struct mutex lock;
+ 	u32 sequence;
+-	struct media_pipeline pipe;
++	struct vimc_stream stream;
+ };
+ 
+ static const struct v4l2_pix_format fmt_default = {
+@@ -248,14 +249,13 @@ static int vimc_cap_start_streaming(struct vb2_queue *vq, unsigned int count)
+ 	vcap->sequence = 0;
+ 
+ 	/* Start the media pipeline */
+-	ret = media_pipeline_start(entity, &vcap->pipe);
++	ret = media_pipeline_start(entity, &vcap->stream.pipe);
+ 	if (ret) {
+ 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+ 		return ret;
+ 	}
+ 
+-	/* Enable streaming from the pipe */
+-	ret = vimc_pipeline_s_stream(&vcap->vdev.entity, 1);
++	ret = vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 1);
+ 	if (ret) {
+ 		media_pipeline_stop(entity);
+ 		vimc_cap_return_all_buffers(vcap, VB2_BUF_STATE_QUEUED);
+@@ -273,8 +273,7 @@ static void vimc_cap_stop_streaming(struct vb2_queue *vq)
+ {
+ 	struct vimc_cap_device *vcap = vb2_get_drv_priv(vq);
+ 
+-	/* Disable streaming from the pipe */
+-	vimc_pipeline_s_stream(&vcap->vdev.entity, 0);
++	vimc_streamer_s_stream(&vcap->stream, &vcap->ved, 0);
+ 
+ 	/* Stop the media pipeline */
+ 	media_pipeline_stop(&vcap->vdev.entity);
+@@ -355,8 +354,8 @@ static void vimc_cap_comp_unbind(struct device *comp, struct device *master,
+ 	kfree(vcap);
+ }
+ 
+-static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink, const void *frame)
++static void *vimc_cap_process_frame(struct vimc_ent_device *ved,
++				    const void *frame)
+ {
+ 	struct vimc_cap_device *vcap = container_of(ved, struct vimc_cap_device,
+ 						    ved);
+@@ -370,7 +369,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ 					typeof(*vimc_buf), list);
+ 	if (!vimc_buf) {
+ 		spin_unlock(&vcap->qlock);
+-		return;
++		return ERR_PTR(-EAGAIN);
+ 	}
+ 
+ 	/* Remove this entry from the list */
+@@ -391,6 +390,7 @@ static void vimc_cap_process_frame(struct vimc_ent_device *ved,
+ 	vb2_set_plane_payload(&vimc_buf->vb2.vb2_buf, 0,
+ 			      vcap->format.sizeimage);
+ 	vb2_buffer_done(&vimc_buf->vb2.vb2_buf, VB2_BUF_STATE_DONE);
++	return NULL;
+ }
+ 
+ static int vimc_cap_comp_bind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-common.c b/drivers/media/platform/vimc/vimc-common.c
+index 867e24dbd6b5..c1a74bb2df58 100644
+--- a/drivers/media/platform/vimc/vimc-common.c
++++ b/drivers/media/platform/vimc/vimc-common.c
+@@ -207,41 +207,6 @@ const struct vimc_pix_map *vimc_pix_map_by_pixelformat(u32 pixelformat)
+ }
+ EXPORT_SYMBOL_GPL(vimc_pix_map_by_pixelformat);
+ 
+-int vimc_propagate_frame(struct media_pad *src, const void *frame)
+-{
+-	struct media_link *link;
+-
+-	if (!(src->flags & MEDIA_PAD_FL_SOURCE))
+-		return -EINVAL;
+-
+-	/* Send this frame to all sink pads that are direct linked */
+-	list_for_each_entry(link, &src->entity->links, list) {
+-		if (link->source == src &&
+-		    (link->flags & MEDIA_LNK_FL_ENABLED)) {
+-			struct vimc_ent_device *ved = NULL;
+-			struct media_entity *entity = link->sink->entity;
+-
+-			if (is_media_entity_v4l2_subdev(entity)) {
+-				struct v4l2_subdev *sd =
+-					container_of(entity, struct v4l2_subdev,
+-						     entity);
+-				ved = v4l2_get_subdevdata(sd);
+-			} else if (is_media_entity_v4l2_video_device(entity)) {
+-				struct video_device *vdev =
+-					container_of(entity,
+-						     struct video_device,
+-						     entity);
+-				ved = video_get_drvdata(vdev);
+-			}
+-			if (ved && ved->process_frame)
+-				ved->process_frame(ved, link->sink, frame);
+-		}
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(vimc_propagate_frame);
+-
+ /* Helper function to allocate and initialize pads */
+ struct media_pad *vimc_pads_init(u16 num_pads, const unsigned long *pads_flag)
+ {
+diff --git a/drivers/media/platform/vimc/vimc-common.h b/drivers/media/platform/vimc/vimc-common.h
+index 2e9981b18166..6ed969d9efbb 100644
+--- a/drivers/media/platform/vimc/vimc-common.h
++++ b/drivers/media/platform/vimc/vimc-common.h
+@@ -113,23 +113,12 @@ struct vimc_pix_map {
+ struct vimc_ent_device {
+ 	struct media_entity *ent;
+ 	struct media_pad *pads;
+-	void (*process_frame)(struct vimc_ent_device *ved,
+-			      struct media_pad *sink, const void *frame);
++	void * (*process_frame)(struct vimc_ent_device *ved,
++				const void *frame);
+ 	void (*vdev_get_format)(struct vimc_ent_device *ved,
+ 			      struct v4l2_pix_format *fmt);
+ };
+ 
+-/**
+- * vimc_propagate_frame - propagate a frame through the topology
+- *
+- * @src: the source pad where the frame is being originated
+- * @frame: the frame to be propagated
+- *
+- * This function will call the process_frame callback from the vimc_ent_device
+- * struct of the nodes directly connected to the @src pad
+- */
+-int vimc_propagate_frame(struct media_pad *src, const void *frame);
+-
+ /**
+  * vimc_pads_init - initialize pads
+  *
+diff --git a/drivers/media/platform/vimc/vimc-debayer.c b/drivers/media/platform/vimc/vimc-debayer.c
+index 77887f66f323..7d77c63b99d2 100644
+--- a/drivers/media/platform/vimc/vimc-debayer.c
++++ b/drivers/media/platform/vimc/vimc-debayer.c
+@@ -321,7 +321,6 @@ static void vimc_deb_set_rgb_mbus_fmt_rgb888_1x24(struct vimc_deb_device *vdeb,
+ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_deb_device *vdeb = v4l2_get_subdevdata(sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -351,22 +350,10 @@ static int vimc_deb_s_stream(struct v4l2_subdev *sd, int enable)
+ 		if (!vdeb->src_frame)
+ 			return -ENOMEM;
+ 
+-		/* Turn the stream on in the subdevices directly connected */
+-		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 1);
+-		if (ret) {
+-			vfree(vdeb->src_frame);
+-			vdeb->src_frame = NULL;
+-			return ret;
+-		}
+ 	} else {
+ 		if (!vdeb->src_frame)
+ 			return 0;
+ 
+-		/* Disable streaming from the pipe */
+-		ret = vimc_pipeline_s_stream(&vdeb->sd.entity, 0);
+-		if (ret)
+-			return ret;
+-
+ 		vfree(vdeb->src_frame);
+ 		vdeb->src_frame = NULL;
+ 	}
+@@ -480,9 +467,8 @@ static void vimc_deb_calc_rgb_sink(struct vimc_deb_device *vdeb,
+ 	}
+ }
+ 
+-static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink,
+-				   const void *sink_frame)
++static void *vimc_deb_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+ 	struct vimc_deb_device *vdeb = container_of(ved, struct vimc_deb_device,
+ 						    ved);
+@@ -491,7 +477,7 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+ 
+ 	/* If the stream in this node is not active, just return */
+ 	if (!vdeb->src_frame)
+-		return;
++		return ERR_PTR(-EINVAL);
+ 
+ 	for (i = 0; i < vdeb->sink_fmt.height; i++)
+ 		for (j = 0; j < vdeb->sink_fmt.width; j++) {
+@@ -499,12 +485,8 @@ static void vimc_deb_process_frame(struct vimc_ent_device *ved,
+ 			vdeb->set_rgb_src(vdeb, i, j, rgb);
+ 		}
+ 
+-	/* Propagate the frame through all source pads */
+-	for (i = 1; i < vdeb->sd.entity.num_pads; i++) {
+-		struct media_pad *pad = &vdeb->sd.entity.pads[i];
++	return vdeb->src_frame;
+ 
+-		vimc_propagate_frame(pad, vdeb->src_frame);
+-	}
+ }
+ 
+ static void vimc_deb_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-scaler.c b/drivers/media/platform/vimc/vimc-scaler.c
+index b0952ee86296..39b2a73dfcc1 100644
+--- a/drivers/media/platform/vimc/vimc-scaler.c
++++ b/drivers/media/platform/vimc/vimc-scaler.c
+@@ -217,7 +217,6 @@ static const struct v4l2_subdev_pad_ops vimc_sca_pad_ops = {
+ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_sca_device *vsca = v4l2_get_subdevdata(sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -245,22 +244,10 @@ static int vimc_sca_s_stream(struct v4l2_subdev *sd, int enable)
+ 		if (!vsca->src_frame)
+ 			return -ENOMEM;
+ 
+-		/* Turn the stream on in the subdevices directly connected */
+-		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 1);
+-		if (ret) {
+-			vfree(vsca->src_frame);
+-			vsca->src_frame = NULL;
+-			return ret;
+-		}
+ 	} else {
+ 		if (!vsca->src_frame)
+ 			return 0;
+ 
+-		/* Disable streaming from the pipe */
+-		ret = vimc_pipeline_s_stream(&vsca->sd.entity, 0);
+-		if (ret)
+-			return ret;
+-
+ 		vfree(vsca->src_frame);
+ 		vsca->src_frame = NULL;
+ 	}
+@@ -346,26 +333,19 @@ static void vimc_sca_fill_src_frame(const struct vimc_sca_device *const vsca,
+ 			vimc_sca_scale_pix(vsca, i, j, sink_frame);
+ }
+ 
+-static void vimc_sca_process_frame(struct vimc_ent_device *ved,
+-				   struct media_pad *sink,
+-				   const void *sink_frame)
++static void *vimc_sca_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+ 	struct vimc_sca_device *vsca = container_of(ved, struct vimc_sca_device,
+ 						    ved);
+-	unsigned int i;
+ 
+ 	/* If the stream in this node is not active, just return */
+ 	if (!vsca->src_frame)
+-		return;
++		return ERR_PTR(-EINVAL);
+ 
+ 	vimc_sca_fill_src_frame(vsca, sink_frame);
+ 
+-	/* Propagate the frame through all source pads */
+-	for (i = 1; i < vsca->sd.entity.num_pads; i++) {
+-		struct media_pad *pad = &vsca->sd.entity.pads[i];
+-
+-		vimc_propagate_frame(pad, vsca->src_frame);
+-	}
++	return vsca->src_frame;
+ };
+ 
+ static void vimc_sca_comp_unbind(struct device *comp, struct device *master,
+diff --git a/drivers/media/platform/vimc/vimc-sensor.c b/drivers/media/platform/vimc/vimc-sensor.c
+index 32ca9c6172b1..93961a1e694f 100644
+--- a/drivers/media/platform/vimc/vimc-sensor.c
++++ b/drivers/media/platform/vimc/vimc-sensor.c
+@@ -16,8 +16,6 @@
+  */
+ 
+ #include
+-#include
+-#include
+ #include
+ #include
+ #include
+@@ -201,38 +199,27 @@ static const struct v4l2_subdev_pad_ops vimc_sen_pad_ops = {
+ 	.set_fmt = vimc_sen_set_fmt,
+ };
+ 
+-static int vimc_sen_tpg_thread(void *data)
++static void *vimc_sen_process_frame(struct vimc_ent_device *ved,
++				    const void *sink_frame)
+ {
+-	struct vimc_sen_device *vsen = data;
+-	unsigned int i;
+-
+-	set_freezable();
+-	set_current_state(TASK_UNINTERRUPTIBLE);
+-
+-	for (;;) {
+-		try_to_freeze();
+-		if (kthread_should_stop())
+-			break;
+-
+-		tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++	struct vimc_sen_device *vsen = container_of(ved, struct vimc_sen_device,
++						    ved);
++	const struct vimc_pix_map *vpix;
++	unsigned int frame_size;
+ 
+-		/* Send the frame to all source pads */
+-		for (i = 0; i < vsen->sd.entity.num_pads; i++)
+-			vimc_propagate_frame(&vsen->sd.entity.pads[i],
+-					     vsen->frame);
++	/* Calculate the frame size */
++	vpix = vimc_pix_map_by_code(vsen->mbus_format.code);
++	frame_size = vsen->mbus_format.width * vpix->bpp *
++		     vsen->mbus_format.height;
+ 
+-		/* 60 frames per second */
+-		schedule_timeout(HZ/60);
+-	}
+-
+-	return 0;
++	tpg_fill_plane_buffer(&vsen->tpg, 0, 0, vsen->frame);
++	return vsen->frame;
+ }
+ 
+ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ {
+ 	struct vimc_sen_device *vsen =
+ 				container_of(sd, struct vimc_sen_device, sd);
+-	int ret;
+ 
+ 	if (enable) {
+ 		const struct vimc_pix_map *vpix;
+@@ -258,26 +245,8 @@ static int vimc_sen_s_stream(struct v4l2_subdev *sd, int enable)
+ 		/* configure the test pattern generator */
+ 		vimc_sen_tpg_s_format(vsen);
+ 
+-		/* Initialize the image generator thread */
+-		vsen->kthread_sen = kthread_run(vimc_sen_tpg_thread, vsen,
+-					"%s-sen", vsen->sd.v4l2_dev->name);
+-		if (IS_ERR(vsen->kthread_sen)) {
+-			dev_err(vsen->dev, "%s: kernel_thread() failed\n",
+-				vsen->sd.name);
+-			vfree(vsen->frame);
+-			vsen->frame = NULL;
+-			return PTR_ERR(vsen->kthread_sen);
+-		}
+ 	} else {
+-		if (!vsen->kthread_sen)
+-			return 0;
+-
+-		/* Stop image generator */
+-		ret = kthread_stop(vsen->kthread_sen);
+-		if (ret)
+-			return ret;
+ 
+-		vsen->kthread_sen = NULL;
+ 		vfree(vsen->frame);
+ 		vsen->frame = NULL;
+ 		return 0;
+@@ -413,6 +382,7 @@ static int vimc_sen_comp_bind(struct device *comp, struct device *master,
+ 	if (ret)
+ 		goto err_free_hdl;
+ 
++	vsen->ved.process_frame = vimc_sen_process_frame;
+ 	dev_set_drvdata(comp, &vsen->ved);
+ 	vsen->dev = comp;
+ 
+diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
+new file mode 100644
+index 000000000000..fcc897fb247b
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.c
+@@ -0,0 +1,188 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * vimc-streamer.c Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães
++ *
++ */
++
++#include
++#include
++#include
++#include
++
++#include "vimc-streamer.h"
++
++/**
++ * vimc_get_source_entity - get the entity connected with the first sink pad
++ *
++ * @ent:	reference media_entity
++ *
++ * Helper function that returns the media entity containing the source pad
++ * linked with the first sink pad from the given media entity pad list.
++ */
++static struct media_entity *vimc_get_source_entity(struct media_entity *ent)
++{
++	struct media_pad *pad;
++	int i;
++
++	for (i = 0; i < ent->num_pads; i++) {
++		if (ent->pads[i].flags & MEDIA_PAD_FL_SOURCE)
++			continue;
++		pad = media_entity_remote_pad(&ent->pads[i]);
++		return pad ? pad->entity : NULL;
++	}
++	return NULL;
++}
++
++/*
++ * vimc_streamer_pipeline_terminate - Disable stream in all ved in stream
++ *
++ * @stream: the pointer to the stream structure with the pipeline to be
++ *	    disabled.
++ *
++ * Calls s_stream to disable the stream in each entity of the pipeline
++ *
++ */
++static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream)
++{
++	struct media_entity *entity;
++	struct v4l2_subdev *sd;
++
++	while (stream->pipe_size) {
++		stream->pipe_size--;
++		entity = stream->ved_pipeline[stream->pipe_size]->ent;
++		entity = vimc_get_source_entity(entity);
++		stream->ved_pipeline[stream->pipe_size] = NULL;
++
++		if (!is_media_entity_v4l2_subdev(entity))
++			continue;
++
++		sd = media_entity_to_v4l2_subdev(entity);
++		v4l2_subdev_call(sd, video, s_stream, 0);
++	}
++}
++
++/*
++ * vimc_streamer_pipeline_init - initializes the stream structure
++ *
++ * @stream: the pointer to the stream structure to be initialized
++ * @ved:    the pointer to the vimc entity initializing the stream
++ *
++ * Initializes the stream structure. Walks through the entity graph to
++ * construct the pipeline used later on the streamer thread.
++ * Calls s_stream to enable stream in all entities of the pipeline.
++ */
++static int vimc_streamer_pipeline_init(struct vimc_stream *stream,
++				       struct vimc_ent_device *ved)
++{
++	struct media_entity *entity;
++	struct video_device *vdev;
++	struct v4l2_subdev *sd;
++	int ret = 0;
++
++	stream->pipe_size = 0;
++	while (stream->pipe_size < VIMC_STREAMER_PIPELINE_MAX_SIZE) {
++		if (!ved) {
++			vimc_streamer_pipeline_terminate(stream);
++			return -EINVAL;
++		}
++		stream->ved_pipeline[stream->pipe_size++] = ved;
++
++		entity = vimc_get_source_entity(ved->ent);
++		/* Check if the end of the pipeline was reached */
++		if (!entity)
++			return 0;
++
++		if (is_media_entity_v4l2_subdev(entity)) {
++			sd = media_entity_to_v4l2_subdev(entity);
++			ret = v4l2_subdev_call(sd, video, s_stream, 1);
++			if (ret && ret != -ENOIOCTLCMD) {
++				vimc_streamer_pipeline_terminate(stream);
++				return ret;
++			}
++			ved = v4l2_get_subdevdata(sd);
++		} else {
++			vdev = container_of(entity,
++					    struct video_device,
++					    entity);
++			ved = video_get_drvdata(vdev);
++		}
++	}
++
++	vimc_streamer_pipeline_terminate(stream);
++	return -EINVAL;
++}
++
++static int vimc_streamer_thread(void *data)
++{
++	struct vimc_stream *stream = data;
++	int i;
++
++	set_freezable();
++	set_current_state(TASK_UNINTERRUPTIBLE);
++
++	for (;;) {
++		try_to_freeze();
++		if (kthread_should_stop())
++			break;
++
++		for (i = stream->pipe_size - 1; i >= 0; i--) {
++			stream->frame = stream->ved_pipeline[i]->process_frame(
++					stream->ved_pipeline[i],
++					stream->frame);
++			if (!stream->frame)
++				break;
++			if (IS_ERR(stream->frame))
++				break;
++		}
++		// wait for 60 Hz
++		schedule_timeout(HZ / 60);
++	}
++
++	return 0;
++}
++
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++			   struct vimc_ent_device *ved,
++			   int enable)
++{
++	int ret;
++
++	if (!stream || !ved)
++		return -EINVAL;
++
++	if (enable) {
++		if (stream->kthread)
++			return 0;
++
++		ret = vimc_streamer_pipeline_init(stream, ved);
++		if (ret)
++			return ret;
++
++		stream->kthread = kthread_run(vimc_streamer_thread, stream,
++					      "vimc-streamer thread");
++
++		if (IS_ERR(stream->kthread))
++			return PTR_ERR(stream->kthread);
++
++	} else {
++		if (!stream->kthread)
++			return 0;
++
++		ret = kthread_stop(stream->kthread);
++		if (ret)
++			return ret;
++
++		stream->kthread = NULL;
++
++		vimc_streamer_pipeline_terminate(stream);
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(vimc_streamer_s_stream);
++
++MODULE_DESCRIPTION("Virtual Media Controller Driver (VIMC) Streamer");
++MODULE_AUTHOR("Lucas A. M. Magalhães ");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/media/platform/vimc/vimc-streamer.h b/drivers/media/platform/vimc/vimc-streamer.h
+new file mode 100644
+index 000000000000..752af2e2d5a2
+--- /dev/null
++++ b/drivers/media/platform/vimc/vimc-streamer.h
+@@ -0,0 +1,38 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++/*
++ * vimc-streamer.h Virtual Media Controller Driver
++ *
++ * Copyright (C) 2018 Lucas A. M. Magalhães
++ *
++ */
++
++#ifndef _VIMC_STREAMER_H_
++#define _VIMC_STREAMER_H_
++
++#include
++
++#include "vimc-common.h"
++
++#define VIMC_STREAMER_PIPELINE_MAX_SIZE 16
++
++struct vimc_stream {
++	struct media_pipeline pipe;
++	struct vimc_ent_device *ved_pipeline[VIMC_STREAMER_PIPELINE_MAX_SIZE];
++	unsigned int pipe_size;
++	u8 *frame;
++	struct task_struct *kthread;
++};
++
++/**
++ * vimc_streamer_s_stream - start/stop the stream
++ *
++ * @stream:	the pointer to the stream to start or stop
++ * @ved:	The last entity of the streamer pipeline
++ * @enable:	any non-zero number starts the stream, zero stops it
++ *
++ */
++int vimc_streamer_s_stream(struct vimc_stream *stream,
++			   struct vimc_ent_device *ved,
++			   int enable);
++
++#endif //_VIMC_STREAMER_H_
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index 84525ff04745..e314657a1843 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -676,6 +676,14 @@ void uvc_video_clock_update(struct uvc_streaming *stream,
+ 	if (!uvc_hw_timestamps_param)
+ 		return;
+ 
++	/*
++	 * We will get called from __vb2_queue_cancel() if there are buffers
++	 * done but not dequeued by the user, but the sample array has already
++	 * been released at that time. Just bail out in that case.
++	 */
++	if (!clock->samples)
++		return;
++
+ 	spin_lock_irqsave(&clock->lock, flags);
+ 
+ 	if (clock->count < clock->size)
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index a530972c5a7e..e0173bf4b0dc 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -1145,6 +1145,9 @@ static int sm501_register_gpio_i2c_instance(struct sm501_devdata *sm,
+ 	lookup = devm_kzalloc(&pdev->dev,
+ 			      sizeof(*lookup) + 3 * sizeof(struct gpiod_lookup),
+ 			      GFP_KERNEL);
++	if (!lookup)
++		return -ENOMEM;
++
+ 	lookup->dev_id = "i2c-gpio";
+ 	if (iic->pin_sda < 32)
+ 		lookup->table[0].chip_label = "SM501-LOW";
+diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
+index 5d28d9e454f5..08f4a512afad 100644
+--- a/drivers/misc/cxl/guest.c
++++ b/drivers/misc/cxl/guest.c
+@@ -267,6 +267,7 @@ static int guest_reset(struct cxl *adapter)
+ 	int i, rc;
+ 
+ 	pr_devel("Adapter reset request\n");
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		if ((afu = adapter->afu[i])) {
+ 			pci_error_handlers(afu, CXL_ERROR_DETECTED_EVENT,
+@@ -283,6 +284,7 @@ static int guest_reset(struct cxl *adapter)
+ 			pci_error_handlers(afu, CXL_RESUME_EVENT, 0);
+ 		}
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index c79ba1c699ad..300531d6136f 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -1805,7 +1805,7 @@ static pci_ers_result_t cxl_vphb_error_detected(struct cxl_afu *afu,
+ 	/* There should only be one entry, but go through the list
+ 	 * anyway
+ 	 */
+-	if (afu->phb == NULL)
++	if (afu == NULL || afu->phb == NULL)
+ 		return result;
+ 
+ 	list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -1832,7 +1832,8 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ {
+ 	struct cxl *adapter = pci_get_drvdata(pdev);
+ 	struct cxl_afu *afu;
+-	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET, afu_result;
++	pci_ers_result_t result = PCI_ERS_RESULT_NEED_RESET;
++	pci_ers_result_t afu_result = PCI_ERS_RESULT_NEED_RESET;
+ 	int i;
+ 
+ 	/* At this point, we could still have an interrupt pending.
+@@ -1843,6 +1844,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 
+ 	/* If we're permanently dead, give up. */
+ 	if (state == pci_channel_io_perm_failure) {
++		spin_lock(&adapter->afu_list_lock);
+ 		for (i = 0; i < adapter->slices; i++) {
+ 			afu = adapter->afu[i];
+ 			/*
+@@ -1851,6 +1853,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 			 */
+ 			cxl_vphb_error_detected(afu, state);
+ 		}
++		spin_unlock(&adapter->afu_list_lock);
+ 		return PCI_ERS_RESULT_DISCONNECT;
+ 	}
+ 
+@@ -1932,11 +1935,17 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 	 * * In slot_reset, free the old resources and allocate new ones.
+ 	 * * In resume, clear the flag to allow things to start.
+ 	 */
++
++	/* Make sure no one else changes the afu list */
++	spin_lock(&adapter->afu_list_lock);
++
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
+-		afu_result = cxl_vphb_error_detected(afu, state);
++		if (afu == NULL)
++			continue;
+ 
++		afu_result = cxl_vphb_error_detected(afu, state);
+ 		cxl_context_detach_all(afu);
+ 		cxl_ops->afu_deactivate_mode(afu, afu->current_mode);
+ 		pci_deconfigure_afu(afu);
+@@ -1948,6 +1957,7 @@ static pci_ers_result_t cxl_pci_error_detected(struct pci_dev *pdev,
+ 		    (result == PCI_ERS_RESULT_NEED_RESET))
+ 			result = PCI_ERS_RESULT_NONE;
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ 
+ 	/* should take the context lock here */
+ 	if (cxl_adapter_context_lock(adapter) != 0)
+@@ -1980,14 +1990,18 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 	 */
+ 	cxl_adapter_context_unlock(adapter);
+ 
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
++		if (afu == NULL)
++			continue;
++
+ 		if (pci_configure_afu(afu, adapter, pdev))
+-			goto err;
++			goto err_unlock;
+ 
+ 		if (cxl_afu_select_best_mode(afu))
+-			goto err;
++			goto err_unlock;
+ 
+ 		if (afu->phb == NULL)
+ 			continue;
+@@ -1999,16 +2013,16 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 			ctx = cxl_get_context(afu_dev);
+ 
+ 			if (ctx && cxl_release_context(ctx))
+-				goto err;
++				goto err_unlock;
+ 
+ 			ctx = cxl_dev_context_init(afu_dev);
+ 			if (IS_ERR(ctx))
+-				goto err;
++				goto err_unlock;
+ 
+ 			afu_dev->dev.archdata.cxl_ctx = ctx;
+ 
+ 			if (cxl_ops->afu_check_and_enable(afu))
+-				goto err;
++				goto err_unlock;
+ 
+ 			afu_dev->error_state = pci_channel_io_normal;
+ 
+@@ -2029,8 +2043,13 @@ static pci_ers_result_t cxl_pci_slot_reset(struct pci_dev *pdev)
+ 			result = PCI_ERS_RESULT_DISCONNECT;
+ 		}
+ 	}
++
++	spin_unlock(&adapter->afu_list_lock);
+ 	return result;
+ 
++err_unlock:
++	spin_unlock(&adapter->afu_list_lock);
++
+ err:
+ 	/* All the bits that happen in both error_detected and cxl_remove
+ 	 * should be idempotent, so we don't need to worry about leaving a mix
+@@ -2051,10 +2070,11 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ 	 * This is not the place to be checking if everything came back up
+ 	 * properly, because there's no return value: do that in slot_reset.
+ 	 */
++	spin_lock(&adapter->afu_list_lock);
+ 	for (i = 0; i < adapter->slices; i++) {
+ 		afu = adapter->afu[i];
+ 
+-		if (afu->phb == NULL)
++		if (afu == NULL || afu->phb == NULL)
+ 			continue;
+ 
+ 		list_for_each_entry(afu_dev, &afu->phb->bus->devices, bus_list) {
+@@ -2063,6 +2083,7 @@ static void cxl_pci_resume(struct pci_dev *pdev)
+ 				afu_dev->driver->err_handler->resume(afu_dev);
+ 		}
+ 	}
++	spin_unlock(&adapter->afu_list_lock);
+ }
+ 
+ static const struct pci_error_handlers cxl_err_handler = {
+diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
+index fc3872fe7b25..c383322ec2ba 100644
+--- a/drivers/misc/mei/bus.c
++++ b/drivers/misc/mei/bus.c
+@@ -541,17 +541,9 @@ int mei_cldev_enable(struct mei_cl_device *cldev)
+ 		goto out;
+ 	}
+ 
+-	if (!mei_cl_bus_module_get(cldev)) {
+-		dev_err(&cldev->dev, "get hw module failed");
+-		ret = -ENODEV;
+-		goto out;
+-	}
+-
+ 	ret = mei_cl_connect(cl, cldev->me_cl, NULL);
+-	if (ret < 0) {
++	if (ret < 0)
+ 		dev_err(&cldev->dev, "cannot connect\n");
+-		mei_cl_bus_module_put(cldev);
+-	}
+ 
+ out:
+ 	mutex_unlock(&bus->device_lock);
+@@ -614,7 +606,6 @@ int mei_cldev_disable(struct mei_cl_device *cldev)
+ 	if (err < 0)
+ 		dev_err(bus->dev, "Could not disconnect from the ME client\n");
+ 
+-	mei_cl_bus_module_put(cldev);
+ out:
+ 	/* Flush queues and remove any pending read */
+ 	mei_cl_flush_queues(cl, NULL);
+@@ -725,9 +716,16 @@ static int mei_cl_device_probe(struct device *dev)
+ 	if (!id)
+ 		return -ENODEV;
+ 
++	if (!mei_cl_bus_module_get(cldev)) {
++		dev_err(&cldev->dev, "get hw module failed");
++		return -ENODEV;
++	}
++
+ 	ret = cldrv->probe(cldev, id);
+-	if (ret)
++	if (ret) {
++		mei_cl_bus_module_put(cldev);
+ 		return ret;
++	}
+ 
+ 	__module_get(THIS_MODULE);
+ 	return 0;
+@@ -755,6 +753,7 @@ static int mei_cl_device_remove(struct device *dev)
+ 
+ 	mei_cldev_unregister_callbacks(cldev);
+ 
++	mei_cl_bus_module_put(cldev);
+ 	module_put(THIS_MODULE);
+ 	dev->driver = NULL;
+ 	return ret;
+diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
+index 8f7616557c97..e6207f614816 100644
+--- a/drivers/misc/mei/hbm.c
++++ b/drivers/misc/mei/hbm.c
+@@ -1029,29 +1029,36 @@ static void mei_hbm_config_features(struct mei_device *dev)
+ 	    dev->version.minor_version >= HBM_MINOR_VERSION_PGI)
+ 		dev->hbm_f_pg_supported = 1;
+ 
++	dev->hbm_f_dc_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DC)
+ 		dev->hbm_f_dc_supported = 1;
+ 
++	dev->hbm_f_ie_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_IE)
+ 		dev->hbm_f_ie_supported = 1;
+ 
+ 	/* disconnect on connect timeout instead of link reset */
++	dev->hbm_f_dot_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_DOT)
+ 		dev->hbm_f_dot_supported = 1;
+ 
+ 	/* Notification Event Support */
++	dev->hbm_f_ev_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_EV)
+ 		dev->hbm_f_ev_supported = 1;
+ 
+ 	/* Fixed Address Client Support */
++	dev->hbm_f_fa_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_FA)
+ 		dev->hbm_f_fa_supported = 1;
+ 
+ 	/* OS ver message Support */
++	dev->hbm_f_os_supported = 0;
+ 	if (dev->version.major_version >= HBM_MAJOR_VERSION_OS)
+ 		dev->hbm_f_os_supported = 1;
+ 
+ 	/* DMA Ring Support */
++	dev->hbm_f_dr_supported = 0;
+ 	if (dev->version.major_version > HBM_MAJOR_VERSION_DR ||
+ 	    (dev->version.major_version == HBM_MAJOR_VERSION_DR &&
+ 	     dev->version.minor_version >= HBM_MINOR_VERSION_DR))
+diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
+index f8240b87df22..f69acb5d4a50 100644
+--- a/drivers/misc/vmw_balloon.c
++++ b/drivers/misc/vmw_balloon.c
+@@ -1287,7 +1287,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 	vmballoon_pop(b);
+ 
+ 	if (vmballoon_send_start(b, VMW_BALLOON_CAPABILITIES))
+-		return;
++		goto unlock;
+ 
+ 	if ((b->capabilities & VMW_BALLOON_BATCHED_CMDS) != 0) {
+ 		if (vmballoon_init_batching(b)) {
+@@ -1298,7 +1298,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 			 * The guest will retry in one second.
+ 			 */
+ 			vmballoon_send_start(b, 0);
+-			return;
++			goto unlock;
+ 		}
+ 	} else if ((b->capabilities & VMW_BALLOON_BASIC_CMDS) != 0) {
+ 		vmballoon_deinit_batching(b);
+@@ -1314,6 +1314,7 @@ static void vmballoon_reset(struct vmballoon *b)
+ 	if (vmballoon_send_guest_id(b))
+ 		pr_err("failed to send guest ID to the host\n");
+ 
++unlock:
+ 	up_write(&b->conf_sem);
+ }
+ 
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index b27a1e620233..1e6b07c176dc 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2381,9 +2381,9 @@ unsigned int mmc_calc_max_discard(struct mmc_card *card)
+ 		return card->pref_erase;
+ 
+ 	max_discard = mmc_do_calc_max_discard(card, MMC_ERASE_ARG);
+-	if (max_discard && mmc_can_trim(card)) {
++	if (mmc_can_trim(card)) {
+ 		max_trim = mmc_do_calc_max_discard(card, MMC_TRIM_ARG);
+-		if (max_trim < max_discard)
++		if (max_trim < max_discard || max_discard == 0)
+ 			max_discard = max_trim;
+ 	} else if (max_discard < card->erase_size) {
+ 		max_discard = 0;
+diff --git a/drivers/mmc/host/renesas_sdhi_core.c b/drivers/mmc/host/renesas_sdhi_core.c
+index 31a351a20dc0..7e2a75c4f36f 100644
+--- a/drivers/mmc/host/renesas_sdhi_core.c
++++ b/drivers/mmc/host/renesas_sdhi_core.c
+@@ -723,6 +723,13 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		host->ops.start_signal_voltage_switch =
+ 			renesas_sdhi_start_signal_voltage_switch;
+ 		host->sdcard_irq_setbit_mask = TMIO_STAT_ALWAYS_SET_27;
++
++		/* SDR and HS200/400 registers requires HW reset */
++		if (of_data && of_data->scc_offset) {
++			priv->scc_ctl = host->ctl + of_data->scc_offset;
++			host->mmc->caps |= MMC_CAP_HW_RESET;
++			host->hw_reset = renesas_sdhi_hw_reset;
++		}
+ 	}
+ 
+ 	/* Orginally registers were 16 bit apart, could be 32 or 64 nowadays */
+@@ -775,8 +782,6 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		const struct renesas_sdhi_scc *taps = of_data->taps;
+ 		bool hit = false;
+ 
+-		host->mmc->caps |= MMC_CAP_HW_RESET;
+-
+ 		for (i = 0; i < of_data->taps_num; i++) {
+ 			if (taps[i].clk_rate == 0 ||
+ 			    taps[i].clk_rate == host->mmc->f_max) {
+@@ -789,12 +794,10 @@ int renesas_sdhi_probe(struct platform_device *pdev,
+ 		if (!hit)
+ 			dev_warn(&host->pdev->dev, "Unknown clock rate for SDR104\n");
+ 
+-		priv->scc_ctl = host->ctl + of_data->scc_offset;
+ 		host->init_tuning = renesas_sdhi_init_tuning;
+ 		host->prepare_tuning = renesas_sdhi_prepare_tuning;
+ 		host->select_tuning = renesas_sdhi_select_tuning;
+ 		host->check_scc_error = renesas_sdhi_check_scc_error;
+-		host->hw_reset = renesas_sdhi_hw_reset;
+ 		host->prepare_hs400_tuning =
+ 			renesas_sdhi_prepare_hs400_tuning;
+ 		host->hs400_downgrade = renesas_sdhi_disable_scc;
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 00d41b312c79..a6f25c796aed 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -979,6 +979,7 @@ static void esdhc_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
+ 	case MMC_TIMING_UHS_SDR25:
+ 	case MMC_TIMING_UHS_SDR50:
+ 	case MMC_TIMING_UHS_SDR104:
++	case MMC_TIMING_MMC_HS:
+ 	case MMC_TIMING_MMC_HS200:
+ 		writel(m, host->ioaddr + ESDHC_MIX_CTRL);
+ 		break;
+diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
+index ddc1f9ca8ebc..4543ac97f077 100644
+--- a/drivers/net/dsa/lantiq_gswip.c
++++ b/drivers/net/dsa/lantiq_gswip.c
+@@ -1069,10 +1069,10 @@ static int gswip_probe(struct platform_device *pdev)
+ 	version = gswip_switch_r(priv, GSWIP_VERSION);
+ 
+ 	/* bring up the mdio bus */
+-	gphy_fw_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+-					     "lantiq,gphy-fw");
++	gphy_fw_np = of_get_compatible_child(dev->of_node, "lantiq,gphy-fw");
+ 	if (gphy_fw_np) {
+ 		err = gswip_gphy_fw_list(priv, gphy_fw_np, version);
++		of_node_put(gphy_fw_np);
+ 		if (err) {
+ 			dev_err(dev, "gphy fw probe failed\n");
+ 			return err;
+@@ -1080,13 +1080,12 @@ static int gswip_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* bring up the mdio bus */
+-	mdio_np = of_find_compatible_node(pdev->dev.of_node, NULL,
+-					  "lantiq,xrx200-mdio");
++	mdio_np = of_get_compatible_child(dev->of_node, "lantiq,xrx200-mdio");
+ 	if (mdio_np) {
+ 		err = gswip_mdio(priv, mdio_np);
+ 		if (err) {
+ 			dev_err(dev, "mdio probe failed\n");
+-			goto gphy_fw;
++			goto put_mdio_node;
+ 		}
+ 	}
+ 
+@@ -1099,7 +1098,7 @@ static int gswip_probe(struct platform_device *pdev)
+ 		dev_err(dev, "wrong CPU port defined, HW only supports port: %i",
+ 			priv->hw_info->cpu_port);
+ 		err = -EINVAL;
+-		goto mdio_bus;
++		goto disable_switch;
+ 	}
+ 
+ 	platform_set_drvdata(pdev, priv);
+@@ -1109,10 +1108,14 @@ static int gswip_probe(struct platform_device *pdev)
+ 		 (version & GSWIP_VERSION_MOD_MASK) >> GSWIP_VERSION_MOD_SHIFT);
+ 	return 0;
+ 
++disable_switch:
++	gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
++	dsa_unregister_switch(priv->ds);
+ mdio_bus:
+ 	if (mdio_np)
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
+-gphy_fw:
++put_mdio_node:
++	of_node_put(mdio_np);
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+ 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+ 	return err;
+@@ -1131,8 +1134,10 @@ static int gswip_remove(struct platform_device *pdev)
+ 
+ 	dsa_unregister_switch(priv->ds);
+ 
+-	if (priv->ds->slave_mii_bus)
++	if (priv->ds->slave_mii_bus) {
+ 		mdiobus_unregister(priv->ds->slave_mii_bus);
++		of_node_put(priv->ds->slave_mii_bus->dev.of_node);
++	}
+ 
+ 	for (i = 0; i < priv->num_gphy_fw; i++)
+ 		gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
+diff --git a/drivers/net/wireless/marvell/libertas_tf/if_usb.c b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+index 789337ea676a..6ede6168bd85 100644
+--- a/drivers/net/wireless/marvell/libertas_tf/if_usb.c
++++ b/drivers/net/wireless/marvell/libertas_tf/if_usb.c
+@@ -433,8 +433,6 @@ static int __if_usb_submit_rx_urb(struct if_usb_card *cardp,
+ 			  skb_tail_pointer(skb),
+ 			  MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn, cardp);
+ 
+-	cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET;
+-
+ 	lbtf_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n",
+ 		      cardp->rx_urb);
+ 	ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+index c08bf371e527..7c9dfa54fee8 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mac.c
+@@ -309,7 +309,7 @@ void mt76x02_mac_write_txwi(struct mt76x02_dev *dev, struct mt76x02_txwi *txwi,
+ 		ccmp_pn[6] = pn >> 32;
+ 		ccmp_pn[7] = pn >> 40;
+ 		txwi->iv = *((__le32 *)&ccmp_pn[0]);
+-		txwi->eiv = *((__le32 *)&ccmp_pn[1]);
++		txwi->eiv = *((__le32 *)&ccmp_pn[4]);
+ 	}
+ 
+ 	spin_lock_bh(&dev->mt76.lock);
+diff --git a/drivers/nvdimm/label.c b/drivers/nvdimm/label.c
+index a11bf4e6b451..6d6e9a12150b 100644
+--- a/drivers/nvdimm/label.c
++++ b/drivers/nvdimm/label.c
+@@ -755,7 +755,7 @@ static const guid_t *to_abstraction_guid(enum nvdimm_claim_class claim_class,
+ 
+ static int __pmem_label_update(struct nd_region *nd_region,
+ 		struct nd_mapping *nd_mapping, struct nd_namespace_pmem *nspm,
+-		int pos)
++		int pos, unsigned long flags)
+ {
+ 	struct nd_namespace_common *ndns = &nspm->nsio.common;
+ 	struct nd_interleave_set *nd_set = nd_region->nd_set;
+@@ -796,7 +796,7 @@ static int __pmem_label_update(struct nd_region *nd_region,
+ 	memcpy(nd_label->uuid, nspm->uuid, NSLABEL_UUID_LEN);
+ 	if (nspm->alt_name)
+ 		memcpy(nd_label->name, nspm->alt_name, NSLABEL_NAME_LEN);
+-	nd_label->flags = __cpu_to_le32(NSLABEL_FLAG_UPDATING);
++	nd_label->flags = __cpu_to_le32(flags);
+ 	nd_label->nlabel = __cpu_to_le16(nd_region->ndr_mappings);
+ 	nd_label->position = __cpu_to_le16(pos);
+ 	nd_label->isetcookie = __cpu_to_le64(cookie);
+@@ -1249,13 +1249,13 @@ static int del_labels(struct nd_mapping *nd_mapping, u8 *uuid)
+ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ 		struct nd_namespace_pmem *nspm, resource_size_t size)
+ {
+-	int i;
++	int i, rc;
+ 
+ 	for (i = 0; i < nd_region->ndr_mappings; i++) {
+ 		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+ 		struct nvdimm_drvdata *ndd = to_ndd(nd_mapping);
+ 		struct resource *res;
+-		int rc, count = 0;
++		int count = 0;
+ 
+ 		if (size == 0) {
+ 			rc = del_labels(nd_mapping, nspm->uuid);
+@@ -1273,7 +1273,20 @@ int nd_pmem_namespace_label_update(struct nd_region *nd_region,
+ 		if (rc < 0)
+ 			return rc;
+ 
+-		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i);
++		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i,
++				NSLABEL_FLAG_UPDATING);
++		if (rc)
++			return rc;
++	}
++
++	if (size == 0)
++		return 0;
++
++	/* Clear the UPDATING flag per UEFI 2.7 expectations */
++	for (i = 0; i < nd_region->ndr_mappings; i++) {
++		struct nd_mapping *nd_mapping = &nd_region->mapping[i];
++
++		rc = __pmem_label_update(nd_region, nd_mapping, nspm, i, 0);
+ 		if (rc)
+ 			return rc;
+ 	}
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index 4b077555ac70..33a3b23b3db7 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -138,6 +138,7 @@ bool nd_is_uuid_unique(struct device *dev, u8 *uuid)
+ bool pmem_should_map_pages(struct device *dev)
+ {
+ 	struct nd_region *nd_region = to_nd_region(dev->parent);
++	struct nd_namespace_common *ndns = to_ndns(dev);
+ 	struct nd_namespace_io *nsio;
+ 
+ 	if (!IS_ENABLED(CONFIG_ZONE_DEVICE))
+@@ -149,6 +150,9 @@ bool pmem_should_map_pages(struct device *dev)
+ 	if (is_nd_pfn(dev) || is_nd_btt(dev))
+ 		return false;
+ 
++	if (ndns->force_raw)
++		return false;
++
+ 	nsio = to_nd_namespace_io(dev);
+ 	if (region_intersects(nsio->res.start, resource_size(&nsio->res),
+ 				IORESOURCE_SYSTEM_RAM,
+diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
+index 6f22272e8d80..7760c1b91853 100644
+--- a/drivers/nvdimm/pfn_devs.c
++++ b/drivers/nvdimm/pfn_devs.c
+@@ -593,7 +593,7 @@ static unsigned long init_altmap_base(resource_size_t base)
+ 
+ static unsigned long init_altmap_reserve(resource_size_t base)
+ {
+-	unsigned long reserve = PHYS_PFN(SZ_8K);
++	unsigned long reserve = PFN_UP(SZ_8K);
+ 	unsigned long base_pfn = PHYS_PFN(base);
+ 
+ 	reserve += base_pfn - PFN_SECTION_ALIGN_DOWN(base_pfn);
+@@ -678,7 +678,7 @@ static void trim_pfn_device(struct nd_pfn *nd_pfn, u32 *start_pad, u32 *end_trun
+ 	if (region_intersects(start, size, IORESOURCE_SYSTEM_RAM,
+ 				IORES_DESC_NONE) == REGION_MIXED
+ 			|| !IS_ALIGNED(end, nd_pfn->align)
+-			|| nd_region_conflict(nd_region, start, size + adjust))
++			|| nd_region_conflict(nd_region, start, size))
+ 		*end_trunc = end - phys_pmem_align_down(nd_pfn, end);
+ }
+ 
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index f7301bb4ef3b..3ce65927e11c 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -686,9 +686,7 @@ struct nvmem_device *nvmem_register(const struct nvmem_config *config)
+ 	if (rval)
+ 		goto err_remove_cells;
+ 
+-	rval = blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+-	if (rval)
+-		goto err_remove_cells;
++	blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem);
+ 
+ 	return nvmem;
+ 
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 18f1639dbc4a..f5d2fa195f5f 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -743,7 +743,7 @@ int dev_pm_opp_set_rate(struct device *dev, unsigned long target_freq)
+ 		 old_freq, freq);
+ 
+ 	/* Scaling up? Configure required OPPs before frequency */
+-	if (freq > old_freq) {
++	if (freq >= old_freq) {
+ 		ret = _set_required_opps(dev, opp_table, opp);
+ 		if (ret)
+ 			goto put_opp;
+diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
+index 9c8249f74479..6296dbb83d47 100644
+--- a/drivers/parport/parport_pc.c
++++ b/drivers/parport/parport_pc.c
+@@ -1377,7 +1377,7 @@ static struct superio_struct *find_superio(struct parport *p)
+ {
+ 	int i;
+ 	for (i = 0; i < NR_SUPERIOS; i++)
+-		if (superios[i].io != p->base)
++		if (superios[i].io == p->base)
+ 			return &superios[i];
+ 	return NULL;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 721d60a5d9e4..9c5614f21b8e 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -439,7 +439,7 @@ int dw_pcie_host_init(struct pcie_port *pp)
+ 	if (ret)
+ 		pci->num_viewport = 2;
+ 
+-	if (IS_ENABLED(CONFIG_PCI_MSI)) {
++	if (IS_ENABLED(CONFIG_PCI_MSI) && pci_msi_enabled()) {
+ 		/*
+ 		 * If a specific SoC driver needs to change the
+ 		 * default number of vectors, it needs to implement
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index d185ea5fe996..a7f703556790 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1228,7 +1228,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ 
+ 	pcie->ops = of_device_get_match_data(dev);
+ 
+-	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_LOW);
++	pcie->reset = devm_gpiod_get_optional(dev, "perst", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(pcie->reset)) {
+ 		ret = PTR_ERR(pcie->reset);
+ 		goto err_pm_runtime_put;
+diff --git a/drivers/pci/controller/pci-aardvark.c b/drivers/pci/controller/pci-aardvark.c
+index 750081c1cb48..6eecae447af3 100644
+--- a/drivers/pci/controller/pci-aardvark.c
++++ b/drivers/pci/controller/pci-aardvark.c
+@@ -499,7 +499,7 @@ static void advk_sw_pci_bridge_init(struct advk_pcie *pcie)
+ 	bridge->data = pcie;
+ 	bridge->ops = &advk_pci_bridge_emul_ops;
+ 
+-	pci_bridge_emul_init(bridge);
++	pci_bridge_emul_init(bridge, 0);
+ 
+ }
+ 
+diff --git a/drivers/pci/controller/pci-mvebu.c b/drivers/pci/controller/pci-mvebu.c
+index fa0fc46edb0c..d3a0419e42f2 100644
+--- a/drivers/pci/controller/pci-mvebu.c
++++ b/drivers/pci/controller/pci-mvebu.c
+@@ -583,7 +583,7 @@ static void mvebu_pci_bridge_emul_init(struct mvebu_pcie_port *port)
+ 	bridge->data = port;
+ 	bridge->ops = &mvebu_pci_bridge_emul_ops;
+ 
+-	pci_bridge_emul_init(bridge);
++	pci_bridge_emul_init(bridge, PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR);
+ }
+ 
+ static inline struct mvebu_pcie *sys_to_pcie(struct pci_sys_data *sys)
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 7dd443aea5a5..c0fb64ace05a 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -736,12 +736,25 @@ void pcie_clear_hotplug_events(struct controller *ctrl)
+ 
+ void pcie_enable_interrupt(struct controller *ctrl)
+ {
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_HPIE, PCI_EXP_SLTCTL_HPIE);
++	u16 mask;
++
++	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++	pcie_write_cmd(ctrl, mask, mask);
+ }
+ 
+ void pcie_disable_interrupt(struct controller *ctrl)
+ {
+-	pcie_write_cmd(ctrl, 0, PCI_EXP_SLTCTL_HPIE);
++	u16 mask;
++
++	/*
++	 * Mask hot-plug interrupt to prevent it triggering immediately
++	 * when the link goes inactive (we still get PME when any of the
++	 * enabled events is detected). Same goes with Link Layer State
++	 * changed event which generates PME immediately when the link goes
++	 * inactive so mask it as well.
++	 */
++	mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++	pcie_write_cmd(ctrl, 0, mask);
+ }
+ 
+ /*
+diff --git a/drivers/pci/pci-bridge-emul.c b/drivers/pci/pci-bridge-emul.c
+index 129738362d90..83fb077d0b41 100644
+--- a/drivers/pci/pci-bridge-emul.c
++++ b/drivers/pci/pci-bridge-emul.c
+@@ -24,29 +24,6 @@
+ #define PCI_CAP_PCIE_START	PCI_BRIDGE_CONF_END
+ #define PCI_CAP_PCIE_END	(PCI_CAP_PCIE_START + PCI_EXP_SLTSTA2 + 2)
+ 
+-/*
+- * Initialize a pci_bridge_emul structure to represent a fake PCI
+- * bridge configuration space. The caller needs to have initialized
+- * the PCI configuration space with whatever values make sense
+- * (typically at least vendor, device, revision), the ->ops pointer,
+- * and optionally ->data and ->has_pcie.
+- */
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge)
+-{
+-	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
+-	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
+-	bridge->conf.cache_line_size = 0x10;
+-	bridge->conf.status = PCI_STATUS_CAP_LIST;
+-
+-	if (bridge->has_pcie) {
+-		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
+-		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
+-		/* Set PCIe v2, root port, slot support */
+-		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
+-			PCI_EXP_FLAGS_SLOT;
+-	}
+-}
+-
+ struct pci_bridge_reg_behavior {
+ 	/* Read-only bits */
+ 	u32 ro;
+@@ -283,6 +260,61 @@ const static struct pci_bridge_reg_behavior pcie_cap_regs_behavior[] = {
+ 	},
+ };
+ 
++/*
++ * Initialize a pci_bridge_emul structure to represent a fake PCI
++ * bridge configuration space. The caller needs to have initialized
++ * the PCI configuration space with whatever values make sense
++ * (typically at least vendor, device, revision), the ->ops pointer,
++ * and optionally ->data and ->has_pcie.
++ */
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++			 unsigned int flags)
++{
++	bridge->conf.class_revision |= PCI_CLASS_BRIDGE_PCI << 16;
++	bridge->conf.header_type = PCI_HEADER_TYPE_BRIDGE;
++	bridge->conf.cache_line_size = 0x10;
++	bridge->conf.status = PCI_STATUS_CAP_LIST;
++	bridge->pci_regs_behavior = kmemdup(pci_regs_behavior,
++					    sizeof(pci_regs_behavior),
++					    GFP_KERNEL);
++	if (!bridge->pci_regs_behavior)
++		return -ENOMEM;
++
++	if (bridge->has_pcie) {
++		bridge->conf.capabilities_pointer = PCI_CAP_PCIE_START;
++		bridge->pcie_conf.cap_id = PCI_CAP_ID_EXP;
++		/* Set PCIe v2, root port, slot support */
++		bridge->pcie_conf.cap = PCI_EXP_TYPE_ROOT_PORT << 4 | 2 |
++			PCI_EXP_FLAGS_SLOT;
++		bridge->pcie_cap_regs_behavior =
++			kmemdup(pcie_cap_regs_behavior,
++				sizeof(pcie_cap_regs_behavior),
++				GFP_KERNEL);
++		if (!bridge->pcie_cap_regs_behavior) {
++			kfree(bridge->pci_regs_behavior);
++			return -ENOMEM;
++		}
++	}
++
++	if (flags & PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR) {
++		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].ro = ~0;
++		bridge->pci_regs_behavior[PCI_PREF_MEMORY_BASE / 4].rw = 0;
++	}
++
++	return 0;
++}
++
++/*
++ * Cleanup a pci_bridge_emul structure that was previously initialized
++ * using pci_bridge_emul_init().
++ */
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge)
++{
++	if (bridge->has_pcie)
++		kfree(bridge->pcie_cap_regs_behavior);
++	kfree(bridge->pci_regs_behavior);
++}
++
+ /*
+  * Should be called by the PCI controller driver when reading the PCI
+  * configuration space of the fake bridge. It will call back the
+@@ -312,11 +344,11 @@ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ 		reg -= PCI_CAP_PCIE_START;
+ 		read_op = bridge->ops->read_pcie;
+ 		cfgspace = (u32 *) &bridge->pcie_conf;
+-		behavior = pcie_cap_regs_behavior;
++		behavior = bridge->pcie_cap_regs_behavior;
+ 	} else {
+ 		read_op = bridge->ops->read_base;
+ 		cfgspace = (u32 *) &bridge->conf;
+-		behavior = pci_regs_behavior;
++		behavior = bridge->pci_regs_behavior;
+ 	}
+ 
+ 	if (read_op)
+@@ -383,11 +415,11 @@ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+ 		reg -= PCI_CAP_PCIE_START;
+ 		write_op = bridge->ops->write_pcie;
+ 		cfgspace = (u32 *) &bridge->pcie_conf;
+-		behavior = pcie_cap_regs_behavior;
++		behavior = bridge->pcie_cap_regs_behavior;
+ 	} else {
+ 		write_op = bridge->ops->write_base;
+ 		cfgspace = (u32 *) &bridge->conf;
+-		behavior = pci_regs_behavior;
++		behavior = bridge->pci_regs_behavior;
+ 	}
+ 
+ 	/* Keep all bits, except the RW bits */
+diff --git a/drivers/pci/pci-bridge-emul.h b/drivers/pci/pci-bridge-emul.h
+index 9d510ccf738b..e65b1b79899d 100644
+--- a/drivers/pci/pci-bridge-emul.h
++++ b/drivers/pci/pci-bridge-emul.h
+@@ -107,15 +107,26 @@ struct pci_bridge_emul_ops {
+ 		     u32 old, u32 new, u32 mask);
+ };
+ 
++struct pci_bridge_reg_behavior;
++
+ struct pci_bridge_emul {
+ 	struct pci_bridge_emul_conf conf;
+ 	struct pci_bridge_emul_pcie_conf pcie_conf;
+ 	struct pci_bridge_emul_ops *ops;
++	struct pci_bridge_reg_behavior *pci_regs_behavior;
++	struct pci_bridge_reg_behavior *pcie_cap_regs_behavior;
+ 	void *data;
+ 	bool has_pcie;
+ };
+ 
+-void pci_bridge_emul_init(struct pci_bridge_emul *bridge);
++enum {
++	PCI_BRIDGE_EMUL_NO_PREFETCHABLE_BAR = BIT(0),
++};
++
++int pci_bridge_emul_init(struct pci_bridge_emul *bridge,
++			 unsigned int flags);
++void pci_bridge_emul_cleanup(struct pci_bridge_emul *bridge);
++
+ int pci_bridge_emul_conf_read(struct pci_bridge_emul *bridge, int where,
+ 			      int size, u32 *value);
+ int pci_bridge_emul_conf_write(struct pci_bridge_emul *bridge, int where,
+diff --git a/drivers/pci/pcie/dpc.c b/drivers/pci/pcie/dpc.c
+index e435d12e61a0..7b77754a82de 100644
+--- a/drivers/pci/pcie/dpc.c
++++ b/drivers/pci/pcie/dpc.c
+@@ -202,6 +202,28 @@ static void dpc_process_rp_pio_error(struct dpc_dev *dpc)
+ 	pci_write_config_dword(pdev, cap + PCI_EXP_DPC_RP_PIO_STATUS, status);
+ }
+ 
++static int dpc_get_aer_uncorrect_severity(struct pci_dev *dev,
++					  struct aer_err_info *info)
++{
++	int pos = dev->aer_cap;
++	u32 status, mask, sev;
++
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_STATUS, &status);
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_MASK, &mask);
++	status &= ~mask;
++	if (!status)
++		return 0;
++
++	pci_read_config_dword(dev, pos + PCI_ERR_UNCOR_SEVER, &sev);
++	status &= sev;
++	if (status)
++		info->severity = AER_FATAL;
++	else
++		info->severity = AER_NONFATAL;
++
++	return 1;
++}
++
+ static irqreturn_t dpc_handler(int irq, void *context)
+ {
+ 	struct aer_err_info info;
+@@ -229,9 +251,12 @@ static irqreturn_t dpc_handler(int irq, void *context)
+ 	/* show RP PIO error detail information */
+ 	if (dpc->rp_extensions && reason == 3 && ext_reason == 0)
+ 		dpc_process_rp_pio_error(dpc);
+-	else if (reason == 0 && aer_get_device_error_info(pdev, &info)) {
++	else if (reason == 0 &&
++		 dpc_get_aer_uncorrect_severity(pdev, &info) &&
++		 aer_get_device_error_info(pdev, &info)) {
+ 		aer_print_error(pdev, &info);
+ 		pci_cleanup_aer_uncorrect_error_status(pdev);
++		pci_aer_clear_fatal_status(pdev);
+ 	}
+ 
+ 	/* We configure DPC so it only triggers on ERR_FATAL */
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 257b9f6f2ebb..c46a3fcb341e 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -2071,11 +2071,8 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ {
+ #ifdef CONFIG_PCIEASPM
+ 	struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
+-	u32 cap;
+ 	struct pci_dev *bridge;
+-
+-	if (!host->native_ltr)
+-		return;
++	u32 cap, ctl;
+ 
+ 	if (!pci_is_pcie(dev))
+ 		return;
+@@ -2084,22 +2081,35 @@ static void pci_configure_ltr(struct pci_dev *dev)
+ 	if (!(cap & PCI_EXP_DEVCAP2_LTR))
+ 		return;
+ 
+-	/*
+-	 * Software must not enable LTR in an Endpoint unless the Root
+-	 * Complex and all intermediate Switches indicate support for LTR.
+-	 * PCIe r3.1, sec 6.18.
+-	 */
+-	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT)
+-		dev->ltr_path = 1;
+-	else {
++	pcie_capability_read_dword(dev, PCI_EXP_DEVCTL2, &ctl);
++	if (ctl & PCI_EXP_DEVCTL2_LTR_EN) {
++		if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT) {
++			dev->ltr_path = 1;
++			return;
++		}
++
+ 		bridge = pci_upstream_bridge(dev);
+ 		if (bridge && bridge->ltr_path)
+ 			dev->ltr_path = 1;
++
++		return;
+ 	}
+ 
+-	if (dev->ltr_path)
++	if (!host->native_ltr)
++		return;
++
++	/*
++	 * Software must not enable LTR in an Endpoint unless the Root
++	 * Complex and all intermediate Switches indicate support for LTR.
++	 * PCIe r4.0, sec 6.18.
++	 */
++	if (pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
++	    ((bridge = pci_upstream_bridge(dev)) &&
++	      bridge->ltr_path)) {
+ 		pcie_capability_set_word(dev, PCI_EXP_DEVCTL2,
+ 					 PCI_EXP_DEVCTL2_LTR_EN);
++		dev->ltr_path = 1;
++	}
+ #endif
+ }
+ 
+diff --git a/drivers/power/supply/cpcap-charger.c b/drivers/power/supply/cpcap-charger.c
+index c843eaff8ad0..c3ed7b476676 100644
+--- a/drivers/power/supply/cpcap-charger.c
++++ b/drivers/power/supply/cpcap-charger.c
+@@ -458,6 +458,7 @@ static void cpcap_usb_detect(struct work_struct *work)
+ 			goto out_err;
+ 	}
+ 
++	power_supply_changed(ddata->usb);
+ 	return;
+ 
+ out_err:
+diff --git a/drivers/regulator/max77620-regulator.c b/drivers/regulator/max77620-regulator.c
+index b94e3a721721..cd93cf53e23c 100644
+--- a/drivers/regulator/max77620-regulator.c
++++ b/drivers/regulator/max77620-regulator.c
+@@ -1,7 +1,7 @@
+ /*
+  * Maxim MAX77620 Regulator driver
+  *
+- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
++ * Copyright (c) 2016-2018, NVIDIA CORPORATION. All rights reserved.
+  *
+  * Author: Mallikarjun Kasoju
+  *	Laxman Dewangan
+@@ -803,6 +803,14 @@ static int max77620_regulator_probe(struct platform_device *pdev)
+ 		rdesc = &rinfo[id].desc;
+ 		pmic->rinfo[id] = &max77620_regs_info[id];
+ 		pmic->enable_power_mode[id] = MAX77620_POWER_MODE_NORMAL;
++		pmic->reg_pdata[id].active_fps_src = -1;
++		pmic->reg_pdata[id].active_fps_pd_slot = -1;
++		pmic->reg_pdata[id].active_fps_pu_slot = -1;
++		pmic->reg_pdata[id].suspend_fps_src = -1;
++		pmic->reg_pdata[id].suspend_fps_pd_slot = -1;
++		pmic->reg_pdata[id].suspend_fps_pu_slot = -1;
++		pmic->reg_pdata[id].power_ok = -1;
++		pmic->reg_pdata[id].ramp_rate_setting = -1;
+ 
+ 		ret = max77620_read_slew_rate(pmic, id);
+ 		if (ret < 0)
+diff --git a/drivers/regulator/s2mpa01.c b/drivers/regulator/s2mpa01.c
+index 095d25f3d2ea..58a1fe583a6c 100644
+--- a/drivers/regulator/s2mpa01.c
++++ b/drivers/regulator/s2mpa01.c
+@@ -298,13 +298,13 @@ static const struct regulator_desc regulators[] = {
+ 	regulator_desc_ldo(2, STEP_50_MV),
+ 	regulator_desc_ldo(3, STEP_50_MV),
+ 	regulator_desc_ldo(4, STEP_50_MV),
+-	regulator_desc_ldo(5, STEP_50_MV),
++	regulator_desc_ldo(5, STEP_25_MV),
+ 	regulator_desc_ldo(6, STEP_25_MV),
+ 	regulator_desc_ldo(7, STEP_50_MV),
+ 	regulator_desc_ldo(8, STEP_50_MV),
+ 	regulator_desc_ldo(9, STEP_50_MV),
+ 	regulator_desc_ldo(10, STEP_50_MV),
+-	regulator_desc_ldo(11, STEP_25_MV),
++	regulator_desc_ldo(11, STEP_50_MV),
+ 	regulator_desc_ldo(12, STEP_50_MV),
+ 	regulator_desc_ldo(13, STEP_50_MV),
+ 	regulator_desc_ldo(14, STEP_50_MV),
+@@ -315,11 +315,11 @@ static const struct regulator_desc regulators[] = {
+ 	regulator_desc_ldo(19, STEP_50_MV),
+ 	regulator_desc_ldo(20, STEP_50_MV),
+ 	regulator_desc_ldo(21, STEP_50_MV),
+-	regulator_desc_ldo(22, STEP_25_MV),
+-	regulator_desc_ldo(23, STEP_25_MV),
++	regulator_desc_ldo(22, STEP_50_MV),
++	regulator_desc_ldo(23, STEP_50_MV),
+ 	regulator_desc_ldo(24, STEP_50_MV),
+ 	regulator_desc_ldo(25, STEP_50_MV),
+-	regulator_desc_ldo(26, STEP_50_MV),
++	regulator_desc_ldo(26, STEP_25_MV),
+ 	regulator_desc_buck1_4(1),
+ 	regulator_desc_buck1_4(2),
+ 	regulator_desc_buck1_4(3),
+diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c
+index ee4a23ab0663..134c62db36c5 100644
+--- a/drivers/regulator/s2mps11.c
++++ b/drivers/regulator/s2mps11.c
+@@ -362,7 +362,7 @@ static const struct regulator_desc s2mps11_regulators[] = {
+ 	regulator_desc_s2mps11_ldo(32, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(33, STEP_50_MV),
+ 	regulator_desc_s2mps11_ldo(34, STEP_50_MV),
STEP_50_MV), +- regulator_desc_s2mps11_ldo(35, STEP_50_MV), ++ regulator_desc_s2mps11_ldo(35, STEP_25_MV), + regulator_desc_s2mps11_ldo(36, STEP_50_MV), + regulator_desc_s2mps11_ldo(37, STEP_50_MV), + regulator_desc_s2mps11_ldo(38, STEP_50_MV), +@@ -372,8 +372,8 @@ static const struct regulator_desc s2mps11_regulators[] = { + regulator_desc_s2mps11_buck1_4(4), + regulator_desc_s2mps11_buck5, + regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV), +- regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_6_25_MV), +- regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_6_25_MV), ++ regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV), ++ regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV), + regulator_desc_s2mps11_buck9, + regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV), + }; +diff --git a/drivers/s390/crypto/vfio_ap_drv.c b/drivers/s390/crypto/vfio_ap_drv.c +index 31c6c847eaca..e9824c35c34f 100644 +--- a/drivers/s390/crypto/vfio_ap_drv.c ++++ b/drivers/s390/crypto/vfio_ap_drv.c +@@ -15,7 +15,6 @@ + #include "vfio_ap_private.h" + + #define VFIO_AP_ROOT_NAME "vfio_ap" +-#define VFIO_AP_DEV_TYPE_NAME "ap_matrix" + #define VFIO_AP_DEV_NAME "matrix" + + MODULE_AUTHOR("IBM Corporation"); +@@ -24,10 +23,6 @@ MODULE_LICENSE("GPL v2"); + + static struct ap_driver vfio_ap_drv; + +-static struct device_type vfio_ap_dev_type = { +- .name = VFIO_AP_DEV_TYPE_NAME, +-}; +- + struct ap_matrix_dev *matrix_dev; + + /* Only type 10 adapters (CEX4 and later) are supported +@@ -62,6 +57,22 @@ static void vfio_ap_matrix_dev_release(struct device *dev) + kfree(matrix_dev); + } + ++static int matrix_bus_match(struct device *dev, struct device_driver *drv) ++{ ++ return 1; ++} ++ ++static struct bus_type matrix_bus = { ++ .name = "matrix", ++ .match = &matrix_bus_match, ++}; ++ ++static struct device_driver matrix_driver = { ++ .name = "vfio_ap", ++ .bus = &matrix_bus, ++ .suppress_bind_attrs = true, ++}; ++ + static int vfio_ap_matrix_dev_create(void) + { + int ret; +@@ -71,6 +82,10 @@ static int vfio_ap_matrix_dev_create(void) + if (IS_ERR(root_device)) + return PTR_ERR(root_device); + ++ ret = bus_register(&matrix_bus); ++ if (ret) ++ goto bus_register_err; ++ + matrix_dev = kzalloc(sizeof(*matrix_dev), GFP_KERNEL); + if (!matrix_dev) { + ret = -ENOMEM; +@@ -87,30 +102,41 @@ static int vfio_ap_matrix_dev_create(void) + mutex_init(&matrix_dev->lock); + INIT_LIST_HEAD(&matrix_dev->mdev_list); + +- matrix_dev->device.type = &vfio_ap_dev_type; + dev_set_name(&matrix_dev->device, "%s", VFIO_AP_DEV_NAME); + matrix_dev->device.parent = root_device; ++ matrix_dev->device.bus = &matrix_bus; + matrix_dev->device.release = vfio_ap_matrix_dev_release; +- matrix_dev->device.driver = &vfio_ap_drv.driver; ++ matrix_dev->vfio_ap_drv = &vfio_ap_drv; + + ret = device_register(&matrix_dev->device); + if (ret) + goto matrix_reg_err; + ++ ret = driver_register(&matrix_driver); ++ if (ret) ++ goto matrix_drv_err; ++ + return 0; + ++matrix_drv_err: ++ device_unregister(&matrix_dev->device); + matrix_reg_err: + put_device(&matrix_dev->device); + matrix_alloc_err: ++ bus_unregister(&matrix_bus); ++bus_register_err: + root_device_unregister(root_device); +- + return ret; + } + + static void vfio_ap_matrix_dev_destroy(void) + { ++ struct device *root_device = matrix_dev->device.parent; ++ ++ driver_unregister(&matrix_driver); + device_unregister(&matrix_dev->device); +- root_device_unregister(matrix_dev->device.parent); ++ bus_unregister(&matrix_bus); ++ root_device_unregister(root_device); + } + 
+ static int __init vfio_ap_init(void) +diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c +index 272ef427dcc0..900b9cf20ca5 100644 +--- a/drivers/s390/crypto/vfio_ap_ops.c ++++ b/drivers/s390/crypto/vfio_ap_ops.c +@@ -198,8 +198,8 @@ static int vfio_ap_verify_queue_reserved(unsigned long *apid, + qres.apqi = apqi; + qres.reserved = false; + +- ret = driver_for_each_device(matrix_dev->device.driver, NULL, &qres, +- vfio_ap_has_queue); ++ ret = driver_for_each_device(&matrix_dev->vfio_ap_drv->driver, NULL, ++ &qres, vfio_ap_has_queue); + if (ret) + return ret; + +diff --git a/drivers/s390/crypto/vfio_ap_private.h b/drivers/s390/crypto/vfio_ap_private.h +index 5675492233c7..76b7f98e47e9 100644 +--- a/drivers/s390/crypto/vfio_ap_private.h ++++ b/drivers/s390/crypto/vfio_ap_private.h +@@ -40,6 +40,7 @@ struct ap_matrix_dev { + struct ap_config_info info; + struct list_head mdev_list; + struct mutex lock; ++ struct ap_driver *vfio_ap_drv; + }; + + extern struct ap_matrix_dev *matrix_dev; +diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c +index ae1d56da671d..1a738fe9f26b 100644 +--- a/drivers/s390/virtio/virtio_ccw.c ++++ b/drivers/s390/virtio/virtio_ccw.c +@@ -272,6 +272,8 @@ static void virtio_ccw_drop_indicators(struct virtio_ccw_device *vcdev) + { + struct virtio_ccw_vq_info *info; + ++ if (!vcdev->airq_info) ++ return; + list_for_each_entry(info, &vcdev->virtqueues, node) + drop_airq_indicator(info->vq, vcdev->airq_info); + } +@@ -413,7 +415,7 @@ static int virtio_ccw_read_vq_conf(struct virtio_ccw_device *vcdev, + ret = ccw_io_helper(vcdev, ccw, VIRTIO_CCW_DOING_READ_VQ_CONF); + if (ret) + return ret; +- return vcdev->config_block->num; ++ return vcdev->config_block->num ?: -ENOENT; + } + + static void virtio_ccw_del_vq(struct virtqueue *vq, struct ccw1 *ccw) +diff --git a/drivers/scsi/aacraid/linit.c b/drivers/scsi/aacraid/linit.c +index 7e56a11836c1..ccefface7e31 100644 +--- a/drivers/scsi/aacraid/linit.c ++++ b/drivers/scsi/aacraid/linit.c +@@ -413,13 +413,16 @@ static int aac_slave_configure(struct scsi_device *sdev) + if (chn < AAC_MAX_BUSES && tid < AAC_MAX_TARGETS && aac->sa_firmware) { + devtype = aac->hba_map[chn][tid].devtype; + +- if (devtype == AAC_DEVTYPE_NATIVE_RAW) ++ if (devtype == AAC_DEVTYPE_NATIVE_RAW) { + depth = aac->hba_map[chn][tid].qd_limit; +- else if (devtype == AAC_DEVTYPE_ARC_RAW) ++ set_timeout = 1; ++ goto common_config; ++ } ++ if (devtype == AAC_DEVTYPE_ARC_RAW) { + set_qd_dev_type = true; +- +- set_timeout = 1; +- goto common_config; ++ set_timeout = 1; ++ goto common_config; ++ } + } + + if (aac->jbod && (sdev->type == TYPE_DISK)) +diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c +index 8d1acc802a67..f44e640229e7 100644 +--- a/drivers/scsi/qla2xxx/qla_init.c ++++ b/drivers/scsi/qla2xxx/qla_init.c +@@ -644,11 +644,14 @@ static void qla24xx_handle_gnl_done_event(scsi_qla_host_t *vha, + break; + case DSC_LS_PORT_UNAVAIL: + default: +- if (fcport->loop_id != FC_NO_LOOP_ID) +- qla2x00_clear_loop_id(fcport); +- +- fcport->loop_id = loop_id; +- fcport->fw_login_state = DSC_LS_PORT_UNAVAIL; ++ if (fcport->loop_id == FC_NO_LOOP_ID) { ++ qla2x00_find_new_loop_id(vha, fcport); ++ fcport->fw_login_state = ++ DSC_LS_PORT_UNAVAIL; ++ } ++ ql_dbg(ql_dbg_disc, vha, 0x20e5, ++ "%s %d %8phC\n", __func__, __LINE__, ++ fcport->port_name); + qla24xx_fcport_handle_login(vha, fcport); + break; + } +@@ -1471,29 +1474,6 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, 
fc_port_t *fcport) + return 0; + } + +-static +-void qla24xx_handle_rscn_event(fc_port_t *fcport, struct event_arg *ea) +-{ +- fcport->rscn_gen++; +- +- ql_dbg(ql_dbg_disc, fcport->vha, 0x210c, +- "%s %8phC DS %d LS %d\n", +- __func__, fcport->port_name, fcport->disc_state, +- fcport->fw_login_state); +- +- if (fcport->flags & FCF_ASYNC_SENT) +- return; +- +- switch (fcport->disc_state) { +- case DSC_DELETED: +- case DSC_LOGIN_COMPLETE: +- qla24xx_post_gpnid_work(fcport->vha, &ea->id); +- break; +- default: +- break; +- } +-} +- + int qla24xx_post_newsess_work(struct scsi_qla_host *vha, port_id_t *id, + u8 *port_name, u8 *node_name, void *pla, u8 fc4_type) + { +@@ -1560,8 +1540,6 @@ static void qla_handle_els_plogi_done(scsi_qla_host_t *vha, + + void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea) + { +- fc_port_t *f, *tf; +- uint32_t id = 0, mask, rid; + fc_port_t *fcport; + + switch (ea->event) { +@@ -1574,10 +1552,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea) + case FCME_RSCN: + if (test_bit(UNLOADING, &vha->dpc_flags)) + return; +- switch (ea->id.b.rsvd_1) { +- case RSCN_PORT_ADDR: +-#define BIGSCAN 1 +-#if defined BIGSCAN & BIGSCAN > 0 + { + unsigned long flags; + fcport = qla2x00_find_fcport_by_nportid +@@ -1596,59 +1570,6 @@ void qla2x00_fcport_event_handler(scsi_qla_host_t *vha, struct event_arg *ea) + } + spin_unlock_irqrestore(&vha->work_lock, flags); + } +-#else +- { +- int rc; +- fcport = qla2x00_find_fcport_by_nportid(vha, &ea->id, 1); +- if (!fcport) { +- /* cable moved */ +- rc = qla24xx_post_gpnid_work(vha, &ea->id); +- if (rc) { +- ql_log(ql_log_warn, vha, 0xd044, +- "RSCN GPNID work failed %06x\n", +- ea->id.b24); +- } +- } else { +- ea->fcport = fcport; +- fcport->scan_needed = 1; +- qla24xx_handle_rscn_event(fcport, ea); +- } +- } +-#endif +- break; +- case RSCN_AREA_ADDR: +- case RSCN_DOM_ADDR: +- if (ea->id.b.rsvd_1 == RSCN_AREA_ADDR) { +- mask = 0xffff00; +- ql_dbg(ql_dbg_async, vha, 0x5044, +- "RSCN: Area 0x%06x was affected\n", +- ea->id.b24); +- } else { +- mask = 0xff0000; +- ql_dbg(ql_dbg_async, vha, 0x507a, +- "RSCN: Domain 0x%06x was affected\n", +- ea->id.b24); +- } +- +- rid = ea->id.b24 & mask; +- list_for_each_entry_safe(f, tf, &vha->vp_fcports, +- list) { +- id = f->d_id.b24 & mask; +- if (rid == id) { +- ea->fcport = f; +- qla24xx_handle_rscn_event(f, ea); +- } +- } +- break; +- case RSCN_FAB_ADDR: +- default: +- ql_log(ql_log_warn, vha, 0xd045, +- "RSCN: Fabric was affected. 
Addr format %d\n", +- ea->id.b.rsvd_1); +- qla2x00_mark_all_devices_lost(vha, 1); +- set_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags); +- set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags); +- } + break; + case FCME_GNL_DONE: + qla24xx_handle_gnl_done_event(vha, ea); +@@ -1709,11 +1630,7 @@ void qla_rscn_replay(fc_port_t *fcport) + ea.event = FCME_RSCN; + ea.id = fcport->d_id; + ea.id.b.rsvd_1 = RSCN_PORT_ADDR; +-#if defined BIGSCAN & BIGSCAN > 0 + qla2x00_fcport_event_handler(fcport->vha, &ea); +-#else +- qla24xx_post_gpnid_work(fcport->vha, &ea.id); +-#endif + } + } + +diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c +index 8507c43b918c..1a20e5d8f057 100644 +--- a/drivers/scsi/qla2xxx/qla_isr.c ++++ b/drivers/scsi/qla2xxx/qla_isr.c +@@ -3410,7 +3410,7 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp) + min_vecs++; + } + +- if (USER_CTRL_IRQ(ha)) { ++ if (USER_CTRL_IRQ(ha) || !ha->mqiobase) { + /* user wants to control IRQ setting for target mode */ + ret = pci_alloc_irq_vectors(ha->pdev, min_vecs, + ha->msix_count, PCI_IRQ_MSIX); +diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c +index c6ef83d0d99b..7e35ce2162d0 100644 +--- a/drivers/scsi/qla2xxx/qla_os.c ++++ b/drivers/scsi/qla2xxx/qla_os.c +@@ -6936,7 +6936,7 @@ static int qla2xxx_map_queues(struct Scsi_Host *shost) + scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata; + struct blk_mq_queue_map *qmap = &shost->tag_set.map[0]; + +- if (USER_CTRL_IRQ(vha->hw)) ++ if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase) + rc = blk_mq_map_queues(qmap); + else + rc = blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset); +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c +index 5464d467e23e..b84099479fe0 100644 +--- a/drivers/scsi/sd.c ++++ b/drivers/scsi/sd.c +@@ -3047,6 +3047,55 @@ static void sd_read_security(struct scsi_disk *sdkp, unsigned char *buffer) + sdkp->security = 1; + } + ++/* ++ * Determine the device's preferred I/O size for reads and writes ++ * unless the reported value is unreasonably small, large, not a ++ * multiple of the physical block size, or simply garbage. 
++ */ ++static bool sd_validate_opt_xfer_size(struct scsi_disk *sdkp, ++ unsigned int dev_max) ++{ ++ struct scsi_device *sdp = sdkp->device; ++ unsigned int opt_xfer_bytes = ++ logical_to_bytes(sdp, sdkp->opt_xfer_blocks); ++ ++ if (sdkp->opt_xfer_blocks > dev_max) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u logical blocks " \ ++ "> dev_max (%u logical blocks)\n", ++ sdkp->opt_xfer_blocks, dev_max); ++ return false; ++ } ++ ++ if (sdkp->opt_xfer_blocks > SD_DEF_XFER_BLOCKS) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u logical blocks " \ ++ "> sd driver limit (%u logical blocks)\n", ++ sdkp->opt_xfer_blocks, SD_DEF_XFER_BLOCKS); ++ return false; ++ } ++ ++ if (opt_xfer_bytes < PAGE_SIZE) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u bytes < " \ ++ "PAGE_SIZE (%u bytes)\n", ++ opt_xfer_bytes, (unsigned int)PAGE_SIZE); ++ return false; ++ } ++ ++ if (opt_xfer_bytes & (sdkp->physical_block_size - 1)) { ++ sd_first_printk(KERN_WARNING, sdkp, ++ "Optimal transfer size %u bytes not a " \ ++ "multiple of physical block size (%u bytes)\n", ++ opt_xfer_bytes, sdkp->physical_block_size); ++ return false; ++ } ++ ++ sd_first_printk(KERN_INFO, sdkp, "Optimal transfer size %u bytes\n", ++ opt_xfer_bytes); ++ return true; ++} ++ + /** + * sd_revalidate_disk - called the first time a new disk is seen, + * performs disk spin up, read_capacity, etc. +@@ -3125,15 +3174,7 @@ static int sd_revalidate_disk(struct gendisk *disk) + dev_max = min_not_zero(dev_max, sdkp->max_xfer_blocks); + q->limits.max_dev_sectors = logical_to_sectors(sdp, dev_max); + +- /* +- * Determine the device's preferred I/O size for reads and writes +- * unless the reported value is unreasonably small, large, or +- * garbage. 
+- */ +- if (sdkp->opt_xfer_blocks && +- sdkp->opt_xfer_blocks <= dev_max && +- sdkp->opt_xfer_blocks <= SD_DEF_XFER_BLOCKS && +- logical_to_bytes(sdp, sdkp->opt_xfer_blocks) >= PAGE_SIZE) { ++ if (sd_validate_opt_xfer_size(sdkp, dev_max)) { + q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks); + rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks); + } else +diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c +index 772b976e4ee4..464cba521fb6 100644 +--- a/drivers/scsi/virtio_scsi.c ++++ b/drivers/scsi/virtio_scsi.c +@@ -594,7 +594,6 @@ static int virtscsi_device_reset(struct scsi_cmnd *sc) + return FAILED; + + memset(cmd, 0, sizeof(*cmd)); +- cmd->sc = sc; + cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){ + .type = VIRTIO_SCSI_T_TMF, + .subtype = cpu_to_virtio32(vscsi->vdev, +@@ -653,7 +652,6 @@ static int virtscsi_abort(struct scsi_cmnd *sc) + return FAILED; + + memset(cmd, 0, sizeof(*cmd)); +- cmd->sc = sc; + cmd->req.tmf = (struct virtio_scsi_ctrl_tmf_req){ + .type = VIRTIO_SCSI_T_TMF, + .subtype = VIRTIO_SCSI_T_TMF_ABORT_TASK, +diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c +index c7beb6841289..ab8f731a3426 100644 +--- a/drivers/soc/qcom/rpmh.c ++++ b/drivers/soc/qcom/rpmh.c +@@ -80,6 +80,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r) + struct rpmh_request *rpm_msg = container_of(msg, struct rpmh_request, + msg); + struct completion *compl = rpm_msg->completion; ++ bool free = rpm_msg->needs_free; + + rpm_msg->err = r; + +@@ -94,7 +95,7 @@ void rpmh_tx_done(const struct tcs_request *msg, int r) + complete(compl); + + exit: +- if (rpm_msg->needs_free) ++ if (free) + kfree(rpm_msg); + } + +@@ -348,11 +349,12 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state, + { + struct batch_cache_req *req; + struct rpmh_request *rpm_msgs; +- DECLARE_COMPLETION_ONSTACK(compl); ++ struct completion *compls; + struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev); + unsigned long time_left; + int count = 0; +- int ret, i, j; ++ int ret, i; ++ void *ptr; + + if (!cmd || !n) + return -EINVAL; +@@ -362,10 +364,15 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state, + if (!count) + return -EINVAL; + +- req = kzalloc(sizeof(*req) + count * sizeof(req->rpm_msgs[0]), ++ ptr = kzalloc(sizeof(*req) + ++ count * (sizeof(req->rpm_msgs[0]) + sizeof(*compls)), + GFP_ATOMIC); +- if (!req) ++ if (!ptr) + return -ENOMEM; ++ ++ req = ptr; ++ compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs); ++ + req->count = count; + rpm_msgs = req->rpm_msgs; + +@@ -380,25 +387,26 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state, + } + + for (i = 0; i < count; i++) { +- rpm_msgs[i].completion = &compl; ++ struct completion *compl = &compls[i]; ++ ++ init_completion(compl); ++ rpm_msgs[i].completion = compl; + ret = rpmh_rsc_send_data(ctrlr_to_drv(ctrlr), &rpm_msgs[i].msg); + if (ret) { + pr_err("Error(%d) sending RPMH message addr=%#x\n", + ret, rpm_msgs[i].msg.cmds[0].addr); +- for (j = i; j < count; j++) +- rpmh_tx_done(&rpm_msgs[j].msg, ret); + break; + } + } + + time_left = RPMH_TIMEOUT_MS; +- for (i = 0; i < count; i++) { +- time_left = wait_for_completion_timeout(&compl, time_left); ++ while (i--) { ++ time_left = wait_for_completion_timeout(&compls[i], time_left); + if (!time_left) { + /* + * Better hope they never finish because they'll signal +- * the completion on our stack and that's bad once +- * we've returned from the function. 
++ * the completion that we're going to free once ++ * we've returned from this function. + */ + WARN_ON(1); + ret = -ETIMEDOUT; +@@ -407,7 +415,7 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state, + } + + exit: +- kfree(req); ++ kfree(ptr); + + return ret; + } +diff --git a/drivers/spi/spi-gpio.c b/drivers/spi/spi-gpio.c +index a4aee26028cd..53b35c56a557 100644 +--- a/drivers/spi/spi-gpio.c ++++ b/drivers/spi/spi-gpio.c +@@ -428,7 +428,8 @@ static int spi_gpio_probe(struct platform_device *pdev) + return status; + + master->bits_per_word_mask = SPI_BPW_RANGE_MASK(1, 32); +- master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL; ++ master->mode_bits = SPI_3WIRE | SPI_3WIRE_HIZ | SPI_CPHA | SPI_CPOL | ++ SPI_CS_HIGH; + master->flags = master_flags; + master->bus_num = pdev->id; + /* The master needs to think there is a chipselect even if not connected */ +@@ -455,7 +456,6 @@ static int spi_gpio_probe(struct platform_device *pdev) + spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_spec_txrx_word_mode3; + } + spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer; +- spi_gpio->bitbang.flags = SPI_CS_HIGH; + + status = spi_bitbang_start(&spi_gpio->bitbang); + if (status) +diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c +index 2fd8881fcd65..8be304379628 100644 +--- a/drivers/spi/spi-omap2-mcspi.c ++++ b/drivers/spi/spi-omap2-mcspi.c +@@ -623,8 +623,8 @@ omap2_mcspi_txrx_dma(struct spi_device *spi, struct spi_transfer *xfer) + cfg.dst_addr = cs->phys + OMAP2_MCSPI_TX0; + cfg.src_addr_width = width; + cfg.dst_addr_width = width; +- cfg.src_maxburst = es; +- cfg.dst_maxburst = es; ++ cfg.src_maxburst = 1; ++ cfg.dst_maxburst = 1; + + rx = xfer->rx_buf; + tx = xfer->tx_buf; +diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c +index d84b893a64d7..3e82eaad0f2d 100644 +--- a/drivers/spi/spi-pxa2xx.c ++++ b/drivers/spi/spi-pxa2xx.c +@@ -1696,6 +1696,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev) + platform_info->enable_dma = false; + } else { + master->can_dma = pxa2xx_spi_can_dma; ++ master->max_dma_len = MAX_DMA_LEN; + } + } + +diff --git a/drivers/spi/spi-ti-qspi.c b/drivers/spi/spi-ti-qspi.c +index 5f19016bbf10..b9fb6493cd6b 100644 +--- a/drivers/spi/spi-ti-qspi.c ++++ b/drivers/spi/spi-ti-qspi.c +@@ -490,8 +490,8 @@ static void ti_qspi_enable_memory_map(struct spi_device *spi) + ti_qspi_write(qspi, MM_SWITCH, QSPI_SPI_SWITCH_REG); + if (qspi->ctrl_base) { + regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg, +- MEM_CS_EN(spi->chip_select), +- MEM_CS_MASK); ++ MEM_CS_MASK, ++ MEM_CS_EN(spi->chip_select)); + } + qspi->mmap_enabled = true; + } +@@ -503,7 +503,7 @@ static void ti_qspi_disable_memory_map(struct spi_device *spi) + ti_qspi_write(qspi, 0, QSPI_SPI_SWITCH_REG); + if (qspi->ctrl_base) + regmap_update_bits(qspi->ctrl_base, qspi->ctrl_reg, +- 0, MEM_CS_MASK); ++ MEM_CS_MASK, 0); + qspi->mmap_enabled = false; + } + +diff --git a/drivers/staging/media/imx/imx-ic-prpencvf.c b/drivers/staging/media/imx/imx-ic-prpencvf.c +index 28f41caba05d..fb442499f806 100644 +--- a/drivers/staging/media/imx/imx-ic-prpencvf.c ++++ b/drivers/staging/media/imx/imx-ic-prpencvf.c +@@ -680,12 +680,23 @@ static int prp_start(struct prp_priv *priv) + goto out_free_nfb4eof_irq; + } + ++ /* start upstream */ ++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1); ++ ret = (ret && ret != -ENOIOCTLCMD) ? 
ret : 0; ++ if (ret) { ++ v4l2_err(&ic_priv->sd, ++ "upstream stream on failed: %d\n", ret); ++ goto out_free_eof_irq; ++ } ++ + /* start the EOF timeout timer */ + mod_timer(&priv->eof_timeout_timer, + jiffies + msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT)); + + return 0; + ++out_free_eof_irq: ++ devm_free_irq(ic_priv->dev, priv->eof_irq, priv); + out_free_nfb4eof_irq: + devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv); + out_unsetup: +@@ -717,6 +728,12 @@ static void prp_stop(struct prp_priv *priv) + if (ret == 0) + v4l2_warn(&ic_priv->sd, "wait last EOF timeout\n"); + ++ /* stop upstream */ ++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 0); ++ if (ret && ret != -ENOIOCTLCMD) ++ v4l2_warn(&ic_priv->sd, ++ "upstream stream off failed: %d\n", ret); ++ + devm_free_irq(ic_priv->dev, priv->eof_irq, priv); + devm_free_irq(ic_priv->dev, priv->nfb4eof_irq, priv); + +@@ -1148,15 +1165,6 @@ static int prp_s_stream(struct v4l2_subdev *sd, int enable) + if (ret) + goto out; + +- /* start/stop upstream */ +- ret = v4l2_subdev_call(priv->src_sd, video, s_stream, enable); +- ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0; +- if (ret) { +- if (enable) +- prp_stop(priv); +- goto out; +- } +- + update_count: + priv->stream_count += enable ? 1 : -1; + if (priv->stream_count < 0) +diff --git a/drivers/staging/media/imx/imx-media-csi.c b/drivers/staging/media/imx/imx-media-csi.c +index 4223f8d418ae..be1e9e52b2a0 100644 +--- a/drivers/staging/media/imx/imx-media-csi.c ++++ b/drivers/staging/media/imx/imx-media-csi.c +@@ -629,7 +629,7 @@ out_put_ipu: + return ret; + } + +-static void csi_idmac_stop(struct csi_priv *priv) ++static void csi_idmac_wait_last_eof(struct csi_priv *priv) + { + unsigned long flags; + int ret; +@@ -646,7 +646,10 @@ static void csi_idmac_stop(struct csi_priv *priv) + &priv->last_eof_comp, msecs_to_jiffies(IMX_MEDIA_EOF_TIMEOUT)); + if (ret == 0) + v4l2_warn(&priv->sd, "wait last EOF timeout\n"); ++} + ++static void csi_idmac_stop(struct csi_priv *priv) ++{ + devm_free_irq(priv->dev, priv->eof_irq, priv); + devm_free_irq(priv->dev, priv->nfb4eof_irq, priv); + +@@ -722,10 +725,16 @@ static int csi_start(struct csi_priv *priv) + + output_fi = &priv->frame_interval[priv->active_output_pad]; + ++ /* start upstream */ ++ ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1); ++ ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0; ++ if (ret) ++ return ret; ++ + if (priv->dest == IPU_CSI_DEST_IDMAC) { + ret = csi_idmac_start(priv); + if (ret) +- return ret; ++ goto stop_upstream; + } + + ret = csi_setup(priv); +@@ -753,11 +762,26 @@ fim_off: + idmac_stop: + if (priv->dest == IPU_CSI_DEST_IDMAC) + csi_idmac_stop(priv); ++stop_upstream: ++ v4l2_subdev_call(priv->src_sd, video, s_stream, 0); + return ret; + } + + static void csi_stop(struct csi_priv *priv) + { ++ if (priv->dest == IPU_CSI_DEST_IDMAC) ++ csi_idmac_wait_last_eof(priv); ++ ++ /* ++ * Disable the CSI asap, after syncing with the last EOF. ++ * Doing so after the IDMA channel is disabled has shown to ++ * create hard system-wide hangs. 
++ */ ++ ipu_csi_disable(priv->csi); ++ ++ /* stop upstream */ ++ v4l2_subdev_call(priv->src_sd, video, s_stream, 0); ++ + if (priv->dest == IPU_CSI_DEST_IDMAC) { + csi_idmac_stop(priv); + +@@ -765,8 +789,6 @@ static void csi_stop(struct csi_priv *priv) + if (priv->fim) + imx_media_fim_set_stream(priv->fim, NULL, false); + } +- +- ipu_csi_disable(priv->csi); + } + + static const struct csi_skip_desc csi_skip[12] = { +@@ -927,23 +949,13 @@ static int csi_s_stream(struct v4l2_subdev *sd, int enable) + goto update_count; + + if (enable) { +- /* upstream must be started first, before starting CSI */ +- ret = v4l2_subdev_call(priv->src_sd, video, s_stream, 1); +- ret = (ret && ret != -ENOIOCTLCMD) ? ret : 0; +- if (ret) +- goto out; +- + dev_dbg(priv->dev, "stream ON\n"); + ret = csi_start(priv); +- if (ret) { +- v4l2_subdev_call(priv->src_sd, video, s_stream, 0); ++ if (ret) + goto out; +- } + } else { + dev_dbg(priv->dev, "stream OFF\n"); +- /* CSI must be stopped first, then stop upstream */ + csi_stop(priv); +- v4l2_subdev_call(priv->src_sd, video, s_stream, 0); + } + + update_count: +@@ -1787,7 +1799,7 @@ static int imx_csi_parse_endpoint(struct device *dev, + struct v4l2_fwnode_endpoint *vep, + struct v4l2_async_subdev *asd) + { +- return fwnode_device_is_available(asd->match.fwnode) ? 0 : -EINVAL; ++ return fwnode_device_is_available(asd->match.fwnode) ? 0 : -ENOTCONN; + } + + static int imx_csi_async_register(struct csi_priv *priv) +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c +index bd15a564fe24..3ad2659630e8 100644 +--- a/drivers/target/iscsi/iscsi_target.c ++++ b/drivers/target/iscsi/iscsi_target.c +@@ -4040,9 +4040,9 @@ static void iscsit_release_commands_from_conn(struct iscsi_conn *conn) + struct se_cmd *se_cmd = &cmd->se_cmd; + + if (se_cmd->se_tfo != NULL) { +- spin_lock(&se_cmd->t_state_lock); ++ spin_lock_irq(&se_cmd->t_state_lock); + se_cmd->transport_state |= CMD_T_FABRIC_STOP; +- spin_unlock(&se_cmd->t_state_lock); ++ spin_unlock_irq(&se_cmd->t_state_lock); + } + } + spin_unlock_bh(&conn->cmd_lock); +diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c +index a1a85805d010..2488de1c4bc4 100644 +--- a/drivers/tty/serial/8250/8250_of.c ++++ b/drivers/tty/serial/8250/8250_of.c +@@ -130,6 +130,10 @@ static int of_platform_serial_setup(struct platform_device *ofdev, + port->flags |= UPF_IOREMAP; + } + ++ /* Compatibility with the deprecated pxa driver and 8250_pxa drivers. 
*/ ++ if (of_device_is_compatible(np, "mrvl,mmp-uart")) ++ port->regshift = 2; ++ + /* Check for registers offset within the devices address range */ + if (of_property_read_u32(np, "reg-shift", &prop) == 0) + port->regshift = prop; +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index 48bd694a5fa1..bbe5cba21522 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -2027,6 +2027,111 @@ static struct pci_serial_quirk pci_serial_quirks[] __refdata = { + .setup = pci_default_setup, + .exit = pci_plx9050_exit, + }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4S, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_COM_4SM, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, ++ { ++ .vendor = PCI_VENDOR_ID_ACCESIO, ++ .device = PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .setup = pci_pericom_setup, ++ }, + /* + * SBS Technologies, Inc., PMC-OCTALPRO 232 + */ +@@ -4575,10 +4680,10 @@ static const struct pci_device_id serial_pci_tbl[] = { + */ + { PCI_VENDOR_ID_ACCESIO, 
PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2S, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4587,10 +4692,10 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_2DB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM232_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4DB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4599,10 +4704,10 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_2SMDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_COM_2SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SMDB, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4611,13 +4716,13 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_1, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7951 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM485_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM422_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4626,16 +4731,16 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2S, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4S, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_MPCIE_ICM232_2, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM232_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7954 }, +@@ -4644,13 +4749,13 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_2SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7954 }, ++ pbn_pericom_PI7C9X7952 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM485_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ 
pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM422_8, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7958 }, +@@ -4659,19 +4764,19 @@ static const struct pci_device_id serial_pci_tbl[] = { + pbn_pericom_PI7C9X7958 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_4, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM232_8, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7958 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_4SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_COM_8SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, + pbn_pericom_PI7C9X7958 }, + { PCI_VENDOR_ID_ACCESIO, PCI_DEVICE_ID_ACCESIO_PCIE_ICM_4SM, + PCI_ANY_ID, PCI_ANY_ID, 0, 0, +- pbn_pericom_PI7C9X7958 }, ++ pbn_pericom_PI7C9X7954 }, + /* + * Topic TP560 Data/Fax/Voice 56k modem (reported by Evan Clarke) + */ +diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c +index 094f2958cb2b..ee9f18c52d29 100644 +--- a/drivers/tty/serial/xilinx_uartps.c ++++ b/drivers/tty/serial/xilinx_uartps.c +@@ -364,7 +364,13 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id) + cdns_uart_handle_tx(dev_id); + isrstatus &= ~CDNS_UART_IXR_TXEMPTY; + } +- if (isrstatus & CDNS_UART_IXR_RXMASK) ++ ++ /* ++ * Skip RX processing if RX is disabled as RXEMPTY will never be set ++ * as read bytes will not be removed from the FIFO. ++ */ ++ if (isrstatus & CDNS_UART_IXR_RXMASK && ++ !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS)) + cdns_uart_handle_rx(dev_id, isrstatus); + + spin_unlock(&port->lock); +diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c +index bba75560d11e..9646ff63e77a 100644 +--- a/drivers/tty/vt/vt.c ++++ b/drivers/tty/vt/vt.c +@@ -935,8 +935,11 @@ static void flush_scrollback(struct vc_data *vc) + { + WARN_CONSOLE_UNLOCKED(); + ++ set_origin(vc); + if (vc->vc_sw->con_flush_scrollback) + vc->vc_sw->con_flush_scrollback(vc); ++ else ++ vc->vc_sw->con_switch(vc); + } + + /* +@@ -1503,8 +1506,10 @@ static void csi_J(struct vc_data *vc, int vpar) + count = ((vc->vc_pos - vc->vc_origin) >> 1) + 1; + start = (unsigned short *)vc->vc_origin; + break; ++ case 3: /* include scrollback */ ++ flush_scrollback(vc); ++ /* fallthrough */ + case 2: /* erase whole display */ +- case 3: /* (and scrollback buffer later) */ + vc_uniscr_clear_lines(vc, 0, vc->vc_rows); + count = vc->vc_cols * vc->vc_rows; + start = (unsigned short *)vc->vc_origin; +@@ -1513,13 +1518,7 @@ static void csi_J(struct vc_data *vc, int vpar) + return; + } + scr_memsetw(start, vc->vc_video_erase_char, 2 * count); +- if (vpar == 3) { +- set_origin(vc); +- flush_scrollback(vc); +- if (con_is_visible(vc)) +- update_screen(vc); +- } else if (con_should_update(vc)) +- do_update_region(vc, (unsigned long) start, count); ++ update_region(vc, (unsigned long) start, count); + vc->vc_need_wrap = 0; + } + +diff --git a/drivers/usb/chipidea/ci_hdrc_tegra.c b/drivers/usb/chipidea/ci_hdrc_tegra.c +index 772851bee99b..12025358bb3c 100644 +--- a/drivers/usb/chipidea/ci_hdrc_tegra.c ++++ b/drivers/usb/chipidea/ci_hdrc_tegra.c +@@ -130,6 +130,7 @@ static int tegra_udc_remove(struct platform_device *pdev) + { + struct tegra_udc *udc = platform_get_drvdata(pdev); + ++ ci_hdrc_remove_device(udc->dev); + usb_phy_set_suspend(udc->phy, 1); + clk_disable_unprepare(udc->clk); + +diff --git 
a/drivers/usb/typec/tps6598x.c b/drivers/usb/typec/tps6598x.c +index 1c0033ad8738..e1109b15636d 100644 +--- a/drivers/usb/typec/tps6598x.c ++++ b/drivers/usb/typec/tps6598x.c +@@ -110,6 +110,20 @@ tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len) + return 0; + } + ++static int tps6598x_block_write(struct tps6598x *tps, u8 reg, ++ void *val, size_t len) ++{ ++ u8 data[TPS_MAX_LEN + 1]; ++ ++ if (!tps->i2c_protocol) ++ return regmap_raw_write(tps->regmap, reg, val, len); ++ ++ data[0] = len; ++ memcpy(&data[1], val, len); ++ ++ return regmap_raw_write(tps->regmap, reg, data, sizeof(data)); ++} ++ + static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val) + { + return tps6598x_block_read(tps, reg, val, sizeof(u16)); +@@ -127,23 +141,23 @@ static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val) + + static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val) + { +- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u16)); ++ return tps6598x_block_write(tps, reg, &val, sizeof(u16)); + } + + static inline int tps6598x_write32(struct tps6598x *tps, u8 reg, u32 val) + { +- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32)); ++ return tps6598x_block_write(tps, reg, &val, sizeof(u32)); + } + + static inline int tps6598x_write64(struct tps6598x *tps, u8 reg, u64 val) + { +- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u64)); ++ return tps6598x_block_write(tps, reg, &val, sizeof(u64)); + } + + static inline int + tps6598x_write_4cc(struct tps6598x *tps, u8 reg, const char *val) + { +- return regmap_raw_write(tps->regmap, reg, &val, sizeof(u32)); ++ return tps6598x_block_write(tps, reg, &val, sizeof(u32)); + } + + static int tps6598x_read_partner_identity(struct tps6598x *tps) +@@ -229,8 +243,8 @@ static int tps6598x_exec_cmd(struct tps6598x *tps, const char *cmd, + return -EBUSY; + + if (in_len) { +- ret = regmap_raw_write(tps->regmap, TPS_REG_DATA1, +- in_data, in_len); ++ ret = tps6598x_block_write(tps, TPS_REG_DATA1, ++ in_data, in_len); + if (ret) + return ret; + } +diff --git a/fs/9p/v9fs_vfs.h b/fs/9p/v9fs_vfs.h +index 5a0db6dec8d1..aaee1e6584e6 100644 +--- a/fs/9p/v9fs_vfs.h ++++ b/fs/9p/v9fs_vfs.h +@@ -40,6 +40,9 @@ + */ + #define P9_LOCK_TIMEOUT (30*HZ) + ++/* flags for v9fs_stat2inode() & v9fs_stat2inode_dotl() */ ++#define V9FS_STAT2INODE_KEEP_ISIZE 1 ++ + extern struct file_system_type v9fs_fs_type; + extern const struct address_space_operations v9fs_addr_operations; + extern const struct file_operations v9fs_file_operations; +@@ -61,8 +64,10 @@ int v9fs_init_inode(struct v9fs_session_info *v9ses, + struct inode *inode, umode_t mode, dev_t); + void v9fs_evict_inode(struct inode *inode); + ino_t v9fs_qid2ino(struct p9_qid *qid); +-void v9fs_stat2inode(struct p9_wstat *, struct inode *, struct super_block *); +-void v9fs_stat2inode_dotl(struct p9_stat_dotl *, struct inode *); ++void v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode, ++ struct super_block *sb, unsigned int flags); ++void v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode, ++ unsigned int flags); + int v9fs_dir_release(struct inode *inode, struct file *filp); + int v9fs_file_open(struct inode *inode, struct file *file); + void v9fs_inode2stat(struct inode *inode, struct p9_wstat *stat); +@@ -83,4 +88,18 @@ static inline void v9fs_invalidate_inode_attr(struct inode *inode) + } + + int v9fs_open_to_dotl_flags(int flags); ++ ++static inline void v9fs_i_size_write(struct inode *inode, loff_t i_size) ++{ ++ /* ++ 
* 32-bit need the lock, concurrent updates could break the ++ * sequences and make i_size_read() loop forever. ++ * 64-bit updates are atomic and can skip the locking. ++ */ ++ if (sizeof(i_size) > sizeof(long)) ++ spin_lock(&inode->i_lock); ++ i_size_write(inode, i_size); ++ if (sizeof(i_size) > sizeof(long)) ++ spin_unlock(&inode->i_lock); ++} + #endif +diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c +index a25efa782fcc..9a1125305d84 100644 +--- a/fs/9p/vfs_file.c ++++ b/fs/9p/vfs_file.c +@@ -446,7 +446,11 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) + i_size = i_size_read(inode); + if (iocb->ki_pos > i_size) { + inode_add_bytes(inode, iocb->ki_pos - i_size); +- i_size_write(inode, iocb->ki_pos); ++ /* ++ * Need to serialize against i_size_write() in ++ * v9fs_stat2inode() ++ */ ++ v9fs_i_size_write(inode, iocb->ki_pos); + } + return retval; + } +diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c +index 85ff859d3af5..72b779bc0942 100644 +--- a/fs/9p/vfs_inode.c ++++ b/fs/9p/vfs_inode.c +@@ -538,7 +538,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb, + if (retval) + goto error; + +- v9fs_stat2inode(st, inode, sb); ++ v9fs_stat2inode(st, inode, sb, 0); + v9fs_cache_inode_get_cookie(inode); + unlock_new_inode(inode); + return inode; +@@ -1092,7 +1092,7 @@ v9fs_vfs_getattr(const struct path *path, struct kstat *stat, + if (IS_ERR(st)) + return PTR_ERR(st); + +- v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb); ++ v9fs_stat2inode(st, d_inode(dentry), dentry->d_sb, 0); + generic_fillattr(d_inode(dentry), stat); + + p9stat_free(st); +@@ -1170,12 +1170,13 @@ static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr) + * @stat: Plan 9 metadata (mistat) structure + * @inode: inode to populate + * @sb: superblock of filesystem ++ * @flags: control flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE) + * + */ + + void + v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode, +- struct super_block *sb) ++ struct super_block *sb, unsigned int flags) + { + umode_t mode; + char ext[32]; +@@ -1216,10 +1217,11 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode, + mode = p9mode2perm(v9ses, stat); + mode |= inode->i_mode & ~S_IALLUGO; + inode->i_mode = mode; +- i_size_write(inode, stat->length); + ++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE)) ++ v9fs_i_size_write(inode, stat->length); + /* not real number of blocks, but 512 byte ones ... */ +- inode->i_blocks = (i_size_read(inode) + 512 - 1) >> 9; ++ inode->i_blocks = (stat->length + 512 - 1) >> 9; + v9inode->cache_validity &= ~V9FS_INO_INVALID_ATTR; + } + +@@ -1416,9 +1418,9 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode) + { + int umode; + dev_t rdev; +- loff_t i_size; + struct p9_wstat *st; + struct v9fs_session_info *v9ses; ++ unsigned int flags; + + v9ses = v9fs_inode2v9ses(inode); + st = p9_client_stat(fid); +@@ -1431,16 +1433,13 @@ int v9fs_refresh_inode(struct p9_fid *fid, struct inode *inode) + if ((inode->i_mode & S_IFMT) != (umode & S_IFMT)) + goto out; + +- spin_lock(&inode->i_lock); + /* + * We don't want to refresh inode->i_size, + * because we may have cached data + */ +- i_size = inode->i_size; +- v9fs_stat2inode(st, inode, inode->i_sb); +- if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) +- inode->i_size = i_size; +- spin_unlock(&inode->i_lock); ++ flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ? 
++ V9FS_STAT2INODE_KEEP_ISIZE : 0; ++ v9fs_stat2inode(st, inode, inode->i_sb, flags); + out: + p9stat_free(st); + kfree(st); +diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c +index 4823e1c46999..a950a927a626 100644 +--- a/fs/9p/vfs_inode_dotl.c ++++ b/fs/9p/vfs_inode_dotl.c +@@ -143,7 +143,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb, + if (retval) + goto error; + +- v9fs_stat2inode_dotl(st, inode); ++ v9fs_stat2inode_dotl(st, inode, 0); + v9fs_cache_inode_get_cookie(inode); + retval = v9fs_get_acl(inode, fid); + if (retval) +@@ -496,7 +496,7 @@ v9fs_vfs_getattr_dotl(const struct path *path, struct kstat *stat, + if (IS_ERR(st)) + return PTR_ERR(st); + +- v9fs_stat2inode_dotl(st, d_inode(dentry)); ++ v9fs_stat2inode_dotl(st, d_inode(dentry), 0); + generic_fillattr(d_inode(dentry), stat); + /* Change block size to what the server returned */ + stat->blksize = st->st_blksize; +@@ -607,11 +607,13 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr) + * v9fs_stat2inode_dotl - populate an inode structure with stat info + * @stat: stat structure + * @inode: inode to populate ++ * @flags: ctrl flags (e.g. V9FS_STAT2INODE_KEEP_ISIZE) + * + */ + + void +-v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode) ++v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode, ++ unsigned int flags) + { + umode_t mode; + struct v9fs_inode *v9inode = V9FS_I(inode); +@@ -631,7 +633,8 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode) + mode |= inode->i_mode & ~S_IALLUGO; + inode->i_mode = mode; + +- i_size_write(inode, stat->st_size); ++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE)) ++ v9fs_i_size_write(inode, stat->st_size); + inode->i_blocks = stat->st_blocks; + } else { + if (stat->st_result_mask & P9_STATS_ATIME) { +@@ -661,8 +664,9 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode) + } + if (stat->st_result_mask & P9_STATS_RDEV) + inode->i_rdev = new_decode_dev(stat->st_rdev); +- if (stat->st_result_mask & P9_STATS_SIZE) +- i_size_write(inode, stat->st_size); ++ if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) && ++ stat->st_result_mask & P9_STATS_SIZE) ++ v9fs_i_size_write(inode, stat->st_size); + if (stat->st_result_mask & P9_STATS_BLOCKS) + inode->i_blocks = stat->st_blocks; + } +@@ -928,9 +932,9 @@ v9fs_vfs_get_link_dotl(struct dentry *dentry, + + int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode) + { +- loff_t i_size; + struct p9_stat_dotl *st; + struct v9fs_session_info *v9ses; ++ unsigned int flags; + + v9ses = v9fs_inode2v9ses(inode); + st = p9_client_getattr_dotl(fid, P9_STATS_ALL); +@@ -942,16 +946,13 @@ int v9fs_refresh_inode_dotl(struct p9_fid *fid, struct inode *inode) + if ((inode->i_mode & S_IFMT) != (st->st_mode & S_IFMT)) + goto out; + +- spin_lock(&inode->i_lock); + /* + * We don't want to refresh inode->i_size, + * because we may have cached data + */ +- i_size = inode->i_size; +- v9fs_stat2inode_dotl(st, inode); +- if (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) +- inode->i_size = i_size; +- spin_unlock(&inode->i_lock); ++ flags = (v9ses->cache == CACHE_LOOSE || v9ses->cache == CACHE_FSCACHE) ? 
++ V9FS_STAT2INODE_KEEP_ISIZE : 0; ++ v9fs_stat2inode_dotl(st, inode, flags); + out: + kfree(st); + return 0; +diff --git a/fs/9p/vfs_super.c b/fs/9p/vfs_super.c +index 48ce50484e80..eeab9953af89 100644 +--- a/fs/9p/vfs_super.c ++++ b/fs/9p/vfs_super.c +@@ -172,7 +172,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags, + goto release_sb; + } + d_inode(root)->i_ino = v9fs_qid2ino(&st->qid); +- v9fs_stat2inode_dotl(st, d_inode(root)); ++ v9fs_stat2inode_dotl(st, d_inode(root), 0); + kfree(st); + } else { + struct p9_wstat *st = NULL; +@@ -183,7 +183,7 @@ static struct dentry *v9fs_mount(struct file_system_type *fs_type, int flags, + } + + d_inode(root)->i_ino = v9fs_qid2ino(&st->qid); +- v9fs_stat2inode(st, d_inode(root), sb); ++ v9fs_stat2inode(st, d_inode(root), sb, 0); + + p9stat_free(st); + kfree(st); +diff --git a/fs/btrfs/acl.c b/fs/btrfs/acl.c +index 3b66c957ea6f..5810463dc6d2 100644 +--- a/fs/btrfs/acl.c ++++ b/fs/btrfs/acl.c +@@ -9,6 +9,7 @@ + #include + #include + #include ++#include + #include + + #include "ctree.h" +@@ -72,8 +73,16 @@ static int __btrfs_set_acl(struct btrfs_trans_handle *trans, + } + + if (acl) { ++ unsigned int nofs_flag; ++ + size = posix_acl_xattr_size(acl->a_count); ++ /* ++ * We're holding a transaction handle, so use a NOFS memory ++ * allocation context to avoid deadlock if reclaim happens. ++ */ ++ nofs_flag = memalloc_nofs_save(); + value = kmalloc(size, GFP_KERNEL); ++ memalloc_nofs_restore(nofs_flag); + if (!value) { + ret = -ENOMEM; + goto out; +diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c +index 8750c835f535..c4dea3b7349e 100644 +--- a/fs/btrfs/dev-replace.c ++++ b/fs/btrfs/dev-replace.c +@@ -862,6 +862,7 @@ int btrfs_dev_replace_cancel(struct btrfs_fs_info *fs_info) + btrfs_destroy_dev_replace_tgtdev(tgt_device); + break; + default: ++ up_write(&dev_replace->rwsem); + result = -EINVAL; + } + +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index 6a2a2a951705..888d72dda794 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -17,6 +17,7 @@ + #include + #include + #include ++#include + #include + #include "ctree.h" + #include "disk-io.h" +@@ -1258,10 +1259,17 @@ struct btrfs_root *btrfs_create_tree(struct btrfs_trans_handle *trans, + struct btrfs_root *tree_root = fs_info->tree_root; + struct btrfs_root *root; + struct btrfs_key key; ++ unsigned int nofs_flag; + int ret = 0; + uuid_le uuid = NULL_UUID_LE; + ++ /* ++ * We're holding a transaction handle, so use a NOFS memory allocation ++ * context to avoid deadlock if reclaim happens. 
++ */ ++ nofs_flag = memalloc_nofs_save(); + root = btrfs_alloc_root(fs_info, GFP_KERNEL); ++ memalloc_nofs_restore(nofs_flag); + if (!root) + return ERR_PTR(-ENOMEM); + +diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c +index 52abe4082680..1bfb7207bbf0 100644 +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -2985,11 +2985,11 @@ static int __do_readpage(struct extent_io_tree *tree, + */ + if (test_bit(EXTENT_FLAG_COMPRESSED, &em->flags) && + prev_em_start && *prev_em_start != (u64)-1 && +- *prev_em_start != em->orig_start) ++ *prev_em_start != em->start) + force_bio_submit = true; + + if (prev_em_start) +- *prev_em_start = em->orig_start; ++ *prev_em_start = em->start; + + free_extent_map(em); + em = NULL; +diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c +index 9c8e1734429c..6e1119496721 100644 +--- a/fs/btrfs/ioctl.c ++++ b/fs/btrfs/ioctl.c +@@ -3206,21 +3206,6 @@ out: + return ret; + } + +-static void btrfs_double_inode_unlock(struct inode *inode1, struct inode *inode2) +-{ +- inode_unlock(inode1); +- inode_unlock(inode2); +-} +- +-static void btrfs_double_inode_lock(struct inode *inode1, struct inode *inode2) +-{ +- if (inode1 < inode2) +- swap(inode1, inode2); +- +- inode_lock_nested(inode1, I_MUTEX_PARENT); +- inode_lock_nested(inode2, I_MUTEX_CHILD); +-} +- + static void btrfs_double_extent_unlock(struct inode *inode1, u64 loff1, + struct inode *inode2, u64 loff2, u64 len) + { +@@ -3989,7 +3974,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in, + if (same_inode) + inode_lock(inode_in); + else +- btrfs_double_inode_lock(inode_in, inode_out); ++ lock_two_nondirectories(inode_in, inode_out); + + /* + * Now that the inodes are locked, we need to start writeback ourselves +@@ -4039,7 +4024,7 @@ static int btrfs_remap_file_range_prep(struct file *file_in, loff_t pos_in, + if (same_inode) + inode_unlock(inode_in); + else +- btrfs_double_inode_unlock(inode_in, inode_out); ++ unlock_two_nondirectories(inode_in, inode_out); + + return ret; + } +@@ -4069,7 +4054,7 @@ loff_t btrfs_remap_file_range(struct file *src_file, loff_t off, + if (same_inode) + inode_unlock(src_inode); + else +- btrfs_double_inode_unlock(src_inode, dst_inode); ++ unlock_two_nondirectories(src_inode, dst_inode); + + return ret < 0 ? 
ret : len; + } +diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c +index 6dcd36d7b849..1aeac70d0531 100644 +--- a/fs/btrfs/scrub.c ++++ b/fs/btrfs/scrub.c +@@ -584,6 +584,7 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx( + sctx->pages_per_rd_bio = SCRUB_PAGES_PER_RD_BIO; + sctx->curr = -1; + sctx->fs_info = fs_info; ++ INIT_LIST_HEAD(&sctx->csum_list); + for (i = 0; i < SCRUB_BIOS_PER_SCTX; ++i) { + struct scrub_bio *sbio; + +@@ -608,7 +609,6 @@ static noinline_for_stack struct scrub_ctx *scrub_setup_ctx( + atomic_set(&sctx->workers_pending, 0); + atomic_set(&sctx->cancel_req, 0); + sctx->csum_size = btrfs_super_csum_size(fs_info->super_copy); +- INIT_LIST_HEAD(&sctx->csum_list); + + spin_lock_init(&sctx->list_lock); + spin_lock_init(&sctx->stat_lock); +@@ -3770,16 +3770,6 @@ fail_scrub_workers: + return -ENOMEM; + } + +-static noinline_for_stack void scrub_workers_put(struct btrfs_fs_info *fs_info) +-{ +- if (--fs_info->scrub_workers_refcnt == 0) { +- btrfs_destroy_workqueue(fs_info->scrub_workers); +- btrfs_destroy_workqueue(fs_info->scrub_wr_completion_workers); +- btrfs_destroy_workqueue(fs_info->scrub_parity_workers); +- } +- WARN_ON(fs_info->scrub_workers_refcnt < 0); +-} +- + int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, + u64 end, struct btrfs_scrub_progress *progress, + int readonly, int is_dev_replace) +@@ -3788,6 +3778,9 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, + int ret; + struct btrfs_device *dev; + unsigned int nofs_flag; ++ struct btrfs_workqueue *scrub_workers = NULL; ++ struct btrfs_workqueue *scrub_wr_comp = NULL; ++ struct btrfs_workqueue *scrub_parity = NULL; + + if (btrfs_fs_closing(fs_info)) + return -EINVAL; +@@ -3927,9 +3920,16 @@ int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start, + + mutex_lock(&fs_info->scrub_lock); + dev->scrub_ctx = NULL; +- scrub_workers_put(fs_info); ++ if (--fs_info->scrub_workers_refcnt == 0) { ++ scrub_workers = fs_info->scrub_workers; ++ scrub_wr_comp = fs_info->scrub_wr_completion_workers; ++ scrub_parity = fs_info->scrub_parity_workers; ++ } + mutex_unlock(&fs_info->scrub_lock); + ++ btrfs_destroy_workqueue(scrub_workers); ++ btrfs_destroy_workqueue(scrub_wr_comp); ++ btrfs_destroy_workqueue(scrub_parity); + scrub_put_ctx(sctx); + + return ret; +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 15561926ab32..48523bcabae9 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -6782,10 +6782,10 @@ static int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info, + } + + if ((type & BTRFS_BLOCK_GROUP_RAID10 && sub_stripes != 2) || +- (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < 1) || ++ (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes != 2) || + (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < 2) || + (type & BTRFS_BLOCK_GROUP_RAID6 && num_stripes < 3) || +- (type & BTRFS_BLOCK_GROUP_DUP && num_stripes > 2) || ++ (type & BTRFS_BLOCK_GROUP_DUP && num_stripes != 2) || + ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 && + num_stripes != 1)) { + btrfs_err(fs_info, +diff --git a/fs/cifs/cifs_fs_sb.h b/fs/cifs/cifs_fs_sb.h +index 42f0d67f1054..ed49222abecb 100644 +--- a/fs/cifs/cifs_fs_sb.h ++++ b/fs/cifs/cifs_fs_sb.h +@@ -58,6 +58,7 @@ struct cifs_sb_info { + spinlock_t tlink_tree_lock; + struct tcon_link *master_tlink; + struct nls_table *local_nls; ++ unsigned int bsize; + unsigned int rsize; + unsigned int wsize; + unsigned long actimeo; /* attribute cache timeout (jiffies) */ +diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c 
+index 62d48d486d8f..f2c0d863fb52 100644 +--- a/fs/cifs/cifsfs.c ++++ b/fs/cifs/cifsfs.c +@@ -554,6 +554,7 @@ cifs_show_options(struct seq_file *s, struct dentry *root) + + seq_printf(s, ",rsize=%u", cifs_sb->rsize); + seq_printf(s, ",wsize=%u", cifs_sb->wsize); ++ seq_printf(s, ",bsize=%u", cifs_sb->bsize); + seq_printf(s, ",echo_interval=%lu", + tcon->ses->server->echo_interval / HZ); + if (tcon->snapshot_time) +diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h +index 94dbdbe5be34..1b25e6e95d45 100644 +--- a/fs/cifs/cifsglob.h ++++ b/fs/cifs/cifsglob.h +@@ -236,6 +236,8 @@ struct smb_version_operations { + int * (*get_credits_field)(struct TCP_Server_Info *, const int); + unsigned int (*get_credits)(struct mid_q_entry *); + __u64 (*get_next_mid)(struct TCP_Server_Info *); ++ void (*revert_current_mid)(struct TCP_Server_Info *server, ++ const unsigned int val); + /* data offset from read response message */ + unsigned int (*read_data_offset)(char *); + /* +@@ -557,6 +559,7 @@ struct smb_vol { + bool resilient:1; /* noresilient not required since not fored for CA */ + bool domainauto:1; + bool rdma:1; ++ unsigned int bsize; + unsigned int rsize; + unsigned int wsize; + bool sockopt_tcp_nodelay:1; +@@ -770,6 +773,22 @@ get_next_mid(struct TCP_Server_Info *server) + return cpu_to_le16(mid); + } + ++static inline void ++revert_current_mid(struct TCP_Server_Info *server, const unsigned int val) ++{ ++ if (server->ops->revert_current_mid) ++ server->ops->revert_current_mid(server, val); ++} ++ ++static inline void ++revert_current_mid_from_hdr(struct TCP_Server_Info *server, ++ const struct smb2_sync_hdr *shdr) ++{ ++ unsigned int num = le16_to_cpu(shdr->CreditCharge); ++ ++ return revert_current_mid(server, num > 0 ? num : 1); ++} ++ + static inline __u16 + get_mid(const struct smb_hdr *smb) + { +@@ -1422,6 +1441,7 @@ struct mid_q_entry { + struct kref refcount; + struct TCP_Server_Info *server; /* server corresponding to this mid */ + __u64 mid; /* multiplex id */ ++ __u16 credits; /* number of credits consumed by this mid */ + __u32 pid; /* process id */ + __u32 sequence_number; /* for CIFS signing */ + unsigned long when_alloc; /* when mid was created */ +diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c +index bb54ccf8481c..551924beb86f 100644 +--- a/fs/cifs/cifssmb.c ++++ b/fs/cifs/cifssmb.c +@@ -2125,12 +2125,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata) + + wdata2->cfile = find_writable_file(CIFS_I(inode), false); + if (!wdata2->cfile) { +- cifs_dbg(VFS, "No writable handles for inode\n"); ++ cifs_dbg(VFS, "No writable handle to retry writepages\n"); + rc = -EBADF; +- break; ++ } else { ++ wdata2->pid = wdata2->cfile->pid; ++ rc = server->ops->async_writev(wdata2, ++ cifs_writedata_release); + } +- wdata2->pid = wdata2->cfile->pid; +- rc = server->ops->async_writev(wdata2, cifs_writedata_release); + + for (j = 0; j < nr_pages; j++) { + unlock_page(wdata2->pages[j]); +@@ -2145,6 +2146,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata) + kref_put(&wdata2->refcount, cifs_writedata_release); + if (is_retryable_error(rc)) + continue; ++ i += nr_pages; + break; + } + +@@ -2152,6 +2154,13 @@ cifs_writev_requeue(struct cifs_writedata *wdata) + i += nr_pages; + } while (i < wdata->nr_pages); + ++ /* cleanup remaining pages from the original wdata */ ++ for (; i < wdata->nr_pages; i++) { ++ SetPageError(wdata->pages[i]); ++ end_page_writeback(wdata->pages[i]); ++ put_page(wdata->pages[i]); ++ } ++ + if (rc != 0 && !is_retryable_error(rc)) + mapping_set_error(inode->i_mapping, 
rc); + kref_put(&wdata->refcount, cifs_writedata_release); +diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c +index 8463c940e0e5..e61cd2938c9e 100644 +--- a/fs/cifs/connect.c ++++ b/fs/cifs/connect.c +@@ -102,7 +102,7 @@ enum { + Opt_backupuid, Opt_backupgid, Opt_uid, + Opt_cruid, Opt_gid, Opt_file_mode, + Opt_dirmode, Opt_port, +- Opt_rsize, Opt_wsize, Opt_actimeo, ++ Opt_blocksize, Opt_rsize, Opt_wsize, Opt_actimeo, + Opt_echo_interval, Opt_max_credits, + Opt_snapshot, + +@@ -204,6 +204,7 @@ static const match_table_t cifs_mount_option_tokens = { + { Opt_dirmode, "dirmode=%s" }, + { Opt_dirmode, "dir_mode=%s" }, + { Opt_port, "port=%s" }, ++ { Opt_blocksize, "bsize=%s" }, + { Opt_rsize, "rsize=%s" }, + { Opt_wsize, "wsize=%s" }, + { Opt_actimeo, "actimeo=%s" }, +@@ -1571,7 +1572,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname, + vol->cred_uid = current_uid(); + vol->linux_uid = current_uid(); + vol->linux_gid = current_gid(); +- ++ vol->bsize = 1024 * 1024; /* can improve cp performance significantly */ + /* + * default to SFM style remapping of seven reserved characters + * unless user overrides it or we negotiate CIFS POSIX where +@@ -1944,6 +1945,26 @@ cifs_parse_mount_options(const char *mountdata, const char *devname, + } + port = (unsigned short)option; + break; ++ case Opt_blocksize: ++ if (get_option_ul(args, &option)) { ++ cifs_dbg(VFS, "%s: Invalid blocksize value\n", ++ __func__); ++ goto cifs_parse_mount_err; ++ } ++ /* ++ * inode blocksize realistically should never need to be ++ * less than 16K or greater than 16M and default is 1MB. ++ * Note that small inode block sizes (e.g. 64K) can lead ++ * to very poor performance of common tools like cp and scp ++ */ ++ if ((option < CIFS_MAX_MSGSIZE) || ++ (option > (4 * SMB3_DEFAULT_IOSIZE))) { ++ cifs_dbg(VFS, "%s: Invalid blocksize\n", ++ __func__); ++ goto cifs_parse_mount_err; ++ } ++ vol->bsize = option; ++ break; + case Opt_rsize: + if (get_option_ul(args, &option)) { + cifs_dbg(VFS, "%s: Invalid rsize value\n", +@@ -3839,6 +3860,7 @@ int cifs_setup_cifs_sb(struct smb_vol *pvolume_info, + spin_lock_init(&cifs_sb->tlink_tree_lock); + cifs_sb->tlink_tree = RB_ROOT; + ++ cifs_sb->bsize = pvolume_info->bsize; + /* + * Temporarily set r/wsize for matching superblock. If we end up using + * new sb then client will later negotiate it downward if needed. +diff --git a/fs/cifs/file.c b/fs/cifs/file.c +index 659ce1b92c44..95461db80011 100644 +--- a/fs/cifs/file.c ++++ b/fs/cifs/file.c +@@ -3028,14 +3028,16 @@ cifs_strict_writev(struct kiocb *iocb, struct iov_iter *from) + * these pages but not on the region from pos to ppos+len-1. + */ + written = cifs_user_writev(iocb, from); +- if (written > 0 && CIFS_CACHE_READ(cinode)) { ++ if (CIFS_CACHE_READ(cinode)) { + /* +- * Windows 7 server can delay breaking level2 oplock if a write +- * request comes - break it on the client to prevent reading +- * an old data. ++ * We have read level caching and we have just sent a write ++ * request to the server thus making data in the cache stale. ++ * Zap the cache and set oplock/lease level to NONE to avoid ++ * reading stale data from the cache. All subsequent read ++ * operations will read new data from the server. 
+ */ + cifs_zap_mapping(inode); +- cifs_dbg(FYI, "Set no oplock for inode=%p after a write operation\n", ++ cifs_dbg(FYI, "Set Oplock/Lease to NONE for inode=%p after write\n", + inode); + cinode->oplock = 0; + } +diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c +index 478003644916..53fdb5df0d2e 100644 +--- a/fs/cifs/inode.c ++++ b/fs/cifs/inode.c +@@ -2080,7 +2080,7 @@ int cifs_getattr(const struct path *path, struct kstat *stat, + return rc; + + generic_fillattr(inode, stat); +- stat->blksize = CIFS_MAX_MSGSIZE; ++ stat->blksize = cifs_sb->bsize; + stat->ino = CIFS_I(inode)->uniqueid; + + /* old CIFS Unix Extensions doesn't return create time */ +diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c +index 7b8b58fb4d3f..58700d2ba8cd 100644 +--- a/fs/cifs/smb2misc.c ++++ b/fs/cifs/smb2misc.c +@@ -517,7 +517,6 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp, + __u8 lease_state; + struct list_head *tmp; + struct cifsFileInfo *cfile; +- struct TCP_Server_Info *server = tcon->ses->server; + struct cifs_pending_open *open; + struct cifsInodeInfo *cinode; + int ack_req = le32_to_cpu(rsp->Flags & +@@ -537,13 +536,25 @@ smb2_tcon_has_lease(struct cifs_tcon *tcon, struct smb2_lease_break *rsp, + cifs_dbg(FYI, "lease key match, lease break 0x%x\n", + le32_to_cpu(rsp->NewLeaseState)); + +- server->ops->set_oplock_level(cinode, lease_state, 0, NULL); +- + if (ack_req) + cfile->oplock_break_cancelled = false; + else + cfile->oplock_break_cancelled = true; + ++ set_bit(CIFS_INODE_PENDING_OPLOCK_BREAK, &cinode->flags); ++ ++ /* ++ * Set or clear flags depending on the lease state being READ. ++ * HANDLE caching flag should be added when the client starts ++ * to defer closing remote file handles with HANDLE leases. ++ */ ++ if (lease_state & SMB2_LEASE_READ_CACHING_HE) ++ set_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2, ++ &cinode->flags); ++ else ++ clear_bit(CIFS_INODE_DOWNGRADE_OPLOCK_TO_L2, ++ &cinode->flags); ++ + queue_work(cifsoplockd_wq, &cfile->oplock_break); + kfree(lw); + return true; +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index 6f96e2292856..b29f711ab965 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -219,6 +219,15 @@ smb2_get_next_mid(struct TCP_Server_Info *server) + return mid; + } + ++static void ++smb2_revert_current_mid(struct TCP_Server_Info *server, const unsigned int val) ++{ ++ spin_lock(&GlobalMid_Lock); ++ if (server->CurrentMid >= val) ++ server->CurrentMid -= val; ++ spin_unlock(&GlobalMid_Lock); ++} ++ + static struct mid_q_entry * + smb2_find_mid(struct TCP_Server_Info *server, char *buf) + { +@@ -2594,6 +2603,15 @@ smb2_downgrade_oplock(struct TCP_Server_Info *server, + server->ops->set_oplock_level(cinode, 0, 0, NULL); + } + ++static void ++smb21_downgrade_oplock(struct TCP_Server_Info *server, ++ struct cifsInodeInfo *cinode, bool set_level2) ++{ ++ server->ops->set_oplock_level(cinode, ++ set_level2 ? 
SMB2_LEASE_READ_CACHING_HE : ++ 0, 0, NULL); ++} ++ + static void + smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock, + unsigned int epoch, bool *purge_cache) +@@ -3541,6 +3559,7 @@ struct smb_version_operations smb20_operations = { + .get_credits = smb2_get_credits, + .wait_mtu_credits = cifs_wait_mtu_credits, + .get_next_mid = smb2_get_next_mid, ++ .revert_current_mid = smb2_revert_current_mid, + .read_data_offset = smb2_read_data_offset, + .read_data_length = smb2_read_data_length, + .map_error = map_smb2_to_linux_error, +@@ -3636,6 +3655,7 @@ struct smb_version_operations smb21_operations = { + .get_credits = smb2_get_credits, + .wait_mtu_credits = smb2_wait_mtu_credits, + .get_next_mid = smb2_get_next_mid, ++ .revert_current_mid = smb2_revert_current_mid, + .read_data_offset = smb2_read_data_offset, + .read_data_length = smb2_read_data_length, + .map_error = map_smb2_to_linux_error, +@@ -3646,7 +3666,7 @@ struct smb_version_operations smb21_operations = { + .print_stats = smb2_print_stats, + .is_oplock_break = smb2_is_valid_oplock_break, + .handle_cancelled_mid = smb2_handle_cancelled_mid, +- .downgrade_oplock = smb2_downgrade_oplock, ++ .downgrade_oplock = smb21_downgrade_oplock, + .need_neg = smb2_need_neg, + .negotiate = smb2_negotiate, + .negotiate_wsize = smb2_negotiate_wsize, +@@ -3732,6 +3752,7 @@ struct smb_version_operations smb30_operations = { + .get_credits = smb2_get_credits, + .wait_mtu_credits = smb2_wait_mtu_credits, + .get_next_mid = smb2_get_next_mid, ++ .revert_current_mid = smb2_revert_current_mid, + .read_data_offset = smb2_read_data_offset, + .read_data_length = smb2_read_data_length, + .map_error = map_smb2_to_linux_error, +@@ -3743,7 +3764,7 @@ struct smb_version_operations smb30_operations = { + .dump_share_caps = smb2_dump_share_caps, + .is_oplock_break = smb2_is_valid_oplock_break, + .handle_cancelled_mid = smb2_handle_cancelled_mid, +- .downgrade_oplock = smb2_downgrade_oplock, ++ .downgrade_oplock = smb21_downgrade_oplock, + .need_neg = smb2_need_neg, + .negotiate = smb2_negotiate, + .negotiate_wsize = smb3_negotiate_wsize, +@@ -3837,6 +3858,7 @@ struct smb_version_operations smb311_operations = { + .get_credits = smb2_get_credits, + .wait_mtu_credits = smb2_wait_mtu_credits, + .get_next_mid = smb2_get_next_mid, ++ .revert_current_mid = smb2_revert_current_mid, + .read_data_offset = smb2_read_data_offset, + .read_data_length = smb2_read_data_length, + .map_error = map_smb2_to_linux_error, +@@ -3848,7 +3870,7 @@ struct smb_version_operations smb311_operations = { + .dump_share_caps = smb2_dump_share_caps, + .is_oplock_break = smb2_is_valid_oplock_break, + .handle_cancelled_mid = smb2_handle_cancelled_mid, +- .downgrade_oplock = smb2_downgrade_oplock, ++ .downgrade_oplock = smb21_downgrade_oplock, + .need_neg = smb2_need_neg, + .negotiate = smb2_negotiate, + .negotiate_wsize = smb3_negotiate_wsize, +diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c +index 7b351c65ee46..63264db78b89 100644 +--- a/fs/cifs/smb2transport.c ++++ b/fs/cifs/smb2transport.c +@@ -576,6 +576,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr, + struct TCP_Server_Info *server) + { + struct mid_q_entry *temp; ++ unsigned int credits = le16_to_cpu(shdr->CreditCharge); + + if (server == NULL) { + cifs_dbg(VFS, "Null TCP session in smb2_mid_entry_alloc\n"); +@@ -586,6 +587,7 @@ smb2_mid_entry_alloc(const struct smb2_sync_hdr *shdr, + memset(temp, 0, sizeof(struct mid_q_entry)); + kref_init(&temp->refcount); + temp->mid = le64_to_cpu(shdr->MessageId); ++ 
temp->credits = credits > 0 ? credits : 1; + temp->pid = current->pid; + temp->command = shdr->Command; /* Always LE */ + temp->when_alloc = jiffies; +@@ -674,13 +676,18 @@ smb2_setup_request(struct cifs_ses *ses, struct smb_rqst *rqst) + smb2_seq_num_into_buf(ses->server, shdr); + + rc = smb2_get_mid_entry(ses, shdr, &mid); +- if (rc) ++ if (rc) { ++ revert_current_mid_from_hdr(ses->server, shdr); + return ERR_PTR(rc); ++ } ++ + rc = smb2_sign_rqst(rqst, ses->server); + if (rc) { ++ revert_current_mid_from_hdr(ses->server, shdr); + cifs_delete_mid(mid); + return ERR_PTR(rc); + } ++ + return mid; + } + +@@ -695,11 +702,14 @@ smb2_setup_async_request(struct TCP_Server_Info *server, struct smb_rqst *rqst) + smb2_seq_num_into_buf(server, shdr); + + mid = smb2_mid_entry_alloc(shdr, server); +- if (mid == NULL) ++ if (mid == NULL) { ++ revert_current_mid_from_hdr(server, shdr); + return ERR_PTR(-ENOMEM); ++ } + + rc = smb2_sign_rqst(rqst, server); + if (rc) { ++ revert_current_mid_from_hdr(server, shdr); + DeleteMidQEntry(mid); + return ERR_PTR(rc); + } +diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c +index 53532bd3f50d..9544eb99b5a2 100644 +--- a/fs/cifs/transport.c ++++ b/fs/cifs/transport.c +@@ -647,6 +647,7 @@ cifs_call_async(struct TCP_Server_Info *server, struct smb_rqst *rqst, + cifs_in_send_dec(server); + + if (rc < 0) { ++ revert_current_mid(server, mid->credits); + server->sequence_number -= 2; + cifs_delete_mid(mid); + } +@@ -868,6 +869,7 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses, + for (i = 0; i < num_rqst; i++) { + midQ[i] = ses->server->ops->setup_request(ses, &rqst[i]); + if (IS_ERR(midQ[i])) { ++ revert_current_mid(ses->server, i); + for (j = 0; j < i; j++) + cifs_delete_mid(midQ[j]); + mutex_unlock(&ses->server->srv_mutex); +@@ -897,8 +899,10 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses, + for (i = 0; i < num_rqst; i++) + cifs_save_when_sent(midQ[i]); + +- if (rc < 0) ++ if (rc < 0) { ++ revert_current_mid(ses->server, num_rqst); + ses->server->sequence_number -= 2; ++ } + + mutex_unlock(&ses->server->srv_mutex); + +diff --git a/fs/dax.c b/fs/dax.c +index 6959837cc465..05cca2214ae3 100644 +--- a/fs/dax.c ++++ b/fs/dax.c +@@ -843,9 +843,8 @@ unlock_pte: + static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev, + struct address_space *mapping, void *entry) + { +- unsigned long pfn; ++ unsigned long pfn, index, count; + long ret = 0; +- size_t size; + + /* + * A page got tagged dirty in DAX mapping? Something is seriously +@@ -894,17 +893,18 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev, + xas_unlock_irq(xas); + + /* +- * Even if dax_writeback_mapping_range() was given a wbc->range_start +- * in the middle of a PMD, the 'index' we are given will be aligned to +- * the start index of the PMD, as will the pfn we pull from 'entry'. ++ * If dax_writeback_mapping_range() was given a wbc->range_start ++ * in the middle of a PMD, the 'index' we use needs to be ++ * aligned to the start of the PMD. + * This allows us to flush for PMD_SIZE and not have to worry about + * partial PMD writebacks. 
+ */ + pfn = dax_to_pfn(entry); +- size = PAGE_SIZE << dax_entry_order(entry); ++ count = 1UL << dax_entry_order(entry); ++ index = xas->xa_index & ~(count - 1); + +- dax_entry_mkclean(mapping, xas->xa_index, pfn); +- dax_flush(dax_dev, page_address(pfn_to_page(pfn)), size); ++ dax_entry_mkclean(mapping, index, pfn); ++ dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE); + /* + * After we have flushed the cache, we can clear the dirty tag. There + * cannot be new dirty data in the pfn after the flush has completed as +@@ -917,8 +917,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev, + xas_clear_mark(xas, PAGECACHE_TAG_DIRTY); + dax_wake_entry(xas, entry, false); + +- trace_dax_writeback_one(mapping->host, xas->xa_index, +- size >> PAGE_SHIFT); ++ trace_dax_writeback_one(mapping->host, index, count); + return ret; + + put_unlocked: +diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c +index c53814539070..553a3f3300ae 100644 +--- a/fs/devpts/inode.c ++++ b/fs/devpts/inode.c +@@ -455,6 +455,7 @@ devpts_fill_super(struct super_block *s, void *data, int silent) + s->s_blocksize_bits = 10; + s->s_magic = DEVPTS_SUPER_MAGIC; + s->s_op = &devpts_sops; ++ s->s_d_op = &simple_dentry_operations; + s->s_time_gran = 1; + + error = -ENOMEM; +diff --git a/fs/ext2/super.c b/fs/ext2/super.c +index 73b2d528237f..a9ea38182578 100644 +--- a/fs/ext2/super.c ++++ b/fs/ext2/super.c +@@ -757,7 +757,8 @@ static loff_t ext2_max_size(int bits) + { + loff_t res = EXT2_NDIR_BLOCKS; + int meta_blocks; +- loff_t upper_limit; ++ unsigned int upper_limit; ++ unsigned int ppb = 1 << (bits-2); + + /* This is calculated to be the largest file size for a + * dense, file such that the total number of +@@ -771,24 +772,34 @@ static loff_t ext2_max_size(int bits) + /* total blocks in file system block size */ + upper_limit >>= (bits - 9); + ++ /* Compute how many blocks we can address by block tree */ ++ res += 1LL << (bits-2); ++ res += 1LL << (2*(bits-2)); ++ res += 1LL << (3*(bits-2)); ++ /* Does block tree limit file size? */ ++ if (res < upper_limit) ++ goto check_lfs; + ++ res = upper_limit; ++ /* How many metadata blocks are needed for addressing upper_limit? */ ++ upper_limit -= EXT2_NDIR_BLOCKS; + /* indirect blocks */ + meta_blocks = 1; ++ upper_limit -= ppb; + /* double indirect blocks */ +- meta_blocks += 1 + (1LL << (bits-2)); +- /* tripple indirect blocks */ +- meta_blocks += 1 + (1LL << (bits-2)) + (1LL << (2*(bits-2))); +- +- upper_limit -= meta_blocks; +- upper_limit <<= bits; +- +- res += 1LL << (bits-2); +- res += 1LL << (2*(bits-2)); +- res += 1LL << (3*(bits-2)); ++ if (upper_limit < ppb * ppb) { ++ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb); ++ res -= meta_blocks; ++ goto check_lfs; ++ } ++ meta_blocks += 1 + ppb; ++ upper_limit -= ppb * ppb; ++ /* tripple indirect blocks for the rest */ ++ meta_blocks += 1 + DIV_ROUND_UP(upper_limit, ppb) + ++ DIV_ROUND_UP(upper_limit, ppb*ppb); ++ res -= meta_blocks; ++check_lfs: + res <<= bits; +- if (res > upper_limit) +- res = upper_limit; +- + if (res > MAX_LFS_FILESIZE) + res = MAX_LFS_FILESIZE; + +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h +index 185a05d3257e..508a37ec9271 100644 +--- a/fs/ext4/ext4.h ++++ b/fs/ext4/ext4.h +@@ -426,6 +426,9 @@ struct flex_groups { + /* Flags that are appropriate for non-directories/regular files. 
*/ + #define EXT4_OTHER_FLMASK (EXT4_NODUMP_FL | EXT4_NOATIME_FL) + ++/* The only flags that should be swapped */ ++#define EXT4_FL_SHOULD_SWAP (EXT4_HUGE_FILE_FL | EXT4_EXTENTS_FL) ++ + /* Mask out flags that are inappropriate for the given type of inode. */ + static inline __u32 ext4_mask_flags(umode_t mode, __u32 flags) + { +diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c +index d37dafa1d133..2e76fb55d94a 100644 +--- a/fs/ext4/ioctl.c ++++ b/fs/ext4/ioctl.c +@@ -63,18 +63,20 @@ static void swap_inode_data(struct inode *inode1, struct inode *inode2) + loff_t isize; + struct ext4_inode_info *ei1; + struct ext4_inode_info *ei2; ++ unsigned long tmp; + + ei1 = EXT4_I(inode1); + ei2 = EXT4_I(inode2); + + swap(inode1->i_version, inode2->i_version); +- swap(inode1->i_blocks, inode2->i_blocks); +- swap(inode1->i_bytes, inode2->i_bytes); + swap(inode1->i_atime, inode2->i_atime); + swap(inode1->i_mtime, inode2->i_mtime); + + memswap(ei1->i_data, ei2->i_data, sizeof(ei1->i_data)); +- swap(ei1->i_flags, ei2->i_flags); ++ tmp = ei1->i_flags & EXT4_FL_SHOULD_SWAP; ++ ei1->i_flags = (ei2->i_flags & EXT4_FL_SHOULD_SWAP) | ++ (ei1->i_flags & ~EXT4_FL_SHOULD_SWAP); ++ ei2->i_flags = tmp | (ei2->i_flags & ~EXT4_FL_SHOULD_SWAP); + swap(ei1->i_disksize, ei2->i_disksize); + ext4_es_remove_extent(inode1, 0, EXT_MAX_BLOCKS); + ext4_es_remove_extent(inode2, 0, EXT_MAX_BLOCKS); +@@ -115,28 +117,41 @@ static long swap_inode_boot_loader(struct super_block *sb, + int err; + struct inode *inode_bl; + struct ext4_inode_info *ei_bl; +- +- if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) || +- IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) || +- ext4_has_inline_data(inode)) +- return -EINVAL; +- +- if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) || +- !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) +- return -EPERM; ++ qsize_t size, size_bl, diff; ++ blkcnt_t blocks; ++ unsigned short bytes; + + inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO, EXT4_IGET_SPECIAL); + if (IS_ERR(inode_bl)) + return PTR_ERR(inode_bl); + ei_bl = EXT4_I(inode_bl); + +- filemap_flush(inode->i_mapping); +- filemap_flush(inode_bl->i_mapping); +- + /* Protect orig inodes against a truncate and make sure, + * that only 1 swap_inode_boot_loader is running. 
*/ + lock_two_nondirectories(inode, inode_bl); + ++ if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) || ++ IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) || ++ ext4_has_inline_data(inode)) { ++ err = -EINVAL; ++ goto journal_err_out; ++ } ++ ++ if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) || ++ !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) { ++ err = -EPERM; ++ goto journal_err_out; ++ } ++ ++ down_write(&EXT4_I(inode)->i_mmap_sem); ++ err = filemap_write_and_wait(inode->i_mapping); ++ if (err) ++ goto err_out; ++ ++ err = filemap_write_and_wait(inode_bl->i_mapping); ++ if (err) ++ goto err_out; ++ + /* Wait for all existing dio workers */ + inode_dio_wait(inode); + inode_dio_wait(inode_bl); +@@ -147,7 +162,7 @@ static long swap_inode_boot_loader(struct super_block *sb, + handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2); + if (IS_ERR(handle)) { + err = -EINVAL; +- goto journal_err_out; ++ goto err_out; + } + + /* Protect extent tree against block allocations via delalloc */ +@@ -170,6 +185,13 @@ static long swap_inode_boot_loader(struct super_block *sb, + memset(ei_bl->i_data, 0, sizeof(ei_bl->i_data)); + } + ++ err = dquot_initialize(inode); ++ if (err) ++ goto err_out1; ++ ++ size = (qsize_t)(inode->i_blocks) * (1 << 9) + inode->i_bytes; ++ size_bl = (qsize_t)(inode_bl->i_blocks) * (1 << 9) + inode_bl->i_bytes; ++ diff = size - size_bl; + swap_inode_data(inode, inode_bl); + + inode->i_ctime = inode_bl->i_ctime = current_time(inode); +@@ -183,27 +205,51 @@ static long swap_inode_boot_loader(struct super_block *sb, + + err = ext4_mark_inode_dirty(handle, inode); + if (err < 0) { ++ /* No need to update quota information. */ + ext4_warning(inode->i_sb, + "couldn't mark inode #%lu dirty (err %d)", + inode->i_ino, err); + /* Revert all changes: */ + swap_inode_data(inode, inode_bl); + ext4_mark_inode_dirty(handle, inode); +- } else { +- err = ext4_mark_inode_dirty(handle, inode_bl); +- if (err < 0) { +- ext4_warning(inode_bl->i_sb, +- "couldn't mark inode #%lu dirty (err %d)", +- inode_bl->i_ino, err); +- /* Revert all changes: */ +- swap_inode_data(inode, inode_bl); +- ext4_mark_inode_dirty(handle, inode); +- ext4_mark_inode_dirty(handle, inode_bl); +- } ++ goto err_out1; ++ } ++ ++ blocks = inode_bl->i_blocks; ++ bytes = inode_bl->i_bytes; ++ inode_bl->i_blocks = inode->i_blocks; ++ inode_bl->i_bytes = inode->i_bytes; ++ err = ext4_mark_inode_dirty(handle, inode_bl); ++ if (err < 0) { ++ /* No need to update quota information. */ ++ ext4_warning(inode_bl->i_sb, ++ "couldn't mark inode #%lu dirty (err %d)", ++ inode_bl->i_ino, err); ++ goto revert; ++ } ++ ++ /* Bootloader inode should not be counted into quota information. 
*/ ++ if (diff > 0) ++ dquot_free_space(inode, diff); ++ else ++ err = dquot_alloc_space(inode, -1 * diff); ++ ++ if (err < 0) { ++revert: ++ /* Revert all changes: */ ++ inode_bl->i_blocks = blocks; ++ inode_bl->i_bytes = bytes; ++ swap_inode_data(inode, inode_bl); ++ ext4_mark_inode_dirty(handle, inode); ++ ext4_mark_inode_dirty(handle, inode_bl); + } ++ ++err_out1: + ext4_journal_stop(handle); + ext4_double_up_write_data_sem(inode, inode_bl); + ++err_out: ++ up_write(&EXT4_I(inode)->i_mmap_sem); + journal_err_out: + unlock_two_nondirectories(inode, inode_bl); + iput(inode_bl); +diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c +index 48421de803b7..3d9b18505c0c 100644 +--- a/fs/ext4/resize.c ++++ b/fs/ext4/resize.c +@@ -1960,7 +1960,8 @@ retry: + le16_to_cpu(es->s_reserved_gdt_blocks); + n_group = n_desc_blocks * EXT4_DESC_PER_BLOCK(sb); + n_blocks_count = (ext4_fsblk_t)n_group * +- EXT4_BLOCKS_PER_GROUP(sb); ++ EXT4_BLOCKS_PER_GROUP(sb) + ++ le32_to_cpu(es->s_first_data_block); + n_group--; /* set to last group number */ + } + +diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c +index cc35537232f2..f0d8dabe1ff5 100644 +--- a/fs/jbd2/transaction.c ++++ b/fs/jbd2/transaction.c +@@ -1252,11 +1252,12 @@ int jbd2_journal_get_undo_access(handle_t *handle, struct buffer_head *bh) + struct journal_head *jh; + char *committed_data = NULL; + +- JBUFFER_TRACE(jh, "entry"); + if (jbd2_write_access_granted(handle, bh, true)) + return 0; + + jh = jbd2_journal_add_journal_head(bh); ++ JBUFFER_TRACE(jh, "entry"); ++ + /* + * Do this first --- it can drop the journal lock, so we want to + * make sure that obtaining the committed_data is done +@@ -1367,15 +1368,17 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh) + + if (is_handle_aborted(handle)) + return -EROFS; +- if (!buffer_jbd(bh)) { +- ret = -EUCLEAN; +- goto out; +- } ++ if (!buffer_jbd(bh)) ++ return -EUCLEAN; ++ + /* + * We don't grab jh reference here since the buffer must be part + * of the running transaction. + */ + jh = bh2jh(bh); ++ jbd_debug(5, "journal_head %p\n", jh); ++ JBUFFER_TRACE(jh, "entry"); ++ + /* + * This and the following assertions are unreliable since we may see jh + * in inconsistent state unless we grab bh_state lock. But this is +@@ -1409,9 +1412,6 @@ int jbd2_journal_dirty_metadata(handle_t *handle, struct buffer_head *bh) + } + + journal = transaction->t_journal; +- jbd_debug(5, "journal_head %p\n", jh); +- JBUFFER_TRACE(jh, "entry"); +- + jbd_lock_bh_state(bh); + + if (jh->b_modified == 0) { +@@ -1609,14 +1609,21 @@ int jbd2_journal_forget (handle_t *handle, struct buffer_head *bh) + /* However, if the buffer is still owned by a prior + * (committing) transaction, we can't drop it yet... */ + JBUFFER_TRACE(jh, "belongs to older transaction"); +- /* ... but we CAN drop it from the new transaction if we +- * have also modified it since the original commit. */ ++ /* ... but we CAN drop it from the new transaction through ++ * marking the buffer as freed and set j_next_transaction to ++ * the new transaction, so that not only the commit code ++ * knows it should clear dirty bits when it is done with the ++ * buffer, but also the buffer can be checkpointed only ++ * after the new transaction commits. 
*/ + +- if (jh->b_next_transaction) { +- J_ASSERT(jh->b_next_transaction == transaction); ++ set_buffer_freed(bh); ++ ++ if (!jh->b_next_transaction) { + spin_lock(&journal->j_list_lock); +- jh->b_next_transaction = NULL; ++ jh->b_next_transaction = transaction; + spin_unlock(&journal->j_list_lock); ++ } else { ++ J_ASSERT(jh->b_next_transaction == transaction); + + /* + * only drop a reference if this transaction modified +diff --git a/fs/kernfs/mount.c b/fs/kernfs/mount.c +index fdf527b6d79c..d71c9405874a 100644 +--- a/fs/kernfs/mount.c ++++ b/fs/kernfs/mount.c +@@ -196,8 +196,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn, + return dentry; + + knparent = find_next_ancestor(kn, NULL); +- if (WARN_ON(!knparent)) ++ if (WARN_ON(!knparent)) { ++ dput(dentry); + return ERR_PTR(-EINVAL); ++ } + + do { + struct dentry *dtmp; +@@ -206,8 +208,10 @@ struct dentry *kernfs_node_dentry(struct kernfs_node *kn, + if (kn == knparent) + return dentry; + kntmp = find_next_ancestor(kn, knparent); +- if (WARN_ON(!kntmp)) ++ if (WARN_ON(!kntmp)) { ++ dput(dentry); + return ERR_PTR(-EINVAL); ++ } + dtmp = lookup_one_len_unlocked(kntmp->name, dentry, + strlen(kntmp->name)); + dput(dentry); +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 557a5d636183..64ac80ec6b7b 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -947,6 +947,13 @@ nfs4_sequence_process_interrupted(struct nfs_client *client, + + #endif /* !CONFIG_NFS_V4_1 */ + ++static void nfs41_sequence_res_init(struct nfs4_sequence_res *res) ++{ ++ res->sr_timestamp = jiffies; ++ res->sr_status_flags = 0; ++ res->sr_status = 1; ++} ++ + static + void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args, + struct nfs4_sequence_res *res, +@@ -958,10 +965,6 @@ void nfs4_sequence_attach_slot(struct nfs4_sequence_args *args, + args->sa_slot = slot; + + res->sr_slot = slot; +- res->sr_timestamp = jiffies; +- res->sr_status_flags = 0; +- res->sr_status = 1; +- + } + + int nfs4_setup_sequence(struct nfs_client *client, +@@ -1007,6 +1010,7 @@ int nfs4_setup_sequence(struct nfs_client *client, + + trace_nfs4_setup_sequence(session, args); + out_start: ++ nfs41_sequence_res_init(res); + rpc_call_start(task); + return 0; + +diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c +index e54d899c1848..a8951f1f7b4e 100644 +--- a/fs/nfs/pagelist.c ++++ b/fs/nfs/pagelist.c +@@ -988,6 +988,17 @@ static void nfs_pageio_doio(struct nfs_pageio_descriptor *desc) + } + } + ++static void ++nfs_pageio_cleanup_request(struct nfs_pageio_descriptor *desc, ++ struct nfs_page *req) ++{ ++ LIST_HEAD(head); ++ ++ nfs_list_remove_request(req); ++ nfs_list_add_request(req, &head); ++ desc->pg_completion_ops->error_cleanup(&head); ++} ++ + /** + * nfs_pageio_add_request - Attempt to coalesce a request into a page list. 
+ * @desc: destination io descriptor +@@ -1025,10 +1036,8 @@ static int __nfs_pageio_add_request(struct nfs_pageio_descriptor *desc, + nfs_page_group_unlock(req); + desc->pg_moreio = 1; + nfs_pageio_doio(desc); +- if (desc->pg_error < 0) +- return 0; +- if (mirror->pg_recoalesce) +- return 0; ++ if (desc->pg_error < 0 || mirror->pg_recoalesce) ++ goto out_cleanup_subreq; + /* retry add_request for this subreq */ + nfs_page_group_lock(req); + continue; +@@ -1061,6 +1070,10 @@ err_ptr: + desc->pg_error = PTR_ERR(subreq); + nfs_page_group_unlock(req); + return 0; ++out_cleanup_subreq: ++ if (req != subreq) ++ nfs_pageio_cleanup_request(desc, subreq); ++ return 0; + } + + static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc) +@@ -1079,7 +1092,6 @@ static int nfs_do_recoalesce(struct nfs_pageio_descriptor *desc) + struct nfs_page *req; + + req = list_first_entry(&head, struct nfs_page, wb_list); +- nfs_list_remove_request(req); + if (__nfs_pageio_add_request(desc, req)) + continue; + if (desc->pg_error < 0) { +@@ -1168,11 +1180,14 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc, + if (nfs_pgio_has_mirroring(desc)) + desc->pg_mirror_idx = midx; + if (!nfs_pageio_add_request_mirror(desc, dupreq)) +- goto out_failed; ++ goto out_cleanup_subreq; + } + + return 1; + ++out_cleanup_subreq: ++ if (req != dupreq) ++ nfs_pageio_cleanup_request(desc, dupreq); + out_failed: + nfs_pageio_error_cleanup(desc); + return 0; +@@ -1194,7 +1209,7 @@ static void nfs_pageio_complete_mirror(struct nfs_pageio_descriptor *desc, + desc->pg_mirror_idx = mirror_idx; + for (;;) { + nfs_pageio_doio(desc); +- if (!mirror->pg_recoalesce) ++ if (desc->pg_error < 0 || !mirror->pg_recoalesce) + break; + if (!nfs_do_recoalesce(desc)) + break; +diff --git a/fs/nfsd/nfs3proc.c b/fs/nfsd/nfs3proc.c +index 9eb8086ea841..c9cf46e0c040 100644 +--- a/fs/nfsd/nfs3proc.c ++++ b/fs/nfsd/nfs3proc.c +@@ -463,8 +463,19 @@ nfsd3_proc_readdir(struct svc_rqst *rqstp) + &resp->common, nfs3svc_encode_entry); + memcpy(resp->verf, argp->verf, 8); + resp->count = resp->buffer - argp->buffer; +- if (resp->offset) +- xdr_encode_hyper(resp->offset, argp->cookie); ++ if (resp->offset) { ++ loff_t offset = argp->cookie; ++ ++ if (unlikely(resp->offset1)) { ++ /* we ended up with offset on a page boundary */ ++ *resp->offset = htonl(offset >> 32); ++ *resp->offset1 = htonl(offset & 0xffffffff); ++ resp->offset1 = NULL; ++ } else { ++ xdr_encode_hyper(resp->offset, offset); ++ } ++ resp->offset = NULL; ++ } + + RETURN_STATUS(nfserr); + } +@@ -533,6 +544,7 @@ nfsd3_proc_readdirplus(struct svc_rqst *rqstp) + } else { + xdr_encode_hyper(resp->offset, offset); + } ++ resp->offset = NULL; + } + + RETURN_STATUS(nfserr); +diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c +index 9b973f4f7d01..83919116d5cb 100644 +--- a/fs/nfsd/nfs3xdr.c ++++ b/fs/nfsd/nfs3xdr.c +@@ -921,6 +921,7 @@ encode_entry(struct readdir_cd *ccd, const char *name, int namlen, + } else { + xdr_encode_hyper(cd->offset, offset64); + } ++ cd->offset = NULL; + } + + /* +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c +index fb3c9844c82a..6a45fb00c5fc 100644 +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -1544,16 +1544,16 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca) + { + u32 slotsize = slot_bytes(ca); + u32 num = ca->maxreqs; +- int avail; ++ unsigned long avail, total_avail; + + spin_lock(&nfsd_drc_lock); +- avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, +- nfsd_drc_max_mem - nfsd_drc_mem_used); ++ total_avail = 
nfsd_drc_max_mem - nfsd_drc_mem_used; ++ avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail); + /* + * Never use more than a third of the remaining memory, + * unless it's the only way to give this client a slot: + */ +- avail = clamp_t(int, avail, slotsize, avail/3); ++ avail = clamp_t(int, avail, slotsize, total_avail/3); + num = min_t(int, num, avail / slotsize); + nfsd_drc_mem_used += num * slotsize; + spin_unlock(&nfsd_drc_lock); +diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c +index 72a7681f4046..f2feb2d11bae 100644 +--- a/fs/nfsd/nfsctl.c ++++ b/fs/nfsd/nfsctl.c +@@ -1126,7 +1126,7 @@ static ssize_t write_v4_end_grace(struct file *file, char *buf, size_t size) + case 'Y': + case 'y': + case '1': +- if (nn->nfsd_serv) ++ if (!nn->nfsd_serv) + return -EBUSY; + nfsd4_end_grace(nn); + break; +diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c +index 9e62dcf06fc4..68b3303e4b46 100644 +--- a/fs/overlayfs/copy_up.c ++++ b/fs/overlayfs/copy_up.c +@@ -443,6 +443,24 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp) + { + int err; + ++ /* ++ * Copy up data first and then xattrs. Writing data after ++ * xattrs will remove security.capability xattr automatically. ++ */ ++ if (S_ISREG(c->stat.mode) && !c->metacopy) { ++ struct path upperpath, datapath; ++ ++ ovl_path_upper(c->dentry, &upperpath); ++ if (WARN_ON(upperpath.dentry != NULL)) ++ return -EIO; ++ upperpath.dentry = temp; ++ ++ ovl_path_lowerdata(c->dentry, &datapath); ++ err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size); ++ if (err) ++ return err; ++ } ++ + err = ovl_copy_xattr(c->lowerpath.dentry, temp); + if (err) + return err; +@@ -460,19 +478,6 @@ static int ovl_copy_up_inode(struct ovl_copy_up_ctx *c, struct dentry *temp) + return err; + } + +- if (S_ISREG(c->stat.mode) && !c->metacopy) { +- struct path upperpath, datapath; +- +- ovl_path_upper(c->dentry, &upperpath); +- BUG_ON(upperpath.dentry != NULL); +- upperpath.dentry = temp; +- +- ovl_path_lowerdata(c->dentry, &datapath); +- err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size); +- if (err) +- return err; +- } +- + if (c->metacopy) { + err = ovl_check_setxattr(c->dentry, temp, OVL_XATTR_METACOPY, + NULL, 0, -EOPNOTSUPP); +@@ -737,6 +742,8 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c) + { + struct path upperpath, datapath; + int err; ++ char *capability = NULL; ++ ssize_t uninitialized_var(cap_size); + + ovl_path_upper(c->dentry, &upperpath); + if (WARN_ON(upperpath.dentry == NULL)) +@@ -746,15 +753,37 @@ static int ovl_copy_up_meta_inode_data(struct ovl_copy_up_ctx *c) + if (WARN_ON(datapath.dentry == NULL)) + return -EIO; + ++ if (c->stat.size) { ++ err = cap_size = ovl_getxattr(upperpath.dentry, XATTR_NAME_CAPS, ++ &capability, 0); ++ if (err < 0 && err != -ENODATA) ++ goto out; ++ } ++ + err = ovl_copy_up_data(&datapath, &upperpath, c->stat.size); + if (err) +- return err; ++ goto out_free; ++ ++ /* ++ * Writing to upper file will clear security.capability xattr. We ++ * don't want that to happen for normal copy-up operation. 
++ */ ++ if (capability) { ++ err = ovl_do_setxattr(upperpath.dentry, XATTR_NAME_CAPS, ++ capability, cap_size, 0); ++ if (err) ++ goto out_free; ++ } ++ + + err = vfs_removexattr(upperpath.dentry, OVL_XATTR_METACOPY); + if (err) +- return err; ++ goto out_free; + + ovl_set_upperdata(d_inode(c->dentry)); ++out_free: ++ kfree(capability); ++out: + return err; + } + +diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h +index 5e45cb3630a0..9c6018287d57 100644 +--- a/fs/overlayfs/overlayfs.h ++++ b/fs/overlayfs/overlayfs.h +@@ -277,6 +277,8 @@ int ovl_lock_rename_workdir(struct dentry *workdir, struct dentry *upperdir); + int ovl_check_metacopy_xattr(struct dentry *dentry); + bool ovl_is_metacopy_dentry(struct dentry *dentry); + char *ovl_get_redirect_xattr(struct dentry *dentry, int padding); ++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value, ++ size_t padding); + + static inline bool ovl_is_impuredir(struct dentry *dentry) + { +diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c +index 7c01327b1852..4035e640f402 100644 +--- a/fs/overlayfs/util.c ++++ b/fs/overlayfs/util.c +@@ -863,28 +863,49 @@ bool ovl_is_metacopy_dentry(struct dentry *dentry) + return (oe->numlower > 1); + } + +-char *ovl_get_redirect_xattr(struct dentry *dentry, int padding) ++ssize_t ovl_getxattr(struct dentry *dentry, char *name, char **value, ++ size_t padding) + { +- int res; +- char *s, *next, *buf = NULL; ++ ssize_t res; ++ char *buf = NULL; + +- res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, NULL, 0); ++ res = vfs_getxattr(dentry, name, NULL, 0); + if (res < 0) { + if (res == -ENODATA || res == -EOPNOTSUPP) +- return NULL; ++ return -ENODATA; + goto fail; + } + +- buf = kzalloc(res + padding + 1, GFP_KERNEL); +- if (!buf) +- return ERR_PTR(-ENOMEM); ++ if (res != 0) { ++ buf = kzalloc(res + padding, GFP_KERNEL); ++ if (!buf) ++ return -ENOMEM; + +- if (res == 0) +- goto invalid; ++ res = vfs_getxattr(dentry, name, buf, res); ++ if (res < 0) ++ goto fail; ++ } ++ *value = buf; ++ ++ return res; ++ ++fail: ++ pr_warn_ratelimited("overlayfs: failed to get xattr %s: err=%zi)\n", ++ name, res); ++ kfree(buf); ++ return res; ++} + +- res = vfs_getxattr(dentry, OVL_XATTR_REDIRECT, buf, res); ++char *ovl_get_redirect_xattr(struct dentry *dentry, int padding) ++{ ++ int res; ++ char *s, *next, *buf = NULL; ++ ++ res = ovl_getxattr(dentry, OVL_XATTR_REDIRECT, &buf, padding + 1); ++ if (res == -ENODATA) ++ return NULL; + if (res < 0) +- goto fail; ++ return ERR_PTR(res); + if (res == 0) + goto invalid; + +@@ -900,15 +921,9 @@ char *ovl_get_redirect_xattr(struct dentry *dentry, int padding) + } + + return buf; +- +-err_free: +- kfree(buf); +- return ERR_PTR(res); +-fail: +- pr_warn_ratelimited("overlayfs: failed to get redirect (%i)\n", res); +- goto err_free; + invalid: + pr_warn_ratelimited("overlayfs: invalid redirect (%s)\n", buf); + res = -EINVAL; +- goto err_free; ++ kfree(buf); ++ return ERR_PTR(res); + } +diff --git a/fs/pipe.c b/fs/pipe.c +index bdc5d3c0977d..c51750ed4011 100644 +--- a/fs/pipe.c ++++ b/fs/pipe.c +@@ -234,6 +234,14 @@ static const struct pipe_buf_operations anon_pipe_buf_ops = { + .get = generic_pipe_buf_get, + }; + ++static const struct pipe_buf_operations anon_pipe_buf_nomerge_ops = { ++ .can_merge = 0, ++ .confirm = generic_pipe_buf_confirm, ++ .release = anon_pipe_buf_release, ++ .steal = anon_pipe_buf_steal, ++ .get = generic_pipe_buf_get, ++}; ++ + static const struct pipe_buf_operations packet_pipe_buf_ops = { + .can_merge = 0, + .confirm = 
generic_pipe_buf_confirm, +@@ -242,6 +250,12 @@ static const struct pipe_buf_operations packet_pipe_buf_ops = { + .get = generic_pipe_buf_get, + }; + ++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf) ++{ ++ if (buf->ops == &anon_pipe_buf_ops) ++ buf->ops = &anon_pipe_buf_nomerge_ops; ++} ++ + static ssize_t + pipe_read(struct kiocb *iocb, struct iov_iter *to) + { +diff --git a/fs/splice.c b/fs/splice.c +index de2ede048473..90c29675d573 100644 +--- a/fs/splice.c ++++ b/fs/splice.c +@@ -1597,6 +1597,8 @@ retry: + */ + obuf->flags &= ~PIPE_BUF_FLAG_GIFT; + ++ pipe_buf_mark_unmergeable(obuf); ++ + obuf->len = len; + opipe->nrbufs++; + ibuf->offset += obuf->len; +@@ -1671,6 +1673,8 @@ static int link_pipe(struct pipe_inode_info *ipipe, + */ + obuf->flags &= ~PIPE_BUF_FLAG_GIFT; + ++ pipe_buf_mark_unmergeable(obuf); ++ + if (obuf->len > len) + obuf->len = len; + +diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h +index 3d7a6a9c2370..f8f6f04c4453 100644 +--- a/include/asm-generic/vmlinux.lds.h ++++ b/include/asm-generic/vmlinux.lds.h +@@ -733,7 +733,7 @@ + KEEP(*(.orc_unwind_ip)) \ + __stop_orc_unwind_ip = .; \ + } \ +- . = ALIGN(6); \ ++ . = ALIGN(2); \ + .orc_unwind : AT(ADDR(.orc_unwind) - LOAD_OFFSET) { \ + __start_orc_unwind = .; \ + KEEP(*(.orc_unwind)) \ +diff --git a/include/linux/device-mapper.h b/include/linux/device-mapper.h +index e528baebad69..bee4bb9f81bc 100644 +--- a/include/linux/device-mapper.h ++++ b/include/linux/device-mapper.h +@@ -609,7 +609,7 @@ do { \ + */ + #define dm_target_offset(ti, sector) ((sector) - (ti)->begin) + +-static inline sector_t to_sector(unsigned long n) ++static inline sector_t to_sector(unsigned long long n) + { + return (n >> SECTOR_SHIFT); + } +diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h +index f6ded992c183..5b21f14802e1 100644 +--- a/include/linux/dma-mapping.h ++++ b/include/linux/dma-mapping.h +@@ -130,6 +130,7 @@ struct dma_map_ops { + enum dma_data_direction direction); + int (*dma_supported)(struct device *dev, u64 mask); + u64 (*get_required_mask)(struct device *dev); ++ size_t (*max_mapping_size)(struct device *dev); + }; + + #define DMA_MAPPING_ERROR (~(dma_addr_t)0) +@@ -257,6 +258,8 @@ static inline void dma_direct_sync_sg_for_cpu(struct device *dev, + } + #endif + ++size_t dma_direct_max_mapping_size(struct device *dev); ++ + #ifdef CONFIG_HAS_DMA + #include + +@@ -460,6 +463,7 @@ int dma_supported(struct device *dev, u64 mask); + int dma_set_mask(struct device *dev, u64 mask); + int dma_set_coherent_mask(struct device *dev, u64 mask); + u64 dma_get_required_mask(struct device *dev); ++size_t dma_max_mapping_size(struct device *dev); + #else /* CONFIG_HAS_DMA */ + static inline dma_addr_t dma_map_page_attrs(struct device *dev, + struct page *page, size_t offset, size_t size, +@@ -561,6 +565,10 @@ static inline u64 dma_get_required_mask(struct device *dev) + { + return 0; + } ++static inline size_t dma_max_mapping_size(struct device *dev) ++{ ++ return 0; ++} + #endif /* CONFIG_HAS_DMA */ + + static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr, +diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h +index 0fbbcdf0c178..da0af631ded5 100644 +--- a/include/linux/hardirq.h ++++ b/include/linux/hardirq.h +@@ -60,8 +60,14 @@ extern void irq_enter(void); + */ + extern void irq_exit(void); + ++#ifndef arch_nmi_enter ++#define arch_nmi_enter() do { } while (0) ++#define arch_nmi_exit() do { } while (0) ++#endif ++ + #define nmi_enter() \ + do { \ ++ 
arch_nmi_enter(); \ + printk_nmi_enter(); \ + lockdep_off(); \ + ftrace_nmi_enter(); \ +@@ -80,6 +86,7 @@ extern void irq_exit(void); + ftrace_nmi_exit(); \ + lockdep_on(); \ + printk_nmi_exit(); \ ++ arch_nmi_exit(); \ + } while (0) + + #endif /* LINUX_HARDIRQ_H */ +diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h +index c38cc5eb7e73..cf761ff58224 100644 +--- a/include/linux/kvm_host.h ++++ b/include/linux/kvm_host.h +@@ -634,7 +634,7 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free, + struct kvm_memory_slot *dont); + int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot, + unsigned long npages); +-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots); ++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen); + int kvm_arch_prepare_memory_region(struct kvm *kvm, + struct kvm_memory_slot *memslot, + const struct kvm_userspace_memory_region *mem, +diff --git a/include/linux/pipe_fs_i.h b/include/linux/pipe_fs_i.h +index 5a3bb3b7c9ad..3ecd7ea212ae 100644 +--- a/include/linux/pipe_fs_i.h ++++ b/include/linux/pipe_fs_i.h +@@ -182,6 +182,7 @@ void generic_pipe_buf_get(struct pipe_inode_info *, struct pipe_buffer *); + int generic_pipe_buf_confirm(struct pipe_inode_info *, struct pipe_buffer *); + int generic_pipe_buf_steal(struct pipe_inode_info *, struct pipe_buffer *); + void generic_pipe_buf_release(struct pipe_inode_info *, struct pipe_buffer *); ++void pipe_buf_mark_unmergeable(struct pipe_buffer *buf); + + extern const struct pipe_buf_operations nosteal_pipe_buf_ops; + +diff --git a/include/linux/property.h b/include/linux/property.h +index 3789ec755fb6..65d3420dd5d1 100644 +--- a/include/linux/property.h ++++ b/include/linux/property.h +@@ -258,7 +258,7 @@ struct property_entry { + #define PROPERTY_ENTRY_STRING(_name_, _val_) \ + (struct property_entry) { \ + .name = _name_, \ +- .length = sizeof(_val_), \ ++ .length = sizeof(const char *), \ + .type = DEV_PROP_STRING, \ + { .value = { .str = _val_ } }, \ + } +diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h +index 7c007ed7505f..29bc3a203283 100644 +--- a/include/linux/swiotlb.h ++++ b/include/linux/swiotlb.h +@@ -76,6 +76,8 @@ bool swiotlb_map(struct device *dev, phys_addr_t *phys, dma_addr_t *dma_addr, + size_t size, enum dma_data_direction dir, unsigned long attrs); + void __init swiotlb_exit(void); + unsigned int swiotlb_max_segment(void); ++size_t swiotlb_max_mapping_size(struct device *dev); ++bool is_swiotlb_active(void); + #else + #define swiotlb_force SWIOTLB_NO_FORCE + static inline bool is_swiotlb_buffer(phys_addr_t paddr) +@@ -95,6 +97,15 @@ static inline unsigned int swiotlb_max_segment(void) + { + return 0; + } ++static inline size_t swiotlb_max_mapping_size(struct device *dev) ++{ ++ return SIZE_MAX; ++} ++ ++static inline bool is_swiotlb_active(void) ++{ ++ return false; ++} + #endif /* CONFIG_SWIOTLB */ + + extern void swiotlb_print_info(void); +diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c +index f31bd61c9466..503bba3c4bae 100644 +--- a/kernel/cgroup/cgroup.c ++++ b/kernel/cgroup/cgroup.c +@@ -2033,7 +2033,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags, + struct cgroup_namespace *ns) + { + struct dentry *dentry; +- bool new_sb; ++ bool new_sb = false; + + dentry = kernfs_mount(fs_type, flags, root->kf_root, magic, &new_sb); + +@@ -2043,6 +2043,7 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags, + */ + if (!IS_ERR(dentry) && ns != 
&init_cgroup_ns) { + struct dentry *nsdentry; ++ struct super_block *sb = dentry->d_sb; + struct cgroup *cgrp; + + mutex_lock(&cgroup_mutex); +@@ -2053,12 +2054,14 @@ struct dentry *cgroup_do_mount(struct file_system_type *fs_type, int flags, + spin_unlock_irq(&css_set_lock); + mutex_unlock(&cgroup_mutex); + +- nsdentry = kernfs_node_dentry(cgrp->kn, dentry->d_sb); ++ nsdentry = kernfs_node_dentry(cgrp->kn, sb); + dput(dentry); ++ if (IS_ERR(nsdentry)) ++ deactivate_locked_super(sb); + dentry = nsdentry; + } + +- if (IS_ERR(dentry) || !new_sb) ++ if (!new_sb) + cgroup_put(&root->cgrp); + + return dentry; +diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c +index 355d16acee6d..6310ad01f915 100644 +--- a/kernel/dma/direct.c ++++ b/kernel/dma/direct.c +@@ -380,3 +380,14 @@ int dma_direct_supported(struct device *dev, u64 mask) + */ + return mask >= __phys_to_dma(dev, min_mask); + } ++ ++size_t dma_direct_max_mapping_size(struct device *dev) ++{ ++ size_t size = SIZE_MAX; ++ ++ /* If SWIOTLB is active, use its maximum mapping size */ ++ if (is_swiotlb_active()) ++ size = swiotlb_max_mapping_size(dev); ++ ++ return size; ++} +diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c +index a11006b6d8e8..5753008ab286 100644 +--- a/kernel/dma/mapping.c ++++ b/kernel/dma/mapping.c +@@ -357,3 +357,17 @@ void dma_cache_sync(struct device *dev, void *vaddr, size_t size, + ops->cache_sync(dev, vaddr, size, dir); + } + EXPORT_SYMBOL(dma_cache_sync); ++ ++size_t dma_max_mapping_size(struct device *dev) ++{ ++ const struct dma_map_ops *ops = get_dma_ops(dev); ++ size_t size = SIZE_MAX; ++ ++ if (dma_is_direct(ops)) ++ size = dma_direct_max_mapping_size(dev); ++ else if (ops && ops->max_mapping_size) ++ size = ops->max_mapping_size(dev); ++ ++ return size; ++} ++EXPORT_SYMBOL_GPL(dma_max_mapping_size); +diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c +index 1fb6fd68b9c7..c873f9cc2146 100644 +--- a/kernel/dma/swiotlb.c ++++ b/kernel/dma/swiotlb.c +@@ -662,3 +662,17 @@ swiotlb_dma_supported(struct device *hwdev, u64 mask) + { + return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask; + } ++ ++size_t swiotlb_max_mapping_size(struct device *dev) ++{ ++ return ((size_t)1 << IO_TLB_SHIFT) * IO_TLB_SEGSIZE; ++} ++ ++bool is_swiotlb_active(void) ++{ ++ /* ++ * When SWIOTLB is initialized, even if io_tlb_start points to physical ++ * address zero, io_tlb_end surely doesn't. ++ */ ++ return io_tlb_end != 0; ++} +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index 9180158756d2..38d44d27e37a 100644 +--- a/kernel/rcu/tree.c ++++ b/kernel/rcu/tree.c +@@ -1557,14 +1557,23 @@ static bool rcu_future_gp_cleanup(struct rcu_node *rnp) + } + + /* +- * Awaken the grace-period kthread. Don't do a self-awaken, and don't +- * bother awakening when there is nothing for the grace-period kthread +- * to do (as in several CPUs raced to awaken, and we lost), and finally +- * don't try to awaken a kthread that has not yet been created. ++ * Awaken the grace-period kthread. Don't do a self-awaken (unless in ++ * an interrupt or softirq handler), and don't bother awakening when there ++ * is nothing for the grace-period kthread to do (as in several CPUs raced ++ * to awaken, and we lost), and finally don't try to awaken a kthread that ++ * has not yet been created. If all those checks are passed, track some ++ * debug information and awaken. ++ * ++ * So why do the self-wakeup when in an interrupt or softirq handler ++ * in the grace-period kthread's context? 
Because the kthread might have ++ * been interrupted just as it was going to sleep, and just after the final ++ * pre-sleep check of the awaken condition. In this case, a wakeup really ++ * is required, and is therefore supplied. + */ + static void rcu_gp_kthread_wake(void) + { +- if (current == rcu_state.gp_kthread || ++ if ((current == rcu_state.gp_kthread && ++ !in_interrupt() && !in_serving_softirq()) || + !READ_ONCE(rcu_state.gp_flags) || + !rcu_state.gp_kthread) + return; +diff --git a/kernel/sysctl.c b/kernel/sysctl.c +index ba4d9e85feb8..d80bee8ff12e 100644 +--- a/kernel/sysctl.c ++++ b/kernel/sysctl.c +@@ -2579,7 +2579,16 @@ static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp, + { + struct do_proc_dointvec_minmax_conv_param *param = data; + if (write) { +- int val = *negp ? -*lvalp : *lvalp; ++ int val; ++ if (*negp) { ++ if (*lvalp > (unsigned long) INT_MAX + 1) ++ return -EINVAL; ++ val = -*lvalp; ++ } else { ++ if (*lvalp > (unsigned long) INT_MAX) ++ return -EINVAL; ++ val = *lvalp; ++ } + if ((param->min && *param->min > val) || + (param->max && *param->max < val)) + return -EINVAL; +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index c4238b441624..5f40db27aaf2 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -5626,7 +5626,6 @@ out: + return ret; + + fail: +- kfree(iter->trace); + kfree(iter); + __trace_array_put(tr); + mutex_unlock(&trace_types_lock); +diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c +index 76217bbef815..4629a6104474 100644 +--- a/kernel/trace/trace_event_perf.c ++++ b/kernel/trace/trace_event_perf.c +@@ -299,15 +299,13 @@ int perf_uprobe_init(struct perf_event *p_event, + + if (!p_event->attr.uprobe_path) + return -EINVAL; +- path = kzalloc(PATH_MAX, GFP_KERNEL); +- if (!path) +- return -ENOMEM; +- ret = strncpy_from_user( +- path, u64_to_user_ptr(p_event->attr.uprobe_path), PATH_MAX); +- if (ret == PATH_MAX) +- return -E2BIG; +- if (ret < 0) +- goto out; ++ ++ path = strndup_user(u64_to_user_ptr(p_event->attr.uprobe_path), ++ PATH_MAX); ++ if (IS_ERR(path)) { ++ ret = PTR_ERR(path); ++ return (ret == -EINVAL) ? 
-E2BIG : ret; ++ } + if (path[0] == '\0') { + ret = -EINVAL; + goto out; +diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c +index 449d90cfa151..55b72b1c63a0 100644 +--- a/kernel/trace/trace_events_hist.c ++++ b/kernel/trace/trace_events_hist.c +@@ -4695,9 +4695,10 @@ static inline void add_to_key(char *compound_key, void *key, + /* ensure NULL-termination */ + if (size > key_field->size - 1) + size = key_field->size - 1; +- } + +- memcpy(compound_key + key_field->offset, key, size); ++ strncpy(compound_key + key_field->offset, (char *)key, size); ++ } else ++ memcpy(compound_key + key_field->offset, key, size); + } + + static void +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index 831be5ff5f4d..fc8b51744579 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1825,19 +1825,17 @@ static int soft_offline_in_use_page(struct page *page, int flags) + struct page *hpage = compound_head(page); + + if (!PageHuge(page) && PageTransHuge(hpage)) { +- lock_page(hpage); +- if (!PageAnon(hpage) || unlikely(split_huge_page(hpage))) { +- unlock_page(hpage); +- if (!PageAnon(hpage)) ++ lock_page(page); ++ if (!PageAnon(page) || unlikely(split_huge_page(page))) { ++ unlock_page(page); ++ if (!PageAnon(page)) + pr_info("soft offline: %#lx: non anonymous thp\n", page_to_pfn(page)); + else + pr_info("soft offline: %#lx: thp split failed\n", page_to_pfn(page)); +- put_hwpoison_page(hpage); ++ put_hwpoison_page(page); + return -EBUSY; + } +- unlock_page(hpage); +- get_hwpoison_page(page); +- put_hwpoison_page(hpage); ++ unlock_page(page); + } + + /* +diff --git a/mm/memory.c b/mm/memory.c +index e11ca9dd823f..e8d69ade5acc 100644 +--- a/mm/memory.c ++++ b/mm/memory.c +@@ -3517,10 +3517,13 @@ static vm_fault_t do_shared_fault(struct vm_fault *vmf) + * but allow concurrent faults). + * The mmap_sem may have been released depending on flags and our + * return value. See filemap_fault() and __lock_page_or_retry(). ++ * If mmap_sem is released, vma may become invalid (for example ++ * by other thread calling munmap()). 
+ */
+ static vm_fault_t do_fault(struct vm_fault *vmf)
+ {
+ struct vm_area_struct *vma = vmf->vma;
++ struct mm_struct *vm_mm = vma->vm_mm;
+ vm_fault_t ret;
+
+ /*
+@@ -3561,7 +3564,7 @@ static vm_fault_t do_fault(struct vm_fault *vmf)
+
+ /* preallocated pagetable is unused: free it */
+ if (vmf->prealloc_pte) {
+- pte_free(vma->vm_mm, vmf->prealloc_pte);
++ pte_free(vm_mm, vmf->prealloc_pte);
+ vmf->prealloc_pte = NULL;
+ }
+ return ret;
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 871e41c55e23..2cd24186ba84 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2248,7 +2248,7 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+ if (!(area->flags & VM_USERMAP))
+ return -EINVAL;
+
+- if (kaddr + size > area->addr + area->size)
++ if (kaddr + size > area->addr + get_vm_area_size(area))
+ return -EINVAL;
+
+ do {
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 357214a51f13..b85d51f4b8eb 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -1061,7 +1061,7 @@ struct p9_client *p9_client_create(const char *dev_name, char *options)
+ p9_debug(P9_DEBUG_ERROR,
+ "Please specify a msize of at least 4k\n");
+ err = -EINVAL;
+- goto free_client;
++ goto close_trans;
+ }
+
+ err = p9_client_version(clnt);
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index d7ec6132c046..d455537c8fc6 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -66,9 +66,6 @@ static void call_decode(struct rpc_task *task);
+ static void call_bind(struct rpc_task *task);
+ static void call_bind_status(struct rpc_task *task);
+ static void call_transmit(struct rpc_task *task);
+-#if defined(CONFIG_SUNRPC_BACKCHANNEL)
+-static void call_bc_transmit(struct rpc_task *task);
+-#endif /* CONFIG_SUNRPC_BACKCHANNEL */
+ static void call_status(struct rpc_task *task);
+ static void call_transmit_status(struct rpc_task *task);
+ static void call_refresh(struct rpc_task *task);
+@@ -80,6 +77,7 @@ static void call_connect_status(struct rpc_task *task);
+ static __be32 *rpc_encode_header(struct rpc_task *task);
+ static __be32 *rpc_verify_header(struct rpc_task *task);
+ static int rpc_ping(struct rpc_clnt *clnt);
++static void rpc_check_timeout(struct rpc_task *task);
+
+ static void rpc_register_client(struct rpc_clnt *clnt)
+ {
+@@ -1131,6 +1129,8 @@ rpc_call_async(struct rpc_clnt *clnt, const struct rpc_message *msg, int flags,
+ EXPORT_SYMBOL_GPL(rpc_call_async);
+
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_encode(struct rpc_task *task);
++
+ /**
+ * rpc_run_bc_task - Allocate a new RPC task for backchannel use, then run
+ * rpc_execute against it
+@@ -1152,7 +1152,7 @@ struct rpc_task *rpc_run_bc_task(struct rpc_rqst *req)
+ task = rpc_new_task(&task_setup_data);
+ xprt_init_bc_request(req, task);
+
+- task->tk_action = call_bc_transmit;
++ task->tk_action = call_bc_encode;
+ atomic_inc(&task->tk_count);
+ WARN_ON_ONCE(atomic_read(&task->tk_count) != 2);
+ rpc_execute(task);
+@@ -1786,7 +1786,12 @@ call_encode(struct rpc_task *task)
+ xprt_request_enqueue_receive(task);
+ xprt_request_enqueue_transmit(task);
+ out:
+- task->tk_action = call_bind;
++ task->tk_action = call_transmit;
++ /* Check that the connection is OK */
++ if (!xprt_bound(task->tk_xprt))
++ task->tk_action = call_bind;
++ else if (!xprt_connected(task->tk_xprt))
++ task->tk_action = call_connect;
+ }
+
+ /*
+@@ -1937,8 +1942,7 @@ call_connect_status(struct rpc_task *task)
+ break;
+ if (clnt->cl_autobind) {
+ rpc_force_rebind(clnt);
+- task->tk_action = call_bind;
+- return;
++ goto out_retry;
+ }
+ /* fall through */
+ case -ECONNRESET:
+@@ -1958,16 +1962,19 @@ call_connect_status(struct rpc_task *task)
+ /* fall through */
+ case -ENOTCONN:
+ case -EAGAIN:
+- /* Check for timeouts before looping back to call_bind */
+ case -ETIMEDOUT:
+- task->tk_action = call_timeout;
+- return;
++ goto out_retry;
+ case 0:
+ clnt->cl_stats->netreconn++;
+ task->tk_action = call_transmit;
+ return;
+ }
+ rpc_exit(task, status);
++ return;
++out_retry:
++ /* Check for timeouts before looping back to call_bind */
++ task->tk_action = call_bind;
++ rpc_check_timeout(task);
+ }
+
+ /*
+@@ -1978,13 +1985,19 @@ call_transmit(struct rpc_task *task)
+ {
+ dprint_status(task);
+
+- task->tk_status = 0;
++ task->tk_action = call_transmit_status;
+ if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
+ if (!xprt_prepare_transmit(task))
+ return;
+- xprt_transmit(task);
++ task->tk_status = 0;
++ if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++ if (!xprt_connected(task->tk_xprt)) {
++ task->tk_status = -ENOTCONN;
++ return;
++ }
++ xprt_transmit(task);
++ }
+ }
+- task->tk_action = call_transmit_status;
+ xprt_end_transmit(task);
+ }
+
+@@ -2038,7 +2051,7 @@ call_transmit_status(struct rpc_task *task)
+ trace_xprt_ping(task->tk_xprt,
+ task->tk_status);
+ rpc_exit(task, task->tk_status);
+- break;
++ return;
+ }
+ /* fall through */
+ case -ECONNRESET:
+@@ -2046,11 +2059,24 @@ call_transmit_status(struct rpc_task *task)
+ case -EADDRINUSE:
+ case -ENOTCONN:
+ case -EPIPE:
++ task->tk_action = call_bind;
++ task->tk_status = 0;
+ break;
+ }
++ rpc_check_timeout(task);
+ }
+
+ #if defined(CONFIG_SUNRPC_BACKCHANNEL)
++static void call_bc_transmit(struct rpc_task *task);
++static void call_bc_transmit_status(struct rpc_task *task);
++
++static void
++call_bc_encode(struct rpc_task *task)
++{
++ xprt_request_enqueue_transmit(task);
++ task->tk_action = call_bc_transmit;
++}
++
+ /*
+ * 5b. Send the backchannel RPC reply. On error, drop the reply. In
+ * addition, disconnect on connectivity errors.
+@@ -2058,26 +2084,23 @@ call_transmit_status(struct rpc_task *task)
+ static void
+ call_bc_transmit(struct rpc_task *task)
+ {
+- struct rpc_rqst *req = task->tk_rqstp;
+-
+- if (rpc_task_need_encode(task))
+- xprt_request_enqueue_transmit(task);
+- if (!test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate))
+- goto out_wakeup;
+-
+- if (!xprt_prepare_transmit(task))
+- goto out_retry;
+-
+- if (task->tk_status < 0) {
+- printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+- "error: %d\n", task->tk_status);
+- goto out_done;
++ task->tk_action = call_bc_transmit_status;
++ if (test_bit(RPC_TASK_NEED_XMIT, &task->tk_runstate)) {
++ if (!xprt_prepare_transmit(task))
++ return;
++ task->tk_status = 0;
++ xprt_transmit(task);
+ }
++ xprt_end_transmit(task);
++}
+
+- xprt_transmit(task);
++static void
++call_bc_transmit_status(struct rpc_task *task)
++{
++ struct rpc_rqst *req = task->tk_rqstp;
+
+- xprt_end_transmit(task);
+ dprint_status(task);
++
+ switch (task->tk_status) {
+ case 0:
+ /* Success */
+@@ -2091,8 +2114,14 @@ call_bc_transmit(struct rpc_task *task)
+ case -ENOTCONN:
+ case -EPIPE:
+ break;
++ case -ENOBUFS:
++ rpc_delay(task, HZ>>2);
++ /* fall through */
++ case -EBADSLT:
+ case -EAGAIN:
+- goto out_retry;
++ task->tk_status = 0;
++ task->tk_action = call_bc_transmit;
++ return;
+ case -ETIMEDOUT:
+ /*
+ * Problem reaching the server. Disconnect and let the
+@@ -2111,18 +2140,11 @@ call_bc_transmit(struct rpc_task *task)
+ * We were unable to reply and will have to drop the
+ * request. The server should reconnect and retransmit.
+ */
+- WARN_ON_ONCE(task->tk_status == -EAGAIN);
+ printk(KERN_NOTICE "RPC: Could not send backchannel reply "
+ "error: %d\n", task->tk_status);
+ break;
+ }
+-out_wakeup:
+- rpc_wake_up_queued_task(&req->rq_xprt->pending, task);
+-out_done:
+ task->tk_action = rpc_exit_task;
+- return;
+-out_retry:
+- task->tk_status = 0;
+ }
+ #endif /* CONFIG_SUNRPC_BACKCHANNEL */
+
+@@ -2178,7 +2200,7 @@ call_status(struct rpc_task *task)
+ case -EPIPE:
+ case -ENOTCONN:
+ case -EAGAIN:
+- task->tk_action = call_encode;
++ task->tk_action = call_timeout;
+ break;
+ case -EIO:
+ /* shutdown or soft timeout */
+@@ -2192,20 +2214,13 @@ call_status(struct rpc_task *task)
+ }
+ }
+
+-/*
+- * 6a. Handle RPC timeout
+- * We do not release the request slot, so we keep using the
+- * same XID for all retransmits.
+- */
+ static void
+-call_timeout(struct rpc_task *task)
++rpc_check_timeout(struct rpc_task *task)
+ {
+ struct rpc_clnt *clnt = task->tk_client;
+
+- if (xprt_adjust_timeout(task->tk_rqstp) == 0) {
+- dprintk("RPC: %5u call_timeout (minor)\n", task->tk_pid);
+- goto retry;
+- }
++ if (xprt_adjust_timeout(task->tk_rqstp) == 0)
++ return;
+
+ dprintk("RPC: %5u call_timeout (major)\n", task->tk_pid);
+ task->tk_timeouts++;
+@@ -2241,10 +2256,19 @@ call_timeout(struct rpc_task *task)
+ * event? RFC2203 requires the server to drop all such requests.
+ */
+ rpcauth_invalcred(task);
++}
+
+-retry:
++/*
++ * 6a. Handle RPC timeout
++ * We do not release the request slot, so we keep using the
++ * same XID for all retransmits.
++ */
++static void
++call_timeout(struct rpc_task *task)
++{
+ task->tk_action = call_encode;
+ task->tk_status = 0;
++ rpc_check_timeout(task);
+ }
+
+ /*
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index a6a060925e5d..43590a968b73 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -349,12 +349,16 @@ static ssize_t svc_recvfrom(struct svc_rqst *rqstp, struct kvec *iov,
+ /*
+ * Set socket snd and rcv buffer lengths
+ */
+-static void svc_sock_setbufsize(struct socket *sock, unsigned int snd,
+- unsigned int rcv)
++static void svc_sock_setbufsize(struct svc_sock *svsk, unsigned int nreqs)
+ {
++ unsigned int max_mesg = svsk->sk_xprt.xpt_server->sv_max_mesg;
++ struct socket *sock = svsk->sk_sock;
++
++ nreqs = min(nreqs, INT_MAX / 2 / max_mesg);
++
+ lock_sock(sock->sk);
+- sock->sk->sk_sndbuf = snd * 2;
+- sock->sk->sk_rcvbuf = rcv * 2;
++ sock->sk->sk_sndbuf = nreqs * max_mesg * 2;
++ sock->sk->sk_rcvbuf = nreqs * max_mesg * 2;
+ sock->sk->sk_write_space(sock->sk);
+ release_sock(sock->sk);
+ }
+@@ -516,9 +520,7 @@ static int svc_udp_recvfrom(struct svc_rqst *rqstp)
+ * provides an upper bound on the number of threads
+ * which will access the socket.
+ */
+- svc_sock_setbufsize(svsk->sk_sock,
+- (serv->sv_nrthreads+3) * serv->sv_max_mesg,
+- (serv->sv_nrthreads+3) * serv->sv_max_mesg);
++ svc_sock_setbufsize(svsk, serv->sv_nrthreads + 3);
+
+ clear_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+ skb = NULL;
+@@ -681,9 +683,7 @@ static void svc_udp_init(struct svc_sock *svsk, struct svc_serv *serv)
+ * receive and respond to one request.
+ * svc_udp_recvfrom will re-adjust if necessary
+ */
+- svc_sock_setbufsize(svsk->sk_sock,
+- 3 * svsk->sk_xprt.xpt_server->sv_max_mesg,
+- 3 * svsk->sk_xprt.xpt_server->sv_max_mesg);
++ svc_sock_setbufsize(svsk, 3);
+
+ /* data might have come in before data_ready set up */
+ set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index f0e36c3492ba..cf20dd36a30f 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -959,8 +959,11 @@ static int selinux_sb_clone_mnt_opts(const struct super_block *oldsb,
+ BUG_ON(!(oldsbsec->flags & SE_SBINITIALIZED));
+
+ /* if fs is reusing a sb, make sure that the contexts match */
+- if (newsbsec->flags & SE_SBINITIALIZED)
++ if (newsbsec->flags & SE_SBINITIALIZED) {
++ if ((kern_flags & SECURITY_LSM_NATIVE_LABELS) && !set_context)
++ *set_kern_flags |= SECURITY_LSM_NATIVE_LABELS;
+ return selinux_cmp_sb_context(oldsb, newsb);
++ }
+
+ mutex_lock(&newsbsec->lock);
+
+@@ -5120,6 +5123,9 @@ static int selinux_sctp_bind_connect(struct sock *sk, int optname,
+ return -EINVAL;
+ }
+
++ if (walk_size + len > addrlen)
++ return -EINVAL;
++
+ err = -EINVAL;
+ switch (optname) {
+ /* Bind checks */
+diff --git a/sound/soc/codecs/pcm186x.c b/sound/soc/codecs/pcm186x.c
+index 809b7e9f03ca..c5fcc632f670 100644
+--- a/sound/soc/codecs/pcm186x.c
++++ b/sound/soc/codecs/pcm186x.c
+@@ -42,7 +42,7 @@ struct pcm186x_priv {
+ bool is_master_mode;
+ };
+
+-static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 4000, 50);
++static const DECLARE_TLV_DB_SCALE(pcm186x_pga_tlv, -1200, 50, 0);
+
+ static const struct snd_kcontrol_new pcm1863_snd_controls[] = {
+ SOC_DOUBLE_R_S_TLV("ADC Capture Volume", PCM186X_PGA_VAL_CH1_L,
+@@ -158,7 +158,7 @@ static const struct snd_soc_dapm_widget pcm1863_dapm_widgets[] = {
+ * Put the codec into SLEEP mode when not in use, allowing the
+ * Energysense mechanism to operate.
+ */
+- SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1, 0),
++ SND_SOC_DAPM_ADC("ADC", "HiFi Capture", PCM186X_POWER_CTRL, 1, 1),
+ };
+
+ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+@@ -184,8 +184,8 @@ static const struct snd_soc_dapm_widget pcm1865_dapm_widgets[] = {
+ * Put the codec into SLEEP mode when not in use, allowing the
+ * Energysense mechanism to operate.
+ */
+- SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1, 0),
+- SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1, 0),
++ SND_SOC_DAPM_ADC("ADC1", "HiFi Capture 1", PCM186X_POWER_CTRL, 1, 1),
++ SND_SOC_DAPM_ADC("ADC2", "HiFi Capture 2", PCM186X_POWER_CTRL, 1, 1),
+ };
+
+ static const struct snd_soc_dapm_route pcm1863_dapm_routes[] = {
+diff --git a/sound/soc/fsl/fsl_esai.c b/sound/soc/fsl/fsl_esai.c
+index 57b484768a58..afe67c865330 100644
+--- a/sound/soc/fsl/fsl_esai.c
++++ b/sound/soc/fsl/fsl_esai.c
+@@ -398,7 +398,8 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ break;
+ case SND_SOC_DAIFMT_RIGHT_J:
+ /* Data on rising edge of bclk, frame high, right aligned */
+- xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCR_xWA;
++ xccr |= ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP;
++ xcr |= ESAI_xCR_xWA;
+ break;
+ case SND_SOC_DAIFMT_DSP_A:
+ /* Data on rising edge of bclk, frame high, 1clk before data */
+@@ -455,12 +456,12 @@ static int fsl_esai_set_dai_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ return -EINVAL;
+ }
+
+- mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR;
++ mask = ESAI_xCR_xFSL | ESAI_xCR_xFSR | ESAI_xCR_xWA;
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCR, mask, xcr);
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCR, mask, xcr);
+
+ mask = ESAI_xCCR_xCKP | ESAI_xCCR_xHCKP | ESAI_xCCR_xFSP |
+- ESAI_xCCR_xFSD | ESAI_xCCR_xCKD | ESAI_xCR_xWA;
++ ESAI_xCCR_xFSD | ESAI_xCCR_xCKD;
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_TCCR, mask, xccr);
+ regmap_update_bits(esai_priv->regmap, REG_ESAI_RCCR, mask, xccr);
+
+diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
+index f69961c4a4f3..2921ce08b198 100644
+--- a/tools/perf/util/auxtrace.c
++++ b/tools/perf/util/auxtrace.c
+@@ -1278,9 +1278,9 @@ static int __auxtrace_mmap__read(struct perf_mmap *map,
+ }
+
+ /* padding must be written by fn() e.g. record__process_auxtrace() */
+- padding = size & 7;
++ padding = size & (PERF_AUXTRACE_RECORD_ALIGNMENT - 1);
+ if (padding)
+- padding = 8 - padding;
++ padding = PERF_AUXTRACE_RECORD_ALIGNMENT - padding;
+
+ memset(&ev, 0, sizeof(ev));
+ ev.auxtrace.header.type = PERF_RECORD_AUXTRACE;
+diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
+index 8e50f96d4b23..fac32482db61 100644
+--- a/tools/perf/util/auxtrace.h
++++ b/tools/perf/util/auxtrace.h
+@@ -40,6 +40,9 @@ struct record_opts;
+ struct auxtrace_info_event;
+ struct events_stats;
+
++/* Auxtrace records must have the same alignment as perf event records */
++#define PERF_AUXTRACE_RECORD_ALIGNMENT 8
++
+ enum auxtrace_type {
+ PERF_AUXTRACE_UNKNOWN,
+ PERF_AUXTRACE_INTEL_PT,
+diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+index 4503f3ca45ab..a54d6c9a4601 100644
+--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
++++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+@@ -26,6 +26,7 @@
+
+ #include "../cache.h"
+ #include "../util.h"
++#include "../auxtrace.h"
+
+ #include "intel-pt-insn-decoder.h"
+ #include "intel-pt-pkt-decoder.h"
+@@ -1394,7 +1395,6 @@ static int intel_pt_overflow(struct intel_pt_decoder *decoder)
+ {
+ intel_pt_log("ERROR: Buffer overflow\n");
+ intel_pt_clear_tx_flags(decoder);
+- decoder->cbr = 0;
+ decoder->timestamp_insn_cnt = 0;
+ decoder->pkt_state = INTEL_PT_STATE_ERR_RESYNC;
+ decoder->overflow = true;
+@@ -2575,6 +2575,34 @@ static int intel_pt_tsc_cmp(uint64_t tsc1, uint64_t tsc2)
+ }
+ }
+
++#define MAX_PADDING (PERF_AUXTRACE_RECORD_ALIGNMENT - 1)
++
++/**
++ * adj_for_padding - adjust overlap to account for padding.
++ * @buf_b: second buffer
++ * @buf_a: first buffer
++ * @len_a: size of first buffer
++ *
++ * @buf_a might have up to 7 bytes of padding appended. Adjust the overlap
++ * accordingly.
++ *
++ * Return: A pointer into @buf_b from where non-overlapped data starts
++ */
++static unsigned char *adj_for_padding(unsigned char *buf_b,
++ unsigned char *buf_a, size_t len_a)
++{
++ unsigned char *p = buf_b - MAX_PADDING;
++ unsigned char *q = buf_a + len_a - MAX_PADDING;
++ int i;
++
++ for (i = MAX_PADDING; i; i--, p++, q++) {
++ if (*p != *q)
++ break;
++ }
++
++ return p;
++}
++
+ /**
+ * intel_pt_find_overlap_tsc - determine start of non-overlapped trace data
+ * using TSC.
+@@ -2625,8 +2653,11 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
+
+ /* Same TSC, so buffers are consecutive */
+ if (!cmp && rem_b >= rem_a) {
++ unsigned char *start;
++
+ *consecutive = true;
+- return buf_b + len_b - (rem_b - rem_a);
++ start = buf_b + len_b - (rem_b - rem_a);
++ return adj_for_padding(start, buf_a, len_a);
+ }
+ if (cmp < 0)
+ return buf_b; /* tsc_a < tsc_b => no overlap */
+@@ -2689,7 +2720,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,
+ found = memmem(buf_a, len_a, buf_b, len_a);
+ if (found) {
+ *consecutive = true;
+- return buf_b + len_a;
++ return adj_for_padding(buf_b + len_a, buf_a, len_a);
+ }
+
+ /* Try again at next PSB in buffer 'a' */
+diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
+index 2e72373ec6df..4493fc13a6fa 100644
+--- a/tools/perf/util/intel-pt.c
++++ b/tools/perf/util/intel-pt.c
+@@ -2522,6 +2522,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
+ }
+
+ pt->timeless_decoding = intel_pt_timeless_decoding(pt);
++ if (pt->timeless_decoding && !pt->tc.time_mult)
++ pt->tc.time_mult = 1;
+ pt->have_tsc = intel_pt_have_tsc(pt);
+ pt->sampling_mode = false;
+ pt->est_tsc = !pt->timeless_decoding;
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 48efad6d0f90..ca5f2e4796ea 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -710,6 +710,8 @@ static int map_groups__split_kallsyms_for_kcore(struct map_groups *kmaps, struct
+ }
+
+ pos->start -= curr_map->start - curr_map->pgoff;
++ if (pos->end > curr_map->end)
++ pos->end = curr_map->end;
+ if (pos->end)
+ pos->end -= curr_map->start - curr_map->pgoff;
+ symbols__insert(&curr_map->dso->symbols, pos);
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 30251e288629..5cc22cdaa5ba 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -2353,7 +2353,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ return 0;
+ }
+
+-void kvm_arch_memslots_updated(struct kvm *kvm, struct kvm_memslots *slots)
++void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
+ {
+ }
+
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 076bc38963bf..4e1024dbb73f 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -874,6 +874,7 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ int as_id, struct kvm_memslots *slots)
+ {
+ struct kvm_memslots *old_memslots = __kvm_memslots(kvm, as_id);
++ u64 gen;
+
+ /*
+ * Set the low bit in the generation, which disables SPTE caching
+@@ -896,9 +897,11 @@ static struct kvm_memslots *install_new_memslots(struct kvm *kvm,
+ * space 0 will use generations 0, 4, 8, ... while address space 1 will
+ * use generations 2, 6, 10, 14, ...
+ */
+- slots->generation += KVM_ADDRESS_SPACE_NUM * 2 - 1;
++ gen = slots->generation + KVM_ADDRESS_SPACE_NUM * 2 - 1;
+
+- kvm_arch_memslots_updated(kvm, slots);
++ kvm_arch_memslots_updated(kvm, gen);
++
++ slots->generation = gen;
+
+ return old_memslots;
+ }