From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 4CA2D138334 for ; Sun, 4 Aug 2019 16:05:10 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 791D5E0883; Sun, 4 Aug 2019 16:05:09 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 1E22BE0883 for ; Sun, 4 Aug 2019 16:05:09 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 524263498DA for ; Sun, 4 Aug 2019 16:05:07 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id D00734D4 for ; Sun, 4 Aug 2019 16:05:05 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1564934688.f5230239f42bbb178d59c65a220e973e3f892ddf.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.9 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1186_linux-4.9.187.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: f5230239f42bbb178d59c65a220e973e3f892ddf
X-VCS-Branch: 4.9
Date: Sun, 4 Aug 2019 16:05:05 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 46e07b6e-6144-4b39-bf52-6de19a616176
X-Archives-Hash: 2502068a8495ca0e1f6c8c1425c2f815

commit: f5230239f42bbb178d59c65a220e973e3f892ddf
Author: Mike Pagano gentoo org>
AuthorDate: Sun Aug 4 16:04:48 2019 +0000
Commit: Mike Pagano gentoo org>
CommitDate: Sun Aug 4 16:04:48 2019 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f5230239

Linux patch 4.9.187

Signed-off-by: Mike Pagano gentoo.org>

 0000_README | 4 +
 1186_linux-4.9.187.patch | 11861 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11865 insertions(+)

diff --git a/0000_README b/0000_README
index e61465c..ddc9ef6 100644
--- a/0000_README
+++ b/0000_README
@@ -787,6 +787,10 @@ Patch: 1185_linux-4.9.186.patch
 From: http://www.kernel.org
 Desc: Linux 4.9.186
 
+Patch: 1186_linux-4.9.187.patch
+From: http://www.kernel.org
+Desc: Linux 4.9.187
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1186_linux-4.9.187.patch b/1186_linux-4.9.187.patch new file mode 100644 index 0000000..f0de753 --- /dev/null +++ b/1186_linux-4.9.187.patch @@ -0,0 +1,11861 @@ +diff --git a/Documentation/devicetree/bindings/serial/mvebu-uart.txt b/Documentation/devicetree/bindings/serial/mvebu-uart.txt +index 6087defd9f93..d37fabe17bd1 100644 +--- a/Documentation/devicetree/bindings/serial/mvebu-uart.txt ++++ b/Documentation/devicetree/bindings/serial/mvebu-uart.txt +@@ -8,6 +8,6 @@ Required properties: + Example: + serial@12000 { + compatible = "marvell,armada-3700-uart"; +- reg = <0x12000 0x400>; ++ reg = <0x12000 0x200>; + interrupts = <43>; + }; +diff --git a/Makefile b/Makefile +index 1ab22a85118f..65ed5dc69ec9 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 4 + PATCHLEVEL = 9 +-SUBLEVEL = 186 ++SUBLEVEL = 187 + EXTRAVERSION = + NAME = Roaring Lionus + +@@ -515,6 +515,7 @@ ifneq ($(GCC_TOOLCHAIN),) + CLANG_FLAGS += --gcc-toolchain=$(GCC_TOOLCHAIN) + endif + CLANG_FLAGS += -no-integrated-as ++CLANG_FLAGS += -Werror=unknown-warning-option + KBUILD_CFLAGS += $(CLANG_FLAGS) + KBUILD_AFLAGS += $(CLANG_FLAGS) + endif +diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi +index 68e6f88bdcfe..f2004b0955f1 100644 +--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi ++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi +@@ -96,7 +96,7 @@ + + uart0: serial@12000 { + compatible = "marvell,armada-3700-uart"; +- reg = <0x12000 0x400>; ++ reg = <0x12000 0x200>; + interrupts = ; + status = "disabled"; + }; +diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi +index 906fb836d241..6a51d282ec63 100644 +--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi ++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi +@@ -306,7 +306,8 @@ + regulator-max-microvolt = <1320000>; + enable-gpios = <&pmic 6 GPIO_ACTIVE_HIGH>; + regulator-ramp-delay = <80>; +- regulator-enable-ramp-delay = <1000>; ++ regulator-enable-ramp-delay = <2000>; ++ regulator-settling-time-us = <160>; + }; + }; + }; +diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi +index 46045fe719da..87ef72bffd86 100644 +--- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi ++++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi +@@ -1020,7 +1020,7 @@ + compatible = "nvidia,tegra210-agic"; + #interrupt-cells = <3>; + interrupt-controller; +- reg = <0x702f9000 0x2000>, ++ reg = <0x702f9000 0x1000>, + <0x702fa000 0x2000>; + interrupts = ; + clocks = <&tegra_car TEGRA210_CLK_APE>; +diff --git a/arch/arm64/crypto/sha1-ce-glue.c b/arch/arm64/crypto/sha1-ce-glue.c +index ea319c055f5d..1b7b4684c35b 100644 +--- a/arch/arm64/crypto/sha1-ce-glue.c ++++ b/arch/arm64/crypto/sha1-ce-glue.c +@@ -50,7 +50,7 @@ static int sha1_ce_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) + { + struct sha1_ce_state *sctx = shash_desc_ctx(desc); +- bool finalize = !sctx->sst.count && !(len % SHA1_BLOCK_SIZE); ++ bool finalize = !sctx->sst.count && !(len % SHA1_BLOCK_SIZE) && len; + + /* + * Allow the asm code to perform the finalization if there is no +diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-glue.c +index 0ed9486f75dd..356ca9397a86 100644 +--- a/arch/arm64/crypto/sha2-ce-glue.c ++++ b/arch/arm64/crypto/sha2-ce-glue.c +@@ -52,7 +52,7 @@ static int sha256_ce_finup(struct shash_desc *desc, const u8 *data, + unsigned int len, u8 *out) + { + struct sha256_ce_state *sctx = 
shash_desc_ctx(desc); +- bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE); ++ bool finalize = !sctx->sst.count && !(len % SHA256_BLOCK_SIZE) && len; + + /* + * Allow the asm code to perform the finalization if there is no +diff --git a/arch/arm64/include/asm/compat.h b/arch/arm64/include/asm/compat.h +index eb8432bb82b8..b69e27152ea5 100644 +--- a/arch/arm64/include/asm/compat.h ++++ b/arch/arm64/include/asm/compat.h +@@ -234,6 +234,7 @@ static inline compat_uptr_t ptr_to_compat(void __user *uptr) + } + + #define compat_user_stack_pointer() (user_stack_pointer(task_pt_regs(current))) ++#define COMPAT_MINSIGSTKSZ 2048 + + static inline void __user *arch_compat_alloc_user_space(long len) + { +diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c +index 252a6d9c1da5..1a95d135def2 100644 +--- a/arch/arm64/kernel/acpi.c ++++ b/arch/arm64/kernel/acpi.c +@@ -157,10 +157,14 @@ static int __init acpi_fadt_sanity_check(void) + */ + if (table->revision < 5 || + (table->revision == 5 && fadt->minor_revision < 1)) { +- pr_err("Unsupported FADT revision %d.%d, should be 5.1+\n", ++ pr_err(FW_BUG "Unsupported FADT revision %d.%d, should be 5.1+\n", + table->revision, fadt->minor_revision); +- ret = -EINVAL; +- goto out; ++ ++ if (!fadt->arm_boot_flags) { ++ ret = -EINVAL; ++ goto out; ++ } ++ pr_err("FADT has ARM boot flags set, assuming 5.1\n"); + } + + if (!(fadt->flags & ACPI_FADT_HW_REDUCED)) { +diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h +index c7fcb232fe47..d3e8c901274d 100644 +--- a/arch/arm64/kernel/image.h ++++ b/arch/arm64/kernel/image.h +@@ -73,7 +73,11 @@ + + #ifdef CONFIG_EFI + +-__efistub_stext_offset = stext - _text; ++/* ++ * Use ABSOLUTE() to avoid ld.lld treating this as a relative symbol: ++ * https://github.com/ClangBuiltLinux/linux/issues/561 ++ */ ++__efistub_stext_offset = ABSOLUTE(stext - _text); + + /* + * Prevent the symbol aliases below from being emitted into the kallsyms +diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile +index 90aca95fe314..ad31c76c7a29 100644 +--- a/arch/mips/boot/compressed/Makefile ++++ b/arch/mips/boot/compressed/Makefile +@@ -75,6 +75,8 @@ OBJCOPYFLAGS_piggy.o := --add-section=.image=$(obj)/vmlinux.bin.z \ + $(obj)/piggy.o: $(obj)/dummy.o $(obj)/vmlinux.bin.z FORCE + $(call if_changed,objcopy) + ++HOSTCFLAGS_calc_vmlinuz_load_addr.o += $(LINUXINCLUDE) ++ + # Calculate the load address of the compressed kernel image + hostprogs-y := calc_vmlinuz_load_addr + +diff --git a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c +index 542c3ede9722..d14f75ec8273 100644 +--- a/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c ++++ b/arch/mips/boot/compressed/calc_vmlinuz_load_addr.c +@@ -13,7 +13,7 @@ + #include + #include + #include +-#include "../../../../include/linux/sizes.h" ++#include + + int main(int argc, char *argv[]) + { +diff --git a/arch/mips/include/asm/mach-ath79/ar933x_uart.h b/arch/mips/include/asm/mach-ath79/ar933x_uart.h +index c2917b39966b..bba2c8837951 100644 +--- a/arch/mips/include/asm/mach-ath79/ar933x_uart.h ++++ b/arch/mips/include/asm/mach-ath79/ar933x_uart.h +@@ -27,8 +27,8 @@ + #define AR933X_UART_CS_PARITY_S 0 + #define AR933X_UART_CS_PARITY_M 0x3 + #define AR933X_UART_CS_PARITY_NONE 0 +-#define AR933X_UART_CS_PARITY_ODD 1 +-#define AR933X_UART_CS_PARITY_EVEN 2 ++#define AR933X_UART_CS_PARITY_ODD 2 ++#define AR933X_UART_CS_PARITY_EVEN 3 + #define AR933X_UART_CS_IF_MODE_S 2 + #define 
AR933X_UART_CS_IF_MODE_M 0x3 + #define AR933X_UART_CS_IF_MODE_NONE 0 +diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c +index 0780c375fe2e..e204fc49517d 100644 +--- a/arch/parisc/kernel/ptrace.c ++++ b/arch/parisc/kernel/ptrace.c +@@ -170,6 +170,9 @@ long arch_ptrace(struct task_struct *child, long request, + if ((addr & (sizeof(unsigned long)-1)) || + addr >= sizeof(struct pt_regs)) + break; ++ if (addr == PT_IAOQ0 || addr == PT_IAOQ1) { ++ data |= 3; /* ensure userspace privilege */ ++ } + if ((addr >= PT_GR1 && addr <= PT_GR31) || + addr == PT_IAOQ0 || addr == PT_IAOQ1 || + (addr >= PT_FR0 && addr <= PT_FR31 + 4) || +@@ -231,16 +234,18 @@ long arch_ptrace(struct task_struct *child, long request, + + static compat_ulong_t translate_usr_offset(compat_ulong_t offset) + { +- if (offset < 0) +- return sizeof(struct pt_regs); +- else if (offset <= 32*4) /* gr[0..31] */ +- return offset * 2 + 4; +- else if (offset <= 32*4+32*8) /* gr[0..31] + fr[0..31] */ +- return offset + 32*4; +- else if (offset < sizeof(struct pt_regs)/2 + 32*4) +- return offset * 2 + 4 - 32*8; ++ compat_ulong_t pos; ++ ++ if (offset < 32*4) /* gr[0..31] */ ++ pos = offset * 2 + 4; ++ else if (offset < 32*4+32*8) /* fr[0] ... fr[31] */ ++ pos = (offset - 32*4) + PT_FR0; ++ else if (offset < sizeof(struct pt_regs)/2 + 32*4) /* sr[0] ... ipsw */ ++ pos = (offset - 32*4 - 32*8) * 2 + PT_SR0 + 4; + else +- return sizeof(struct pt_regs); ++ pos = sizeof(struct pt_regs); ++ ++ return pos; + } + + long compat_arch_ptrace(struct task_struct *child, compat_long_t request, +@@ -284,9 +289,12 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, + addr = translate_usr_offset(addr); + if (addr >= sizeof(struct pt_regs)) + break; ++ if (addr == PT_IAOQ0+4 || addr == PT_IAOQ1+4) { ++ data |= 3; /* ensure userspace privilege */ ++ } + if (addr >= PT_FR0 && addr <= PT_FR31 + 4) { + /* Special case, fp regs are 64 bits anyway */ +- *(__u64 *) ((char *) task_regs(child) + addr) = data; ++ *(__u32 *) ((char *) task_regs(child) + addr) = data; + ret = 0; + } + else if ((addr >= PT_GR1+4 && addr <= PT_GR31+4) || +@@ -499,7 +507,8 @@ static void set_reg(struct pt_regs *regs, int num, unsigned long val) + return; + case RI(iaoq[0]): + case RI(iaoq[1]): +- regs->iaoq[num - RI(iaoq[0])] = val; ++ /* set 2 lowest bits to ensure userspace privilege: */ ++ regs->iaoq[num - RI(iaoq[0])] = val | 3; + return; + case RI(sar): regs->sar = val; + return; +diff --git a/arch/powerpc/boot/xz_config.h b/arch/powerpc/boot/xz_config.h +index 5c6afdbca642..21b52c15aafc 100644 +--- a/arch/powerpc/boot/xz_config.h ++++ b/arch/powerpc/boot/xz_config.h +@@ -19,10 +19,30 @@ static inline uint32_t swab32p(void *p) + + #ifdef __LITTLE_ENDIAN__ + #define get_le32(p) (*((uint32_t *) (p))) ++#define cpu_to_be32(x) swab32(x) ++static inline u32 be32_to_cpup(const u32 *p) ++{ ++ return swab32p((u32 *)p); ++} + #else + #define get_le32(p) swab32p(p) ++#define cpu_to_be32(x) (x) ++static inline u32 be32_to_cpup(const u32 *p) ++{ ++ return *p; ++} + #endif + ++static inline uint32_t get_unaligned_be32(const void *p) ++{ ++ return be32_to_cpup(p); ++} ++ ++static inline void put_unaligned_be32(u32 val, void *p) ++{ ++ *((u32 *)p) = cpu_to_be32(val); ++} ++ + #define memeq(a, b, size) (memcmp(a, b, size) == 0) + #define memzero(buf, size) memset(buf, 0, size) + +diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c +index 8336b9016ca9..a7f229e59892 100644 +--- a/arch/powerpc/kernel/eeh.c ++++ b/arch/powerpc/kernel/eeh.c 
+@@ -362,10 +362,19 @@ static inline unsigned long eeh_token_to_phys(unsigned long token) + NULL, &hugepage_shift); + if (!ptep) + return token; +- WARN_ON(hugepage_shift); +- pa = pte_pfn(*ptep) << PAGE_SHIFT; + +- return pa | (token & (PAGE_SIZE-1)); ++ pa = pte_pfn(*ptep); ++ ++ /* On radix we can do hugepage mappings for io, so handle that */ ++ if (hugepage_shift) { ++ pa <<= hugepage_shift; ++ pa |= token & ((1ul << hugepage_shift) - 1); ++ } else { ++ pa <<= PAGE_SHIFT; ++ pa |= token & (PAGE_SIZE - 1); ++ } ++ ++ return pa; + } + + /* +diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S +index d50cc9b38b80..92474227262b 100644 +--- a/arch/powerpc/kernel/exceptions-64s.S ++++ b/arch/powerpc/kernel/exceptions-64s.S +@@ -1505,7 +1505,7 @@ handle_page_fault: + addi r3,r1,STACK_FRAME_OVERHEAD + bl do_page_fault + cmpdi r3,0 +- beq+ 12f ++ beq+ ret_from_except_lite + bl save_nvgprs + mr r5,r3 + addi r3,r1,STACK_FRAME_OVERHEAD +@@ -1520,7 +1520,12 @@ handle_dabr_fault: + ld r5,_DSISR(r1) + addi r3,r1,STACK_FRAME_OVERHEAD + bl do_break +-12: b ret_from_except_lite ++ /* ++ * do_break() may have changed the NV GPRS while handling a breakpoint. ++ * If so, we need to restore them with their updated values. Don't use ++ * ret_from_except_lite here. ++ */ ++ b ret_from_except + + + #ifdef CONFIG_PPC_STD_MMU_64 +diff --git a/arch/powerpc/kernel/pci_of_scan.c b/arch/powerpc/kernel/pci_of_scan.c +index ea3d98115b88..e0648a09d9c8 100644 +--- a/arch/powerpc/kernel/pci_of_scan.c ++++ b/arch/powerpc/kernel/pci_of_scan.c +@@ -45,6 +45,8 @@ static unsigned int pci_parse_of_flags(u32 addr0, int bridge) + if (addr0 & 0x02000000) { + flags = IORESOURCE_MEM | PCI_BASE_ADDRESS_SPACE_MEMORY; + flags |= (addr0 >> 22) & PCI_BASE_ADDRESS_MEM_TYPE_64; ++ if (flags & PCI_BASE_ADDRESS_MEM_TYPE_64) ++ flags |= IORESOURCE_MEM_64; + flags |= (addr0 >> 28) & PCI_BASE_ADDRESS_MEM_TYPE_1M; + if (addr0 & 0x40000000) + flags |= IORESOURCE_PREFETCH +diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c +index 2bfa5a7bb672..a378b1e80a1a 100644 +--- a/arch/powerpc/kernel/signal_32.c ++++ b/arch/powerpc/kernel/signal_32.c +@@ -1281,6 +1281,9 @@ long sys_rt_sigreturn(int r3, int r4, int r5, int r6, int r7, int r8, + goto bad; + + if (MSR_TM_ACTIVE(msr_hi<<32)) { ++ /* Trying to start TM on non TM system */ ++ if (!cpu_has_feature(CPU_FTR_TM)) ++ goto bad; + /* We only recheckpoint on return if we're + * transaction. + */ +diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c +index bdf2f7b995bb..f4c46b0ec611 100644 +--- a/arch/powerpc/kernel/signal_64.c ++++ b/arch/powerpc/kernel/signal_64.c +@@ -741,6 +741,11 @@ int sys_rt_sigreturn(unsigned long r3, unsigned long r4, unsigned long r5, + if (MSR_TM_ACTIVE(msr)) { + /* We recheckpoint on return. 
*/ + struct ucontext __user *uc_transact; ++ ++ /* Trying to start TM on non TM system */ ++ if (!cpu_has_feature(CPU_FTR_TM)) ++ goto badframe; ++ + if (__get_user(uc_transact, &uc->uc_link)) + goto badframe; + if (restore_tm_sigcontexts(current, &uc->uc_mcontext, +diff --git a/arch/powerpc/kernel/swsusp_32.S b/arch/powerpc/kernel/swsusp_32.S +index ba4dee3d233f..884d1c3a187b 100644 +--- a/arch/powerpc/kernel/swsusp_32.S ++++ b/arch/powerpc/kernel/swsusp_32.S +@@ -23,11 +23,19 @@ + #define SL_IBAT2 0x48 + #define SL_DBAT3 0x50 + #define SL_IBAT3 0x58 +-#define SL_TB 0x60 +-#define SL_R2 0x68 +-#define SL_CR 0x6c +-#define SL_LR 0x70 +-#define SL_R12 0x74 /* r12 to r31 */ ++#define SL_DBAT4 0x60 ++#define SL_IBAT4 0x68 ++#define SL_DBAT5 0x70 ++#define SL_IBAT5 0x78 ++#define SL_DBAT6 0x80 ++#define SL_IBAT6 0x88 ++#define SL_DBAT7 0x90 ++#define SL_IBAT7 0x98 ++#define SL_TB 0xa0 ++#define SL_R2 0xa8 ++#define SL_CR 0xac ++#define SL_LR 0xb0 ++#define SL_R12 0xb4 /* r12 to r31 */ + #define SL_SIZE (SL_R12 + 80) + + .section .data +@@ -112,6 +120,41 @@ _GLOBAL(swsusp_arch_suspend) + mfibatl r4,3 + stw r4,SL_IBAT3+4(r11) + ++BEGIN_MMU_FTR_SECTION ++ mfspr r4,SPRN_DBAT4U ++ stw r4,SL_DBAT4(r11) ++ mfspr r4,SPRN_DBAT4L ++ stw r4,SL_DBAT4+4(r11) ++ mfspr r4,SPRN_DBAT5U ++ stw r4,SL_DBAT5(r11) ++ mfspr r4,SPRN_DBAT5L ++ stw r4,SL_DBAT5+4(r11) ++ mfspr r4,SPRN_DBAT6U ++ stw r4,SL_DBAT6(r11) ++ mfspr r4,SPRN_DBAT6L ++ stw r4,SL_DBAT6+4(r11) ++ mfspr r4,SPRN_DBAT7U ++ stw r4,SL_DBAT7(r11) ++ mfspr r4,SPRN_DBAT7L ++ stw r4,SL_DBAT7+4(r11) ++ mfspr r4,SPRN_IBAT4U ++ stw r4,SL_IBAT4(r11) ++ mfspr r4,SPRN_IBAT4L ++ stw r4,SL_IBAT4+4(r11) ++ mfspr r4,SPRN_IBAT5U ++ stw r4,SL_IBAT5(r11) ++ mfspr r4,SPRN_IBAT5L ++ stw r4,SL_IBAT5+4(r11) ++ mfspr r4,SPRN_IBAT6U ++ stw r4,SL_IBAT6(r11) ++ mfspr r4,SPRN_IBAT6L ++ stw r4,SL_IBAT6+4(r11) ++ mfspr r4,SPRN_IBAT7U ++ stw r4,SL_IBAT7(r11) ++ mfspr r4,SPRN_IBAT7L ++ stw r4,SL_IBAT7+4(r11) ++END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) ++ + #if 0 + /* Backup various CPU config stuffs */ + bl __save_cpu_setup +@@ -277,27 +320,41 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC) + mtibatu 3,r4 + lwz r4,SL_IBAT3+4(r11) + mtibatl 3,r4 +-#endif +- + BEGIN_MMU_FTR_SECTION +- li r4,0 ++ lwz r4,SL_DBAT4(r11) + mtspr SPRN_DBAT4U,r4 ++ lwz r4,SL_DBAT4+4(r11) + mtspr SPRN_DBAT4L,r4 ++ lwz r4,SL_DBAT5(r11) + mtspr SPRN_DBAT5U,r4 ++ lwz r4,SL_DBAT5+4(r11) + mtspr SPRN_DBAT5L,r4 ++ lwz r4,SL_DBAT6(r11) + mtspr SPRN_DBAT6U,r4 ++ lwz r4,SL_DBAT6+4(r11) + mtspr SPRN_DBAT6L,r4 ++ lwz r4,SL_DBAT7(r11) + mtspr SPRN_DBAT7U,r4 ++ lwz r4,SL_DBAT7+4(r11) + mtspr SPRN_DBAT7L,r4 ++ lwz r4,SL_IBAT4(r11) + mtspr SPRN_IBAT4U,r4 ++ lwz r4,SL_IBAT4+4(r11) + mtspr SPRN_IBAT4L,r4 ++ lwz r4,SL_IBAT5(r11) + mtspr SPRN_IBAT5U,r4 ++ lwz r4,SL_IBAT5+4(r11) + mtspr SPRN_IBAT5L,r4 ++ lwz r4,SL_IBAT6(r11) + mtspr SPRN_IBAT6U,r4 ++ lwz r4,SL_IBAT6+4(r11) + mtspr SPRN_IBAT6L,r4 ++ lwz r4,SL_IBAT7(r11) + mtspr SPRN_IBAT7U,r4 ++ lwz r4,SL_IBAT7+4(r11) + mtspr SPRN_IBAT7L,r4 + END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) ++#endif + + /* Flush all TLBs */ + lis r4,0x1000 +diff --git a/arch/powerpc/platforms/powermac/sleep.S b/arch/powerpc/platforms/powermac/sleep.S +index 1c2802fabd57..c856cd7fcdc4 100644 +--- a/arch/powerpc/platforms/powermac/sleep.S ++++ b/arch/powerpc/platforms/powermac/sleep.S +@@ -37,10 +37,18 @@ + #define SL_IBAT2 0x48 + #define SL_DBAT3 0x50 + #define SL_IBAT3 0x58 +-#define SL_TB 0x60 +-#define SL_R2 0x68 +-#define SL_CR 0x6c +-#define SL_R12 0x70 /* r12 to r31 */ ++#define SL_DBAT4 
0x60 ++#define SL_IBAT4 0x68 ++#define SL_DBAT5 0x70 ++#define SL_IBAT5 0x78 ++#define SL_DBAT6 0x80 ++#define SL_IBAT6 0x88 ++#define SL_DBAT7 0x90 ++#define SL_IBAT7 0x98 ++#define SL_TB 0xa0 ++#define SL_R2 0xa8 ++#define SL_CR 0xac ++#define SL_R12 0xb0 /* r12 to r31 */ + #define SL_SIZE (SL_R12 + 80) + + .section .text +@@ -125,6 +133,41 @@ _GLOBAL(low_sleep_handler) + mfibatl r4,3 + stw r4,SL_IBAT3+4(r1) + ++BEGIN_MMU_FTR_SECTION ++ mfspr r4,SPRN_DBAT4U ++ stw r4,SL_DBAT4(r1) ++ mfspr r4,SPRN_DBAT4L ++ stw r4,SL_DBAT4+4(r1) ++ mfspr r4,SPRN_DBAT5U ++ stw r4,SL_DBAT5(r1) ++ mfspr r4,SPRN_DBAT5L ++ stw r4,SL_DBAT5+4(r1) ++ mfspr r4,SPRN_DBAT6U ++ stw r4,SL_DBAT6(r1) ++ mfspr r4,SPRN_DBAT6L ++ stw r4,SL_DBAT6+4(r1) ++ mfspr r4,SPRN_DBAT7U ++ stw r4,SL_DBAT7(r1) ++ mfspr r4,SPRN_DBAT7L ++ stw r4,SL_DBAT7+4(r1) ++ mfspr r4,SPRN_IBAT4U ++ stw r4,SL_IBAT4(r1) ++ mfspr r4,SPRN_IBAT4L ++ stw r4,SL_IBAT4+4(r1) ++ mfspr r4,SPRN_IBAT5U ++ stw r4,SL_IBAT5(r1) ++ mfspr r4,SPRN_IBAT5L ++ stw r4,SL_IBAT5+4(r1) ++ mfspr r4,SPRN_IBAT6U ++ stw r4,SL_IBAT6(r1) ++ mfspr r4,SPRN_IBAT6L ++ stw r4,SL_IBAT6+4(r1) ++ mfspr r4,SPRN_IBAT7U ++ stw r4,SL_IBAT7(r1) ++ mfspr r4,SPRN_IBAT7L ++ stw r4,SL_IBAT7+4(r1) ++END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) ++ + /* Backup various CPU config stuffs */ + bl __save_cpu_setup + +@@ -325,22 +368,37 @@ grackle_wake_up: + mtibatl 3,r4 + + BEGIN_MMU_FTR_SECTION +- li r4,0 ++ lwz r4,SL_DBAT4(r1) + mtspr SPRN_DBAT4U,r4 ++ lwz r4,SL_DBAT4+4(r1) + mtspr SPRN_DBAT4L,r4 ++ lwz r4,SL_DBAT5(r1) + mtspr SPRN_DBAT5U,r4 ++ lwz r4,SL_DBAT5+4(r1) + mtspr SPRN_DBAT5L,r4 ++ lwz r4,SL_DBAT6(r1) + mtspr SPRN_DBAT6U,r4 ++ lwz r4,SL_DBAT6+4(r1) + mtspr SPRN_DBAT6L,r4 ++ lwz r4,SL_DBAT7(r1) + mtspr SPRN_DBAT7U,r4 ++ lwz r4,SL_DBAT7+4(r1) + mtspr SPRN_DBAT7L,r4 ++ lwz r4,SL_IBAT4(r1) + mtspr SPRN_IBAT4U,r4 ++ lwz r4,SL_IBAT4+4(r1) + mtspr SPRN_IBAT4L,r4 ++ lwz r4,SL_IBAT5(r1) + mtspr SPRN_IBAT5U,r4 ++ lwz r4,SL_IBAT5+4(r1) + mtspr SPRN_IBAT5L,r4 ++ lwz r4,SL_IBAT6(r1) + mtspr SPRN_IBAT6U,r4 ++ lwz r4,SL_IBAT6+4(r1) + mtspr SPRN_IBAT6L,r4 ++ lwz r4,SL_IBAT7(r1) + mtspr SPRN_IBAT7U,r4 ++ lwz r4,SL_IBAT7+4(r1) + mtspr SPRN_IBAT7L,r4 + END_MMU_FTR_SECTION_IFSET(MMU_FTR_USE_HIGH_BATS) + +diff --git a/arch/powerpc/sysdev/uic.c b/arch/powerpc/sysdev/uic.c +index a00949f3e378..a8ebc96dfed2 100644 +--- a/arch/powerpc/sysdev/uic.c ++++ b/arch/powerpc/sysdev/uic.c +@@ -158,6 +158,7 @@ static int uic_set_irq_type(struct irq_data *d, unsigned int flow_type) + + mtdcr(uic->dcrbase + UIC_PR, pr); + mtdcr(uic->dcrbase + UIC_TR, tr); ++ mtdcr(uic->dcrbase + UIC_SR, ~mask); + + raw_spin_unlock_irqrestore(&uic->lock, flags); + +diff --git a/arch/sh/include/asm/io.h b/arch/sh/include/asm/io.h +index 3280a6bfa503..b2592c3864ad 100644 +--- a/arch/sh/include/asm/io.h ++++ b/arch/sh/include/asm/io.h +@@ -370,7 +370,11 @@ static inline int iounmap_fixed(void __iomem *addr) { return -EINVAL; } + + #define ioremap_nocache ioremap + #define ioremap_uc ioremap +-#define iounmap __iounmap ++ ++static inline void iounmap(void __iomem *addr) ++{ ++ __iounmap(addr); ++} + + /* + * Convert a physical pointer to a virtual kernel pointer for /dev/mem +diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h +index 1a60e1328e2f..6aca4c90aa1a 100644 +--- a/arch/um/include/asm/mmu_context.h ++++ b/arch/um/include/asm/mmu_context.h +@@ -56,7 +56,7 @@ static inline void activate_mm(struct mm_struct *old, struct mm_struct *new) + * when the new ->mm is used for the first time. 
+ */ + __switch_mm(&new->context.id); +- down_write(&new->mmap_sem); ++ down_write_nested(&new->mmap_sem, 1); + uml_setup_stubs(new); + up_write(&new->mmap_sem); + } +diff --git a/arch/um/include/asm/thread_info.h b/arch/um/include/asm/thread_info.h +index 053baff03674..9300f7630d2a 100644 +--- a/arch/um/include/asm/thread_info.h ++++ b/arch/um/include/asm/thread_info.h +@@ -11,6 +11,7 @@ + #include + #include + #include ++#include + + struct thread_info { + struct task_struct *task; /* main task structure */ +@@ -22,6 +23,8 @@ struct thread_info { + 0-0xBFFFFFFF for user + 0-0xFFFFFFFF for kernel */ + struct thread_info *real_thread; /* Points to non-IRQ stack */ ++ unsigned long aux_fp_regs[FP_SIZE]; /* auxiliary fp_regs to save/restore ++ them out-of-band */ + }; + + #define INIT_THREAD_INFO(tsk) \ +diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h +index de5d572225f3..cc64f0579949 100644 +--- a/arch/um/include/shared/os.h ++++ b/arch/um/include/shared/os.h +@@ -274,7 +274,7 @@ extern int protect(struct mm_id * mm_idp, unsigned long addr, + extern int is_skas_winch(int pid, int fd, void *data); + extern int start_userspace(unsigned long stub_stack); + extern int copy_context_skas0(unsigned long stack, int pid); +-extern void userspace(struct uml_pt_regs *regs); ++extern void userspace(struct uml_pt_regs *regs, unsigned long *aux_fp_regs); + extern int map_stub_pages(int fd, unsigned long code, unsigned long data, + unsigned long stack); + extern void new_thread(void *stack, jmp_buf *buf, void (*handler)(void)); +diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c +index 034b42c7ab40..787568044a2a 100644 +--- a/arch/um/kernel/process.c ++++ b/arch/um/kernel/process.c +@@ -128,7 +128,7 @@ void new_thread_handler(void) + * callback returns only if the kernel thread execs a process + */ + n = fn(arg); +- userspace(¤t->thread.regs.regs); ++ userspace(¤t->thread.regs.regs, current_thread_info()->aux_fp_regs); + } + + /* Called magically, see new_thread_handler above */ +@@ -147,7 +147,7 @@ void fork_handler(void) + + current->thread.prev_sched = NULL; + +- userspace(¤t->thread.regs.regs); ++ userspace(¤t->thread.regs.regs, current_thread_info()->aux_fp_regs); + } + + int copy_thread(unsigned long clone_flags, unsigned long sp, +diff --git a/arch/um/os-Linux/skas/process.c b/arch/um/os-Linux/skas/process.c +index 0a99d4515065..cd4a6ff676a8 100644 +--- a/arch/um/os-Linux/skas/process.c ++++ b/arch/um/os-Linux/skas/process.c +@@ -87,12 +87,11 @@ bad_wait: + + extern unsigned long current_stub_stack(void); + +-static void get_skas_faultinfo(int pid, struct faultinfo *fi) ++static void get_skas_faultinfo(int pid, struct faultinfo *fi, unsigned long *aux_fp_regs) + { + int err; +- unsigned long fpregs[FP_SIZE]; + +- err = get_fp_registers(pid, fpregs); ++ err = get_fp_registers(pid, aux_fp_regs); + if (err < 0) { + printk(UM_KERN_ERR "save_fp_registers returned %d\n", + err); +@@ -112,7 +111,7 @@ static void get_skas_faultinfo(int pid, struct faultinfo *fi) + */ + memcpy(fi, (void *)current_stub_stack(), sizeof(*fi)); + +- err = put_fp_registers(pid, fpregs); ++ err = put_fp_registers(pid, aux_fp_regs); + if (err < 0) { + printk(UM_KERN_ERR "put_fp_registers returned %d\n", + err); +@@ -120,9 +119,9 @@ static void get_skas_faultinfo(int pid, struct faultinfo *fi) + } + } + +-static void handle_segv(int pid, struct uml_pt_regs * regs) ++static void handle_segv(int pid, struct uml_pt_regs *regs, unsigned long *aux_fp_regs) + { +- get_skas_faultinfo(pid, 
®s->faultinfo); ++ get_skas_faultinfo(pid, ®s->faultinfo, aux_fp_regs); + segv(regs->faultinfo, 0, 1, NULL); + } + +@@ -305,7 +304,7 @@ int start_userspace(unsigned long stub_stack) + return err; + } + +-void userspace(struct uml_pt_regs *regs) ++void userspace(struct uml_pt_regs *regs, unsigned long *aux_fp_regs) + { + int err, status, op, pid = userspace_pid[0]; + /* To prevent races if using_sysemu changes under us.*/ +@@ -374,11 +373,11 @@ void userspace(struct uml_pt_regs *regs) + case SIGSEGV: + if (PTRACE_FULL_FAULTINFO) { + get_skas_faultinfo(pid, +- ®s->faultinfo); ++ ®s->faultinfo, aux_fp_regs); + (*sig_info[SIGSEGV])(SIGSEGV, (struct siginfo *)&si, + regs); + } +- else handle_segv(pid, regs); ++ else handle_segv(pid, regs, aux_fp_regs); + break; + case SIGTRAP + 0x80: + handle_trap(pid, regs, local_using_sysemu); +diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c +index 65577f081d07..c16c99bc2a10 100644 +--- a/arch/x86/events/amd/uncore.c ++++ b/arch/x86/events/amd/uncore.c +@@ -19,13 +19,14 @@ + #include + #include + #include ++#include + + #define NUM_COUNTERS_NB 4 + #define NUM_COUNTERS_L2 4 + #define MAX_COUNTERS NUM_COUNTERS_NB + + #define RDPMC_BASE_NB 6 +-#define RDPMC_BASE_L2 10 ++#define RDPMC_BASE_LLC 10 + + #define COUNTER_SHIFT 16 + +@@ -45,30 +46,30 @@ struct amd_uncore { + }; + + static struct amd_uncore * __percpu *amd_uncore_nb; +-static struct amd_uncore * __percpu *amd_uncore_l2; ++static struct amd_uncore * __percpu *amd_uncore_llc; + + static struct pmu amd_nb_pmu; +-static struct pmu amd_l2_pmu; ++static struct pmu amd_llc_pmu; + + static cpumask_t amd_nb_active_mask; +-static cpumask_t amd_l2_active_mask; ++static cpumask_t amd_llc_active_mask; + + static bool is_nb_event(struct perf_event *event) + { + return event->pmu->type == amd_nb_pmu.type; + } + +-static bool is_l2_event(struct perf_event *event) ++static bool is_llc_event(struct perf_event *event) + { +- return event->pmu->type == amd_l2_pmu.type; ++ return event->pmu->type == amd_llc_pmu.type; + } + + static struct amd_uncore *event_to_amd_uncore(struct perf_event *event) + { + if (is_nb_event(event) && amd_uncore_nb) + return *per_cpu_ptr(amd_uncore_nb, event->cpu); +- else if (is_l2_event(event) && amd_uncore_l2) +- return *per_cpu_ptr(amd_uncore_l2, event->cpu); ++ else if (is_llc_event(event) && amd_uncore_llc) ++ return *per_cpu_ptr(amd_uncore_llc, event->cpu); + + return NULL; + } +@@ -183,16 +184,16 @@ static int amd_uncore_event_init(struct perf_event *event) + return -ENOENT; + + /* +- * NB and L2 counters (MSRs) are shared across all cores that share the +- * same NB / L2 cache. Interrupts can be directed to a single target +- * core, however, event counts generated by processes running on other +- * cores cannot be masked out. So we do not support sampling and +- * per-thread events. ++ * NB and Last level cache counters (MSRs) are shared across all cores ++ * that share the same NB / Last level cache. Interrupts can be directed ++ * to a single target core, however, event counts generated by processes ++ * running on other cores cannot be masked out. So we do not support ++ * sampling and per-thread events. 
+ */ + if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK) + return -EINVAL; + +- /* NB and L2 counters do not have usr/os/guest/host bits */ ++ /* NB and Last level cache counters do not have usr/os/guest/host bits */ + if (event->attr.exclude_user || event->attr.exclude_kernel || + event->attr.exclude_host || event->attr.exclude_guest) + return -EINVAL; +@@ -226,8 +227,8 @@ static ssize_t amd_uncore_attr_show_cpumask(struct device *dev, + + if (pmu->type == amd_nb_pmu.type) + active_mask = &amd_nb_active_mask; +- else if (pmu->type == amd_l2_pmu.type) +- active_mask = &amd_l2_active_mask; ++ else if (pmu->type == amd_llc_pmu.type) ++ active_mask = &amd_llc_active_mask; + else + return 0; + +@@ -276,7 +277,7 @@ static struct pmu amd_nb_pmu = { + .read = amd_uncore_read, + }; + +-static struct pmu amd_l2_pmu = { ++static struct pmu amd_llc_pmu = { + .task_ctx_nr = perf_invalid_context, + .attr_groups = amd_uncore_attr_groups, + .name = "amd_l2", +@@ -296,7 +297,7 @@ static struct amd_uncore *amd_uncore_alloc(unsigned int cpu) + + static int amd_uncore_cpu_up_prepare(unsigned int cpu) + { +- struct amd_uncore *uncore_nb = NULL, *uncore_l2; ++ struct amd_uncore *uncore_nb = NULL, *uncore_llc; + + if (amd_uncore_nb) { + uncore_nb = amd_uncore_alloc(cpu); +@@ -312,18 +313,18 @@ static int amd_uncore_cpu_up_prepare(unsigned int cpu) + *per_cpu_ptr(amd_uncore_nb, cpu) = uncore_nb; + } + +- if (amd_uncore_l2) { +- uncore_l2 = amd_uncore_alloc(cpu); +- if (!uncore_l2) ++ if (amd_uncore_llc) { ++ uncore_llc = amd_uncore_alloc(cpu); ++ if (!uncore_llc) + goto fail; +- uncore_l2->cpu = cpu; +- uncore_l2->num_counters = NUM_COUNTERS_L2; +- uncore_l2->rdpmc_base = RDPMC_BASE_L2; +- uncore_l2->msr_base = MSR_F16H_L2I_PERF_CTL; +- uncore_l2->active_mask = &amd_l2_active_mask; +- uncore_l2->pmu = &amd_l2_pmu; +- uncore_l2->id = -1; +- *per_cpu_ptr(amd_uncore_l2, cpu) = uncore_l2; ++ uncore_llc->cpu = cpu; ++ uncore_llc->num_counters = NUM_COUNTERS_L2; ++ uncore_llc->rdpmc_base = RDPMC_BASE_LLC; ++ uncore_llc->msr_base = MSR_F16H_L2I_PERF_CTL; ++ uncore_llc->active_mask = &amd_llc_active_mask; ++ uncore_llc->pmu = &amd_llc_pmu; ++ uncore_llc->id = -1; ++ *per_cpu_ptr(amd_uncore_llc, cpu) = uncore_llc; + } + + return 0; +@@ -376,17 +377,12 @@ static int amd_uncore_cpu_starting(unsigned int cpu) + *per_cpu_ptr(amd_uncore_nb, cpu) = uncore; + } + +- if (amd_uncore_l2) { +- unsigned int apicid = cpu_data(cpu).apicid; +- unsigned int nshared; ++ if (amd_uncore_llc) { ++ uncore = *per_cpu_ptr(amd_uncore_llc, cpu); ++ uncore->id = per_cpu(cpu_llc_id, cpu); + +- uncore = *per_cpu_ptr(amd_uncore_l2, cpu); +- cpuid_count(0x8000001d, 2, &eax, &ebx, &ecx, &edx); +- nshared = ((eax >> 14) & 0xfff) + 1; +- uncore->id = apicid - (apicid % nshared); +- +- uncore = amd_uncore_find_online_sibling(uncore, amd_uncore_l2); +- *per_cpu_ptr(amd_uncore_l2, cpu) = uncore; ++ uncore = amd_uncore_find_online_sibling(uncore, amd_uncore_llc); ++ *per_cpu_ptr(amd_uncore_llc, cpu) = uncore; + } + + return 0; +@@ -419,8 +415,8 @@ static int amd_uncore_cpu_online(unsigned int cpu) + if (amd_uncore_nb) + uncore_online(cpu, amd_uncore_nb); + +- if (amd_uncore_l2) +- uncore_online(cpu, amd_uncore_l2); ++ if (amd_uncore_llc) ++ uncore_online(cpu, amd_uncore_llc); + + return 0; + } +@@ -456,8 +452,8 @@ static int amd_uncore_cpu_down_prepare(unsigned int cpu) + if (amd_uncore_nb) + uncore_down_prepare(cpu, amd_uncore_nb); + +- if (amd_uncore_l2) +- uncore_down_prepare(cpu, amd_uncore_l2); ++ if (amd_uncore_llc) ++ 
uncore_down_prepare(cpu, amd_uncore_llc); + + return 0; + } +@@ -479,8 +475,8 @@ static int amd_uncore_cpu_dead(unsigned int cpu) + if (amd_uncore_nb) + uncore_dead(cpu, amd_uncore_nb); + +- if (amd_uncore_l2) +- uncore_dead(cpu, amd_uncore_l2); ++ if (amd_uncore_llc) ++ uncore_dead(cpu, amd_uncore_llc); + + return 0; + } +@@ -510,16 +506,16 @@ static int __init amd_uncore_init(void) + } + + if (boot_cpu_has(X86_FEATURE_PERFCTR_L2)) { +- amd_uncore_l2 = alloc_percpu(struct amd_uncore *); +- if (!amd_uncore_l2) { ++ amd_uncore_llc = alloc_percpu(struct amd_uncore *); ++ if (!amd_uncore_llc) { + ret = -ENOMEM; +- goto fail_l2; ++ goto fail_llc; + } +- ret = perf_pmu_register(&amd_l2_pmu, amd_l2_pmu.name, -1); ++ ret = perf_pmu_register(&amd_llc_pmu, amd_llc_pmu.name, -1); + if (ret) +- goto fail_l2; ++ goto fail_llc; + +- pr_info("perf: AMD L2I counters detected\n"); ++ pr_info("perf: AMD LLC counters detected\n"); + ret = 0; + } + +@@ -529,7 +525,7 @@ static int __init amd_uncore_init(void) + if (cpuhp_setup_state(CPUHP_PERF_X86_AMD_UNCORE_PREP, + "PERF_X86_AMD_UNCORE_PREP", + amd_uncore_cpu_up_prepare, amd_uncore_cpu_dead)) +- goto fail_l2; ++ goto fail_llc; + + if (cpuhp_setup_state(CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING, + "AP_PERF_X86_AMD_UNCORE_STARTING", +@@ -546,11 +542,11 @@ fail_start: + cpuhp_remove_state(CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING); + fail_prep: + cpuhp_remove_state(CPUHP_PERF_X86_AMD_UNCORE_PREP); +-fail_l2: ++fail_llc: + if (boot_cpu_has(X86_FEATURE_PERFCTR_NB)) + perf_pmu_unregister(&amd_nb_pmu); +- if (amd_uncore_l2) +- free_percpu(amd_uncore_l2); ++ if (amd_uncore_llc) ++ free_percpu(amd_uncore_llc); + fail_nb: + if (amd_uncore_nb) + free_percpu(amd_uncore_nb); +diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c +index 07a6c1fa173b..a4f343ac042e 100644 +--- a/arch/x86/kernel/cpu/bugs.c ++++ b/arch/x86/kernel/cpu/bugs.c +@@ -1205,7 +1205,7 @@ static ssize_t l1tf_show_state(char *buf) + static ssize_t mds_show_state(char *buf) + { + #ifdef CONFIG_HYPERVISOR_GUEST +- if (x86_hyper) { ++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) { + return sprintf(buf, "%s; SMT Host state unknown\n", + mds_strings[mds_mitigation]); + } +diff --git a/arch/x86/kernel/cpu/mkcapflags.sh b/arch/x86/kernel/cpu/mkcapflags.sh +index 6988c74409a8..711b74e0e623 100644 +--- a/arch/x86/kernel/cpu/mkcapflags.sh ++++ b/arch/x86/kernel/cpu/mkcapflags.sh +@@ -3,6 +3,8 @@ + # Generate the x86_cap/bug_flags[] arrays from include/asm/cpufeatures.h + # + ++set -e ++ + IN=$1 + OUT=$2 + +diff --git a/arch/x86/kernel/sysfb_efi.c b/arch/x86/kernel/sysfb_efi.c +index 623965e86b65..897da526e40e 100644 +--- a/arch/x86/kernel/sysfb_efi.c ++++ b/arch/x86/kernel/sysfb_efi.c +@@ -231,9 +231,55 @@ static const struct dmi_system_id efifb_dmi_system_table[] __initconst = { + {}, + }; + ++/* ++ * Some devices have a portrait LCD but advertise a landscape resolution (and ++ * pitch). We simply swap width and height for these devices so that we can ++ * correctly deal with some of them coming with multiple resolutions. ++ */ ++static const struct dmi_system_id efifb_dmi_swap_width_height[] __initconst = { ++ { ++ /* ++ * Lenovo MIIX310-10ICR, only some batches have the troublesome ++ * 800x1280 portrait screen. Luckily the portrait version has ++ * its own BIOS version, so we match on that. 
++ */ ++ .matches = { ++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"), ++ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "MIIX 310-10ICR"), ++ DMI_EXACT_MATCH(DMI_BIOS_VERSION, "1HCN44WW"), ++ }, ++ }, ++ { ++ /* Lenovo MIIX 320-10ICR with 800x1280 portrait screen */ ++ .matches = { ++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"), ++ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, ++ "Lenovo MIIX 320-10ICR"), ++ }, ++ }, ++ { ++ /* Lenovo D330 with 800x1280 or 1200x1920 portrait screen */ ++ .matches = { ++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "LENOVO"), ++ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, ++ "Lenovo ideapad D330-10IGM"), ++ }, ++ }, ++ {}, ++}; ++ + __init void sysfb_apply_efi_quirks(void) + { + if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI || + !(screen_info.capabilities & VIDEO_CAPABILITY_SKIP_QUIRKS)) + dmi_check_system(efifb_dmi_system_table); ++ ++ if (screen_info.orig_video_isVGA == VIDEO_TYPE_EFI && ++ dmi_check_system(efifb_dmi_swap_width_height)) { ++ u16 temp = screen_info.lfb_width; ++ ++ screen_info.lfb_width = screen_info.lfb_height; ++ screen_info.lfb_height = temp; ++ screen_info.lfb_linelength = 4 * screen_info.lfb_width; ++ } + } +diff --git a/arch/x86/kvm/pmu.c b/arch/x86/kvm/pmu.c +index 06ce377dcbc9..0827ee7d0e9b 100644 +--- a/arch/x86/kvm/pmu.c ++++ b/arch/x86/kvm/pmu.c +@@ -124,8 +124,8 @@ static void pmc_reprogram_counter(struct kvm_pmc *pmc, u32 type, + intr ? kvm_perf_overflow_intr : + kvm_perf_overflow, pmc); + if (IS_ERR(event)) { +- printk_once("kvm_pmu: event creation failed %ld\n", +- PTR_ERR(event)); ++ pr_debug_ratelimited("kvm_pmu: event creation failed %ld for pmc->idx = %d\n", ++ PTR_ERR(event), pmc->idx); + return; + } + +diff --git a/arch/x86/um/os-Linux/registers.c b/arch/x86/um/os-Linux/registers.c +index 00f54a91bb4b..3c423dfcd78b 100644 +--- a/arch/x86/um/os-Linux/registers.c ++++ b/arch/x86/um/os-Linux/registers.c +@@ -5,6 +5,7 @@ + */ + + #include ++#include + #include + #ifdef __i386__ + #include +@@ -26,17 +27,18 @@ int save_i387_registers(int pid, unsigned long *fp_regs) + + int save_fp_registers(int pid, unsigned long *fp_regs) + { ++#ifdef PTRACE_GETREGSET + struct iovec iov; + + if (have_xstate_support) { + iov.iov_base = fp_regs; +- iov.iov_len = sizeof(struct _xstate); ++ iov.iov_len = FP_SIZE * sizeof(unsigned long); + if (ptrace(PTRACE_GETREGSET, pid, NT_X86_XSTATE, &iov) < 0) + return -errno; + return 0; +- } else { ++ } else ++#endif + return save_i387_registers(pid, fp_regs); +- } + } + + int restore_i387_registers(int pid, unsigned long *fp_regs) +@@ -48,17 +50,17 @@ int restore_i387_registers(int pid, unsigned long *fp_regs) + + int restore_fp_registers(int pid, unsigned long *fp_regs) + { ++#ifdef PTRACE_SETREGSET + struct iovec iov; +- + if (have_xstate_support) { + iov.iov_base = fp_regs; +- iov.iov_len = sizeof(struct _xstate); ++ iov.iov_len = FP_SIZE * sizeof(unsigned long); + if (ptrace(PTRACE_SETREGSET, pid, NT_X86_XSTATE, &iov) < 0) + return -errno; + return 0; +- } else { ++ } else ++#endif + return restore_i387_registers(pid, fp_regs); +- } + } + + #ifdef __i386__ +@@ -122,13 +124,21 @@ int put_fp_registers(int pid, unsigned long *regs) + + void arch_init_registers(int pid) + { +- struct _xstate fp_regs; ++#ifdef PTRACE_GETREGSET ++ void * fp_regs; + struct iovec iov; + +- iov.iov_base = &fp_regs; +- iov.iov_len = sizeof(struct _xstate); ++ fp_regs = malloc(FP_SIZE * sizeof(unsigned long)); ++ if(fp_regs == NULL) ++ return; ++ ++ iov.iov_base = fp_regs; ++ iov.iov_len = FP_SIZE * sizeof(unsigned long); + if (ptrace(PTRACE_GETREGSET, pid, 
NT_X86_XSTATE, &iov) == 0) + have_xstate_support = 1; ++ ++ free(fp_regs); ++#endif + } + #endif + +diff --git a/arch/x86/um/user-offsets.c b/arch/x86/um/user-offsets.c +index cb3c22370cf5..7bcd10614f8b 100644 +--- a/arch/x86/um/user-offsets.c ++++ b/arch/x86/um/user-offsets.c +@@ -50,7 +50,11 @@ void foo(void) + DEFINE(HOST_GS, GS); + DEFINE(HOST_ORIG_AX, ORIG_EAX); + #else +- DEFINE(HOST_FP_SIZE, sizeof(struct _xstate) / sizeof(unsigned long)); ++#ifdef FP_XSTATE_MAGIC1 ++ DEFINE_LONGS(HOST_FP_SIZE, 2696); ++#else ++ DEFINE(HOST_FP_SIZE, sizeof(struct _fpstate) / sizeof(unsigned long)); ++#endif + DEFINE_LONGS(HOST_BX, RBX); + DEFINE_LONGS(HOST_CX, RCX); + DEFINE_LONGS(HOST_DI, RDI); +diff --git a/block/compat_ioctl.c b/block/compat_ioctl.c +index 556826ac7cb4..3c9fdd6983aa 100644 +--- a/block/compat_ioctl.c ++++ b/block/compat_ioctl.c +@@ -4,7 +4,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -209,318 +208,6 @@ static int compat_blkpg_ioctl(struct block_device *bdev, fmode_t mode, + #define BLKBSZSET_32 _IOW(0x12, 113, int) + #define BLKGETSIZE64_32 _IOR(0x12, 114, int) + +-struct compat_floppy_drive_params { +- char cmos; +- compat_ulong_t max_dtr; +- compat_ulong_t hlt; +- compat_ulong_t hut; +- compat_ulong_t srt; +- compat_ulong_t spinup; +- compat_ulong_t spindown; +- unsigned char spindown_offset; +- unsigned char select_delay; +- unsigned char rps; +- unsigned char tracks; +- compat_ulong_t timeout; +- unsigned char interleave_sect; +- struct floppy_max_errors max_errors; +- char flags; +- char read_track; +- short autodetect[8]; +- compat_int_t checkfreq; +- compat_int_t native_format; +-}; +- +-struct compat_floppy_drive_struct { +- signed char flags; +- compat_ulong_t spinup_date; +- compat_ulong_t select_date; +- compat_ulong_t first_read_date; +- short probed_format; +- short track; +- short maxblock; +- short maxtrack; +- compat_int_t generation; +- compat_int_t keep_data; +- compat_int_t fd_ref; +- compat_int_t fd_device; +- compat_int_t last_checked; +- compat_caddr_t dmabuf; +- compat_int_t bufblocks; +-}; +- +-struct compat_floppy_fdc_state { +- compat_int_t spec1; +- compat_int_t spec2; +- compat_int_t dtr; +- unsigned char version; +- unsigned char dor; +- compat_ulong_t address; +- unsigned int rawcmd:2; +- unsigned int reset:1; +- unsigned int need_configure:1; +- unsigned int perp_mode:2; +- unsigned int has_fifo:1; +- unsigned int driver_version; +- unsigned char track[4]; +-}; +- +-struct compat_floppy_write_errors { +- unsigned int write_errors; +- compat_ulong_t first_error_sector; +- compat_int_t first_error_generation; +- compat_ulong_t last_error_sector; +- compat_int_t last_error_generation; +- compat_uint_t badness; +-}; +- +-#define FDSETPRM32 _IOW(2, 0x42, struct compat_floppy_struct) +-#define FDDEFPRM32 _IOW(2, 0x43, struct compat_floppy_struct) +-#define FDSETDRVPRM32 _IOW(2, 0x90, struct compat_floppy_drive_params) +-#define FDGETDRVPRM32 _IOR(2, 0x11, struct compat_floppy_drive_params) +-#define FDGETDRVSTAT32 _IOR(2, 0x12, struct compat_floppy_drive_struct) +-#define FDPOLLDRVSTAT32 _IOR(2, 0x13, struct compat_floppy_drive_struct) +-#define FDGETFDCSTAT32 _IOR(2, 0x15, struct compat_floppy_fdc_state) +-#define FDWERRORGET32 _IOR(2, 0x17, struct compat_floppy_write_errors) +- +-static struct { +- unsigned int cmd32; +- unsigned int cmd; +-} fd_ioctl_trans_table[] = { +- { FDSETPRM32, FDSETPRM }, +- { FDDEFPRM32, FDDEFPRM }, +- { FDGETPRM32, FDGETPRM }, +- { FDSETDRVPRM32, FDSETDRVPRM }, +- { FDGETDRVPRM32, 
FDGETDRVPRM }, +- { FDGETDRVSTAT32, FDGETDRVSTAT }, +- { FDPOLLDRVSTAT32, FDPOLLDRVSTAT }, +- { FDGETFDCSTAT32, FDGETFDCSTAT }, +- { FDWERRORGET32, FDWERRORGET } +-}; +- +-#define NR_FD_IOCTL_TRANS ARRAY_SIZE(fd_ioctl_trans_table) +- +-static int compat_fd_ioctl(struct block_device *bdev, fmode_t mode, +- unsigned int cmd, unsigned long arg) +-{ +- mm_segment_t old_fs = get_fs(); +- void *karg = NULL; +- unsigned int kcmd = 0; +- int i, err; +- +- for (i = 0; i < NR_FD_IOCTL_TRANS; i++) +- if (cmd == fd_ioctl_trans_table[i].cmd32) { +- kcmd = fd_ioctl_trans_table[i].cmd; +- break; +- } +- if (!kcmd) +- return -EINVAL; +- +- switch (cmd) { +- case FDSETPRM32: +- case FDDEFPRM32: +- case FDGETPRM32: +- { +- compat_uptr_t name; +- struct compat_floppy_struct __user *uf; +- struct floppy_struct *f; +- +- uf = compat_ptr(arg); +- f = karg = kmalloc(sizeof(struct floppy_struct), GFP_KERNEL); +- if (!karg) +- return -ENOMEM; +- if (cmd == FDGETPRM32) +- break; +- err = __get_user(f->size, &uf->size); +- err |= __get_user(f->sect, &uf->sect); +- err |= __get_user(f->head, &uf->head); +- err |= __get_user(f->track, &uf->track); +- err |= __get_user(f->stretch, &uf->stretch); +- err |= __get_user(f->gap, &uf->gap); +- err |= __get_user(f->rate, &uf->rate); +- err |= __get_user(f->spec1, &uf->spec1); +- err |= __get_user(f->fmt_gap, &uf->fmt_gap); +- err |= __get_user(name, &uf->name); +- f->name = compat_ptr(name); +- if (err) { +- err = -EFAULT; +- goto out; +- } +- break; +- } +- case FDSETDRVPRM32: +- case FDGETDRVPRM32: +- { +- struct compat_floppy_drive_params __user *uf; +- struct floppy_drive_params *f; +- +- uf = compat_ptr(arg); +- f = karg = kmalloc(sizeof(struct floppy_drive_params), GFP_KERNEL); +- if (!karg) +- return -ENOMEM; +- if (cmd == FDGETDRVPRM32) +- break; +- err = __get_user(f->cmos, &uf->cmos); +- err |= __get_user(f->max_dtr, &uf->max_dtr); +- err |= __get_user(f->hlt, &uf->hlt); +- err |= __get_user(f->hut, &uf->hut); +- err |= __get_user(f->srt, &uf->srt); +- err |= __get_user(f->spinup, &uf->spinup); +- err |= __get_user(f->spindown, &uf->spindown); +- err |= __get_user(f->spindown_offset, &uf->spindown_offset); +- err |= __get_user(f->select_delay, &uf->select_delay); +- err |= __get_user(f->rps, &uf->rps); +- err |= __get_user(f->tracks, &uf->tracks); +- err |= __get_user(f->timeout, &uf->timeout); +- err |= __get_user(f->interleave_sect, &uf->interleave_sect); +- err |= __copy_from_user(&f->max_errors, &uf->max_errors, sizeof(f->max_errors)); +- err |= __get_user(f->flags, &uf->flags); +- err |= __get_user(f->read_track, &uf->read_track); +- err |= __copy_from_user(f->autodetect, uf->autodetect, sizeof(f->autodetect)); +- err |= __get_user(f->checkfreq, &uf->checkfreq); +- err |= __get_user(f->native_format, &uf->native_format); +- if (err) { +- err = -EFAULT; +- goto out; +- } +- break; +- } +- case FDGETDRVSTAT32: +- case FDPOLLDRVSTAT32: +- karg = kmalloc(sizeof(struct floppy_drive_struct), GFP_KERNEL); +- if (!karg) +- return -ENOMEM; +- break; +- case FDGETFDCSTAT32: +- karg = kmalloc(sizeof(struct floppy_fdc_state), GFP_KERNEL); +- if (!karg) +- return -ENOMEM; +- break; +- case FDWERRORGET32: +- karg = kmalloc(sizeof(struct floppy_write_errors), GFP_KERNEL); +- if (!karg) +- return -ENOMEM; +- break; +- default: +- return -EINVAL; +- } +- set_fs(KERNEL_DS); +- err = __blkdev_driver_ioctl(bdev, mode, kcmd, (unsigned long)karg); +- set_fs(old_fs); +- if (err) +- goto out; +- switch (cmd) { +- case FDGETPRM32: +- { +- struct floppy_struct *f = karg; +- struct 
compat_floppy_struct __user *uf = compat_ptr(arg); +- +- err = __put_user(f->size, &uf->size); +- err |= __put_user(f->sect, &uf->sect); +- err |= __put_user(f->head, &uf->head); +- err |= __put_user(f->track, &uf->track); +- err |= __put_user(f->stretch, &uf->stretch); +- err |= __put_user(f->gap, &uf->gap); +- err |= __put_user(f->rate, &uf->rate); +- err |= __put_user(f->spec1, &uf->spec1); +- err |= __put_user(f->fmt_gap, &uf->fmt_gap); +- err |= __put_user((u64)f->name, (compat_caddr_t __user *)&uf->name); +- break; +- } +- case FDGETDRVPRM32: +- { +- struct compat_floppy_drive_params __user *uf; +- struct floppy_drive_params *f = karg; +- +- uf = compat_ptr(arg); +- err = __put_user(f->cmos, &uf->cmos); +- err |= __put_user(f->max_dtr, &uf->max_dtr); +- err |= __put_user(f->hlt, &uf->hlt); +- err |= __put_user(f->hut, &uf->hut); +- err |= __put_user(f->srt, &uf->srt); +- err |= __put_user(f->spinup, &uf->spinup); +- err |= __put_user(f->spindown, &uf->spindown); +- err |= __put_user(f->spindown_offset, &uf->spindown_offset); +- err |= __put_user(f->select_delay, &uf->select_delay); +- err |= __put_user(f->rps, &uf->rps); +- err |= __put_user(f->tracks, &uf->tracks); +- err |= __put_user(f->timeout, &uf->timeout); +- err |= __put_user(f->interleave_sect, &uf->interleave_sect); +- err |= __copy_to_user(&uf->max_errors, &f->max_errors, sizeof(f->max_errors)); +- err |= __put_user(f->flags, &uf->flags); +- err |= __put_user(f->read_track, &uf->read_track); +- err |= __copy_to_user(uf->autodetect, f->autodetect, sizeof(f->autodetect)); +- err |= __put_user(f->checkfreq, &uf->checkfreq); +- err |= __put_user(f->native_format, &uf->native_format); +- break; +- } +- case FDGETDRVSTAT32: +- case FDPOLLDRVSTAT32: +- { +- struct compat_floppy_drive_struct __user *uf; +- struct floppy_drive_struct *f = karg; +- +- uf = compat_ptr(arg); +- err = __put_user(f->flags, &uf->flags); +- err |= __put_user(f->spinup_date, &uf->spinup_date); +- err |= __put_user(f->select_date, &uf->select_date); +- err |= __put_user(f->first_read_date, &uf->first_read_date); +- err |= __put_user(f->probed_format, &uf->probed_format); +- err |= __put_user(f->track, &uf->track); +- err |= __put_user(f->maxblock, &uf->maxblock); +- err |= __put_user(f->maxtrack, &uf->maxtrack); +- err |= __put_user(f->generation, &uf->generation); +- err |= __put_user(f->keep_data, &uf->keep_data); +- err |= __put_user(f->fd_ref, &uf->fd_ref); +- err |= __put_user(f->fd_device, &uf->fd_device); +- err |= __put_user(f->last_checked, &uf->last_checked); +- err |= __put_user((u64)f->dmabuf, &uf->dmabuf); +- err |= __put_user((u64)f->bufblocks, &uf->bufblocks); +- break; +- } +- case FDGETFDCSTAT32: +- { +- struct compat_floppy_fdc_state __user *uf; +- struct floppy_fdc_state *f = karg; +- +- uf = compat_ptr(arg); +- err = __put_user(f->spec1, &uf->spec1); +- err |= __put_user(f->spec2, &uf->spec2); +- err |= __put_user(f->dtr, &uf->dtr); +- err |= __put_user(f->version, &uf->version); +- err |= __put_user(f->dor, &uf->dor); +- err |= __put_user(f->address, &uf->address); +- err |= __copy_to_user((char __user *)&uf->address + sizeof(uf->address), +- (char *)&f->address + sizeof(f->address), sizeof(int)); +- err |= __put_user(f->driver_version, &uf->driver_version); +- err |= __copy_to_user(uf->track, f->track, sizeof(f->track)); +- break; +- } +- case FDWERRORGET32: +- { +- struct compat_floppy_write_errors __user *uf; +- struct floppy_write_errors *f = karg; +- +- uf = compat_ptr(arg); +- err = __put_user(f->write_errors, &uf->write_errors); 
+- err |= __put_user(f->first_error_sector, &uf->first_error_sector); +- err |= __put_user(f->first_error_generation, &uf->first_error_generation); +- err |= __put_user(f->last_error_sector, &uf->last_error_sector); +- err |= __put_user(f->last_error_generation, &uf->last_error_generation); +- err |= __put_user(f->badness, &uf->badness); +- break; +- } +- default: +- break; +- } +- if (err) +- err = -EFAULT; +- +-out: +- kfree(karg); +- return err; +-} +- + static int compat_blkdev_driver_ioctl(struct block_device *bdev, fmode_t mode, + unsigned cmd, unsigned long arg) + { +@@ -537,16 +224,6 @@ static int compat_blkdev_driver_ioctl(struct block_device *bdev, fmode_t mode, + case HDIO_GET_ADDRESS: + case HDIO_GET_BUSSTATE: + return compat_hdio_ioctl(bdev, mode, cmd, arg); +- case FDSETPRM32: +- case FDDEFPRM32: +- case FDGETPRM32: +- case FDSETDRVPRM32: +- case FDGETDRVPRM32: +- case FDGETDRVSTAT32: +- case FDPOLLDRVSTAT32: +- case FDGETFDCSTAT32: +- case FDWERRORGET32: +- return compat_fd_ioctl(bdev, mode, cmd, arg); + case CDROMREADAUDIO: + return compat_cdrom_read_audio(bdev, mode, cmd, arg); + case CDROM_SEND_PACKET: +@@ -566,23 +243,6 @@ static int compat_blkdev_driver_ioctl(struct block_device *bdev, fmode_t mode, + case HDIO_DRIVE_CMD: + /* 0x330 is reserved -- it used to be HDIO_GETGEO_BIG */ + case 0x330: +- /* 0x02 -- Floppy ioctls */ +- case FDMSGON: +- case FDMSGOFF: +- case FDSETEMSGTRESH: +- case FDFLUSH: +- case FDWERRORCLR: +- case FDSETMAXERRS: +- case FDGETMAXERRS: +- case FDGETDRVTYP: +- case FDEJECT: +- case FDCLRPRM: +- case FDFMTBEG: +- case FDFMTEND: +- case FDRESET: +- case FDTWADDLE: +- case FDFMTTRK: +- case FDRAWCMD: + /* CDROM stuff */ + case CDROMPAUSE: + case CDROMRESUME: +diff --git a/crypto/asymmetric_keys/Kconfig b/crypto/asymmetric_keys/Kconfig +index 331f6baf2df8..13f3de68b479 100644 +--- a/crypto/asymmetric_keys/Kconfig ++++ b/crypto/asymmetric_keys/Kconfig +@@ -14,6 +14,7 @@ config ASYMMETRIC_PUBLIC_KEY_SUBTYPE + select MPILIB + select CRYPTO_HASH_INFO + select CRYPTO_AKCIPHER ++ select CRYPTO_HASH + help + This option provides support for asymmetric public key type handling. 
+ If signature generation and/or verification are to be used, +@@ -33,6 +34,7 @@ config X509_CERTIFICATE_PARSER + config PKCS7_MESSAGE_PARSER + tristate "PKCS#7 message parser" + depends on X509_CERTIFICATE_PARSER ++ select CRYPTO_HASH + select ASN1 + select OID_REGISTRY + help +@@ -55,6 +57,7 @@ config SIGNED_PE_FILE_VERIFICATION + bool "Support for PE file signature verification" + depends on PKCS7_MESSAGE_PARSER=y + depends on SYSTEM_DATA_VERIFICATION ++ select CRYPTO_HASH + select ASN1 + select OID_REGISTRY + help +diff --git a/crypto/chacha20poly1305.c b/crypto/chacha20poly1305.c +index 246905bf00aa..96d842a13ffc 100644 +--- a/crypto/chacha20poly1305.c ++++ b/crypto/chacha20poly1305.c +@@ -67,6 +67,8 @@ struct chachapoly_req_ctx { + unsigned int cryptlen; + /* Actual AD, excluding IV */ + unsigned int assoclen; ++ /* request flags, with MAY_SLEEP cleared if needed */ ++ u32 flags; + union { + struct poly_req poly; + struct chacha_req chacha; +@@ -76,8 +78,12 @@ struct chachapoly_req_ctx { + static inline void async_done_continue(struct aead_request *req, int err, + int (*cont)(struct aead_request *)) + { +- if (!err) ++ if (!err) { ++ struct chachapoly_req_ctx *rctx = aead_request_ctx(req); ++ ++ rctx->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + err = cont(req); ++ } + + if (err != -EINPROGRESS && err != -EBUSY) + aead_request_complete(req, err); +@@ -144,7 +150,7 @@ static int chacha_decrypt(struct aead_request *req) + dst = scatterwalk_ffwd(rctx->dst, req->dst, req->assoclen); + } + +- skcipher_request_set_callback(&creq->req, aead_request_flags(req), ++ skcipher_request_set_callback(&creq->req, rctx->flags, + chacha_decrypt_done, req); + skcipher_request_set_tfm(&creq->req, ctx->chacha); + skcipher_request_set_crypt(&creq->req, src, dst, +@@ -188,7 +194,7 @@ static int poly_tail(struct aead_request *req) + memcpy(&preq->tail.cryptlen, &len, sizeof(len)); + sg_set_buf(preq->src, &preq->tail, sizeof(preq->tail)); + +- ahash_request_set_callback(&preq->req, aead_request_flags(req), ++ ahash_request_set_callback(&preq->req, rctx->flags, + poly_tail_done, req); + ahash_request_set_tfm(&preq->req, ctx->poly); + ahash_request_set_crypt(&preq->req, preq->src, +@@ -219,7 +225,7 @@ static int poly_cipherpad(struct aead_request *req) + sg_init_table(preq->src, 1); + sg_set_buf(preq->src, &preq->pad, padlen); + +- ahash_request_set_callback(&preq->req, aead_request_flags(req), ++ ahash_request_set_callback(&preq->req, rctx->flags, + poly_cipherpad_done, req); + ahash_request_set_tfm(&preq->req, ctx->poly); + ahash_request_set_crypt(&preq->req, preq->src, NULL, padlen); +@@ -250,7 +256,7 @@ static int poly_cipher(struct aead_request *req) + sg_init_table(rctx->src, 2); + crypt = scatterwalk_ffwd(rctx->src, crypt, req->assoclen); + +- ahash_request_set_callback(&preq->req, aead_request_flags(req), ++ ahash_request_set_callback(&preq->req, rctx->flags, + poly_cipher_done, req); + ahash_request_set_tfm(&preq->req, ctx->poly); + ahash_request_set_crypt(&preq->req, crypt, NULL, rctx->cryptlen); +@@ -280,7 +286,7 @@ static int poly_adpad(struct aead_request *req) + sg_init_table(preq->src, 1); + sg_set_buf(preq->src, preq->pad, padlen); + +- ahash_request_set_callback(&preq->req, aead_request_flags(req), ++ ahash_request_set_callback(&preq->req, rctx->flags, + poly_adpad_done, req); + ahash_request_set_tfm(&preq->req, ctx->poly); + ahash_request_set_crypt(&preq->req, preq->src, NULL, padlen); +@@ -304,7 +310,7 @@ static int poly_ad(struct aead_request *req) + struct poly_req *preq = &rctx->u.poly; + int 
err; + +- ahash_request_set_callback(&preq->req, aead_request_flags(req), ++ ahash_request_set_callback(&preq->req, rctx->flags, + poly_ad_done, req); + ahash_request_set_tfm(&preq->req, ctx->poly); + ahash_request_set_crypt(&preq->req, req->src, NULL, rctx->assoclen); +@@ -331,7 +337,7 @@ static int poly_setkey(struct aead_request *req) + sg_init_table(preq->src, 1); + sg_set_buf(preq->src, rctx->key, sizeof(rctx->key)); + +- ahash_request_set_callback(&preq->req, aead_request_flags(req), ++ ahash_request_set_callback(&preq->req, rctx->flags, + poly_setkey_done, req); + ahash_request_set_tfm(&preq->req, ctx->poly); + ahash_request_set_crypt(&preq->req, preq->src, NULL, sizeof(rctx->key)); +@@ -355,7 +361,7 @@ static int poly_init(struct aead_request *req) + struct poly_req *preq = &rctx->u.poly; + int err; + +- ahash_request_set_callback(&preq->req, aead_request_flags(req), ++ ahash_request_set_callback(&preq->req, rctx->flags, + poly_init_done, req); + ahash_request_set_tfm(&preq->req, ctx->poly); + +@@ -393,7 +399,7 @@ static int poly_genkey(struct aead_request *req) + + chacha_iv(creq->iv, req, 0); + +- skcipher_request_set_callback(&creq->req, aead_request_flags(req), ++ skcipher_request_set_callback(&creq->req, rctx->flags, + poly_genkey_done, req); + skcipher_request_set_tfm(&creq->req, ctx->chacha); + skcipher_request_set_crypt(&creq->req, creq->src, creq->src, +@@ -433,7 +439,7 @@ static int chacha_encrypt(struct aead_request *req) + dst = scatterwalk_ffwd(rctx->dst, req->dst, req->assoclen); + } + +- skcipher_request_set_callback(&creq->req, aead_request_flags(req), ++ skcipher_request_set_callback(&creq->req, rctx->flags, + chacha_encrypt_done, req); + skcipher_request_set_tfm(&creq->req, ctx->chacha); + skcipher_request_set_crypt(&creq->req, src, dst, +@@ -451,6 +457,7 @@ static int chachapoly_encrypt(struct aead_request *req) + struct chachapoly_req_ctx *rctx = aead_request_ctx(req); + + rctx->cryptlen = req->cryptlen; ++ rctx->flags = aead_request_flags(req); + + /* encrypt call chain: + * - chacha_encrypt/done() +@@ -472,6 +479,7 @@ static int chachapoly_decrypt(struct aead_request *req) + struct chachapoly_req_ctx *rctx = aead_request_ctx(req); + + rctx->cryptlen = req->cryptlen - POLY1305_DIGEST_SIZE; ++ rctx->flags = aead_request_flags(req); + + /* decrypt call chain: + * - poly_genkey/done() +diff --git a/crypto/ghash-generic.c b/crypto/ghash-generic.c +index 12ad3e3a84e3..73b56f2f44f1 100644 +--- a/crypto/ghash-generic.c ++++ b/crypto/ghash-generic.c +@@ -34,6 +34,7 @@ static int ghash_setkey(struct crypto_shash *tfm, + const u8 *key, unsigned int keylen) + { + struct ghash_ctx *ctx = crypto_shash_ctx(tfm); ++ be128 k; + + if (keylen != GHASH_BLOCK_SIZE) { + crypto_shash_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN); +@@ -42,7 +43,12 @@ static int ghash_setkey(struct crypto_shash *tfm, + + if (ctx->gf128) + gf128mul_free_4k(ctx->gf128); +- ctx->gf128 = gf128mul_init_4k_lle((be128 *)key); ++ ++ BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE); ++ memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */ ++ ctx->gf128 = gf128mul_init_4k_lle(&k); ++ memzero_explicit(&k, GHASH_BLOCK_SIZE); ++ + if (!ctx->gf128) + return -ENOMEM; + +diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c +index 90c38778bc1f..16f8fda89981 100644 +--- a/drivers/ata/libata-eh.c ++++ b/drivers/ata/libata-eh.c +@@ -1600,7 +1600,7 @@ static int ata_eh_read_log_10h(struct ata_device *dev, + tf->hob_lbah = buf[10]; + tf->nsect = buf[12]; + tf->hob_nsect = buf[13]; +- if 
(ata_id_has_ncq_autosense(dev->id)) ++ if (dev->class == ATA_DEV_ZAC && ata_id_has_ncq_autosense(dev->id)) + tf->auxiliary = buf[14] << 16 | buf[15] << 8 | buf[16]; + + return 0; +@@ -1849,7 +1849,8 @@ void ata_eh_analyze_ncq_error(struct ata_link *link) + memcpy(&qc->result_tf, &tf, sizeof(tf)); + qc->result_tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_LBA | ATA_TFLAG_LBA48; + qc->err_mask |= AC_ERR_DEV | AC_ERR_NCQ; +- if ((qc->result_tf.command & ATA_SENSE) || qc->result_tf.auxiliary) { ++ if (dev->class == ATA_DEV_ZAC && ++ ((qc->result_tf.command & ATA_SENSE) || qc->result_tf.auxiliary)) { + char sense_key, asc, ascq; + + sense_key = (qc->result_tf.auxiliary >> 16) & 0xff; +@@ -1903,10 +1904,11 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc, + } + + switch (qc->dev->class) { +- case ATA_DEV_ATA: + case ATA_DEV_ZAC: + if (stat & ATA_SENSE) + ata_eh_request_sense(qc, qc->scsicmd); ++ /* fall through */ ++ case ATA_DEV_ATA: + if (err & ATA_ICRC) + qc->err_mask |= AC_ERR_ATA_BUS; + if (err & (ATA_UNC | ATA_AMNF)) +diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c +index 69c84fddfe8a..1799a1dfa46e 100644 +--- a/drivers/base/regmap/regmap.c ++++ b/drivers/base/regmap/regmap.c +@@ -1506,6 +1506,8 @@ int _regmap_raw_write(struct regmap *map, unsigned int reg, + map->format.reg_bytes + + map->format.pad_bytes, + val, val_len); ++ else ++ ret = -ENOTSUPP; + + /* If that didn't work fall back on linearising by hand. */ + if (ret == -ENOTSUPP) { +diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c +index 6914c6e1e1a8..6930abef42b3 100644 +--- a/drivers/block/floppy.c ++++ b/drivers/block/floppy.c +@@ -192,6 +192,7 @@ static int print_unex = 1; + #include + #include + #include ++#include + + /* + * PS/2 floppies have much slower step rates than regular floppies. +@@ -2113,6 +2114,9 @@ static void setup_format_params(int track) + raw_cmd->kernel_data = floppy_track_buffer; + raw_cmd->length = 4 * F_SECT_PER_TRACK; + ++ if (!F_SECT_PER_TRACK) ++ return; ++ + /* allow for about 30ms for data transport per track */ + head_shift = (F_SECT_PER_TRACK + 5) / 6; + +@@ -3233,8 +3237,12 @@ static int set_geometry(unsigned int cmd, struct floppy_struct *g, + int cnt; + + /* sanity checking for parameters. 
*/ +- if (g->sect <= 0 || +- g->head <= 0 || ++ if ((int)g->sect <= 0 || ++ (int)g->head <= 0 || ++ /* check for overflow in max_sector */ ++ (int)(g->sect * g->head) <= 0 || ++ /* check for zero in F_SECT_PER_TRACK */ ++ (unsigned char)((g->sect << 2) >> FD_SIZECODE(g)) == 0 || + g->track <= 0 || g->track > UDP->tracks >> STRETCH(g) || + /* check if reserved bits are set */ + (g->stretch & ~(FD_STRETCH | FD_SWAPSIDES | FD_SECTBASEMASK)) != 0) +@@ -3378,6 +3386,24 @@ static int fd_getgeo(struct block_device *bdev, struct hd_geometry *geo) + return 0; + } + ++static bool valid_floppy_drive_params(const short autodetect[8], ++ int native_format) ++{ ++ size_t floppy_type_size = ARRAY_SIZE(floppy_type); ++ size_t i = 0; ++ ++ for (i = 0; i < 8; ++i) { ++ if (autodetect[i] < 0 || ++ autodetect[i] >= floppy_type_size) ++ return false; ++ } ++ ++ if (native_format < 0 || native_format >= floppy_type_size) ++ return false; ++ ++ return true; ++} ++ + static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, + unsigned long param) + { +@@ -3504,6 +3530,9 @@ static int fd_locked_ioctl(struct block_device *bdev, fmode_t mode, unsigned int + SUPBOUND(size, strlen((const char *)outparam) + 1); + break; + case FDSETDRVPRM: ++ if (!valid_floppy_drive_params(inparam.dp.autodetect, ++ inparam.dp.native_format)) ++ return -EINVAL; + *UDP = inparam.dp; + break; + case FDGETDRVPRM: +@@ -3569,6 +3598,332 @@ static int fd_ioctl(struct block_device *bdev, fmode_t mode, + return ret; + } + ++#ifdef CONFIG_COMPAT ++ ++struct compat_floppy_drive_params { ++ char cmos; ++ compat_ulong_t max_dtr; ++ compat_ulong_t hlt; ++ compat_ulong_t hut; ++ compat_ulong_t srt; ++ compat_ulong_t spinup; ++ compat_ulong_t spindown; ++ unsigned char spindown_offset; ++ unsigned char select_delay; ++ unsigned char rps; ++ unsigned char tracks; ++ compat_ulong_t timeout; ++ unsigned char interleave_sect; ++ struct floppy_max_errors max_errors; ++ char flags; ++ char read_track; ++ short autodetect[8]; ++ compat_int_t checkfreq; ++ compat_int_t native_format; ++}; ++ ++struct compat_floppy_drive_struct { ++ signed char flags; ++ compat_ulong_t spinup_date; ++ compat_ulong_t select_date; ++ compat_ulong_t first_read_date; ++ short probed_format; ++ short track; ++ short maxblock; ++ short maxtrack; ++ compat_int_t generation; ++ compat_int_t keep_data; ++ compat_int_t fd_ref; ++ compat_int_t fd_device; ++ compat_int_t last_checked; ++ compat_caddr_t dmabuf; ++ compat_int_t bufblocks; ++}; ++ ++struct compat_floppy_fdc_state { ++ compat_int_t spec1; ++ compat_int_t spec2; ++ compat_int_t dtr; ++ unsigned char version; ++ unsigned char dor; ++ compat_ulong_t address; ++ unsigned int rawcmd:2; ++ unsigned int reset:1; ++ unsigned int need_configure:1; ++ unsigned int perp_mode:2; ++ unsigned int has_fifo:1; ++ unsigned int driver_version; ++ unsigned char track[4]; ++}; ++ ++struct compat_floppy_write_errors { ++ unsigned int write_errors; ++ compat_ulong_t first_error_sector; ++ compat_int_t first_error_generation; ++ compat_ulong_t last_error_sector; ++ compat_int_t last_error_generation; ++ compat_uint_t badness; ++}; ++ ++#define FDSETPRM32 _IOW(2, 0x42, struct compat_floppy_struct) ++#define FDDEFPRM32 _IOW(2, 0x43, struct compat_floppy_struct) ++#define FDSETDRVPRM32 _IOW(2, 0x90, struct compat_floppy_drive_params) ++#define FDGETDRVPRM32 _IOR(2, 0x11, struct compat_floppy_drive_params) ++#define FDGETDRVSTAT32 _IOR(2, 0x12, struct compat_floppy_drive_struct) ++#define FDPOLLDRVSTAT32 _IOR(2, 0x13, struct 
compat_floppy_drive_struct) ++#define FDGETFDCSTAT32 _IOR(2, 0x15, struct compat_floppy_fdc_state) ++#define FDWERRORGET32 _IOR(2, 0x17, struct compat_floppy_write_errors) ++ ++static int compat_set_geometry(struct block_device *bdev, fmode_t mode, unsigned int cmd, ++ struct compat_floppy_struct __user *arg) ++{ ++ struct floppy_struct v; ++ int drive, type; ++ int err; ++ ++ BUILD_BUG_ON(offsetof(struct floppy_struct, name) != ++ offsetof(struct compat_floppy_struct, name)); ++ ++ if (!(mode & (FMODE_WRITE | FMODE_WRITE_IOCTL))) ++ return -EPERM; ++ ++ memset(&v, 0, sizeof(struct floppy_struct)); ++ if (copy_from_user(&v, arg, offsetof(struct floppy_struct, name))) ++ return -EFAULT; ++ ++ mutex_lock(&floppy_mutex); ++ drive = (long)bdev->bd_disk->private_data; ++ type = ITYPE(UDRS->fd_device); ++ err = set_geometry(cmd == FDSETPRM32 ? FDSETPRM : FDDEFPRM, ++ &v, drive, type, bdev); ++ mutex_unlock(&floppy_mutex); ++ return err; ++} ++ ++static int compat_get_prm(int drive, ++ struct compat_floppy_struct __user *arg) ++{ ++ struct compat_floppy_struct v; ++ struct floppy_struct *p; ++ int err; ++ ++ memset(&v, 0, sizeof(v)); ++ mutex_lock(&floppy_mutex); ++ err = get_floppy_geometry(drive, ITYPE(UDRS->fd_device), &p); ++ if (err) { ++ mutex_unlock(&floppy_mutex); ++ return err; ++ } ++ memcpy(&v, p, offsetof(struct floppy_struct, name)); ++ mutex_unlock(&floppy_mutex); ++ if (copy_to_user(arg, &v, sizeof(struct compat_floppy_struct))) ++ return -EFAULT; ++ return 0; ++} ++ ++static int compat_setdrvprm(int drive, ++ struct compat_floppy_drive_params __user *arg) ++{ ++ struct compat_floppy_drive_params v; ++ ++ if (!capable(CAP_SYS_ADMIN)) ++ return -EPERM; ++ if (copy_from_user(&v, arg, sizeof(struct compat_floppy_drive_params))) ++ return -EFAULT; ++ if (!valid_floppy_drive_params(v.autodetect, v.native_format)) ++ return -EINVAL; ++ mutex_lock(&floppy_mutex); ++ UDP->cmos = v.cmos; ++ UDP->max_dtr = v.max_dtr; ++ UDP->hlt = v.hlt; ++ UDP->hut = v.hut; ++ UDP->srt = v.srt; ++ UDP->spinup = v.spinup; ++ UDP->spindown = v.spindown; ++ UDP->spindown_offset = v.spindown_offset; ++ UDP->select_delay = v.select_delay; ++ UDP->rps = v.rps; ++ UDP->tracks = v.tracks; ++ UDP->timeout = v.timeout; ++ UDP->interleave_sect = v.interleave_sect; ++ UDP->max_errors = v.max_errors; ++ UDP->flags = v.flags; ++ UDP->read_track = v.read_track; ++ memcpy(UDP->autodetect, v.autodetect, sizeof(v.autodetect)); ++ UDP->checkfreq = v.checkfreq; ++ UDP->native_format = v.native_format; ++ mutex_unlock(&floppy_mutex); ++ return 0; ++} ++ ++static int compat_getdrvprm(int drive, ++ struct compat_floppy_drive_params __user *arg) ++{ ++ struct compat_floppy_drive_params v; ++ ++ memset(&v, 0, sizeof(struct compat_floppy_drive_params)); ++ mutex_lock(&floppy_mutex); ++ v.cmos = UDP->cmos; ++ v.max_dtr = UDP->max_dtr; ++ v.hlt = UDP->hlt; ++ v.hut = UDP->hut; ++ v.srt = UDP->srt; ++ v.spinup = UDP->spinup; ++ v.spindown = UDP->spindown; ++ v.spindown_offset = UDP->spindown_offset; ++ v.select_delay = UDP->select_delay; ++ v.rps = UDP->rps; ++ v.tracks = UDP->tracks; ++ v.timeout = UDP->timeout; ++ v.interleave_sect = UDP->interleave_sect; ++ v.max_errors = UDP->max_errors; ++ v.flags = UDP->flags; ++ v.read_track = UDP->read_track; ++ memcpy(v.autodetect, UDP->autodetect, sizeof(v.autodetect)); ++ v.checkfreq = UDP->checkfreq; ++ v.native_format = UDP->native_format; ++ mutex_unlock(&floppy_mutex); ++ ++ if (copy_from_user(arg, &v, sizeof(struct compat_floppy_drive_params))) ++ return -EFAULT; ++ return 0; ++} ++ 
++static int compat_getdrvstat(int drive, bool poll, ++ struct compat_floppy_drive_struct __user *arg) ++{ ++ struct compat_floppy_drive_struct v; ++ ++ memset(&v, 0, sizeof(struct compat_floppy_drive_struct)); ++ mutex_lock(&floppy_mutex); ++ ++ if (poll) { ++ if (lock_fdc(drive)) ++ goto Eintr; ++ if (poll_drive(true, FD_RAW_NEED_DISK) == -EINTR) ++ goto Eintr; ++ process_fd_request(); ++ } ++ v.spinup_date = UDRS->spinup_date; ++ v.select_date = UDRS->select_date; ++ v.first_read_date = UDRS->first_read_date; ++ v.probed_format = UDRS->probed_format; ++ v.track = UDRS->track; ++ v.maxblock = UDRS->maxblock; ++ v.maxtrack = UDRS->maxtrack; ++ v.generation = UDRS->generation; ++ v.keep_data = UDRS->keep_data; ++ v.fd_ref = UDRS->fd_ref; ++ v.fd_device = UDRS->fd_device; ++ v.last_checked = UDRS->last_checked; ++ v.dmabuf = (uintptr_t)UDRS->dmabuf; ++ v.bufblocks = UDRS->bufblocks; ++ mutex_unlock(&floppy_mutex); ++ ++ if (copy_from_user(arg, &v, sizeof(struct compat_floppy_drive_struct))) ++ return -EFAULT; ++ return 0; ++Eintr: ++ mutex_unlock(&floppy_mutex); ++ return -EINTR; ++} ++ ++static int compat_getfdcstat(int drive, ++ struct compat_floppy_fdc_state __user *arg) ++{ ++ struct compat_floppy_fdc_state v32; ++ struct floppy_fdc_state v; ++ ++ mutex_lock(&floppy_mutex); ++ v = *UFDCS; ++ mutex_unlock(&floppy_mutex); ++ ++ memset(&v32, 0, sizeof(struct compat_floppy_fdc_state)); ++ v32.spec1 = v.spec1; ++ v32.spec2 = v.spec2; ++ v32.dtr = v.dtr; ++ v32.version = v.version; ++ v32.dor = v.dor; ++ v32.address = v.address; ++ v32.rawcmd = v.rawcmd; ++ v32.reset = v.reset; ++ v32.need_configure = v.need_configure; ++ v32.perp_mode = v.perp_mode; ++ v32.has_fifo = v.has_fifo; ++ v32.driver_version = v.driver_version; ++ memcpy(v32.track, v.track, 4); ++ if (copy_to_user(arg, &v32, sizeof(struct compat_floppy_fdc_state))) ++ return -EFAULT; ++ return 0; ++} ++ ++static int compat_werrorget(int drive, ++ struct compat_floppy_write_errors __user *arg) ++{ ++ struct compat_floppy_write_errors v32; ++ struct floppy_write_errors v; ++ ++ memset(&v32, 0, sizeof(struct compat_floppy_write_errors)); ++ mutex_lock(&floppy_mutex); ++ v = *UDRWE; ++ mutex_unlock(&floppy_mutex); ++ v32.write_errors = v.write_errors; ++ v32.first_error_sector = v.first_error_sector; ++ v32.first_error_generation = v.first_error_generation; ++ v32.last_error_sector = v.last_error_sector; ++ v32.last_error_generation = v.last_error_generation; ++ v32.badness = v.badness; ++ if (copy_to_user(arg, &v32, sizeof(struct compat_floppy_write_errors))) ++ return -EFAULT; ++ return 0; ++} ++ ++static int fd_compat_ioctl(struct block_device *bdev, fmode_t mode, unsigned int cmd, ++ unsigned long param) ++{ ++ int drive = (long)bdev->bd_disk->private_data; ++ switch (cmd) { ++ case FDMSGON: ++ case FDMSGOFF: ++ case FDSETEMSGTRESH: ++ case FDFLUSH: ++ case FDWERRORCLR: ++ case FDEJECT: ++ case FDCLRPRM: ++ case FDFMTBEG: ++ case FDRESET: ++ case FDTWADDLE: ++ return fd_ioctl(bdev, mode, cmd, param); ++ case FDSETMAXERRS: ++ case FDGETMAXERRS: ++ case FDGETDRVTYP: ++ case FDFMTEND: ++ case FDFMTTRK: ++ case FDRAWCMD: ++ return fd_ioctl(bdev, mode, cmd, ++ (unsigned long)compat_ptr(param)); ++ case FDSETPRM32: ++ case FDDEFPRM32: ++ return compat_set_geometry(bdev, mode, cmd, compat_ptr(param)); ++ case FDGETPRM32: ++ return compat_get_prm(drive, compat_ptr(param)); ++ case FDSETDRVPRM32: ++ return compat_setdrvprm(drive, compat_ptr(param)); ++ case FDGETDRVPRM32: ++ return compat_getdrvprm(drive, compat_ptr(param)); ++ case 
FDPOLLDRVSTAT32: ++ return compat_getdrvstat(drive, true, compat_ptr(param)); ++ case FDGETDRVSTAT32: ++ return compat_getdrvstat(drive, false, compat_ptr(param)); ++ case FDGETFDCSTAT32: ++ return compat_getfdcstat(drive, compat_ptr(param)); ++ case FDWERRORGET32: ++ return compat_werrorget(drive, compat_ptr(param)); ++ } ++ return -EINVAL; ++} ++#endif ++ + static void __init config_types(void) + { + bool has_drive = false; +@@ -3891,6 +4246,9 @@ static const struct block_device_operations floppy_fops = { + .getgeo = fd_getgeo, + .check_events = floppy_check_events, + .revalidate_disk = floppy_revalidate, ++#ifdef CONFIG_COMPAT ++ .compat_ioctl = fd_compat_ioctl, ++#endif + }; + + /* +diff --git a/drivers/bluetooth/hci_ath.c b/drivers/bluetooth/hci_ath.c +index 0ccf6bf01ed4..c50b68bbecdc 100644 +--- a/drivers/bluetooth/hci_ath.c ++++ b/drivers/bluetooth/hci_ath.c +@@ -101,6 +101,9 @@ static int ath_open(struct hci_uart *hu) + + BT_DBG("hu %p", hu); + ++ if (!hci_uart_has_flow_control(hu)) ++ return -EOPNOTSUPP; ++ + ath = kzalloc(sizeof(*ath), GFP_KERNEL); + if (!ath) + return -ENOMEM; +diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c +index deed58013555..25042c794852 100644 +--- a/drivers/bluetooth/hci_bcm.c ++++ b/drivers/bluetooth/hci_bcm.c +@@ -279,6 +279,9 @@ static int bcm_open(struct hci_uart *hu) + + bt_dev_dbg(hu->hdev, "hu %p", hu); + ++ if (!hci_uart_has_flow_control(hu)) ++ return -EOPNOTSUPP; ++ + bcm = kzalloc(sizeof(*bcm), GFP_KERNEL); + if (!bcm) + return -ENOMEM; +diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c +index a2c921faaa12..34e04bf87a62 100644 +--- a/drivers/bluetooth/hci_bcsp.c ++++ b/drivers/bluetooth/hci_bcsp.c +@@ -759,6 +759,11 @@ static int bcsp_close(struct hci_uart *hu) + skb_queue_purge(&bcsp->rel); + skb_queue_purge(&bcsp->unrel); + ++ if (bcsp->rx_skb) { ++ kfree_skb(bcsp->rx_skb); ++ bcsp->rx_skb = NULL; ++ } ++ + kfree(bcsp); + return 0; + } +diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c +index 73306384af6c..f822e862b689 100644 +--- a/drivers/bluetooth/hci_intel.c ++++ b/drivers/bluetooth/hci_intel.c +@@ -407,6 +407,9 @@ static int intel_open(struct hci_uart *hu) + + BT_DBG("hu %p", hu); + ++ if (!hci_uart_has_flow_control(hu)) ++ return -EOPNOTSUPP; ++ + intel = kzalloc(sizeof(*intel), GFP_KERNEL); + if (!intel) + return -ENOMEM; +diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c +index 2230f9368c21..a2f6953a86f5 100644 +--- a/drivers/bluetooth/hci_ldisc.c ++++ b/drivers/bluetooth/hci_ldisc.c +@@ -263,6 +263,15 @@ static int hci_uart_send_frame(struct hci_dev *hdev, struct sk_buff *skb) + return 0; + } + ++/* Check the underlying device or tty has flow control support */ ++bool hci_uart_has_flow_control(struct hci_uart *hu) ++{ ++ if (hu->tty->driver->ops->tiocmget && hu->tty->driver->ops->tiocmset) ++ return true; ++ ++ return false; ++} ++ + /* Flow control or un-flow control the device */ + void hci_uart_set_flow_control(struct hci_uart *hu, bool enable) + { +diff --git a/drivers/bluetooth/hci_mrvl.c b/drivers/bluetooth/hci_mrvl.c +index bbc4b39b1dbf..716d89a90907 100644 +--- a/drivers/bluetooth/hci_mrvl.c ++++ b/drivers/bluetooth/hci_mrvl.c +@@ -66,6 +66,9 @@ static int mrvl_open(struct hci_uart *hu) + + BT_DBG("hu %p", hu); + ++ if (!hci_uart_has_flow_control(hu)) ++ return -EOPNOTSUPP; ++ + mrvl = kzalloc(sizeof(*mrvl), GFP_KERNEL); + if (!mrvl) + return -ENOMEM; +diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h +index 
070139513e65..aeef870e31b8 100644 +--- a/drivers/bluetooth/hci_uart.h ++++ b/drivers/bluetooth/hci_uart.h +@@ -109,6 +109,7 @@ int hci_uart_tx_wakeup(struct hci_uart *hu); + int hci_uart_init_ready(struct hci_uart *hu); + void hci_uart_init_tty(struct hci_uart *hu); + void hci_uart_set_baudrate(struct hci_uart *hu, unsigned int speed); ++bool hci_uart_has_flow_control(struct hci_uart *hu); + void hci_uart_set_flow_control(struct hci_uart *hu, bool enable); + void hci_uart_set_speeds(struct hci_uart *hu, unsigned int init_speed, + unsigned int oper_speed); +diff --git a/drivers/char/hpet.c b/drivers/char/hpet.c +index 818a8d40e5c9..bedfd2412ec1 100644 +--- a/drivers/char/hpet.c ++++ b/drivers/char/hpet.c +@@ -569,8 +569,7 @@ static inline unsigned long hpet_time_div(struct hpets *hpets, + unsigned long long m; + + m = hpets->hp_tick_freq + (dis >> 1); +- do_div(m, dis); +- return (unsigned long)m; ++ return div64_ul(m, dis); + } + + static int +diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c +index fb0cf8b74516..d32248e2ceab 100644 +--- a/drivers/clocksource/exynos_mct.c ++++ b/drivers/clocksource/exynos_mct.c +@@ -211,7 +211,7 @@ static void exynos4_frc_resume(struct clocksource *cs) + + static struct clocksource mct_frc = { + .name = "mct-frc", +- .rating = 400, ++ .rating = 450, /* use value higher than ARM arch timer */ + .read = exynos4_frc_read, + .mask = CLOCKSOURCE_MASK(32), + .flags = CLOCK_SOURCE_IS_CONTINUOUS, +@@ -466,7 +466,7 @@ static int exynos4_mct_starting_cpu(unsigned int cpu) + evt->set_state_oneshot_stopped = set_state_shutdown; + evt->tick_resume = set_state_shutdown; + evt->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT; +- evt->rating = 450; ++ evt->rating = 500; /* use value higher than ARM arch timer */ + + exynos4_mct_write(TICK_BASE_CNT, mevt->base + MCT_L_TCNTB_OFFSET); + +diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c +index 368c5599515e..a194ee0ddbb6 100644 +--- a/drivers/crypto/amcc/crypto4xx_trng.c ++++ b/drivers/crypto/amcc/crypto4xx_trng.c +@@ -111,7 +111,6 @@ void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev) + return; + + err_out: +- of_node_put(trng); + iounmap(dev->trng_base); + kfree(rng); + dev->trng_base = NULL; +diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c +index 88caca3370f2..f8ac768ed5d7 100644 +--- a/drivers/crypto/caam/caamalg.c ++++ b/drivers/crypto/caam/caamalg.c +@@ -2015,6 +2015,7 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err, + struct ablkcipher_request *req = context; + struct ablkcipher_edesc *edesc; + struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); ++ struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); + int ivsize = crypto_ablkcipher_ivsize(ablkcipher); + + #ifdef DEBUG +@@ -2040,10 +2041,11 @@ static void ablkcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err, + + /* + * The crypto API expects us to set the IV (req->info) to the last +- * ciphertext block. This is used e.g. by the CTS mode. ++ * ciphertext block when running in CBC mode. 
+ */ +- scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - ivsize, +- ivsize, 0); ++ if ((ctx->class1_alg_type & OP_ALG_AAI_MASK) == OP_ALG_AAI_CBC) ++ scatterwalk_map_and_copy(req->info, req->dst, req->nbytes - ++ ivsize, ivsize, 0); + + kfree(edesc); + +@@ -2056,6 +2058,7 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err, + struct ablkcipher_request *req = context; + struct ablkcipher_edesc *edesc; + struct crypto_ablkcipher *ablkcipher = crypto_ablkcipher_reqtfm(req); ++ struct caam_ctx *ctx = crypto_ablkcipher_ctx(ablkcipher); + int ivsize = crypto_ablkcipher_ivsize(ablkcipher); + + #ifdef DEBUG +@@ -2080,10 +2083,11 @@ static void ablkcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err, + + /* + * The crypto API expects us to set the IV (req->info) to the last +- * ciphertext block. ++ * ciphertext block when running in CBC mode. + */ +- scatterwalk_map_and_copy(req->info, req->src, req->nbytes - ivsize, +- ivsize, 0); ++ if ((ctx->class1_alg_type & OP_ALG_AAI_MASK) == OP_ALG_AAI_CBC) ++ scatterwalk_map_and_copy(req->info, req->src, req->nbytes - ++ ivsize, ivsize, 0); + + kfree(edesc); + +diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c +index f796e36d7ec3..46d18f39fa7b 100644 +--- a/drivers/crypto/ccp/ccp-dev.c ++++ b/drivers/crypto/ccp/ccp-dev.c +@@ -40,57 +40,63 @@ struct ccp_tasklet_data { + struct ccp_cmd *cmd; + }; + +-/* Human-readable error strings */ +-char *ccp_error_codes[] = { +- "", +- "ERR 01: ILLEGAL_ENGINE", +- "ERR 02: ILLEGAL_KEY_ID", +- "ERR 03: ILLEGAL_FUNCTION_TYPE", +- "ERR 04: ILLEGAL_FUNCTION_MODE", +- "ERR 05: ILLEGAL_FUNCTION_ENCRYPT", +- "ERR 06: ILLEGAL_FUNCTION_SIZE", +- "ERR 07: Zlib_MISSING_INIT_EOM", +- "ERR 08: ILLEGAL_FUNCTION_RSVD", +- "ERR 09: ILLEGAL_BUFFER_LENGTH", +- "ERR 10: VLSB_FAULT", +- "ERR 11: ILLEGAL_MEM_ADDR", +- "ERR 12: ILLEGAL_MEM_SEL", +- "ERR 13: ILLEGAL_CONTEXT_ID", +- "ERR 14: ILLEGAL_KEY_ADDR", +- "ERR 15: 0xF Reserved", +- "ERR 16: Zlib_ILLEGAL_MULTI_QUEUE", +- "ERR 17: Zlib_ILLEGAL_JOBID_CHANGE", +- "ERR 18: CMD_TIMEOUT", +- "ERR 19: IDMA0_AXI_SLVERR", +- "ERR 20: IDMA0_AXI_DECERR", +- "ERR 21: 0x15 Reserved", +- "ERR 22: IDMA1_AXI_SLAVE_FAULT", +- "ERR 23: IDMA1_AIXI_DECERR", +- "ERR 24: 0x18 Reserved", +- "ERR 25: ZLIBVHB_AXI_SLVERR", +- "ERR 26: ZLIBVHB_AXI_DECERR", +- "ERR 27: 0x1B Reserved", +- "ERR 27: ZLIB_UNEXPECTED_EOM", +- "ERR 27: ZLIB_EXTRA_DATA", +- "ERR 30: ZLIB_BTYPE", +- "ERR 31: ZLIB_UNDEFINED_SYMBOL", +- "ERR 32: ZLIB_UNDEFINED_DISTANCE_S", +- "ERR 33: ZLIB_CODE_LENGTH_SYMBOL", +- "ERR 34: ZLIB _VHB_ILLEGAL_FETCH", +- "ERR 35: ZLIB_UNCOMPRESSED_LEN", +- "ERR 36: ZLIB_LIMIT_REACHED", +- "ERR 37: ZLIB_CHECKSUM_MISMATCH0", +- "ERR 38: ODMA0_AXI_SLVERR", +- "ERR 39: ODMA0_AXI_DECERR", +- "ERR 40: 0x28 Reserved", +- "ERR 41: ODMA1_AXI_SLVERR", +- "ERR 42: ODMA1_AXI_DECERR", +- "ERR 43: LSB_PARITY_ERR", ++ /* Human-readable error strings */ ++#define CCP_MAX_ERROR_CODE 64 ++ static char *ccp_error_codes[] = { ++ "", ++ "ILLEGAL_ENGINE", ++ "ILLEGAL_KEY_ID", ++ "ILLEGAL_FUNCTION_TYPE", ++ "ILLEGAL_FUNCTION_MODE", ++ "ILLEGAL_FUNCTION_ENCRYPT", ++ "ILLEGAL_FUNCTION_SIZE", ++ "Zlib_MISSING_INIT_EOM", ++ "ILLEGAL_FUNCTION_RSVD", ++ "ILLEGAL_BUFFER_LENGTH", ++ "VLSB_FAULT", ++ "ILLEGAL_MEM_ADDR", ++ "ILLEGAL_MEM_SEL", ++ "ILLEGAL_CONTEXT_ID", ++ "ILLEGAL_KEY_ADDR", ++ "0xF Reserved", ++ "Zlib_ILLEGAL_MULTI_QUEUE", ++ "Zlib_ILLEGAL_JOBID_CHANGE", ++ "CMD_TIMEOUT", ++ "IDMA0_AXI_SLVERR", ++ "IDMA0_AXI_DECERR", ++ "0x15 Reserved", ++ 
"IDMA1_AXI_SLAVE_FAULT", ++ "IDMA1_AIXI_DECERR", ++ "0x18 Reserved", ++ "ZLIBVHB_AXI_SLVERR", ++ "ZLIBVHB_AXI_DECERR", ++ "0x1B Reserved", ++ "ZLIB_UNEXPECTED_EOM", ++ "ZLIB_EXTRA_DATA", ++ "ZLIB_BTYPE", ++ "ZLIB_UNDEFINED_SYMBOL", ++ "ZLIB_UNDEFINED_DISTANCE_S", ++ "ZLIB_CODE_LENGTH_SYMBOL", ++ "ZLIB _VHB_ILLEGAL_FETCH", ++ "ZLIB_UNCOMPRESSED_LEN", ++ "ZLIB_LIMIT_REACHED", ++ "ZLIB_CHECKSUM_MISMATCH0", ++ "ODMA0_AXI_SLVERR", ++ "ODMA0_AXI_DECERR", ++ "0x28 Reserved", ++ "ODMA1_AXI_SLVERR", ++ "ODMA1_AXI_DECERR", + }; + +-void ccp_log_error(struct ccp_device *d, int e) ++void ccp_log_error(struct ccp_device *d, unsigned int e) + { +- dev_err(d->dev, "CCP error: %s (0x%x)\n", ccp_error_codes[e], e); ++ if (WARN_ON(e >= CCP_MAX_ERROR_CODE)) ++ return; ++ ++ if (e < ARRAY_SIZE(ccp_error_codes)) ++ dev_err(d->dev, "CCP error %d: %s\n", e, ccp_error_codes[e]); ++ else ++ dev_err(d->dev, "CCP error %d: Unknown Error\n", e); + } + + /* List of CCPs, CCP count, read-write access lock, and access functions +diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h +index 347b77108baa..cfe21d033745 100644 +--- a/drivers/crypto/ccp/ccp-dev.h ++++ b/drivers/crypto/ccp/ccp-dev.h +@@ -607,7 +607,7 @@ void ccp_platform_exit(void); + void ccp_add_device(struct ccp_device *ccp); + void ccp_del_device(struct ccp_device *ccp); + +-extern void ccp_log_error(struct ccp_device *, int); ++extern void ccp_log_error(struct ccp_device *, unsigned int); + + struct ccp_device *ccp_alloc_struct(struct device *dev); + bool ccp_queues_suspended(struct ccp_device *ccp); +diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c +index 5a24a484ecc7..ea8595d2c3d8 100644 +--- a/drivers/crypto/talitos.c ++++ b/drivers/crypto/talitos.c +@@ -984,7 +984,6 @@ static void ipsec_esp_encrypt_done(struct device *dev, + struct crypto_aead *authenc = crypto_aead_reqtfm(areq); + unsigned int authsize = crypto_aead_authsize(authenc); + struct talitos_edesc *edesc; +- struct scatterlist *sg; + void *icvdata; + + edesc = container_of(desc, struct talitos_edesc, desc); +@@ -998,9 +997,8 @@ static void ipsec_esp_encrypt_done(struct device *dev, + else + icvdata = &edesc->link_tbl[edesc->src_nents + + edesc->dst_nents + 2]; +- sg = sg_last(areq->dst, edesc->dst_nents); +- memcpy((char *)sg_virt(sg) + sg->length - authsize, +- icvdata, authsize); ++ sg_pcopy_from_buffer(areq->dst, edesc->dst_nents ? : 1, icvdata, ++ authsize, areq->assoclen + areq->cryptlen); + } + + kfree(edesc); +@@ -1016,7 +1014,6 @@ static void ipsec_esp_decrypt_swauth_done(struct device *dev, + struct crypto_aead *authenc = crypto_aead_reqtfm(req); + unsigned int authsize = crypto_aead_authsize(authenc); + struct talitos_edesc *edesc; +- struct scatterlist *sg; + char *oicv, *icv; + struct talitos_private *priv = dev_get_drvdata(dev); + bool is_sec1 = has_ftr_sec1(priv); +@@ -1026,9 +1023,18 @@ static void ipsec_esp_decrypt_swauth_done(struct device *dev, + ipsec_esp_unmap(dev, edesc, req); + + if (!err) { ++ char icvdata[SHA512_DIGEST_SIZE]; ++ int nents = edesc->dst_nents ? : 1; ++ unsigned int len = req->assoclen + req->cryptlen; ++ + /* auth check */ +- sg = sg_last(req->dst, edesc->dst_nents ? 
: 1); +- icv = (char *)sg_virt(sg) + sg->length - authsize; ++ if (nents > 1) { ++ sg_pcopy_to_buffer(req->dst, nents, icvdata, authsize, ++ len - authsize); ++ icv = icvdata; ++ } else { ++ icv = (char *)sg_virt(req->dst) + len - authsize; ++ } + + if (edesc->dma_len) { + if (is_sec1) +@@ -1458,7 +1464,6 @@ static int aead_decrypt(struct aead_request *req) + struct talitos_ctx *ctx = crypto_aead_ctx(authenc); + struct talitos_private *priv = dev_get_drvdata(ctx->dev); + struct talitos_edesc *edesc; +- struct scatterlist *sg; + void *icvdata; + + req->cryptlen -= authsize; +@@ -1493,9 +1498,8 @@ static int aead_decrypt(struct aead_request *req) + else + icvdata = &edesc->link_tbl[0]; + +- sg = sg_last(req->src, edesc->src_nents ? : 1); +- +- memcpy(icvdata, (char *)sg_virt(sg) + sg->length - authsize, authsize); ++ sg_pcopy_to_buffer(req->src, edesc->src_nents ? : 1, icvdata, authsize, ++ req->assoclen + req->cryptlen - authsize); + + return ipsec_esp(edesc, req, ipsec_esp_decrypt_swauth_done); + } +@@ -1544,11 +1548,15 @@ static void ablkcipher_done(struct device *dev, + int err) + { + struct ablkcipher_request *areq = context; ++ struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(areq); ++ struct talitos_ctx *ctx = crypto_ablkcipher_ctx(cipher); ++ unsigned int ivsize = crypto_ablkcipher_ivsize(cipher); + struct talitos_edesc *edesc; + + edesc = container_of(desc, struct talitos_edesc, desc); + + common_nonsnoop_unmap(dev, edesc, areq); ++ memcpy(areq->info, ctx->iv, ivsize); + + kfree(edesc); + +@@ -3111,7 +3119,10 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev, + alg->cra_priority = t_alg->algt.priority; + else + alg->cra_priority = TALITOS_CRA_PRIORITY; +- alg->cra_alignmask = 0; ++ if (has_ftr_sec1(priv)) ++ alg->cra_alignmask = 3; ++ else ++ alg->cra_alignmask = 0; + alg->cra_ctxsize = sizeof(struct talitos_ctx); + alg->cra_flags |= CRYPTO_ALG_KERN_DRIVER_ONLY; + +diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c +index 84856ac75a09..9f240b2d85a5 100644 +--- a/drivers/dma/imx-sdma.c ++++ b/drivers/dma/imx-sdma.c +@@ -1821,27 +1821,6 @@ static int sdma_probe(struct platform_device *pdev) + if (pdata && pdata->script_addrs) + sdma_add_scripts(sdma, pdata->script_addrs); + +- if (pdata) { +- ret = sdma_get_firmware(sdma, pdata->fw_name); +- if (ret) +- dev_warn(&pdev->dev, "failed to get firmware from platform data\n"); +- } else { +- /* +- * Because that device tree does not encode ROM script address, +- * the RAM script in firmware is mandatory for device tree +- * probe, otherwise it fails. +- */ +- ret = of_property_read_string(np, "fsl,sdma-ram-script-name", +- &fw_name); +- if (ret) +- dev_warn(&pdev->dev, "failed to get firmware name\n"); +- else { +- ret = sdma_get_firmware(sdma, fw_name); +- if (ret) +- dev_warn(&pdev->dev, "failed to get firmware from device tree\n"); +- } +- } +- + sdma->dma_device.dev = &pdev->dev; + + sdma->dma_device.device_alloc_chan_resources = sdma_alloc_chan_resources; +@@ -1883,6 +1862,33 @@ static int sdma_probe(struct platform_device *pdev) + of_node_put(spba_bus); + } + ++ /* ++ * Kick off firmware loading as the very last step: ++ * attempt to load firmware only if we're not on the error path, because ++ * the firmware callback requires a fully functional and allocated sdma ++ * instance. 
++ */ ++ if (pdata) { ++ ret = sdma_get_firmware(sdma, pdata->fw_name); ++ if (ret) ++ dev_warn(&pdev->dev, "failed to get firmware from platform data\n"); ++ } else { ++ /* ++ * Because that device tree does not encode ROM script address, ++ * the RAM script in firmware is mandatory for device tree ++ * probe, otherwise it fails. ++ */ ++ ret = of_property_read_string(np, "fsl,sdma-ram-script-name", ++ &fw_name); ++ if (ret) { ++ dev_warn(&pdev->dev, "failed to get firmware name\n"); ++ } else { ++ ret = sdma_get_firmware(sdma, fw_name); ++ if (ret) ++ dev_warn(&pdev->dev, "failed to get firmware from device tree\n"); ++ } ++ } ++ + return 0; + + err_register: +diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c +index 40d792e96b75..d59641194860 100644 +--- a/drivers/edac/edac_mc_sysfs.c ++++ b/drivers/edac/edac_mc_sysfs.c +@@ -26,7 +26,7 @@ + static int edac_mc_log_ue = 1; + static int edac_mc_log_ce = 1; + static int edac_mc_panic_on_ue; +-static int edac_mc_poll_msec = 1000; ++static unsigned int edac_mc_poll_msec = 1000; + + /* Getter functions for above */ + int edac_mc_get_log_ue(void) +@@ -45,30 +45,30 @@ int edac_mc_get_panic_on_ue(void) + } + + /* this is temporary */ +-int edac_mc_get_poll_msec(void) ++unsigned int edac_mc_get_poll_msec(void) + { + return edac_mc_poll_msec; + } + + static int edac_set_poll_msec(const char *val, struct kernel_param *kp) + { +- unsigned long l; ++ unsigned int i; + int ret; + + if (!val) + return -EINVAL; + +- ret = kstrtoul(val, 0, &l); ++ ret = kstrtouint(val, 0, &i); + if (ret) + return ret; + +- if (l < 1000) ++ if (i < 1000) + return -EINVAL; + +- *((unsigned long *)kp->arg) = l; ++ *((unsigned int *)kp->arg) = i; + + /* notify edac_mc engine to reset the poll period */ +- edac_mc_reset_delay_period(l); ++ edac_mc_reset_delay_period(i); + + return 0; + } +@@ -82,7 +82,7 @@ MODULE_PARM_DESC(edac_mc_log_ue, + module_param(edac_mc_log_ce, int, 0644); + MODULE_PARM_DESC(edac_mc_log_ce, + "Log correctable error to console: 0=off 1=on"); +-module_param_call(edac_mc_poll_msec, edac_set_poll_msec, param_get_int, ++module_param_call(edac_mc_poll_msec, edac_set_poll_msec, param_get_uint, + &edac_mc_poll_msec, 0644); + MODULE_PARM_DESC(edac_mc_poll_msec, "Polling period in milliseconds"); + +@@ -426,6 +426,8 @@ static inline int nr_pages_per_csrow(struct csrow_info *csrow) + static int edac_create_csrow_object(struct mem_ctl_info *mci, + struct csrow_info *csrow, int index) + { ++ int err; ++ + csrow->dev.type = &csrow_attr_type; + csrow->dev.bus = mci->bus; + csrow->dev.groups = csrow_dev_groups; +@@ -438,7 +440,11 @@ static int edac_create_csrow_object(struct mem_ctl_info *mci, + edac_dbg(0, "creating (virtual) csrow node %s\n", + dev_name(&csrow->dev)); + +- return device_add(&csrow->dev); ++ err = device_add(&csrow->dev); ++ if (err) ++ put_device(&csrow->dev); ++ ++ return err; + } + + /* Create a CSROW object under specifed edac_mc_device */ +diff --git a/drivers/edac/edac_module.h b/drivers/edac/edac_module.h +index cfaacb99c973..c36f9f721fb2 100644 +--- a/drivers/edac/edac_module.h ++++ b/drivers/edac/edac_module.h +@@ -33,7 +33,7 @@ extern int edac_mc_get_log_ue(void); + extern int edac_mc_get_log_ce(void); + extern int edac_mc_get_panic_on_ue(void); + extern int edac_get_poll_msec(void); +-extern int edac_mc_get_poll_msec(void); ++extern unsigned int edac_mc_get_poll_msec(void); + + unsigned edac_dimm_info_location(struct dimm_info *dimm, char *buf, + unsigned len); +diff --git a/drivers/gpio/gpio-omap.c 
b/drivers/gpio/gpio-omap.c +index 038882183bdf..fc841ce24db7 100644 +--- a/drivers/gpio/gpio-omap.c ++++ b/drivers/gpio/gpio-omap.c +@@ -786,9 +786,9 @@ static void omap_gpio_irq_shutdown(struct irq_data *d) + + raw_spin_lock_irqsave(&bank->lock, flags); + bank->irq_usage &= ~(BIT(offset)); +- omap_set_gpio_irqenable(bank, offset, 0); +- omap_clear_gpio_irqstatus(bank, offset); + omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); ++ omap_clear_gpio_irqstatus(bank, offset); ++ omap_set_gpio_irqenable(bank, offset, 0); + if (!LINE_USED(bank->mod_usage, offset)) + omap_clear_gpio_debounce(bank, offset); + omap_disable_gpio_module(bank, offset); +@@ -830,8 +830,8 @@ static void omap_gpio_mask_irq(struct irq_data *d) + unsigned long flags; + + raw_spin_lock_irqsave(&bank->lock, flags); +- omap_set_gpio_irqenable(bank, offset, 0); + omap_set_gpio_triggering(bank, offset, IRQ_TYPE_NONE); ++ omap_set_gpio_irqenable(bank, offset, 0); + raw_spin_unlock_irqrestore(&bank->lock, flags); + } + +@@ -843,9 +843,6 @@ static void omap_gpio_unmask_irq(struct irq_data *d) + unsigned long flags; + + raw_spin_lock_irqsave(&bank->lock, flags); +- if (trigger) +- omap_set_gpio_triggering(bank, offset, trigger); +- + omap_set_gpio_irqenable(bank, offset, 1); + + /* +@@ -853,9 +850,13 @@ static void omap_gpio_unmask_irq(struct irq_data *d) + * is cleared, thus after the handler has run. OMAP4 needs this done + * after enabing the interrupt to clear the wakeup status. + */ +- if (bank->level_mask & BIT(offset)) ++ if (bank->regs->leveldetect0 && bank->regs->wkup_en && ++ trigger & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW)) + omap_clear_gpio_irqstatus(bank, offset); + ++ if (trigger) ++ omap_set_gpio_triggering(bank, offset, trigger); ++ + raw_spin_unlock_irqrestore(&bank->lock, flags); + } + +@@ -1585,6 +1586,8 @@ static struct omap_gpio_reg_offs omap4_gpio_regs = { + .clr_dataout = OMAP4_GPIO_CLEARDATAOUT, + .irqstatus = OMAP4_GPIO_IRQSTATUS0, + .irqstatus2 = OMAP4_GPIO_IRQSTATUS1, ++ .irqstatus_raw0 = OMAP4_GPIO_IRQSTATUSRAW0, ++ .irqstatus_raw1 = OMAP4_GPIO_IRQSTATUSRAW1, + .irqenable = OMAP4_GPIO_IRQSTATUSSET0, + .irqenable2 = OMAP4_GPIO_IRQSTATUSSET1, + .set_irqenable = OMAP4_GPIO_IRQSTATUSSET0, +diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c +index 9e2fe12c2858..a3251faa3ed8 100644 +--- a/drivers/gpio/gpiolib.c ++++ b/drivers/gpio/gpiolib.c +@@ -2411,7 +2411,7 @@ static int _gpiod_get_raw_value(const struct gpio_desc *desc) + int gpiod_get_raw_value(const struct gpio_desc *desc) + { + VALIDATE_DESC(desc); +- /* Should be using gpio_get_value_cansleep() */ ++ /* Should be using gpiod_get_raw_value_cansleep() */ + WARN_ON(desc->gdev->chip->can_sleep); + return _gpiod_get_raw_value(desc); + } +@@ -2432,7 +2432,7 @@ int gpiod_get_value(const struct gpio_desc *desc) + int value; + + VALIDATE_DESC(desc); +- /* Should be using gpio_get_value_cansleep() */ ++ /* Should be using gpiod_get_value_cansleep() */ + WARN_ON(desc->gdev->chip->can_sleep); + + value = _gpiod_get_raw_value(desc); +@@ -2608,7 +2608,7 @@ void gpiod_set_array_value_complex(bool raw, bool can_sleep, + void gpiod_set_raw_value(struct gpio_desc *desc, int value) + { + VALIDATE_DESC_VOID(desc); +- /* Should be using gpiod_set_value_cansleep() */ ++ /* Should be using gpiod_set_raw_value_cansleep() */ + WARN_ON(desc->gdev->chip->can_sleep); + _gpiod_set_raw_value(desc, value); + } +diff --git a/drivers/gpu/drm/bridge/sii902x.c b/drivers/gpu/drm/bridge/sii902x.c +index 9126d0306ab5..51e2d03995a1 100644 +--- 
a/drivers/gpu/drm/bridge/sii902x.c ++++ b/drivers/gpu/drm/bridge/sii902x.c +@@ -250,10 +250,11 @@ static void sii902x_bridge_mode_set(struct drm_bridge *bridge, + struct regmap *regmap = sii902x->regmap; + u8 buf[HDMI_INFOFRAME_SIZE(AVI)]; + struct hdmi_avi_infoframe frame; ++ u16 pixel_clock_10kHz = adj->clock / 10; + int ret; + +- buf[0] = adj->clock; +- buf[1] = adj->clock >> 8; ++ buf[0] = pixel_clock_10kHz & 0xff; ++ buf[1] = pixel_clock_10kHz >> 8; + buf[2] = adj->vrefresh; + buf[3] = 0x00; + buf[4] = adj->hdisplay; +diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c +index fa3f2f039a74..80993a8734e0 100644 +--- a/drivers/gpu/drm/bridge/tc358767.c ++++ b/drivers/gpu/drm/bridge/tc358767.c +@@ -1153,6 +1153,13 @@ static int tc_connector_get_modes(struct drm_connector *connector) + struct tc_data *tc = connector_to_tc(connector); + struct edid *edid; + unsigned int count; ++ int ret; ++ ++ ret = tc_get_display_props(tc); ++ if (ret < 0) { ++ dev_err(tc->dev, "failed to read display props: %d\n", ret); ++ return 0; ++ } + + if (tc->panel && tc->panel->funcs && tc->panel->funcs->get_modes) { + count = tc->panel->funcs->get_modes(tc->panel); +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c +index ecacb22834d7..719345074711 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/i2c/base.c +@@ -184,6 +184,25 @@ nvkm_i2c_fini(struct nvkm_subdev *subdev, bool suspend) + return 0; + } + ++static int ++nvkm_i2c_preinit(struct nvkm_subdev *subdev) ++{ ++ struct nvkm_i2c *i2c = nvkm_i2c(subdev); ++ struct nvkm_i2c_bus *bus; ++ struct nvkm_i2c_pad *pad; ++ ++ /* ++ * We init our i2c busses as early as possible, since they may be ++ * needed by the vbios init scripts on some cards ++ */ ++ list_for_each_entry(pad, &i2c->pad, head) ++ nvkm_i2c_pad_init(pad); ++ list_for_each_entry(bus, &i2c->bus, head) ++ nvkm_i2c_bus_init(bus); ++ ++ return 0; ++} ++ + static int + nvkm_i2c_init(struct nvkm_subdev *subdev) + { +@@ -238,6 +257,7 @@ nvkm_i2c_dtor(struct nvkm_subdev *subdev) + static const struct nvkm_subdev_func + nvkm_i2c = { + .dtor = nvkm_i2c_dtor, ++ .preinit = nvkm_i2c_preinit, + .init = nvkm_i2c_init, + .fini = nvkm_i2c_fini, + .intr = nvkm_i2c_intr, +diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c +index 5b2a9f97ff04..68a2b25deb50 100644 +--- a/drivers/gpu/drm/panel/panel-simple.c ++++ b/drivers/gpu/drm/panel/panel-simple.c +@@ -1944,7 +1944,14 @@ static int panel_simple_dsi_probe(struct mipi_dsi_device *dsi) + dsi->format = desc->format; + dsi->lanes = desc->lanes; + +- return mipi_dsi_attach(dsi); ++ err = mipi_dsi_attach(dsi); ++ if (err) { ++ struct panel_simple *panel = dev_get_drvdata(&dsi->dev); ++ ++ drm_panel_remove(&panel->base); ++ } ++ ++ return err; + } + + static int panel_simple_dsi_remove(struct mipi_dsi_device *dsi) +diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +index 32d87c6035c9..5bed63eee5f0 100644 +--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c ++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c +@@ -865,7 +865,8 @@ static bool vop_crtc_mode_fixup(struct drm_crtc *crtc, + struct vop *vop = to_vop(crtc); + + adjusted_mode->clock = +- clk_round_rate(vop->dclk, mode->clock * 1000) / 1000; ++ DIV_ROUND_UP(clk_round_rate(vop->dclk, mode->clock * 1000), ++ 1000); + + return true; + } +diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
b/drivers/gpu/drm/virtio/virtgpu_ioctl.c +index 54639395aba0..a3559b1a3a0f 100644 +--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c ++++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c +@@ -521,6 +521,9 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev, + ret = wait_event_timeout(vgdev->resp_wq, + atomic_read(&cache_ent->is_valid), 5 * HZ); + ++ /* is_valid check must proceed before copy of the cache entry. */ ++ smp_rmb(); ++ + ptr = cache_ent->caps_cache; + + copy_exit: +diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c +index 52436b3c01bb..a1b3ea1ccb65 100644 +--- a/drivers/gpu/drm/virtio/virtgpu_vq.c ++++ b/drivers/gpu/drm/virtio/virtgpu_vq.c +@@ -618,6 +618,8 @@ static void virtio_gpu_cmd_capset_cb(struct virtio_gpu_device *vgdev, + cache_ent->id == le32_to_cpu(cmd->capset_id)) { + memcpy(cache_ent->caps_cache, resp->capset_data, + cache_ent->size); ++ /* Copy must occur before is_valid is signalled. */ ++ smp_wmb(); + atomic_set(&cache_ent->is_valid, 1); + break; + } +diff --git a/drivers/gpu/ipu-v3/ipu-ic.c b/drivers/gpu/ipu-v3/ipu-ic.c +index 321eb983c2f5..65d7daf944b0 100644 +--- a/drivers/gpu/ipu-v3/ipu-ic.c ++++ b/drivers/gpu/ipu-v3/ipu-ic.c +@@ -256,7 +256,7 @@ static int init_csc(struct ipu_ic *ic, + writel(param, base++); + + param = ((a[0] & 0x1fe0) >> 5) | (params->scale << 8) | +- (params->sat << 9); ++ (params->sat << 10); + writel(param, base++); + + param = ((a[1] & 0x1f) << 27) | ((c[0][1] & 0x1ff) << 18) | +diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c +index aadae9dc2aad..7bdd1bfbeedd 100644 +--- a/drivers/hwtracing/intel_th/msu.c ++++ b/drivers/hwtracing/intel_th/msu.c +@@ -638,7 +638,7 @@ static int msc_buffer_contig_alloc(struct msc *msc, unsigned long size) + goto err_out; + + ret = -ENOMEM; +- page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); ++ page = alloc_pages(GFP_KERNEL | __GFP_ZERO | GFP_DMA32, order); + if (!page) + goto err_free_sgt; + +diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c +index a8497cfdae6f..7524e17ac966 100644 +--- a/drivers/i2c/busses/i2c-qup.c ++++ b/drivers/i2c/busses/i2c-qup.c +@@ -808,6 +808,8 @@ static int qup_i2c_bam_do_xfer(struct qup_i2c_dev *qup, struct i2c_msg *msg, + } + + if (ret || qup->bus_err || qup->qup_err) { ++ reinit_completion(&qup->xfer); ++ + if (qup_i2c_change_state(qup, QUP_RUN_STATE)) { + dev_err(qup->dev, "change to run state timed out"); + goto desc_err; +diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c +index 095912fb3201..c3d2400e36b9 100644 +--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c ++++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c +@@ -812,6 +812,8 @@ static int i40iw_query_qp(struct ib_qp *ibqp, + struct i40iw_qp *iwqp = to_iwqp(ibqp); + struct i40iw_sc_qp *qp = &iwqp->sc_qp; + ++ attr->qp_state = iwqp->ibqp_state; ++ attr->cur_qp_state = attr->qp_state; + attr->qp_access_flags = 0; + attr->cap.max_send_wr = qp->qp_uk.sq_size; + attr->cap.max_recv_wr = qp->qp_uk.rq_size; +diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c +index 297653ab4004..5bfea23f3b60 100644 +--- a/drivers/infiniband/sw/rxe/rxe_resp.c ++++ b/drivers/infiniband/sw/rxe/rxe_resp.c +@@ -432,6 +432,7 @@ static enum resp_states check_rkey(struct rxe_qp *qp, + qp->resp.va = reth_va(pkt); + qp->resp.rkey = reth_rkey(pkt); + qp->resp.resid = reth_len(pkt); ++ qp->resp.length = reth_len(pkt); + } + access = (pkt->mask & RXE_READ_MASK) ? 
IB_ACCESS_REMOTE_READ + : IB_ACCESS_REMOTE_WRITE; +@@ -841,7 +842,9 @@ static enum resp_states do_complete(struct rxe_qp *qp, + pkt->mask & RXE_WRITE_MASK) ? + IB_WC_RECV_RDMA_WITH_IMM : IB_WC_RECV; + wc->vendor_err = 0; +- wc->byte_len = wqe->dma.length - wqe->dma.resid; ++ wc->byte_len = (pkt->mask & RXE_IMMDT_MASK && ++ pkt->mask & RXE_WRITE_MASK) ? ++ qp->resp.length : wqe->dma.length - wqe->dma.resid; + + /* fields after byte_len are different between kernel and user + * space +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h +index cac1d52a08f0..47003d2a4a46 100644 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.h ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h +@@ -209,6 +209,7 @@ struct rxe_resp_info { + struct rxe_mem *mr; + u32 resid; + u32 rkey; ++ u32 length; + u64 atomic_orig; + + /* SRQ only */ +diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c +index 17c5bc7e8957..45504febbc2a 100644 +--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c ++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c +@@ -1751,6 +1751,7 @@ static int ipoib_get_vf_config(struct net_device *dev, int vf, + return err; + + ivf->vf = vf; ++ memcpy(ivf->mac, dev->dev_addr, dev->addr_len); + + return 0; + } +diff --git a/drivers/input/tablet/gtco.c b/drivers/input/tablet/gtco.c +index 339a0e2d2f86..8af736dc4b18 100644 +--- a/drivers/input/tablet/gtco.c ++++ b/drivers/input/tablet/gtco.c +@@ -78,6 +78,7 @@ Scott Hill shill@gtcocalcomp.com + + /* Max size of a single report */ + #define REPORT_MAX_SIZE 10 ++#define MAX_COLLECTION_LEVELS 10 + + + /* Bitmask whether pen is in range */ +@@ -223,8 +224,7 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report, + char maintype = 'x'; + char globtype[12]; + int indent = 0; +- char indentstr[10] = ""; +- ++ char indentstr[MAX_COLLECTION_LEVELS + 1] = { 0 }; + + dev_dbg(ddev, "======>>>>>>PARSE<<<<<<======\n"); + +@@ -350,6 +350,13 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report, + case TAG_MAIN_COL_START: + maintype = 'S'; + ++ if (indent == MAX_COLLECTION_LEVELS) { ++ dev_err(ddev, "Collection level %d would exceed limit of %d\n", ++ indent + 1, ++ MAX_COLLECTION_LEVELS); ++ break; ++ } ++ + if (data == 0) { + dev_dbg(ddev, "======>>>>>> Physical\n"); + strcpy(globtype, "Physical"); +@@ -369,8 +376,15 @@ static void parse_hid_report_descriptor(struct gtco *device, char * report, + break; + + case TAG_MAIN_COL_END: +- dev_dbg(ddev, "<<<<<<======\n"); + maintype = 'E'; ++ ++ if (indent == 0) { ++ dev_err(ddev, "Collection level already at zero\n"); ++ break; ++ } ++ ++ dev_dbg(ddev, "<<<<<<======\n"); ++ + indent--; + for (x = 0; x < indent; x++) + indentstr[x] = '-'; +diff --git a/drivers/isdn/hardware/mISDN/hfcsusb.c b/drivers/isdn/hardware/mISDN/hfcsusb.c +index 114f3bcba1b0..c60c7998af17 100644 +--- a/drivers/isdn/hardware/mISDN/hfcsusb.c ++++ b/drivers/isdn/hardware/mISDN/hfcsusb.c +@@ -1963,6 +1963,9 @@ hfcsusb_probe(struct usb_interface *intf, const struct usb_device_id *id) + + /* get endpoint base */ + idx = ((ep_addr & 0x7f) - 1) * 2; ++ if (idx > 15) ++ return -EIO; ++ + if (ep_addr & 0x80) + idx++; + attr = ep->desc.bmAttributes; +diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c +index 87ef465c6947..c1c43800c4aa 100644 +--- a/drivers/mailbox/mailbox.c ++++ b/drivers/mailbox/mailbox.c +@@ -389,11 +389,13 @@ struct mbox_chan *mbox_request_channel_byname(struct mbox_client *cl, + + of_property_for_each_string(np, 
"mbox-names", prop, mbox_name) { + if (!strncmp(name, mbox_name, strlen(name))) +- break; ++ return mbox_request_channel(cl, index); + index++; + } + +- return mbox_request_channel(cl, index); ++ dev_err(cl->dev, "%s() could not locate channel named \"%s\"\n", ++ __func__, name); ++ return ERR_PTR(-EINVAL); + } + EXPORT_SYMBOL_GPL(mbox_request_channel_byname); + +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index 9f2588eaaf5f..c5bc3e5e921e 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -1405,7 +1405,7 @@ static void cache_set_flush(struct closure *cl) + kobject_put(&c->internal); + kobject_del(&c->kobj); + +- if (c->gc_thread) ++ if (!IS_ERR_OR_NULL(c->gc_thread)) + kthread_stop(c->gc_thread); + + if (!IS_ERR_OR_NULL(c->root)) +diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c +index c837defb5e4d..673ce38735ff 100644 +--- a/drivers/md/dm-bufio.c ++++ b/drivers/md/dm-bufio.c +@@ -1585,9 +1585,7 @@ dm_bufio_shrink_scan(struct shrinker *shrink, struct shrink_control *sc) + unsigned long freed; + + c = container_of(shrink, struct dm_bufio_client, shrinker); +- if (sc->gfp_mask & __GFP_FS) +- dm_bufio_lock(c); +- else if (!dm_bufio_trylock(c)) ++ if (!dm_bufio_trylock(c)) + return SHRINK_STOP; + + freed = __scan(c, sc->nr_to_scan, sc->gfp_mask); +diff --git a/drivers/media/dvb-frontends/tua6100.c b/drivers/media/dvb-frontends/tua6100.c +index 6da12b9e55eb..02c734b8718b 100644 +--- a/drivers/media/dvb-frontends/tua6100.c ++++ b/drivers/media/dvb-frontends/tua6100.c +@@ -80,8 +80,8 @@ static int tua6100_set_params(struct dvb_frontend *fe) + struct i2c_msg msg1 = { .addr = priv->i2c_address, .flags = 0, .buf = reg1, .len = 4 }; + struct i2c_msg msg2 = { .addr = priv->i2c_address, .flags = 0, .buf = reg2, .len = 3 }; + +-#define _R 4 +-#define _P 32 ++#define _R_VAL 4 ++#define _P_VAL 32 + #define _ri 4000000 + + // setup register 0 +@@ -96,14 +96,14 @@ static int tua6100_set_params(struct dvb_frontend *fe) + else + reg1[1] = 0x0c; + +- if (_P == 64) ++ if (_P_VAL == 64) + reg1[1] |= 0x40; + if (c->frequency >= 1525000) + reg1[1] |= 0x80; + + // register 2 +- reg2[1] = (_R >> 8) & 0x03; +- reg2[2] = _R; ++ reg2[1] = (_R_VAL >> 8) & 0x03; ++ reg2[2] = _R_VAL; + if (c->frequency < 1455000) + reg2[1] |= 0x1c; + else if (c->frequency < 1630000) +@@ -115,18 +115,18 @@ static int tua6100_set_params(struct dvb_frontend *fe) + * The N divisor ratio (note: c->frequency is in kHz, but we + * need it in Hz) + */ +- prediv = (c->frequency * _R) / (_ri / 1000); +- div = prediv / _P; ++ prediv = (c->frequency * _R_VAL) / (_ri / 1000); ++ div = prediv / _P_VAL; + reg1[1] |= (div >> 9) & 0x03; + reg1[2] = div >> 1; + reg1[3] = (div << 7); +- priv->frequency = ((div * _P) * (_ri / 1000)) / _R; ++ priv->frequency = ((div * _P_VAL) * (_ri / 1000)) / _R_VAL; + + // Finally, calculate and store the value for A +- reg1[3] |= (prediv - (div*_P)) & 0x7f; ++ reg1[3] |= (prediv - (div*_P_VAL)) & 0x7f; + +-#undef _R +-#undef _P ++#undef _R_VAL ++#undef _P_VAL + #undef _ri + + if (fe->ops.i2c_gate_ctrl) +diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile +index 92773b2e6225..bfe0afc209b8 100644 +--- a/drivers/media/i2c/Makefile ++++ b/drivers/media/i2c/Makefile +@@ -29,7 +29,7 @@ obj-$(CONFIG_VIDEO_ADV7393) += adv7393.o + obj-$(CONFIG_VIDEO_ADV7604) += adv7604.o + obj-$(CONFIG_VIDEO_ADV7842) += adv7842.o + obj-$(CONFIG_VIDEO_AD9389B) += ad9389b.o +-obj-$(CONFIG_VIDEO_ADV7511) += adv7511.o ++obj-$(CONFIG_VIDEO_ADV7511) += adv7511-v4l2.o + 
obj-$(CONFIG_VIDEO_VPX3220) += vpx3220.o + obj-$(CONFIG_VIDEO_VS6624) += vs6624.o + obj-$(CONFIG_VIDEO_BT819) += bt819.o +diff --git a/drivers/media/i2c/adv7511-v4l2.c b/drivers/media/i2c/adv7511-v4l2.c +new file mode 100644 +index 000000000000..b87c9e7ff146 +--- /dev/null ++++ b/drivers/media/i2c/adv7511-v4l2.c +@@ -0,0 +1,2008 @@ ++/* ++ * Analog Devices ADV7511 HDMI Transmitter Device Driver ++ * ++ * Copyright 2013 Cisco Systems, Inc. and/or its affiliates. All rights reserved. ++ * ++ * This program is free software; you may redistribute it and/or modify ++ * it under the terms of the GNU General Public License as published by ++ * the Free Software Foundation; version 2 of the License. ++ * ++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, ++ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF ++ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ++ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS ++ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ++ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN ++ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE ++ * SOFTWARE. ++ */ ++ ++/* ++ * This file is named adv7511-v4l2.c so it doesn't conflict with the Analog ++ * Device ADV7511 (config fragment CONFIG_DRM_I2C_ADV7511). ++ */ ++ ++ ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++ ++static int debug; ++module_param(debug, int, 0644); ++MODULE_PARM_DESC(debug, "debug level (0-2)"); ++ ++MODULE_DESCRIPTION("Analog Devices ADV7511 HDMI Transmitter Device Driver"); ++MODULE_AUTHOR("Hans Verkuil"); ++MODULE_LICENSE("GPL v2"); ++ ++#define MASK_ADV7511_EDID_RDY_INT 0x04 ++#define MASK_ADV7511_MSEN_INT 0x40 ++#define MASK_ADV7511_HPD_INT 0x80 ++ ++#define MASK_ADV7511_HPD_DETECT 0x40 ++#define MASK_ADV7511_MSEN_DETECT 0x20 ++#define MASK_ADV7511_EDID_RDY 0x10 ++ ++#define EDID_MAX_RETRIES (8) ++#define EDID_DELAY 250 ++#define EDID_MAX_SEGM 8 ++ ++#define ADV7511_MAX_WIDTH 1920 ++#define ADV7511_MAX_HEIGHT 1200 ++#define ADV7511_MIN_PIXELCLOCK 20000000 ++#define ADV7511_MAX_PIXELCLOCK 225000000 ++ ++#define ADV7511_MAX_ADDRS (3) ++ ++/* ++********************************************************************** ++* ++* Arrays with configuration parameters for the ADV7511 ++* ++********************************************************************** ++*/ ++ ++struct i2c_reg_value { ++ unsigned char reg; ++ unsigned char value; ++}; ++ ++struct adv7511_state_edid { ++ /* total number of blocks */ ++ u32 blocks; ++ /* Number of segments read */ ++ u32 segments; ++ u8 data[EDID_MAX_SEGM * 256]; ++ /* Number of EDID read retries left */ ++ unsigned read_retries; ++ bool complete; ++}; ++ ++struct adv7511_state { ++ struct adv7511_platform_data pdata; ++ struct v4l2_subdev sd; ++ struct media_pad pad; ++ struct v4l2_ctrl_handler hdl; ++ int chip_revision; ++ u8 i2c_edid_addr; ++ u8 i2c_pktmem_addr; ++ u8 i2c_cec_addr; ++ ++ struct i2c_client *i2c_cec; ++ struct cec_adapter *cec_adap; ++ u8 cec_addr[ADV7511_MAX_ADDRS]; ++ u8 cec_valid_addrs; ++ bool cec_enabled_adap; ++ ++ /* Is the adv7511 powered on? */ ++ bool power_on; ++ /* Did we receive hotplug and rx-sense signals? 
*/ ++ bool have_monitor; ++ bool enabled_irq; ++ /* timings from s_dv_timings */ ++ struct v4l2_dv_timings dv_timings; ++ u32 fmt_code; ++ u32 colorspace; ++ u32 ycbcr_enc; ++ u32 quantization; ++ u32 xfer_func; ++ u32 content_type; ++ /* controls */ ++ struct v4l2_ctrl *hdmi_mode_ctrl; ++ struct v4l2_ctrl *hotplug_ctrl; ++ struct v4l2_ctrl *rx_sense_ctrl; ++ struct v4l2_ctrl *have_edid0_ctrl; ++ struct v4l2_ctrl *rgb_quantization_range_ctrl; ++ struct v4l2_ctrl *content_type_ctrl; ++ struct i2c_client *i2c_edid; ++ struct i2c_client *i2c_pktmem; ++ struct adv7511_state_edid edid; ++ /* Running counter of the number of detected EDIDs (for debugging) */ ++ unsigned edid_detect_counter; ++ struct workqueue_struct *work_queue; ++ struct delayed_work edid_handler; /* work entry */ ++}; ++ ++static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd); ++static bool adv7511_check_edid_status(struct v4l2_subdev *sd); ++static void adv7511_setup(struct v4l2_subdev *sd); ++static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq); ++static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq); ++ ++ ++static const struct v4l2_dv_timings_cap adv7511_timings_cap = { ++ .type = V4L2_DV_BT_656_1120, ++ /* keep this initialization for compatibility with GCC < 4.4.6 */ ++ .reserved = { 0 }, ++ V4L2_INIT_BT_TIMINGS(640, ADV7511_MAX_WIDTH, 350, ADV7511_MAX_HEIGHT, ++ ADV7511_MIN_PIXELCLOCK, ADV7511_MAX_PIXELCLOCK, ++ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | ++ V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT, ++ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING | ++ V4L2_DV_BT_CAP_CUSTOM) ++}; ++ ++static inline struct adv7511_state *get_adv7511_state(struct v4l2_subdev *sd) ++{ ++ return container_of(sd, struct adv7511_state, sd); ++} ++ ++static inline struct v4l2_subdev *to_sd(struct v4l2_ctrl *ctrl) ++{ ++ return &container_of(ctrl->handler, struct adv7511_state, hdl)->sd; ++} ++ ++/* ------------------------ I2C ----------------------------------------------- */ ++ ++static s32 adv_smbus_read_byte_data_check(struct i2c_client *client, ++ u8 command, bool check) ++{ ++ union i2c_smbus_data data; ++ ++ if (!i2c_smbus_xfer(client->adapter, client->addr, client->flags, ++ I2C_SMBUS_READ, command, ++ I2C_SMBUS_BYTE_DATA, &data)) ++ return data.byte; ++ if (check) ++ v4l_err(client, "error reading %02x, %02x\n", ++ client->addr, command); ++ return -1; ++} ++ ++static s32 adv_smbus_read_byte_data(struct i2c_client *client, u8 command) ++{ ++ int i; ++ for (i = 0; i < 3; i++) { ++ int ret = adv_smbus_read_byte_data_check(client, command, true); ++ if (ret >= 0) { ++ if (i) ++ v4l_err(client, "read ok after %d retries\n", i); ++ return ret; ++ } ++ } ++ v4l_err(client, "read failed\n"); ++ return -1; ++} ++ ++static int adv7511_rd(struct v4l2_subdev *sd, u8 reg) ++{ ++ struct i2c_client *client = v4l2_get_subdevdata(sd); ++ ++ return adv_smbus_read_byte_data(client, reg); ++} ++ ++static int adv7511_wr(struct v4l2_subdev *sd, u8 reg, u8 val) ++{ ++ struct i2c_client *client = v4l2_get_subdevdata(sd); ++ int ret; ++ int i; ++ ++ for (i = 0; i < 3; i++) { ++ ret = i2c_smbus_write_byte_data(client, reg, val); ++ if (ret == 0) ++ return 0; ++ } ++ v4l2_err(sd, "%s: i2c write error\n", __func__); ++ return ret; ++} ++ ++/* To set specific bits in the register, a clear-mask is given (to be AND-ed), ++ and then the value-mask (to be OR-ed). 
*/ ++static inline void adv7511_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask) ++{ ++ adv7511_wr(sd, reg, (adv7511_rd(sd, reg) & clr_mask) | val_mask); ++} ++ ++static int adv_smbus_read_i2c_block_data(struct i2c_client *client, ++ u8 command, unsigned length, u8 *values) ++{ ++ union i2c_smbus_data data; ++ int ret; ++ ++ if (length > I2C_SMBUS_BLOCK_MAX) ++ length = I2C_SMBUS_BLOCK_MAX; ++ data.block[0] = length; ++ ++ ret = i2c_smbus_xfer(client->adapter, client->addr, client->flags, ++ I2C_SMBUS_READ, command, ++ I2C_SMBUS_I2C_BLOCK_DATA, &data); ++ memcpy(values, data.block + 1, length); ++ return ret; ++} ++ ++static void adv7511_edid_rd(struct v4l2_subdev *sd, uint16_t len, uint8_t *buf) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ int i; ++ int err = 0; ++ ++ v4l2_dbg(1, debug, sd, "%s:\n", __func__); ++ ++ for (i = 0; !err && i < len; i += I2C_SMBUS_BLOCK_MAX) ++ err = adv_smbus_read_i2c_block_data(state->i2c_edid, i, ++ I2C_SMBUS_BLOCK_MAX, buf + i); ++ if (err) ++ v4l2_err(sd, "%s: i2c read error\n", __func__); ++} ++ ++static inline int adv7511_cec_read(struct v4l2_subdev *sd, u8 reg) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ return i2c_smbus_read_byte_data(state->i2c_cec, reg); ++} ++ ++static int adv7511_cec_write(struct v4l2_subdev *sd, u8 reg, u8 val) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ int ret; ++ int i; ++ ++ for (i = 0; i < 3; i++) { ++ ret = i2c_smbus_write_byte_data(state->i2c_cec, reg, val); ++ if (ret == 0) ++ return 0; ++ } ++ v4l2_err(sd, "%s: I2C Write Problem\n", __func__); ++ return ret; ++} ++ ++static inline int adv7511_cec_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask, ++ u8 val) ++{ ++ return adv7511_cec_write(sd, reg, (adv7511_cec_read(sd, reg) & mask) | val); ++} ++ ++static int adv7511_pktmem_rd(struct v4l2_subdev *sd, u8 reg) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ return adv_smbus_read_byte_data(state->i2c_pktmem, reg); ++} ++ ++static int adv7511_pktmem_wr(struct v4l2_subdev *sd, u8 reg, u8 val) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ int ret; ++ int i; ++ ++ for (i = 0; i < 3; i++) { ++ ret = i2c_smbus_write_byte_data(state->i2c_pktmem, reg, val); ++ if (ret == 0) ++ return 0; ++ } ++ v4l2_err(sd, "%s: i2c write error\n", __func__); ++ return ret; ++} ++ ++/* To set specific bits in the register, a clear-mask is given (to be AND-ed), ++ and then the value-mask (to be OR-ed). 
*/ ++static inline void adv7511_pktmem_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask) ++{ ++ adv7511_pktmem_wr(sd, reg, (adv7511_pktmem_rd(sd, reg) & clr_mask) | val_mask); ++} ++ ++static inline bool adv7511_have_hotplug(struct v4l2_subdev *sd) ++{ ++ return adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT; ++} ++ ++static inline bool adv7511_have_rx_sense(struct v4l2_subdev *sd) ++{ ++ return adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT; ++} ++ ++static void adv7511_csc_conversion_mode(struct v4l2_subdev *sd, u8 mode) ++{ ++ adv7511_wr_and_or(sd, 0x18, 0x9f, (mode & 0x3)<<5); ++} ++ ++static void adv7511_csc_coeff(struct v4l2_subdev *sd, ++ u16 A1, u16 A2, u16 A3, u16 A4, ++ u16 B1, u16 B2, u16 B3, u16 B4, ++ u16 C1, u16 C2, u16 C3, u16 C4) ++{ ++ /* A */ ++ adv7511_wr_and_or(sd, 0x18, 0xe0, A1>>8); ++ adv7511_wr(sd, 0x19, A1); ++ adv7511_wr_and_or(sd, 0x1A, 0xe0, A2>>8); ++ adv7511_wr(sd, 0x1B, A2); ++ adv7511_wr_and_or(sd, 0x1c, 0xe0, A3>>8); ++ adv7511_wr(sd, 0x1d, A3); ++ adv7511_wr_and_or(sd, 0x1e, 0xe0, A4>>8); ++ adv7511_wr(sd, 0x1f, A4); ++ ++ /* B */ ++ adv7511_wr_and_or(sd, 0x20, 0xe0, B1>>8); ++ adv7511_wr(sd, 0x21, B1); ++ adv7511_wr_and_or(sd, 0x22, 0xe0, B2>>8); ++ adv7511_wr(sd, 0x23, B2); ++ adv7511_wr_and_or(sd, 0x24, 0xe0, B3>>8); ++ adv7511_wr(sd, 0x25, B3); ++ adv7511_wr_and_or(sd, 0x26, 0xe0, B4>>8); ++ adv7511_wr(sd, 0x27, B4); ++ ++ /* C */ ++ adv7511_wr_and_or(sd, 0x28, 0xe0, C1>>8); ++ adv7511_wr(sd, 0x29, C1); ++ adv7511_wr_and_or(sd, 0x2A, 0xe0, C2>>8); ++ adv7511_wr(sd, 0x2B, C2); ++ adv7511_wr_and_or(sd, 0x2C, 0xe0, C3>>8); ++ adv7511_wr(sd, 0x2D, C3); ++ adv7511_wr_and_or(sd, 0x2E, 0xe0, C4>>8); ++ adv7511_wr(sd, 0x2F, C4); ++} ++ ++static void adv7511_csc_rgb_full2limit(struct v4l2_subdev *sd, bool enable) ++{ ++ if (enable) { ++ u8 csc_mode = 0; ++ adv7511_csc_conversion_mode(sd, csc_mode); ++ adv7511_csc_coeff(sd, ++ 4096-564, 0, 0, 256, ++ 0, 4096-564, 0, 256, ++ 0, 0, 4096-564, 256); ++ /* enable CSC */ ++ adv7511_wr_and_or(sd, 0x18, 0x7f, 0x80); ++ /* AVI infoframe: Limited range RGB (16-235) */ ++ adv7511_wr_and_or(sd, 0x57, 0xf3, 0x04); ++ } else { ++ /* disable CSC */ ++ adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0); ++ /* AVI infoframe: Full range RGB (0-255) */ ++ adv7511_wr_and_or(sd, 0x57, 0xf3, 0x08); ++ } ++} ++ ++static void adv7511_set_rgb_quantization_mode(struct v4l2_subdev *sd, struct v4l2_ctrl *ctrl) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ /* Only makes sense for RGB formats */ ++ if (state->fmt_code != MEDIA_BUS_FMT_RGB888_1X24) { ++ /* so just keep quantization */ ++ adv7511_csc_rgb_full2limit(sd, false); ++ return; ++ } ++ ++ switch (ctrl->val) { ++ case V4L2_DV_RGB_RANGE_AUTO: ++ /* automatic */ ++ if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) { ++ /* CE format, RGB limited range (16-235) */ ++ adv7511_csc_rgb_full2limit(sd, true); ++ } else { ++ /* not CE format, RGB full range (0-255) */ ++ adv7511_csc_rgb_full2limit(sd, false); ++ } ++ break; ++ case V4L2_DV_RGB_RANGE_LIMITED: ++ /* RGB limited range (16-235) */ ++ adv7511_csc_rgb_full2limit(sd, true); ++ break; ++ case V4L2_DV_RGB_RANGE_FULL: ++ /* RGB full range (0-255) */ ++ adv7511_csc_rgb_full2limit(sd, false); ++ break; ++ } ++} ++ ++/* ------------------------------ CTRL OPS ------------------------------ */ ++ ++static int adv7511_s_ctrl(struct v4l2_ctrl *ctrl) ++{ ++ struct v4l2_subdev *sd = to_sd(ctrl); ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ v4l2_dbg(1, debug, sd, "%s: ctrl id: %d, ctrl->val 
%d\n", __func__, ctrl->id, ctrl->val); ++ ++ if (state->hdmi_mode_ctrl == ctrl) { ++ /* Set HDMI or DVI-D */ ++ adv7511_wr_and_or(sd, 0xaf, 0xfd, ctrl->val == V4L2_DV_TX_MODE_HDMI ? 0x02 : 0x00); ++ return 0; ++ } ++ if (state->rgb_quantization_range_ctrl == ctrl) { ++ adv7511_set_rgb_quantization_mode(sd, ctrl); ++ return 0; ++ } ++ if (state->content_type_ctrl == ctrl) { ++ u8 itc, cn; ++ ++ state->content_type = ctrl->val; ++ itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC; ++ cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS; ++ adv7511_wr_and_or(sd, 0x57, 0x7f, itc << 7); ++ adv7511_wr_and_or(sd, 0x59, 0xcf, cn << 4); ++ return 0; ++ } ++ ++ return -EINVAL; ++} ++ ++static const struct v4l2_ctrl_ops adv7511_ctrl_ops = { ++ .s_ctrl = adv7511_s_ctrl, ++}; ++ ++/* ---------------------------- CORE OPS ------------------------------------------- */ ++ ++#ifdef CONFIG_VIDEO_ADV_DEBUG ++static void adv7511_inv_register(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ v4l2_info(sd, "0x000-0x0ff: Main Map\n"); ++ if (state->i2c_cec) ++ v4l2_info(sd, "0x100-0x1ff: CEC Map\n"); ++} ++ ++static int adv7511_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ reg->size = 1; ++ switch (reg->reg >> 8) { ++ case 0: ++ reg->val = adv7511_rd(sd, reg->reg & 0xff); ++ break; ++ case 1: ++ if (state->i2c_cec) { ++ reg->val = adv7511_cec_read(sd, reg->reg & 0xff); ++ break; ++ } ++ /* fall through */ ++ default: ++ v4l2_info(sd, "Register %03llx not supported\n", reg->reg); ++ adv7511_inv_register(sd); ++ break; ++ } ++ return 0; ++} ++ ++static int adv7511_s_register(struct v4l2_subdev *sd, const struct v4l2_dbg_register *reg) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ switch (reg->reg >> 8) { ++ case 0: ++ adv7511_wr(sd, reg->reg & 0xff, reg->val & 0xff); ++ break; ++ case 1: ++ if (state->i2c_cec) { ++ adv7511_cec_write(sd, reg->reg & 0xff, reg->val & 0xff); ++ break; ++ } ++ /* fall through */ ++ default: ++ v4l2_info(sd, "Register %03llx not supported\n", reg->reg); ++ adv7511_inv_register(sd); ++ break; ++ } ++ return 0; ++} ++#endif ++ ++struct adv7511_cfg_read_infoframe { ++ const char *desc; ++ u8 present_reg; ++ u8 present_mask; ++ u8 header[3]; ++ u16 payload_addr; ++}; ++ ++static u8 hdmi_infoframe_checksum(u8 *ptr, size_t size) ++{ ++ u8 csum = 0; ++ size_t i; ++ ++ /* compute checksum */ ++ for (i = 0; i < size; i++) ++ csum += ptr[i]; ++ ++ return 256 - csum; ++} ++ ++static void log_infoframe(struct v4l2_subdev *sd, const struct adv7511_cfg_read_infoframe *cri) ++{ ++ struct i2c_client *client = v4l2_get_subdevdata(sd); ++ struct device *dev = &client->dev; ++ union hdmi_infoframe frame; ++ u8 buffer[32]; ++ u8 len; ++ int i; ++ ++ if (!(adv7511_rd(sd, cri->present_reg) & cri->present_mask)) { ++ v4l2_info(sd, "%s infoframe not transmitted\n", cri->desc); ++ return; ++ } ++ ++ memcpy(buffer, cri->header, sizeof(cri->header)); ++ ++ len = buffer[2]; ++ ++ if (len + 4 > sizeof(buffer)) { ++ v4l2_err(sd, "%s: invalid %s infoframe length %d\n", __func__, cri->desc, len); ++ return; ++ } ++ ++ if (cri->payload_addr >= 0x100) { ++ for (i = 0; i < len; i++) ++ buffer[i + 4] = adv7511_pktmem_rd(sd, cri->payload_addr + i - 0x100); ++ } else { ++ for (i = 0; i < len; i++) ++ buffer[i + 4] = adv7511_rd(sd, cri->payload_addr + i); ++ } ++ buffer[3] = 0; ++ buffer[3] = hdmi_infoframe_checksum(buffer, len + 4); ++ ++ if 
(hdmi_infoframe_unpack(&frame, buffer) < 0) { ++ v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc); ++ return; ++ } ++ ++ hdmi_infoframe_log(KERN_INFO, dev, &frame); ++} ++ ++static void adv7511_log_infoframes(struct v4l2_subdev *sd) ++{ ++ static const struct adv7511_cfg_read_infoframe cri[] = { ++ { "AVI", 0x44, 0x10, { 0x82, 2, 13 }, 0x55 }, ++ { "Audio", 0x44, 0x08, { 0x84, 1, 10 }, 0x73 }, ++ { "SDP", 0x40, 0x40, { 0x83, 1, 25 }, 0x103 }, ++ }; ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(cri); i++) ++ log_infoframe(sd, &cri[i]); ++} ++ ++static int adv7511_log_status(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ struct adv7511_state_edid *edid = &state->edid; ++ int i; ++ ++ static const char * const states[] = { ++ "in reset", ++ "reading EDID", ++ "idle", ++ "initializing HDCP", ++ "HDCP enabled", ++ "initializing HDCP repeater", ++ "6", "7", "8", "9", "A", "B", "C", "D", "E", "F" ++ }; ++ static const char * const errors[] = { ++ "no error", ++ "bad receiver BKSV", ++ "Ri mismatch", ++ "Pj mismatch", ++ "i2c error", ++ "timed out", ++ "max repeater cascade exceeded", ++ "hash check failed", ++ "too many devices", ++ "9", "A", "B", "C", "D", "E", "F" ++ }; ++ ++ v4l2_info(sd, "power %s\n", state->power_on ? "on" : "off"); ++ v4l2_info(sd, "%s hotplug, %s Rx Sense, %s EDID (%d block(s))\n", ++ (adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT) ? "detected" : "no", ++ (adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT) ? "detected" : "no", ++ edid->segments ? "found" : "no", ++ edid->blocks); ++ v4l2_info(sd, "%s output %s\n", ++ (adv7511_rd(sd, 0xaf) & 0x02) ? ++ "HDMI" : "DVI-D", ++ (adv7511_rd(sd, 0xa1) & 0x3c) ? ++ "disabled" : "enabled"); ++ v4l2_info(sd, "state: %s, error: %s, detect count: %u, msk/irq: %02x/%02x\n", ++ states[adv7511_rd(sd, 0xc8) & 0xf], ++ errors[adv7511_rd(sd, 0xc8) >> 4], state->edid_detect_counter, ++ adv7511_rd(sd, 0x94), adv7511_rd(sd, 0x96)); ++ v4l2_info(sd, "RGB quantization: %s range\n", adv7511_rd(sd, 0x18) & 0x80 ? "limited" : "full"); ++ if (adv7511_rd(sd, 0xaf) & 0x02) { ++ /* HDMI only */ ++ u8 manual_cts = adv7511_rd(sd, 0x0a) & 0x80; ++ u32 N = (adv7511_rd(sd, 0x01) & 0xf) << 16 | ++ adv7511_rd(sd, 0x02) << 8 | ++ adv7511_rd(sd, 0x03); ++ u8 vic_detect = adv7511_rd(sd, 0x3e) >> 2; ++ u8 vic_sent = adv7511_rd(sd, 0x3d) & 0x3f; ++ u32 CTS; ++ ++ if (manual_cts) ++ CTS = (adv7511_rd(sd, 0x07) & 0xf) << 16 | ++ adv7511_rd(sd, 0x08) << 8 | ++ adv7511_rd(sd, 0x09); ++ else ++ CTS = (adv7511_rd(sd, 0x04) & 0xf) << 16 | ++ adv7511_rd(sd, 0x05) << 8 | ++ adv7511_rd(sd, 0x06); ++ v4l2_info(sd, "CTS %s mode: N %d, CTS %d\n", ++ manual_cts ? "manual" : "automatic", N, CTS); ++ v4l2_info(sd, "VIC: detected %d, sent %d\n", ++ vic_detect, vic_sent); ++ adv7511_log_infoframes(sd); ++ } ++ if (state->dv_timings.type == V4L2_DV_BT_656_1120) ++ v4l2_print_dv_timings(sd->name, "timings: ", ++ &state->dv_timings, false); ++ else ++ v4l2_info(sd, "no timings set\n"); ++ v4l2_info(sd, "i2c edid addr: 0x%x\n", state->i2c_edid_addr); ++ ++ if (state->i2c_cec == NULL) ++ return 0; ++ ++ v4l2_info(sd, "i2c cec addr: 0x%x\n", state->i2c_cec_addr); ++ ++ v4l2_info(sd, "CEC: %s\n", state->cec_enabled_adap ? 
++ "enabled" : "disabled"); ++ if (state->cec_enabled_adap) { ++ for (i = 0; i < ADV7511_MAX_ADDRS; i++) { ++ bool is_valid = state->cec_valid_addrs & (1 << i); ++ ++ if (is_valid) ++ v4l2_info(sd, "CEC Logical Address: 0x%x\n", ++ state->cec_addr[i]); ++ } ++ } ++ v4l2_info(sd, "i2c pktmem addr: 0x%x\n", state->i2c_pktmem_addr); ++ return 0; ++} ++ ++/* Power up/down adv7511 */ ++static int adv7511_s_power(struct v4l2_subdev *sd, int on) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ const int retries = 20; ++ int i; ++ ++ v4l2_dbg(1, debug, sd, "%s: power %s\n", __func__, on ? "on" : "off"); ++ ++ state->power_on = on; ++ ++ if (!on) { ++ /* Power down */ ++ adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40); ++ return true; ++ } ++ ++ /* Power up */ ++ /* The adv7511 does not always come up immediately. ++ Retry multiple times. */ ++ for (i = 0; i < retries; i++) { ++ adv7511_wr_and_or(sd, 0x41, 0xbf, 0x0); ++ if ((adv7511_rd(sd, 0x41) & 0x40) == 0) ++ break; ++ adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40); ++ msleep(10); ++ } ++ if (i == retries) { ++ v4l2_dbg(1, debug, sd, "%s: failed to powerup the adv7511!\n", __func__); ++ adv7511_s_power(sd, 0); ++ return false; ++ } ++ if (i > 1) ++ v4l2_dbg(1, debug, sd, "%s: needed %d retries to powerup the adv7511\n", __func__, i); ++ ++ /* Reserved registers that must be set */ ++ adv7511_wr(sd, 0x98, 0x03); ++ adv7511_wr_and_or(sd, 0x9a, 0xfe, 0x70); ++ adv7511_wr(sd, 0x9c, 0x30); ++ adv7511_wr_and_or(sd, 0x9d, 0xfc, 0x01); ++ adv7511_wr(sd, 0xa2, 0xa4); ++ adv7511_wr(sd, 0xa3, 0xa4); ++ adv7511_wr(sd, 0xe0, 0xd0); ++ adv7511_wr(sd, 0xf9, 0x00); ++ ++ adv7511_wr(sd, 0x43, state->i2c_edid_addr); ++ adv7511_wr(sd, 0x45, state->i2c_pktmem_addr); ++ ++ /* Set number of attempts to read the EDID */ ++ adv7511_wr(sd, 0xc9, 0xf); ++ return true; ++} ++ ++#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC) ++static int adv7511_cec_adap_enable(struct cec_adapter *adap, bool enable) ++{ ++ struct adv7511_state *state = adap->priv; ++ struct v4l2_subdev *sd = &state->sd; ++ ++ if (state->i2c_cec == NULL) ++ return -EIO; ++ ++ if (!state->cec_enabled_adap && enable) { ++ /* power up cec section */ ++ adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x01); ++ /* legacy mode and clear all rx buffers */ ++ adv7511_cec_write(sd, 0x4a, 0x07); ++ adv7511_cec_write(sd, 0x4a, 0); ++ adv7511_cec_write_and_or(sd, 0x11, 0xfe, 0); /* initially disable tx */ ++ /* enabled irqs: */ ++ /* tx: ready */ ++ /* tx: arbitration lost */ ++ /* tx: retry timeout */ ++ /* rx: ready 1 */ ++ if (state->enabled_irq) ++ adv7511_wr_and_or(sd, 0x95, 0xc0, 0x39); ++ } else if (state->cec_enabled_adap && !enable) { ++ if (state->enabled_irq) ++ adv7511_wr_and_or(sd, 0x95, 0xc0, 0x00); ++ /* disable address mask 1-3 */ ++ adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0x00); ++ /* power down cec section */ ++ adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x00); ++ state->cec_valid_addrs = 0; ++ } ++ state->cec_enabled_adap = enable; ++ return 0; ++} ++ ++static int adv7511_cec_adap_log_addr(struct cec_adapter *adap, u8 addr) ++{ ++ struct adv7511_state *state = adap->priv; ++ struct v4l2_subdev *sd = &state->sd; ++ unsigned int i, free_idx = ADV7511_MAX_ADDRS; ++ ++ if (!state->cec_enabled_adap) ++ return addr == CEC_LOG_ADDR_INVALID ? 
0 : -EIO; ++ ++ if (addr == CEC_LOG_ADDR_INVALID) { ++ adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0); ++ state->cec_valid_addrs = 0; ++ return 0; ++ } ++ ++ for (i = 0; i < ADV7511_MAX_ADDRS; i++) { ++ bool is_valid = state->cec_valid_addrs & (1 << i); ++ ++ if (free_idx == ADV7511_MAX_ADDRS && !is_valid) ++ free_idx = i; ++ if (is_valid && state->cec_addr[i] == addr) ++ return 0; ++ } ++ if (i == ADV7511_MAX_ADDRS) { ++ i = free_idx; ++ if (i == ADV7511_MAX_ADDRS) ++ return -ENXIO; ++ } ++ state->cec_addr[i] = addr; ++ state->cec_valid_addrs |= 1 << i; ++ ++ switch (i) { ++ case 0: ++ /* enable address mask 0 */ ++ adv7511_cec_write_and_or(sd, 0x4b, 0xef, 0x10); ++ /* set address for mask 0 */ ++ adv7511_cec_write_and_or(sd, 0x4c, 0xf0, addr); ++ break; ++ case 1: ++ /* enable address mask 1 */ ++ adv7511_cec_write_and_or(sd, 0x4b, 0xdf, 0x20); ++ /* set address for mask 1 */ ++ adv7511_cec_write_and_or(sd, 0x4c, 0x0f, addr << 4); ++ break; ++ case 2: ++ /* enable address mask 2 */ ++ adv7511_cec_write_and_or(sd, 0x4b, 0xbf, 0x40); ++ /* set address for mask 1 */ ++ adv7511_cec_write_and_or(sd, 0x4d, 0xf0, addr); ++ break; ++ } ++ return 0; ++} ++ ++static int adv7511_cec_adap_transmit(struct cec_adapter *adap, u8 attempts, ++ u32 signal_free_time, struct cec_msg *msg) ++{ ++ struct adv7511_state *state = adap->priv; ++ struct v4l2_subdev *sd = &state->sd; ++ u8 len = msg->len; ++ unsigned int i; ++ ++ v4l2_dbg(1, debug, sd, "%s: len %d\n", __func__, len); ++ ++ if (len > 16) { ++ v4l2_err(sd, "%s: len exceeded 16 (%d)\n", __func__, len); ++ return -EINVAL; ++ } ++ ++ /* ++ * The number of retries is the number of attempts - 1, but retry ++ * at least once. It's not clear if a value of 0 is allowed, so ++ * let's do at least one retry. ++ */ ++ adv7511_cec_write_and_or(sd, 0x12, ~0x70, max(1, attempts - 1) << 4); ++ ++ /* blocking, clear cec tx irq status */ ++ adv7511_wr_and_or(sd, 0x97, 0xc7, 0x38); ++ ++ /* write data */ ++ for (i = 0; i < len; i++) ++ adv7511_cec_write(sd, i, msg->msg[i]); ++ ++ /* set length (data + header) */ ++ adv7511_cec_write(sd, 0x10, len); ++ /* start transmit, enable tx */ ++ adv7511_cec_write(sd, 0x11, 0x01); ++ return 0; ++} ++ ++static void adv_cec_tx_raw_status(struct v4l2_subdev *sd, u8 tx_raw_status) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ if ((adv7511_cec_read(sd, 0x11) & 0x01) == 0) { ++ v4l2_dbg(1, debug, sd, "%s: tx raw: tx disabled\n", __func__); ++ return; ++ } ++ ++ if (tx_raw_status & 0x10) { ++ v4l2_dbg(1, debug, sd, ++ "%s: tx raw: arbitration lost\n", __func__); ++ cec_transmit_done(state->cec_adap, CEC_TX_STATUS_ARB_LOST, ++ 1, 0, 0, 0); ++ return; ++ } ++ if (tx_raw_status & 0x08) { ++ u8 status; ++ u8 nack_cnt; ++ u8 low_drive_cnt; ++ ++ v4l2_dbg(1, debug, sd, "%s: tx raw: retry failed\n", __func__); ++ /* ++ * We set this status bit since this hardware performs ++ * retransmissions. 
++ */ ++ status = CEC_TX_STATUS_MAX_RETRIES; ++ nack_cnt = adv7511_cec_read(sd, 0x14) & 0xf; ++ if (nack_cnt) ++ status |= CEC_TX_STATUS_NACK; ++ low_drive_cnt = adv7511_cec_read(sd, 0x14) >> 4; ++ if (low_drive_cnt) ++ status |= CEC_TX_STATUS_LOW_DRIVE; ++ cec_transmit_done(state->cec_adap, status, ++ 0, nack_cnt, low_drive_cnt, 0); ++ return; ++ } ++ if (tx_raw_status & 0x20) { ++ v4l2_dbg(1, debug, sd, "%s: tx raw: ready ok\n", __func__); ++ cec_transmit_done(state->cec_adap, CEC_TX_STATUS_OK, 0, 0, 0, 0); ++ return; ++ } ++} ++ ++static const struct cec_adap_ops adv7511_cec_adap_ops = { ++ .adap_enable = adv7511_cec_adap_enable, ++ .adap_log_addr = adv7511_cec_adap_log_addr, ++ .adap_transmit = adv7511_cec_adap_transmit, ++}; ++#endif ++ ++/* Enable interrupts */ ++static void adv7511_set_isr(struct v4l2_subdev *sd, bool enable) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ u8 irqs = MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT; ++ u8 irqs_rd; ++ int retries = 100; ++ ++ v4l2_dbg(2, debug, sd, "%s: %s\n", __func__, enable ? "enable" : "disable"); ++ ++ if (state->enabled_irq == enable) ++ return; ++ state->enabled_irq = enable; ++ ++ /* The datasheet says that the EDID ready interrupt should be ++ disabled if there is no hotplug. */ ++ if (!enable) ++ irqs = 0; ++ else if (adv7511_have_hotplug(sd)) ++ irqs |= MASK_ADV7511_EDID_RDY_INT; ++ ++ adv7511_wr_and_or(sd, 0x95, 0xc0, ++ (state->cec_enabled_adap && enable) ? 0x39 : 0x00); ++ ++ /* ++ * This i2c write can fail (approx. 1 in 1000 writes). But it ++ * is essential that this register is correct, so retry it ++ * multiple times. ++ * ++ * Note that the i2c write does not report an error, but the readback ++ * clearly shows the wrong value. ++ */ ++ do { ++ adv7511_wr(sd, 0x94, irqs); ++ irqs_rd = adv7511_rd(sd, 0x94); ++ } while (retries-- && irqs_rd != irqs); ++ ++ if (irqs_rd == irqs) ++ return; ++ v4l2_err(sd, "Could not set interrupts: hw failure?\n"); ++} ++ ++/* Interrupt handler */ ++static int adv7511_isr(struct v4l2_subdev *sd, u32 status, bool *handled) ++{ ++ u8 irq_status; ++ u8 cec_irq; ++ ++ /* disable interrupts to prevent a race condition */ ++ adv7511_set_isr(sd, false); ++ irq_status = adv7511_rd(sd, 0x96); ++ cec_irq = adv7511_rd(sd, 0x97); ++ /* clear detected interrupts */ ++ adv7511_wr(sd, 0x96, irq_status); ++ adv7511_wr(sd, 0x97, cec_irq); ++ ++ v4l2_dbg(1, debug, sd, "%s: irq 0x%x, cec-irq 0x%x\n", __func__, ++ irq_status, cec_irq); ++ ++ if (irq_status & (MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT)) ++ adv7511_check_monitor_present_status(sd); ++ if (irq_status & MASK_ADV7511_EDID_RDY_INT) ++ adv7511_check_edid_status(sd); ++ ++#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC) ++ if (cec_irq & 0x38) ++ adv_cec_tx_raw_status(sd, cec_irq); ++ ++ if (cec_irq & 1) { ++ struct adv7511_state *state = get_adv7511_state(sd); ++ struct cec_msg msg; ++ ++ msg.len = adv7511_cec_read(sd, 0x25) & 0x1f; ++ ++ v4l2_dbg(1, debug, sd, "%s: cec msg len %d\n", __func__, ++ msg.len); ++ ++ if (msg.len > 16) ++ msg.len = 16; ++ ++ if (msg.len) { ++ u8 i; ++ ++ for (i = 0; i < msg.len; i++) ++ msg.msg[i] = adv7511_cec_read(sd, i + 0x15); ++ ++ adv7511_cec_write(sd, 0x4a, 1); /* toggle to re-enable rx 1 */ ++ adv7511_cec_write(sd, 0x4a, 0); ++ cec_received_msg(state->cec_adap, &msg); ++ } ++ } ++#endif ++ ++ /* enable interrupts */ ++ adv7511_set_isr(sd, true); ++ ++ if (handled) ++ *handled = true; ++ return 0; ++} ++ ++static const struct v4l2_subdev_core_ops adv7511_core_ops = { ++ .log_status = adv7511_log_status, 
++#ifdef CONFIG_VIDEO_ADV_DEBUG ++ .g_register = adv7511_g_register, ++ .s_register = adv7511_s_register, ++#endif ++ .s_power = adv7511_s_power, ++ .interrupt_service_routine = adv7511_isr, ++}; ++ ++/* ------------------------------ VIDEO OPS ------------------------------ */ ++ ++/* Enable/disable adv7511 output */ ++static int adv7511_s_stream(struct v4l2_subdev *sd, int enable) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? "en" : "dis")); ++ adv7511_wr_and_or(sd, 0xa1, ~0x3c, (enable ? 0 : 0x3c)); ++ if (enable) { ++ adv7511_check_monitor_present_status(sd); ++ } else { ++ adv7511_s_power(sd, 0); ++ state->have_monitor = false; ++ } ++ return 0; ++} ++ ++static int adv7511_s_dv_timings(struct v4l2_subdev *sd, ++ struct v4l2_dv_timings *timings) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ struct v4l2_bt_timings *bt = &timings->bt; ++ u32 fps; ++ ++ v4l2_dbg(1, debug, sd, "%s:\n", __func__); ++ ++ /* quick sanity check */ ++ if (!v4l2_valid_dv_timings(timings, &adv7511_timings_cap, NULL, NULL)) ++ return -EINVAL; ++ ++ /* Fill the optional fields .standards and .flags in struct v4l2_dv_timings ++ if the format is one of the CEA or DMT timings. */ ++ v4l2_find_dv_timings_cap(timings, &adv7511_timings_cap, 0, NULL, NULL); ++ ++ /* save timings */ ++ state->dv_timings = *timings; ++ ++ /* set h/vsync polarities */ ++ adv7511_wr_and_or(sd, 0x17, 0x9f, ++ ((bt->polarities & V4L2_DV_VSYNC_POS_POL) ? 0 : 0x40) | ++ ((bt->polarities & V4L2_DV_HSYNC_POS_POL) ? 0 : 0x20)); ++ ++ fps = (u32)bt->pixelclock / (V4L2_DV_BT_FRAME_WIDTH(bt) * V4L2_DV_BT_FRAME_HEIGHT(bt)); ++ switch (fps) { ++ case 24: ++ adv7511_wr_and_or(sd, 0xfb, 0xf9, 1 << 1); ++ break; ++ case 25: ++ adv7511_wr_and_or(sd, 0xfb, 0xf9, 2 << 1); ++ break; ++ case 30: ++ adv7511_wr_and_or(sd, 0xfb, 0xf9, 3 << 1); ++ break; ++ default: ++ adv7511_wr_and_or(sd, 0xfb, 0xf9, 0); ++ break; ++ } ++ ++ /* update quantization range based on new dv_timings */ ++ adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl); ++ ++ return 0; ++} ++ ++static int adv7511_g_dv_timings(struct v4l2_subdev *sd, ++ struct v4l2_dv_timings *timings) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ v4l2_dbg(1, debug, sd, "%s:\n", __func__); ++ ++ if (!timings) ++ return -EINVAL; ++ ++ *timings = state->dv_timings; ++ ++ return 0; ++} ++ ++static int adv7511_enum_dv_timings(struct v4l2_subdev *sd, ++ struct v4l2_enum_dv_timings *timings) ++{ ++ if (timings->pad != 0) ++ return -EINVAL; ++ ++ return v4l2_enum_dv_timings_cap(timings, &adv7511_timings_cap, NULL, NULL); ++} ++ ++static int adv7511_dv_timings_cap(struct v4l2_subdev *sd, ++ struct v4l2_dv_timings_cap *cap) ++{ ++ if (cap->pad != 0) ++ return -EINVAL; ++ ++ *cap = adv7511_timings_cap; ++ return 0; ++} ++ ++static const struct v4l2_subdev_video_ops adv7511_video_ops = { ++ .s_stream = adv7511_s_stream, ++ .s_dv_timings = adv7511_s_dv_timings, ++ .g_dv_timings = adv7511_g_dv_timings, ++}; ++ ++/* ------------------------------ AUDIO OPS ------------------------------ */ ++static int adv7511_s_audio_stream(struct v4l2_subdev *sd, int enable) ++{ ++ v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? 
"en" : "dis")); ++ ++ if (enable) ++ adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x80); ++ else ++ adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x40); ++ ++ return 0; ++} ++ ++static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq) ++{ ++ u32 N; ++ ++ switch (freq) { ++ case 32000: N = 4096; break; ++ case 44100: N = 6272; break; ++ case 48000: N = 6144; break; ++ case 88200: N = 12544; break; ++ case 96000: N = 12288; break; ++ case 176400: N = 25088; break; ++ case 192000: N = 24576; break; ++ default: ++ return -EINVAL; ++ } ++ ++ /* Set N (used with CTS to regenerate the audio clock) */ ++ adv7511_wr(sd, 0x01, (N >> 16) & 0xf); ++ adv7511_wr(sd, 0x02, (N >> 8) & 0xff); ++ adv7511_wr(sd, 0x03, N & 0xff); ++ ++ return 0; ++} ++ ++static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq) ++{ ++ u32 i2s_sf; ++ ++ switch (freq) { ++ case 32000: i2s_sf = 0x30; break; ++ case 44100: i2s_sf = 0x00; break; ++ case 48000: i2s_sf = 0x20; break; ++ case 88200: i2s_sf = 0x80; break; ++ case 96000: i2s_sf = 0xa0; break; ++ case 176400: i2s_sf = 0xc0; break; ++ case 192000: i2s_sf = 0xe0; break; ++ default: ++ return -EINVAL; ++ } ++ ++ /* Set sampling frequency for I2S audio to 48 kHz */ ++ adv7511_wr_and_or(sd, 0x15, 0xf, i2s_sf); ++ ++ return 0; ++} ++ ++static int adv7511_s_routing(struct v4l2_subdev *sd, u32 input, u32 output, u32 config) ++{ ++ /* Only 2 channels in use for application */ ++ adv7511_wr_and_or(sd, 0x73, 0xf8, 0x1); ++ /* Speaker mapping */ ++ adv7511_wr(sd, 0x76, 0x00); ++ ++ /* 16 bit audio word length */ ++ adv7511_wr_and_or(sd, 0x14, 0xf0, 0x02); ++ ++ return 0; ++} ++ ++static const struct v4l2_subdev_audio_ops adv7511_audio_ops = { ++ .s_stream = adv7511_s_audio_stream, ++ .s_clock_freq = adv7511_s_clock_freq, ++ .s_i2s_clock_freq = adv7511_s_i2s_clock_freq, ++ .s_routing = adv7511_s_routing, ++}; ++ ++/* ---------------------------- PAD OPS ------------------------------------- */ ++ ++static int adv7511_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ memset(edid->reserved, 0, sizeof(edid->reserved)); ++ ++ if (edid->pad != 0) ++ return -EINVAL; ++ ++ if (edid->start_block == 0 && edid->blocks == 0) { ++ edid->blocks = state->edid.segments * 2; ++ return 0; ++ } ++ ++ if (state->edid.segments == 0) ++ return -ENODATA; ++ ++ if (edid->start_block >= state->edid.segments * 2) ++ return -EINVAL; ++ ++ if (edid->start_block + edid->blocks > state->edid.segments * 2) ++ edid->blocks = state->edid.segments * 2 - edid->start_block; ++ ++ memcpy(edid->edid, &state->edid.data[edid->start_block * 128], ++ 128 * edid->blocks); ++ ++ return 0; ++} ++ ++static int adv7511_enum_mbus_code(struct v4l2_subdev *sd, ++ struct v4l2_subdev_pad_config *cfg, ++ struct v4l2_subdev_mbus_code_enum *code) ++{ ++ if (code->pad != 0) ++ return -EINVAL; ++ ++ switch (code->index) { ++ case 0: ++ code->code = MEDIA_BUS_FMT_RGB888_1X24; ++ break; ++ case 1: ++ code->code = MEDIA_BUS_FMT_YUYV8_1X16; ++ break; ++ case 2: ++ code->code = MEDIA_BUS_FMT_UYVY8_1X16; ++ break; ++ default: ++ return -EINVAL; ++ } ++ return 0; ++} ++ ++static void adv7511_fill_format(struct adv7511_state *state, ++ struct v4l2_mbus_framefmt *format) ++{ ++ format->width = state->dv_timings.bt.width; ++ format->height = state->dv_timings.bt.height; ++ format->field = V4L2_FIELD_NONE; ++} ++ ++static int adv7511_get_fmt(struct v4l2_subdev *sd, ++ struct v4l2_subdev_pad_config *cfg, ++ struct v4l2_subdev_format *format) ++{ ++ struct adv7511_state *state = 
get_adv7511_state(sd); ++ ++ if (format->pad != 0) ++ return -EINVAL; ++ ++ memset(&format->format, 0, sizeof(format->format)); ++ adv7511_fill_format(state, &format->format); ++ ++ if (format->which == V4L2_SUBDEV_FORMAT_TRY) { ++ struct v4l2_mbus_framefmt *fmt; ++ ++ fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad); ++ format->format.code = fmt->code; ++ format->format.colorspace = fmt->colorspace; ++ format->format.ycbcr_enc = fmt->ycbcr_enc; ++ format->format.quantization = fmt->quantization; ++ format->format.xfer_func = fmt->xfer_func; ++ } else { ++ format->format.code = state->fmt_code; ++ format->format.colorspace = state->colorspace; ++ format->format.ycbcr_enc = state->ycbcr_enc; ++ format->format.quantization = state->quantization; ++ format->format.xfer_func = state->xfer_func; ++ } ++ ++ return 0; ++} ++ ++static int adv7511_set_fmt(struct v4l2_subdev *sd, ++ struct v4l2_subdev_pad_config *cfg, ++ struct v4l2_subdev_format *format) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ /* ++ * Bitfield namings come the CEA-861-F standard, table 8 "Auxiliary ++ * Video Information (AVI) InfoFrame Format" ++ * ++ * c = Colorimetry ++ * ec = Extended Colorimetry ++ * y = RGB or YCbCr ++ * q = RGB Quantization Range ++ * yq = YCC Quantization Range ++ */ ++ u8 c = HDMI_COLORIMETRY_NONE; ++ u8 ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; ++ u8 y = HDMI_COLORSPACE_RGB; ++ u8 q = HDMI_QUANTIZATION_RANGE_DEFAULT; ++ u8 yq = HDMI_YCC_QUANTIZATION_RANGE_LIMITED; ++ u8 itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC; ++ u8 cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS; ++ ++ if (format->pad != 0) ++ return -EINVAL; ++ switch (format->format.code) { ++ case MEDIA_BUS_FMT_UYVY8_1X16: ++ case MEDIA_BUS_FMT_YUYV8_1X16: ++ case MEDIA_BUS_FMT_RGB888_1X24: ++ break; ++ default: ++ return -EINVAL; ++ } ++ ++ adv7511_fill_format(state, &format->format); ++ if (format->which == V4L2_SUBDEV_FORMAT_TRY) { ++ struct v4l2_mbus_framefmt *fmt; ++ ++ fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad); ++ fmt->code = format->format.code; ++ fmt->colorspace = format->format.colorspace; ++ fmt->ycbcr_enc = format->format.ycbcr_enc; ++ fmt->quantization = format->format.quantization; ++ fmt->xfer_func = format->format.xfer_func; ++ return 0; ++ } ++ ++ switch (format->format.code) { ++ case MEDIA_BUS_FMT_UYVY8_1X16: ++ adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01); ++ adv7511_wr_and_or(sd, 0x16, 0x03, 0xb8); ++ y = HDMI_COLORSPACE_YUV422; ++ break; ++ case MEDIA_BUS_FMT_YUYV8_1X16: ++ adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01); ++ adv7511_wr_and_or(sd, 0x16, 0x03, 0xbc); ++ y = HDMI_COLORSPACE_YUV422; ++ break; ++ case MEDIA_BUS_FMT_RGB888_1X24: ++ default: ++ adv7511_wr_and_or(sd, 0x15, 0xf0, 0x00); ++ adv7511_wr_and_or(sd, 0x16, 0x03, 0x00); ++ break; ++ } ++ state->fmt_code = format->format.code; ++ state->colorspace = format->format.colorspace; ++ state->ycbcr_enc = format->format.ycbcr_enc; ++ state->quantization = format->format.quantization; ++ state->xfer_func = format->format.xfer_func; ++ ++ switch (format->format.colorspace) { ++ case V4L2_COLORSPACE_ADOBERGB: ++ c = HDMI_COLORIMETRY_EXTENDED; ++ ec = y ? HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601 : ++ HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB; ++ break; ++ case V4L2_COLORSPACE_SMPTE170M: ++ c = y ? 
HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE; ++ if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV601) { ++ c = HDMI_COLORIMETRY_EXTENDED; ++ ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; ++ } ++ break; ++ case V4L2_COLORSPACE_REC709: ++ c = y ? HDMI_COLORIMETRY_ITU_709 : HDMI_COLORIMETRY_NONE; ++ if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV709) { ++ c = HDMI_COLORIMETRY_EXTENDED; ++ ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_709; ++ } ++ break; ++ case V4L2_COLORSPACE_SRGB: ++ c = y ? HDMI_COLORIMETRY_EXTENDED : HDMI_COLORIMETRY_NONE; ++ ec = y ? HDMI_EXTENDED_COLORIMETRY_S_YCC_601 : ++ HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; ++ break; ++ case V4L2_COLORSPACE_BT2020: ++ c = HDMI_COLORIMETRY_EXTENDED; ++ if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_BT2020_CONST_LUM) ++ ec = 5; /* Not yet available in hdmi.h */ ++ else ++ ec = 6; /* Not yet available in hdmi.h */ ++ break; ++ default: ++ break; ++ } ++ ++ /* ++ * CEA-861-F says that for RGB formats the YCC range must match the ++ * RGB range, although sources should ignore the YCC range. ++ * ++ * The RGB quantization range shouldn't be non-zero if the EDID doesn't ++ * have the Q bit set in the Video Capabilities Data Block, however this ++ * isn't checked at the moment. The assumption is that the application ++ * knows the EDID and can detect this. ++ * ++ * The same is true for the YCC quantization range: non-standard YCC ++ * quantization ranges should only be sent if the EDID has the YQ bit ++ * set in the Video Capabilities Data Block. ++ */ ++ switch (format->format.quantization) { ++ case V4L2_QUANTIZATION_FULL_RANGE: ++ q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT : ++ HDMI_QUANTIZATION_RANGE_FULL; ++ yq = q ? q - 1 : HDMI_YCC_QUANTIZATION_RANGE_FULL; ++ break; ++ case V4L2_QUANTIZATION_LIM_RANGE: ++ q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT : ++ HDMI_QUANTIZATION_RANGE_LIMITED; ++ yq = q ? 
q - 1 : HDMI_YCC_QUANTIZATION_RANGE_LIMITED; ++ break; ++ } ++ ++ adv7511_wr_and_or(sd, 0x4a, 0xbf, 0); ++ adv7511_wr_and_or(sd, 0x55, 0x9f, y << 5); ++ adv7511_wr_and_or(sd, 0x56, 0x3f, c << 6); ++ adv7511_wr_and_or(sd, 0x57, 0x83, (ec << 4) | (q << 2) | (itc << 7)); ++ adv7511_wr_and_or(sd, 0x59, 0x0f, (yq << 6) | (cn << 4)); ++ adv7511_wr_and_or(sd, 0x4a, 0xff, 1); ++ adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl); ++ ++ return 0; ++} ++ ++static const struct v4l2_subdev_pad_ops adv7511_pad_ops = { ++ .get_edid = adv7511_get_edid, ++ .enum_mbus_code = adv7511_enum_mbus_code, ++ .get_fmt = adv7511_get_fmt, ++ .set_fmt = adv7511_set_fmt, ++ .enum_dv_timings = adv7511_enum_dv_timings, ++ .dv_timings_cap = adv7511_dv_timings_cap, ++}; ++ ++/* --------------------- SUBDEV OPS --------------------------------------- */ ++ ++static const struct v4l2_subdev_ops adv7511_ops = { ++ .core = &adv7511_core_ops, ++ .pad = &adv7511_pad_ops, ++ .video = &adv7511_video_ops, ++ .audio = &adv7511_audio_ops, ++}; ++ ++/* ----------------------------------------------------------------------- */ ++static void adv7511_dbg_dump_edid(int lvl, int debug, struct v4l2_subdev *sd, int segment, u8 *buf) ++{ ++ if (debug >= lvl) { ++ int i, j; ++ v4l2_dbg(lvl, debug, sd, "edid segment %d\n", segment); ++ for (i = 0; i < 256; i += 16) { ++ u8 b[128]; ++ u8 *bp = b; ++ if (i == 128) ++ v4l2_dbg(lvl, debug, sd, "\n"); ++ for (j = i; j < i + 16; j++) { ++ sprintf(bp, "0x%02x, ", buf[j]); ++ bp += 6; ++ } ++ bp[0] = '\0'; ++ v4l2_dbg(lvl, debug, sd, "%s\n", b); ++ } ++ } ++} ++ ++static void adv7511_notify_no_edid(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ struct adv7511_edid_detect ed; ++ ++ /* We failed to read the EDID, so send an event for this. */ ++ ed.present = false; ++ ed.segment = adv7511_rd(sd, 0xc4); ++ ed.phys_addr = CEC_PHYS_ADDR_INVALID; ++ cec_s_phys_addr(state->cec_adap, ed.phys_addr, false); ++ v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed); ++ v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x0); ++} ++ ++static void adv7511_edid_handler(struct work_struct *work) ++{ ++ struct delayed_work *dwork = to_delayed_work(work); ++ struct adv7511_state *state = container_of(dwork, struct adv7511_state, edid_handler); ++ struct v4l2_subdev *sd = &state->sd; ++ ++ v4l2_dbg(1, debug, sd, "%s:\n", __func__); ++ ++ if (adv7511_check_edid_status(sd)) { ++ /* Return if we received the EDID. */ ++ return; ++ } ++ ++ if (adv7511_have_hotplug(sd)) { ++ /* We must retry reading the EDID several times, it is possible ++ * that initially the EDID couldn't be read due to i2c errors ++ * (DVI connectors are particularly prone to this problem). */ ++ if (state->edid.read_retries) { ++ state->edid.read_retries--; ++ v4l2_dbg(1, debug, sd, "%s: edid read failed\n", __func__); ++ state->have_monitor = false; ++ adv7511_s_power(sd, false); ++ adv7511_s_power(sd, true); ++ queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); ++ return; ++ } ++ } ++ ++ /* We failed to read the EDID, so send an event for this. */ ++ adv7511_notify_no_edid(sd); ++ v4l2_dbg(1, debug, sd, "%s: no edid found\n", __func__); ++} ++ ++static void adv7511_audio_setup(struct v4l2_subdev *sd) ++{ ++ v4l2_dbg(1, debug, sd, "%s\n", __func__); ++ ++ adv7511_s_i2s_clock_freq(sd, 48000); ++ adv7511_s_clock_freq(sd, 48000); ++ adv7511_s_routing(sd, 0, 0, 0); ++} ++ ++/* Configure hdmi transmitter. 
*/ ++static void adv7511_setup(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ v4l2_dbg(1, debug, sd, "%s\n", __func__); ++ ++ /* Input format: RGB 4:4:4 */ ++ adv7511_wr_and_or(sd, 0x15, 0xf0, 0x0); ++ /* Output format: RGB 4:4:4 */ ++ adv7511_wr_and_or(sd, 0x16, 0x7f, 0x0); ++ /* 1st order interpolation 4:2:2 -> 4:4:4 up conversion, Aspect ratio: 16:9 */ ++ adv7511_wr_and_or(sd, 0x17, 0xf9, 0x06); ++ /* Disable pixel repetition */ ++ adv7511_wr_and_or(sd, 0x3b, 0x9f, 0x0); ++ /* Disable CSC */ ++ adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0); ++ /* Output format: RGB 4:4:4, Active Format Information is valid, ++ * underscanned */ ++ adv7511_wr_and_or(sd, 0x55, 0x9c, 0x12); ++ /* AVI Info frame packet enable, Audio Info frame disable */ ++ adv7511_wr_and_or(sd, 0x44, 0xe7, 0x10); ++ /* Colorimetry, Active format aspect ratio: same as picure. */ ++ adv7511_wr(sd, 0x56, 0xa8); ++ /* No encryption */ ++ adv7511_wr_and_or(sd, 0xaf, 0xed, 0x0); ++ ++ /* Positive clk edge capture for input video clock */ ++ adv7511_wr_and_or(sd, 0xba, 0x1f, 0x60); ++ ++ adv7511_audio_setup(sd); ++ ++ v4l2_ctrl_handler_setup(&state->hdl); ++} ++ ++static void adv7511_notify_monitor_detect(struct v4l2_subdev *sd) ++{ ++ struct adv7511_monitor_detect mdt; ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ mdt.present = state->have_monitor; ++ v4l2_subdev_notify(sd, ADV7511_MONITOR_DETECT, (void *)&mdt); ++} ++ ++static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ /* read hotplug and rx-sense state */ ++ u8 status = adv7511_rd(sd, 0x42); ++ ++ v4l2_dbg(1, debug, sd, "%s: status: 0x%x%s%s\n", ++ __func__, ++ status, ++ status & MASK_ADV7511_HPD_DETECT ? ", hotplug" : "", ++ status & MASK_ADV7511_MSEN_DETECT ? ", rx-sense" : ""); ++ ++ /* update read only ctrls */ ++ v4l2_ctrl_s_ctrl(state->hotplug_ctrl, adv7511_have_hotplug(sd) ? 0x1 : 0x0); ++ v4l2_ctrl_s_ctrl(state->rx_sense_ctrl, adv7511_have_rx_sense(sd) ? 
0x1 : 0x0); ++ ++ if ((status & MASK_ADV7511_HPD_DETECT) && ((status & MASK_ADV7511_MSEN_DETECT) || state->edid.segments)) { ++ v4l2_dbg(1, debug, sd, "%s: hotplug and (rx-sense or edid)\n", __func__); ++ if (!state->have_monitor) { ++ v4l2_dbg(1, debug, sd, "%s: monitor detected\n", __func__); ++ state->have_monitor = true; ++ adv7511_set_isr(sd, true); ++ if (!adv7511_s_power(sd, true)) { ++ v4l2_dbg(1, debug, sd, "%s: monitor detected, powerup failed\n", __func__); ++ return; ++ } ++ adv7511_setup(sd); ++ adv7511_notify_monitor_detect(sd); ++ state->edid.read_retries = EDID_MAX_RETRIES; ++ queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); ++ } ++ } else if (status & MASK_ADV7511_HPD_DETECT) { ++ v4l2_dbg(1, debug, sd, "%s: hotplug detected\n", __func__); ++ state->edid.read_retries = EDID_MAX_RETRIES; ++ queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); ++ } else if (!(status & MASK_ADV7511_HPD_DETECT)) { ++ v4l2_dbg(1, debug, sd, "%s: hotplug not detected\n", __func__); ++ if (state->have_monitor) { ++ v4l2_dbg(1, debug, sd, "%s: monitor not detected\n", __func__); ++ state->have_monitor = false; ++ adv7511_notify_monitor_detect(sd); ++ } ++ adv7511_s_power(sd, false); ++ memset(&state->edid, 0, sizeof(struct adv7511_state_edid)); ++ adv7511_notify_no_edid(sd); ++ } ++} ++ ++static bool edid_block_verify_crc(u8 *edid_block) ++{ ++ u8 sum = 0; ++ int i; ++ ++ for (i = 0; i < 128; i++) ++ sum += edid_block[i]; ++ return sum == 0; ++} ++ ++static bool edid_verify_crc(struct v4l2_subdev *sd, u32 segment) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ u32 blocks = state->edid.blocks; ++ u8 *data = state->edid.data; ++ ++ if (!edid_block_verify_crc(&data[segment * 256])) ++ return false; ++ if ((segment + 1) * 2 <= blocks) ++ return edid_block_verify_crc(&data[segment * 256 + 128]); ++ return true; ++} ++ ++static bool edid_verify_header(struct v4l2_subdev *sd, u32 segment) ++{ ++ static const u8 hdmi_header[] = { ++ 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 ++ }; ++ struct adv7511_state *state = get_adv7511_state(sd); ++ u8 *data = state->edid.data; ++ ++ if (segment != 0) ++ return true; ++ return !memcmp(data, hdmi_header, sizeof(hdmi_header)); ++} ++ ++static bool adv7511_check_edid_status(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ u8 edidRdy = adv7511_rd(sd, 0xc5); ++ ++ v4l2_dbg(1, debug, sd, "%s: edid ready (retries: %d)\n", ++ __func__, EDID_MAX_RETRIES - state->edid.read_retries); ++ ++ if (state->edid.complete) ++ return true; ++ ++ if (edidRdy & MASK_ADV7511_EDID_RDY) { ++ int segment = adv7511_rd(sd, 0xc4); ++ struct adv7511_edid_detect ed; ++ ++ if (segment >= EDID_MAX_SEGM) { ++ v4l2_err(sd, "edid segment number too big\n"); ++ return false; ++ } ++ v4l2_dbg(1, debug, sd, "%s: got segment %d\n", __func__, segment); ++ adv7511_edid_rd(sd, 256, &state->edid.data[segment * 256]); ++ adv7511_dbg_dump_edid(2, debug, sd, segment, &state->edid.data[segment * 256]); ++ if (segment == 0) { ++ state->edid.blocks = state->edid.data[0x7e] + 1; ++ v4l2_dbg(1, debug, sd, "%s: %d blocks in total\n", __func__, state->edid.blocks); ++ } ++ if (!edid_verify_crc(sd, segment) || ++ !edid_verify_header(sd, segment)) { ++ /* edid crc error, force reread of edid segment */ ++ v4l2_err(sd, "%s: edid crc or header error\n", __func__); ++ state->have_monitor = false; ++ adv7511_s_power(sd, false); ++ adv7511_s_power(sd, true); ++ return false; ++ } ++ /* one more segment read ok */ ++ 
state->edid.segments = segment + 1; ++ v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x1); ++ if (((state->edid.data[0x7e] >> 1) + 1) > state->edid.segments) { ++ /* Request next EDID segment */ ++ v4l2_dbg(1, debug, sd, "%s: request segment %d\n", __func__, state->edid.segments); ++ adv7511_wr(sd, 0xc9, 0xf); ++ adv7511_wr(sd, 0xc4, state->edid.segments); ++ state->edid.read_retries = EDID_MAX_RETRIES; ++ queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); ++ return false; ++ } ++ ++ v4l2_dbg(1, debug, sd, "%s: edid complete with %d segment(s)\n", __func__, state->edid.segments); ++ state->edid.complete = true; ++ ed.phys_addr = cec_get_edid_phys_addr(state->edid.data, ++ state->edid.segments * 256, ++ NULL); ++ /* report when we have all segments ++ but report only for segment 0 ++ */ ++ ed.present = true; ++ ed.segment = 0; ++ state->edid_detect_counter++; ++ cec_s_phys_addr(state->cec_adap, ed.phys_addr, false); ++ v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed); ++ return ed.present; ++ } ++ ++ return false; ++} ++ ++static int adv7511_registered(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ int err; ++ ++ err = cec_register_adapter(state->cec_adap); ++ if (err) ++ cec_delete_adapter(state->cec_adap); ++ return err; ++} ++ ++static void adv7511_unregistered(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ cec_unregister_adapter(state->cec_adap); ++} ++ ++static const struct v4l2_subdev_internal_ops adv7511_int_ops = { ++ .registered = adv7511_registered, ++ .unregistered = adv7511_unregistered, ++}; ++ ++/* ----------------------------------------------------------------------- */ ++/* Setup ADV7511 */ ++static void adv7511_init_setup(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ struct adv7511_state_edid *edid = &state->edid; ++ u32 cec_clk = state->pdata.cec_clk; ++ u8 ratio; ++ ++ v4l2_dbg(1, debug, sd, "%s\n", __func__); ++ ++ /* clear all interrupts */ ++ adv7511_wr(sd, 0x96, 0xff); ++ adv7511_wr(sd, 0x97, 0xff); ++ /* ++ * Stop HPD from resetting a lot of registers. ++ * It might leave the chip in a partly un-initialized state, ++ * in particular with regards to hotplug bounces. 
++ */ ++ adv7511_wr_and_or(sd, 0xd6, 0x3f, 0xc0); ++ memset(edid, 0, sizeof(struct adv7511_state_edid)); ++ state->have_monitor = false; ++ adv7511_set_isr(sd, false); ++ adv7511_s_stream(sd, false); ++ adv7511_s_audio_stream(sd, false); ++ ++ if (state->i2c_cec == NULL) ++ return; ++ ++ v4l2_dbg(1, debug, sd, "%s: cec_clk %d\n", __func__, cec_clk); ++ ++ /* cec soft reset */ ++ adv7511_cec_write(sd, 0x50, 0x01); ++ adv7511_cec_write(sd, 0x50, 0x00); ++ ++ /* legacy mode */ ++ adv7511_cec_write(sd, 0x4a, 0x00); ++ ++ if (cec_clk % 750000 != 0) ++ v4l2_err(sd, "%s: cec_clk %d, not multiple of 750 Khz\n", ++ __func__, cec_clk); ++ ++ ratio = (cec_clk / 750000) - 1; ++ adv7511_cec_write(sd, 0x4e, ratio << 2); ++} ++ ++static int adv7511_probe(struct i2c_client *client, const struct i2c_device_id *id) ++{ ++ struct adv7511_state *state; ++ struct adv7511_platform_data *pdata = client->dev.platform_data; ++ struct v4l2_ctrl_handler *hdl; ++ struct v4l2_subdev *sd; ++ u8 chip_id[2]; ++ int err = -EIO; ++ ++ /* Check if the adapter supports the needed features */ ++ if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA)) ++ return -EIO; ++ ++ state = devm_kzalloc(&client->dev, sizeof(struct adv7511_state), GFP_KERNEL); ++ if (!state) ++ return -ENOMEM; ++ ++ /* Platform data */ ++ if (!pdata) { ++ v4l_err(client, "No platform data!\n"); ++ return -ENODEV; ++ } ++ memcpy(&state->pdata, pdata, sizeof(state->pdata)); ++ state->fmt_code = MEDIA_BUS_FMT_RGB888_1X24; ++ state->colorspace = V4L2_COLORSPACE_SRGB; ++ ++ sd = &state->sd; ++ ++ v4l2_dbg(1, debug, sd, "detecting adv7511 client on address 0x%x\n", ++ client->addr << 1); ++ ++ v4l2_i2c_subdev_init(sd, client, &adv7511_ops); ++ sd->internal_ops = &adv7511_int_ops; ++ ++ hdl = &state->hdl; ++ v4l2_ctrl_handler_init(hdl, 10); ++ /* add in ascending ID order */ ++ state->hdmi_mode_ctrl = v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops, ++ V4L2_CID_DV_TX_MODE, V4L2_DV_TX_MODE_HDMI, ++ 0, V4L2_DV_TX_MODE_DVI_D); ++ state->hotplug_ctrl = v4l2_ctrl_new_std(hdl, NULL, ++ V4L2_CID_DV_TX_HOTPLUG, 0, 1, 0, 0); ++ state->rx_sense_ctrl = v4l2_ctrl_new_std(hdl, NULL, ++ V4L2_CID_DV_TX_RXSENSE, 0, 1, 0, 0); ++ state->have_edid0_ctrl = v4l2_ctrl_new_std(hdl, NULL, ++ V4L2_CID_DV_TX_EDID_PRESENT, 0, 1, 0, 0); ++ state->rgb_quantization_range_ctrl = ++ v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops, ++ V4L2_CID_DV_TX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL, ++ 0, V4L2_DV_RGB_RANGE_AUTO); ++ state->content_type_ctrl = ++ v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops, ++ V4L2_CID_DV_TX_IT_CONTENT_TYPE, V4L2_DV_IT_CONTENT_TYPE_NO_ITC, ++ 0, V4L2_DV_IT_CONTENT_TYPE_NO_ITC); ++ sd->ctrl_handler = hdl; ++ if (hdl->error) { ++ err = hdl->error; ++ goto err_hdl; ++ } ++ state->pad.flags = MEDIA_PAD_FL_SINK; ++ err = media_entity_pads_init(&sd->entity, 1, &state->pad); ++ if (err) ++ goto err_hdl; ++ ++ /* EDID and CEC i2c addr */ ++ state->i2c_edid_addr = state->pdata.i2c_edid << 1; ++ state->i2c_cec_addr = state->pdata.i2c_cec << 1; ++ state->i2c_pktmem_addr = state->pdata.i2c_pktmem << 1; ++ ++ state->chip_revision = adv7511_rd(sd, 0x0); ++ chip_id[0] = adv7511_rd(sd, 0xf5); ++ chip_id[1] = adv7511_rd(sd, 0xf6); ++ if (chip_id[0] != 0x75 || chip_id[1] != 0x11) { ++ v4l2_err(sd, "chip_id != 0x7511, read 0x%02x%02x\n", chip_id[0], ++ chip_id[1]); ++ err = -EIO; ++ goto err_entity; ++ } ++ ++ state->i2c_edid = i2c_new_dummy(client->adapter, ++ state->i2c_edid_addr >> 1); ++ if (state->i2c_edid == NULL) { ++ v4l2_err(sd, "failed to register edid i2c client\n"); ++ 
err = -ENOMEM; ++ goto err_entity; ++ } ++ ++ adv7511_wr(sd, 0xe1, state->i2c_cec_addr); ++ if (state->pdata.cec_clk < 3000000 || ++ state->pdata.cec_clk > 100000000) { ++ v4l2_err(sd, "%s: cec_clk %u outside range, disabling cec\n", ++ __func__, state->pdata.cec_clk); ++ state->pdata.cec_clk = 0; ++ } ++ ++ if (state->pdata.cec_clk) { ++ state->i2c_cec = i2c_new_dummy(client->adapter, ++ state->i2c_cec_addr >> 1); ++ if (state->i2c_cec == NULL) { ++ v4l2_err(sd, "failed to register cec i2c client\n"); ++ err = -ENOMEM; ++ goto err_unreg_edid; ++ } ++ adv7511_wr(sd, 0xe2, 0x00); /* power up cec section */ ++ } else { ++ adv7511_wr(sd, 0xe2, 0x01); /* power down cec section */ ++ } ++ ++ state->i2c_pktmem = i2c_new_dummy(client->adapter, state->i2c_pktmem_addr >> 1); ++ if (state->i2c_pktmem == NULL) { ++ v4l2_err(sd, "failed to register pktmem i2c client\n"); ++ err = -ENOMEM; ++ goto err_unreg_cec; ++ } ++ ++ state->work_queue = create_singlethread_workqueue(sd->name); ++ if (state->work_queue == NULL) { ++ v4l2_err(sd, "could not create workqueue\n"); ++ err = -ENOMEM; ++ goto err_unreg_pktmem; ++ } ++ ++ INIT_DELAYED_WORK(&state->edid_handler, adv7511_edid_handler); ++ ++ adv7511_init_setup(sd); ++ ++#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC) ++ state->cec_adap = cec_allocate_adapter(&adv7511_cec_adap_ops, ++ state, dev_name(&client->dev), CEC_CAP_TRANSMIT | ++ CEC_CAP_LOG_ADDRS | CEC_CAP_PASSTHROUGH | CEC_CAP_RC, ++ ADV7511_MAX_ADDRS, &client->dev); ++ err = PTR_ERR_OR_ZERO(state->cec_adap); ++ if (err) { ++ destroy_workqueue(state->work_queue); ++ goto err_unreg_pktmem; ++ } ++#endif ++ ++ adv7511_set_isr(sd, true); ++ adv7511_check_monitor_present_status(sd); ++ ++ v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name, ++ client->addr << 1, client->adapter->name); ++ return 0; ++ ++err_unreg_pktmem: ++ i2c_unregister_device(state->i2c_pktmem); ++err_unreg_cec: ++ if (state->i2c_cec) ++ i2c_unregister_device(state->i2c_cec); ++err_unreg_edid: ++ i2c_unregister_device(state->i2c_edid); ++err_entity: ++ media_entity_cleanup(&sd->entity); ++err_hdl: ++ v4l2_ctrl_handler_free(&state->hdl); ++ return err; ++} ++ ++/* ----------------------------------------------------------------------- */ ++ ++static int adv7511_remove(struct i2c_client *client) ++{ ++ struct v4l2_subdev *sd = i2c_get_clientdata(client); ++ struct adv7511_state *state = get_adv7511_state(sd); ++ ++ state->chip_revision = -1; ++ ++ v4l2_dbg(1, debug, sd, "%s removed @ 0x%x (%s)\n", client->name, ++ client->addr << 1, client->adapter->name); ++ ++ adv7511_set_isr(sd, false); ++ adv7511_init_setup(sd); ++ cancel_delayed_work(&state->edid_handler); ++ i2c_unregister_device(state->i2c_edid); ++ if (state->i2c_cec) ++ i2c_unregister_device(state->i2c_cec); ++ i2c_unregister_device(state->i2c_pktmem); ++ destroy_workqueue(state->work_queue); ++ v4l2_device_unregister_subdev(sd); ++ media_entity_cleanup(&sd->entity); ++ v4l2_ctrl_handler_free(sd->ctrl_handler); ++ return 0; ++} ++ ++/* ----------------------------------------------------------------------- */ ++ ++static struct i2c_device_id adv7511_id[] = { ++ { "adv7511", 0 }, ++ { } ++}; ++MODULE_DEVICE_TABLE(i2c, adv7511_id); ++ ++static struct i2c_driver adv7511_driver = { ++ .driver = { ++ .name = "adv7511", ++ }, ++ .probe = adv7511_probe, ++ .remove = adv7511_remove, ++ .id_table = adv7511_id, ++}; ++ ++module_i2c_driver(adv7511_driver); +diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c +deleted file mode 100644 +index 5f1c8ee8a50e..000000000000 +--- 
a/drivers/media/i2c/adv7511.c ++++ /dev/null +@@ -1,2003 +0,0 @@ +-/* +- * Analog Devices ADV7511 HDMI Transmitter Device Driver +- * +- * Copyright 2013 Cisco Systems, Inc. and/or its affiliates. All rights reserved. +- * +- * This program is free software; you may redistribute it and/or modify +- * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation; version 2 of the License. +- * +- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS +- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN +- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +- * SOFTWARE. +- */ +- +- +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +- +-static int debug; +-module_param(debug, int, 0644); +-MODULE_PARM_DESC(debug, "debug level (0-2)"); +- +-MODULE_DESCRIPTION("Analog Devices ADV7511 HDMI Transmitter Device Driver"); +-MODULE_AUTHOR("Hans Verkuil"); +-MODULE_LICENSE("GPL v2"); +- +-#define MASK_ADV7511_EDID_RDY_INT 0x04 +-#define MASK_ADV7511_MSEN_INT 0x40 +-#define MASK_ADV7511_HPD_INT 0x80 +- +-#define MASK_ADV7511_HPD_DETECT 0x40 +-#define MASK_ADV7511_MSEN_DETECT 0x20 +-#define MASK_ADV7511_EDID_RDY 0x10 +- +-#define EDID_MAX_RETRIES (8) +-#define EDID_DELAY 250 +-#define EDID_MAX_SEGM 8 +- +-#define ADV7511_MAX_WIDTH 1920 +-#define ADV7511_MAX_HEIGHT 1200 +-#define ADV7511_MIN_PIXELCLOCK 20000000 +-#define ADV7511_MAX_PIXELCLOCK 225000000 +- +-#define ADV7511_MAX_ADDRS (3) +- +-/* +-********************************************************************** +-* +-* Arrays with configuration parameters for the ADV7511 +-* +-********************************************************************** +-*/ +- +-struct i2c_reg_value { +- unsigned char reg; +- unsigned char value; +-}; +- +-struct adv7511_state_edid { +- /* total number of blocks */ +- u32 blocks; +- /* Number of segments read */ +- u32 segments; +- u8 data[EDID_MAX_SEGM * 256]; +- /* Number of EDID read retries left */ +- unsigned read_retries; +- bool complete; +-}; +- +-struct adv7511_state { +- struct adv7511_platform_data pdata; +- struct v4l2_subdev sd; +- struct media_pad pad; +- struct v4l2_ctrl_handler hdl; +- int chip_revision; +- u8 i2c_edid_addr; +- u8 i2c_pktmem_addr; +- u8 i2c_cec_addr; +- +- struct i2c_client *i2c_cec; +- struct cec_adapter *cec_adap; +- u8 cec_addr[ADV7511_MAX_ADDRS]; +- u8 cec_valid_addrs; +- bool cec_enabled_adap; +- +- /* Is the adv7511 powered on? */ +- bool power_on; +- /* Did we receive hotplug and rx-sense signals? 
*/ +- bool have_monitor; +- bool enabled_irq; +- /* timings from s_dv_timings */ +- struct v4l2_dv_timings dv_timings; +- u32 fmt_code; +- u32 colorspace; +- u32 ycbcr_enc; +- u32 quantization; +- u32 xfer_func; +- u32 content_type; +- /* controls */ +- struct v4l2_ctrl *hdmi_mode_ctrl; +- struct v4l2_ctrl *hotplug_ctrl; +- struct v4l2_ctrl *rx_sense_ctrl; +- struct v4l2_ctrl *have_edid0_ctrl; +- struct v4l2_ctrl *rgb_quantization_range_ctrl; +- struct v4l2_ctrl *content_type_ctrl; +- struct i2c_client *i2c_edid; +- struct i2c_client *i2c_pktmem; +- struct adv7511_state_edid edid; +- /* Running counter of the number of detected EDIDs (for debugging) */ +- unsigned edid_detect_counter; +- struct workqueue_struct *work_queue; +- struct delayed_work edid_handler; /* work entry */ +-}; +- +-static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd); +-static bool adv7511_check_edid_status(struct v4l2_subdev *sd); +-static void adv7511_setup(struct v4l2_subdev *sd); +-static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq); +-static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq); +- +- +-static const struct v4l2_dv_timings_cap adv7511_timings_cap = { +- .type = V4L2_DV_BT_656_1120, +- /* keep this initialization for compatibility with GCC < 4.4.6 */ +- .reserved = { 0 }, +- V4L2_INIT_BT_TIMINGS(640, ADV7511_MAX_WIDTH, 350, ADV7511_MAX_HEIGHT, +- ADV7511_MIN_PIXELCLOCK, ADV7511_MAX_PIXELCLOCK, +- V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT | +- V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT, +- V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING | +- V4L2_DV_BT_CAP_CUSTOM) +-}; +- +-static inline struct adv7511_state *get_adv7511_state(struct v4l2_subdev *sd) +-{ +- return container_of(sd, struct adv7511_state, sd); +-} +- +-static inline struct v4l2_subdev *to_sd(struct v4l2_ctrl *ctrl) +-{ +- return &container_of(ctrl->handler, struct adv7511_state, hdl)->sd; +-} +- +-/* ------------------------ I2C ----------------------------------------------- */ +- +-static s32 adv_smbus_read_byte_data_check(struct i2c_client *client, +- u8 command, bool check) +-{ +- union i2c_smbus_data data; +- +- if (!i2c_smbus_xfer(client->adapter, client->addr, client->flags, +- I2C_SMBUS_READ, command, +- I2C_SMBUS_BYTE_DATA, &data)) +- return data.byte; +- if (check) +- v4l_err(client, "error reading %02x, %02x\n", +- client->addr, command); +- return -1; +-} +- +-static s32 adv_smbus_read_byte_data(struct i2c_client *client, u8 command) +-{ +- int i; +- for (i = 0; i < 3; i++) { +- int ret = adv_smbus_read_byte_data_check(client, command, true); +- if (ret >= 0) { +- if (i) +- v4l_err(client, "read ok after %d retries\n", i); +- return ret; +- } +- } +- v4l_err(client, "read failed\n"); +- return -1; +-} +- +-static int adv7511_rd(struct v4l2_subdev *sd, u8 reg) +-{ +- struct i2c_client *client = v4l2_get_subdevdata(sd); +- +- return adv_smbus_read_byte_data(client, reg); +-} +- +-static int adv7511_wr(struct v4l2_subdev *sd, u8 reg, u8 val) +-{ +- struct i2c_client *client = v4l2_get_subdevdata(sd); +- int ret; +- int i; +- +- for (i = 0; i < 3; i++) { +- ret = i2c_smbus_write_byte_data(client, reg, val); +- if (ret == 0) +- return 0; +- } +- v4l2_err(sd, "%s: i2c write error\n", __func__); +- return ret; +-} +- +-/* To set specific bits in the register, a clear-mask is given (to be AND-ed), +- and then the value-mask (to be OR-ed). 
*/ +-static inline void adv7511_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask) +-{ +- adv7511_wr(sd, reg, (adv7511_rd(sd, reg) & clr_mask) | val_mask); +-} +- +-static int adv_smbus_read_i2c_block_data(struct i2c_client *client, +- u8 command, unsigned length, u8 *values) +-{ +- union i2c_smbus_data data; +- int ret; +- +- if (length > I2C_SMBUS_BLOCK_MAX) +- length = I2C_SMBUS_BLOCK_MAX; +- data.block[0] = length; +- +- ret = i2c_smbus_xfer(client->adapter, client->addr, client->flags, +- I2C_SMBUS_READ, command, +- I2C_SMBUS_I2C_BLOCK_DATA, &data); +- memcpy(values, data.block + 1, length); +- return ret; +-} +- +-static void adv7511_edid_rd(struct v4l2_subdev *sd, uint16_t len, uint8_t *buf) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- int i; +- int err = 0; +- +- v4l2_dbg(1, debug, sd, "%s:\n", __func__); +- +- for (i = 0; !err && i < len; i += I2C_SMBUS_BLOCK_MAX) +- err = adv_smbus_read_i2c_block_data(state->i2c_edid, i, +- I2C_SMBUS_BLOCK_MAX, buf + i); +- if (err) +- v4l2_err(sd, "%s: i2c read error\n", __func__); +-} +- +-static inline int adv7511_cec_read(struct v4l2_subdev *sd, u8 reg) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- return i2c_smbus_read_byte_data(state->i2c_cec, reg); +-} +- +-static int adv7511_cec_write(struct v4l2_subdev *sd, u8 reg, u8 val) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- int ret; +- int i; +- +- for (i = 0; i < 3; i++) { +- ret = i2c_smbus_write_byte_data(state->i2c_cec, reg, val); +- if (ret == 0) +- return 0; +- } +- v4l2_err(sd, "%s: I2C Write Problem\n", __func__); +- return ret; +-} +- +-static inline int adv7511_cec_write_and_or(struct v4l2_subdev *sd, u8 reg, u8 mask, +- u8 val) +-{ +- return adv7511_cec_write(sd, reg, (adv7511_cec_read(sd, reg) & mask) | val); +-} +- +-static int adv7511_pktmem_rd(struct v4l2_subdev *sd, u8 reg) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- return adv_smbus_read_byte_data(state->i2c_pktmem, reg); +-} +- +-static int adv7511_pktmem_wr(struct v4l2_subdev *sd, u8 reg, u8 val) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- int ret; +- int i; +- +- for (i = 0; i < 3; i++) { +- ret = i2c_smbus_write_byte_data(state->i2c_pktmem, reg, val); +- if (ret == 0) +- return 0; +- } +- v4l2_err(sd, "%s: i2c write error\n", __func__); +- return ret; +-} +- +-/* To set specific bits in the register, a clear-mask is given (to be AND-ed), +- and then the value-mask (to be OR-ed). 
*/ +-static inline void adv7511_pktmem_wr_and_or(struct v4l2_subdev *sd, u8 reg, u8 clr_mask, u8 val_mask) +-{ +- adv7511_pktmem_wr(sd, reg, (adv7511_pktmem_rd(sd, reg) & clr_mask) | val_mask); +-} +- +-static inline bool adv7511_have_hotplug(struct v4l2_subdev *sd) +-{ +- return adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT; +-} +- +-static inline bool adv7511_have_rx_sense(struct v4l2_subdev *sd) +-{ +- return adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT; +-} +- +-static void adv7511_csc_conversion_mode(struct v4l2_subdev *sd, u8 mode) +-{ +- adv7511_wr_and_or(sd, 0x18, 0x9f, (mode & 0x3)<<5); +-} +- +-static void adv7511_csc_coeff(struct v4l2_subdev *sd, +- u16 A1, u16 A2, u16 A3, u16 A4, +- u16 B1, u16 B2, u16 B3, u16 B4, +- u16 C1, u16 C2, u16 C3, u16 C4) +-{ +- /* A */ +- adv7511_wr_and_or(sd, 0x18, 0xe0, A1>>8); +- adv7511_wr(sd, 0x19, A1); +- adv7511_wr_and_or(sd, 0x1A, 0xe0, A2>>8); +- adv7511_wr(sd, 0x1B, A2); +- adv7511_wr_and_or(sd, 0x1c, 0xe0, A3>>8); +- adv7511_wr(sd, 0x1d, A3); +- adv7511_wr_and_or(sd, 0x1e, 0xe0, A4>>8); +- adv7511_wr(sd, 0x1f, A4); +- +- /* B */ +- adv7511_wr_and_or(sd, 0x20, 0xe0, B1>>8); +- adv7511_wr(sd, 0x21, B1); +- adv7511_wr_and_or(sd, 0x22, 0xe0, B2>>8); +- adv7511_wr(sd, 0x23, B2); +- adv7511_wr_and_or(sd, 0x24, 0xe0, B3>>8); +- adv7511_wr(sd, 0x25, B3); +- adv7511_wr_and_or(sd, 0x26, 0xe0, B4>>8); +- adv7511_wr(sd, 0x27, B4); +- +- /* C */ +- adv7511_wr_and_or(sd, 0x28, 0xe0, C1>>8); +- adv7511_wr(sd, 0x29, C1); +- adv7511_wr_and_or(sd, 0x2A, 0xe0, C2>>8); +- adv7511_wr(sd, 0x2B, C2); +- adv7511_wr_and_or(sd, 0x2C, 0xe0, C3>>8); +- adv7511_wr(sd, 0x2D, C3); +- adv7511_wr_and_or(sd, 0x2E, 0xe0, C4>>8); +- adv7511_wr(sd, 0x2F, C4); +-} +- +-static void adv7511_csc_rgb_full2limit(struct v4l2_subdev *sd, bool enable) +-{ +- if (enable) { +- u8 csc_mode = 0; +- adv7511_csc_conversion_mode(sd, csc_mode); +- adv7511_csc_coeff(sd, +- 4096-564, 0, 0, 256, +- 0, 4096-564, 0, 256, +- 0, 0, 4096-564, 256); +- /* enable CSC */ +- adv7511_wr_and_or(sd, 0x18, 0x7f, 0x80); +- /* AVI infoframe: Limited range RGB (16-235) */ +- adv7511_wr_and_or(sd, 0x57, 0xf3, 0x04); +- } else { +- /* disable CSC */ +- adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0); +- /* AVI infoframe: Full range RGB (0-255) */ +- adv7511_wr_and_or(sd, 0x57, 0xf3, 0x08); +- } +-} +- +-static void adv7511_set_rgb_quantization_mode(struct v4l2_subdev *sd, struct v4l2_ctrl *ctrl) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- /* Only makes sense for RGB formats */ +- if (state->fmt_code != MEDIA_BUS_FMT_RGB888_1X24) { +- /* so just keep quantization */ +- adv7511_csc_rgb_full2limit(sd, false); +- return; +- } +- +- switch (ctrl->val) { +- case V4L2_DV_RGB_RANGE_AUTO: +- /* automatic */ +- if (state->dv_timings.bt.flags & V4L2_DV_FL_IS_CE_VIDEO) { +- /* CE format, RGB limited range (16-235) */ +- adv7511_csc_rgb_full2limit(sd, true); +- } else { +- /* not CE format, RGB full range (0-255) */ +- adv7511_csc_rgb_full2limit(sd, false); +- } +- break; +- case V4L2_DV_RGB_RANGE_LIMITED: +- /* RGB limited range (16-235) */ +- adv7511_csc_rgb_full2limit(sd, true); +- break; +- case V4L2_DV_RGB_RANGE_FULL: +- /* RGB full range (0-255) */ +- adv7511_csc_rgb_full2limit(sd, false); +- break; +- } +-} +- +-/* ------------------------------ CTRL OPS ------------------------------ */ +- +-static int adv7511_s_ctrl(struct v4l2_ctrl *ctrl) +-{ +- struct v4l2_subdev *sd = to_sd(ctrl); +- struct adv7511_state *state = get_adv7511_state(sd); +- +- v4l2_dbg(1, debug, sd, "%s: ctrl id: %d, ctrl->val 
%d\n", __func__, ctrl->id, ctrl->val); +- +- if (state->hdmi_mode_ctrl == ctrl) { +- /* Set HDMI or DVI-D */ +- adv7511_wr_and_or(sd, 0xaf, 0xfd, ctrl->val == V4L2_DV_TX_MODE_HDMI ? 0x02 : 0x00); +- return 0; +- } +- if (state->rgb_quantization_range_ctrl == ctrl) { +- adv7511_set_rgb_quantization_mode(sd, ctrl); +- return 0; +- } +- if (state->content_type_ctrl == ctrl) { +- u8 itc, cn; +- +- state->content_type = ctrl->val; +- itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC; +- cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS; +- adv7511_wr_and_or(sd, 0x57, 0x7f, itc << 7); +- adv7511_wr_and_or(sd, 0x59, 0xcf, cn << 4); +- return 0; +- } +- +- return -EINVAL; +-} +- +-static const struct v4l2_ctrl_ops adv7511_ctrl_ops = { +- .s_ctrl = adv7511_s_ctrl, +-}; +- +-/* ---------------------------- CORE OPS ------------------------------------------- */ +- +-#ifdef CONFIG_VIDEO_ADV_DEBUG +-static void adv7511_inv_register(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- v4l2_info(sd, "0x000-0x0ff: Main Map\n"); +- if (state->i2c_cec) +- v4l2_info(sd, "0x100-0x1ff: CEC Map\n"); +-} +- +-static int adv7511_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *reg) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- reg->size = 1; +- switch (reg->reg >> 8) { +- case 0: +- reg->val = adv7511_rd(sd, reg->reg & 0xff); +- break; +- case 1: +- if (state->i2c_cec) { +- reg->val = adv7511_cec_read(sd, reg->reg & 0xff); +- break; +- } +- /* fall through */ +- default: +- v4l2_info(sd, "Register %03llx not supported\n", reg->reg); +- adv7511_inv_register(sd); +- break; +- } +- return 0; +-} +- +-static int adv7511_s_register(struct v4l2_subdev *sd, const struct v4l2_dbg_register *reg) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- switch (reg->reg >> 8) { +- case 0: +- adv7511_wr(sd, reg->reg & 0xff, reg->val & 0xff); +- break; +- case 1: +- if (state->i2c_cec) { +- adv7511_cec_write(sd, reg->reg & 0xff, reg->val & 0xff); +- break; +- } +- /* fall through */ +- default: +- v4l2_info(sd, "Register %03llx not supported\n", reg->reg); +- adv7511_inv_register(sd); +- break; +- } +- return 0; +-} +-#endif +- +-struct adv7511_cfg_read_infoframe { +- const char *desc; +- u8 present_reg; +- u8 present_mask; +- u8 header[3]; +- u16 payload_addr; +-}; +- +-static u8 hdmi_infoframe_checksum(u8 *ptr, size_t size) +-{ +- u8 csum = 0; +- size_t i; +- +- /* compute checksum */ +- for (i = 0; i < size; i++) +- csum += ptr[i]; +- +- return 256 - csum; +-} +- +-static void log_infoframe(struct v4l2_subdev *sd, const struct adv7511_cfg_read_infoframe *cri) +-{ +- struct i2c_client *client = v4l2_get_subdevdata(sd); +- struct device *dev = &client->dev; +- union hdmi_infoframe frame; +- u8 buffer[32]; +- u8 len; +- int i; +- +- if (!(adv7511_rd(sd, cri->present_reg) & cri->present_mask)) { +- v4l2_info(sd, "%s infoframe not transmitted\n", cri->desc); +- return; +- } +- +- memcpy(buffer, cri->header, sizeof(cri->header)); +- +- len = buffer[2]; +- +- if (len + 4 > sizeof(buffer)) { +- v4l2_err(sd, "%s: invalid %s infoframe length %d\n", __func__, cri->desc, len); +- return; +- } +- +- if (cri->payload_addr >= 0x100) { +- for (i = 0; i < len; i++) +- buffer[i + 4] = adv7511_pktmem_rd(sd, cri->payload_addr + i - 0x100); +- } else { +- for (i = 0; i < len; i++) +- buffer[i + 4] = adv7511_rd(sd, cri->payload_addr + i); +- } +- buffer[3] = 0; +- buffer[3] = hdmi_infoframe_checksum(buffer, len + 4); +- +- if 
(hdmi_infoframe_unpack(&frame, buffer) < 0) { +- v4l2_err(sd, "%s: unpack of %s infoframe failed\n", __func__, cri->desc); +- return; +- } +- +- hdmi_infoframe_log(KERN_INFO, dev, &frame); +-} +- +-static void adv7511_log_infoframes(struct v4l2_subdev *sd) +-{ +- static const struct adv7511_cfg_read_infoframe cri[] = { +- { "AVI", 0x44, 0x10, { 0x82, 2, 13 }, 0x55 }, +- { "Audio", 0x44, 0x08, { 0x84, 1, 10 }, 0x73 }, +- { "SDP", 0x40, 0x40, { 0x83, 1, 25 }, 0x103 }, +- }; +- int i; +- +- for (i = 0; i < ARRAY_SIZE(cri); i++) +- log_infoframe(sd, &cri[i]); +-} +- +-static int adv7511_log_status(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- struct adv7511_state_edid *edid = &state->edid; +- int i; +- +- static const char * const states[] = { +- "in reset", +- "reading EDID", +- "idle", +- "initializing HDCP", +- "HDCP enabled", +- "initializing HDCP repeater", +- "6", "7", "8", "9", "A", "B", "C", "D", "E", "F" +- }; +- static const char * const errors[] = { +- "no error", +- "bad receiver BKSV", +- "Ri mismatch", +- "Pj mismatch", +- "i2c error", +- "timed out", +- "max repeater cascade exceeded", +- "hash check failed", +- "too many devices", +- "9", "A", "B", "C", "D", "E", "F" +- }; +- +- v4l2_info(sd, "power %s\n", state->power_on ? "on" : "off"); +- v4l2_info(sd, "%s hotplug, %s Rx Sense, %s EDID (%d block(s))\n", +- (adv7511_rd(sd, 0x42) & MASK_ADV7511_HPD_DETECT) ? "detected" : "no", +- (adv7511_rd(sd, 0x42) & MASK_ADV7511_MSEN_DETECT) ? "detected" : "no", +- edid->segments ? "found" : "no", +- edid->blocks); +- v4l2_info(sd, "%s output %s\n", +- (adv7511_rd(sd, 0xaf) & 0x02) ? +- "HDMI" : "DVI-D", +- (adv7511_rd(sd, 0xa1) & 0x3c) ? +- "disabled" : "enabled"); +- v4l2_info(sd, "state: %s, error: %s, detect count: %u, msk/irq: %02x/%02x\n", +- states[adv7511_rd(sd, 0xc8) & 0xf], +- errors[adv7511_rd(sd, 0xc8) >> 4], state->edid_detect_counter, +- adv7511_rd(sd, 0x94), adv7511_rd(sd, 0x96)); +- v4l2_info(sd, "RGB quantization: %s range\n", adv7511_rd(sd, 0x18) & 0x80 ? "limited" : "full"); +- if (adv7511_rd(sd, 0xaf) & 0x02) { +- /* HDMI only */ +- u8 manual_cts = adv7511_rd(sd, 0x0a) & 0x80; +- u32 N = (adv7511_rd(sd, 0x01) & 0xf) << 16 | +- adv7511_rd(sd, 0x02) << 8 | +- adv7511_rd(sd, 0x03); +- u8 vic_detect = adv7511_rd(sd, 0x3e) >> 2; +- u8 vic_sent = adv7511_rd(sd, 0x3d) & 0x3f; +- u32 CTS; +- +- if (manual_cts) +- CTS = (adv7511_rd(sd, 0x07) & 0xf) << 16 | +- adv7511_rd(sd, 0x08) << 8 | +- adv7511_rd(sd, 0x09); +- else +- CTS = (adv7511_rd(sd, 0x04) & 0xf) << 16 | +- adv7511_rd(sd, 0x05) << 8 | +- adv7511_rd(sd, 0x06); +- v4l2_info(sd, "CTS %s mode: N %d, CTS %d\n", +- manual_cts ? "manual" : "automatic", N, CTS); +- v4l2_info(sd, "VIC: detected %d, sent %d\n", +- vic_detect, vic_sent); +- adv7511_log_infoframes(sd); +- } +- if (state->dv_timings.type == V4L2_DV_BT_656_1120) +- v4l2_print_dv_timings(sd->name, "timings: ", +- &state->dv_timings, false); +- else +- v4l2_info(sd, "no timings set\n"); +- v4l2_info(sd, "i2c edid addr: 0x%x\n", state->i2c_edid_addr); +- +- if (state->i2c_cec == NULL) +- return 0; +- +- v4l2_info(sd, "i2c cec addr: 0x%x\n", state->i2c_cec_addr); +- +- v4l2_info(sd, "CEC: %s\n", state->cec_enabled_adap ? 
+- "enabled" : "disabled"); +- if (state->cec_enabled_adap) { +- for (i = 0; i < ADV7511_MAX_ADDRS; i++) { +- bool is_valid = state->cec_valid_addrs & (1 << i); +- +- if (is_valid) +- v4l2_info(sd, "CEC Logical Address: 0x%x\n", +- state->cec_addr[i]); +- } +- } +- v4l2_info(sd, "i2c pktmem addr: 0x%x\n", state->i2c_pktmem_addr); +- return 0; +-} +- +-/* Power up/down adv7511 */ +-static int adv7511_s_power(struct v4l2_subdev *sd, int on) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- const int retries = 20; +- int i; +- +- v4l2_dbg(1, debug, sd, "%s: power %s\n", __func__, on ? "on" : "off"); +- +- state->power_on = on; +- +- if (!on) { +- /* Power down */ +- adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40); +- return true; +- } +- +- /* Power up */ +- /* The adv7511 does not always come up immediately. +- Retry multiple times. */ +- for (i = 0; i < retries; i++) { +- adv7511_wr_and_or(sd, 0x41, 0xbf, 0x0); +- if ((adv7511_rd(sd, 0x41) & 0x40) == 0) +- break; +- adv7511_wr_and_or(sd, 0x41, 0xbf, 0x40); +- msleep(10); +- } +- if (i == retries) { +- v4l2_dbg(1, debug, sd, "%s: failed to powerup the adv7511!\n", __func__); +- adv7511_s_power(sd, 0); +- return false; +- } +- if (i > 1) +- v4l2_dbg(1, debug, sd, "%s: needed %d retries to powerup the adv7511\n", __func__, i); +- +- /* Reserved registers that must be set */ +- adv7511_wr(sd, 0x98, 0x03); +- adv7511_wr_and_or(sd, 0x9a, 0xfe, 0x70); +- adv7511_wr(sd, 0x9c, 0x30); +- adv7511_wr_and_or(sd, 0x9d, 0xfc, 0x01); +- adv7511_wr(sd, 0xa2, 0xa4); +- adv7511_wr(sd, 0xa3, 0xa4); +- adv7511_wr(sd, 0xe0, 0xd0); +- adv7511_wr(sd, 0xf9, 0x00); +- +- adv7511_wr(sd, 0x43, state->i2c_edid_addr); +- adv7511_wr(sd, 0x45, state->i2c_pktmem_addr); +- +- /* Set number of attempts to read the EDID */ +- adv7511_wr(sd, 0xc9, 0xf); +- return true; +-} +- +-#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC) +-static int adv7511_cec_adap_enable(struct cec_adapter *adap, bool enable) +-{ +- struct adv7511_state *state = adap->priv; +- struct v4l2_subdev *sd = &state->sd; +- +- if (state->i2c_cec == NULL) +- return -EIO; +- +- if (!state->cec_enabled_adap && enable) { +- /* power up cec section */ +- adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x01); +- /* legacy mode and clear all rx buffers */ +- adv7511_cec_write(sd, 0x4a, 0x07); +- adv7511_cec_write(sd, 0x4a, 0); +- adv7511_cec_write_and_or(sd, 0x11, 0xfe, 0); /* initially disable tx */ +- /* enabled irqs: */ +- /* tx: ready */ +- /* tx: arbitration lost */ +- /* tx: retry timeout */ +- /* rx: ready 1 */ +- if (state->enabled_irq) +- adv7511_wr_and_or(sd, 0x95, 0xc0, 0x39); +- } else if (state->cec_enabled_adap && !enable) { +- if (state->enabled_irq) +- adv7511_wr_and_or(sd, 0x95, 0xc0, 0x00); +- /* disable address mask 1-3 */ +- adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0x00); +- /* power down cec section */ +- adv7511_cec_write_and_or(sd, 0x4e, 0xfc, 0x00); +- state->cec_valid_addrs = 0; +- } +- state->cec_enabled_adap = enable; +- return 0; +-} +- +-static int adv7511_cec_adap_log_addr(struct cec_adapter *adap, u8 addr) +-{ +- struct adv7511_state *state = adap->priv; +- struct v4l2_subdev *sd = &state->sd; +- unsigned int i, free_idx = ADV7511_MAX_ADDRS; +- +- if (!state->cec_enabled_adap) +- return addr == CEC_LOG_ADDR_INVALID ? 
0 : -EIO; +- +- if (addr == CEC_LOG_ADDR_INVALID) { +- adv7511_cec_write_and_or(sd, 0x4b, 0x8f, 0); +- state->cec_valid_addrs = 0; +- return 0; +- } +- +- for (i = 0; i < ADV7511_MAX_ADDRS; i++) { +- bool is_valid = state->cec_valid_addrs & (1 << i); +- +- if (free_idx == ADV7511_MAX_ADDRS && !is_valid) +- free_idx = i; +- if (is_valid && state->cec_addr[i] == addr) +- return 0; +- } +- if (i == ADV7511_MAX_ADDRS) { +- i = free_idx; +- if (i == ADV7511_MAX_ADDRS) +- return -ENXIO; +- } +- state->cec_addr[i] = addr; +- state->cec_valid_addrs |= 1 << i; +- +- switch (i) { +- case 0: +- /* enable address mask 0 */ +- adv7511_cec_write_and_or(sd, 0x4b, 0xef, 0x10); +- /* set address for mask 0 */ +- adv7511_cec_write_and_or(sd, 0x4c, 0xf0, addr); +- break; +- case 1: +- /* enable address mask 1 */ +- adv7511_cec_write_and_or(sd, 0x4b, 0xdf, 0x20); +- /* set address for mask 1 */ +- adv7511_cec_write_and_or(sd, 0x4c, 0x0f, addr << 4); +- break; +- case 2: +- /* enable address mask 2 */ +- adv7511_cec_write_and_or(sd, 0x4b, 0xbf, 0x40); +- /* set address for mask 1 */ +- adv7511_cec_write_and_or(sd, 0x4d, 0xf0, addr); +- break; +- } +- return 0; +-} +- +-static int adv7511_cec_adap_transmit(struct cec_adapter *adap, u8 attempts, +- u32 signal_free_time, struct cec_msg *msg) +-{ +- struct adv7511_state *state = adap->priv; +- struct v4l2_subdev *sd = &state->sd; +- u8 len = msg->len; +- unsigned int i; +- +- v4l2_dbg(1, debug, sd, "%s: len %d\n", __func__, len); +- +- if (len > 16) { +- v4l2_err(sd, "%s: len exceeded 16 (%d)\n", __func__, len); +- return -EINVAL; +- } +- +- /* +- * The number of retries is the number of attempts - 1, but retry +- * at least once. It's not clear if a value of 0 is allowed, so +- * let's do at least one retry. +- */ +- adv7511_cec_write_and_or(sd, 0x12, ~0x70, max(1, attempts - 1) << 4); +- +- /* blocking, clear cec tx irq status */ +- adv7511_wr_and_or(sd, 0x97, 0xc7, 0x38); +- +- /* write data */ +- for (i = 0; i < len; i++) +- adv7511_cec_write(sd, i, msg->msg[i]); +- +- /* set length (data + header) */ +- adv7511_cec_write(sd, 0x10, len); +- /* start transmit, enable tx */ +- adv7511_cec_write(sd, 0x11, 0x01); +- return 0; +-} +- +-static void adv_cec_tx_raw_status(struct v4l2_subdev *sd, u8 tx_raw_status) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- if ((adv7511_cec_read(sd, 0x11) & 0x01) == 0) { +- v4l2_dbg(1, debug, sd, "%s: tx raw: tx disabled\n", __func__); +- return; +- } +- +- if (tx_raw_status & 0x10) { +- v4l2_dbg(1, debug, sd, +- "%s: tx raw: arbitration lost\n", __func__); +- cec_transmit_done(state->cec_adap, CEC_TX_STATUS_ARB_LOST, +- 1, 0, 0, 0); +- return; +- } +- if (tx_raw_status & 0x08) { +- u8 status; +- u8 nack_cnt; +- u8 low_drive_cnt; +- +- v4l2_dbg(1, debug, sd, "%s: tx raw: retry failed\n", __func__); +- /* +- * We set this status bit since this hardware performs +- * retransmissions. 
+- */ +- status = CEC_TX_STATUS_MAX_RETRIES; +- nack_cnt = adv7511_cec_read(sd, 0x14) & 0xf; +- if (nack_cnt) +- status |= CEC_TX_STATUS_NACK; +- low_drive_cnt = adv7511_cec_read(sd, 0x14) >> 4; +- if (low_drive_cnt) +- status |= CEC_TX_STATUS_LOW_DRIVE; +- cec_transmit_done(state->cec_adap, status, +- 0, nack_cnt, low_drive_cnt, 0); +- return; +- } +- if (tx_raw_status & 0x20) { +- v4l2_dbg(1, debug, sd, "%s: tx raw: ready ok\n", __func__); +- cec_transmit_done(state->cec_adap, CEC_TX_STATUS_OK, 0, 0, 0, 0); +- return; +- } +-} +- +-static const struct cec_adap_ops adv7511_cec_adap_ops = { +- .adap_enable = adv7511_cec_adap_enable, +- .adap_log_addr = adv7511_cec_adap_log_addr, +- .adap_transmit = adv7511_cec_adap_transmit, +-}; +-#endif +- +-/* Enable interrupts */ +-static void adv7511_set_isr(struct v4l2_subdev *sd, bool enable) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- u8 irqs = MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT; +- u8 irqs_rd; +- int retries = 100; +- +- v4l2_dbg(2, debug, sd, "%s: %s\n", __func__, enable ? "enable" : "disable"); +- +- if (state->enabled_irq == enable) +- return; +- state->enabled_irq = enable; +- +- /* The datasheet says that the EDID ready interrupt should be +- disabled if there is no hotplug. */ +- if (!enable) +- irqs = 0; +- else if (adv7511_have_hotplug(sd)) +- irqs |= MASK_ADV7511_EDID_RDY_INT; +- +- adv7511_wr_and_or(sd, 0x95, 0xc0, +- (state->cec_enabled_adap && enable) ? 0x39 : 0x00); +- +- /* +- * This i2c write can fail (approx. 1 in 1000 writes). But it +- * is essential that this register is correct, so retry it +- * multiple times. +- * +- * Note that the i2c write does not report an error, but the readback +- * clearly shows the wrong value. +- */ +- do { +- adv7511_wr(sd, 0x94, irqs); +- irqs_rd = adv7511_rd(sd, 0x94); +- } while (retries-- && irqs_rd != irqs); +- +- if (irqs_rd == irqs) +- return; +- v4l2_err(sd, "Could not set interrupts: hw failure?\n"); +-} +- +-/* Interrupt handler */ +-static int adv7511_isr(struct v4l2_subdev *sd, u32 status, bool *handled) +-{ +- u8 irq_status; +- u8 cec_irq; +- +- /* disable interrupts to prevent a race condition */ +- adv7511_set_isr(sd, false); +- irq_status = adv7511_rd(sd, 0x96); +- cec_irq = adv7511_rd(sd, 0x97); +- /* clear detected interrupts */ +- adv7511_wr(sd, 0x96, irq_status); +- adv7511_wr(sd, 0x97, cec_irq); +- +- v4l2_dbg(1, debug, sd, "%s: irq 0x%x, cec-irq 0x%x\n", __func__, +- irq_status, cec_irq); +- +- if (irq_status & (MASK_ADV7511_HPD_INT | MASK_ADV7511_MSEN_INT)) +- adv7511_check_monitor_present_status(sd); +- if (irq_status & MASK_ADV7511_EDID_RDY_INT) +- adv7511_check_edid_status(sd); +- +-#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC) +- if (cec_irq & 0x38) +- adv_cec_tx_raw_status(sd, cec_irq); +- +- if (cec_irq & 1) { +- struct adv7511_state *state = get_adv7511_state(sd); +- struct cec_msg msg; +- +- msg.len = adv7511_cec_read(sd, 0x25) & 0x1f; +- +- v4l2_dbg(1, debug, sd, "%s: cec msg len %d\n", __func__, +- msg.len); +- +- if (msg.len > 16) +- msg.len = 16; +- +- if (msg.len) { +- u8 i; +- +- for (i = 0; i < msg.len; i++) +- msg.msg[i] = adv7511_cec_read(sd, i + 0x15); +- +- adv7511_cec_write(sd, 0x4a, 1); /* toggle to re-enable rx 1 */ +- adv7511_cec_write(sd, 0x4a, 0); +- cec_received_msg(state->cec_adap, &msg); +- } +- } +-#endif +- +- /* enable interrupts */ +- adv7511_set_isr(sd, true); +- +- if (handled) +- *handled = true; +- return 0; +-} +- +-static const struct v4l2_subdev_core_ops adv7511_core_ops = { +- .log_status = adv7511_log_status, 
+-#ifdef CONFIG_VIDEO_ADV_DEBUG +- .g_register = adv7511_g_register, +- .s_register = adv7511_s_register, +-#endif +- .s_power = adv7511_s_power, +- .interrupt_service_routine = adv7511_isr, +-}; +- +-/* ------------------------------ VIDEO OPS ------------------------------ */ +- +-/* Enable/disable adv7511 output */ +-static int adv7511_s_stream(struct v4l2_subdev *sd, int enable) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? "en" : "dis")); +- adv7511_wr_and_or(sd, 0xa1, ~0x3c, (enable ? 0 : 0x3c)); +- if (enable) { +- adv7511_check_monitor_present_status(sd); +- } else { +- adv7511_s_power(sd, 0); +- state->have_monitor = false; +- } +- return 0; +-} +- +-static int adv7511_s_dv_timings(struct v4l2_subdev *sd, +- struct v4l2_dv_timings *timings) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- struct v4l2_bt_timings *bt = &timings->bt; +- u32 fps; +- +- v4l2_dbg(1, debug, sd, "%s:\n", __func__); +- +- /* quick sanity check */ +- if (!v4l2_valid_dv_timings(timings, &adv7511_timings_cap, NULL, NULL)) +- return -EINVAL; +- +- /* Fill the optional fields .standards and .flags in struct v4l2_dv_timings +- if the format is one of the CEA or DMT timings. */ +- v4l2_find_dv_timings_cap(timings, &adv7511_timings_cap, 0, NULL, NULL); +- +- /* save timings */ +- state->dv_timings = *timings; +- +- /* set h/vsync polarities */ +- adv7511_wr_and_or(sd, 0x17, 0x9f, +- ((bt->polarities & V4L2_DV_VSYNC_POS_POL) ? 0 : 0x40) | +- ((bt->polarities & V4L2_DV_HSYNC_POS_POL) ? 0 : 0x20)); +- +- fps = (u32)bt->pixelclock / (V4L2_DV_BT_FRAME_WIDTH(bt) * V4L2_DV_BT_FRAME_HEIGHT(bt)); +- switch (fps) { +- case 24: +- adv7511_wr_and_or(sd, 0xfb, 0xf9, 1 << 1); +- break; +- case 25: +- adv7511_wr_and_or(sd, 0xfb, 0xf9, 2 << 1); +- break; +- case 30: +- adv7511_wr_and_or(sd, 0xfb, 0xf9, 3 << 1); +- break; +- default: +- adv7511_wr_and_or(sd, 0xfb, 0xf9, 0); +- break; +- } +- +- /* update quantization range based on new dv_timings */ +- adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl); +- +- return 0; +-} +- +-static int adv7511_g_dv_timings(struct v4l2_subdev *sd, +- struct v4l2_dv_timings *timings) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- v4l2_dbg(1, debug, sd, "%s:\n", __func__); +- +- if (!timings) +- return -EINVAL; +- +- *timings = state->dv_timings; +- +- return 0; +-} +- +-static int adv7511_enum_dv_timings(struct v4l2_subdev *sd, +- struct v4l2_enum_dv_timings *timings) +-{ +- if (timings->pad != 0) +- return -EINVAL; +- +- return v4l2_enum_dv_timings_cap(timings, &adv7511_timings_cap, NULL, NULL); +-} +- +-static int adv7511_dv_timings_cap(struct v4l2_subdev *sd, +- struct v4l2_dv_timings_cap *cap) +-{ +- if (cap->pad != 0) +- return -EINVAL; +- +- *cap = adv7511_timings_cap; +- return 0; +-} +- +-static const struct v4l2_subdev_video_ops adv7511_video_ops = { +- .s_stream = adv7511_s_stream, +- .s_dv_timings = adv7511_s_dv_timings, +- .g_dv_timings = adv7511_g_dv_timings, +-}; +- +-/* ------------------------------ AUDIO OPS ------------------------------ */ +-static int adv7511_s_audio_stream(struct v4l2_subdev *sd, int enable) +-{ +- v4l2_dbg(1, debug, sd, "%s: %sable\n", __func__, (enable ? 
"en" : "dis")); +- +- if (enable) +- adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x80); +- else +- adv7511_wr_and_or(sd, 0x4b, 0x3f, 0x40); +- +- return 0; +-} +- +-static int adv7511_s_clock_freq(struct v4l2_subdev *sd, u32 freq) +-{ +- u32 N; +- +- switch (freq) { +- case 32000: N = 4096; break; +- case 44100: N = 6272; break; +- case 48000: N = 6144; break; +- case 88200: N = 12544; break; +- case 96000: N = 12288; break; +- case 176400: N = 25088; break; +- case 192000: N = 24576; break; +- default: +- return -EINVAL; +- } +- +- /* Set N (used with CTS to regenerate the audio clock) */ +- adv7511_wr(sd, 0x01, (N >> 16) & 0xf); +- adv7511_wr(sd, 0x02, (N >> 8) & 0xff); +- adv7511_wr(sd, 0x03, N & 0xff); +- +- return 0; +-} +- +-static int adv7511_s_i2s_clock_freq(struct v4l2_subdev *sd, u32 freq) +-{ +- u32 i2s_sf; +- +- switch (freq) { +- case 32000: i2s_sf = 0x30; break; +- case 44100: i2s_sf = 0x00; break; +- case 48000: i2s_sf = 0x20; break; +- case 88200: i2s_sf = 0x80; break; +- case 96000: i2s_sf = 0xa0; break; +- case 176400: i2s_sf = 0xc0; break; +- case 192000: i2s_sf = 0xe0; break; +- default: +- return -EINVAL; +- } +- +- /* Set sampling frequency for I2S audio to 48 kHz */ +- adv7511_wr_and_or(sd, 0x15, 0xf, i2s_sf); +- +- return 0; +-} +- +-static int adv7511_s_routing(struct v4l2_subdev *sd, u32 input, u32 output, u32 config) +-{ +- /* Only 2 channels in use for application */ +- adv7511_wr_and_or(sd, 0x73, 0xf8, 0x1); +- /* Speaker mapping */ +- adv7511_wr(sd, 0x76, 0x00); +- +- /* 16 bit audio word length */ +- adv7511_wr_and_or(sd, 0x14, 0xf0, 0x02); +- +- return 0; +-} +- +-static const struct v4l2_subdev_audio_ops adv7511_audio_ops = { +- .s_stream = adv7511_s_audio_stream, +- .s_clock_freq = adv7511_s_clock_freq, +- .s_i2s_clock_freq = adv7511_s_i2s_clock_freq, +- .s_routing = adv7511_s_routing, +-}; +- +-/* ---------------------------- PAD OPS ------------------------------------- */ +- +-static int adv7511_get_edid(struct v4l2_subdev *sd, struct v4l2_edid *edid) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- memset(edid->reserved, 0, sizeof(edid->reserved)); +- +- if (edid->pad != 0) +- return -EINVAL; +- +- if (edid->start_block == 0 && edid->blocks == 0) { +- edid->blocks = state->edid.segments * 2; +- return 0; +- } +- +- if (state->edid.segments == 0) +- return -ENODATA; +- +- if (edid->start_block >= state->edid.segments * 2) +- return -EINVAL; +- +- if (edid->start_block + edid->blocks > state->edid.segments * 2) +- edid->blocks = state->edid.segments * 2 - edid->start_block; +- +- memcpy(edid->edid, &state->edid.data[edid->start_block * 128], +- 128 * edid->blocks); +- +- return 0; +-} +- +-static int adv7511_enum_mbus_code(struct v4l2_subdev *sd, +- struct v4l2_subdev_pad_config *cfg, +- struct v4l2_subdev_mbus_code_enum *code) +-{ +- if (code->pad != 0) +- return -EINVAL; +- +- switch (code->index) { +- case 0: +- code->code = MEDIA_BUS_FMT_RGB888_1X24; +- break; +- case 1: +- code->code = MEDIA_BUS_FMT_YUYV8_1X16; +- break; +- case 2: +- code->code = MEDIA_BUS_FMT_UYVY8_1X16; +- break; +- default: +- return -EINVAL; +- } +- return 0; +-} +- +-static void adv7511_fill_format(struct adv7511_state *state, +- struct v4l2_mbus_framefmt *format) +-{ +- format->width = state->dv_timings.bt.width; +- format->height = state->dv_timings.bt.height; +- format->field = V4L2_FIELD_NONE; +-} +- +-static int adv7511_get_fmt(struct v4l2_subdev *sd, +- struct v4l2_subdev_pad_config *cfg, +- struct v4l2_subdev_format *format) +-{ +- struct adv7511_state *state = 
get_adv7511_state(sd); +- +- if (format->pad != 0) +- return -EINVAL; +- +- memset(&format->format, 0, sizeof(format->format)); +- adv7511_fill_format(state, &format->format); +- +- if (format->which == V4L2_SUBDEV_FORMAT_TRY) { +- struct v4l2_mbus_framefmt *fmt; +- +- fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad); +- format->format.code = fmt->code; +- format->format.colorspace = fmt->colorspace; +- format->format.ycbcr_enc = fmt->ycbcr_enc; +- format->format.quantization = fmt->quantization; +- format->format.xfer_func = fmt->xfer_func; +- } else { +- format->format.code = state->fmt_code; +- format->format.colorspace = state->colorspace; +- format->format.ycbcr_enc = state->ycbcr_enc; +- format->format.quantization = state->quantization; +- format->format.xfer_func = state->xfer_func; +- } +- +- return 0; +-} +- +-static int adv7511_set_fmt(struct v4l2_subdev *sd, +- struct v4l2_subdev_pad_config *cfg, +- struct v4l2_subdev_format *format) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- /* +- * Bitfield namings come the CEA-861-F standard, table 8 "Auxiliary +- * Video Information (AVI) InfoFrame Format" +- * +- * c = Colorimetry +- * ec = Extended Colorimetry +- * y = RGB or YCbCr +- * q = RGB Quantization Range +- * yq = YCC Quantization Range +- */ +- u8 c = HDMI_COLORIMETRY_NONE; +- u8 ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; +- u8 y = HDMI_COLORSPACE_RGB; +- u8 q = HDMI_QUANTIZATION_RANGE_DEFAULT; +- u8 yq = HDMI_YCC_QUANTIZATION_RANGE_LIMITED; +- u8 itc = state->content_type != V4L2_DV_IT_CONTENT_TYPE_NO_ITC; +- u8 cn = itc ? state->content_type : V4L2_DV_IT_CONTENT_TYPE_GRAPHICS; +- +- if (format->pad != 0) +- return -EINVAL; +- switch (format->format.code) { +- case MEDIA_BUS_FMT_UYVY8_1X16: +- case MEDIA_BUS_FMT_YUYV8_1X16: +- case MEDIA_BUS_FMT_RGB888_1X24: +- break; +- default: +- return -EINVAL; +- } +- +- adv7511_fill_format(state, &format->format); +- if (format->which == V4L2_SUBDEV_FORMAT_TRY) { +- struct v4l2_mbus_framefmt *fmt; +- +- fmt = v4l2_subdev_get_try_format(sd, cfg, format->pad); +- fmt->code = format->format.code; +- fmt->colorspace = format->format.colorspace; +- fmt->ycbcr_enc = format->format.ycbcr_enc; +- fmt->quantization = format->format.quantization; +- fmt->xfer_func = format->format.xfer_func; +- return 0; +- } +- +- switch (format->format.code) { +- case MEDIA_BUS_FMT_UYVY8_1X16: +- adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01); +- adv7511_wr_and_or(sd, 0x16, 0x03, 0xb8); +- y = HDMI_COLORSPACE_YUV422; +- break; +- case MEDIA_BUS_FMT_YUYV8_1X16: +- adv7511_wr_and_or(sd, 0x15, 0xf0, 0x01); +- adv7511_wr_and_or(sd, 0x16, 0x03, 0xbc); +- y = HDMI_COLORSPACE_YUV422; +- break; +- case MEDIA_BUS_FMT_RGB888_1X24: +- default: +- adv7511_wr_and_or(sd, 0x15, 0xf0, 0x00); +- adv7511_wr_and_or(sd, 0x16, 0x03, 0x00); +- break; +- } +- state->fmt_code = format->format.code; +- state->colorspace = format->format.colorspace; +- state->ycbcr_enc = format->format.ycbcr_enc; +- state->quantization = format->format.quantization; +- state->xfer_func = format->format.xfer_func; +- +- switch (format->format.colorspace) { +- case V4L2_COLORSPACE_ADOBERGB: +- c = HDMI_COLORIMETRY_EXTENDED; +- ec = y ? HDMI_EXTENDED_COLORIMETRY_ADOBE_YCC_601 : +- HDMI_EXTENDED_COLORIMETRY_ADOBE_RGB; +- break; +- case V4L2_COLORSPACE_SMPTE170M: +- c = y ? 
HDMI_COLORIMETRY_ITU_601 : HDMI_COLORIMETRY_NONE; +- if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV601) { +- c = HDMI_COLORIMETRY_EXTENDED; +- ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; +- } +- break; +- case V4L2_COLORSPACE_REC709: +- c = y ? HDMI_COLORIMETRY_ITU_709 : HDMI_COLORIMETRY_NONE; +- if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_XV709) { +- c = HDMI_COLORIMETRY_EXTENDED; +- ec = HDMI_EXTENDED_COLORIMETRY_XV_YCC_709; +- } +- break; +- case V4L2_COLORSPACE_SRGB: +- c = y ? HDMI_COLORIMETRY_EXTENDED : HDMI_COLORIMETRY_NONE; +- ec = y ? HDMI_EXTENDED_COLORIMETRY_S_YCC_601 : +- HDMI_EXTENDED_COLORIMETRY_XV_YCC_601; +- break; +- case V4L2_COLORSPACE_BT2020: +- c = HDMI_COLORIMETRY_EXTENDED; +- if (y && format->format.ycbcr_enc == V4L2_YCBCR_ENC_BT2020_CONST_LUM) +- ec = 5; /* Not yet available in hdmi.h */ +- else +- ec = 6; /* Not yet available in hdmi.h */ +- break; +- default: +- break; +- } +- +- /* +- * CEA-861-F says that for RGB formats the YCC range must match the +- * RGB range, although sources should ignore the YCC range. +- * +- * The RGB quantization range shouldn't be non-zero if the EDID doesn't +- * have the Q bit set in the Video Capabilities Data Block, however this +- * isn't checked at the moment. The assumption is that the application +- * knows the EDID and can detect this. +- * +- * The same is true for the YCC quantization range: non-standard YCC +- * quantization ranges should only be sent if the EDID has the YQ bit +- * set in the Video Capabilities Data Block. +- */ +- switch (format->format.quantization) { +- case V4L2_QUANTIZATION_FULL_RANGE: +- q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT : +- HDMI_QUANTIZATION_RANGE_FULL; +- yq = q ? q - 1 : HDMI_YCC_QUANTIZATION_RANGE_FULL; +- break; +- case V4L2_QUANTIZATION_LIM_RANGE: +- q = y ? HDMI_QUANTIZATION_RANGE_DEFAULT : +- HDMI_QUANTIZATION_RANGE_LIMITED; +- yq = q ? 
q - 1 : HDMI_YCC_QUANTIZATION_RANGE_LIMITED; +- break; +- } +- +- adv7511_wr_and_or(sd, 0x4a, 0xbf, 0); +- adv7511_wr_and_or(sd, 0x55, 0x9f, y << 5); +- adv7511_wr_and_or(sd, 0x56, 0x3f, c << 6); +- adv7511_wr_and_or(sd, 0x57, 0x83, (ec << 4) | (q << 2) | (itc << 7)); +- adv7511_wr_and_or(sd, 0x59, 0x0f, (yq << 6) | (cn << 4)); +- adv7511_wr_and_or(sd, 0x4a, 0xff, 1); +- adv7511_set_rgb_quantization_mode(sd, state->rgb_quantization_range_ctrl); +- +- return 0; +-} +- +-static const struct v4l2_subdev_pad_ops adv7511_pad_ops = { +- .get_edid = adv7511_get_edid, +- .enum_mbus_code = adv7511_enum_mbus_code, +- .get_fmt = adv7511_get_fmt, +- .set_fmt = adv7511_set_fmt, +- .enum_dv_timings = adv7511_enum_dv_timings, +- .dv_timings_cap = adv7511_dv_timings_cap, +-}; +- +-/* --------------------- SUBDEV OPS --------------------------------------- */ +- +-static const struct v4l2_subdev_ops adv7511_ops = { +- .core = &adv7511_core_ops, +- .pad = &adv7511_pad_ops, +- .video = &adv7511_video_ops, +- .audio = &adv7511_audio_ops, +-}; +- +-/* ----------------------------------------------------------------------- */ +-static void adv7511_dbg_dump_edid(int lvl, int debug, struct v4l2_subdev *sd, int segment, u8 *buf) +-{ +- if (debug >= lvl) { +- int i, j; +- v4l2_dbg(lvl, debug, sd, "edid segment %d\n", segment); +- for (i = 0; i < 256; i += 16) { +- u8 b[128]; +- u8 *bp = b; +- if (i == 128) +- v4l2_dbg(lvl, debug, sd, "\n"); +- for (j = i; j < i + 16; j++) { +- sprintf(bp, "0x%02x, ", buf[j]); +- bp += 6; +- } +- bp[0] = '\0'; +- v4l2_dbg(lvl, debug, sd, "%s\n", b); +- } +- } +-} +- +-static void adv7511_notify_no_edid(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- struct adv7511_edid_detect ed; +- +- /* We failed to read the EDID, so send an event for this. */ +- ed.present = false; +- ed.segment = adv7511_rd(sd, 0xc4); +- ed.phys_addr = CEC_PHYS_ADDR_INVALID; +- cec_s_phys_addr(state->cec_adap, ed.phys_addr, false); +- v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed); +- v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x0); +-} +- +-static void adv7511_edid_handler(struct work_struct *work) +-{ +- struct delayed_work *dwork = to_delayed_work(work); +- struct adv7511_state *state = container_of(dwork, struct adv7511_state, edid_handler); +- struct v4l2_subdev *sd = &state->sd; +- +- v4l2_dbg(1, debug, sd, "%s:\n", __func__); +- +- if (adv7511_check_edid_status(sd)) { +- /* Return if we received the EDID. */ +- return; +- } +- +- if (adv7511_have_hotplug(sd)) { +- /* We must retry reading the EDID several times, it is possible +- * that initially the EDID couldn't be read due to i2c errors +- * (DVI connectors are particularly prone to this problem). */ +- if (state->edid.read_retries) { +- state->edid.read_retries--; +- v4l2_dbg(1, debug, sd, "%s: edid read failed\n", __func__); +- state->have_monitor = false; +- adv7511_s_power(sd, false); +- adv7511_s_power(sd, true); +- queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); +- return; +- } +- } +- +- /* We failed to read the EDID, so send an event for this. */ +- adv7511_notify_no_edid(sd); +- v4l2_dbg(1, debug, sd, "%s: no edid found\n", __func__); +-} +- +-static void adv7511_audio_setup(struct v4l2_subdev *sd) +-{ +- v4l2_dbg(1, debug, sd, "%s\n", __func__); +- +- adv7511_s_i2s_clock_freq(sd, 48000); +- adv7511_s_clock_freq(sd, 48000); +- adv7511_s_routing(sd, 0, 0, 0); +-} +- +-/* Configure hdmi transmitter. 
*/ +-static void adv7511_setup(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- v4l2_dbg(1, debug, sd, "%s\n", __func__); +- +- /* Input format: RGB 4:4:4 */ +- adv7511_wr_and_or(sd, 0x15, 0xf0, 0x0); +- /* Output format: RGB 4:4:4 */ +- adv7511_wr_and_or(sd, 0x16, 0x7f, 0x0); +- /* 1st order interpolation 4:2:2 -> 4:4:4 up conversion, Aspect ratio: 16:9 */ +- adv7511_wr_and_or(sd, 0x17, 0xf9, 0x06); +- /* Disable pixel repetition */ +- adv7511_wr_and_or(sd, 0x3b, 0x9f, 0x0); +- /* Disable CSC */ +- adv7511_wr_and_or(sd, 0x18, 0x7f, 0x0); +- /* Output format: RGB 4:4:4, Active Format Information is valid, +- * underscanned */ +- adv7511_wr_and_or(sd, 0x55, 0x9c, 0x12); +- /* AVI Info frame packet enable, Audio Info frame disable */ +- adv7511_wr_and_or(sd, 0x44, 0xe7, 0x10); +- /* Colorimetry, Active format aspect ratio: same as picure. */ +- adv7511_wr(sd, 0x56, 0xa8); +- /* No encryption */ +- adv7511_wr_and_or(sd, 0xaf, 0xed, 0x0); +- +- /* Positive clk edge capture for input video clock */ +- adv7511_wr_and_or(sd, 0xba, 0x1f, 0x60); +- +- adv7511_audio_setup(sd); +- +- v4l2_ctrl_handler_setup(&state->hdl); +-} +- +-static void adv7511_notify_monitor_detect(struct v4l2_subdev *sd) +-{ +- struct adv7511_monitor_detect mdt; +- struct adv7511_state *state = get_adv7511_state(sd); +- +- mdt.present = state->have_monitor; +- v4l2_subdev_notify(sd, ADV7511_MONITOR_DETECT, (void *)&mdt); +-} +- +-static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- /* read hotplug and rx-sense state */ +- u8 status = adv7511_rd(sd, 0x42); +- +- v4l2_dbg(1, debug, sd, "%s: status: 0x%x%s%s\n", +- __func__, +- status, +- status & MASK_ADV7511_HPD_DETECT ? ", hotplug" : "", +- status & MASK_ADV7511_MSEN_DETECT ? ", rx-sense" : ""); +- +- /* update read only ctrls */ +- v4l2_ctrl_s_ctrl(state->hotplug_ctrl, adv7511_have_hotplug(sd) ? 0x1 : 0x0); +- v4l2_ctrl_s_ctrl(state->rx_sense_ctrl, adv7511_have_rx_sense(sd) ? 
0x1 : 0x0); +- +- if ((status & MASK_ADV7511_HPD_DETECT) && ((status & MASK_ADV7511_MSEN_DETECT) || state->edid.segments)) { +- v4l2_dbg(1, debug, sd, "%s: hotplug and (rx-sense or edid)\n", __func__); +- if (!state->have_monitor) { +- v4l2_dbg(1, debug, sd, "%s: monitor detected\n", __func__); +- state->have_monitor = true; +- adv7511_set_isr(sd, true); +- if (!adv7511_s_power(sd, true)) { +- v4l2_dbg(1, debug, sd, "%s: monitor detected, powerup failed\n", __func__); +- return; +- } +- adv7511_setup(sd); +- adv7511_notify_monitor_detect(sd); +- state->edid.read_retries = EDID_MAX_RETRIES; +- queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); +- } +- } else if (status & MASK_ADV7511_HPD_DETECT) { +- v4l2_dbg(1, debug, sd, "%s: hotplug detected\n", __func__); +- state->edid.read_retries = EDID_MAX_RETRIES; +- queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); +- } else if (!(status & MASK_ADV7511_HPD_DETECT)) { +- v4l2_dbg(1, debug, sd, "%s: hotplug not detected\n", __func__); +- if (state->have_monitor) { +- v4l2_dbg(1, debug, sd, "%s: monitor not detected\n", __func__); +- state->have_monitor = false; +- adv7511_notify_monitor_detect(sd); +- } +- adv7511_s_power(sd, false); +- memset(&state->edid, 0, sizeof(struct adv7511_state_edid)); +- adv7511_notify_no_edid(sd); +- } +-} +- +-static bool edid_block_verify_crc(u8 *edid_block) +-{ +- u8 sum = 0; +- int i; +- +- for (i = 0; i < 128; i++) +- sum += edid_block[i]; +- return sum == 0; +-} +- +-static bool edid_verify_crc(struct v4l2_subdev *sd, u32 segment) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- u32 blocks = state->edid.blocks; +- u8 *data = state->edid.data; +- +- if (!edid_block_verify_crc(&data[segment * 256])) +- return false; +- if ((segment + 1) * 2 <= blocks) +- return edid_block_verify_crc(&data[segment * 256 + 128]); +- return true; +-} +- +-static bool edid_verify_header(struct v4l2_subdev *sd, u32 segment) +-{ +- static const u8 hdmi_header[] = { +- 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 +- }; +- struct adv7511_state *state = get_adv7511_state(sd); +- u8 *data = state->edid.data; +- +- if (segment != 0) +- return true; +- return !memcmp(data, hdmi_header, sizeof(hdmi_header)); +-} +- +-static bool adv7511_check_edid_status(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- u8 edidRdy = adv7511_rd(sd, 0xc5); +- +- v4l2_dbg(1, debug, sd, "%s: edid ready (retries: %d)\n", +- __func__, EDID_MAX_RETRIES - state->edid.read_retries); +- +- if (state->edid.complete) +- return true; +- +- if (edidRdy & MASK_ADV7511_EDID_RDY) { +- int segment = adv7511_rd(sd, 0xc4); +- struct adv7511_edid_detect ed; +- +- if (segment >= EDID_MAX_SEGM) { +- v4l2_err(sd, "edid segment number too big\n"); +- return false; +- } +- v4l2_dbg(1, debug, sd, "%s: got segment %d\n", __func__, segment); +- adv7511_edid_rd(sd, 256, &state->edid.data[segment * 256]); +- adv7511_dbg_dump_edid(2, debug, sd, segment, &state->edid.data[segment * 256]); +- if (segment == 0) { +- state->edid.blocks = state->edid.data[0x7e] + 1; +- v4l2_dbg(1, debug, sd, "%s: %d blocks in total\n", __func__, state->edid.blocks); +- } +- if (!edid_verify_crc(sd, segment) || +- !edid_verify_header(sd, segment)) { +- /* edid crc error, force reread of edid segment */ +- v4l2_err(sd, "%s: edid crc or header error\n", __func__); +- state->have_monitor = false; +- adv7511_s_power(sd, false); +- adv7511_s_power(sd, true); +- return false; +- } +- /* one more segment read ok */ +- 
state->edid.segments = segment + 1; +- v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x1); +- if (((state->edid.data[0x7e] >> 1) + 1) > state->edid.segments) { +- /* Request next EDID segment */ +- v4l2_dbg(1, debug, sd, "%s: request segment %d\n", __func__, state->edid.segments); +- adv7511_wr(sd, 0xc9, 0xf); +- adv7511_wr(sd, 0xc4, state->edid.segments); +- state->edid.read_retries = EDID_MAX_RETRIES; +- queue_delayed_work(state->work_queue, &state->edid_handler, EDID_DELAY); +- return false; +- } +- +- v4l2_dbg(1, debug, sd, "%s: edid complete with %d segment(s)\n", __func__, state->edid.segments); +- state->edid.complete = true; +- ed.phys_addr = cec_get_edid_phys_addr(state->edid.data, +- state->edid.segments * 256, +- NULL); +- /* report when we have all segments +- but report only for segment 0 +- */ +- ed.present = true; +- ed.segment = 0; +- state->edid_detect_counter++; +- cec_s_phys_addr(state->cec_adap, ed.phys_addr, false); +- v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed); +- return ed.present; +- } +- +- return false; +-} +- +-static int adv7511_registered(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- int err; +- +- err = cec_register_adapter(state->cec_adap); +- if (err) +- cec_delete_adapter(state->cec_adap); +- return err; +-} +- +-static void adv7511_unregistered(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- +- cec_unregister_adapter(state->cec_adap); +-} +- +-static const struct v4l2_subdev_internal_ops adv7511_int_ops = { +- .registered = adv7511_registered, +- .unregistered = adv7511_unregistered, +-}; +- +-/* ----------------------------------------------------------------------- */ +-/* Setup ADV7511 */ +-static void adv7511_init_setup(struct v4l2_subdev *sd) +-{ +- struct adv7511_state *state = get_adv7511_state(sd); +- struct adv7511_state_edid *edid = &state->edid; +- u32 cec_clk = state->pdata.cec_clk; +- u8 ratio; +- +- v4l2_dbg(1, debug, sd, "%s\n", __func__); +- +- /* clear all interrupts */ +- adv7511_wr(sd, 0x96, 0xff); +- adv7511_wr(sd, 0x97, 0xff); +- /* +- * Stop HPD from resetting a lot of registers. +- * It might leave the chip in a partly un-initialized state, +- * in particular with regards to hotplug bounces. 
+- */ +- adv7511_wr_and_or(sd, 0xd6, 0x3f, 0xc0); +- memset(edid, 0, sizeof(struct adv7511_state_edid)); +- state->have_monitor = false; +- adv7511_set_isr(sd, false); +- adv7511_s_stream(sd, false); +- adv7511_s_audio_stream(sd, false); +- +- if (state->i2c_cec == NULL) +- return; +- +- v4l2_dbg(1, debug, sd, "%s: cec_clk %d\n", __func__, cec_clk); +- +- /* cec soft reset */ +- adv7511_cec_write(sd, 0x50, 0x01); +- adv7511_cec_write(sd, 0x50, 0x00); +- +- /* legacy mode */ +- adv7511_cec_write(sd, 0x4a, 0x00); +- +- if (cec_clk % 750000 != 0) +- v4l2_err(sd, "%s: cec_clk %d, not multiple of 750 Khz\n", +- __func__, cec_clk); +- +- ratio = (cec_clk / 750000) - 1; +- adv7511_cec_write(sd, 0x4e, ratio << 2); +-} +- +-static int adv7511_probe(struct i2c_client *client, const struct i2c_device_id *id) +-{ +- struct adv7511_state *state; +- struct adv7511_platform_data *pdata = client->dev.platform_data; +- struct v4l2_ctrl_handler *hdl; +- struct v4l2_subdev *sd; +- u8 chip_id[2]; +- int err = -EIO; +- +- /* Check if the adapter supports the needed features */ +- if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_BYTE_DATA)) +- return -EIO; +- +- state = devm_kzalloc(&client->dev, sizeof(struct adv7511_state), GFP_KERNEL); +- if (!state) +- return -ENOMEM; +- +- /* Platform data */ +- if (!pdata) { +- v4l_err(client, "No platform data!\n"); +- return -ENODEV; +- } +- memcpy(&state->pdata, pdata, sizeof(state->pdata)); +- state->fmt_code = MEDIA_BUS_FMT_RGB888_1X24; +- state->colorspace = V4L2_COLORSPACE_SRGB; +- +- sd = &state->sd; +- +- v4l2_dbg(1, debug, sd, "detecting adv7511 client on address 0x%x\n", +- client->addr << 1); +- +- v4l2_i2c_subdev_init(sd, client, &adv7511_ops); +- sd->internal_ops = &adv7511_int_ops; +- +- hdl = &state->hdl; +- v4l2_ctrl_handler_init(hdl, 10); +- /* add in ascending ID order */ +- state->hdmi_mode_ctrl = v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops, +- V4L2_CID_DV_TX_MODE, V4L2_DV_TX_MODE_HDMI, +- 0, V4L2_DV_TX_MODE_DVI_D); +- state->hotplug_ctrl = v4l2_ctrl_new_std(hdl, NULL, +- V4L2_CID_DV_TX_HOTPLUG, 0, 1, 0, 0); +- state->rx_sense_ctrl = v4l2_ctrl_new_std(hdl, NULL, +- V4L2_CID_DV_TX_RXSENSE, 0, 1, 0, 0); +- state->have_edid0_ctrl = v4l2_ctrl_new_std(hdl, NULL, +- V4L2_CID_DV_TX_EDID_PRESENT, 0, 1, 0, 0); +- state->rgb_quantization_range_ctrl = +- v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops, +- V4L2_CID_DV_TX_RGB_RANGE, V4L2_DV_RGB_RANGE_FULL, +- 0, V4L2_DV_RGB_RANGE_AUTO); +- state->content_type_ctrl = +- v4l2_ctrl_new_std_menu(hdl, &adv7511_ctrl_ops, +- V4L2_CID_DV_TX_IT_CONTENT_TYPE, V4L2_DV_IT_CONTENT_TYPE_NO_ITC, +- 0, V4L2_DV_IT_CONTENT_TYPE_NO_ITC); +- sd->ctrl_handler = hdl; +- if (hdl->error) { +- err = hdl->error; +- goto err_hdl; +- } +- state->pad.flags = MEDIA_PAD_FL_SINK; +- err = media_entity_pads_init(&sd->entity, 1, &state->pad); +- if (err) +- goto err_hdl; +- +- /* EDID and CEC i2c addr */ +- state->i2c_edid_addr = state->pdata.i2c_edid << 1; +- state->i2c_cec_addr = state->pdata.i2c_cec << 1; +- state->i2c_pktmem_addr = state->pdata.i2c_pktmem << 1; +- +- state->chip_revision = adv7511_rd(sd, 0x0); +- chip_id[0] = adv7511_rd(sd, 0xf5); +- chip_id[1] = adv7511_rd(sd, 0xf6); +- if (chip_id[0] != 0x75 || chip_id[1] != 0x11) { +- v4l2_err(sd, "chip_id != 0x7511, read 0x%02x%02x\n", chip_id[0], +- chip_id[1]); +- err = -EIO; +- goto err_entity; +- } +- +- state->i2c_edid = i2c_new_dummy(client->adapter, +- state->i2c_edid_addr >> 1); +- if (state->i2c_edid == NULL) { +- v4l2_err(sd, "failed to register edid i2c client\n"); +- 
err = -ENOMEM; +- goto err_entity; +- } +- +- adv7511_wr(sd, 0xe1, state->i2c_cec_addr); +- if (state->pdata.cec_clk < 3000000 || +- state->pdata.cec_clk > 100000000) { +- v4l2_err(sd, "%s: cec_clk %u outside range, disabling cec\n", +- __func__, state->pdata.cec_clk); +- state->pdata.cec_clk = 0; +- } +- +- if (state->pdata.cec_clk) { +- state->i2c_cec = i2c_new_dummy(client->adapter, +- state->i2c_cec_addr >> 1); +- if (state->i2c_cec == NULL) { +- v4l2_err(sd, "failed to register cec i2c client\n"); +- err = -ENOMEM; +- goto err_unreg_edid; +- } +- adv7511_wr(sd, 0xe2, 0x00); /* power up cec section */ +- } else { +- adv7511_wr(sd, 0xe2, 0x01); /* power down cec section */ +- } +- +- state->i2c_pktmem = i2c_new_dummy(client->adapter, state->i2c_pktmem_addr >> 1); +- if (state->i2c_pktmem == NULL) { +- v4l2_err(sd, "failed to register pktmem i2c client\n"); +- err = -ENOMEM; +- goto err_unreg_cec; +- } +- +- state->work_queue = create_singlethread_workqueue(sd->name); +- if (state->work_queue == NULL) { +- v4l2_err(sd, "could not create workqueue\n"); +- err = -ENOMEM; +- goto err_unreg_pktmem; +- } +- +- INIT_DELAYED_WORK(&state->edid_handler, adv7511_edid_handler); +- +- adv7511_init_setup(sd); +- +-#if IS_ENABLED(CONFIG_VIDEO_ADV7511_CEC) +- state->cec_adap = cec_allocate_adapter(&adv7511_cec_adap_ops, +- state, dev_name(&client->dev), CEC_CAP_TRANSMIT | +- CEC_CAP_LOG_ADDRS | CEC_CAP_PASSTHROUGH | CEC_CAP_RC, +- ADV7511_MAX_ADDRS, &client->dev); +- err = PTR_ERR_OR_ZERO(state->cec_adap); +- if (err) { +- destroy_workqueue(state->work_queue); +- goto err_unreg_pktmem; +- } +-#endif +- +- adv7511_set_isr(sd, true); +- adv7511_check_monitor_present_status(sd); +- +- v4l2_info(sd, "%s found @ 0x%x (%s)\n", client->name, +- client->addr << 1, client->adapter->name); +- return 0; +- +-err_unreg_pktmem: +- i2c_unregister_device(state->i2c_pktmem); +-err_unreg_cec: +- if (state->i2c_cec) +- i2c_unregister_device(state->i2c_cec); +-err_unreg_edid: +- i2c_unregister_device(state->i2c_edid); +-err_entity: +- media_entity_cleanup(&sd->entity); +-err_hdl: +- v4l2_ctrl_handler_free(&state->hdl); +- return err; +-} +- +-/* ----------------------------------------------------------------------- */ +- +-static int adv7511_remove(struct i2c_client *client) +-{ +- struct v4l2_subdev *sd = i2c_get_clientdata(client); +- struct adv7511_state *state = get_adv7511_state(sd); +- +- state->chip_revision = -1; +- +- v4l2_dbg(1, debug, sd, "%s removed @ 0x%x (%s)\n", client->name, +- client->addr << 1, client->adapter->name); +- +- adv7511_set_isr(sd, false); +- adv7511_init_setup(sd); +- cancel_delayed_work(&state->edid_handler); +- i2c_unregister_device(state->i2c_edid); +- if (state->i2c_cec) +- i2c_unregister_device(state->i2c_cec); +- i2c_unregister_device(state->i2c_pktmem); +- destroy_workqueue(state->work_queue); +- v4l2_device_unregister_subdev(sd); +- media_entity_cleanup(&sd->entity); +- v4l2_ctrl_handler_free(sd->ctrl_handler); +- return 0; +-} +- +-/* ----------------------------------------------------------------------- */ +- +-static struct i2c_device_id adv7511_id[] = { +- { "adv7511", 0 }, +- { } +-}; +-MODULE_DEVICE_TABLE(i2c, adv7511_id); +- +-static struct i2c_driver adv7511_driver = { +- .driver = { +- .name = "adv7511", +- }, +- .probe = adv7511_probe, +- .remove = adv7511_remove, +- .id_table = adv7511_id, +-}; +- +-module_i2c_driver(adv7511_driver); +diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c +index 6f46c59415fe..73a2dba475d0 100644 +--- 
a/drivers/media/media-device.c ++++ b/drivers/media/media-device.c +@@ -474,6 +474,7 @@ static long media_device_enum_links32(struct media_device *mdev, + { + struct media_links_enum links; + compat_uptr_t pads_ptr, links_ptr; ++ int ret; + + memset(&links, 0, sizeof(links)); + +@@ -485,7 +486,14 @@ static long media_device_enum_links32(struct media_device *mdev, + links.pads = compat_ptr(pads_ptr); + links.links = compat_ptr(links_ptr); + +- return media_device_enum_links(mdev, &links); ++ ret = media_device_enum_links(mdev, &links); ++ if (ret) ++ return ret; ++ ++ if (copy_to_user(ulinks->reserved, links.reserved, ++ sizeof(ulinks->reserved))) ++ return -EFAULT; ++ return 0; + } + + #define MEDIA_IOC_ENUM_LINKS32 _IOWR('|', 0x02, struct media_links_enum32) +diff --git a/drivers/media/platform/coda/coda-bit.c b/drivers/media/platform/coda/coda-bit.c +index 717ee9a6a80e..7b4c93619c3d 100644 +--- a/drivers/media/platform/coda/coda-bit.c ++++ b/drivers/media/platform/coda/coda-bit.c +@@ -1581,6 +1581,7 @@ static int __coda_start_decoding(struct coda_ctx *ctx) + coda_write(dev, 0, CODA_REG_BIT_BIT_STREAM_PARAM); + return -ETIMEDOUT; + } ++ ctx->sequence_offset = ~0U; + ctx->initialized = 1; + + /* Update kfifo out pointer from coda bitstream read pointer */ +@@ -1966,12 +1967,17 @@ static void coda_finish_decode(struct coda_ctx *ctx) + else if (ctx->display_idx < 0) + ctx->hold = true; + } else if (decoded_idx == -2) { ++ if (ctx->display_idx >= 0 && ++ ctx->display_idx < ctx->num_internal_frames) ++ ctx->sequence_offset++; + /* no frame was decoded, we still return remaining buffers */ + } else if (decoded_idx < 0 || decoded_idx >= ctx->num_internal_frames) { + v4l2_err(&dev->v4l2_dev, + "decoded frame index out of range: %d\n", decoded_idx); + } else { +- val = coda_read(dev, CODA_RET_DEC_PIC_FRAME_NUM) - 1; ++ val = coda_read(dev, CODA_RET_DEC_PIC_FRAME_NUM); ++ if (ctx->sequence_offset == -1) ++ ctx->sequence_offset = val; + val -= ctx->sequence_offset; + spin_lock_irqsave(&ctx->buffer_meta_lock, flags); + if (!list_empty(&ctx->buffer_meta_list)) { +@@ -2101,7 +2107,6 @@ irqreturn_t coda_irq_handler(int irq, void *data) + if (ctx == NULL) { + v4l2_err(&dev->v4l2_dev, + "Instance released before the end of transaction\n"); +- mutex_unlock(&dev->coda_mutex); + return IRQ_HANDLED; + } + +diff --git a/drivers/media/platform/davinci/vpss.c b/drivers/media/platform/davinci/vpss.c +index fce86f17dffc..c2c68988e38a 100644 +--- a/drivers/media/platform/davinci/vpss.c ++++ b/drivers/media/platform/davinci/vpss.c +@@ -523,6 +523,11 @@ static int __init vpss_init(void) + return -EBUSY; + + oper_cfg.vpss_regs_base2 = ioremap(VPSS_CLK_CTRL, 4); ++ if (unlikely(!oper_cfg.vpss_regs_base2)) { ++ release_mem_region(VPSS_CLK_CTRL, 4); ++ return -ENOMEM; ++ } ++ + writel(VPSS_CLK_CTRL_VENCCLKEN | + VPSS_CLK_CTRL_DACCLKEN, oper_cfg.vpss_regs_base2); + +diff --git a/drivers/media/platform/marvell-ccic/mcam-core.c b/drivers/media/platform/marvell-ccic/mcam-core.c +index af59bf4dca2d..a74bfb9afc8d 100644 +--- a/drivers/media/platform/marvell-ccic/mcam-core.c ++++ b/drivers/media/platform/marvell-ccic/mcam-core.c +@@ -209,7 +209,6 @@ struct mcam_vb_buffer { + struct list_head queue; + struct mcam_dma_desc *dma_desc; /* Descriptor virtual address */ + dma_addr_t dma_desc_pa; /* Descriptor physical address */ +- int dma_desc_nent; /* Number of mapped descriptors */ + }; + + static inline struct mcam_vb_buffer *vb_to_mvb(struct vb2_v4l2_buffer *vb) +@@ -616,9 +615,11 @@ static void mcam_dma_contig_done(struct 
mcam_camera *cam, int frame) + static void mcam_sg_next_buffer(struct mcam_camera *cam) + { + struct mcam_vb_buffer *buf; ++ struct sg_table *sg_table; + + buf = list_first_entry(&cam->buffers, struct mcam_vb_buffer, queue); + list_del_init(&buf->queue); ++ sg_table = vb2_dma_sg_plane_desc(&buf->vb_buf.vb2_buf, 0); + /* + * Very Bad Not Good Things happen if you don't clear + * C1_DESC_ENA before making any descriptor changes. +@@ -626,7 +627,7 @@ static void mcam_sg_next_buffer(struct mcam_camera *cam) + mcam_reg_clear_bit(cam, REG_CTRL1, C1_DESC_ENA); + mcam_reg_write(cam, REG_DMA_DESC_Y, buf->dma_desc_pa); + mcam_reg_write(cam, REG_DESC_LEN_Y, +- buf->dma_desc_nent*sizeof(struct mcam_dma_desc)); ++ sg_table->nents * sizeof(struct mcam_dma_desc)); + mcam_reg_write(cam, REG_DESC_LEN_U, 0); + mcam_reg_write(cam, REG_DESC_LEN_V, 0); + mcam_reg_set_bit(cam, REG_CTRL1, C1_DESC_ENA); +diff --git a/drivers/media/radio/radio-raremono.c b/drivers/media/radio/radio-raremono.c +index bfb3a6d051ba..10958bac0ad9 100644 +--- a/drivers/media/radio/radio-raremono.c ++++ b/drivers/media/radio/radio-raremono.c +@@ -283,6 +283,14 @@ static int vidioc_g_frequency(struct file *file, void *priv, + return 0; + } + ++static void raremono_device_release(struct v4l2_device *v4l2_dev) ++{ ++ struct raremono_device *radio = to_raremono_dev(v4l2_dev); ++ ++ kfree(radio->buffer); ++ kfree(radio); ++} ++ + /* File system interface */ + static const struct v4l2_file_operations usb_raremono_fops = { + .owner = THIS_MODULE, +@@ -307,12 +315,14 @@ static int usb_raremono_probe(struct usb_interface *intf, + struct raremono_device *radio; + int retval = 0; + +- radio = devm_kzalloc(&intf->dev, sizeof(struct raremono_device), GFP_KERNEL); +- if (radio) +- radio->buffer = devm_kmalloc(&intf->dev, BUFFER_LENGTH, GFP_KERNEL); +- +- if (!radio || !radio->buffer) ++ radio = kzalloc(sizeof(*radio), GFP_KERNEL); ++ if (!radio) ++ return -ENOMEM; ++ radio->buffer = kmalloc(BUFFER_LENGTH, GFP_KERNEL); ++ if (!radio->buffer) { ++ kfree(radio); + return -ENOMEM; ++ } + + radio->usbdev = interface_to_usbdev(intf); + radio->intf = intf; +@@ -336,7 +346,8 @@ static int usb_raremono_probe(struct usb_interface *intf, + if (retval != 3 || + (get_unaligned_be16(&radio->buffer[1]) & 0xfff) == 0x0242) { + dev_info(&intf->dev, "this is not Thanko's Raremono.\n"); +- return -ENODEV; ++ retval = -ENODEV; ++ goto free_mem; + } + + dev_info(&intf->dev, "Thanko's Raremono connected: (%04X:%04X)\n", +@@ -345,7 +356,7 @@ static int usb_raremono_probe(struct usb_interface *intf, + retval = v4l2_device_register(&intf->dev, &radio->v4l2_dev); + if (retval < 0) { + dev_err(&intf->dev, "couldn't register v4l2_device\n"); +- return retval; ++ goto free_mem; + } + + mutex_init(&radio->lock); +@@ -357,6 +368,7 @@ static int usb_raremono_probe(struct usb_interface *intf, + radio->vdev.ioctl_ops = &usb_raremono_ioctl_ops; + radio->vdev.lock = &radio->lock; + radio->vdev.release = video_device_release_empty; ++ radio->v4l2_dev.release = raremono_device_release; + + usb_set_intfdata(intf, &radio->v4l2_dev); + +@@ -372,6 +384,10 @@ static int usb_raremono_probe(struct usb_interface *intf, + } + dev_err(&intf->dev, "could not register video device\n"); + v4l2_device_unregister(&radio->v4l2_dev); ++ ++free_mem: ++ kfree(radio->buffer); ++ kfree(radio); + return retval; + } + +diff --git a/drivers/media/radio/wl128x/fmdrv_v4l2.c b/drivers/media/radio/wl128x/fmdrv_v4l2.c +index fb42f0fd0c1f..add26eac1677 100644 +--- a/drivers/media/radio/wl128x/fmdrv_v4l2.c ++++ 
b/drivers/media/radio/wl128x/fmdrv_v4l2.c +@@ -553,6 +553,7 @@ int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr) + + /* Register with V4L2 subsystem as RADIO device */ + if (video_register_device(&gradio_dev, VFL_TYPE_RADIO, radio_nr)) { ++ v4l2_device_unregister(&fmdev->v4l2_dev); + fmerr("Could not register video device\n"); + return -ENOMEM; + } +@@ -566,6 +567,8 @@ int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr) + if (ret < 0) { + fmerr("(fmdev): Can't init ctrl handler\n"); + v4l2_ctrl_handler_free(&fmdev->ctrl_handler); ++ video_unregister_device(fmdev->radio_dev); ++ v4l2_device_unregister(&fmdev->v4l2_dev); + return -EBUSY; + } + +diff --git a/drivers/media/usb/au0828/au0828-core.c b/drivers/media/usb/au0828/au0828-core.c +index bf53553d2624..38e73ee5c8fb 100644 +--- a/drivers/media/usb/au0828/au0828-core.c ++++ b/drivers/media/usb/au0828/au0828-core.c +@@ -630,6 +630,12 @@ static int au0828_usb_probe(struct usb_interface *interface, + /* Setup */ + au0828_card_setup(dev); + ++ /* ++ * Store the pointer to the au0828_dev so it can be accessed in ++ * au0828_usb_disconnect ++ */ ++ usb_set_intfdata(interface, dev); ++ + /* Analog TV */ + retval = au0828_analog_register(dev, interface); + if (retval) { +@@ -647,12 +653,6 @@ static int au0828_usb_probe(struct usb_interface *interface, + /* Remote controller */ + au0828_rc_register(dev); + +- /* +- * Store the pointer to the au0828_dev so it can be accessed in +- * au0828_usb_disconnect +- */ +- usb_set_intfdata(interface, dev); +- + pr_info("Registered device AU0828 [%s]\n", + dev->board.name == NULL ? "Unset" : dev->board.name); + +diff --git a/drivers/media/usb/cpia2/cpia2_usb.c b/drivers/media/usb/cpia2/cpia2_usb.c +index e9100a235831..21e5454d260a 100644 +--- a/drivers/media/usb/cpia2/cpia2_usb.c ++++ b/drivers/media/usb/cpia2/cpia2_usb.c +@@ -909,7 +909,6 @@ static void cpia2_usb_disconnect(struct usb_interface *intf) + cpia2_unregister_camera(cam); + v4l2_device_disconnect(&cam->v4l2_dev); + mutex_unlock(&cam->v4l2_lock); +- v4l2_device_put(&cam->v4l2_dev); + + if(cam->buffers) { + DBG("Wakeup waiting processes\n"); +@@ -921,6 +920,8 @@ static void cpia2_usb_disconnect(struct usb_interface *intf) + DBG("Releasing interface\n"); + usb_driver_release_interface(&cpia2_driver, intf); + ++ v4l2_device_put(&cam->v4l2_dev); ++ + LOG("CPiA2 camera disconnected.\n"); + } + +diff --git a/drivers/media/usb/dvb-usb/dvb-usb-init.c b/drivers/media/usb/dvb-usb/dvb-usb-init.c +index 84308569e7dc..b3413404f91a 100644 +--- a/drivers/media/usb/dvb-usb/dvb-usb-init.c ++++ b/drivers/media/usb/dvb-usb/dvb-usb-init.c +@@ -287,12 +287,15 @@ EXPORT_SYMBOL(dvb_usb_device_init); + void dvb_usb_device_exit(struct usb_interface *intf) + { + struct dvb_usb_device *d = usb_get_intfdata(intf); +- const char *name = "generic DVB-USB module"; ++ const char *default_name = "generic DVB-USB module"; ++ char name[40]; + + usb_set_intfdata(intf, NULL); + if (d != NULL && d->desc != NULL) { +- name = d->desc->name; ++ strscpy(name, d->desc->name, sizeof(name)); + dvb_usb_exit(d); ++ } else { ++ strscpy(name, default_name, sizeof(name)); + } + info("%s successfully deinitialized and disconnected.", name); + +diff --git a/drivers/media/v4l2-core/v4l2-ctrls.c b/drivers/media/v4l2-core/v4l2-ctrls.c +index c56d649fa7da..b3d8b9592f8a 100644 +--- a/drivers/media/v4l2-core/v4l2-ctrls.c ++++ b/drivers/media/v4l2-core/v4l2-ctrls.c +@@ -2103,16 +2103,15 @@ struct v4l2_ctrl *v4l2_ctrl_new_custom(struct v4l2_ctrl_handler *hdl, + 
v4l2_ctrl_fill(cfg->id, &name, &type, &min, &max, &step, + &def, &flags); + +- is_menu = (cfg->type == V4L2_CTRL_TYPE_MENU || +- cfg->type == V4L2_CTRL_TYPE_INTEGER_MENU); ++ is_menu = (type == V4L2_CTRL_TYPE_MENU || ++ type == V4L2_CTRL_TYPE_INTEGER_MENU); + if (is_menu) + WARN_ON(step); + else + WARN_ON(cfg->menu_skip_mask); +- if (cfg->type == V4L2_CTRL_TYPE_MENU && qmenu == NULL) ++ if (type == V4L2_CTRL_TYPE_MENU && !qmenu) { + qmenu = v4l2_ctrl_get_menu(cfg->id); +- else if (cfg->type == V4L2_CTRL_TYPE_INTEGER_MENU && +- qmenu_int == NULL) { ++ } else if (type == V4L2_CTRL_TYPE_INTEGER_MENU && !qmenu_int) { + handler_set_err(hdl, -EINVAL); + return NULL; + } +diff --git a/drivers/memstick/core/memstick.c b/drivers/memstick/core/memstick.c +index 4d673a626db4..1041eb7a6167 100644 +--- a/drivers/memstick/core/memstick.c ++++ b/drivers/memstick/core/memstick.c +@@ -629,13 +629,18 @@ static int __init memstick_init(void) + return -ENOMEM; + + rc = bus_register(&memstick_bus_type); +- if (!rc) +- rc = class_register(&memstick_host_class); ++ if (rc) ++ goto error_destroy_workqueue; + +- if (!rc) +- return 0; ++ rc = class_register(&memstick_host_class); ++ if (rc) ++ goto error_bus_unregister; ++ ++ return 0; + ++error_bus_unregister: + bus_unregister(&memstick_bus_type); ++error_destroy_workqueue: + destroy_workqueue(workqueue); + + return rc; +diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c +index 41767f7239bb..0556a9749dbe 100644 +--- a/drivers/mfd/arizona-core.c ++++ b/drivers/mfd/arizona-core.c +@@ -1038,7 +1038,7 @@ int arizona_dev_init(struct arizona *arizona) + unsigned int reg, val, mask; + int (*apply_patch)(struct arizona *) = NULL; + const struct mfd_cell *subdevs = NULL; +- int n_subdevs, ret, i; ++ int n_subdevs = 0, ret, i; + + dev_set_drvdata(arizona->dev, arizona); + mutex_init(&arizona->clk_lock); +diff --git a/drivers/mfd/hi655x-pmic.c b/drivers/mfd/hi655x-pmic.c +index 11347a3e6d40..c311b869be38 100644 +--- a/drivers/mfd/hi655x-pmic.c ++++ b/drivers/mfd/hi655x-pmic.c +@@ -111,6 +111,8 @@ static int hi655x_pmic_probe(struct platform_device *pdev) + + pmic->regmap = devm_regmap_init_mmio_clk(dev, NULL, base, + &hi655x_regmap_config); ++ if (IS_ERR(pmic->regmap)) ++ return PTR_ERR(pmic->regmap); + + regmap_read(pmic->regmap, HI655X_BUS_ADDR(HI655X_VER_REG), &pmic->ver); + if ((pmic->ver < PMU_VER_START) || (pmic->ver > PMU_VER_END)) { +diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c +index c57e407020f1..5c8ed2150c8b 100644 +--- a/drivers/mfd/mfd-core.c ++++ b/drivers/mfd/mfd-core.c +@@ -179,6 +179,7 @@ static int mfd_add_device(struct device *parent, int id, + for_each_child_of_node(parent->of_node, np) { + if (of_device_is_compatible(np, cell->of_compatible)) { + pdev->dev.of_node = np; ++ pdev->dev.fwnode = &np->fwnode; + break; + } + } +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index fd01138c411e..5b116ec756b4 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -3788,8 +3788,8 @@ static u32 bond_rr_gen_slave_id(struct bonding *bond) + static int bond_xmit_roundrobin(struct sk_buff *skb, struct net_device *bond_dev) + { + struct bonding *bond = netdev_priv(bond_dev); +- struct iphdr *iph = ip_hdr(skb); + struct slave *slave; ++ int slave_cnt; + u32 slave_id; + + /* Start with the curr_active_slave that joined the bond as the +@@ -3798,23 +3798,32 @@ static int bond_xmit_roundrobin(struct sk_buff *skb, struct net_device *bond_dev + * send the join/membership 
reports. The curr_active_slave found + * will send all of this type of traffic. + */ +- if (iph->protocol == IPPROTO_IGMP && skb->protocol == htons(ETH_P_IP)) { +- slave = rcu_dereference(bond->curr_active_slave); +- if (slave) +- bond_dev_queue_xmit(bond, skb, slave->dev); +- else +- bond_xmit_slave_id(bond, skb, 0); +- } else { +- int slave_cnt = ACCESS_ONCE(bond->slave_cnt); ++ if (skb->protocol == htons(ETH_P_IP)) { ++ int noff = skb_network_offset(skb); ++ struct iphdr *iph; + +- if (likely(slave_cnt)) { +- slave_id = bond_rr_gen_slave_id(bond); +- bond_xmit_slave_id(bond, skb, slave_id % slave_cnt); +- } else { +- bond_tx_drop(bond_dev, skb); ++ if (unlikely(!pskb_may_pull(skb, noff + sizeof(*iph)))) ++ goto non_igmp; ++ ++ iph = ip_hdr(skb); ++ if (iph->protocol == IPPROTO_IGMP) { ++ slave = rcu_dereference(bond->curr_active_slave); ++ if (slave) ++ bond_dev_queue_xmit(bond, skb, slave->dev); ++ else ++ bond_xmit_slave_id(bond, skb, 0); ++ return NETDEV_TX_OK; + } + } + ++non_igmp: ++ slave_cnt = ACCESS_ONCE(bond->slave_cnt); ++ if (likely(slave_cnt)) { ++ slave_id = bond_rr_gen_slave_id(bond); ++ bond_xmit_slave_id(bond, skb, slave_id % slave_cnt); ++ } else { ++ bond_tx_drop(bond_dev, skb); ++ } + return NETDEV_TX_OK; + } + +diff --git a/drivers/net/caif/caif_hsi.c b/drivers/net/caif/caif_hsi.c +index ddabce759456..7f79a6cf5665 100644 +--- a/drivers/net/caif/caif_hsi.c ++++ b/drivers/net/caif/caif_hsi.c +@@ -1464,7 +1464,7 @@ static void __exit cfhsi_exit_module(void) + rtnl_lock(); + list_for_each_safe(list_node, n, &cfhsi_list) { + cfhsi = list_entry(list_node, struct cfhsi, list); +- unregister_netdev(cfhsi->ndev); ++ unregister_netdevice(cfhsi->ndev); + } + rtnl_unlock(); + } +diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c +index 2edd193c96ab..f157b81551b7 100644 +--- a/drivers/net/dsa/mv88e6xxx/chip.c ++++ b/drivers/net/dsa/mv88e6xxx/chip.c +@@ -3846,6 +3846,8 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev) + mv88e6xxx_mdio_unregister(chip); + return err; + } ++ if (chip->reset) ++ usleep_range(1000, 2000); + + return 0; + } +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +index 2cd1dcd77559..6167bb0c71ed 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c +@@ -286,6 +286,9 @@ int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata) + hw_cons = le16_to_cpu(*txdata->tx_cons_sb); + sw_cons = txdata->tx_pkt_cons; + ++ /* Ensure subsequent loads occur after hw_cons */ ++ smp_rmb(); ++ + while (sw_cons != hw_cons) { + u16 pkt_cons; + +@@ -3860,9 +3863,12 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev) + + if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) { + if (!(bp->flags & TX_TIMESTAMPING_EN)) { ++ bp->eth_stats.ptp_skip_tx_ts++; + BNX2X_ERR("Tx timestamping was not enabled, this packet will not be timestamped\n"); + } else if (bp->ptp_tx_skb) { +- BNX2X_ERR("The device supports only a single outstanding packet to timestamp, this packet will not be timestamped\n"); ++ bp->eth_stats.ptp_skip_tx_ts++; ++ dev_err_once(&bp->dev->dev, ++ "Device supports only a single outstanding packet to timestamp, this packet won't be timestamped\n"); + } else { + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; + /* schedule check for Tx timestamp */ +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c +index 
15a0850e6bde..b1992f464b3d 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c +@@ -182,7 +182,9 @@ static const struct { + { STATS_OFFSET32(driver_filtered_tx_pkt), + 4, false, "driver_filtered_tx_pkt" }, + { STATS_OFFSET32(eee_tx_lpi), +- 4, true, "Tx LPI entry count"} ++ 4, true, "Tx LPI entry count"}, ++ { STATS_OFFSET32(ptp_skip_tx_ts), ++ 4, false, "ptp_skipped_tx_tstamp" }, + }; + + #define BNX2X_NUM_STATS ARRAY_SIZE(bnx2x_stats_arr) +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +index eeeb4c5740bf..2ef6012c3dc5 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +@@ -15261,11 +15261,24 @@ static void bnx2x_ptp_task(struct work_struct *work) + u32 val_seq; + u64 timestamp, ns; + struct skb_shared_hwtstamps shhwtstamps; ++ bool bail = true; ++ int i; ++ ++ /* FW may take a while to complete timestamping; try a bit and if it's ++ * still not complete, may indicate an error state - bail out then. ++ */ ++ for (i = 0; i < 10; i++) { ++ /* Read Tx timestamp registers */ ++ val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID : ++ NIG_REG_P0_TLLH_PTP_BUF_SEQID); ++ if (val_seq & 0x10000) { ++ bail = false; ++ break; ++ } ++ msleep(1 << i); ++ } + +- /* Read Tx timestamp registers */ +- val_seq = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_SEQID : +- NIG_REG_P0_TLLH_PTP_BUF_SEQID); +- if (val_seq & 0x10000) { ++ if (!bail) { + /* There is a valid timestamp value */ + timestamp = REG_RD(bp, port ? NIG_REG_P1_TLLH_PTP_BUF_TS_MSB : + NIG_REG_P0_TLLH_PTP_BUF_TS_MSB); +@@ -15280,16 +15293,18 @@ static void bnx2x_ptp_task(struct work_struct *work) + memset(&shhwtstamps, 0, sizeof(shhwtstamps)); + shhwtstamps.hwtstamp = ns_to_ktime(ns); + skb_tstamp_tx(bp->ptp_tx_skb, &shhwtstamps); +- dev_kfree_skb_any(bp->ptp_tx_skb); +- bp->ptp_tx_skb = NULL; + + DP(BNX2X_MSG_PTP, "Tx timestamp, timestamp cycles = %llu, ns = %llu\n", + timestamp, ns); + } else { +- DP(BNX2X_MSG_PTP, "There is no valid Tx timestamp yet\n"); +- /* Reschedule to keep checking for a valid timestamp value */ +- schedule_work(&bp->ptp_task); ++ DP(BNX2X_MSG_PTP, ++ "Tx timestamp is not recorded (register read=%u)\n", ++ val_seq); ++ bp->eth_stats.ptp_skip_tx_ts++; + } ++ ++ dev_kfree_skb_any(bp->ptp_tx_skb); ++ bp->ptp_tx_skb = NULL; + } + + void bnx2x_set_rx_ts(struct bnx2x *bp, struct sk_buff *skb) +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h +index b2644ed13d06..d55e63692cf3 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h +@@ -207,6 +207,9 @@ struct bnx2x_eth_stats { + u32 driver_filtered_tx_pkt; + /* src: Clear-on-Read register; Will not survive PMF Migration */ + u32 eee_tx_lpi; ++ ++ /* PTP */ ++ u32 ptp_skip_tx_ts; + }; + + struct bnx2x_eth_q_stats { +diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +index 3480b3078775..1bb923e3a2bc 100644 +--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c ++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c +@@ -3002,39 +3002,42 @@ static void bcmgenet_timeout(struct net_device *dev) + netif_tx_wake_all_queues(dev); + } + +-#define MAX_MC_COUNT 16 ++#define MAX_MDF_FILTER 17 + + static inline void bcmgenet_set_mdf_addr(struct bcmgenet_priv *priv, + unsigned char *addr, +- int *i, 
+- int *mc) ++ int *i) + { +- u32 reg; +- + bcmgenet_umac_writel(priv, addr[0] << 8 | addr[1], + UMAC_MDF_ADDR + (*i * 4)); + bcmgenet_umac_writel(priv, addr[2] << 24 | addr[3] << 16 | + addr[4] << 8 | addr[5], + UMAC_MDF_ADDR + ((*i + 1) * 4)); +- reg = bcmgenet_umac_readl(priv, UMAC_MDF_CTRL); +- reg |= (1 << (MAX_MC_COUNT - *mc)); +- bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL); + *i += 2; +- (*mc)++; + } + + static void bcmgenet_set_rx_mode(struct net_device *dev) + { + struct bcmgenet_priv *priv = netdev_priv(dev); + struct netdev_hw_addr *ha; +- int i, mc; ++ int i, nfilter; + u32 reg; + + netif_dbg(priv, hw, dev, "%s: %08X\n", __func__, dev->flags); + +- /* Promiscuous mode */ ++ /* Number of filters needed */ ++ nfilter = netdev_uc_count(dev) + netdev_mc_count(dev) + 2; ++ ++ /* ++ * Turn on promicuous mode for three scenarios ++ * 1. IFF_PROMISC flag is set ++ * 2. IFF_ALLMULTI flag is set ++ * 3. The number of filters needed exceeds the number filters ++ * supported by the hardware. ++ */ + reg = bcmgenet_umac_readl(priv, UMAC_CMD); +- if (dev->flags & IFF_PROMISC) { ++ if ((dev->flags & (IFF_PROMISC | IFF_ALLMULTI)) || ++ (nfilter > MAX_MDF_FILTER)) { + reg |= CMD_PROMISC; + bcmgenet_umac_writel(priv, reg, UMAC_CMD); + bcmgenet_umac_writel(priv, 0, UMAC_MDF_CTRL); +@@ -3044,32 +3047,24 @@ static void bcmgenet_set_rx_mode(struct net_device *dev) + bcmgenet_umac_writel(priv, reg, UMAC_CMD); + } + +- /* UniMac doesn't support ALLMULTI */ +- if (dev->flags & IFF_ALLMULTI) { +- netdev_warn(dev, "ALLMULTI is not supported\n"); +- return; +- } +- + /* update MDF filter */ + i = 0; +- mc = 0; + /* Broadcast */ +- bcmgenet_set_mdf_addr(priv, dev->broadcast, &i, &mc); ++ bcmgenet_set_mdf_addr(priv, dev->broadcast, &i); + /* my own address.*/ +- bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i, &mc); +- /* Unicast list*/ +- if (netdev_uc_count(dev) > (MAX_MC_COUNT - mc)) +- return; ++ bcmgenet_set_mdf_addr(priv, dev->dev_addr, &i); + +- if (!netdev_uc_empty(dev)) +- netdev_for_each_uc_addr(ha, dev) +- bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc); +- /* Multicast */ +- if (netdev_mc_empty(dev) || netdev_mc_count(dev) >= (MAX_MC_COUNT - mc)) +- return; ++ /* Unicast */ ++ netdev_for_each_uc_addr(ha, dev) ++ bcmgenet_set_mdf_addr(priv, ha->addr, &i); + ++ /* Multicast */ + netdev_for_each_mc_addr(ha, dev) +- bcmgenet_set_mdf_addr(priv, ha->addr, &i, &mc); ++ bcmgenet_set_mdf_addr(priv, ha->addr, &i); ++ ++ /* Enable filters */ ++ reg = GENMASK(MAX_MDF_FILTER - 1, MAX_MDF_FILTER - nfilter); ++ bcmgenet_umac_writel(priv, reg, UMAC_MDF_CTRL); + } + + /* Set the hardware MAC address. 
*/ +diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c +index 1eb34109b207..92ea760c4822 100644 +--- a/drivers/net/ethernet/freescale/fec_main.c ++++ b/drivers/net/ethernet/freescale/fec_main.c +@@ -1685,10 +1685,10 @@ static void fec_get_mac(struct net_device *ndev) + */ + if (!is_valid_ether_addr(iap)) { + /* Report it and use a random ethernet address instead */ +- netdev_err(ndev, "Invalid MAC address: %pM\n", iap); ++ dev_err(&fep->pdev->dev, "Invalid MAC address: %pM\n", iap); + eth_hw_addr_random(ndev); +- netdev_info(ndev, "Using random MAC address: %pM\n", +- ndev->dev_addr); ++ dev_info(&fep->pdev->dev, "Using random MAC address: %pM\n", ++ ndev->dev_addr); + return; + } + +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c +index a137e060c185..bbc23e88de89 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c +@@ -3192,7 +3192,8 @@ static int ixgbe_get_module_info(struct net_device *dev, + page_swap = true; + } + +- if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap) { ++ if (sff8472_rev == IXGBE_SFF_SFF_8472_UNSUP || page_swap || ++ !(addr_mode & IXGBE_SFF_DDM_IMPLEMENTED)) { + /* We have a SFP, but it does not support SFF-8472 */ + modinfo->type = ETH_MODULE_SFF_8079; + modinfo->eeprom_len = ETH_MODULE_SFF_8079_LEN; +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h +index cc735ec3e045..25090b4880b3 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h +@@ -70,6 +70,7 @@ + #define IXGBE_SFF_SOFT_RS_SELECT_10G 0x8 + #define IXGBE_SFF_SOFT_RS_SELECT_1G 0x0 + #define IXGBE_SFF_ADDRESSING_MODE 0x4 ++#define IXGBE_SFF_DDM_IMPLEMENTED 0x40 + #define IXGBE_SFF_QSFP_DA_ACTIVE_CABLE 0x1 + #define IXGBE_SFF_QSFP_DA_PASSIVE_CABLE 0x8 + #define IXGBE_SFF_QSFP_CONNECTOR_NOT_SEPARABLE 0x23 +diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c +index 4ac023a37936..59dbecd19c93 100644 +--- a/drivers/net/ethernet/marvell/sky2.c ++++ b/drivers/net/ethernet/marvell/sky2.c +@@ -4939,6 +4939,13 @@ static const struct dmi_system_id msi_blacklist[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "P-79"), + }, + }, ++ { ++ .ident = "ASUS P6T", ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."), ++ DMI_MATCH(DMI_BOARD_NAME, "P6T"), ++ }, ++ }, + {} + }; + +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +index 7d19029e2564..093e58e94075 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c +@@ -213,6 +213,12 @@ static void dwmac1000_set_filter(struct mac_device_info *hw, + GMAC_ADDR_LOW(reg)); + reg++; + } ++ ++ while (reg <= perfect_addr_number) { ++ writel(0, ioaddr + GMAC_ADDR_HIGH(reg)); ++ writel(0, ioaddr + GMAC_ADDR_LOW(reg)); ++ reg++; ++ } + } + + #ifdef FRAME_FILTER_DEBUG +diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +index 51019b794be5..f46f2bfc2cc0 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c ++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c +@@ -173,14 +173,20 @@ static void dwmac4_set_filter(struct mac_device_info *hw, + * are required + */ + value |= GMAC_PACKET_FILTER_PR; +- } else if (!netdev_uc_empty(dev)) { +- int reg 
= 1; ++ } else { + struct netdev_hw_addr *ha; ++ int reg = 1; + + netdev_for_each_uc_addr(ha, dev) { + dwmac4_set_umac_addr(hw, ha->addr, reg); + reg++; + } ++ ++ while (reg <= GMAC_MAX_PERFECT_ADDRESSES) { ++ writel(0, ioaddr + GMAC_ADDR_HIGH(reg)); ++ writel(0, ioaddr + GMAC_ADDR_LOW(reg)); ++ reg++; ++ } + } + + writel(value, ioaddr + GMAC_PACKET_FILTER); +diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +index a8afc92cbfca..5f21ddff9e0f 100644 +--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c ++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c +@@ -612,6 +612,10 @@ static void axienet_start_xmit_done(struct net_device *ndev) + + ndev->stats.tx_packets += packets; + ndev->stats.tx_bytes += size; ++ ++ /* Matches barrier in axienet_start_xmit */ ++ smp_mb(); ++ + netif_wake_queue(ndev); + } + +@@ -666,9 +670,19 @@ static int axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev) + cur_p = &lp->tx_bd_v[lp->tx_bd_tail]; + + if (axienet_check_tx_bd_space(lp, num_frag)) { +- if (!netif_queue_stopped(ndev)) +- netif_stop_queue(ndev); +- return NETDEV_TX_BUSY; ++ if (netif_queue_stopped(ndev)) ++ return NETDEV_TX_BUSY; ++ ++ netif_stop_queue(ndev); ++ ++ /* Matches barrier in axienet_start_xmit_done */ ++ smp_mb(); ++ ++ /* Space might have just been freed - check again */ ++ if (axienet_check_tx_bd_space(lp, num_frag)) ++ return NETDEV_TX_BUSY; ++ ++ netif_wake_queue(ndev); + } + + if (skb->ip_summed == CHECKSUM_PARTIAL) { +diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c +index cb206e5526c4..7e1df403a37d 100644 +--- a/drivers/net/gtp.c ++++ b/drivers/net/gtp.c +@@ -952,7 +952,7 @@ static int ipv4_pdp_add(struct net_device *dev, struct genl_info *info) + + } + +- pctx = kmalloc(sizeof(struct pdp_ctx), GFP_KERNEL); ++ pctx = kmalloc(sizeof(*pctx), GFP_ATOMIC); + if (pctx == NULL) + return -ENOMEM; + +@@ -1358,9 +1358,9 @@ late_initcall(gtp_init); + + static void __exit gtp_fini(void) + { +- unregister_pernet_subsys(&gtp_net_ops); + genl_unregister_family(&gtp_genl_family); + rtnl_link_unregister(&gtp_link_ops); ++ unregister_pernet_subsys(&gtp_net_ops); + + pr_info("GTP module unloaded\n"); + } +diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c +index 653f0b185a68..d91f020a8491 100644 +--- a/drivers/net/macsec.c ++++ b/drivers/net/macsec.c +@@ -867,6 +867,7 @@ static void macsec_reset_skb(struct sk_buff *skb, struct net_device *dev) + + static void macsec_finalize_skb(struct sk_buff *skb, u8 icv_len, u8 hdr_len) + { ++ skb->ip_summed = CHECKSUM_NONE; + memmove(skb->data + hdr_len, skb->data, 2 * ETH_ALEN); + skb_pull(skb, hdr_len); + pskb_trim_unique(skb, skb->len - icv_len); +@@ -1105,10 +1106,9 @@ static rx_handler_result_t macsec_handle_frame(struct sk_buff **pskb) + } + + skb = skb_unshare(skb, GFP_ATOMIC); +- if (!skb) { +- *pskb = NULL; ++ *pskb = skb; ++ if (!skb) + return RX_HANDLER_CONSUMED; +- } + + pulled_sci = pskb_may_pull(skb, macsec_extra_len(true)); + if (!pulled_sci) { +diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c +index 5048a6df6a8e..5c2c72b1ef8b 100644 +--- a/drivers/net/phy/phy_device.c ++++ b/drivers/net/phy/phy_device.c +@@ -673,6 +673,9 @@ int phy_connect_direct(struct net_device *dev, struct phy_device *phydev, + { + int rc; + ++ if (!dev) ++ return -EINVAL; ++ + rc = phy_attach_direct(dev, phydev, phydev->dev_flags, interface); + if (rc) + return rc; +@@ -965,6 +968,9 @@ struct phy_device *phy_attach(struct net_device *dev, const char *bus_id, + struct 
device *d; + int rc; + ++ if (!dev) ++ return ERR_PTR(-EINVAL); ++ + /* Search the list of PHY devices on the mdio bus for the + * PHY with the requested name + */ +diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c +index 393fd3ed6b94..4b12b6da3fab 100644 +--- a/drivers/net/usb/asix_devices.c ++++ b/drivers/net/usb/asix_devices.c +@@ -237,7 +237,7 @@ static void asix_phy_reset(struct usbnet *dev, unsigned int reset_bits) + static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf) + { + int ret = 0; +- u8 buf[ETH_ALEN]; ++ u8 buf[ETH_ALEN] = {0}; + int i; + unsigned long gpio_bits = dev->driver_info->data; + +@@ -687,7 +687,7 @@ static int asix_resume(struct usb_interface *intf) + static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf) + { + int ret, i; +- u8 buf[ETH_ALEN], chipcode = 0; ++ u8 buf[ETH_ALEN] = {0}, chipcode = 0; + u32 phyid; + struct asix_common_private *priv; + +@@ -1064,7 +1064,7 @@ static const struct net_device_ops ax88178_netdev_ops = { + static int ax88178_bind(struct usbnet *dev, struct usb_interface *intf) + { + int ret; +- u8 buf[ETH_ALEN]; ++ u8 buf[ETH_ALEN] = {0}; + + usbnet_get_endpoints(dev,intf); + +diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c +index 42c9480acdc7..3b6e908d3164 100644 +--- a/drivers/net/vrf.c ++++ b/drivers/net/vrf.c +@@ -153,23 +153,29 @@ static int vrf_ip6_local_out(struct net *net, struct sock *sk, + static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb, + struct net_device *dev) + { +- const struct ipv6hdr *iph = ipv6_hdr(skb); ++ const struct ipv6hdr *iph; + struct net *net = dev_net(skb->dev); +- struct flowi6 fl6 = { +- /* needed to match OIF rule */ +- .flowi6_oif = dev->ifindex, +- .flowi6_iif = LOOPBACK_IFINDEX, +- .daddr = iph->daddr, +- .saddr = iph->saddr, +- .flowlabel = ip6_flowinfo(iph), +- .flowi6_mark = skb->mark, +- .flowi6_proto = iph->nexthdr, +- .flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF, +- }; ++ struct flowi6 fl6; + int ret = NET_XMIT_DROP; + struct dst_entry *dst; + struct dst_entry *dst_null = &net->ipv6.ip6_null_entry->dst; + ++ if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct ipv6hdr))) ++ goto err; ++ ++ iph = ipv6_hdr(skb); ++ ++ memset(&fl6, 0, sizeof(fl6)); ++ /* needed to match OIF rule */ ++ fl6.flowi6_oif = dev->ifindex; ++ fl6.flowi6_iif = LOOPBACK_IFINDEX; ++ fl6.daddr = iph->daddr; ++ fl6.saddr = iph->saddr; ++ fl6.flowlabel = ip6_flowinfo(iph); ++ fl6.flowi6_mark = skb->mark; ++ fl6.flowi6_proto = iph->nexthdr; ++ fl6.flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF; ++ + dst = ip6_route_output(net, NULL, &fl6); + if (dst == dst_null) + goto err; +@@ -257,21 +263,27 @@ static int vrf_ip_local_out(struct net *net, struct sock *sk, + static netdev_tx_t vrf_process_v4_outbound(struct sk_buff *skb, + struct net_device *vrf_dev) + { +- struct iphdr *ip4h = ip_hdr(skb); ++ struct iphdr *ip4h; + int ret = NET_XMIT_DROP; +- struct flowi4 fl4 = { +- /* needed to match OIF rule */ +- .flowi4_oif = vrf_dev->ifindex, +- .flowi4_iif = LOOPBACK_IFINDEX, +- .flowi4_tos = RT_TOS(ip4h->tos), +- .flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF, +- .flowi4_proto = ip4h->protocol, +- .daddr = ip4h->daddr, +- .saddr = ip4h->saddr, +- }; ++ struct flowi4 fl4; + struct net *net = dev_net(vrf_dev); + struct rtable *rt; + ++ if (!pskb_may_pull(skb, ETH_HLEN + sizeof(struct iphdr))) ++ goto err; ++ ++ ip4h = ip_hdr(skb); ++ ++ memset(&fl4, 0, sizeof(fl4)); ++ /* needed to match OIF rule */ ++ fl4.flowi4_oif = vrf_dev->ifindex; ++ fl4.flowi4_iif = LOOPBACK_IFINDEX; 
++ fl4.flowi4_tos = RT_TOS(ip4h->tos); ++ fl4.flowi4_flags = FLOWI_FLAG_ANYSRC | FLOWI_FLAG_SKIP_NH_OIF; ++ fl4.flowi4_proto = ip4h->protocol; ++ fl4.daddr = ip4h->daddr; ++ fl4.saddr = ip4h->saddr; ++ + rt = ip_route_output_flow(net, &fl4, NULL); + if (IS_ERR(rt)) + goto err; +diff --git a/drivers/net/wireless/ath/ath10k/hw.c b/drivers/net/wireless/ath/ath10k/hw.c +index 675e75d66db2..14dc6548701c 100644 +--- a/drivers/net/wireless/ath/ath10k/hw.c ++++ b/drivers/net/wireless/ath/ath10k/hw.c +@@ -157,7 +157,7 @@ const struct ath10k_hw_values qca6174_values = { + }; + + const struct ath10k_hw_values qca99x0_values = { +- .rtc_state_val_on = 5, ++ .rtc_state_val_on = 7, + .ce_count = 12, + .msi_assign_ce_max = 12, + .num_target_ce_config_wlan = 10, +diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c +index fb632a454fc2..1588fe8110d0 100644 +--- a/drivers/net/wireless/ath/ath10k/mac.c ++++ b/drivers/net/wireless/ath/ath10k/mac.c +@@ -1596,6 +1596,10 @@ static int ath10k_mac_setup_prb_tmpl(struct ath10k_vif *arvif) + if (arvif->vdev_type != WMI_VDEV_TYPE_AP) + return 0; + ++ /* For mesh, probe response and beacon share the same template */ ++ if (ieee80211_vif_is_mesh(vif)) ++ return 0; ++ + prb = ieee80211_proberesp_get(hw, vif); + if (!prb) { + ath10k_warn(ar, "failed to get probe resp template from mac80211\n"); +diff --git a/drivers/net/wireless/ath/ath6kl/wmi.c b/drivers/net/wireless/ath/ath6kl/wmi.c +index 3fd1cc98fd2f..55609fc4e50e 100644 +--- a/drivers/net/wireless/ath/ath6kl/wmi.c ++++ b/drivers/net/wireless/ath/ath6kl/wmi.c +@@ -1178,6 +1178,10 @@ static int ath6kl_wmi_pstream_timeout_event_rx(struct wmi *wmi, u8 *datap, + return -EINVAL; + + ev = (struct wmi_pstream_timeout_event *) datap; ++ if (ev->traffic_class >= WMM_NUM_AC) { ++ ath6kl_err("invalid traffic class: %d\n", ev->traffic_class); ++ return -EINVAL; ++ } + + /* + * When the pstream (fat pipe == AC) timesout, it means there were +@@ -1519,6 +1523,10 @@ static int ath6kl_wmi_cac_event_rx(struct wmi *wmi, u8 *datap, int len, + return -EINVAL; + + reply = (struct wmi_cac_event *) datap; ++ if (reply->ac >= WMM_NUM_AC) { ++ ath6kl_err("invalid AC: %d\n", reply->ac); ++ return -EINVAL; ++ } + + if ((reply->cac_indication == CAC_INDICATION_ADMISSION_RESP) && + (reply->status_code != IEEE80211_TSPEC_STATUS_ADMISS_ACCEPTED)) { +@@ -2635,7 +2643,7 @@ int ath6kl_wmi_delete_pstream_cmd(struct wmi *wmi, u8 if_idx, u8 traffic_class, + u16 active_tsids = 0; + int ret; + +- if (traffic_class > 3) { ++ if (traffic_class >= WMM_NUM_AC) { + ath6kl_err("invalid traffic class: %d\n", traffic_class); + return -EINVAL; + } +diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c +index 951bac2caf12..e7fca78cdd96 100644 +--- a/drivers/net/wireless/ath/ath9k/hw.c ++++ b/drivers/net/wireless/ath/ath9k/hw.c +@@ -250,8 +250,9 @@ void ath9k_hw_get_channel_centers(struct ath_hw *ah, + /* Chip Revisions */ + /******************/ + +-static void ath9k_hw_read_revisions(struct ath_hw *ah) ++static bool ath9k_hw_read_revisions(struct ath_hw *ah) + { ++ u32 srev; + u32 val; + + if (ah->get_mac_revision) +@@ -267,25 +268,33 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah) + val = REG_READ(ah, AR_SREV); + ah->hw_version.macRev = MS(val, AR_SREV_REVISION2); + } +- return; ++ return true; + case AR9300_DEVID_AR9340: + ah->hw_version.macVersion = AR_SREV_VERSION_9340; +- return; ++ return true; + case AR9300_DEVID_QCA955X: + ah->hw_version.macVersion = AR_SREV_VERSION_9550; +- 
return; ++ return true; + case AR9300_DEVID_AR953X: + ah->hw_version.macVersion = AR_SREV_VERSION_9531; +- return; ++ return true; + case AR9300_DEVID_QCA956X: + ah->hw_version.macVersion = AR_SREV_VERSION_9561; +- return; ++ return true; + } + +- val = REG_READ(ah, AR_SREV) & AR_SREV_ID; ++ srev = REG_READ(ah, AR_SREV); ++ ++ if (srev == -EIO) { ++ ath_err(ath9k_hw_common(ah), ++ "Failed to read SREV register"); ++ return false; ++ } ++ ++ val = srev & AR_SREV_ID; + + if (val == 0xFF) { +- val = REG_READ(ah, AR_SREV); ++ val = srev; + ah->hw_version.macVersion = + (val & AR_SREV_VERSION2) >> AR_SREV_TYPE2_S; + ah->hw_version.macRev = MS(val, AR_SREV_REVISION2); +@@ -304,6 +313,8 @@ static void ath9k_hw_read_revisions(struct ath_hw *ah) + if (ah->hw_version.macVersion == AR_SREV_VERSION_5416_PCIE) + ah->is_pciexpress = true; + } ++ ++ return true; + } + + /************************************/ +@@ -557,7 +568,10 @@ static int __ath9k_hw_init(struct ath_hw *ah) + struct ath_common *common = ath9k_hw_common(ah); + int r = 0; + +- ath9k_hw_read_revisions(ah); ++ if (!ath9k_hw_read_revisions(ah)) { ++ ath_err(common, "Could not read hardware revisions"); ++ return -EOPNOTSUPP; ++ } + + switch (ah->hw_version.macVersion) { + case AR_SREV_VERSION_5416_PCI: +diff --git a/drivers/net/wireless/ath/dfs_pattern_detector.c b/drivers/net/wireless/ath/dfs_pattern_detector.c +index 4100ffd42a43..78146607f16e 100644 +--- a/drivers/net/wireless/ath/dfs_pattern_detector.c ++++ b/drivers/net/wireless/ath/dfs_pattern_detector.c +@@ -111,7 +111,7 @@ static const struct radar_detector_specs jp_radar_ref_types[] = { + JP_PATTERN(0, 0, 1, 1428, 1428, 1, 18, 29, false), + JP_PATTERN(1, 2, 3, 3846, 3846, 1, 18, 29, false), + JP_PATTERN(2, 0, 1, 1388, 1388, 1, 18, 50, false), +- JP_PATTERN(3, 1, 2, 4000, 4000, 1, 18, 50, false), ++ JP_PATTERN(3, 0, 4, 4000, 4000, 1, 18, 50, false), + JP_PATTERN(4, 0, 5, 150, 230, 1, 23, 50, false), + JP_PATTERN(5, 6, 10, 200, 500, 1, 16, 50, false), + JP_PATTERN(6, 11, 20, 200, 500, 1, 12, 50, false), +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c +index bd7ff562d82d..1aa74b87599f 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c +@@ -551,6 +551,9 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb) + + memcpy(&info, skb->cb, sizeof(info)); + ++ if (WARN_ON_ONCE(skb->len > IEEE80211_MAX_DATA_LEN + hdrlen)) ++ return -1; ++ + if (WARN_ON_ONCE(info.flags & IEEE80211_TX_CTL_AMPDU)) + return -1; + +diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c +index 25f2a0aceaa2..a2ebe46bcfc5 100644 +--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c ++++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c +@@ -1901,10 +1901,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id) + return IRQ_NONE; + } + +- if (iwl_have_debug_level(IWL_DL_ISR)) +- IWL_DEBUG_ISR(trans, "ISR inta_fh 0x%08x, enabled 0x%08x\n", +- inta_fh, ++ if (iwl_have_debug_level(IWL_DL_ISR)) { ++ IWL_DEBUG_ISR(trans, ++ "ISR inta_fh 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n", ++ inta_fh, trans_pcie->fh_mask, + iwl_read32(trans, CSR_MSIX_FH_INT_MASK_AD)); ++ if (inta_fh & ~trans_pcie->fh_mask) ++ IWL_DEBUG_ISR(trans, ++ "We got a masked interrupt (0x%08x)\n", ++ inta_fh & ~trans_pcie->fh_mask); ++ } ++ ++ inta_fh &= trans_pcie->fh_mask; + + if ((trans_pcie->shared_vec_mask & IWL_SHARED_IRQ_NON_RX) && + inta_fh & MSIX_FH_INT_CAUSES_Q0) { 
+@@ -1943,11 +1951,18 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id) + } + + /* After checking FH register check HW register */ +- if (iwl_have_debug_level(IWL_DL_ISR)) ++ if (iwl_have_debug_level(IWL_DL_ISR)) { + IWL_DEBUG_ISR(trans, +- "ISR inta_hw 0x%08x, enabled 0x%08x\n", +- inta_hw, ++ "ISR inta_hw 0x%08x, enabled (sw) 0x%08x (hw) 0x%08x\n", ++ inta_hw, trans_pcie->hw_mask, + iwl_read32(trans, CSR_MSIX_HW_INT_MASK_AD)); ++ if (inta_hw & ~trans_pcie->hw_mask) ++ IWL_DEBUG_ISR(trans, ++ "We got a masked interrupt 0x%08x\n", ++ inta_hw & ~trans_pcie->hw_mask); ++ } ++ ++ inta_hw &= trans_pcie->hw_mask; + + /* Alive notification via Rx interrupt will do the real work */ + if (inta_hw & MSIX_HW_INT_CAUSES_REG_ALIVE) { +diff --git a/drivers/net/wireless/mediatek/mt7601u/dma.c b/drivers/net/wireless/mediatek/mt7601u/dma.c +index a8bc064bc14f..56cad16e70ca 100644 +--- a/drivers/net/wireless/mediatek/mt7601u/dma.c ++++ b/drivers/net/wireless/mediatek/mt7601u/dma.c +@@ -193,10 +193,23 @@ static void mt7601u_complete_rx(struct urb *urb) + struct mt7601u_rx_queue *q = &dev->rx_q; + unsigned long flags; + +- spin_lock_irqsave(&dev->rx_lock, flags); ++ /* do no schedule rx tasklet if urb has been unlinked ++ * or the device has been removed ++ */ ++ switch (urb->status) { ++ case -ECONNRESET: ++ case -ESHUTDOWN: ++ case -ENOENT: ++ return; ++ default: ++ dev_err_ratelimited(dev->dev, "rx urb failed: %d\n", ++ urb->status); ++ /* fall through */ ++ case 0: ++ break; ++ } + +- if (mt7601u_urb_has_error(urb)) +- dev_err(dev->dev, "Error: RX urb failed:%d\n", urb->status); ++ spin_lock_irqsave(&dev->rx_lock, flags); + if (WARN_ONCE(q->e[q->end].urb != urb, "RX urb mismatch")) + goto out; + +@@ -228,14 +241,25 @@ static void mt7601u_complete_tx(struct urb *urb) + struct sk_buff *skb; + unsigned long flags; + +- spin_lock_irqsave(&dev->tx_lock, flags); ++ switch (urb->status) { ++ case -ECONNRESET: ++ case -ESHUTDOWN: ++ case -ENOENT: ++ return; ++ default: ++ dev_err_ratelimited(dev->dev, "tx urb failed: %d\n", ++ urb->status); ++ /* fall through */ ++ case 0: ++ break; ++ } + +- if (mt7601u_urb_has_error(urb)) +- dev_err(dev->dev, "Error: TX urb failed:%d\n", urb->status); ++ spin_lock_irqsave(&dev->tx_lock, flags); + if (WARN_ONCE(q->e[q->start].urb != urb, "TX urb mismatch")) + goto out; + + skb = q->e[q->start].skb; ++ q->e[q->start].skb = NULL; + trace_mt_tx_dma_done(dev, skb); + + __skb_queue_tail(&dev->tx_skb_done, skb); +@@ -363,19 +387,9 @@ int mt7601u_dma_enqueue_tx(struct mt7601u_dev *dev, struct sk_buff *skb, + static void mt7601u_kill_rx(struct mt7601u_dev *dev) + { + int i; +- unsigned long flags; +- +- spin_lock_irqsave(&dev->rx_lock, flags); +- +- for (i = 0; i < dev->rx_q.entries; i++) { +- int next = dev->rx_q.end; + +- spin_unlock_irqrestore(&dev->rx_lock, flags); +- usb_poison_urb(dev->rx_q.e[next].urb); +- spin_lock_irqsave(&dev->rx_lock, flags); +- } +- +- spin_unlock_irqrestore(&dev->rx_lock, flags); ++ for (i = 0; i < dev->rx_q.entries; i++) ++ usb_poison_urb(dev->rx_q.e[i].urb); + } + + static int mt7601u_submit_rx_buf(struct mt7601u_dev *dev, +@@ -445,10 +459,10 @@ static void mt7601u_free_tx_queue(struct mt7601u_tx_queue *q) + { + int i; + +- WARN_ON(q->used); +- + for (i = 0; i < q->entries; i++) { + usb_poison_urb(q->e[i].urb); ++ if (q->e[i].skb) ++ mt7601u_tx_status(q->dev, q->e[i].skb); + usb_free_urb(q->e[i].urb); + } + } +diff --git a/drivers/net/wireless/mediatek/mt7601u/tx.c b/drivers/net/wireless/mediatek/mt7601u/tx.c +index 
ad77bec1ba0f..2cb1883c0d33 100644 +--- a/drivers/net/wireless/mediatek/mt7601u/tx.c ++++ b/drivers/net/wireless/mediatek/mt7601u/tx.c +@@ -117,9 +117,9 @@ void mt7601u_tx_status(struct mt7601u_dev *dev, struct sk_buff *skb) + info->status.rates[0].idx = -1; + info->flags |= IEEE80211_TX_STAT_ACK; + +- spin_lock(&dev->mac_lock); ++ spin_lock_bh(&dev->mac_lock); + ieee80211_tx_status(dev->hw, skb); +- spin_unlock(&dev->mac_lock); ++ spin_unlock_bh(&dev->mac_lock); + } + + static int mt7601u_skb_rooms(struct mt7601u_dev *dev, struct sk_buff *skb) +diff --git a/drivers/nvdimm/dax_devs.c b/drivers/nvdimm/dax_devs.c +index 45fa82cae87c..da504665b1c7 100644 +--- a/drivers/nvdimm/dax_devs.c ++++ b/drivers/nvdimm/dax_devs.c +@@ -118,7 +118,7 @@ int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns) + nvdimm_bus_unlock(&ndns->dev); + if (!dax_dev) + return -ENOMEM; +- pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); ++ pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); + nd_pfn->pfn_sb = pfn_sb; + rc = nd_pfn_validate(nd_pfn, DAX_SIG); + dev_dbg(dev, "%s: dax: %s\n", __func__, +diff --git a/drivers/nvdimm/pfn.h b/drivers/nvdimm/pfn.h +index dde9853453d3..e901e3a3b04c 100644 +--- a/drivers/nvdimm/pfn.h ++++ b/drivers/nvdimm/pfn.h +@@ -36,6 +36,7 @@ struct nd_pfn_sb { + __le32 end_trunc; + /* minor-version-2 record the base alignment of the mapping */ + __le32 align; ++ /* minor-version-3 guarantee the padding and flags are zero */ + u8 padding[4000]; + __le64 checksum; + }; +diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c +index ba9aa8475e6d..f40c9c626861 100644 +--- a/drivers/nvdimm/pfn_devs.c ++++ b/drivers/nvdimm/pfn_devs.c +@@ -349,6 +349,15 @@ struct device *nd_pfn_create(struct nd_region *nd_region) + return dev; + } + ++/** ++ * nd_pfn_validate - read and validate info-block ++ * @nd_pfn: fsdax namespace runtime state / properties ++ * @sig: 'devdax' or 'fsdax' signature ++ * ++ * Upon return the info-block buffer contents (->pfn_sb) are ++ * indeterminate when validation fails, and a coherent info-block ++ * otherwise. 
++ */ + int nd_pfn_validate(struct nd_pfn *nd_pfn, const char *sig) + { + u64 checksum, offset; +@@ -486,7 +495,7 @@ int nd_pfn_probe(struct device *dev, struct nd_namespace_common *ndns) + nvdimm_bus_unlock(&ndns->dev); + if (!pfn_dev) + return -ENOMEM; +- pfn_sb = devm_kzalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); ++ pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL); + nd_pfn = to_nd_pfn(pfn_dev); + nd_pfn->pfn_sb = pfn_sb; + rc = nd_pfn_validate(nd_pfn, PFN_SIG); +@@ -584,7 +593,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) + u64 checksum; + int rc; + +- pfn_sb = devm_kzalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL); ++ pfn_sb = devm_kmalloc(&nd_pfn->dev, sizeof(*pfn_sb), GFP_KERNEL); + if (!pfn_sb) + return -ENOMEM; + +@@ -593,11 +602,14 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) + sig = DAX_SIG; + else + sig = PFN_SIG; ++ + rc = nd_pfn_validate(nd_pfn, sig); + if (rc != -ENODEV) + return rc; + + /* no info block, do init */; ++ memset(pfn_sb, 0, sizeof(*pfn_sb)); ++ + nd_region = to_nd_region(nd_pfn->dev.parent); + if (nd_region->ro) { + dev_info(&nd_pfn->dev, +@@ -673,7 +685,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) + memcpy(pfn_sb->uuid, nd_pfn->uuid, 16); + memcpy(pfn_sb->parent_uuid, nd_dev_to_uuid(&ndns->dev), 16); + pfn_sb->version_major = cpu_to_le16(1); +- pfn_sb->version_minor = cpu_to_le16(2); ++ pfn_sb->version_minor = cpu_to_le16(3); + pfn_sb->start_pad = cpu_to_le32(start_pad); + pfn_sb->end_trunc = cpu_to_le32(end_trunc); + pfn_sb->align = cpu_to_le32(nd_pfn->align); +diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c +index 200b41576526..a597619f25d6 100644 +--- a/drivers/pci/host/pci-hyperv.c ++++ b/drivers/pci/host/pci-hyperv.c +@@ -1575,6 +1575,7 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus, + static void hv_eject_device_work(struct work_struct *work) + { + struct pci_eject_response *ejct_pkt; ++ struct hv_pcibus_device *hbus; + struct hv_pci_dev *hpdev; + struct pci_dev *pdev; + unsigned long flags; +@@ -1585,6 +1586,7 @@ static void hv_eject_device_work(struct work_struct *work) + } ctxt; + + hpdev = container_of(work, struct hv_pci_dev, wrk); ++ hbus = hpdev->hbus; + + if (hpdev->state != hv_pcichild_ejecting) { + put_pcichild(hpdev, hv_pcidev_ref_pnp); +@@ -1598,8 +1600,7 @@ static void hv_eject_device_work(struct work_struct *work) + * because hbus->pci_bus may not exist yet. 
+ */ + wslot = wslot_to_devfn(hpdev->desc.win_slot.slot); +- pdev = pci_get_domain_bus_and_slot(hpdev->hbus->sysdata.domain, 0, +- wslot); ++ pdev = pci_get_domain_bus_and_slot(hbus->sysdata.domain, 0, wslot); + if (pdev) { + pci_lock_rescan_remove(); + pci_stop_and_remove_bus_device(pdev); +@@ -1607,22 +1608,24 @@ static void hv_eject_device_work(struct work_struct *work) + pci_unlock_rescan_remove(); + } + ++ spin_lock_irqsave(&hbus->device_list_lock, flags); ++ list_del(&hpdev->list_entry); ++ spin_unlock_irqrestore(&hbus->device_list_lock, flags); ++ + memset(&ctxt, 0, sizeof(ctxt)); + ejct_pkt = (struct pci_eject_response *)&ctxt.pkt.message; + ejct_pkt->message_type.type = PCI_EJECTION_COMPLETE; + ejct_pkt->wslot.slot = hpdev->desc.win_slot.slot; +- vmbus_sendpacket(hpdev->hbus->hdev->channel, ejct_pkt, ++ vmbus_sendpacket(hbus->hdev->channel, ejct_pkt, + sizeof(*ejct_pkt), (unsigned long)&ctxt.pkt, + VM_PKT_DATA_INBAND, 0); + +- spin_lock_irqsave(&hpdev->hbus->device_list_lock, flags); +- list_del(&hpdev->list_entry); +- spin_unlock_irqrestore(&hpdev->hbus->device_list_lock, flags); +- + put_pcichild(hpdev, hv_pcidev_ref_childlist); + put_pcichild(hpdev, hv_pcidev_ref_initial); + put_pcichild(hpdev, hv_pcidev_ref_pnp); +- put_hvpcibus(hpdev->hbus); ++ ++ /* hpdev has been freed. Do not use it any more. */ ++ put_hvpcibus(hbus); + } + + /** +diff --git a/drivers/pci/host/pcie-xilinx-nwl.c b/drivers/pci/host/pcie-xilinx-nwl.c +index 94fdd295aae2..3bba87af0b6b 100644 +--- a/drivers/pci/host/pcie-xilinx-nwl.c ++++ b/drivers/pci/host/pcie-xilinx-nwl.c +@@ -456,15 +456,13 @@ static int nwl_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, + int i; + + mutex_lock(&msi->lock); +- bit = bitmap_find_next_zero_area(msi->bitmap, INT_PCI_MSI_NR, 0, +- nr_irqs, 0); +- if (bit >= INT_PCI_MSI_NR) { ++ bit = bitmap_find_free_region(msi->bitmap, INT_PCI_MSI_NR, ++ get_count_order(nr_irqs)); ++ if (bit < 0) { + mutex_unlock(&msi->lock); + return -ENOSPC; + } + +- bitmap_set(msi->bitmap, bit, nr_irqs); +- + for (i = 0; i < nr_irqs; i++) { + irq_domain_set_info(domain, virq + i, bit + i, &nwl_irq_chip, + domain->host_data, handle_simple_irq, +@@ -482,7 +480,8 @@ static void nwl_irq_domain_free(struct irq_domain *domain, unsigned int virq, + struct nwl_msi *msi = &pcie->msi; + + mutex_lock(&msi->lock); +- bitmap_clear(msi->bitmap, data->hwirq, nr_irqs); ++ bitmap_release_region(msi->bitmap, data->hwirq, ++ get_count_order(nr_irqs)); + mutex_unlock(&msi->lock); + } + +diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c +index e5d8e2e2bd30..717540161223 100644 +--- a/drivers/pci/pci-sysfs.c ++++ b/drivers/pci/pci-sysfs.c +@@ -371,7 +371,7 @@ static ssize_t remove_store(struct device *dev, struct device_attribute *attr, + pci_stop_and_remove_bus_device_locked(to_pci_dev(dev)); + return count; + } +-static struct device_attribute dev_remove_attr = __ATTR(remove, ++static struct device_attribute dev_remove_attr = __ATTR_IGNORE_LOCKDEP(remove, + (S_IWUSR|S_IWGRP), + NULL, remove_store); + +diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c +index ccbbd4cde0f1..a07533702d26 100644 +--- a/drivers/pci/pci.c ++++ b/drivers/pci/pci.c +@@ -1786,6 +1786,13 @@ static void pci_pme_list_scan(struct work_struct *work) + */ + if (bridge && bridge->current_state != PCI_D0) + continue; ++ /* ++ * If the device is in D3cold it should not be ++ * polled either. 
++ */ ++ if (pme_dev->dev->current_state == PCI_D3cold) ++ continue; ++ + pci_pme_wakeup(pme_dev->dev, NULL); + } else { + list_del(&pme_dev->list); +diff --git a/drivers/phy/phy-rcar-gen2.c b/drivers/phy/phy-rcar-gen2.c +index 97d4dd6ea924..aa02b19b7e0e 100644 +--- a/drivers/phy/phy-rcar-gen2.c ++++ b/drivers/phy/phy-rcar-gen2.c +@@ -288,6 +288,7 @@ static int rcar_gen2_phy_probe(struct platform_device *pdev) + error = of_property_read_u32(np, "reg", &channel_num); + if (error || channel_num > 2) { + dev_err(dev, "Invalid \"reg\" property\n"); ++ of_node_put(np); + return error; + } + channel->select_mask = select_mask[channel_num]; +@@ -303,6 +304,7 @@ static int rcar_gen2_phy_probe(struct platform_device *pdev) + &rcar_gen2_phy_ops); + if (IS_ERR(phy->phy)) { + dev_err(dev, "Failed to create PHY\n"); ++ of_node_put(np); + return PTR_ERR(phy->phy); + } + phy_set_drvdata(phy->phy, phy); +diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c +index f826793e972c..417cd3bd7e0c 100644 +--- a/drivers/pinctrl/pinctrl-rockchip.c ++++ b/drivers/pinctrl/pinctrl-rockchip.c +@@ -2208,6 +2208,7 @@ static int rockchip_get_bank_data(struct rockchip_pin_bank *bank, + base, + &rockchip_regmap_config); + } ++ of_node_put(node); + } + + bank->irq = irq_of_parse_and_map(bank->of_node, 0); +diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c +index 2f07cd615665..76ae38450aea 100644 +--- a/drivers/pps/pps.c ++++ b/drivers/pps/pps.c +@@ -129,6 +129,14 @@ static long pps_cdev_ioctl(struct file *file, + pps->params.mode |= PPS_CANWAIT; + pps->params.api_version = PPS_API_VERS; + ++ /* ++ * Clear unused fields of pps_kparams to avoid leaking ++ * uninitialized data of the PPS_SETPARAMS caller via ++ * PPS_GETPARAMS ++ */ ++ pps->params.assert_off_tu.flags = 0; ++ pps->params.clear_off_tu.flags = 0; ++ + spin_unlock_irq(&pps->lock); + + break; +diff --git a/drivers/regulator/s2mps11.c b/drivers/regulator/s2mps11.c +index 1fe1c18cc27b..179f3c61a321 100644 +--- a/drivers/regulator/s2mps11.c ++++ b/drivers/regulator/s2mps11.c +@@ -386,8 +386,8 @@ static const struct regulator_desc s2mps11_regulators[] = { + regulator_desc_s2mps11_buck1_4(4), + regulator_desc_s2mps11_buck5, + regulator_desc_s2mps11_buck67810(6, MIN_600_MV, STEP_6_25_MV), +- regulator_desc_s2mps11_buck67810(7, MIN_600_MV, STEP_12_5_MV), +- regulator_desc_s2mps11_buck67810(8, MIN_600_MV, STEP_12_5_MV), ++ regulator_desc_s2mps11_buck67810(7, MIN_750_MV, STEP_12_5_MV), ++ regulator_desc_s2mps11_buck67810(8, MIN_750_MV, STEP_12_5_MV), + regulator_desc_s2mps11_buck9, + regulator_desc_s2mps11_buck67810(10, MIN_750_MV, STEP_12_5_MV), + }; +diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c +index 18ab84e9c6b2..58cd0e0c9680 100644 +--- a/drivers/s390/cio/qdio_main.c ++++ b/drivers/s390/cio/qdio_main.c +@@ -758,6 +758,7 @@ static int get_outbound_buffer_frontier(struct qdio_q *q) + + switch (state) { + case SLSB_P_OUTPUT_EMPTY: ++ case SLSB_P_OUTPUT_PENDING: + /* the adapter got it */ + DBF_DEV_EVENT(DBF_INFO, q->irq_ptr, + "out empty:%1d %02x", q->nr, count); +diff --git a/drivers/scsi/NCR5380.c b/drivers/scsi/NCR5380.c +index 790babc5ef66..3cfab8868c98 100644 +--- a/drivers/scsi/NCR5380.c ++++ b/drivers/scsi/NCR5380.c +@@ -813,6 +813,8 @@ static void NCR5380_main(struct work_struct *work) + NCR5380_information_transfer(instance); + done = 0; + } ++ if (!hostdata->connected) ++ NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); + spin_unlock_irq(&hostdata->lock); + if (!done) + cond_resched(); +@@ -1086,7 
+1088,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, + if (!hostdata->selecting) { + /* Command was aborted */ + NCR5380_write(MODE_REG, MR_BASE); +- goto out; ++ return NULL; + } + if (err < 0) { + NCR5380_write(MODE_REG, MR_BASE); +@@ -1135,7 +1137,7 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, + if (!hostdata->selecting) { + NCR5380_write(MODE_REG, MR_BASE); + NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); +- goto out; ++ return NULL; + } + + dsprintk(NDEBUG_ARBITRATION, instance, "won arbitration\n"); +@@ -1208,8 +1210,6 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, + spin_lock_irq(&hostdata->lock); + NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); + NCR5380_reselect(instance); +- if (!hostdata->connected) +- NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); + shost_printk(KERN_ERR, instance, "reselection after won arbitration?\n"); + goto out; + } +@@ -1217,14 +1217,16 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, + if (err < 0) { + spin_lock_irq(&hostdata->lock); + NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); +- NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); ++ + /* Can't touch cmd if it has been reclaimed by the scsi ML */ +- if (hostdata->selecting) { +- cmd->result = DID_BAD_TARGET << 16; +- complete_cmd(instance, cmd); +- dsprintk(NDEBUG_SELECTION, instance, "target did not respond within 250ms\n"); +- cmd = NULL; +- } ++ if (!hostdata->selecting) ++ return NULL; ++ ++ cmd->result = DID_BAD_TARGET << 16; ++ complete_cmd(instance, cmd); ++ dsprintk(NDEBUG_SELECTION, instance, ++ "target did not respond within 250ms\n"); ++ cmd = NULL; + goto out; + } + +@@ -1252,12 +1254,11 @@ static struct scsi_cmnd *NCR5380_select(struct Scsi_Host *instance, + if (err < 0) { + shost_printk(KERN_ERR, instance, "select: REQ timeout\n"); + NCR5380_write(INITIATOR_COMMAND_REG, ICR_BASE); +- NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); + goto out; + } + if (!hostdata->selecting) { + do_abort(instance); +- goto out; ++ return NULL; + } + + dsprintk(NDEBUG_SELECTION, instance, "target %d selected, going into MESSAGE OUT phase.\n", +@@ -1903,9 +1904,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) + */ + NCR5380_write(TARGET_COMMAND_REG, 0); + +- /* Enable reselect interrupts */ +- NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); +- + maybe_release_dma_irq(instance); + return; + case MESSAGE_REJECT: +@@ -1937,8 +1935,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) + */ + NCR5380_write(TARGET_COMMAND_REG, 0); + +- /* Enable reselect interrupts */ +- NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); + #ifdef SUN3_SCSI_VME + dregs->csr |= CSR_DMA_ENABLE; + #endif +@@ -2046,7 +2042,6 @@ static void NCR5380_information_transfer(struct Scsi_Host *instance) + cmd->result = DID_ERROR << 16; + complete_cmd(instance, cmd); + maybe_release_dma_irq(instance); +- NCR5380_write(SELECT_ENABLE_REG, hostdata->id_mask); + return; + } + msgout = NOP; +diff --git a/drivers/scsi/mac_scsi.c b/drivers/scsi/mac_scsi.c +index a590089b9397..5648d30c7376 100644 +--- a/drivers/scsi/mac_scsi.c ++++ b/drivers/scsi/mac_scsi.c +@@ -54,7 +54,7 @@ static int setup_cmd_per_lun = -1; + module_param(setup_cmd_per_lun, int, 0); + static int setup_sg_tablesize = -1; + module_param(setup_sg_tablesize, int, 0); +-static int setup_use_pdma = -1; ++static int setup_use_pdma = 512; + module_param(setup_use_pdma, int, 0); + static int setup_hostid = -1; + 
module_param(setup_hostid, int, 0); +@@ -325,7 +325,7 @@ static int macscsi_dma_xfer_len(struct Scsi_Host *instance, + struct NCR5380_hostdata *hostdata = shost_priv(instance); + + if (hostdata->flags & FLAG_NO_PSEUDO_DMA || +- cmd->SCp.this_residual < 16) ++ cmd->SCp.this_residual < setup_use_pdma) + return 0; + + return cmd->SCp.this_residual; +diff --git a/drivers/staging/media/davinci_vpfe/vpfe_video.c b/drivers/staging/media/davinci_vpfe/vpfe_video.c +index 89dd6b989254..e0440807b4ed 100644 +--- a/drivers/staging/media/davinci_vpfe/vpfe_video.c ++++ b/drivers/staging/media/davinci_vpfe/vpfe_video.c +@@ -423,6 +423,9 @@ static int vpfe_open(struct file *file) + /* If decoder is not initialized. initialize it */ + if (!video->initialized && vpfe_update_pipe_state(video)) { + mutex_unlock(&video->lock); ++ v4l2_fh_del(&handle->vfh); ++ v4l2_fh_exit(&handle->vfh); ++ kfree(handle); + return -ENODEV; + } + /* Increment device users counter */ +diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c +index 84474f06dbcf..8f1233324586 100644 +--- a/drivers/tty/serial/8250/8250_port.c ++++ b/drivers/tty/serial/8250/8250_port.c +@@ -1819,7 +1819,8 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) + status = serial8250_rx_chars(up, status); + } + serial8250_modem_status(up); +- if ((!up->dma || up->dma->tx_err) && (status & UART_LSR_THRE)) ++ if ((!up->dma || up->dma->tx_err) && (status & UART_LSR_THRE) && ++ (up->ier & UART_IER_THRI)) + serial8250_tx_chars(up); + + spin_unlock_irqrestore(&port->lock, flags); +diff --git a/drivers/tty/serial/cpm_uart/cpm_uart_core.c b/drivers/tty/serial/cpm_uart/cpm_uart_core.c +index 0040c29f651a..b9e137c03fe3 100644 +--- a/drivers/tty/serial/cpm_uart/cpm_uart_core.c ++++ b/drivers/tty/serial/cpm_uart/cpm_uart_core.c +@@ -421,7 +421,16 @@ static int cpm_uart_startup(struct uart_port *port) + clrbits16(&pinfo->sccp->scc_sccm, UART_SCCM_RX); + } + cpm_uart_initbd(pinfo); +- cpm_line_cr_cmd(pinfo, CPM_CR_INIT_TRX); ++ if (IS_SMC(pinfo)) { ++ out_be32(&pinfo->smcup->smc_rstate, 0); ++ out_be32(&pinfo->smcup->smc_tstate, 0); ++ out_be16(&pinfo->smcup->smc_rbptr, ++ in_be16(&pinfo->smcup->smc_rbase)); ++ out_be16(&pinfo->smcup->smc_tbptr, ++ in_be16(&pinfo->smcup->smc_tbase)); ++ } else { ++ cpm_line_cr_cmd(pinfo, CPM_CR_INIT_TRX); ++ } + } + /* Install interrupt handler. */ + retval = request_irq(port->irq, cpm_uart_int, 0, "cpm_uart", port); +@@ -875,16 +884,14 @@ static void cpm_uart_init_smc(struct uart_cpm_port *pinfo) + (u8 __iomem *)pinfo->tx_bd_base - DPRAM_BASE); + + /* +- * In case SMC1 is being relocated... ++ * In case SMC is being relocated... + */ +-#if defined (CONFIG_I2C_SPI_SMC1_UCODE_PATCH) + out_be16(&up->smc_rbptr, in_be16(&pinfo->smcup->smc_rbase)); + out_be16(&up->smc_tbptr, in_be16(&pinfo->smcup->smc_tbase)); + out_be32(&up->smc_rstate, 0); + out_be32(&up->smc_tstate, 0); + out_be16(&up->smc_brkcr, 1); /* number of break chars */ + out_be16(&up->smc_brkec, 0); +-#endif + + /* Set up the uart parameters in the + * parameter ram. +@@ -898,8 +905,6 @@ static void cpm_uart_init_smc(struct uart_cpm_port *pinfo) + out_be16(&up->smc_brkec, 0); + out_be16(&up->smc_brkcr, 1); + +- cpm_line_cr_cmd(pinfo, CPM_CR_INIT_TRX); +- + /* Set UART mode, 8 bit, no parity, one stop. + * Enable receive and transmit. 
+ */ +diff --git a/drivers/tty/serial/digicolor-usart.c b/drivers/tty/serial/digicolor-usart.c +index 02ad6953b167..50ec5f1ac77f 100644 +--- a/drivers/tty/serial/digicolor-usart.c ++++ b/drivers/tty/serial/digicolor-usart.c +@@ -545,7 +545,11 @@ static int __init digicolor_uart_init(void) + if (ret) + return ret; + +- return platform_driver_register(&digicolor_uart_platform); ++ ret = platform_driver_register(&digicolor_uart_platform); ++ if (ret) ++ uart_unregister_driver(&digicolor_uart); ++ ++ return ret; + } + module_init(digicolor_uart_init); + +diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c +index ec3db8d8306c..bacc7e284c0c 100644 +--- a/drivers/tty/serial/max310x.c ++++ b/drivers/tty/serial/max310x.c +@@ -494,37 +494,48 @@ static bool max310x_reg_precious(struct device *dev, unsigned int reg) + + static int max310x_set_baud(struct uart_port *port, int baud) + { +- unsigned int mode = 0, clk = port->uartclk, div = clk / baud; ++ unsigned int mode = 0, div = 0, frac = 0, c = 0, F = 0; + +- /* Check for minimal value for divider */ +- if (div < 16) +- div = 16; +- +- if (clk % baud && (div / 16) < 0x8000) { ++ /* ++ * Calculate the integer divisor first. Select a proper mode ++ * in case if the requested baud is too high for the pre-defined ++ * clocks frequency. ++ */ ++ div = port->uartclk / baud; ++ if (div < 8) { ++ /* Mode x4 */ ++ c = 4; ++ mode = MAX310X_BRGCFG_4XMODE_BIT; ++ } else if (div < 16) { + /* Mode x2 */ ++ c = 8; + mode = MAX310X_BRGCFG_2XMODE_BIT; +- clk = port->uartclk * 2; +- div = clk / baud; +- +- if (clk % baud && (div / 16) < 0x8000) { +- /* Mode x4 */ +- mode = MAX310X_BRGCFG_4XMODE_BIT; +- clk = port->uartclk * 4; +- div = clk / baud; +- } ++ } else { ++ c = 16; + } + +- max310x_port_write(port, MAX310X_BRGDIVMSB_REG, (div / 16) >> 8); +- max310x_port_write(port, MAX310X_BRGDIVLSB_REG, div / 16); +- max310x_port_write(port, MAX310X_BRGCFG_REG, (div % 16) | mode); ++ /* Calculate the divisor in accordance with the fraction coefficient */ ++ div /= c; ++ F = c*baud; ++ ++ /* Calculate the baud rate fraction */ ++ if (div > 0) ++ frac = (16*(port->uartclk % F)) / F; ++ else ++ div = 1; ++ ++ max310x_port_write(port, MAX310X_BRGDIVMSB_REG, div >> 8); ++ max310x_port_write(port, MAX310X_BRGDIVLSB_REG, div); ++ max310x_port_write(port, MAX310X_BRGCFG_REG, frac | mode); + +- return DIV_ROUND_CLOSEST(clk, div); ++ /* Return the actual baud rate we just programmed */ ++ return (16*port->uartclk) / (c*(16*div + frac)); + } + + static int max310x_update_best_err(unsigned long f, long *besterr) + { + /* Use baudrate 115200 for calculate error */ +- long err = f % (115200 * 16); ++ long err = f % (460800 * 16); + + if ((*besterr < 0) || (*besterr > err)) { + *besterr = err; +diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c +index 7dc8272c6b15..9027455c6be1 100644 +--- a/drivers/tty/serial/msm_serial.c ++++ b/drivers/tty/serial/msm_serial.c +@@ -391,10 +391,14 @@ no_rx: + + static inline void msm_wait_for_xmitr(struct uart_port *port) + { ++ unsigned int timeout = 500000; ++ + while (!(msm_read(port, UART_SR) & UART_SR_TX_EMPTY)) { + if (msm_read(port, UART_ISR) & UART_ISR_TX_READY) + break; + udelay(1); ++ if (!timeout--) ++ break; + } + msm_write(port, UART_CR_CMD_RESET_TX_READY, UART_CR); + } +diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c +index 680fb3f9be2d..04c023f7f633 100644 +--- a/drivers/tty/serial/serial_core.c ++++ b/drivers/tty/serial/serial_core.c +@@ -1725,6 +1725,7 @@ 
static int uart_port_activate(struct tty_port *port, struct tty_struct *tty) + { + struct uart_state *state = container_of(port, struct uart_state, port); + struct uart_port *uport; ++ int ret; + + uport = uart_port_check(state); + if (!uport || uport->flags & UPF_DEAD) +@@ -1735,7 +1736,11 @@ static int uart_port_activate(struct tty_port *port, struct tty_struct *tty) + /* + * Start up the serial port. + */ +- return uart_startup(tty, state, 0); ++ ret = uart_startup(tty, state, 0); ++ if (ret > 0) ++ tty_port_set_active(port, 1); ++ ++ return ret; + } + + static const char *uart_type(struct uart_port *port) +diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c +index bcb997935c5e..ea35f5144237 100644 +--- a/drivers/tty/serial/sh-sci.c ++++ b/drivers/tty/serial/sh-sci.c +@@ -1291,6 +1291,7 @@ static void work_fn_tx(struct work_struct *work) + struct uart_port *port = &s->port; + struct circ_buf *xmit = &port->state->xmit; + dma_addr_t buf; ++ int head, tail; + + /* + * DMA is idle now. +@@ -1300,16 +1301,23 @@ static void work_fn_tx(struct work_struct *work) + * consistent xmit buffer state. + */ + spin_lock_irq(&port->lock); +- buf = s->tx_dma_addr + (xmit->tail & (UART_XMIT_SIZE - 1)); ++ head = xmit->head; ++ tail = xmit->tail; ++ buf = s->tx_dma_addr + (tail & (UART_XMIT_SIZE - 1)); + s->tx_dma_len = min_t(unsigned int, +- CIRC_CNT(xmit->head, xmit->tail, UART_XMIT_SIZE), +- CIRC_CNT_TO_END(xmit->head, xmit->tail, UART_XMIT_SIZE)); +- spin_unlock_irq(&port->lock); ++ CIRC_CNT(head, tail, UART_XMIT_SIZE), ++ CIRC_CNT_TO_END(head, tail, UART_XMIT_SIZE)); ++ if (!s->tx_dma_len) { ++ /* Transmit buffer has been flushed */ ++ spin_unlock_irq(&port->lock); ++ return; ++ } + + desc = dmaengine_prep_slave_single(chan, buf, s->tx_dma_len, + DMA_MEM_TO_DEV, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!desc) { ++ spin_unlock_irq(&port->lock); + dev_warn(port->dev, "Failed preparing Tx DMA descriptor\n"); + /* switch to PIO */ + sci_tx_dma_release(s, true); +@@ -1319,20 +1327,20 @@ static void work_fn_tx(struct work_struct *work) + dma_sync_single_for_device(chan->device->dev, buf, s->tx_dma_len, + DMA_TO_DEVICE); + +- spin_lock_irq(&port->lock); + desc->callback = sci_dma_tx_complete; + desc->callback_param = s; +- spin_unlock_irq(&port->lock); + s->cookie_tx = dmaengine_submit(desc); + if (dma_submit_error(s->cookie_tx)) { ++ spin_unlock_irq(&port->lock); + dev_warn(port->dev, "Failed submitting Tx DMA descriptor\n"); + /* switch to PIO */ + sci_tx_dma_release(s, true); + return; + } + ++ spin_unlock_irq(&port->lock); + dev_dbg(port->dev, "%s: %p: %d...%d, cookie %d\n", +- __func__, xmit->buf, xmit->tail, xmit->head, s->cookie_tx); ++ __func__, xmit->buf, tail, head, s->cookie_tx); + + dma_async_issue_pending(chan); + } +@@ -1538,11 +1546,18 @@ static void sci_free_dma(struct uart_port *port) + + static void sci_flush_buffer(struct uart_port *port) + { ++ struct sci_port *s = to_sci_port(port); ++ + /* + * In uart_flush_buffer(), the xmit circular buffer has just been +- * cleared, so we have to reset tx_dma_len accordingly. 
++ * cleared, so we have to reset tx_dma_len accordingly, and stop any ++ * pending transfers + */ +- to_sci_port(port)->tx_dma_len = 0; ++ s->tx_dma_len = 0; ++ if (s->chan_tx) { ++ dmaengine_terminate_async(s->chan_tx); ++ s->cookie_tx = -EINVAL; ++ } + } + #else /* !CONFIG_SERIAL_SH_SCI_DMA */ + static inline void sci_request_dma(struct uart_port *port) +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 3941df076cca..63646dc3ca27 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -3535,6 +3535,7 @@ static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port, + struct usb_device *hdev; + struct usb_device *udev; + int connect_change = 0; ++ u16 link_state; + int ret; + + hdev = hub->hdev; +@@ -3544,9 +3545,11 @@ static int hub_handle_remote_wakeup(struct usb_hub *hub, unsigned int port, + return 0; + usb_clear_port_feature(hdev, port, USB_PORT_FEAT_C_SUSPEND); + } else { ++ link_state = portstatus & USB_PORT_STAT_LINK_STATE; + if (!udev || udev->state != USB_STATE_SUSPENDED || +- (portstatus & USB_PORT_STAT_LINK_STATE) != +- USB_SS_PORT_LS_U0) ++ (link_state != USB_SS_PORT_LS_U0 && ++ link_state != USB_SS_PORT_LS_U1 && ++ link_state != USB_SS_PORT_LS_U2)) + return 0; + } + +@@ -3876,6 +3879,9 @@ static int usb_set_lpm_timeout(struct usb_device *udev, + * control transfers to set the hub timeout or enable device-initiated U1/U2 + * will be successful. + * ++ * If the control transfer to enable device-initiated U1/U2 entry fails, then ++ * hub-initiated U1/U2 will be disabled. ++ * + * If we cannot set the parent hub U1/U2 timeout, we attempt to let the xHCI + * driver know about it. If that call fails, it should be harmless, and just + * take up more slightly more bus bandwidth for unnecessary U1/U2 exit latency. +@@ -3930,23 +3936,24 @@ static void usb_enable_link_state(struct usb_hcd *hcd, struct usb_device *udev, + * host know that this link state won't be enabled. + */ + hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state); +- } else { +- /* Only a configured device will accept the Set Feature +- * U1/U2_ENABLE +- */ +- if (udev->actconfig) +- usb_set_device_initiated_lpm(udev, state, true); ++ return; ++ } + +- /* As soon as usb_set_lpm_timeout(timeout) returns 0, the +- * hub-initiated LPM is enabled. Thus, LPM is enabled no +- * matter the result of usb_set_device_initiated_lpm(). +- * The only difference is whether device is able to initiate +- * LPM. +- */ ++ /* Only a configured device will accept the Set Feature ++ * U1/U2_ENABLE ++ */ ++ if (udev->actconfig && ++ usb_set_device_initiated_lpm(udev, state, true) == 0) { + if (state == USB3_LPM_U1) + udev->usb3_lpm_u1_enabled = 1; + else if (state == USB3_LPM_U2) + udev->usb3_lpm_u2_enabled = 1; ++ } else { ++ /* Don't request U1/U2 entry if the device ++ * cannot transition to U1/U2. 
++ */ ++ usb_set_lpm_timeout(udev, state, 0); ++ hcd->driver->disable_usb3_lpm_timeout(hcd, udev, state); + } + } + +diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c +index 927ac0ee09b7..d1278d2d544b 100644 +--- a/drivers/usb/gadget/function/f_fs.c ++++ b/drivers/usb/gadget/function/f_fs.c +@@ -1101,11 +1101,12 @@ static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from) + ENTER(); + + if (!is_sync_kiocb(kiocb)) { +- p = kmalloc(sizeof(io_data), GFP_KERNEL); ++ p = kzalloc(sizeof(io_data), GFP_KERNEL); + if (unlikely(!p)) + return -ENOMEM; + p->aio = true; + } else { ++ memset(p, 0, sizeof(*p)); + p->aio = false; + } + +@@ -1137,11 +1138,12 @@ static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to) + ENTER(); + + if (!is_sync_kiocb(kiocb)) { +- p = kmalloc(sizeof(io_data), GFP_KERNEL); ++ p = kzalloc(sizeof(io_data), GFP_KERNEL); + if (unlikely(!p)) + return -ENOMEM; + p->aio = true; + } else { ++ memset(p, 0, sizeof(*p)); + p->aio = false; + } + +diff --git a/drivers/usb/host/hwa-hc.c b/drivers/usb/host/hwa-hc.c +index 97750f162f01..c14e4a64b0e8 100644 +--- a/drivers/usb/host/hwa-hc.c ++++ b/drivers/usb/host/hwa-hc.c +@@ -173,7 +173,7 @@ out: + return result; + + error_set_cluster_id: +- wusb_cluster_id_put(wusbhc->cluster_id); ++ wusb_cluster_id_put(addr); + error_cluster_id_get: + goto out; + +diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c +index ee213c5f4107..11b0767ca1ba 100644 +--- a/drivers/usb/host/pci-quirks.c ++++ b/drivers/usb/host/pci-quirks.c +@@ -187,7 +187,7 @@ int usb_amd_find_chipset_info(void) + { + unsigned long flags; + struct amd_chipset_info info; +- int ret; ++ int need_pll_quirk = 0; + + spin_lock_irqsave(&amd_lock, flags); + +@@ -201,21 +201,28 @@ int usb_amd_find_chipset_info(void) + spin_unlock_irqrestore(&amd_lock, flags); + + if (!amd_chipset_sb_type_init(&info)) { +- ret = 0; + goto commit; + } + +- /* Below chipset generations needn't enable AMD PLL quirk */ +- if (info.sb_type.gen == AMD_CHIPSET_UNKNOWN || +- info.sb_type.gen == AMD_CHIPSET_SB600 || +- info.sb_type.gen == AMD_CHIPSET_YANGTZE || +- (info.sb_type.gen == AMD_CHIPSET_SB700 && +- info.sb_type.rev > 0x3b)) { ++ switch (info.sb_type.gen) { ++ case AMD_CHIPSET_SB700: ++ need_pll_quirk = info.sb_type.rev <= 0x3B; ++ break; ++ case AMD_CHIPSET_SB800: ++ case AMD_CHIPSET_HUDSON2: ++ case AMD_CHIPSET_BOLTON: ++ need_pll_quirk = 1; ++ break; ++ default: ++ need_pll_quirk = 0; ++ break; ++ } ++ ++ if (!need_pll_quirk) { + if (info.smbus_dev) { + pci_dev_put(info.smbus_dev); + info.smbus_dev = NULL; + } +- ret = 0; + goto commit; + } + +@@ -234,7 +241,7 @@ int usb_amd_find_chipset_info(void) + } + } + +- ret = info.probe_result = 1; ++ need_pll_quirk = info.probe_result = 1; + printk(KERN_DEBUG "QUIRK: Enable AMD PLL fix\n"); + + commit: +@@ -245,7 +252,7 @@ commit: + + /* Mark that we where here */ + amd_chipset.probe_count++; +- ret = amd_chipset.probe_result; ++ need_pll_quirk = amd_chipset.probe_result; + + spin_unlock_irqrestore(&amd_lock, flags); + +@@ -259,7 +266,7 @@ commit: + spin_unlock_irqrestore(&amd_lock, flags); + } + +- return ret; ++ return need_pll_quirk; + } + EXPORT_SYMBOL_GPL(usb_amd_find_chipset_info); + +diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c +index 681d0eade82f..75e1089dfb01 100644 +--- a/drivers/vhost/net.c ++++ b/drivers/vhost/net.c +@@ -30,7 +30,7 @@ + + #include "vhost.h" + +-static int experimental_zcopytx = 1; ++static int experimental_zcopytx = 0; + 
module_param(experimental_zcopytx, int, 0444); + MODULE_PARM_DESC(experimental_zcopytx, "Enable Zero Copy TX;" + " 1 -Enable; 0 - Disable"); +diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c +index e4db19e88ab1..6af117af9780 100644 +--- a/drivers/xen/balloon.c ++++ b/drivers/xen/balloon.c +@@ -591,8 +591,15 @@ static void balloon_process(struct work_struct *work) + state = reserve_additional_memory(); + } + +- if (credit < 0) +- state = decrease_reservation(-credit, GFP_BALLOON); ++ if (credit < 0) { ++ long n_pages; ++ ++ n_pages = min(-credit, si_mem_available()); ++ state = decrease_reservation(n_pages, GFP_BALLOON); ++ if (state == BP_DONE && n_pages != -credit && ++ n_pages < totalreserve_pages) ++ state = BP_EAGAIN; ++ } + + state = update_schedule(state); + +@@ -631,6 +638,9 @@ static int add_ballooned_pages(int nr_pages) + } + } + ++ if (si_mem_available() < nr_pages) ++ return -ENOMEM; ++ + st = decrease_reservation(nr_pages, GFP_USER); + if (st != BP_DONE) + return -ENOMEM; +@@ -754,7 +764,7 @@ static int __init balloon_init(void) + balloon_stats.schedule_delay = 1; + balloon_stats.max_schedule_delay = 32; + balloon_stats.retry_count = 1; +- balloon_stats.max_retry_count = RETRY_UNLIMITED; ++ balloon_stats.max_retry_count = 4; + + #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG + set_online_page_callback(&xen_online_page); +diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c +index 6181ad79e1a5..e45b1a0dd513 100644 +--- a/fs/9p/vfs_addr.c ++++ b/fs/9p/vfs_addr.c +@@ -49,8 +49,9 @@ + * @page: structure to page + * + */ +-static int v9fs_fid_readpage(struct p9_fid *fid, struct page *page) ++static int v9fs_fid_readpage(void *data, struct page *page) + { ++ struct p9_fid *fid = data; + struct inode *inode = page->mapping->host; + struct bio_vec bvec = {.bv_page = page, .bv_len = PAGE_SIZE}; + struct iov_iter to; +@@ -121,7 +122,8 @@ static int v9fs_vfs_readpages(struct file *filp, struct address_space *mapping, + if (ret == 0) + return ret; + +- ret = read_cache_pages(mapping, pages, (void *)v9fs_vfs_readpage, filp); ++ ret = read_cache_pages(mapping, pages, v9fs_fid_readpage, ++ filp->private_data); + p9_debug(P9_DEBUG_VFS, " = %d\n", ret); + return ret; + } +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index c77114ce884b..6cdf27325576 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -2646,6 +2646,11 @@ out_only_mutex: + * for detecting, at fsync time, if the inode isn't yet in the + * log tree or it's there but not up to date. + */ ++ struct timespec now = current_time(inode); ++ ++ inode_inc_iversion(inode); ++ inode->i_mtime = now; ++ inode->i_ctime = now; + trans = btrfs_start_transaction(root, 1); + if (IS_ERR(trans)) { + err = PTR_ERR(trans); +diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c +index f916cd7b1918..82df349b84f7 100644 +--- a/fs/ceph/caps.c ++++ b/fs/ceph/caps.c +@@ -1081,20 +1081,23 @@ static int send_cap_msg(struct ceph_mds_session *session, + } + + /* +- * Queue cap releases when an inode is dropped from our cache. Since +- * inode is about to be destroyed, there is no need for i_ceph_lock. ++ * Queue cap releases when an inode is dropped from our cache. + */ + void ceph_queue_caps_release(struct inode *inode) + { + struct ceph_inode_info *ci = ceph_inode(inode); + struct rb_node *p; + ++ /* lock i_ceph_lock, because ceph_d_revalidate(..., LOOKUP_RCU) ++ * may call __ceph_caps_issued_mask() on a freeing inode. 
*/ ++ spin_lock(&ci->i_ceph_lock); + p = rb_first(&ci->i_caps); + while (p) { + struct ceph_cap *cap = rb_entry(p, struct ceph_cap, ci_node); + p = rb_next(p); + __ceph_remove_cap(cap, true); + } ++ spin_unlock(&ci->i_ceph_lock); + } + + /* +diff --git a/fs/coda/file.c b/fs/coda/file.c +index 6e0154eb6fcc..649d17edc071 100644 +--- a/fs/coda/file.c ++++ b/fs/coda/file.c +@@ -60,6 +60,41 @@ coda_file_write_iter(struct kiocb *iocb, struct iov_iter *to) + return ret; + } + ++struct coda_vm_ops { ++ atomic_t refcnt; ++ struct file *coda_file; ++ const struct vm_operations_struct *host_vm_ops; ++ struct vm_operations_struct vm_ops; ++}; ++ ++static void ++coda_vm_open(struct vm_area_struct *vma) ++{ ++ struct coda_vm_ops *cvm_ops = ++ container_of(vma->vm_ops, struct coda_vm_ops, vm_ops); ++ ++ atomic_inc(&cvm_ops->refcnt); ++ ++ if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->open) ++ cvm_ops->host_vm_ops->open(vma); ++} ++ ++static void ++coda_vm_close(struct vm_area_struct *vma) ++{ ++ struct coda_vm_ops *cvm_ops = ++ container_of(vma->vm_ops, struct coda_vm_ops, vm_ops); ++ ++ if (cvm_ops->host_vm_ops && cvm_ops->host_vm_ops->close) ++ cvm_ops->host_vm_ops->close(vma); ++ ++ if (atomic_dec_and_test(&cvm_ops->refcnt)) { ++ vma->vm_ops = cvm_ops->host_vm_ops; ++ fput(cvm_ops->coda_file); ++ kfree(cvm_ops); ++ } ++} ++ + static int + coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) + { +@@ -67,6 +102,8 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) + struct coda_inode_info *cii; + struct file *host_file; + struct inode *coda_inode, *host_inode; ++ struct coda_vm_ops *cvm_ops; ++ int ret; + + cfi = CODA_FTOC(coda_file); + BUG_ON(!cfi || cfi->cfi_magic != CODA_MAGIC); +@@ -75,6 +112,13 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) + if (!host_file->f_op->mmap) + return -ENODEV; + ++ if (WARN_ON(coda_file != vma->vm_file)) ++ return -EIO; ++ ++ cvm_ops = kmalloc(sizeof(struct coda_vm_ops), GFP_KERNEL); ++ if (!cvm_ops) ++ return -ENOMEM; ++ + coda_inode = file_inode(coda_file); + host_inode = file_inode(host_file); + +@@ -88,6 +132,7 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) + * the container file on us! */ + else if (coda_inode->i_mapping != host_inode->i_mapping) { + spin_unlock(&cii->c_lock); ++ kfree(cvm_ops); + return -EBUSY; + } + +@@ -96,7 +141,29 @@ coda_file_mmap(struct file *coda_file, struct vm_area_struct *vma) + cfi->cfi_mapcount++; + spin_unlock(&cii->c_lock); + +- return host_file->f_op->mmap(host_file, vma); ++ vma->vm_file = get_file(host_file); ++ ret = host_file->f_op->mmap(host_file, vma); ++ ++ if (ret) { ++ /* if call_mmap fails, our caller will put coda_file so we ++ * should drop the reference to the host_file that we got. 
++ */ ++ fput(host_file); ++ kfree(cvm_ops); ++ } else { ++ /* here we add redirects for the open/close vm_operations */ ++ cvm_ops->host_vm_ops = vma->vm_ops; ++ if (vma->vm_ops) ++ cvm_ops->vm_ops = *vma->vm_ops; ++ ++ cvm_ops->vm_ops.open = coda_vm_open; ++ cvm_ops->vm_ops.close = coda_vm_close; ++ cvm_ops->coda_file = coda_file; ++ atomic_set(&cvm_ops->refcnt, 1); ++ ++ vma->vm_ops = &cvm_ops->vm_ops; ++ } ++ return ret; + } + + int coda_open(struct inode *coda_inode, struct file *coda_file) +diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c +index e5e29f8c920b..cb77e7ee2c9f 100644 +--- a/fs/ecryptfs/crypto.c ++++ b/fs/ecryptfs/crypto.c +@@ -1034,8 +1034,10 @@ int ecryptfs_read_and_validate_header_region(struct inode *inode) + + rc = ecryptfs_read_lower(file_size, 0, ECRYPTFS_SIZE_AND_MARKER_BYTES, + inode); +- if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) +- return rc >= 0 ? -EINVAL : rc; ++ if (rc < 0) ++ return rc; ++ else if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) ++ return -EINVAL; + rc = ecryptfs_validate_marker(marker); + if (!rc) + ecryptfs_i_size_init(file_size, inode); +@@ -1397,8 +1399,10 @@ int ecryptfs_read_and_validate_xattr_region(struct dentry *dentry, + ecryptfs_inode_to_lower(inode), + ECRYPTFS_XATTR_NAME, file_size, + ECRYPTFS_SIZE_AND_MARKER_BYTES); +- if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) +- return rc >= 0 ? -EINVAL : rc; ++ if (rc < 0) ++ return rc; ++ else if (rc < ECRYPTFS_SIZE_AND_MARKER_BYTES) ++ return -EINVAL; + rc = ecryptfs_validate_marker(marker); + if (!rc) + ecryptfs_i_size_init(file_size, inode); +diff --git a/fs/exec.c b/fs/exec.c +index 81477116035d..820d7f3b25e8 100644 +--- a/fs/exec.c ++++ b/fs/exec.c +@@ -1790,7 +1790,7 @@ static int do_execveat_common(int fd, struct filename *filename, + current->fs->in_exec = 0; + current->in_execve = 0; + acct_update_integrals(current); +- task_numa_free(current); ++ task_numa_free(current, false); + free_bprm(bprm); + kfree(pathbuf); + putname(filename); +diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c +index e16bc4cec62e..0c83bffa7927 100644 +--- a/fs/ext4/dir.c ++++ b/fs/ext4/dir.c +@@ -106,7 +106,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) + struct inode *inode = file_inode(file); + struct super_block *sb = inode->i_sb; + struct buffer_head *bh = NULL; +- int dir_has_error = 0; + struct fscrypt_str fstr = FSTR_INIT(NULL, 0); + + if (ext4_encrypted_inode(inode)) { +@@ -142,8 +141,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) + return err; + } + +- offset = ctx->pos & (sb->s_blocksize - 1); +- + while (ctx->pos < inode->i_size) { + struct ext4_map_blocks map; + +@@ -152,9 +149,18 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) + goto errout; + } + cond_resched(); ++ offset = ctx->pos & (sb->s_blocksize - 1); + map.m_lblk = ctx->pos >> EXT4_BLOCK_SIZE_BITS(sb); + map.m_len = 1; + err = ext4_map_blocks(NULL, inode, &map, 0); ++ if (err == 0) { ++ /* m_len should never be zero but let's avoid ++ * an infinite loop if it somehow is */ ++ if (map.m_len == 0) ++ map.m_len = 1; ++ ctx->pos += map.m_len * sb->s_blocksize; ++ continue; ++ } + if (err > 0) { + pgoff_t index = map.m_pblk >> + (PAGE_SHIFT - inode->i_blkbits); +@@ -173,13 +179,6 @@ static int ext4_readdir(struct file *file, struct dir_context *ctx) + } + + if (!bh) { +- if (!dir_has_error) { +- EXT4_ERROR_FILE(file, 0, +- "directory contains a " +- "hole at offset %llu", +- (unsigned long long) ctx->pos); +- dir_has_error = 1; +- } + /* corrupt size? 
Maybe no more blocks to read */ + if (ctx->pos > inode->i_blocks << 9) + break; +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index 3c3757ee11f0..29dc02758a52 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -79,8 +79,18 @@ static struct buffer_head *ext4_append(handle_t *handle, + static int ext4_dx_csum_verify(struct inode *inode, + struct ext4_dir_entry *dirent); + ++/* ++ * Hints to ext4_read_dirblock regarding whether we expect a directory ++ * block being read to be an index block, or a block containing ++ * directory entries (and if the latter, whether it was found via a ++ * logical block in an htree index block). This is used to control ++ * what sort of sanity checkinig ext4_read_dirblock() will do on the ++ * directory block read from the storage device. EITHER will means ++ * the caller doesn't know what kind of directory block will be read, ++ * so no specific verification will be done. ++ */ + typedef enum { +- EITHER, INDEX, DIRENT ++ EITHER, INDEX, DIRENT, DIRENT_HTREE + } dirblock_type_t; + + #define ext4_read_dirblock(inode, block, type) \ +@@ -106,11 +116,14 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode, + + return bh; + } +- if (!bh) { ++ if (!bh && (type == INDEX || type == DIRENT_HTREE)) { + ext4_error_inode(inode, func, line, block, +- "Directory hole found"); ++ "Directory hole found for htree %s block", ++ (type == INDEX) ? "index" : "leaf"); + return ERR_PTR(-EFSCORRUPTED); + } ++ if (!bh) ++ return NULL; + dirent = (struct ext4_dir_entry *) bh->b_data; + /* Determine whether or not we have an index block */ + if (is_dx(inode)) { +@@ -960,7 +973,7 @@ static int htree_dirblock_to_tree(struct file *dir_file, + + dxtrace(printk(KERN_INFO "In htree dirblock_to_tree: block %lu\n", + (unsigned long)block)); +- bh = ext4_read_dirblock(dir, block, DIRENT); ++ bh = ext4_read_dirblock(dir, block, DIRENT_HTREE); + if (IS_ERR(bh)) + return PTR_ERR(bh); + +@@ -1537,7 +1550,7 @@ static struct buffer_head * ext4_dx_find_entry(struct inode *dir, + return (struct buffer_head *) frame; + do { + block = dx_get_block(frame->at); +- bh = ext4_read_dirblock(dir, block, DIRENT); ++ bh = ext4_read_dirblock(dir, block, DIRENT_HTREE); + if (IS_ERR(bh)) + goto errout; + +@@ -2142,6 +2155,11 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry, + blocks = dir->i_size >> sb->s_blocksize_bits; + for (block = 0; block < blocks; block++) { + bh = ext4_read_dirblock(dir, block, DIRENT); ++ if (bh == NULL) { ++ bh = ext4_bread(handle, dir, block, ++ EXT4_GET_BLOCKS_CREATE); ++ goto add_to_new_block; ++ } + if (IS_ERR(bh)) { + retval = PTR_ERR(bh); + bh = NULL; +@@ -2162,6 +2180,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry, + brelse(bh); + } + bh = ext4_append(handle, dir, &block); ++add_to_new_block: + if (IS_ERR(bh)) { + retval = PTR_ERR(bh); + bh = NULL; +@@ -2203,7 +2222,7 @@ static int ext4_dx_add_entry(handle_t *handle, struct ext4_filename *fname, + return PTR_ERR(frame); + entries = frame->entries; + at = frame->at; +- bh = ext4_read_dirblock(dir, dx_get_block(frame->at), DIRENT); ++ bh = ext4_read_dirblock(dir, dx_get_block(frame->at), DIRENT_HTREE); + if (IS_ERR(bh)) { + err = PTR_ERR(bh); + bh = NULL; +@@ -2719,7 +2738,10 @@ bool ext4_empty_dir(struct inode *inode) + EXT4_ERROR_INODE(inode, "invalid size"); + return true; + } +- bh = ext4_read_dirblock(inode, 0, EITHER); ++ /* The first directory block must not be a hole, ++ * so treat it as DIRENT_HTREE ++ */ ++ bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE); 
+ if (IS_ERR(bh)) + return true; + +@@ -2741,6 +2763,10 @@ bool ext4_empty_dir(struct inode *inode) + brelse(bh); + lblock = offset >> EXT4_BLOCK_SIZE_BITS(sb); + bh = ext4_read_dirblock(inode, lblock, EITHER); ++ if (bh == NULL) { ++ offset += sb->s_blocksize; ++ continue; ++ } + if (IS_ERR(bh)) + return true; + de = (struct ext4_dir_entry_2 *) bh->b_data; +@@ -3302,7 +3328,10 @@ static struct buffer_head *ext4_get_first_dir_block(handle_t *handle, + struct buffer_head *bh; + + if (!ext4_has_inline_data(inode)) { +- bh = ext4_read_dirblock(inode, 0, EITHER); ++ /* The first directory block must not be a hole, so ++ * treat it as DIRENT_HTREE ++ */ ++ bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE); + if (IS_ERR(bh)) { + *retval = PTR_ERR(bh); + return NULL; +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index 2fb99a081de8..c983f7d28f03 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -1709,6 +1709,11 @@ static int read_compacted_summaries(struct f2fs_sb_info *sbi) + seg_i = CURSEG_I(sbi, i); + segno = le32_to_cpu(ckpt->cur_data_segno[i]); + blk_off = le16_to_cpu(ckpt->cur_data_blkoff[i]); ++ if (blk_off > ENTRIES_IN_SUM) { ++ f2fs_bug_on(sbi, 1); ++ f2fs_put_page(page, 1); ++ return -EFAULT; ++ } + seg_i->next_segno = segno; + reset_curseg(sbi, i, 0); + seg_i->alloc_type = ckpt->alloc_type[i]; +diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c +index 8b93d4b98428..baaed9369ab4 100644 +--- a/fs/fs-writeback.c ++++ b/fs/fs-writeback.c +@@ -721,6 +721,7 @@ void wbc_detach_inode(struct writeback_control *wbc) + void wbc_account_io(struct writeback_control *wbc, struct page *page, + size_t bytes) + { ++ struct cgroup_subsys_state *css; + int id; + + /* +@@ -732,7 +733,12 @@ void wbc_account_io(struct writeback_control *wbc, struct page *page, + if (!wbc->wb) + return; + +- id = mem_cgroup_css_from_page(page)->id; ++ css = mem_cgroup_css_from_page(page); ++ /* dead cgroups shouldn't contribute to inode ownership arbitration */ ++ if (!(css->flags & CSS_ONLINE)) ++ return; ++ ++ id = css->id; + + if (id == wbc->wb_id) { + wbc->wb_bytes += bytes; +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index 76ae25661d3f..851274b25d39 100644 +--- a/fs/nfs/inode.c ++++ b/fs/nfs/inode.c +@@ -950,6 +950,7 @@ int nfs_open(struct inode *inode, struct file *filp) + nfs_fscache_open_file(inode, filp); + return 0; + } ++EXPORT_SYMBOL_GPL(nfs_open); + + /* + * This function is called whenever some part of NFS notices that +diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c +index 89a77950e0b0..8a0c301b0c69 100644 +--- a/fs/nfs/nfs4file.c ++++ b/fs/nfs/nfs4file.c +@@ -49,7 +49,7 @@ nfs4_file_open(struct inode *inode, struct file *filp) + return err; + + if ((openflags & O_ACCMODE) == 3) +- openflags--; ++ return nfs_open(inode, filp); + + /* We can't create new files here */ + openflags &= ~(O_CREAT|O_EXCL); +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 6d0d94fc243d..ea29c608be89 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -1121,6 +1121,12 @@ struct nfs4_opendata { + int cancelled; + }; + ++struct nfs4_open_createattrs { ++ struct nfs4_label *label; ++ struct iattr *sattr; ++ const __u32 verf[2]; ++}; ++ + static bool nfs4_clear_cap_atomic_open_v1(struct nfs_server *server, + int err, struct nfs4_exception *exception) + { +@@ -1190,8 +1196,7 @@ static void nfs4_init_opendata_res(struct nfs4_opendata *p) + + static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry, + struct nfs4_state_owner *sp, fmode_t fmode, int flags, +- const struct iattr *attrs, +- 
struct nfs4_label *label, ++ const struct nfs4_open_createattrs *c, + enum open_claim_type4 claim, + gfp_t gfp_mask) + { +@@ -1199,6 +1204,7 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry, + struct inode *dir = d_inode(parent); + struct nfs_server *server = NFS_SERVER(dir); + struct nfs_seqid *(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t); ++ struct nfs4_label *label = (c != NULL) ? c->label : NULL; + struct nfs4_opendata *p; + + p = kzalloc(sizeof(*p), gfp_mask); +@@ -1255,15 +1261,11 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry, + case NFS4_OPEN_CLAIM_DELEG_PREV_FH: + p->o_arg.fh = NFS_FH(d_inode(dentry)); + } +- if (attrs != NULL && attrs->ia_valid != 0) { +- __u32 verf[2]; +- ++ if (c != NULL && c->sattr != NULL && c->sattr->ia_valid != 0) { + p->o_arg.u.attrs = &p->attrs; +- memcpy(&p->attrs, attrs, sizeof(p->attrs)); ++ memcpy(&p->attrs, c->sattr, sizeof(p->attrs)); + +- verf[0] = jiffies; +- verf[1] = current->pid; +- memcpy(p->o_arg.u.verifier.data, verf, ++ memcpy(p->o_arg.u.verifier.data, c->verf, + sizeof(p->o_arg.u.verifier.data)); + } + p->c_arg.fh = &p->o_res.fh; +@@ -1814,7 +1816,7 @@ static struct nfs4_opendata *nfs4_open_recoverdata_alloc(struct nfs_open_context + struct nfs4_opendata *opendata; + + opendata = nfs4_opendata_alloc(ctx->dentry, state->owner, 0, 0, +- NULL, NULL, claim, GFP_NOFS); ++ NULL, claim, GFP_NOFS); + if (opendata == NULL) + return ERR_PTR(-ENOMEM); + opendata->state = state; +@@ -2759,8 +2761,7 @@ out: + static int _nfs4_do_open(struct inode *dir, + struct nfs_open_context *ctx, + int flags, +- struct iattr *sattr, +- struct nfs4_label *label, ++ const struct nfs4_open_createattrs *c, + int *opened) + { + struct nfs4_state_owner *sp; +@@ -2772,6 +2773,8 @@ static int _nfs4_do_open(struct inode *dir, + struct nfs4_threshold **ctx_th = &ctx->mdsthreshold; + fmode_t fmode = ctx->mode & (FMODE_READ|FMODE_WRITE|FMODE_EXEC); + enum open_claim_type4 claim = NFS4_OPEN_CLAIM_NULL; ++ struct iattr *sattr = c->sattr; ++ struct nfs4_label *label = c->label; + struct nfs4_label *olabel = NULL; + int status; + +@@ -2790,8 +2793,8 @@ static int _nfs4_do_open(struct inode *dir, + status = -ENOMEM; + if (d_really_is_positive(dentry)) + claim = NFS4_OPEN_CLAIM_FH; +- opendata = nfs4_opendata_alloc(dentry, sp, fmode, flags, sattr, +- label, claim, GFP_KERNEL); ++ opendata = nfs4_opendata_alloc(dentry, sp, fmode, flags, ++ c, claim, GFP_KERNEL); + if (opendata == NULL) + goto err_put_state_owner; + +@@ -2872,10 +2875,18 @@ static struct nfs4_state *nfs4_do_open(struct inode *dir, + struct nfs_server *server = NFS_SERVER(dir); + struct nfs4_exception exception = { }; + struct nfs4_state *res; ++ struct nfs4_open_createattrs c = { ++ .label = label, ++ .sattr = sattr, ++ .verf = { ++ [0] = (__u32)jiffies, ++ [1] = (__u32)current->pid, ++ }, ++ }; + int status; + + do { +- status = _nfs4_do_open(dir, ctx, flags, sattr, label, opened); ++ status = _nfs4_do_open(dir, ctx, flags, &c, opened); + res = ctx->state; + trace_nfs4_open_file(ctx, flags, status); + if (status == 0) +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c +index 3656f87d11e3..032fcae3a94f 100644 +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -1502,11 +1502,16 @@ static u32 nfsd4_get_drc_mem(struct nfsd4_channel_attrs *ca) + { + u32 slotsize = slot_bytes(ca); + u32 num = ca->maxreqs; +- int avail; ++ unsigned long avail, total_avail; + + spin_lock(&nfsd_drc_lock); +- avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, +- 
nfsd_drc_max_mem - nfsd_drc_mem_used); ++ total_avail = nfsd_drc_max_mem - nfsd_drc_mem_used; ++ avail = min((unsigned long)NFSD_MAX_MEM_PER_SESSION, total_avail); ++ /* ++ * Never use more than a third of the remaining memory, ++ * unless it's the only way to give this client a slot: ++ */ ++ avail = clamp_t(unsigned long, avail, slotsize, total_avail/3); + num = min_t(int, num, avail / slotsize); + nfsd_drc_mem_used += num * slotsize; + spin_unlock(&nfsd_drc_lock); +diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c +index 5c4800626f13..60291d10f8e4 100644 +--- a/fs/nfsd/nfssvc.c ++++ b/fs/nfsd/nfssvc.c +@@ -430,7 +430,7 @@ void nfsd_reset_versions(void) + */ + static void set_max_drc(void) + { +- #define NFSD_DRC_SIZE_SHIFT 10 ++ #define NFSD_DRC_SIZE_SHIFT 7 + nfsd_drc_max_mem = (nr_free_buffer_pages() + >> NFSD_DRC_SIZE_SHIFT) * PAGE_SIZE; + nfsd_drc_mem_used = 0; +diff --git a/fs/open.c b/fs/open.c +index 6ad9a21f2459..8db6e3a5fc10 100644 +--- a/fs/open.c ++++ b/fs/open.c +@@ -380,6 +380,25 @@ SYSCALL_DEFINE3(faccessat, int, dfd, const char __user *, filename, int, mode) + override_cred->cap_permitted; + } + ++ /* ++ * The new set of credentials can *only* be used in ++ * task-synchronous circumstances, and does not need ++ * RCU freeing, unless somebody then takes a separate ++ * reference to it. ++ * ++ * NOTE! This is _only_ true because this credential ++ * is used purely for override_creds() that installs ++ * it as the subjective cred. Other threads will be ++ * accessing ->real_cred, not the subjective cred. ++ * ++ * If somebody _does_ make a copy of this (using the ++ * 'get_current_cred()' function), that will clear the ++ * non_rcu field, because now that other user may be ++ * expecting RCU freeing. But normal thread-synchronous ++ * cred accesses will keep things non-RCY. ++ */ ++ override_cred->non_rcu = 1; ++ + old_cred = override_creds(override_cred); + retry: + res = user_path_at(dfd, filename, lookup_flags, &path); +diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c +index 5b32c054df71..191573a625f2 100644 +--- a/fs/proc/proc_sysctl.c ++++ b/fs/proc/proc_sysctl.c +@@ -500,6 +500,10 @@ static struct inode *proc_sys_make_inode(struct super_block *sb, + + if (root->set_ownership) + root->set_ownership(head, table, &inode->i_uid, &inode->i_gid); ++ else { ++ inode->i_uid = GLOBAL_ROOT_UID; ++ inode->i_gid = GLOBAL_ROOT_GID; ++ } + + return inode; + } +diff --git a/include/linux/compiler.h b/include/linux/compiler.h +index 80a5bc623c47..3050de0dac96 100644 +--- a/include/linux/compiler.h ++++ b/include/linux/compiler.h +@@ -250,23 +250,21 @@ void __read_once_size(const volatile void *p, void *res, int size) + + #ifdef CONFIG_KASAN + /* +- * This function is not 'inline' because __no_sanitize_address confilcts ++ * We can't declare function 'inline' because __no_sanitize_address confilcts + * with inlining. Attempt to inline it may cause a build failure. + * https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67368 + * '__maybe_unused' allows us to avoid defined-but-not-used warnings. 
+ */ +-static __no_sanitize_address __maybe_unused +-void __read_once_size_nocheck(const volatile void *p, void *res, int size) +-{ +- __READ_ONCE_SIZE; +-} ++# define __no_kasan_or_inline __no_sanitize_address __maybe_unused + #else +-static __always_inline ++# define __no_kasan_or_inline __always_inline ++#endif ++ ++static __no_kasan_or_inline + void __read_once_size_nocheck(const volatile void *p, void *res, int size) + { + __READ_ONCE_SIZE; + } +-#endif + + static __always_inline void __write_once_size(volatile void *p, void *res, int size) + { +@@ -304,6 +302,7 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s + * with an explicit memory barrier or atomic instruction that provides the + * required ordering. + */ ++#include + + #define __READ_ONCE(x, check) \ + ({ \ +@@ -322,6 +321,13 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s + */ + #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0) + ++static __no_kasan_or_inline ++unsigned long read_word_at_a_time(const void *addr) ++{ ++ kasan_check_read(addr, 1); ++ return *(unsigned long *)addr; ++} ++ + #define WRITE_ONCE(x, val) \ + ({ \ + union { typeof(x) __val; char __c[1]; } __u = \ +diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h +index c9447a689522..1ab0273560ae 100644 +--- a/include/linux/cpuhotplug.h ++++ b/include/linux/cpuhotplug.h +@@ -77,10 +77,10 @@ enum cpuhp_state { + CPUHP_AP_PERF_ARM_HW_BREAKPOINT_STARTING, + CPUHP_AP_PERF_ARM_STARTING, + CPUHP_AP_ARM_L2X0_STARTING, ++ CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING, + CPUHP_AP_ARM_ARCH_TIMER_STARTING, + CPUHP_AP_ARM_GLOBAL_TIMER_STARTING, + CPUHP_AP_JCORE_TIMER_STARTING, +- CPUHP_AP_EXYNOS4_MCT_TIMER_STARTING, + CPUHP_AP_ARM_TWD_STARTING, + CPUHP_AP_METAG_TIMER_STARTING, + CPUHP_AP_QCOM_TIMER_STARTING, +diff --git a/include/linux/cred.h b/include/linux/cred.h +index cf1a5d0c4eb4..4f614042214b 100644 +--- a/include/linux/cred.h ++++ b/include/linux/cred.h +@@ -144,7 +144,11 @@ struct cred { + struct user_struct *user; /* real user ID subscription */ + struct user_namespace *user_ns; /* user_ns the caps and keyrings are relative to. */ + struct group_info *group_info; /* supplementary groups for euid/fsgid */ +- struct rcu_head rcu; /* RCU deletion hook */ ++ /* RCU deletion */ ++ union { ++ int non_rcu; /* Can we skip RCU deletion? */ ++ struct rcu_head rcu; /* RCU deletion hook */ ++ }; + }; + + extern void __put_cred(struct cred *); +@@ -242,6 +246,7 @@ static inline const struct cred *get_cred(const struct cred *cred) + { + struct cred *nonconst_cred = (struct cred *) cred; + validate_creds(cred); ++ nonconst_cred->non_rcu = 0; + return get_new_cred(nonconst_cred); + } + +diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h +index aa2935779e43..96037ba940ee 100644 +--- a/include/linux/rcupdate.h ++++ b/include/linux/rcupdate.h +@@ -866,7 +866,7 @@ static inline void rcu_preempt_sleep_check(void) + * read-side critical sections may be preempted and they may also block, but + * only when acquiring spinlocks that are subject to priority inheritance. 
+ */ +-static inline void rcu_read_lock(void) ++static __always_inline void rcu_read_lock(void) + { + __rcu_read_lock(); + __acquire(RCU); +diff --git a/include/linux/sched.h b/include/linux/sched.h +index 1c487a3abd84..275511b60978 100644 +--- a/include/linux/sched.h ++++ b/include/linux/sched.h +@@ -2044,7 +2044,7 @@ static inline bool in_vfork(struct task_struct *tsk) + extern void task_numa_fault(int last_node, int node, int pages, int flags); + extern pid_t task_numa_group_id(struct task_struct *p); + extern void set_numabalancing_state(bool enabled); +-extern void task_numa_free(struct task_struct *p); ++extern void task_numa_free(struct task_struct *p, bool final); + extern bool should_numa_migrate_memory(struct task_struct *p, struct page *page, + int src_nid, int dst_cpu); + #else +@@ -2059,7 +2059,7 @@ static inline pid_t task_numa_group_id(struct task_struct *p) + static inline void set_numabalancing_state(bool enabled) + { + } +-static inline void task_numa_free(struct task_struct *p) ++static inline void task_numa_free(struct task_struct *p, bool final) + { + } + static inline bool should_numa_migrate_memory(struct task_struct *p, +diff --git a/include/net/tcp.h b/include/net/tcp.h +index d7047de952f0..1eda31f7f013 100644 +--- a/include/net/tcp.h ++++ b/include/net/tcp.h +@@ -1512,6 +1512,11 @@ struct sock *tcp_try_fastopen(struct sock *sk, struct sk_buff *skb, + void tcp_fastopen_init_key_once(bool publish); + #define TCP_FASTOPEN_KEY_LENGTH 16 + ++static inline void tcp_init_send_head(struct sock *sk) ++{ ++ sk->sk_send_head = NULL; ++} ++ + /* Fastopen key context */ + struct tcp_fastopen_context { + struct crypto_cipher *tfm; +@@ -1528,6 +1533,7 @@ static inline void tcp_write_queue_purge(struct sock *sk) + sk_wmem_free_skb(sk, skb); + sk_mem_reclaim(sk); + tcp_clear_all_retrans_hints(tcp_sk(sk)); ++ tcp_init_send_head(sk); + inet_csk(sk)->icsk_backoff = 0; + } + +@@ -1589,11 +1595,6 @@ static inline void tcp_check_send_head(struct sock *sk, struct sk_buff *skb_unli + tcp_sk(sk)->highest_sack = NULL; + } + +-static inline void tcp_init_send_head(struct sock *sk) +-{ +- sk->sk_send_head = NULL; +-} +- + static inline void __tcp_add_write_queue_tail(struct sock *sk, struct sk_buff *skb) + { + __skb_queue_tail(&sk->sk_write_queue, skb); +diff --git a/kernel/bpf/Makefile b/kernel/bpf/Makefile +index eed911d091da..5a590f22b4d4 100644 +--- a/kernel/bpf/Makefile ++++ b/kernel/bpf/Makefile +@@ -1,4 +1,5 @@ + obj-y := core.o ++CFLAGS_core.o += $(call cc-disable-warning, override-init) + + obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o + obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o +diff --git a/kernel/cred.c b/kernel/cred.c +index 7b925925be95..0966fab0f48b 100644 +--- a/kernel/cred.c ++++ b/kernel/cred.c +@@ -146,7 +146,10 @@ void __put_cred(struct cred *cred) + BUG_ON(cred == current->cred); + BUG_ON(cred == current->real_cred); + +- call_rcu(&cred->rcu, put_cred_rcu); ++ if (cred->non_rcu) ++ put_cred_rcu(&cred->rcu); ++ else ++ call_rcu(&cred->rcu, put_cred_rcu); + } + EXPORT_SYMBOL(__put_cred); + +@@ -257,6 +260,7 @@ struct cred *prepare_creds(void) + old = task->cred; + memcpy(new, old, sizeof(struct cred)); + ++ new->non_rcu = 0; + atomic_set(&new->usage, 1); + set_cred_subscribers(new, 0); + get_group_info(new->group_info); +@@ -536,7 +540,19 @@ const struct cred *override_creds(const struct cred *new) + + validate_creds(old); + validate_creds(new); +- get_cred(new); ++ ++ /* ++ * NOTE! 
This uses 'get_new_cred()' rather than 'get_cred()'. ++ * ++ * That means that we do not clear the 'non_rcu' flag, since ++ * we are only installing the cred into the thread-synchronous ++ * '->cred' pointer, not the '->real_cred' pointer that is ++ * visible to other threads under RCU. ++ * ++ * Also note that we did validate_creds() manually, not depending ++ * on the validation in 'get_cred()'. ++ */ ++ get_new_cred((struct cred *)new); + alter_cred_subscribers(new, 1); + rcu_assign_pointer(current->cred, new); + alter_cred_subscribers(old, -1); +@@ -619,6 +635,7 @@ struct cred *prepare_kernel_cred(struct task_struct *daemon) + validate_creds(old); + + *new = *old; ++ new->non_rcu = 0; + atomic_set(&new->usage, 1); + set_cred_subscribers(new, 0); + get_uid(new->user); +diff --git a/kernel/fork.c b/kernel/fork.c +index e92b06351dec..1c21d13a3874 100644 +--- a/kernel/fork.c ++++ b/kernel/fork.c +@@ -389,7 +389,7 @@ void __put_task_struct(struct task_struct *tsk) + WARN_ON(tsk == current); + + cgroup_free(tsk); +- task_numa_free(tsk); ++ task_numa_free(tsk, true); + security_task_free(tsk); + exit_creds(tsk); + delayacct_tsk_free(tsk); +diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c +index 26fc428476b9..4b27aaffdf35 100644 +--- a/kernel/locking/lockdep.c ++++ b/kernel/locking/lockdep.c +@@ -3260,17 +3260,17 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, + if (depth) { + hlock = curr->held_locks + depth - 1; + if (hlock->class_idx == class_idx && nest_lock) { +- if (hlock->references) { +- /* +- * Check: unsigned int references:12, overflow. +- */ +- if (DEBUG_LOCKS_WARN_ON(hlock->references == (1 << 12)-1)) +- return 0; ++ if (!references) ++ references++; + ++ if (!hlock->references) + hlock->references++; +- } else { +- hlock->references = 2; +- } ++ ++ hlock->references += references; ++ ++ /* Overflow */ ++ if (DEBUG_LOCKS_WARN_ON(hlock->references < references)) ++ return 0; + + return 1; + } +diff --git a/kernel/locking/lockdep_proc.c b/kernel/locking/lockdep_proc.c +index a0f61effad25..75d80809c48c 100644 +--- a/kernel/locking/lockdep_proc.c ++++ b/kernel/locking/lockdep_proc.c +@@ -219,7 +219,6 @@ static void lockdep_stats_debug_show(struct seq_file *m) + + static int lockdep_stats_show(struct seq_file *m, void *v) + { +- struct lock_class *class; + unsigned long nr_unused = 0, nr_uncategorized = 0, + nr_irq_safe = 0, nr_irq_unsafe = 0, + nr_softirq_safe = 0, nr_softirq_unsafe = 0, +@@ -229,6 +228,9 @@ static int lockdep_stats_show(struct seq_file *m, void *v) + nr_hardirq_read_safe = 0, nr_hardirq_read_unsafe = 0, + sum_forward_deps = 0; + ++#ifdef CONFIG_PROVE_LOCKING ++ struct lock_class *class; ++ + list_for_each_entry(class, &all_lock_classes, lock_entry) { + + if (class->usage_mask == 0) +@@ -260,12 +262,12 @@ static int lockdep_stats_show(struct seq_file *m, void *v) + if (class->usage_mask & LOCKF_ENABLED_HARDIRQ_READ) + nr_hardirq_read_unsafe++; + +-#ifdef CONFIG_PROVE_LOCKING + sum_forward_deps += lockdep_count_forward_deps(class); +-#endif + } + #ifdef CONFIG_DEBUG_LOCKDEP + DEBUG_LOCKS_WARN_ON(debug_atomic_read(nr_unused_locks) != nr_unused); ++#endif ++ + #endif + seq_printf(m, " lock-classes: %11lu [max: %lu]\n", + nr_lock_classes, MAX_LOCKDEP_KEYS); +diff --git a/kernel/padata.c b/kernel/padata.c +index e4a8f8d9b31a..63449fc584da 100644 +--- a/kernel/padata.c ++++ b/kernel/padata.c +@@ -274,7 +274,12 @@ static void padata_reorder(struct parallel_data *pd) + * The next object that needs serialization might have arrived 
to + * the reorder queues in the meantime, we will be called again + * from the timer function if no one else cares for it. ++ * ++ * Ensure reorder_objects is read after pd->lock is dropped so we see ++ * an increment from another task in padata_do_serial. Pairs with ++ * smp_mb__after_atomic in padata_do_serial. + */ ++ smp_mb(); + if (atomic_read(&pd->reorder_objects) + && !(pinst->flags & PADATA_RESET)) + mod_timer(&pd->timer, jiffies + HZ); +@@ -343,6 +348,13 @@ void padata_do_serial(struct padata_priv *padata) + list_add_tail(&padata->list, &pqueue->reorder.list); + spin_unlock(&pqueue->reorder.lock); + ++ /* ++ * Ensure the atomic_inc of reorder_objects above is ordered correctly ++ * with the trylock of pd->lock in padata_reorder. Pairs with smp_mb ++ * in padata_reorder. ++ */ ++ smp_mb__after_atomic(); ++ + put_cpu(); + + padata_reorder(pd); +diff --git a/kernel/pid_namespace.c b/kernel/pid_namespace.c +index 3976dd57db78..0eab538841fd 100644 +--- a/kernel/pid_namespace.c ++++ b/kernel/pid_namespace.c +@@ -344,7 +344,7 @@ int reboot_pid_ns(struct pid_namespace *pid_ns, int cmd) + } + + read_lock(&tasklist_lock); +- force_sig(SIGKILL, pid_ns->child_reaper); ++ send_sig(SIGKILL, pid_ns->child_reaper, 1); + read_unlock(&tasklist_lock); + + do_exit(0); +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index f0c9b6925687..924bb307c0fa 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -2257,13 +2257,23 @@ no_join: + return; + } + +-void task_numa_free(struct task_struct *p) ++/* ++ * Get rid of NUMA staticstics associated with a task (either current or dead). ++ * If @final is set, the task is dead and has reached refcount zero, so we can ++ * safely free all relevant data structures. Otherwise, there might be ++ * concurrent reads from places like load balancing and procfs, and we should ++ * reset the data back to default state without freeing ->numa_faults. 
++ */ ++void task_numa_free(struct task_struct *p, bool final) + { + struct numa_group *grp = p->numa_group; +- void *numa_faults = p->numa_faults; ++ unsigned long *numa_faults = p->numa_faults; + unsigned long flags; + int i; + ++ if (!numa_faults) ++ return; ++ + if (grp) { + spin_lock_irqsave(&grp->lock, flags); + for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) +@@ -2276,8 +2286,14 @@ void task_numa_free(struct task_struct *p) + put_numa_group(grp); + } + +- p->numa_faults = NULL; +- kfree(numa_faults); ++ if (final) { ++ p->numa_faults = NULL; ++ kfree(numa_faults); ++ } else { ++ p->total_numa_faults = 0; ++ for (i = 0; i < NR_NUMA_HINT_FAULT_STATS * nr_node_ids; i++) ++ numa_faults[i] = 0; ++ } + } + + /* +diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c +index 0a16419006f3..4bdb59604526 100644 +--- a/kernel/time/ntp.c ++++ b/kernel/time/ntp.c +@@ -42,6 +42,7 @@ static u64 tick_length_base; + #define MAX_TICKADJ 500LL /* usecs */ + #define MAX_TICKADJ_SCALED \ + (((MAX_TICKADJ * NSEC_PER_USEC) << NTP_SCALE_SHIFT) / NTP_INTERVAL_FREQ) ++#define MAX_TAI_OFFSET 100000 + + /* + * phase-lock loop variables +@@ -639,7 +640,8 @@ static inline void process_adjtimex_modes(struct timex *txc, + time_constant = max(time_constant, 0l); + } + +- if (txc->modes & ADJ_TAI && txc->constant >= 0) ++ if (txc->modes & ADJ_TAI && ++ txc->constant >= 0 && txc->constant <= MAX_TAI_OFFSET) + *time_tai = txc->constant; + + if (txc->modes & ADJ_OFFSET) +diff --git a/kernel/time/timer_list.c b/kernel/time/timer_list.c +index 1407ed20ea93..b7c5d230b4b2 100644 +--- a/kernel/time/timer_list.c ++++ b/kernel/time/timer_list.c +@@ -299,23 +299,6 @@ static inline void timer_list_header(struct seq_file *m, u64 now) + SEQ_printf(m, "\n"); + } + +-static int timer_list_show(struct seq_file *m, void *v) +-{ +- struct timer_list_iter *iter = v; +- +- if (iter->cpu == -1 && !iter->second_pass) +- timer_list_header(m, iter->now); +- else if (!iter->second_pass) +- print_cpu(m, iter->cpu, iter->now); +-#ifdef CONFIG_GENERIC_CLOCKEVENTS +- else if (iter->cpu == -1 && iter->second_pass) +- timer_list_show_tickdevices_header(m); +- else +- print_tickdevice(m, tick_get_device(iter->cpu), iter->cpu); +-#endif +- return 0; +-} +- + void sysrq_timer_list_show(void) + { + u64 now = ktime_to_ns(ktime_get()); +@@ -334,6 +317,24 @@ void sysrq_timer_list_show(void) + return; + } + ++#ifdef CONFIG_PROC_FS ++static int timer_list_show(struct seq_file *m, void *v) ++{ ++ struct timer_list_iter *iter = v; ++ ++ if (iter->cpu == -1 && !iter->second_pass) ++ timer_list_header(m, iter->now); ++ else if (!iter->second_pass) ++ print_cpu(m, iter->cpu, iter->now); ++#ifdef CONFIG_GENERIC_CLOCKEVENTS ++ else if (iter->cpu == -1 && iter->second_pass) ++ timer_list_show_tickdevices_header(m); ++ else ++ print_tickdevice(m, tick_get_device(iter->cpu), iter->cpu); ++#endif ++ return 0; ++} ++ + static void *move_iter(struct timer_list_iter *iter, loff_t offset) + { + for (; offset; offset--) { +@@ -405,3 +406,4 @@ static int __init init_timer_list_procfs(void) + return 0; + } + __initcall(init_timer_list_procfs); ++#endif +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index ea8a2760de24..70b82f4fd417 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -5820,11 +5820,15 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt, + break; + } + #endif +- if (!tr->allocated_snapshot) { ++ if (!tr->allocated_snapshot) ++ ret = resize_buffer_duplicate_size(&tr->max_buffer, ++ 
&tr->trace_buffer, iter->cpu_file); ++ else + ret = alloc_snapshot(tr); +- if (ret < 0) +- break; +- } ++ ++ if (ret < 0) ++ break; ++ + local_irq_disable(); + /* Now, we're going to swap */ + if (iter->cpu_file == RING_BUFFER_ALL_CPUS) +diff --git a/lib/reed_solomon/decode_rs.c b/lib/reed_solomon/decode_rs.c +index 0ec3f257ffdf..a5d313381539 100644 +--- a/lib/reed_solomon/decode_rs.c ++++ b/lib/reed_solomon/decode_rs.c +@@ -42,8 +42,18 @@ + BUG_ON(pad < 0 || pad >= nn); + + /* Does the caller provide the syndrome ? */ +- if (s != NULL) +- goto decode; ++ if (s != NULL) { ++ for (i = 0; i < nroots; i++) { ++ /* The syndrome is in index form, ++ * so nn represents zero ++ */ ++ if (s[i] != nn) ++ goto decode; ++ } ++ ++ /* syndrome is zero, no errors to correct */ ++ return 0; ++ } + + /* form the syndromes; i.e., evaluate data(x) at roots of + * g(x) */ +@@ -99,9 +109,9 @@ + if (no_eras > 0) { + /* Init lambda to be the erasure locator polynomial */ + lambda[1] = alpha_to[rs_modnn(rs, +- prim * (nn - 1 - eras_pos[0]))]; ++ prim * (nn - 1 - (eras_pos[0] + pad)))]; + for (i = 1; i < no_eras; i++) { +- u = rs_modnn(rs, prim * (nn - 1 - eras_pos[i])); ++ u = rs_modnn(rs, prim * (nn - 1 - (eras_pos[i] + pad))); + for (j = i + 1; j > 0; j--) { + tmp = index_of[lambda[j - 1]]; + if (tmp != nn) { +diff --git a/lib/scatterlist.c b/lib/scatterlist.c +index 004fc70fc56a..a854cc39f084 100644 +--- a/lib/scatterlist.c ++++ b/lib/scatterlist.c +@@ -496,17 +496,18 @@ static bool sg_miter_get_next_page(struct sg_mapping_iter *miter) + { + if (!miter->__remaining) { + struct scatterlist *sg; +- unsigned long pgoffset; + + if (!__sg_page_iter_next(&miter->piter)) + return false; + + sg = miter->piter.sg; +- pgoffset = miter->piter.sg_pgoffset; + +- miter->__offset = pgoffset ? 0 : sg->offset; ++ miter->__offset = miter->piter.sg_pgoffset ? 0 : sg->offset; ++ miter->piter.sg_pgoffset += miter->__offset >> PAGE_SHIFT; ++ miter->__offset &= PAGE_SIZE - 1; + miter->__remaining = sg->offset + sg->length - +- (pgoffset << PAGE_SHIFT) - miter->__offset; ++ (miter->piter.sg_pgoffset << PAGE_SHIFT) - ++ miter->__offset; + miter->__remaining = min_t(unsigned long, miter->__remaining, + PAGE_SIZE - miter->__offset); + } +diff --git a/lib/string.c b/lib/string.c +index 1cd9757291b1..8f1a2a04e22f 100644 +--- a/lib/string.c ++++ b/lib/string.c +@@ -202,7 +202,7 @@ ssize_t strscpy(char *dest, const char *src, size_t count) + while (max >= sizeof(unsigned long)) { + unsigned long c, data; + +- c = *(unsigned long *)(src+res); ++ c = read_word_at_a_time(src+res); + if (has_zero(c, &data, &constants)) { + data = prep_zero_mask(c, data, &constants); + data = create_zero_mask(data); +diff --git a/mm/kmemleak.c b/mm/kmemleak.c +index 9e66449ed91f..d05133b37b17 100644 +--- a/mm/kmemleak.c ++++ b/mm/kmemleak.c +@@ -569,7 +569,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size, + if (in_irq()) { + object->pid = 0; + strncpy(object->comm, "hardirq", sizeof(object->comm)); +- } else if (in_softirq()) { ++ } else if (in_serving_softirq()) { + object->pid = 0; + strncpy(object->comm, "softirq", sizeof(object->comm)); + } else { +diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c +index f4259e496f83..7a66e37efb4d 100644 +--- a/mm/mmu_notifier.c ++++ b/mm/mmu_notifier.c +@@ -286,7 +286,7 @@ static int do_mmu_notifier_register(struct mmu_notifier *mn, + * thanks to mm_take_all_locks(). 
+ */ + spin_lock(&mm->mmu_notifier_mm->lock); +- hlist_add_head(&mn->hlist, &mm->mmu_notifier_mm->list); ++ hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list); + spin_unlock(&mm->mmu_notifier_mm->lock); + + mm_drop_all_locks(mm); +diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c +index e73fd647065a..f88911cffa1a 100644 +--- a/net/9p/trans_virtio.c ++++ b/net/9p/trans_virtio.c +@@ -764,10 +764,16 @@ static struct p9_trans_module p9_virtio_trans = { + /* The standard init function */ + static int __init p9_virtio_init(void) + { ++ int rc; ++ + INIT_LIST_HEAD(&virtio_chan_list); + + v9fs_register_trans(&p9_virtio_trans); +- return register_virtio_driver(&p9_virtio_drv); ++ rc = register_virtio_driver(&p9_virtio_drv); ++ if (rc) ++ v9fs_unregister_trans(&p9_virtio_trans); ++ ++ return rc; + } + + static void __exit p9_virtio_cleanup(void) +diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c +index af4a02ad8503..1fab9bcf535d 100644 +--- a/net/batman-adv/translation-table.c ++++ b/net/batman-adv/translation-table.c +@@ -3700,6 +3700,8 @@ static void batadv_tt_purge(struct work_struct *work) + + void batadv_tt_free(struct batadv_priv *bat_priv) + { ++ batadv_tvlv_handler_unregister(bat_priv, BATADV_TVLV_ROAM, 1); ++ + batadv_tvlv_container_unregister(bat_priv, BATADV_TVLV_TT, 1); + batadv_tvlv_handler_unregister(bat_priv, BATADV_TVLV_TT, 1); + +diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c +index de7b82ece499..21096c882223 100644 +--- a/net/bluetooth/6lowpan.c ++++ b/net/bluetooth/6lowpan.c +@@ -187,10 +187,16 @@ static inline struct lowpan_peer *peer_lookup_dst(struct lowpan_btle_dev *dev, + } + + if (!rt) { +- nexthop = &lowpan_cb(skb)->gw; +- +- if (ipv6_addr_any(nexthop)) +- return NULL; ++ if (ipv6_addr_any(&lowpan_cb(skb)->gw)) { ++ /* There is neither route nor gateway, ++ * probably the destination is a direct peer. 
++ */ ++ nexthop = daddr; ++ } else { ++ /* There is a known gateway ++ */ ++ nexthop = &lowpan_cb(skb)->gw; ++ } + } else { + nexthop = rt6_nexthop(rt, daddr); + +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index 6f78489fdb13..163a239bda91 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -5089,6 +5089,11 @@ static void hci_le_remote_conn_param_req_evt(struct hci_dev *hdev, + return send_conn_param_neg_reply(hdev, handle, + HCI_ERROR_UNKNOWN_CONN_ID); + ++ if (min < hcon->le_conn_min_interval || ++ max > hcon->le_conn_max_interval) ++ return send_conn_param_neg_reply(hdev, handle, ++ HCI_ERROR_INVALID_LL_PARAMS); ++ + if (hci_check_conn_params(min, max, latency, timeout)) + return send_conn_param_neg_reply(hdev, handle, + HCI_ERROR_INVALID_LL_PARAMS); +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index ec9b5d159591..4912e80dacef 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -4374,6 +4374,12 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, + + l2cap_chan_lock(chan); + ++ if (chan->state != BT_DISCONN) { ++ l2cap_chan_unlock(chan); ++ mutex_unlock(&conn->chan_lock); ++ return 0; ++ } ++ + l2cap_chan_hold(chan); + l2cap_chan_del(chan, 0); + +@@ -5271,7 +5277,14 @@ static inline int l2cap_conn_param_update_req(struct l2cap_conn *conn, + + memset(&rsp, 0, sizeof(rsp)); + +- err = hci_check_conn_params(min, max, latency, to_multiplier); ++ if (min < hcon->le_conn_min_interval || ++ max > hcon->le_conn_max_interval) { ++ BT_DBG("requested connection interval exceeds current bounds."); ++ err = -EINVAL; ++ } else { ++ err = hci_check_conn_params(min, max, latency, to_multiplier); ++ } ++ + if (err) + rsp.result = cpu_to_le16(L2CAP_CONN_PARAM_REJECTED); + else +diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c +index 1abfbcd8090a..6670b7ffc200 100644 +--- a/net/bluetooth/smp.c ++++ b/net/bluetooth/smp.c +@@ -2514,6 +2514,19 @@ static int smp_cmd_ident_addr_info(struct l2cap_conn *conn, + goto distribute; + } + ++ /* Drop IRK if peer is using identity address during pairing but is ++ * providing different address as identity information. ++ * ++ * Microsoft Surface Precision Mouse is known to have this bug. 
++ */ ++ if (hci_is_identity_address(&hcon->dst, hcon->dst_type) && ++ (bacmp(&info->bdaddr, &hcon->dst) || ++ info->addr_type != hcon->dst_type)) { ++ bt_dev_err(hcon->hdev, ++ "ignoring IRK with invalid identity address"); ++ goto distribute; ++ } ++ + bacpy(&smp->id_addr, &info->bdaddr); + smp->id_addr_type = info->addr_type; + +diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c +index 964ffff90432..3626174456b7 100644 +--- a/net/bridge/br_multicast.c ++++ b/net/bridge/br_multicast.c +@@ -1036,6 +1036,7 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, + int type; + int err = 0; + __be32 group; ++ u16 nsrcs; + + ih = igmpv3_report_hdr(skb); + num = ntohs(ih->ngrec); +@@ -1049,8 +1050,9 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, + grec = (void *)(skb->data + len - sizeof(*grec)); + group = grec->grec_mca; + type = grec->grec_type; ++ nsrcs = ntohs(grec->grec_nsrcs); + +- len += ntohs(grec->grec_nsrcs) * 4; ++ len += nsrcs * 4; + if (!pskb_may_pull(skb, len)) + return -EINVAL; + +@@ -1070,7 +1072,7 @@ static int br_ip4_multicast_igmp3_report(struct net_bridge *br, + + if ((type == IGMPV3_CHANGE_TO_INCLUDE || + type == IGMPV3_MODE_IS_INCLUDE) && +- ntohs(grec->grec_nsrcs) == 0) { ++ nsrcs == 0) { + br_ip4_multicast_leave_group(br, port, group, vid); + } else { + err = br_ip4_multicast_add_group(br, port, group, vid); +@@ -1103,23 +1105,26 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br, + len = skb_transport_offset(skb) + sizeof(*icmp6h); + + for (i = 0; i < num; i++) { +- __be16 *nsrcs, _nsrcs; +- +- nsrcs = skb_header_pointer(skb, +- len + offsetof(struct mld2_grec, +- grec_nsrcs), +- sizeof(_nsrcs), &_nsrcs); +- if (!nsrcs) ++ __be16 *_nsrcs, __nsrcs; ++ u16 nsrcs; ++ ++ _nsrcs = skb_header_pointer(skb, ++ len + offsetof(struct mld2_grec, ++ grec_nsrcs), ++ sizeof(__nsrcs), &__nsrcs); ++ if (!_nsrcs) + return -EINVAL; + ++ nsrcs = ntohs(*_nsrcs); ++ + if (!pskb_may_pull(skb, + len + sizeof(*grec) + +- sizeof(struct in6_addr) * ntohs(*nsrcs))) ++ sizeof(struct in6_addr) * nsrcs)) + return -EINVAL; + + grec = (struct mld2_grec *)(skb->data + len); + len += sizeof(*grec) + +- sizeof(struct in6_addr) * ntohs(*nsrcs); ++ sizeof(struct in6_addr) * nsrcs; + + /* We treat these as MLDv1 reports for now. 
*/ + switch (grec->grec_type) { +@@ -1137,7 +1142,7 @@ static int br_ip6_multicast_mld2_report(struct net_bridge *br, + + if ((grec->grec_type == MLD2_CHANGE_TO_INCLUDE || + grec->grec_type == MLD2_MODE_IS_INCLUDE) && +- ntohs(*nsrcs) == 0) { ++ nsrcs == 0) { + br_ip6_multicast_leave_group(br, port, &grec->grec_mca, + vid); + } else { +@@ -1374,7 +1379,6 @@ static int br_ip6_multicast_query(struct net_bridge *br, + struct sk_buff *skb, + u16 vid) + { +- const struct ipv6hdr *ip6h = ipv6_hdr(skb); + struct mld_msg *mld; + struct net_bridge_mdb_entry *mp; + struct mld2_query *mld2q; +@@ -1418,7 +1422,7 @@ static int br_ip6_multicast_query(struct net_bridge *br, + + if (is_general_query) { + saddr.proto = htons(ETH_P_IPV6); +- saddr.u.ip6 = ip6h->saddr; ++ saddr.u.ip6 = ipv6_hdr(skb)->saddr; + + br_multicast_query_received(br, port, &br->ip6_other_query, + &saddr, max_delay); +diff --git a/net/bridge/br_stp_bpdu.c b/net/bridge/br_stp_bpdu.c +index 5881fbc114a9..36282eb3492d 100644 +--- a/net/bridge/br_stp_bpdu.c ++++ b/net/bridge/br_stp_bpdu.c +@@ -147,7 +147,6 @@ void br_send_tcn_bpdu(struct net_bridge_port *p) + void br_stp_rcv(const struct stp_proto *proto, struct sk_buff *skb, + struct net_device *dev) + { +- const unsigned char *dest = eth_hdr(skb)->h_dest; + struct net_bridge_port *p; + struct net_bridge *br; + const unsigned char *buf; +@@ -176,7 +175,7 @@ void br_stp_rcv(const struct stp_proto *proto, struct sk_buff *skb, + if (p->state == BR_STATE_DISABLED) + goto out; + +- if (!ether_addr_equal(dest, br->group_addr)) ++ if (!ether_addr_equal(eth_hdr(skb)->h_dest, br->group_addr)) + goto out; + + if (p->flags & BR_BPDU_GUARD) { +diff --git a/net/core/neighbour.c b/net/core/neighbour.c +index 01cdfe85bb09..6e964fec45cf 100644 +--- a/net/core/neighbour.c ++++ b/net/core/neighbour.c +@@ -982,6 +982,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) + + atomic_set(&neigh->probes, + NEIGH_VAR(neigh->parms, UCAST_PROBES)); ++ neigh_del_timer(neigh); + neigh->nud_state = NUD_INCOMPLETE; + neigh->updated = now; + next = now + max(NEIGH_VAR(neigh->parms, RETRANS_TIME), +@@ -998,6 +999,7 @@ int __neigh_event_send(struct neighbour *neigh, struct sk_buff *skb) + } + } else if (neigh->nud_state & NUD_STALE) { + neigh_dbg(2, "neigh %p is delayed\n", neigh); ++ neigh_del_timer(neigh); + neigh->nud_state = NUD_DELAY; + neigh->updated = jiffies; + neigh_add_timer(neigh, jiffies + +diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c +index f08f984ebc56..93438113d136 100644 +--- a/net/ipv4/devinet.c ++++ b/net/ipv4/devinet.c +@@ -67,6 +67,11 @@ + + #include "fib_lookup.h" + ++#define IPV6ONLY_FLAGS \ ++ (IFA_F_NODAD | IFA_F_OPTIMISTIC | IFA_F_DADFAILED | \ ++ IFA_F_HOMEADDRESS | IFA_F_TENTATIVE | \ ++ IFA_F_MANAGETEMPADDR | IFA_F_STABLE_PRIVACY) ++ + static struct ipv4_devconf ipv4_devconf = { + .data = { + [IPV4_DEVCONF_ACCEPT_REDIRECTS - 1] = 1, +@@ -453,6 +458,9 @@ static int __inet_insert_ifa(struct in_ifaddr *ifa, struct nlmsghdr *nlh, + ifa->ifa_flags &= ~IFA_F_SECONDARY; + last_primary = &in_dev->ifa_list; + ++ /* Don't set IPv6 only flags to IPv4 addresses */ ++ ifa->ifa_flags &= ~IPV6ONLY_FLAGS; ++ + for (ifap = &in_dev->ifa_list; (ifa1 = *ifap) != NULL; + ifap = &ifa1->ifa_next) { + if (!(ifa1->ifa_flags & IFA_F_SECONDARY) && +diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c +index 780dc6fe899d..02c1736c0b89 100644 +--- a/net/ipv4/igmp.c ++++ b/net/ipv4/igmp.c +@@ -1212,12 +1212,8 @@ static void igmpv3_del_delrec(struct in_device *in_dev, struct ip_mc_list *im) + 
im->interface = pmc->interface; + im->crcount = in_dev->mr_qrv ?: net->ipv4.sysctl_igmp_qrv; + if (im->sfmode == MCAST_INCLUDE) { +- im->tomb = pmc->tomb; +- pmc->tomb = NULL; +- +- im->sources = pmc->sources; +- pmc->sources = NULL; +- ++ swap(im->tomb, pmc->tomb); ++ swap(im->sources, pmc->sources); + for (psf = im->sources; psf; psf = psf->sf_next) + psf->sf_crcount = im->crcount; + } +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index ee2822a411f9..6e25524c6a74 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -2312,6 +2312,8 @@ int tcp_disconnect(struct sock *sk, int flags) + dst_release(sk->sk_rx_dst); + sk->sk_rx_dst = NULL; + tcp_saved_syn_free(tp); ++ tp->bytes_acked = 0; ++ tp->bytes_received = 0; + + WARN_ON(inet->inet_num && !icsk->icsk_bind_hash); + +diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c +index 41f67629ae59..f38b22f54c09 100644 +--- a/net/ipv6/ip6mr.c ++++ b/net/ipv6/ip6mr.c +@@ -1668,6 +1668,10 @@ int ip6_mroute_setsockopt(struct sock *sk, int optname, char __user *optval, uns + struct net *net = sock_net(sk); + struct mr6_table *mrt; + ++ if (sk->sk_type != SOCK_RAW || ++ inet_sk(sk)->inet_num != IPPROTO_ICMPV6) ++ return -EOPNOTSUPP; ++ + mrt = ip6mr_get_table(net, raw6_sk(sk)->ip6mr_table ? : RT6_TABLE_DFLT); + if (!mrt) + return -ENOENT; +@@ -1679,9 +1683,6 @@ int ip6_mroute_setsockopt(struct sock *sk, int optname, char __user *optval, uns + + switch (optname) { + case MRT6_INIT: +- if (sk->sk_type != SOCK_RAW || +- inet_sk(sk)->inet_num != IPPROTO_ICMPV6) +- return -EOPNOTSUPP; + if (optlen < sizeof(int)) + return -EINVAL; + +@@ -1818,6 +1819,10 @@ int ip6_mroute_getsockopt(struct sock *sk, int optname, char __user *optval, + struct net *net = sock_net(sk); + struct mr6_table *mrt; + ++ if (sk->sk_type != SOCK_RAW || ++ inet_sk(sk)->inet_num != IPPROTO_ICMPV6) ++ return -EOPNOTSUPP; ++ + mrt = ip6mr_get_table(net, raw6_sk(sk)->ip6mr_table ? 
: RT6_TABLE_DFLT); + if (!mrt) + return -ENOENT; +diff --git a/net/key/af_key.c b/net/key/af_key.c +index 3ba903ff2bb0..36db179d848e 100644 +--- a/net/key/af_key.c ++++ b/net/key/af_key.c +@@ -2463,8 +2463,10 @@ static int key_pol_get_resp(struct sock *sk, struct xfrm_policy *xp, const struc + goto out; + } + err = pfkey_xfrm_policy2msg(out_skb, xp, dir); +- if (err < 0) ++ if (err < 0) { ++ kfree_skb(out_skb); + goto out; ++ } + + out_hdr = (struct sadb_msg *) out_skb->data; + out_hdr->sadb_msg_version = hdr->sadb_msg_version; +@@ -2717,8 +2719,10 @@ static int dump_sp(struct xfrm_policy *xp, int dir, int count, void *ptr) + return PTR_ERR(out_skb); + + err = pfkey_xfrm_policy2msg(out_skb, xp, dir); +- if (err < 0) ++ if (err < 0) { ++ kfree_skb(out_skb); + return err; ++ } + + out_hdr = (struct sadb_msg *) out_skb->data; + out_hdr->sadb_msg_version = pfk->dump.msg_version; +diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c +index 046ae1caecea..e5888983bec4 100644 +--- a/net/netrom/af_netrom.c ++++ b/net/netrom/af_netrom.c +@@ -870,7 +870,7 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev) + unsigned short frametype, flags, window, timeout; + int ret; + +- skb->sk = NULL; /* Initially we don't know who it's for */ ++ skb_orphan(skb); + + /* + * skb->data points to the netrom frame start +@@ -968,7 +968,9 @@ int nr_rx_frame(struct sk_buff *skb, struct net_device *dev) + + window = skb->data[20]; + ++ sock_hold(make); + skb->sk = make; ++ skb->destructor = sock_efree; + make->sk_state = TCP_ESTABLISHED; + + /* Fill in his circuit details */ +diff --git a/net/nfc/nci/data.c b/net/nfc/nci/data.c +index dbd24254412a..d20383779710 100644 +--- a/net/nfc/nci/data.c ++++ b/net/nfc/nci/data.c +@@ -119,7 +119,7 @@ static int nci_queue_tx_data_frags(struct nci_dev *ndev, + conn_info = nci_get_conn_info_by_conn_id(ndev, conn_id); + if (!conn_info) { + rc = -EPROTO; +- goto free_exit; ++ goto exit; + } + + __skb_queue_head_init(&frags_q); +diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c +index 05d9f42fc309..7135aff3946d 100644 +--- a/net/openvswitch/actions.c ++++ b/net/openvswitch/actions.c +@@ -150,8 +150,7 @@ static void update_ethertype(struct sk_buff *skb, struct ethhdr *hdr, + if (skb->ip_summed == CHECKSUM_COMPLETE) { + __be16 diff[] = { ~(hdr->h_proto), ethertype }; + +- skb->csum = ~csum_partial((char *)diff, sizeof(diff), +- ~skb->csum); ++ skb->csum = csum_partial((char *)diff, sizeof(diff), skb->csum); + } + + hdr->h_proto = ethertype; +@@ -239,8 +238,7 @@ static int set_mpls(struct sk_buff *skb, struct sw_flow_key *flow_key, + if (skb->ip_summed == CHECKSUM_COMPLETE) { + __be32 diff[] = { ~(stack->label_stack_entry), lse }; + +- skb->csum = ~csum_partial((char *)diff, sizeof(diff), +- ~skb->csum); ++ skb->csum = csum_partial((char *)diff, sizeof(diff), skb->csum); + } + + stack->label_stack_entry = lse; +diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c +index 2d59c9be40e1..222c566cf25d 100644 +--- a/net/rxrpc/af_rxrpc.c ++++ b/net/rxrpc/af_rxrpc.c +@@ -405,6 +405,7 @@ static int rxrpc_sendmsg(struct socket *sock, struct msghdr *m, size_t len) + + switch (rx->sk.sk_state) { + case RXRPC_UNBOUND: ++ case RXRPC_CLIENT_UNBOUND: + rx->srx.srx_family = AF_RXRPC; + rx->srx.srx_service = 0; + rx->srx.transport_type = SOCK_DGRAM; +@@ -429,10 +430,9 @@ static int rxrpc_sendmsg(struct socket *sock, struct msghdr *m, size_t len) + } + + rx->local = local; +- rx->sk.sk_state = RXRPC_CLIENT_UNBOUND; ++ rx->sk.sk_state = RXRPC_CLIENT_BOUND; + /* Fall 
through */ + +- case RXRPC_CLIENT_UNBOUND: + case RXRPC_CLIENT_BOUND: + if (!m->msg_name && + test_bit(RXRPC_SOCK_CONNECTED, &rx->flags)) { +diff --git a/net/xfrm/Kconfig b/net/xfrm/Kconfig +index bda1a13628a8..c09336b5a028 100644 +--- a/net/xfrm/Kconfig ++++ b/net/xfrm/Kconfig +@@ -9,6 +9,8 @@ config XFRM_ALGO + tristate + select XFRM + select CRYPTO ++ select CRYPTO_HASH ++ select CRYPTO_BLKCIPHER + + config XFRM_USER + tristate "Transformation user configuration interface" +diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c +index ca5c79bfd9a5..f3e9d500fa5a 100644 +--- a/net/xfrm/xfrm_user.c ++++ b/net/xfrm/xfrm_user.c +@@ -150,6 +150,25 @@ static int verify_newsa_info(struct xfrm_usersa_info *p, + + err = -EINVAL; + switch (p->family) { ++ case AF_INET: ++ break; ++ ++ case AF_INET6: ++#if IS_ENABLED(CONFIG_IPV6) ++ break; ++#else ++ err = -EAFNOSUPPORT; ++ goto out; ++#endif ++ ++ default: ++ goto out; ++ } ++ ++ switch (p->sel.family) { ++ case AF_UNSPEC: ++ break; ++ + case AF_INET: + if (p->sel.prefixlen_d > 32 || p->sel.prefixlen_s > 32) + goto out; +diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c +index 1f22a186c18c..2c8b8c662da5 100644 +--- a/scripts/kallsyms.c ++++ b/scripts/kallsyms.c +@@ -161,6 +161,9 @@ static int read_symbol(FILE *in, struct sym_entry *s) + /* exclude debugging symbols */ + else if (stype == 'N') + return -1; ++ /* exclude s390 kasan local symbols */ ++ else if (!strncmp(sym, ".LASANPC", 8)) ++ return -1; + + /* include the type field in the symbol name, so that it gets + * compressed together */ +diff --git a/scripts/recordmcount.h b/scripts/recordmcount.h +index b9897e2be404..04151ede8043 100644 +--- a/scripts/recordmcount.h ++++ b/scripts/recordmcount.h +@@ -326,7 +326,8 @@ static uint_t *sift_rel_mcount(uint_t *mlocp, + if (!mcountsym) + mcountsym = get_mcountsym(sym0, relp, str0); + +- if (mcountsym == Elf_r_sym(relp) && !is_fake_mcount(relp)) { ++ if (mcountsym && mcountsym == Elf_r_sym(relp) && ++ !is_fake_mcount(relp)) { + uint_t const addend = + _w(_w(relp->r_offset) - recval + mcount_adjust); + mrelp->r_offset = _w(offbase +diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c +index bc6d371031fc..130e22742137 100644 +--- a/sound/core/seq/seq_clientmgr.c ++++ b/sound/core/seq/seq_clientmgr.c +@@ -1001,7 +1001,7 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf, + { + struct snd_seq_client *client = file->private_data; + int written = 0, len; +- int err; ++ int err, handled; + struct snd_seq_event event; + + if (!(snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT)) +@@ -1014,6 +1014,8 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf, + if (!client->accept_output || client->pool == NULL) + return -ENXIO; + ++ repeat: ++ handled = 0; + /* allocate the pool now if the pool is not allocated yet */ + mutex_lock(&client->ioctl_mutex); + if (client->pool->size > 0 && !snd_seq_write_pool_allocated(client)) { +@@ -1073,12 +1075,19 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf, + 0, 0, &client->ioctl_mutex); + if (err < 0) + break; ++ handled++; + + __skip_event: + /* Update pointers and counts */ + count -= len; + buf += len; + written += len; ++ ++ /* let's have a coffee break if too many events are queued */ ++ if (++handled >= 200) { ++ mutex_unlock(&client->ioctl_mutex); ++ goto repeat; ++ } + } + + out: +diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c +index 447b3a8a83c3..df66969b124d 100644 +--- 
a/sound/pci/hda/patch_conexant.c ++++ b/sound/pci/hda/patch_conexant.c +@@ -1011,6 +1011,7 @@ static int patch_conexant_auto(struct hda_codec *codec) + */ + + static const struct hda_device_id snd_hda_id_conexant[] = { ++ HDA_CODEC_ENTRY(0x14f11f86, "CX8070", patch_conexant_auto), + HDA_CODEC_ENTRY(0x14f12008, "CX8200", patch_conexant_auto), + HDA_CODEC_ENTRY(0x14f15045, "CX20549 (Venice)", patch_conexant_auto), + HDA_CODEC_ENTRY(0x14f15047, "CX20551 (Waikiki)", patch_conexant_auto), +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 95fb213cf94b..04d2dc7097a1 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -7272,6 +7272,11 @@ static const struct snd_hda_pin_quirk alc662_pin_fixup_tbl[] = { + {0x18, 0x01a19030}, + {0x1a, 0x01813040}, + {0x21, 0x01014020}), ++ SND_HDA_PIN_QUIRK(0x10ec0867, 0x1028, "Dell", ALC891_FIXUP_DELL_MIC_NO_PRESENCE, ++ {0x16, 0x01813030}, ++ {0x17, 0x02211010}, ++ {0x18, 0x01a19040}, ++ {0x21, 0x01014020}), + SND_HDA_PIN_QUIRK(0x10ec0662, 0x1028, "Dell", ALC662_FIXUP_DELL_MIC_NO_PRESENCE, + {0x14, 0x01014010}, + {0x18, 0x01a19020}, +diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c +index c0b6733c0623..8c4375bf34ab 100644 +--- a/sound/usb/line6/podhd.c ++++ b/sound/usb/line6/podhd.c +@@ -385,7 +385,7 @@ static const struct line6_properties podhd_properties_table[] = { + .name = "POD HD500", + .capabilities = LINE6_CAP_PCM + | LINE6_CAP_HWMON, +- .altsetting = 1, ++ .altsetting = 0, + .ep_ctrl_r = 0x81, + .ep_ctrl_w = 0x01, + .ep_audio_r = 0x86, +diff --git a/tools/iio/iio_utils.c b/tools/iio/iio_utils.c +index 7a6d61c6c012..55272fef3b50 100644 +--- a/tools/iio/iio_utils.c ++++ b/tools/iio/iio_utils.c +@@ -159,9 +159,9 @@ int iioutils_get_type(unsigned *is_signed, unsigned *bytes, unsigned *bits_used, + *be = (endianchar == 'b'); + *bytes = padint / 8; + if (*bits_used == 64) +- *mask = ~0; ++ *mask = ~(0ULL); + else +- *mask = (1ULL << *bits_used) - 1; ++ *mask = (1ULL << *bits_used) - 1ULL; + + *is_signed = (signchar == 's'); + if (fclose(sysfsfp)) { +diff --git a/tools/perf/arch/arm/util/cs-etm.c b/tools/perf/arch/arm/util/cs-etm.c +index 47d584da5819..f6cff278aa5d 100644 +--- a/tools/perf/arch/arm/util/cs-etm.c ++++ b/tools/perf/arch/arm/util/cs-etm.c +@@ -41,6 +41,8 @@ struct cs_etm_recording { + struct auxtrace_record itr; + struct perf_pmu *cs_etm_pmu; + struct perf_evlist *evlist; ++ int wrapped_cnt; ++ bool *wrapped; + bool snapshot_mode; + size_t snapshot_size; + }; +@@ -458,16 +460,131 @@ static int cs_etm_info_fill(struct auxtrace_record *itr, + return 0; + } + +-static int cs_etm_find_snapshot(struct auxtrace_record *itr __maybe_unused, ++static int cs_etm_alloc_wrapped_array(struct cs_etm_recording *ptr, int idx) ++{ ++ bool *wrapped; ++ int cnt = ptr->wrapped_cnt; ++ ++ /* Make @ptr->wrapped as big as @idx */ ++ while (cnt <= idx) ++ cnt++; ++ ++ /* ++ * Free'ed in cs_etm_recording_free(). Using realloc() to avoid ++ * cross compilation problems where the host's system supports ++ * reallocarray() but not the target. 
++ */ ++ wrapped = realloc(ptr->wrapped, cnt * sizeof(bool)); ++ if (!wrapped) ++ return -ENOMEM; ++ ++ wrapped[cnt - 1] = false; ++ ptr->wrapped_cnt = cnt; ++ ptr->wrapped = wrapped; ++ ++ return 0; ++} ++ ++static bool cs_etm_buffer_has_wrapped(unsigned char *buffer, ++ size_t buffer_size, u64 head) ++{ ++ u64 i, watermark; ++ u64 *buf = (u64 *)buffer; ++ size_t buf_size = buffer_size; ++ ++ /* ++ * We want to look the very last 512 byte (chosen arbitrarily) in ++ * the ring buffer. ++ */ ++ watermark = buf_size - 512; ++ ++ /* ++ * @head is continuously increasing - if its value is equal or greater ++ * than the size of the ring buffer, it has wrapped around. ++ */ ++ if (head >= buffer_size) ++ return true; ++ ++ /* ++ * The value of @head is somewhere within the size of the ring buffer. ++ * This can be that there hasn't been enough data to fill the ring ++ * buffer yet or the trace time was so long that @head has numerically ++ * wrapped around. To find we need to check if we have data at the very ++ * end of the ring buffer. We can reliably do this because mmap'ed ++ * pages are zeroed out and there is a fresh mapping with every new ++ * session. ++ */ ++ ++ /* @head is less than 512 byte from the end of the ring buffer */ ++ if (head > watermark) ++ watermark = head; ++ ++ /* ++ * Speed things up by using 64 bit transactions (see "u64 *buf" above) ++ */ ++ watermark >>= 3; ++ buf_size >>= 3; ++ ++ /* ++ * If we find trace data at the end of the ring buffer, @head has ++ * been there and has numerically wrapped around at least once. ++ */ ++ for (i = watermark; i < buf_size; i++) ++ if (buf[i]) ++ return true; ++ ++ return false; ++} ++ ++static int cs_etm_find_snapshot(struct auxtrace_record *itr, + int idx, struct auxtrace_mmap *mm, +- unsigned char *data __maybe_unused, ++ unsigned char *data, + u64 *head, u64 *old) + { ++ int err; ++ bool wrapped; ++ struct cs_etm_recording *ptr = ++ container_of(itr, struct cs_etm_recording, itr); ++ ++ /* ++ * Allocate memory to keep track of wrapping if this is the first ++ * time we deal with this *mm. ++ */ ++ if (idx >= ptr->wrapped_cnt) { ++ err = cs_etm_alloc_wrapped_array(ptr, idx); ++ if (err) ++ return err; ++ } ++ ++ /* ++ * Check to see if *head has wrapped around. If it hasn't only the ++ * amount of data between *head and *old is snapshot'ed to avoid ++ * bloating the perf.data file with zeros. But as soon as *head has ++ * wrapped around the entire size of the AUX ring buffer it taken. ++ */ ++ wrapped = ptr->wrapped[idx]; ++ if (!wrapped && cs_etm_buffer_has_wrapped(data, mm->len, *head)) { ++ wrapped = true; ++ ptr->wrapped[idx] = true; ++ } ++ + pr_debug3("%s: mmap index %d old head %zu new head %zu size %zu\n", + __func__, idx, (size_t)*old, (size_t)*head, mm->len); + +- *old = *head; +- *head += mm->len; ++ /* No wrap has occurred, we can just use *head and *old. */ ++ if (!wrapped) ++ return 0; ++ ++ /* ++ * *head has wrapped around - adjust *head and *old to pickup the ++ * entire content of the AUX buffer. 
++ */ ++ if (*head >= mm->len) { ++ *old = *head - mm->len; ++ } else { ++ *head += mm->len; ++ *old = *head - mm->len; ++ } + + return 0; + } +@@ -508,6 +625,8 @@ static void cs_etm_recording_free(struct auxtrace_record *itr) + { + struct cs_etm_recording *ptr = + container_of(itr, struct cs_etm_recording, itr); ++ ++ zfree(&ptr->wrapped); + free(ptr); + } + +diff --git a/tools/perf/perf.h b/tools/perf/perf.h +index 8f8d895d5b74..3b9d56125ee2 100644 +--- a/tools/perf/perf.h ++++ b/tools/perf/perf.h +@@ -23,7 +23,7 @@ static inline unsigned long long rdclock(void) + } + + #ifndef MAX_NR_CPUS +-#define MAX_NR_CPUS 1024 ++#define MAX_NR_CPUS 2048 + #endif + + extern const char *input_name; +diff --git a/tools/perf/tests/mmap-thread-lookup.c b/tools/perf/tests/mmap-thread-lookup.c +index 0c5ce44f723f..e5d6e6584001 100644 +--- a/tools/perf/tests/mmap-thread-lookup.c ++++ b/tools/perf/tests/mmap-thread-lookup.c +@@ -49,7 +49,7 @@ static void *thread_fn(void *arg) + { + struct thread_data *td = arg; + ssize_t ret; +- int go; ++ int go = 0; + + if (thread_init(td)) + return NULL; +diff --git a/tools/perf/tests/parse-events.c b/tools/perf/tests/parse-events.c +index aa9276bfe3e9..9134a0c3e99d 100644 +--- a/tools/perf/tests/parse-events.c ++++ b/tools/perf/tests/parse-events.c +@@ -12,6 +12,32 @@ + #define PERF_TP_SAMPLE_TYPE (PERF_SAMPLE_RAW | PERF_SAMPLE_TIME | \ + PERF_SAMPLE_CPU | PERF_SAMPLE_PERIOD) + ++#if defined(__s390x__) ++/* Return true if kvm module is available and loaded. Test this ++ * and retun success when trace point kvm_s390_create_vm ++ * exists. Otherwise this test always fails. ++ */ ++static bool kvm_s390_create_vm_valid(void) ++{ ++ char *eventfile; ++ bool rc = false; ++ ++ eventfile = get_events_file("kvm-s390"); ++ ++ if (eventfile) { ++ DIR *mydir = opendir(eventfile); ++ ++ if (mydir) { ++ rc = true; ++ closedir(mydir); ++ } ++ put_events_file(eventfile); ++ } ++ ++ return rc; ++} ++#endif ++ + static int test__checkevent_tracepoint(struct perf_evlist *evlist) + { + struct perf_evsel *evsel = perf_evlist__first(evlist); +@@ -1593,6 +1619,7 @@ static struct evlist_test test__events[] = { + { + .name = "kvm-s390:kvm_s390_create_vm", + .check = test__checkevent_tracepoint, ++ .valid = kvm_s390_create_vm_valid, + .id = 100, + }, + #endif +diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c +index a62f79558146..758d0108c5a5 100644 +--- a/tools/perf/util/evsel.c ++++ b/tools/perf/util/evsel.c +@@ -558,6 +558,9 @@ const char *perf_evsel__name(struct perf_evsel *evsel) + { + char bf[128]; + ++ if (!evsel) ++ goto out_unknown; ++ + if (evsel->name) + return evsel->name; + +@@ -594,7 +597,10 @@ const char *perf_evsel__name(struct perf_evsel *evsel) + + evsel->name = strdup(bf); + +- return evsel->name ?: "unknown"; ++ if (evsel->name) ++ return evsel->name; ++out_unknown: ++ return "unknown"; + } + + const char *perf_evsel__group_name(struct perf_evsel *evsel) +diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c +index de9b369d2d2e..283148104ffb 100644 +--- a/tools/perf/util/header.c ++++ b/tools/perf/util/header.c +@@ -1008,7 +1008,7 @@ static int build_caches(struct cpu_cache_level caches[], u32 size, u32 *cntp) + return 0; + } + +-#define MAX_CACHES 2000 ++#define MAX_CACHES (MAX_NR_CPUS * 4) + + static int write_cache(int fd, struct perf_header *h __maybe_unused, + struct perf_evlist *evlist __maybe_unused) +diff --git a/tools/power/cpupower/utils/cpufreq-set.c b/tools/power/cpupower/utils/cpufreq-set.c +index 1eef0aed6423..08a405593a79 100644 +--- 
a/tools/power/cpupower/utils/cpufreq-set.c ++++ b/tools/power/cpupower/utils/cpufreq-set.c +@@ -306,6 +306,8 @@ int cmd_freq_set(int argc, char **argv) + bitmask_setbit(cpus_chosen, cpus->cpu); + cpus = cpus->next; + } ++ /* Set the last cpu in related cpus list */ ++ bitmask_setbit(cpus_chosen, cpus->cpu); + cpufreq_put_related_cpus(cpus); + } + }