From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) by finch.gentoo.org (Postfix) with ESMTP id 9559B138A1A for ; Sat, 15 Nov 2014 00:32:22 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 15DD1E0C80; Sat, 15 Nov 2014 00:32:22 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 8A28FE0C80 for ; Sat, 15 Nov 2014 00:32:21 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id CB807340680 for ; Sat, 15 Nov 2014 00:32:19 +0000 (UTC)
Received: from localhost.localdomain (localhost [127.0.0.1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 56059A282 for ; Sat, 15 Nov 2014 00:32:18 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1416014963.61a9f0b369fa2233aca562d6154670c9ae433e86.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:3.14 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1023_linux-3.14.24.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 61a9f0b369fa2233aca562d6154670c9ae433e86
X-VCS-Branch: 3.14
Date: Sat, 15 Nov 2014 00:32:18 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: 551c5b15-69c0-4feb-b3d1-a953e955edac
X-Archives-Hash: 80921520d88a579759854593472eb9c9

commit:     61a9f0b369fa2233aca562d6154670c9ae433e86
Author:     Mike Pagano gentoo org>
AuthorDate: Sat Nov 15 01:29:23 2014 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Sat Nov 15 01:29:23 2014 +0000
URL:        http://sources.gentoo.org/gitweb/?p=proj/linux-patches.git;a=commit;h=61a9f0b3

Linux patch 3.14.24

---
 0000_README              |    4 +
 1023_linux-3.14.24.patch | 7091 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7095 insertions(+)

diff --git a/0000_README b/0000_README
index e75cacd..09d4451 100644
--- a/0000_README
+++ b/0000_README
@@ -134,6 +134,10 @@ Patch: 1022_linux-3.14.23.patch
 From: http://www.kernel.org
 Desc: Linux 3.14.23
 
+Patch: 1023_linux-3.14.24.patch
+From: http://www.kernel.org
+Desc: Linux 3.14.24
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1023_linux-3.14.24.patch b/1023_linux-3.14.24.patch new file mode 100644 index 0000000..0e86292 --- /dev/null +++ b/1023_linux-3.14.24.patch @@ -0,0 +1,7091 @@ +diff --git a/Makefile b/Makefile +index 135a04a26076..8fd06101c482 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 3 + PATCHLEVEL = 14 +-SUBLEVEL = 23 ++SUBLEVEL = 24 + EXTRAVERSION = + NAME = Remembering Coco + +diff --git a/arch/arc/boot/dts/nsimosci.dts b/arch/arc/boot/dts/nsimosci.dts +index 4f31b2eb5cdf..398064cef746 100644 +--- a/arch/arc/boot/dts/nsimosci.dts ++++ b/arch/arc/boot/dts/nsimosci.dts +@@ -20,7 +20,7 @@ + /* this is for console on PGU */ + /* bootargs = "console=tty0 consoleblank=0"; */ + /* this is for console on serial */ +- bootargs = "earlycon=uart8250,mmio32,0xc0000000,115200n8 console=ttyS0,115200n8 consoleblank=0 debug"; ++ bootargs = "earlycon=uart8250,mmio32,0xc0000000,115200n8 console=tty0 console=ttyS0,115200n8 consoleblank=0 debug"; + }; + + aliases { +diff --git a/arch/arc/include/asm/cache.h b/arch/arc/include/asm/cache.h +index 2fd3162ec4df..c1d3d2da1191 100644 +--- a/arch/arc/include/asm/cache.h ++++ b/arch/arc/include/asm/cache.h +@@ -55,4 +55,31 @@ extern void read_decode_cache_bcr(void); + + #endif /* !__ASSEMBLY__ */ + ++/* Instruction cache related Auxiliary registers */ ++#define ARC_REG_IC_BCR 0x77 /* Build Config reg */ ++#define ARC_REG_IC_IVIC 0x10 ++#define ARC_REG_IC_CTRL 0x11 ++#define ARC_REG_IC_IVIL 0x19 ++#if defined(CONFIG_ARC_MMU_V3) || defined (CONFIG_ARC_MMU_V4) ++#define ARC_REG_IC_PTAG 0x1E ++#endif ++ ++/* Bit val in IC_CTRL */ ++#define IC_CTRL_CACHE_DISABLE 0x1 ++ ++/* Data cache related Auxiliary registers */ ++#define ARC_REG_DC_BCR 0x72 /* Build Config reg */ ++#define ARC_REG_DC_IVDC 0x47 ++#define ARC_REG_DC_CTRL 0x48 ++#define ARC_REG_DC_IVDL 0x4A ++#define ARC_REG_DC_FLSH 0x4B ++#define ARC_REG_DC_FLDL 0x4C ++#if defined(CONFIG_ARC_MMU_V3) || defined (CONFIG_ARC_MMU_V4) ++#define ARC_REG_DC_PTAG 0x5C ++#endif ++ ++/* Bit val in DC_CTRL */ ++#define DC_CTRL_INV_MODE_FLUSH 0x40 ++#define DC_CTRL_FLUSH_STATUS 0x100 ++ + #endif /* _ASM_CACHE_H */ +diff --git a/arch/arc/include/asm/kgdb.h b/arch/arc/include/asm/kgdb.h +index b65fca7ffeb5..fea931634136 100644 +--- a/arch/arc/include/asm/kgdb.h ++++ b/arch/arc/include/asm/kgdb.h +@@ -19,7 +19,7 @@ + * register API yet */ + #undef DBG_MAX_REG_NUM + +-#define GDB_MAX_REGS 39 ++#define GDB_MAX_REGS 87 + + #define BREAK_INSTR_SIZE 2 + #define CACHE_FLUSH_IS_SAFE 1 +@@ -33,23 +33,27 @@ static inline void arch_kgdb_breakpoint(void) + + extern void kgdb_trap(struct pt_regs *regs); + +-enum arc700_linux_regnums { ++/* This is the numbering of registers according to the GDB. See GDB's ++ * arc-tdep.h for details. ++ * ++ * Registers are ordered for GDB 7.5. It is incompatible with GDB 6.8. 
*/ ++enum arc_linux_regnums { + _R0 = 0, + _R1, _R2, _R3, _R4, _R5, _R6, _R7, _R8, _R9, _R10, _R11, _R12, _R13, + _R14, _R15, _R16, _R17, _R18, _R19, _R20, _R21, _R22, _R23, _R24, + _R25, _R26, +- _BTA = 27, +- _LP_START = 28, +- _LP_END = 29, +- _LP_COUNT = 30, +- _STATUS32 = 31, +- _BLINK = 32, +- _FP = 33, +- __SP = 34, +- _EFA = 35, +- _RET = 36, +- _ORIG_R8 = 37, +- _STOP_PC = 38 ++ _FP = 27, ++ __SP = 28, ++ _R30 = 30, ++ _BLINK = 31, ++ _LP_COUNT = 60, ++ _STOP_PC = 64, ++ _RET = 64, ++ _LP_START = 65, ++ _LP_END = 66, ++ _STATUS32 = 67, ++ _ECR = 76, ++ _BTA = 82, + }; + + #else +diff --git a/arch/arc/kernel/head.S b/arch/arc/kernel/head.S +index 991997269d02..07a58f2d3077 100644 +--- a/arch/arc/kernel/head.S ++++ b/arch/arc/kernel/head.S +@@ -12,10 +12,42 @@ + * to skip certain things during boot on simulator + */ + ++#include + #include + #include +-#include + #include ++#include ++ ++.macro CPU_EARLY_SETUP ++ ++ ; Setting up Vectror Table (in case exception happens in early boot ++ sr @_int_vec_base_lds, [AUX_INTR_VEC_BASE] ++ ++ ; Disable I-cache/D-cache if kernel so configured ++ lr r5, [ARC_REG_IC_BCR] ++ breq r5, 0, 1f ; I$ doesn't exist ++ lr r5, [ARC_REG_IC_CTRL] ++#ifdef CONFIG_ARC_HAS_ICACHE ++ bclr r5, r5, 0 ; 0 - Enable, 1 is Disable ++#else ++ bset r5, r5, 0 ; I$ exists, but is not used ++#endif ++ sr r5, [ARC_REG_IC_CTRL] ++ ++1: ++ lr r5, [ARC_REG_DC_BCR] ++ breq r5, 0, 1f ; D$ doesn't exist ++ lr r5, [ARC_REG_DC_CTRL] ++ bclr r5, r5, 6 ; Invalidate (discard w/o wback) ++#ifdef CONFIG_ARC_HAS_DCACHE ++ bclr r5, r5, 0 ; Enable (+Inv) ++#else ++ bset r5, r5, 0 ; Disable (+Inv) ++#endif ++ sr r5, [ARC_REG_DC_CTRL] ++ ++1: ++.endm + + .cpu A7 + +@@ -24,13 +56,13 @@ + .globl stext + stext: + ;------------------------------------------------------------------- +- ; Don't clobber r0-r4 yet. It might have bootloader provided info ++ ; Don't clobber r0-r2 yet. It might have bootloader provided info + ;------------------------------------------------------------------- + +- sr @_int_vec_base_lds, [AUX_INTR_VEC_BASE] ++ CPU_EARLY_SETUP + + #ifdef CONFIG_SMP +- ; Only Boot (Master) proceeds. Others wait in platform dependent way ++ ; Ensure Boot (Master) proceeds. Others wait in platform dependent way + ; IDENTITY Reg [ 3 2 1 0 ] + ; (cpu-id) ^^^ => Zero for UP ARC700 + ; => #Core-ID if SMP (Master 0) +@@ -39,7 +71,8 @@ stext: + ; need to make sure only boot cpu takes this path. 
+ GET_CPU_ID r5 + cmp r5, 0 +- jnz arc_platform_smp_wait_to_boot ++ mov.ne r0, r5 ++ jne arc_platform_smp_wait_to_boot + #endif + ; Clear BSS before updating any globals + ; XXX: use ZOL here +@@ -89,7 +122,7 @@ stext: + + first_lines_of_secondary: + +- sr @_int_vec_base_lds, [AUX_INTR_VEC_BASE] ++ CPU_EARLY_SETUP + + ; setup per-cpu idle task as "current" on this CPU + ld r0, [@secondary_idle_tsk] +diff --git a/arch/arc/mm/cache_arc700.c b/arch/arc/mm/cache_arc700.c +index 400c663b21c2..1f676c4794e0 100644 +--- a/arch/arc/mm/cache_arc700.c ++++ b/arch/arc/mm/cache_arc700.c +@@ -73,37 +73,9 @@ + #include + #include + +-/* Instruction cache related Auxiliary registers */ +-#define ARC_REG_IC_BCR 0x77 /* Build Config reg */ +-#define ARC_REG_IC_IVIC 0x10 +-#define ARC_REG_IC_CTRL 0x11 +-#define ARC_REG_IC_IVIL 0x19 +-#if (CONFIG_ARC_MMU_VER > 2) +-#define ARC_REG_IC_PTAG 0x1E +-#endif +- +-/* Bit val in IC_CTRL */ +-#define IC_CTRL_CACHE_DISABLE 0x1 +- +-/* Data cache related Auxiliary registers */ +-#define ARC_REG_DC_BCR 0x72 /* Build Config reg */ +-#define ARC_REG_DC_IVDC 0x47 +-#define ARC_REG_DC_CTRL 0x48 +-#define ARC_REG_DC_IVDL 0x4A +-#define ARC_REG_DC_FLSH 0x4B +-#define ARC_REG_DC_FLDL 0x4C +-#if (CONFIG_ARC_MMU_VER > 2) +-#define ARC_REG_DC_PTAG 0x5C +-#endif +- +-/* Bit val in DC_CTRL */ +-#define DC_CTRL_INV_MODE_FLUSH 0x40 +-#define DC_CTRL_FLUSH_STATUS 0x100 +- +-char *arc_cache_mumbojumbo(int cpu_id, char *buf, int len) ++char *arc_cache_mumbojumbo(int c, char *buf, int len) + { + int n = 0; +- unsigned int c = smp_processor_id(); + + #define PR_CACHE(p, enb, str) \ + { \ +@@ -169,72 +141,43 @@ void read_decode_cache_bcr(void) + */ + void arc_cache_init(void) + { +- unsigned int cpu = smp_processor_id(); +- struct cpuinfo_arc_cache *ic = &cpuinfo_arc700[cpu].icache; +- struct cpuinfo_arc_cache *dc = &cpuinfo_arc700[cpu].dcache; +- unsigned int dcache_does_alias, temp; ++ unsigned int __maybe_unused cpu = smp_processor_id(); ++ struct cpuinfo_arc_cache __maybe_unused *ic, __maybe_unused *dc; + char str[256]; + + printk(arc_cache_mumbojumbo(0, str, sizeof(str))); + +- if (!ic->ver) +- goto chk_dc; +- +-#ifdef CONFIG_ARC_HAS_ICACHE +- /* 1. 
Confirm some of I-cache params which Linux assumes */ +- if (ic->line_len != L1_CACHE_BYTES) +- panic("Cache H/W doesn't match kernel Config"); +- +- if (ic->ver != CONFIG_ARC_MMU_VER) +- panic("Cache ver doesn't match MMU ver\n"); +-#endif +- +- /* Enable/disable I-Cache */ +- temp = read_aux_reg(ARC_REG_IC_CTRL); +- + #ifdef CONFIG_ARC_HAS_ICACHE +- temp &= ~IC_CTRL_CACHE_DISABLE; +-#else +- temp |= IC_CTRL_CACHE_DISABLE; ++ ic = &cpuinfo_arc700[cpu].icache; ++ if (ic->ver) { ++ if (ic->line_len != L1_CACHE_BYTES) ++ panic("ICache line [%d] != kernel Config [%d]", ++ ic->line_len, L1_CACHE_BYTES); ++ ++ if (ic->ver != CONFIG_ARC_MMU_VER) ++ panic("Cache ver [%d] doesn't match MMU ver [%d]\n", ++ ic->ver, CONFIG_ARC_MMU_VER); ++ } + #endif + +- write_aux_reg(ARC_REG_IC_CTRL, temp); +- +-chk_dc: +- if (!dc->ver) +- return; +- + #ifdef CONFIG_ARC_HAS_DCACHE +- if (dc->line_len != L1_CACHE_BYTES) +- panic("Cache H/W doesn't match kernel Config"); ++ dc = &cpuinfo_arc700[cpu].dcache; ++ if (dc->ver) { ++ unsigned int dcache_does_alias; + +- /* check for D-Cache aliasing */ +- dcache_does_alias = (dc->sz / dc->assoc) > PAGE_SIZE; ++ if (dc->line_len != L1_CACHE_BYTES) ++ panic("DCache line [%d] != kernel Config [%d]", ++ dc->line_len, L1_CACHE_BYTES); + +- if (dcache_does_alias && !cache_is_vipt_aliasing()) +- panic("Enable CONFIG_ARC_CACHE_VIPT_ALIASING\n"); +- else if (!dcache_does_alias && cache_is_vipt_aliasing()) +- panic("Don't need CONFIG_ARC_CACHE_VIPT_ALIASING\n"); +-#endif +- +- /* Set the default Invalidate Mode to "simpy discard dirty lines" +- * as this is more frequent then flush before invalidate +- * Ofcourse we toggle this default behviour when desired +- */ +- temp = read_aux_reg(ARC_REG_DC_CTRL); +- temp &= ~DC_CTRL_INV_MODE_FLUSH; ++ /* check for D-Cache aliasing */ ++ dcache_does_alias = (dc->sz / dc->assoc) > PAGE_SIZE; + +-#ifdef CONFIG_ARC_HAS_DCACHE +- /* Enable D-Cache: Clear Bit 0 */ +- write_aux_reg(ARC_REG_DC_CTRL, temp & ~IC_CTRL_CACHE_DISABLE); +-#else +- /* Flush D cache */ +- write_aux_reg(ARC_REG_DC_FLSH, 0x1); +- /* Disable D cache */ +- write_aux_reg(ARC_REG_DC_CTRL, temp | IC_CTRL_CACHE_DISABLE); ++ if (dcache_does_alias && !cache_is_vipt_aliasing()) ++ panic("Enable CONFIG_ARC_CACHE_VIPT_ALIASING\n"); ++ else if (!dcache_does_alias && cache_is_vipt_aliasing()) ++ panic("Don't need CONFIG_ARC_CACHE_VIPT_ALIASING\n"); ++ } + #endif +- +- return; + } + + #define OP_INV 0x1 +@@ -254,12 +197,16 @@ static inline void __cache_line_loop(unsigned long paddr, unsigned long vaddr, + + if (cacheop == OP_INV_IC) { + aux_cmd = ARC_REG_IC_IVIL; ++#if (CONFIG_ARC_MMU_VER > 2) + aux_tag = ARC_REG_IC_PTAG; ++#endif + } + else { + /* d$ cmd: INV (discard or wback-n-discard) OR FLUSH (wback) */ + aux_cmd = cacheop & OP_INV ? 
ARC_REG_DC_IVDL : ARC_REG_DC_FLDL; ++#if (CONFIG_ARC_MMU_VER > 2) + aux_tag = ARC_REG_DC_PTAG; ++#endif + } + + /* Ensure we properly floor/ceil the non-line aligned/sized requests +diff --git a/arch/mips/include/asm/ftrace.h b/arch/mips/include/asm/ftrace.h +index 992aaba603b5..b463f2aa5a61 100644 +--- a/arch/mips/include/asm/ftrace.h ++++ b/arch/mips/include/asm/ftrace.h +@@ -24,7 +24,7 @@ do { \ + asm volatile ( \ + "1: " load " %[tmp_dst], 0(%[tmp_src])\n" \ + " li %[tmp_err], 0\n" \ +- "2:\n" \ ++ "2: .insn\n" \ + \ + ".section .fixup, \"ax\"\n" \ + "3: li %[tmp_err], 1\n" \ +@@ -46,7 +46,7 @@ do { \ + asm volatile ( \ + "1: " store " %[tmp_src], 0(%[tmp_dst])\n"\ + " li %[tmp_err], 0\n" \ +- "2:\n" \ ++ "2: .insn\n" \ + \ + ".section .fixup, \"ax\"\n" \ + "3: li %[tmp_err], 1\n" \ +diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c +index 65d452aa1fda..dd012c599ad1 100644 +--- a/arch/mips/mm/tlbex.c ++++ b/arch/mips/mm/tlbex.c +@@ -1057,6 +1057,7 @@ static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep) + struct mips_huge_tlb_info { + int huge_pte; + int restore_scratch; ++ bool need_reload_pte; + }; + + static struct mips_huge_tlb_info +@@ -1071,6 +1072,7 @@ build_fast_tlb_refill_handler (u32 **p, struct uasm_label **l, + + rv.huge_pte = scratch; + rv.restore_scratch = 0; ++ rv.need_reload_pte = false; + + if (check_for_high_segbits) { + UASM_i_MFC0(p, tmp, C0_BADVADDR); +@@ -1259,6 +1261,7 @@ static void build_r4000_tlb_refill_handler(void) + } else { + htlb_info.huge_pte = K0; + htlb_info.restore_scratch = 0; ++ htlb_info.need_reload_pte = true; + vmalloc_mode = refill_noscratch; + /* + * create the plain linear handler +@@ -1295,7 +1298,8 @@ static void build_r4000_tlb_refill_handler(void) + } + #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT + uasm_l_tlb_huge_update(&l, p); +- UASM_i_LW(&p, K0, 0, K1); ++ if (htlb_info.need_reload_pte) ++ UASM_i_LW(&p, htlb_info.huge_pte, 0, K1); + build_huge_update_entries(&p, htlb_info.huge_pte, K1); + build_huge_tlb_write_entry(&p, &l, &r, K0, tlb_random, + htlb_info.restore_scratch); +diff --git a/arch/powerpc/platforms/pseries/dlpar.c b/arch/powerpc/platforms/pseries/dlpar.c +index a8fe5aa3d34f..3b46eed1dcf6 100644 +--- a/arch/powerpc/platforms/pseries/dlpar.c ++++ b/arch/powerpc/platforms/pseries/dlpar.c +@@ -380,7 +380,7 @@ static int dlpar_online_cpu(struct device_node *dn) + BUG_ON(get_cpu_current_state(cpu) + != CPU_STATE_OFFLINE); + cpu_maps_update_done(); +- rc = cpu_up(cpu); ++ rc = device_online(get_cpu_device(cpu)); + if (rc) + goto out; + cpu_maps_update_begin(); +@@ -463,7 +463,7 @@ static int dlpar_offline_cpu(struct device_node *dn) + if (get_cpu_current_state(cpu) == CPU_STATE_ONLINE) { + set_preferred_offline_state(cpu, CPU_STATE_OFFLINE); + cpu_maps_update_done(); +- rc = cpu_down(cpu); ++ rc = device_offline(get_cpu_device(cpu)); + if (rc) + goto out; + cpu_maps_update_begin(); +diff --git a/arch/sh/kernel/cpu/sh3/setup-sh770x.c b/arch/sh/kernel/cpu/sh3/setup-sh770x.c +index ff1465c0519c..5acf89c1afc5 100644 +--- a/arch/sh/kernel/cpu/sh3/setup-sh770x.c ++++ b/arch/sh/kernel/cpu/sh3/setup-sh770x.c +@@ -118,7 +118,7 @@ static struct plat_sci_port scif0_platform_data = { + }; + + static struct resource scif0_resources[] = { +- DEFINE_RES_MEM(0xfffffe80, 0x100), ++ DEFINE_RES_MEM(0xfffffe80, 0x10), + DEFINE_RES_IRQ(evt2irq(0x4e0)), + }; + +@@ -143,7 +143,7 @@ static struct plat_sci_port scif1_platform_data = { + }; + + static struct resource scif1_resources[] = { +- DEFINE_RES_MEM(0xa4000150, 0x100), ++ 
DEFINE_RES_MEM(0xa4000150, 0x10), + DEFINE_RES_IRQ(evt2irq(0x900)), + }; + +@@ -169,7 +169,7 @@ static struct plat_sci_port scif2_platform_data = { + }; + + static struct resource scif2_resources[] = { +- DEFINE_RES_MEM(0xa4000140, 0x100), ++ DEFINE_RES_MEM(0xa4000140, 0x10), + DEFINE_RES_IRQ(evt2irq(0x880)), + }; + +diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c +index 3716e6952554..e8ab93c3e638 100644 +--- a/arch/um/drivers/ubd_kern.c ++++ b/arch/um/drivers/ubd_kern.c +@@ -1277,7 +1277,7 @@ static void do_ubd_request(struct request_queue *q) + + while(1){ + struct ubd *dev = q->queuedata; +- if(dev->end_sg == 0){ ++ if(dev->request == NULL){ + struct request *req = blk_fetch_request(q); + if(req == NULL) + return; +@@ -1299,7 +1299,8 @@ static void do_ubd_request(struct request_queue *q) + return; + } + prepare_flush_request(req, io_req); +- submit_request(io_req, dev); ++ if (submit_request(io_req, dev) == false) ++ return; + } + + while(dev->start_sg < dev->end_sg){ +diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig +index e4098912fef2..98aa930230ec 100644 +--- a/arch/x86/Kconfig ++++ b/arch/x86/Kconfig +@@ -2436,12 +2436,9 @@ config X86_DMA_REMAP + depends on STA2X11 + + config IOSF_MBI +- bool ++ tristate ++ default m + depends on PCI +- ---help--- +- To be selected by modules requiring access to the Intel OnChip System +- Fabric (IOSF) Sideband MailBox Interface (MBI). For MBI platforms +- enumerable by PCI. + + source "net/Kconfig" + +diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S +index 4299eb05023c..92a2e9333620 100644 +--- a/arch/x86/ia32/ia32entry.S ++++ b/arch/x86/ia32/ia32entry.S +@@ -151,6 +151,16 @@ ENTRY(ia32_sysenter_target) + 1: movl (%rbp),%ebp + _ASM_EXTABLE(1b,ia32_badarg) + ASM_CLAC ++ ++ /* ++ * Sysenter doesn't filter flags, so we need to clear NT ++ * ourselves. To save a few cycles, we can check whether ++ * NT was set instead of doing an unconditional popfq. 
++ */ ++ testl $X86_EFLAGS_NT,EFLAGS-ARGOFFSET(%rsp) ++ jnz sysenter_fix_flags ++sysenter_flags_fixed: ++ + orl $TS_COMPAT,TI_status+THREAD_INFO(%rsp,RIP-ARGOFFSET) + testl $_TIF_WORK_SYSCALL_ENTRY,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) + CFI_REMEMBER_STATE +@@ -184,6 +194,8 @@ sysexit_from_sys_call: + TRACE_IRQS_ON + ENABLE_INTERRUPTS_SYSEXIT32 + ++ CFI_RESTORE_STATE ++ + #ifdef CONFIG_AUDITSYSCALL + .macro auditsys_entry_common + movl %esi,%r9d /* 6th arg: 4th syscall arg */ +@@ -226,7 +238,6 @@ sysexit_from_sys_call: + .endm + + sysenter_auditsys: +- CFI_RESTORE_STATE + auditsys_entry_common + movl %ebp,%r9d /* reload 6th syscall arg */ + jmp sysenter_dispatch +@@ -235,6 +246,11 @@ sysexit_audit: + auditsys_exit sysexit_from_sys_call + #endif + ++sysenter_fix_flags: ++ pushq_cfi $(X86_EFLAGS_IF|X86_EFLAGS_FIXED) ++ popfq_cfi ++ jmp sysenter_flags_fixed ++ + sysenter_tracesys: + #ifdef CONFIG_AUDITSYSCALL + testl $(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET) +diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h +index 9c999c1674fa..01f15b227d7e 100644 +--- a/arch/x86/include/asm/elf.h ++++ b/arch/x86/include/asm/elf.h +@@ -155,8 +155,9 @@ do { \ + #define elf_check_arch(x) \ + ((x)->e_machine == EM_X86_64) + +-#define compat_elf_check_arch(x) \ +- (elf_check_arch_ia32(x) || (x)->e_machine == EM_X86_64) ++#define compat_elf_check_arch(x) \ ++ (elf_check_arch_ia32(x) || \ ++ (IS_ENABLED(CONFIG_X86_X32_ABI) && (x)->e_machine == EM_X86_64)) + + #if __USER32_DS != __USER_DS + # error "The following code assumes __USER32_DS == __USER_DS" +diff --git a/arch/x86/include/asm/iosf_mbi.h b/arch/x86/include/asm/iosf_mbi.h +index 8e71c7941767..57995f0596a6 100644 +--- a/arch/x86/include/asm/iosf_mbi.h ++++ b/arch/x86/include/asm/iosf_mbi.h +@@ -50,6 +50,32 @@ + #define BT_MBI_PCIE_READ 0x00 + #define BT_MBI_PCIE_WRITE 0x01 + ++/* Quark available units */ ++#define QRK_MBI_UNIT_HBA 0x00 ++#define QRK_MBI_UNIT_HB 0x03 ++#define QRK_MBI_UNIT_RMU 0x04 ++#define QRK_MBI_UNIT_MM 0x05 ++#define QRK_MBI_UNIT_MMESRAM 0x05 ++#define QRK_MBI_UNIT_SOC 0x31 ++ ++/* Quark read/write opcodes */ ++#define QRK_MBI_HBA_READ 0x10 ++#define QRK_MBI_HBA_WRITE 0x11 ++#define QRK_MBI_HB_READ 0x10 ++#define QRK_MBI_HB_WRITE 0x11 ++#define QRK_MBI_RMU_READ 0x10 ++#define QRK_MBI_RMU_WRITE 0x11 ++#define QRK_MBI_MM_READ 0x10 ++#define QRK_MBI_MM_WRITE 0x11 ++#define QRK_MBI_MMESRAM_READ 0x12 ++#define QRK_MBI_MMESRAM_WRITE 0x13 ++#define QRK_MBI_SOC_READ 0x06 ++#define QRK_MBI_SOC_WRITE 0x07 ++ ++#if IS_ENABLED(CONFIG_IOSF_MBI) ++ ++bool iosf_mbi_available(void); ++ + /** + * iosf_mbi_read() - MailBox Interface read command + * @port: port indicating subunit being accessed +@@ -87,4 +113,33 @@ int iosf_mbi_write(u8 port, u8 opcode, u32 offset, u32 mdr); + */ + int iosf_mbi_modify(u8 port, u8 opcode, u32 offset, u32 mdr, u32 mask); + ++#else /* CONFIG_IOSF_MBI is not enabled */ ++static inline ++bool iosf_mbi_available(void) ++{ ++ return false; ++} ++ ++static inline ++int iosf_mbi_read(u8 port, u8 opcode, u32 offset, u32 *mdr) ++{ ++ WARN(1, "IOSF_MBI driver not available"); ++ return -EPERM; ++} ++ ++static inline ++int iosf_mbi_write(u8 port, u8 opcode, u32 offset, u32 mdr) ++{ ++ WARN(1, "IOSF_MBI driver not available"); ++ return -EPERM; ++} ++ ++static inline ++int iosf_mbi_modify(u8 port, u8 opcode, u32 offset, u32 mdr, u32 mask) ++{ ++ WARN(1, "IOSF_MBI driver not available"); ++ return -EPERM; ++} ++#endif /* CONFIG_IOSF_MBI */ ++ + #endif /* 
IOSF_MBI_SYMS_H */ +diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h +index ac63ea4af5b0..e9dc02968cf8 100644 +--- a/arch/x86/include/asm/kvm_host.h ++++ b/arch/x86/include/asm/kvm_host.h +@@ -984,6 +984,20 @@ static inline void kvm_inject_gp(struct kvm_vcpu *vcpu, u32 error_code) + kvm_queue_exception_e(vcpu, GP_VECTOR, error_code); + } + ++static inline u64 get_canonical(u64 la) ++{ ++ return ((int64_t)la << 16) >> 16; ++} ++ ++static inline bool is_noncanonical_address(u64 la) ++{ ++#ifdef CONFIG_X86_64 ++ return get_canonical(la) != la; ++#else ++ return false; ++#endif ++} ++ + #define TSS_IOPB_BASE_OFFSET 0x66 + #define TSS_BASE_SIZE 0x68 + #define TSS_IOPB_SIZE (65536 / 8) +@@ -1042,7 +1056,7 @@ int kvm_cpu_get_interrupt(struct kvm_vcpu *v); + void kvm_vcpu_reset(struct kvm_vcpu *vcpu); + + void kvm_define_shared_msr(unsigned index, u32 msr); +-void kvm_set_shared_msr(unsigned index, u64 val, u64 mask); ++int kvm_set_shared_msr(unsigned index, u64 val, u64 mask); + + bool kvm_is_linear_rip(struct kvm_vcpu *vcpu, unsigned long linear_rip); + +diff --git a/arch/x86/include/uapi/asm/vmx.h b/arch/x86/include/uapi/asm/vmx.h +index 0e79420376eb..990a2fe1588d 100644 +--- a/arch/x86/include/uapi/asm/vmx.h ++++ b/arch/x86/include/uapi/asm/vmx.h +@@ -67,6 +67,7 @@ + #define EXIT_REASON_EPT_MISCONFIG 49 + #define EXIT_REASON_INVEPT 50 + #define EXIT_REASON_PREEMPTION_TIMER 52 ++#define EXIT_REASON_INVVPID 53 + #define EXIT_REASON_WBINVD 54 + #define EXIT_REASON_XSETBV 55 + #define EXIT_REASON_APIC_WRITE 56 +@@ -114,6 +115,7 @@ + { EXIT_REASON_EOI_INDUCED, "EOI_INDUCED" }, \ + { EXIT_REASON_INVALID_STATE, "INVALID_STATE" }, \ + { EXIT_REASON_INVD, "INVD" }, \ ++ { EXIT_REASON_INVVPID, "INVVPID" }, \ + { EXIT_REASON_INVPCID, "INVPCID" } + + #endif /* _UAPIVMX_H */ +diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c +index 7f26c9a70a9e..523f147b2470 100644 +--- a/arch/x86/kernel/apic/apic.c ++++ b/arch/x86/kernel/apic/apic.c +@@ -1290,7 +1290,7 @@ void setup_local_APIC(void) + unsigned int value, queued; + int i, j, acked = 0; + unsigned long long tsc = 0, ntsc; +- long long max_loops = cpu_khz; ++ long long max_loops = cpu_khz ? 
cpu_khz : 1000000; + + if (cpu_has_tsc) + rdtscll(tsc); +@@ -1387,7 +1387,7 @@ void setup_local_APIC(void) + break; + } + if (queued) { +- if (cpu_has_tsc) { ++ if (cpu_has_tsc && cpu_khz) { + rdtscll(ntsc); + max_loops = (cpu_khz << 10) - (ntsc - tsc); + } else +diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c +index 8e28bf2fc3ef..3f27f5fd0847 100644 +--- a/arch/x86/kernel/cpu/common.c ++++ b/arch/x86/kernel/cpu/common.c +@@ -1141,7 +1141,7 @@ void syscall_init(void) + /* Flags to clear on syscall */ + wrmsrl(MSR_SYSCALL_MASK, + X86_EFLAGS_TF|X86_EFLAGS_DF|X86_EFLAGS_IF| +- X86_EFLAGS_IOPL|X86_EFLAGS_AC); ++ X86_EFLAGS_IOPL|X86_EFLAGS_AC|X86_EFLAGS_NT); + } + + /* +diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c +index c1a07d33e67e..66746a880dec 100644 +--- a/arch/x86/kernel/cpu/intel.c ++++ b/arch/x86/kernel/cpu/intel.c +@@ -383,6 +383,13 @@ static void init_intel(struct cpuinfo_x86 *c) + detect_extended_topology(c); + + l2 = init_intel_cacheinfo(c); ++ ++ /* Detect legacy cache sizes if init_intel_cacheinfo did not */ ++ if (l2 == 0) { ++ cpu_detect_cache_sizes(c); ++ l2 = c->x86_cache_size; ++ } ++ + if (c->cpuid_level > 9) { + unsigned eax = cpuid_eax(10); + /* Check for version and the number of counters */ +@@ -497,6 +504,13 @@ static unsigned int intel_size_cache(struct cpuinfo_x86 *c, unsigned int size) + */ + if ((c->x86 == 6) && (c->x86_model == 11) && (size == 0)) + size = 256; ++ ++ /* ++ * Intel Quark SoC X1000 contains a 4-way set associative ++ * 16K cache with a 16 byte cache line and 256 lines per tag ++ */ ++ if ((c->x86 == 5) && (c->x86_model == 9)) ++ size = 16; + return size; + } + #endif +@@ -724,7 +738,8 @@ static const struct cpu_dev intel_cpu_dev = { + [3] = "OverDrive PODP5V83", + [4] = "Pentium MMX", + [7] = "Mobile Pentium 75 - 200", +- [8] = "Mobile Pentium MMX" ++ [8] = "Mobile Pentium MMX", ++ [9] = "Quark SoC X1000", + } + }, + { .family = 6, .model_names = +diff --git a/arch/x86/kernel/iosf_mbi.c b/arch/x86/kernel/iosf_mbi.c +index c3aae6672843..2e97b3cfa6c7 100644 +--- a/arch/x86/kernel/iosf_mbi.c ++++ b/arch/x86/kernel/iosf_mbi.c +@@ -25,6 +25,10 @@ + + #include + ++#define PCI_DEVICE_ID_BAYTRAIL 0x0F00 ++#define PCI_DEVICE_ID_BRASWELL 0x2280 ++#define PCI_DEVICE_ID_QUARK_X1000 0x0958 ++ + static DEFINE_SPINLOCK(iosf_mbi_lock); + + static inline u32 iosf_mbi_form_mcr(u8 op, u8 port, u8 offset) +@@ -177,6 +181,13 @@ int iosf_mbi_modify(u8 port, u8 opcode, u32 offset, u32 mdr, u32 mask) + } + EXPORT_SYMBOL(iosf_mbi_modify); + ++bool iosf_mbi_available(void) ++{ ++ /* Mbi isn't hot-pluggable. No remove routine is provided */ ++ return mbi_pdev; ++} ++EXPORT_SYMBOL(iosf_mbi_available); ++ + static int iosf_mbi_probe(struct pci_dev *pdev, + const struct pci_device_id *unused) + { +@@ -193,7 +204,9 @@ static int iosf_mbi_probe(struct pci_dev *pdev, + } + + static DEFINE_PCI_DEVICE_TABLE(iosf_mbi_pci_ids) = { +- { PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0F00) }, ++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_BAYTRAIL) }, ++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_BRASWELL) }, ++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_QUARK_X1000) }, + { 0, }, + }; + MODULE_DEVICE_TABLE(pci, iosf_mbi_pci_ids); +diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c +index 9e5de6813e1f..b88fc86309bc 100644 +--- a/arch/x86/kernel/signal.c ++++ b/arch/x86/kernel/signal.c +@@ -673,6 +673,11 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs) + * handler too. 
+ */ + regs->flags &= ~(X86_EFLAGS_DF|X86_EFLAGS_RF|X86_EFLAGS_TF); ++ /* ++ * Ensure the signal handler starts with the new fpu state. ++ */ ++ if (used_math()) ++ drop_init_fpu(current); + } + signal_setup_done(failed, ksig, test_thread_flag(TIF_SINGLESTEP)); + } +diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c +index e0d1d7a8354e..de0290605903 100644 +--- a/arch/x86/kernel/tsc.c ++++ b/arch/x86/kernel/tsc.c +@@ -1173,14 +1173,17 @@ void __init tsc_init(void) + + x86_init.timers.tsc_pre_init(); + +- if (!cpu_has_tsc) ++ if (!cpu_has_tsc) { ++ setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); + return; ++ } + + tsc_khz = x86_platform.calibrate_tsc(); + cpu_khz = tsc_khz; + + if (!tsc_khz) { + mark_tsc_unstable("could not calculate TSC khz"); ++ setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER); + return; + } + +diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c +index a4b451c6addf..dd50e26c58f6 100644 +--- a/arch/x86/kernel/xsave.c ++++ b/arch/x86/kernel/xsave.c +@@ -268,8 +268,6 @@ int save_xstate_sig(void __user *buf, void __user *buf_fx, int size) + if (use_fxsr() && save_xstate_epilog(buf_fx, ia32_fxstate)) + return -1; + +- drop_init_fpu(tsk); /* trigger finit */ +- + return 0; + } + +@@ -399,8 +397,11 @@ int __restore_xstate_sig(void __user *buf, void __user *buf_fx, int size) + set_used_math(); + } + +- if (use_eager_fpu()) ++ if (use_eager_fpu()) { ++ preempt_disable(); + math_state_restore(); ++ preempt_enable(); ++ } + + return err; + } else { +diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c +index 7bff3e2a7a11..38d3751472e4 100644 +--- a/arch/x86/kvm/emulate.c ++++ b/arch/x86/kvm/emulate.c +@@ -498,11 +498,6 @@ static void rsp_increment(struct x86_emulate_ctxt *ctxt, int inc) + masked_increment(reg_rmw(ctxt, VCPU_REGS_RSP), stack_mask(ctxt), inc); + } + +-static inline void jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) +-{ +- register_address_increment(ctxt, &ctxt->_eip, rel); +-} +- + static u32 desc_limit_scaled(struct desc_struct *desc) + { + u32 limit = get_desc_limit(desc); +@@ -576,6 +571,38 @@ static int emulate_nm(struct x86_emulate_ctxt *ctxt) + return emulate_exception(ctxt, NM_VECTOR, 0, false); + } + ++static inline int assign_eip_far(struct x86_emulate_ctxt *ctxt, ulong dst, ++ int cs_l) ++{ ++ switch (ctxt->op_bytes) { ++ case 2: ++ ctxt->_eip = (u16)dst; ++ break; ++ case 4: ++ ctxt->_eip = (u32)dst; ++ break; ++ case 8: ++ if ((cs_l && is_noncanonical_address(dst)) || ++ (!cs_l && (dst & ~(u32)-1))) ++ return emulate_gp(ctxt, 0); ++ ctxt->_eip = dst; ++ break; ++ default: ++ WARN(1, "unsupported eip assignment size\n"); ++ } ++ return X86EMUL_CONTINUE; ++} ++ ++static inline int assign_eip_near(struct x86_emulate_ctxt *ctxt, ulong dst) ++{ ++ return assign_eip_far(ctxt, dst, ctxt->mode == X86EMUL_MODE_PROT64); ++} ++ ++static inline int jmp_rel(struct x86_emulate_ctxt *ctxt, int rel) ++{ ++ return assign_eip_near(ctxt, ctxt->_eip + rel); ++} ++ + static u16 get_segment_selector(struct x86_emulate_ctxt *ctxt, unsigned seg) + { + u16 selector; +@@ -1958,13 +1985,15 @@ static int em_grp45(struct x86_emulate_ctxt *ctxt) + case 2: /* call near abs */ { + long int old_eip; + old_eip = ctxt->_eip; +- ctxt->_eip = ctxt->src.val; ++ rc = assign_eip_near(ctxt, ctxt->src.val); ++ if (rc != X86EMUL_CONTINUE) ++ break; + ctxt->src.val = old_eip; + rc = em_push(ctxt); + break; + } + case 4: /* jmp abs */ +- ctxt->_eip = ctxt->src.val; ++ rc = assign_eip_near(ctxt, ctxt->src.val); + break; + case 5: /* jmp far */ + rc = 
em_jmp_far(ctxt); +@@ -1996,10 +2025,14 @@ static int em_cmpxchg8b(struct x86_emulate_ctxt *ctxt) + + static int em_ret(struct x86_emulate_ctxt *ctxt) + { +- ctxt->dst.type = OP_REG; +- ctxt->dst.addr.reg = &ctxt->_eip; +- ctxt->dst.bytes = ctxt->op_bytes; +- return em_pop(ctxt); ++ int rc; ++ unsigned long eip; ++ ++ rc = emulate_pop(ctxt, &eip, ctxt->op_bytes); ++ if (rc != X86EMUL_CONTINUE) ++ return rc; ++ ++ return assign_eip_near(ctxt, eip); + } + + static int em_ret_far(struct x86_emulate_ctxt *ctxt) +@@ -2277,7 +2310,7 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) + { + const struct x86_emulate_ops *ops = ctxt->ops; + struct desc_struct cs, ss; +- u64 msr_data; ++ u64 msr_data, rcx, rdx; + int usermode; + u16 cs_sel = 0, ss_sel = 0; + +@@ -2293,6 +2326,9 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) + else + usermode = X86EMUL_MODE_PROT32; + ++ rcx = reg_read(ctxt, VCPU_REGS_RCX); ++ rdx = reg_read(ctxt, VCPU_REGS_RDX); ++ + cs.dpl = 3; + ss.dpl = 3; + ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data); +@@ -2310,6 +2346,9 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) + ss_sel = cs_sel + 8; + cs.d = 0; + cs.l = 1; ++ if (is_noncanonical_address(rcx) || ++ is_noncanonical_address(rdx)) ++ return emulate_gp(ctxt, 0); + break; + } + cs_sel |= SELECTOR_RPL_MASK; +@@ -2318,8 +2357,8 @@ static int em_sysexit(struct x86_emulate_ctxt *ctxt) + ops->set_segment(ctxt, cs_sel, &cs, 0, VCPU_SREG_CS); + ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS); + +- ctxt->_eip = reg_read(ctxt, VCPU_REGS_RDX); +- *reg_write(ctxt, VCPU_REGS_RSP) = reg_read(ctxt, VCPU_REGS_RCX); ++ ctxt->_eip = rdx; ++ *reg_write(ctxt, VCPU_REGS_RSP) = rcx; + + return X86EMUL_CONTINUE; + } +@@ -2858,10 +2897,13 @@ static int em_aad(struct x86_emulate_ctxt *ctxt) + + static int em_call(struct x86_emulate_ctxt *ctxt) + { ++ int rc; + long rel = ctxt->src.val; + + ctxt->src.val = (unsigned long)ctxt->_eip; +- jmp_rel(ctxt, rel); ++ rc = jmp_rel(ctxt, rel); ++ if (rc != X86EMUL_CONTINUE) ++ return rc; + return em_push(ctxt); + } + +@@ -2893,11 +2935,12 @@ static int em_call_far(struct x86_emulate_ctxt *ctxt) + static int em_ret_near_imm(struct x86_emulate_ctxt *ctxt) + { + int rc; ++ unsigned long eip; + +- ctxt->dst.type = OP_REG; +- ctxt->dst.addr.reg = &ctxt->_eip; +- ctxt->dst.bytes = ctxt->op_bytes; +- rc = emulate_pop(ctxt, &ctxt->dst.val, ctxt->op_bytes); ++ rc = emulate_pop(ctxt, &eip, ctxt->op_bytes); ++ if (rc != X86EMUL_CONTINUE) ++ return rc; ++ rc = assign_eip_near(ctxt, eip); + if (rc != X86EMUL_CONTINUE) + return rc; + rsp_increment(ctxt, ctxt->src.val); +@@ -3227,20 +3270,24 @@ static int em_lmsw(struct x86_emulate_ctxt *ctxt) + + static int em_loop(struct x86_emulate_ctxt *ctxt) + { ++ int rc = X86EMUL_CONTINUE; ++ + register_address_increment(ctxt, reg_rmw(ctxt, VCPU_REGS_RCX), -1); + if ((address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) != 0) && + (ctxt->b == 0xe2 || test_cc(ctxt->b ^ 0x5, ctxt->eflags))) +- jmp_rel(ctxt, ctxt->src.val); ++ rc = jmp_rel(ctxt, ctxt->src.val); + +- return X86EMUL_CONTINUE; ++ return rc; + } + + static int em_jcxz(struct x86_emulate_ctxt *ctxt) + { ++ int rc = X86EMUL_CONTINUE; ++ + if (address_mask(ctxt, reg_read(ctxt, VCPU_REGS_RCX)) == 0) +- jmp_rel(ctxt, ctxt->src.val); ++ rc = jmp_rel(ctxt, ctxt->src.val); + +- return X86EMUL_CONTINUE; ++ return rc; + } + + static int em_in(struct x86_emulate_ctxt *ctxt) +@@ -4637,7 +4684,7 @@ special_insn: + break; + case 0x70 ... 
0x7f: /* jcc (short) */ + if (test_cc(ctxt->b, ctxt->eflags)) +- jmp_rel(ctxt, ctxt->src.val); ++ rc = jmp_rel(ctxt, ctxt->src.val); + break; + case 0x8d: /* lea r16/r32, m */ + ctxt->dst.val = ctxt->src.addr.mem.ea; +@@ -4666,7 +4713,7 @@ special_insn: + break; + case 0xe9: /* jmp rel */ + case 0xeb: /* jmp rel short */ +- jmp_rel(ctxt, ctxt->src.val); ++ rc = jmp_rel(ctxt, ctxt->src.val); + ctxt->dst.type = OP_NONE; /* Disable writeback. */ + break; + case 0xf4: /* hlt */ +@@ -4786,7 +4833,7 @@ twobyte_insn: + break; + case 0x80 ... 0x8f: /* jnz rel, etc*/ + if (test_cc(ctxt->b, ctxt->eflags)) +- jmp_rel(ctxt, ctxt->src.val); ++ rc = jmp_rel(ctxt, ctxt->src.val); + break; + case 0x90 ... 0x9f: /* setcc r/m8 */ + ctxt->dst.val = test_cc(ctxt->b, ctxt->eflags); +diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c +index 518d86471b76..298781d4cfb4 100644 +--- a/arch/x86/kvm/i8254.c ++++ b/arch/x86/kvm/i8254.c +@@ -262,8 +262,10 @@ void __kvm_migrate_pit_timer(struct kvm_vcpu *vcpu) + return; + + timer = &pit->pit_state.timer; ++ mutex_lock(&pit->pit_state.lock); + if (hrtimer_cancel(timer)) + hrtimer_start_expires(timer, HRTIMER_MODE_ABS); ++ mutex_unlock(&pit->pit_state.lock); + } + + static void destroy_pit_timer(struct kvm_pit *pit) +diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c +index 2de1bc09a8d4..9643eda60a52 100644 +--- a/arch/x86/kvm/svm.c ++++ b/arch/x86/kvm/svm.c +@@ -3213,7 +3213,7 @@ static int wrmsr_interception(struct vcpu_svm *svm) + msr.host_initiated = false; + + svm->next_rip = kvm_rip_read(&svm->vcpu) + 2; +- if (svm_set_msr(&svm->vcpu, &msr)) { ++ if (kvm_set_msr(&svm->vcpu, &msr)) { + trace_kvm_msr_write_ex(ecx, data); + kvm_inject_gp(&svm->vcpu, 0); + } else { +@@ -3495,9 +3495,9 @@ static int handle_exit(struct kvm_vcpu *vcpu) + + if (exit_code >= ARRAY_SIZE(svm_exit_handlers) + || !svm_exit_handlers[exit_code]) { +- kvm_run->exit_reason = KVM_EXIT_UNKNOWN; +- kvm_run->hw.hardware_exit_reason = exit_code; +- return 0; ++ WARN_ONCE(1, "vmx: unexpected exit reason 0x%x\n", exit_code); ++ kvm_queue_exception(vcpu, UD_VECTOR); ++ return 1; + } + + return svm_exit_handlers[exit_code](svm); +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index 392752834751..0c90f4b3f835 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -2582,12 +2582,15 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info) + default: + msr = find_msr_entry(vmx, msr_index); + if (msr) { ++ u64 old_msr_data = msr->data; + msr->data = data; + if (msr - vmx->guest_msrs < vmx->save_nmsrs) { + preempt_disable(); +- kvm_set_shared_msr(msr->index, msr->data, +- msr->mask); ++ ret = kvm_set_shared_msr(msr->index, msr->data, ++ msr->mask); + preempt_enable(); ++ if (ret) ++ msr->data = old_msr_data; + } + break; + } +@@ -5169,7 +5172,7 @@ static int handle_wrmsr(struct kvm_vcpu *vcpu) + msr.data = data; + msr.index = ecx; + msr.host_initiated = false; +- if (vmx_set_msr(vcpu, &msr) != 0) { ++ if (kvm_set_msr(vcpu, &msr) != 0) { + trace_kvm_msr_write_ex(ecx, data); + kvm_inject_gp(vcpu, 0); + return 1; +@@ -6441,6 +6444,12 @@ static int handle_invept(struct kvm_vcpu *vcpu) + return 1; + } + ++static int handle_invvpid(struct kvm_vcpu *vcpu) ++{ ++ kvm_queue_exception(vcpu, UD_VECTOR); ++ return 1; ++} ++ + /* + * The exit handlers return 1 if the exit was handled fully and guest execution + * may resume. 
Otherwise they set the kvm_run parameter to indicate what needs +@@ -6486,6 +6495,7 @@ static int (*const kvm_vmx_exit_handlers[])(struct kvm_vcpu *vcpu) = { + [EXIT_REASON_MWAIT_INSTRUCTION] = handle_invalid_op, + [EXIT_REASON_MONITOR_INSTRUCTION] = handle_invalid_op, + [EXIT_REASON_INVEPT] = handle_invept, ++ [EXIT_REASON_INVVPID] = handle_invvpid, + }; + + static const int kvm_vmx_max_exit_handlers = +@@ -6719,7 +6729,7 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu) + case EXIT_REASON_VMPTRST: case EXIT_REASON_VMREAD: + case EXIT_REASON_VMRESUME: case EXIT_REASON_VMWRITE: + case EXIT_REASON_VMOFF: case EXIT_REASON_VMON: +- case EXIT_REASON_INVEPT: ++ case EXIT_REASON_INVEPT: case EXIT_REASON_INVVPID: + /* + * VMX instructions trap unconditionally. This allows L1 to + * emulate them for its L2 guest, i.e., allows 3-level nesting! +@@ -6884,10 +6894,10 @@ static int vmx_handle_exit(struct kvm_vcpu *vcpu) + && kvm_vmx_exit_handlers[exit_reason]) + return kvm_vmx_exit_handlers[exit_reason](vcpu); + else { +- vcpu->run->exit_reason = KVM_EXIT_UNKNOWN; +- vcpu->run->hw.hardware_exit_reason = exit_reason; ++ WARN_ONCE(1, "vmx: unexpected exit reason 0x%x\n", exit_reason); ++ kvm_queue_exception(vcpu, UD_VECTOR); ++ return 1; + } +- return 0; + } + + static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr) +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 8fbd1a772272..51c2851ca243 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -225,20 +225,25 @@ static void kvm_shared_msr_cpu_online(void) + shared_msr_update(i, shared_msrs_global.msrs[i]); + } + +-void kvm_set_shared_msr(unsigned slot, u64 value, u64 mask) ++int kvm_set_shared_msr(unsigned slot, u64 value, u64 mask) + { + unsigned int cpu = smp_processor_id(); + struct kvm_shared_msrs *smsr = per_cpu_ptr(shared_msrs, cpu); ++ int err; + + if (((value ^ smsr->values[slot].curr) & mask) == 0) +- return; ++ return 0; + smsr->values[slot].curr = value; +- wrmsrl(shared_msrs_global.msrs[slot], value); ++ err = wrmsrl_safe(shared_msrs_global.msrs[slot], value); ++ if (err) ++ return 1; ++ + if (!smsr->registered) { + smsr->urn.on_user_return = kvm_on_user_return; + user_return_notifier_register(&smsr->urn); + smsr->registered = true; + } ++ return 0; + } + EXPORT_SYMBOL_GPL(kvm_set_shared_msr); + +@@ -946,7 +951,6 @@ void kvm_enable_efer_bits(u64 mask) + } + EXPORT_SYMBOL_GPL(kvm_enable_efer_bits); + +- + /* + * Writes msr value into into the appropriate "register". + * Returns 0 on success, non-0 otherwise. +@@ -954,8 +958,34 @@ EXPORT_SYMBOL_GPL(kvm_enable_efer_bits); + */ + int kvm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr) + { ++ switch (msr->index) { ++ case MSR_FS_BASE: ++ case MSR_GS_BASE: ++ case MSR_KERNEL_GS_BASE: ++ case MSR_CSTAR: ++ case MSR_LSTAR: ++ if (is_noncanonical_address(msr->data)) ++ return 1; ++ break; ++ case MSR_IA32_SYSENTER_EIP: ++ case MSR_IA32_SYSENTER_ESP: ++ /* ++ * IA32_SYSENTER_ESP and IA32_SYSENTER_EIP cause #GP if ++ * non-canonical address is written on Intel but not on ++ * AMD (which ignores the top 32-bits, because it does ++ * not implement 64-bit SYSENTER). ++ * ++ * 64-bit code should hence be able to write a non-canonical ++ * value on AMD. Making the address canonical ensures that ++ * vmentry does not fail on Intel after writing a non-canonical ++ * value, and that something deterministic happens if the guest ++ * invokes 64-bit SYSENTER. 
++ */ ++ msr->data = get_canonical(msr->data); ++ } + return kvm_x86_ops->set_msr(vcpu, msr); + } ++EXPORT_SYMBOL_GPL(kvm_set_msr); + + /* + * Adapt set_msr() to msr_io()'s calling convention +diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c +index a3488689e301..fed892de9baf 100644 +--- a/arch/x86/mm/pageattr.c ++++ b/arch/x86/mm/pageattr.c +@@ -405,7 +405,7 @@ phys_addr_t slow_virt_to_phys(void *__virt_addr) + psize = page_level_size(level); + pmask = page_level_mask(level); + offset = virt_addr & ~pmask; +- phys_addr = pte_pfn(*pte) << PAGE_SHIFT; ++ phys_addr = (phys_addr_t)pte_pfn(*pte) << PAGE_SHIFT; + return (phys_addr | offset); + } + EXPORT_SYMBOL_GPL(slow_virt_to_phys); +diff --git a/block/blk-settings.c b/block/blk-settings.c +index 5d21239bc859..95138e9d0ad5 100644 +--- a/block/blk-settings.c ++++ b/block/blk-settings.c +@@ -553,7 +553,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, + bottom = max(b->physical_block_size, b->io_min) + alignment; + + /* Verify that top and bottom intervals line up */ +- if (max(top, bottom) & (min(top, bottom) - 1)) { ++ if (max(top, bottom) % min(top, bottom)) { + t->misaligned = 1; + ret = -1; + } +@@ -598,7 +598,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b, + + /* Find lowest common alignment_offset */ + t->alignment_offset = lcm(t->alignment_offset, alignment) +- & (max(t->physical_block_size, t->io_min) - 1); ++ % max(t->physical_block_size, t->io_min); + + /* Verify that new alignment_offset is on a logical block boundary */ + if (t->alignment_offset & (t->logical_block_size - 1)) { +diff --git a/block/scsi_ioctl.c b/block/scsi_ioctl.c +index 26487972ac54..4044cf789c7a 100644 +--- a/block/scsi_ioctl.c ++++ b/block/scsi_ioctl.c +@@ -489,7 +489,7 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode, + + if (bytes && blk_rq_map_kern(q, rq, buffer, bytes, __GFP_WAIT)) { + err = DRIVER_ERROR << 24; +- goto out; ++ goto error; + } + + memset(sense, 0, sizeof(sense)); +@@ -499,7 +499,6 @@ int sg_scsi_ioctl(struct request_queue *q, struct gendisk *disk, fmode_t mode, + + blk_execute_rq(q, disk, rq, 0); + +-out: + err = rq->errors & 0xff; /* only 8 bit SCSI status */ + if (err) { + if (rq->sense_len && rq->sense) { +diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c +index a19c027b29bd..83187f497c7c 100644 +--- a/crypto/algif_skcipher.c ++++ b/crypto/algif_skcipher.c +@@ -49,7 +49,7 @@ struct skcipher_ctx { + struct ablkcipher_request req; + }; + +-#define MAX_SGL_ENTS ((PAGE_SIZE - sizeof(struct skcipher_sg_list)) / \ ++#define MAX_SGL_ENTS ((4096 - sizeof(struct skcipher_sg_list)) / \ + sizeof(struct scatterlist) - 1) + + static inline int skcipher_sndbuf(struct sock *sk) +diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c +index b603720b877d..37acda6fa7e4 100644 +--- a/drivers/ata/libata-sff.c ++++ b/drivers/ata/libata-sff.c +@@ -2008,13 +2008,15 @@ static int ata_bus_softreset(struct ata_port *ap, unsigned int devmask, + + DPRINTK("ata%u: bus reset via SRST\n", ap->print_id); + +- /* software reset. causes dev0 to be selected */ +- iowrite8(ap->ctl, ioaddr->ctl_addr); +- udelay(20); /* FIXME: flush */ +- iowrite8(ap->ctl | ATA_SRST, ioaddr->ctl_addr); +- udelay(20); /* FIXME: flush */ +- iowrite8(ap->ctl, ioaddr->ctl_addr); +- ap->last_ctl = ap->ctl; ++ if (ap->ioaddr.ctl_addr) { ++ /* software reset. 
causes dev0 to be selected */ ++ iowrite8(ap->ctl, ioaddr->ctl_addr); ++ udelay(20); /* FIXME: flush */ ++ iowrite8(ap->ctl | ATA_SRST, ioaddr->ctl_addr); ++ udelay(20); /* FIXME: flush */ ++ iowrite8(ap->ctl, ioaddr->ctl_addr); ++ ap->last_ctl = ap->ctl; ++ } + + /* wait the port to become ready */ + return ata_sff_wait_after_reset(&ap->link, devmask, deadline); +@@ -2215,10 +2217,6 @@ void ata_sff_error_handler(struct ata_port *ap) + + spin_unlock_irqrestore(ap->lock, flags); + +- /* ignore ata_sff_softreset if ctl isn't accessible */ +- if (softreset == ata_sff_softreset && !ap->ioaddr.ctl_addr) +- softreset = NULL; +- + /* ignore built-in hardresets if SCR access is not available */ + if ((hardreset == sata_std_hardreset || + hardreset == sata_sff_hardreset) && !sata_scr_valid(&ap->link)) +diff --git a/drivers/ata/pata_serverworks.c b/drivers/ata/pata_serverworks.c +index 96c6a79ef606..79dedbae282c 100644 +--- a/drivers/ata/pata_serverworks.c ++++ b/drivers/ata/pata_serverworks.c +@@ -252,12 +252,18 @@ static void serverworks_set_dmamode(struct ata_port *ap, struct ata_device *adev + pci_write_config_byte(pdev, 0x54, ultra_cfg); + } + +-static struct scsi_host_template serverworks_sht = { ++static struct scsi_host_template serverworks_osb4_sht = { ++ ATA_BMDMA_SHT(DRV_NAME), ++ .sg_tablesize = LIBATA_DUMB_MAX_PRD, ++}; ++ ++static struct scsi_host_template serverworks_csb_sht = { + ATA_BMDMA_SHT(DRV_NAME), + }; + + static struct ata_port_operations serverworks_osb4_port_ops = { + .inherits = &ata_bmdma_port_ops, ++ .qc_prep = ata_bmdma_dumb_qc_prep, + .cable_detect = serverworks_cable_detect, + .mode_filter = serverworks_osb4_filter, + .set_piomode = serverworks_set_piomode, +@@ -266,6 +272,7 @@ static struct ata_port_operations serverworks_osb4_port_ops = { + + static struct ata_port_operations serverworks_csb_port_ops = { + .inherits = &serverworks_osb4_port_ops, ++ .qc_prep = ata_bmdma_qc_prep, + .mode_filter = serverworks_csb_filter, + }; + +@@ -405,6 +412,7 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id + } + }; + const struct ata_port_info *ppi[] = { &info[id->driver_data], NULL }; ++ struct scsi_host_template *sht = &serverworks_csb_sht; + int rc; + + rc = pcim_enable_device(pdev); +@@ -418,6 +426,7 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id + /* Select non UDMA capable OSB4 if we can't do fixups */ + if (rc < 0) + ppi[0] = &info[1]; ++ sht = &serverworks_osb4_sht; + } + /* setup CSB5/CSB6 : South Bridge and IDE option RAID */ + else if ((pdev->device == PCI_DEVICE_ID_SERVERWORKS_CSB5IDE) || +@@ -434,7 +443,7 @@ static int serverworks_init_one(struct pci_dev *pdev, const struct pci_device_id + ppi[1] = &ata_dummy_port_info; + } + +- return ata_pci_bmdma_init_one(pdev, ppi, &serverworks_sht, NULL, 0); ++ return ata_pci_bmdma_init_one(pdev, ppi, sht, NULL, 0); + } + + #ifdef CONFIG_PM +diff --git a/drivers/base/core.c b/drivers/base/core.c +index 2b567177ef78..6a8955e78610 100644 +--- a/drivers/base/core.c ++++ b/drivers/base/core.c +@@ -741,12 +741,12 @@ class_dir_create_and_add(struct class *class, struct kobject *parent_kobj) + return &dir->kobj; + } + ++static DEFINE_MUTEX(gdp_mutex); + + static struct kobject *get_device_parent(struct device *dev, + struct device *parent) + { + if (dev->class) { +- static DEFINE_MUTEX(gdp_mutex); + struct kobject *kobj = NULL; + struct kobject *parent_kobj; + struct kobject *k; +@@ -810,7 +810,9 @@ static void cleanup_glue_dir(struct device *dev, struct kobject 
*glue_dir) + glue_dir->kset != &dev->class->p->glue_dirs) + return; + ++ mutex_lock(&gdp_mutex); + kobject_put(glue_dir); ++ mutex_unlock(&gdp_mutex); + } + + static void cleanup_device_parent(struct device *dev) +diff --git a/drivers/block/drbd/drbd_interval.c b/drivers/block/drbd/drbd_interval.c +index 89c497c630b4..04a14e0f8878 100644 +--- a/drivers/block/drbd/drbd_interval.c ++++ b/drivers/block/drbd/drbd_interval.c +@@ -79,6 +79,7 @@ bool + drbd_insert_interval(struct rb_root *root, struct drbd_interval *this) + { + struct rb_node **new = &root->rb_node, *parent = NULL; ++ sector_t this_end = this->sector + (this->size >> 9); + + BUG_ON(!IS_ALIGNED(this->size, 512)); + +@@ -87,6 +88,8 @@ drbd_insert_interval(struct rb_root *root, struct drbd_interval *this) + rb_entry(*new, struct drbd_interval, rb); + + parent = *new; ++ if (here->end < this_end) ++ here->end = this_end; + if (this->sector < here->sector) + new = &(*new)->rb_left; + else if (this->sector > here->sector) +@@ -99,6 +102,7 @@ drbd_insert_interval(struct rb_root *root, struct drbd_interval *this) + return false; + } + ++ this->end = this_end; + rb_link_node(&this->rb, parent, new); + rb_insert_augmented(&this->rb, root, &augment_callbacks); + return true; +diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c +index 7296c7f074bd..255ca232ecc7 100644 +--- a/drivers/block/rbd.c ++++ b/drivers/block/rbd.c +@@ -3217,7 +3217,7 @@ static int rbd_obj_read_sync(struct rbd_device *rbd_dev, + page_count = (u32) calc_pages_for(offset, length); + pages = ceph_alloc_page_vector(page_count, GFP_KERNEL); + if (IS_ERR(pages)) +- ret = PTR_ERR(pages); ++ return PTR_ERR(pages); + + ret = -ENOMEM; + obj_request = rbd_obj_request_create(object_name, offset, length, +diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c +index 64c60edcdfbc..63fc7f06a014 100644 +--- a/drivers/block/xen-blkback/blkback.c ++++ b/drivers/block/xen-blkback/blkback.c +@@ -763,6 +763,7 @@ again: + BUG_ON(new_map_idx >= segs_to_map); + if (unlikely(map[new_map_idx].status != 0)) { + pr_debug(DRV_PFX "invalid buffer -- could not remap it\n"); ++ put_free_pages(blkif, &pages[seg_idx]->page, 1); + pages[seg_idx]->handle = BLKBACK_INVALID_HANDLE; + ret |= 1; + goto next; +diff --git a/drivers/char/random.c b/drivers/char/random.c +index 429b75bb60e8..8a64dbeae7b1 100644 +--- a/drivers/char/random.c ++++ b/drivers/char/random.c +@@ -1063,8 +1063,8 @@ static void extract_buf(struct entropy_store *r, __u8 *out) + * pool while mixing, and hash one final time. 
+ */ + sha_transform(hash.w, extract, workspace); +- memset(extract, 0, sizeof(extract)); +- memset(workspace, 0, sizeof(workspace)); ++ memzero_explicit(extract, sizeof(extract)); ++ memzero_explicit(workspace, sizeof(workspace)); + + /* + * In case the hash function has some recognizable output +@@ -1076,7 +1076,7 @@ static void extract_buf(struct entropy_store *r, __u8 *out) + hash.w[2] ^= rol32(hash.w[2], 16); + + memcpy(out, &hash, EXTRACT_SIZE); +- memset(&hash, 0, sizeof(hash)); ++ memzero_explicit(&hash, sizeof(hash)); + } + + static ssize_t extract_entropy(struct entropy_store *r, void *buf, +@@ -1124,7 +1124,7 @@ static ssize_t extract_entropy(struct entropy_store *r, void *buf, + } + + /* Wipe data just returned from memory */ +- memset(tmp, 0, sizeof(tmp)); ++ memzero_explicit(tmp, sizeof(tmp)); + + return ret; + } +@@ -1162,7 +1162,7 @@ static ssize_t extract_entropy_user(struct entropy_store *r, void __user *buf, + } + + /* Wipe data just returned from memory */ +- memset(tmp, 0, sizeof(tmp)); ++ memzero_explicit(tmp, sizeof(tmp)); + + return ret; + } +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 415923606164..4854f81d038b 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -460,7 +460,18 @@ show_one(cpuinfo_max_freq, cpuinfo.max_freq); + show_one(cpuinfo_transition_latency, cpuinfo.transition_latency); + show_one(scaling_min_freq, min); + show_one(scaling_max_freq, max); +-show_one(scaling_cur_freq, cur); ++ ++static ssize_t show_scaling_cur_freq( ++ struct cpufreq_policy *policy, char *buf) ++{ ++ ssize_t ret; ++ ++ if (cpufreq_driver && cpufreq_driver->setpolicy && cpufreq_driver->get) ++ ret = sprintf(buf, "%u\n", cpufreq_driver->get(policy->cpu)); ++ else ++ ret = sprintf(buf, "%u\n", policy->cur); ++ return ret; ++} + + static int cpufreq_set_policy(struct cpufreq_policy *policy, + struct cpufreq_policy *new_policy); +@@ -854,11 +865,11 @@ static int cpufreq_add_dev_interface(struct cpufreq_policy *policy, + if (ret) + goto err_out_kobj_put; + } +- if (has_target()) { +- ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr); +- if (ret) +- goto err_out_kobj_put; +- } ++ ++ ret = sysfs_create_file(&policy->kobj, &scaling_cur_freq.attr); ++ if (ret) ++ goto err_out_kobj_put; ++ + if (cpufreq_driver->bios_limit) { + ret = sysfs_create_file(&policy->kobj, &bios_limit.attr); + if (ret) +diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c +index ae52c777339d..533a509439ca 100644 +--- a/drivers/cpufreq/intel_pstate.c ++++ b/drivers/cpufreq/intel_pstate.c +@@ -55,6 +55,17 @@ static inline int32_t div_fp(int32_t x, int32_t y) + return div_s64((int64_t)x << FRAC_BITS, (int64_t)y); + } + ++static inline int ceiling_fp(int32_t x) ++{ ++ int mask, ret; ++ ++ ret = fp_toint(x); ++ mask = (1 << FRAC_BITS) - 1; ++ if (x & mask) ++ ret += 1; ++ return ret; ++} ++ + struct sample { + int32_t core_pct_busy; + u64 aperf; +@@ -67,6 +78,7 @@ struct pstate_data { + int current_pstate; + int min_pstate; + int max_pstate; ++ int scaling; + int turbo_pstate; + }; + +@@ -118,6 +130,7 @@ struct pstate_funcs { + int (*get_max)(void); + int (*get_min)(void); + int (*get_turbo)(void); ++ int (*get_scaling)(void); + void (*set)(struct cpudata*, int pstate); + void (*get_vid)(struct cpudata *); + }; +@@ -397,7 +410,7 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate) + cpudata->vid.ratio); + + vid_fp = clamp_t(int32_t, vid_fp, cpudata->vid.min, cpudata->vid.max); +- vid = fp_toint(vid_fp); ++ vid = 
ceiling_fp(vid_fp); + + if (pstate > cpudata->pstate.max_pstate) + vid = cpudata->vid.turbo; +@@ -407,6 +420,22 @@ static void byt_set_pstate(struct cpudata *cpudata, int pstate) + wrmsrl(MSR_IA32_PERF_CTL, val); + } + ++#define BYT_BCLK_FREQS 5 ++static int byt_freq_table[BYT_BCLK_FREQS] = { 833, 1000, 1333, 1167, 800}; ++ ++static int byt_get_scaling(void) ++{ ++ u64 value; ++ int i; ++ ++ rdmsrl(MSR_FSB_FREQ, value); ++ i = value & 0x3; ++ ++ BUG_ON(i > BYT_BCLK_FREQS); ++ ++ return byt_freq_table[i] * 100; ++} ++ + static void byt_get_vid(struct cpudata *cpudata) + { + u64 value; +@@ -451,6 +480,11 @@ static int core_get_turbo_pstate(void) + return ret; + } + ++static inline int core_get_scaling(void) ++{ ++ return 100000; ++} ++ + static void core_set_pstate(struct cpudata *cpudata, int pstate) + { + u64 val; +@@ -475,6 +509,7 @@ static struct cpu_defaults core_params = { + .get_max = core_get_max_pstate, + .get_min = core_get_min_pstate, + .get_turbo = core_get_turbo_pstate, ++ .get_scaling = core_get_scaling, + .set = core_set_pstate, + }, + }; +@@ -493,6 +528,7 @@ static struct cpu_defaults byt_params = { + .get_min = byt_get_min_pstate, + .get_turbo = byt_get_turbo_pstate, + .set = byt_set_pstate, ++ .get_scaling = byt_get_scaling, + .get_vid = byt_get_vid, + }, + }; +@@ -526,7 +562,7 @@ static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate) + if (pstate == cpu->pstate.current_pstate) + return; + +- trace_cpu_frequency(pstate * 100000, cpu->cpu); ++ trace_cpu_frequency(pstate * cpu->pstate.scaling, cpu->cpu); + + cpu->pstate.current_pstate = pstate; + +@@ -555,6 +591,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu) + cpu->pstate.min_pstate = pstate_funcs.get_min(); + cpu->pstate.max_pstate = pstate_funcs.get_max(); + cpu->pstate.turbo_pstate = pstate_funcs.get_turbo(); ++ cpu->pstate.scaling = pstate_funcs.get_scaling(); + + if (pstate_funcs.get_vid) + pstate_funcs.get_vid(cpu); +@@ -574,7 +611,9 @@ static inline void intel_pstate_calc_busy(struct cpudata *cpu, + core_pct += 1; + + sample->freq = fp_toint( +- mul_fp(int_tofp(cpu->pstate.max_pstate * 1000), core_pct)); ++ mul_fp(int_tofp( ++ cpu->pstate.max_pstate * cpu->pstate.scaling / 100), ++ core_pct)); + + sample->core_pct_busy = (int32_t)core_pct; + } +@@ -685,10 +724,14 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = { + ICPU(0x37, byt_params), + ICPU(0x3a, core_params), + ICPU(0x3c, core_params), ++ ICPU(0x3d, core_params), + ICPU(0x3e, core_params), + ICPU(0x3f, core_params), + ICPU(0x45, core_params), + ICPU(0x46, core_params), ++ ICPU(0x4c, byt_params), ++ ICPU(0x4f, core_params), ++ ICPU(0x56, core_params), + {} + }; + MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids); +@@ -751,6 +794,7 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy) + if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) { + limits.min_perf_pct = 100; + limits.min_perf = int_tofp(1); ++ limits.max_policy_pct = 100; + limits.max_perf_pct = 100; + limits.max_perf = int_tofp(1); + limits.no_turbo = limits.turbo_disabled; +@@ -812,12 +856,13 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy) + else + policy->policy = CPUFREQ_POLICY_POWERSAVE; + +- policy->min = cpu->pstate.min_pstate * 100000; +- policy->max = cpu->pstate.turbo_pstate * 100000; ++ policy->min = cpu->pstate.min_pstate * cpu->pstate.scaling; ++ policy->max = cpu->pstate.turbo_pstate * cpu->pstate.scaling; + + /* cpuinfo and default policy values */ +- policy->cpuinfo.min_freq = cpu->pstate.min_pstate * 100000; +- 
policy->cpuinfo.max_freq = cpu->pstate.turbo_pstate * 100000; ++ policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling; ++ policy->cpuinfo.max_freq = ++ cpu->pstate.turbo_pstate * cpu->pstate.scaling; + policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL; + cpumask_set_cpu(policy->cpu, policy->cpus); + +@@ -875,6 +920,7 @@ static void copy_cpu_funcs(struct pstate_funcs *funcs) + pstate_funcs.get_max = funcs->get_max; + pstate_funcs.get_min = funcs->get_min; + pstate_funcs.get_turbo = funcs->get_turbo; ++ pstate_funcs.get_scaling = funcs->get_scaling; + pstate_funcs.set = funcs->set; + pstate_funcs.get_vid = funcs->get_vid; + } +diff --git a/drivers/edac/cpc925_edac.c b/drivers/edac/cpc925_edac.c +index df6575f1430d..682288ced4ac 100644 +--- a/drivers/edac/cpc925_edac.c ++++ b/drivers/edac/cpc925_edac.c +@@ -562,7 +562,7 @@ static void cpc925_mc_check(struct mem_ctl_info *mci) + + if (apiexcp & UECC_EXCP_DETECTED) { + cpc925_mc_printk(mci, KERN_INFO, "DRAM UECC Fault\n"); +- edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, ++ edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, + pfn, offset, 0, + csrow, -1, -1, + mci->ctl_name, ""); +diff --git a/drivers/edac/e7xxx_edac.c b/drivers/edac/e7xxx_edac.c +index 3cda79bc8b00..ece3aef16bb1 100644 +--- a/drivers/edac/e7xxx_edac.c ++++ b/drivers/edac/e7xxx_edac.c +@@ -226,7 +226,7 @@ static void process_ce(struct mem_ctl_info *mci, struct e7xxx_error_info *info) + static void process_ce_no_info(struct mem_ctl_info *mci) + { + edac_dbg(3, "\n"); +- edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, 0, 0, 0, -1, -1, -1, ++ edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, 0, 0, 0, -1, -1, -1, + "e7xxx CE log register overflow", ""); + } + +diff --git a/drivers/edac/i3200_edac.c b/drivers/edac/i3200_edac.c +index fa1326e5a4b0..ad76f10865c6 100644 +--- a/drivers/edac/i3200_edac.c ++++ b/drivers/edac/i3200_edac.c +@@ -242,11 +242,11 @@ static void i3200_process_error_info(struct mem_ctl_info *mci, + -1, -1, + "i3000 UE", ""); + } else if (log & I3200_ECCERRLOG_CE) { +- edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, ++ edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, + 0, 0, eccerrlog_syndrome(log), + eccerrlog_row(channel, log), + -1, -1, +- "i3000 UE", ""); ++ "i3000 CE", ""); + } + } + } +diff --git a/drivers/edac/i82860_edac.c b/drivers/edac/i82860_edac.c +index 3382f6344e42..4382343a7c60 100644 +--- a/drivers/edac/i82860_edac.c ++++ b/drivers/edac/i82860_edac.c +@@ -124,7 +124,7 @@ static int i82860_process_error_info(struct mem_ctl_info *mci, + dimm->location[0], dimm->location[1], -1, + "i82860 UE", ""); + else +- edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1, ++ edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1, + info->eap, 0, info->derrsyn, + dimm->location[0], dimm->location[1], -1, + "i82860 CE", ""); +diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c +index cca063b11083..d2e56e95d886 100644 +--- a/drivers/gpu/drm/ast/ast_mode.c ++++ b/drivers/gpu/drm/ast/ast_mode.c +@@ -1012,8 +1012,8 @@ static u32 copy_cursor_image(u8 *src, u8 *dst, int width, int height) + srcdata32[1].ul = *((u32 *)(srcxor + 4)) & 0xf0f0f0f0; + data32.b[0] = srcdata32[0].b[1] | (srcdata32[0].b[0] >> 4); + data32.b[1] = srcdata32[0].b[3] | (srcdata32[0].b[2] >> 4); +- data32.b[2] = srcdata32[0].b[1] | (srcdata32[1].b[0] >> 4); +- data32.b[3] = srcdata32[0].b[3] | (srcdata32[1].b[2] >> 4); ++ data32.b[2] = srcdata32[1].b[1] | (srcdata32[1].b[0] >> 4); ++ data32.b[3] = srcdata32[1].b[3] | 
(srcdata32[1].b[2] >> 4); + + writel(data32.ul, dstxor); + csum += data32.ul; +diff --git a/drivers/gpu/drm/cirrus/cirrus_drv.c b/drivers/gpu/drm/cirrus/cirrus_drv.c +index 08ce520f61a5..faa1f421f1b8 100644 +--- a/drivers/gpu/drm/cirrus/cirrus_drv.c ++++ b/drivers/gpu/drm/cirrus/cirrus_drv.c +@@ -32,6 +32,8 @@ static struct drm_driver driver; + static DEFINE_PCI_DEVICE_TABLE(pciidlist) = { + { PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_5446, 0x1af4, 0x1100, 0, + 0, 0 }, ++ { PCI_VENDOR_ID_CIRRUS, PCI_DEVICE_ID_CIRRUS_5446, PCI_VENDOR_ID_XEN, ++ 0x0001, 0, 0, 0 }, + {0,} + }; + +diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c +index fd98bec78816..c6d9777bdb45 100644 +--- a/drivers/gpu/drm/i915/intel_panel.c ++++ b/drivers/gpu/drm/i915/intel_panel.c +@@ -645,7 +645,7 @@ static void pch_enable_backlight(struct intel_connector *connector) + + cpu_ctl2 = I915_READ(BLC_PWM_CPU_CTL2); + if (cpu_ctl2 & BLM_PWM_ENABLE) { +- WARN(1, "cpu backlight already enabled\n"); ++ DRM_DEBUG_KMS("cpu backlight already enabled\n"); + cpu_ctl2 &= ~BLM_PWM_ENABLE; + I915_WRITE(BLC_PWM_CPU_CTL2, cpu_ctl2); + } +@@ -693,7 +693,7 @@ static void i9xx_enable_backlight(struct intel_connector *connector) + + ctl = I915_READ(BLC_PWM_CTL); + if (ctl & BACKLIGHT_DUTY_CYCLE_MASK_PNV) { +- WARN(1, "backlight already enabled\n"); ++ DRM_DEBUG_KMS("backlight already enabled\n"); + I915_WRITE(BLC_PWM_CTL, 0); + } + +@@ -724,7 +724,7 @@ static void i965_enable_backlight(struct intel_connector *connector) + + ctl2 = I915_READ(BLC_PWM_CTL2); + if (ctl2 & BLM_PWM_ENABLE) { +- WARN(1, "backlight already enabled\n"); ++ DRM_DEBUG_KMS("backlight already enabled\n"); + ctl2 &= ~BLM_PWM_ENABLE; + I915_WRITE(BLC_PWM_CTL2, ctl2); + } +@@ -758,7 +758,7 @@ static void vlv_enable_backlight(struct intel_connector *connector) + + ctl2 = I915_READ(VLV_BLC_PWM_CTL2(pipe)); + if (ctl2 & BLM_PWM_ENABLE) { +- WARN(1, "backlight already enabled\n"); ++ DRM_DEBUG_KMS("backlight already enabled\n"); + ctl2 &= ~BLM_PWM_ENABLE; + I915_WRITE(VLV_BLC_PWM_CTL2(pipe), ctl2); + } +diff --git a/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c b/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c +index 2d9b9d7a7992..f3edd2841f2d 100644 +--- a/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c ++++ b/drivers/gpu/drm/nouveau/core/subdev/bios/dcb.c +@@ -124,6 +124,7 @@ dcb_outp_parse(struct nouveau_bios *bios, u8 idx, u8 *ver, u8 *len, + struct dcb_output *outp) + { + u16 dcb = dcb_outp(bios, idx, ver, len); ++ memset(outp, 0x00, sizeof(*outp)); + if (dcb) { + if (*ver >= 0x20) { + u32 conn = nv_ro32(bios, dcb + 0x00); +diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c +index 798bde2e5881..c39c414c7751 100644 +--- a/drivers/gpu/drm/qxl/qxl_display.c ++++ b/drivers/gpu/drm/qxl/qxl_display.c +@@ -523,7 +523,6 @@ static int qxl_crtc_mode_set(struct drm_crtc *crtc, + struct qxl_framebuffer *qfb; + struct qxl_bo *bo, *old_bo = NULL; + struct qxl_crtc *qcrtc = to_qxl_crtc(crtc); +- uint32_t width, height, base_offset; + bool recreate_primary = false; + int ret; + int surf_id; +@@ -553,9 +552,10 @@ static int qxl_crtc_mode_set(struct drm_crtc *crtc, + if (qcrtc->index == 0) + recreate_primary = true; + +- width = mode->hdisplay; +- height = mode->vdisplay; +- base_offset = 0; ++ if (bo->surf.stride * bo->surf.height > qdev->vram_size) { ++ DRM_ERROR("Mode doesn't fit in vram size (vgamem)"); ++ return -EINVAL; ++ } + + ret = qxl_bo_reserve(bo, false); + if (ret != 0) +@@ -569,10 +569,10 @@ static int 
qxl_crtc_mode_set(struct drm_crtc *crtc, + if (recreate_primary) { + qxl_io_destroy_primary(qdev); + qxl_io_log(qdev, +- "recreate primary: %dx%d (was %dx%d,%d,%d)\n", +- width, height, bo->surf.width, +- bo->surf.height, bo->surf.stride, bo->surf.format); +- qxl_io_create_primary(qdev, base_offset, bo); ++ "recreate primary: %dx%d,%d,%d\n", ++ bo->surf.width, bo->surf.height, ++ bo->surf.stride, bo->surf.format); ++ qxl_io_create_primary(qdev, 0, bo); + bo->is_primary = true; + surf_id = 0; + } else { +diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c +index 0a2f5b4bca43..879e62844b2b 100644 +--- a/drivers/gpu/drm/radeon/si_dpm.c ++++ b/drivers/gpu/drm/radeon/si_dpm.c +@@ -6200,7 +6200,7 @@ static void si_parse_pplib_clock_info(struct radeon_device *rdev, + if ((rps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV) && + index == 0) { + /* XXX disable for A0 tahiti */ +- si_pi->ulv.supported = true; ++ si_pi->ulv.supported = false; + si_pi->ulv.pl = *pl; + si_pi->ulv.one_pcie_lane_in_ulv = false; + si_pi->ulv.volt_change_delay = SISLANDS_ULVVOLTAGECHANGEDELAY_DFLT; +diff --git a/drivers/gpu/drm/tilcdc/tilcdc_drv.c b/drivers/gpu/drm/tilcdc/tilcdc_drv.c +index 0644429f8559..52b47115b5cb 100644 +--- a/drivers/gpu/drm/tilcdc/tilcdc_drv.c ++++ b/drivers/gpu/drm/tilcdc/tilcdc_drv.c +@@ -84,6 +84,7 @@ static int modeset_init(struct drm_device *dev) + if ((priv->num_encoders == 0) || (priv->num_connectors == 0)) { + /* oh nos! */ + dev_err(dev->dev, "no encoders/connectors found\n"); ++ drm_mode_config_cleanup(dev); + return -ENXIO; + } + +@@ -178,33 +179,37 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) + dev->dev_private = priv; + + priv->wq = alloc_ordered_workqueue("tilcdc", 0); ++ if (!priv->wq) { ++ ret = -ENOMEM; ++ goto fail_free_priv; ++ } + + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) { + dev_err(dev->dev, "failed to get memory resource\n"); + ret = -EINVAL; +- goto fail; ++ goto fail_free_wq; + } + + priv->mmio = ioremap_nocache(res->start, resource_size(res)); + if (!priv->mmio) { + dev_err(dev->dev, "failed to ioremap\n"); + ret = -ENOMEM; +- goto fail; ++ goto fail_free_wq; + } + + priv->clk = clk_get(dev->dev, "fck"); + if (IS_ERR(priv->clk)) { + dev_err(dev->dev, "failed to get functional clock\n"); + ret = -ENODEV; +- goto fail; ++ goto fail_iounmap; + } + + priv->disp_clk = clk_get(dev->dev, "dpll_disp_ck"); + if (IS_ERR(priv->clk)) { + dev_err(dev->dev, "failed to get display clock\n"); + ret = -ENODEV; +- goto fail; ++ goto fail_put_clk; + } + + #ifdef CONFIG_CPU_FREQ +@@ -214,7 +219,7 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) + CPUFREQ_TRANSITION_NOTIFIER); + if (ret) { + dev_err(dev->dev, "failed to register cpufreq notifier\n"); +- goto fail; ++ goto fail_put_disp_clk; + } + #endif + +@@ -259,13 +264,13 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) + ret = modeset_init(dev); + if (ret < 0) { + dev_err(dev->dev, "failed to initialize mode setting\n"); +- goto fail; ++ goto fail_cpufreq_unregister; + } + + ret = drm_vblank_init(dev, 1); + if (ret < 0) { + dev_err(dev->dev, "failed to initialize vblank\n"); +- goto fail; ++ goto fail_mode_config_cleanup; + } + + pm_runtime_get_sync(dev->dev); +@@ -273,7 +278,7 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) + pm_runtime_put_sync(dev->dev); + if (ret < 0) { + dev_err(dev->dev, "failed to install IRQ handler\n"); +- goto fail; ++ goto fail_vblank_cleanup; + } + + 
platform_set_drvdata(pdev, dev); +@@ -289,13 +294,48 @@ static int tilcdc_load(struct drm_device *dev, unsigned long flags) + priv->fbdev = drm_fbdev_cma_init(dev, bpp, + dev->mode_config.num_crtc, + dev->mode_config.num_connector); ++ if (IS_ERR(priv->fbdev)) { ++ ret = PTR_ERR(priv->fbdev); ++ goto fail_irq_uninstall; ++ } + + drm_kms_helper_poll_init(dev); + + return 0; + +-fail: +- tilcdc_unload(dev); ++fail_irq_uninstall: ++ pm_runtime_get_sync(dev->dev); ++ drm_irq_uninstall(dev); ++ pm_runtime_put_sync(dev->dev); ++ ++fail_vblank_cleanup: ++ drm_vblank_cleanup(dev); ++ ++fail_mode_config_cleanup: ++ drm_mode_config_cleanup(dev); ++ ++fail_cpufreq_unregister: ++ pm_runtime_disable(dev->dev); ++#ifdef CONFIG_CPU_FREQ ++ cpufreq_unregister_notifier(&priv->freq_transition, ++ CPUFREQ_TRANSITION_NOTIFIER); ++fail_put_disp_clk: ++ clk_put(priv->disp_clk); ++#endif ++ ++fail_put_clk: ++ clk_put(priv->clk); ++ ++fail_iounmap: ++ iounmap(priv->mmio); ++ ++fail_free_wq: ++ flush_workqueue(priv->wq); ++ destroy_workqueue(priv->wq); ++ ++fail_free_priv: ++ dev->dev_private = NULL; ++ kfree(priv); + return ret; + } + +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +index 0083cbf99edf..fb7c36e93fd4 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +@@ -688,7 +688,11 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) + goto out_err0; + } + +- if (unlikely(dev_priv->prim_bb_mem < dev_priv->vram_size)) ++ /* ++ * Limit back buffer size to VRAM size. Remove this once ++ * screen targets are implemented. ++ */ ++ if (dev_priv->prim_bb_mem > dev_priv->vram_size) + dev_priv->prim_bb_mem = dev_priv->vram_size; + + mutex_unlock(&dev_priv->hw_mutex); +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +index 8a650413dea5..c8f8ecf7b282 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c +@@ -1954,6 +1954,14 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector, + DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) + }; + int i; ++ u32 assumed_bpp = 2; ++ ++ /* ++ * If using screen objects, then assume 32-bpp because that's what the ++ * SVGA device is assuming ++ */ ++ if (dev_priv->sou_priv) ++ assumed_bpp = 4; + + /* Add preferred mode */ + { +@@ -1964,8 +1972,9 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector, + mode->vdisplay = du->pref_height; + vmw_guess_mode_timing(mode); + +- if (vmw_kms_validate_mode_vram(dev_priv, mode->hdisplay * 2, +- mode->vdisplay)) { ++ if (vmw_kms_validate_mode_vram(dev_priv, ++ mode->hdisplay * assumed_bpp, ++ mode->vdisplay)) { + drm_mode_probed_add(connector, mode); + } else { + drm_mode_destroy(dev, mode); +@@ -1987,7 +1996,8 @@ int vmw_du_connector_fill_modes(struct drm_connector *connector, + bmode->vdisplay > max_height) + continue; + +- if (!vmw_kms_validate_mode_vram(dev_priv, bmode->hdisplay * 2, ++ if (!vmw_kms_validate_mode_vram(dev_priv, ++ bmode->hdisplay * assumed_bpp, + bmode->vdisplay)) + continue; + +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index 6e12cd0317f6..91bc66b4b151 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -292,6 +292,11 @@ + #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_73F7 0x73f7 + #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001 0xa001 + ++#define USB_VENDOR_ID_ELAN 0x04f3 ++#define USB_DEVICE_ID_ELAN_TOUCHSCREEN 0x0089 ++#define USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B 0x009b ++#define 
USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F 0x016f ++ + #define USB_VENDOR_ID_ELECOM 0x056e + #define USB_DEVICE_ID_ELECOM_BM084 0x0061 + +diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c +index 44df131d390a..617c47f9ebe6 100644 +--- a/drivers/hid/usbhid/hid-core.c ++++ b/drivers/hid/usbhid/hid-core.c +@@ -82,7 +82,7 @@ static int hid_start_in(struct hid_device *hid) + struct usbhid_device *usbhid = hid->driver_data; + + spin_lock_irqsave(&usbhid->lock, flags); +- if (hid->open > 0 && ++ if ((hid->open > 0 || hid->quirks & HID_QUIRK_ALWAYS_POLL) && + !test_bit(HID_DISCONNECTED, &usbhid->iofl) && + !test_bit(HID_SUSPENDED, &usbhid->iofl) && + !test_and_set_bit(HID_IN_RUNNING, &usbhid->iofl)) { +@@ -292,6 +292,8 @@ static void hid_irq_in(struct urb *urb) + case 0: /* success */ + usbhid_mark_busy(usbhid); + usbhid->retry_delay = 0; ++ if ((hid->quirks & HID_QUIRK_ALWAYS_POLL) && !hid->open) ++ break; + hid_input_report(urb->context, HID_INPUT_REPORT, + urb->transfer_buffer, + urb->actual_length, 1); +@@ -734,8 +736,10 @@ void usbhid_close(struct hid_device *hid) + if (!--hid->open) { + spin_unlock_irq(&usbhid->lock); + hid_cancel_delayed_stuff(usbhid); +- usb_kill_urb(usbhid->urbin); +- usbhid->intf->needs_remote_wakeup = 0; ++ if (!(hid->quirks & HID_QUIRK_ALWAYS_POLL)) { ++ usb_kill_urb(usbhid->urbin); ++ usbhid->intf->needs_remote_wakeup = 0; ++ } + } else { + spin_unlock_irq(&usbhid->lock); + } +@@ -1119,6 +1123,19 @@ static int usbhid_start(struct hid_device *hid) + + set_bit(HID_STARTED, &usbhid->iofl); + ++ if (hid->quirks & HID_QUIRK_ALWAYS_POLL) { ++ ret = usb_autopm_get_interface(usbhid->intf); ++ if (ret) ++ goto fail; ++ usbhid->intf->needs_remote_wakeup = 1; ++ ret = hid_start_in(hid); ++ if (ret) { ++ dev_err(&hid->dev, ++ "failed to start in urb: %d\n", ret); ++ } ++ usb_autopm_put_interface(usbhid->intf); ++ } ++ + /* Some keyboards don't work until their LEDs have been set. + * Since BIOSes do set the LEDs, it must be safe for any device + * that supports the keyboard boot protocol. 
+@@ -1151,6 +1168,9 @@ static void usbhid_stop(struct hid_device *hid) + if (WARN_ON(!usbhid)) + return; + ++ if (hid->quirks & HID_QUIRK_ALWAYS_POLL) ++ usbhid->intf->needs_remote_wakeup = 0; ++ + clear_bit(HID_STARTED, &usbhid->iofl); + spin_lock_irq(&usbhid->lock); /* Sync with error and led handlers */ + set_bit(HID_DISCONNECTED, &usbhid->iofl); +diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c +index 8e4ddb369883..deb364306636 100644 +--- a/drivers/hid/usbhid/hid-quirks.c ++++ b/drivers/hid/usbhid/hid-quirks.c +@@ -69,6 +69,9 @@ static const struct hid_blacklist { + { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK, HID_QUIRK_NOGET }, + { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET }, + { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET }, ++ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN, HID_QUIRK_ALWAYS_POLL }, ++ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B, HID_QUIRK_ALWAYS_POLL }, ++ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F, HID_QUIRK_ALWAYS_POLL }, + { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, + { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS }, + { USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET }, +diff --git a/drivers/i2c/busses/i2c-at91.c b/drivers/i2c/busses/i2c-at91.c +index 11e9c7f9bf9b..8873d84e1d4f 100644 +--- a/drivers/i2c/busses/i2c-at91.c ++++ b/drivers/i2c/busses/i2c-at91.c +@@ -434,7 +434,7 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev) + } + } + +- ret = wait_for_completion_io_timeout(&dev->cmd_complete, ++ ret = wait_for_completion_timeout(&dev->cmd_complete, + dev->adapter.timeout); + if (ret == 0) { + dev_err(dev->dev, "controller timed out\n"); +diff --git a/drivers/iio/common/st_sensors/st_sensors_buffer.c b/drivers/iio/common/st_sensors/st_sensors_buffer.c +index 1665c8e4b62b..e18bc6782256 100644 +--- a/drivers/iio/common/st_sensors/st_sensors_buffer.c ++++ b/drivers/iio/common/st_sensors/st_sensors_buffer.c +@@ -71,7 +71,7 @@ int st_sensors_get_buffer_element(struct iio_dev *indio_dev, u8 *buf) + goto st_sensors_free_memory; + } + +- for (i = 0; i < n * num_data_channels; i++) { ++ for (i = 0; i < n * byte_for_channel; i++) { + if (i < n) + buf[i] = rx_array[i]; + else +diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h +index f1da362c3e65..8fca488fdc15 100644 +--- a/drivers/input/serio/i8042-x86ia64io.h ++++ b/drivers/input/serio/i8042-x86ia64io.h +@@ -101,6 +101,12 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = { + }, + { + .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "X750LN"), ++ }, ++ }, ++ { ++ .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Compaq"), + DMI_MATCH(DMI_PRODUCT_NAME , "ProLiant"), + DMI_MATCH(DMI_PRODUCT_VERSION, "8500"), +@@ -609,6 +615,22 @@ static const struct dmi_system_id __initconst i8042_dmi_notimeout_table[] = { + }, + }, + { ++ /* Fujitsu A544 laptop */ ++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1111138 */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK A544"), ++ }, ++ }, ++ { ++ /* Fujitsu AH544 laptop */ ++ /* https://bugzilla.kernel.org/show_bug.cgi?id=69731 */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK AH544"), ++ }, ++ }, ++ { + /* Fujitsu U574 laptop */ + /* 
https://bugzilla.kernel.org/show_bug.cgi?id=69731 */ + .matches = { +diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c +index 0e722c103562..ca1621b49453 100644 +--- a/drivers/md/dm-bufio.c ++++ b/drivers/md/dm-bufio.c +@@ -465,6 +465,7 @@ static void __relink_lru(struct dm_buffer *b, int dirty) + c->n_buffers[dirty]++; + b->list_mode = dirty; + list_move(&b->lru_list, &c->lru[dirty]); ++ b->last_accessed = jiffies; + } + + /*---------------------------------------------------------------- +@@ -1485,9 +1486,9 @@ static long __scan(struct dm_bufio_client *c, unsigned long nr_to_scan, + list_for_each_entry_safe_reverse(b, tmp, &c->lru[l], lru_list) { + freed += __cleanup_old_buffer(b, gfp_mask, 0); + if (!--nr_to_scan) +- break; ++ return freed; ++ dm_bufio_cond_resched(); + } +- dm_bufio_cond_resched(); + } + return freed; + } +diff --git a/drivers/md/dm-log-userspace-transfer.c b/drivers/md/dm-log-userspace-transfer.c +index 08d9a207259a..c69d0b787746 100644 +--- a/drivers/md/dm-log-userspace-transfer.c ++++ b/drivers/md/dm-log-userspace-transfer.c +@@ -272,7 +272,7 @@ int dm_ulog_tfr_init(void) + + r = cn_add_callback(&ulog_cn_id, "dmlogusr", cn_ulog_callback); + if (r) { +- cn_del_callback(&ulog_cn_id); ++ kfree(prealloced_cn_msg); + return r; + } + +diff --git a/drivers/media/dvb-frontends/ds3000.c b/drivers/media/dvb-frontends/ds3000.c +index 1e344b033277..22e8c2032f6d 100644 +--- a/drivers/media/dvb-frontends/ds3000.c ++++ b/drivers/media/dvb-frontends/ds3000.c +@@ -864,6 +864,13 @@ struct dvb_frontend *ds3000_attach(const struct ds3000_config *config, + memcpy(&state->frontend.ops, &ds3000_ops, + sizeof(struct dvb_frontend_ops)); + state->frontend.demodulator_priv = state; ++ ++ /* ++ * Some devices like T480 starts with voltage on. Be sure ++ * to turn voltage off during init, as this can otherwise ++ * interfere with Unicable SCR systems. 
++ */ ++ ds3000_set_voltage(&state->frontend, SEC_VOLTAGE_OFF); + return &state->frontend; + + error3: +diff --git a/drivers/media/i2c/tda7432.c b/drivers/media/i2c/tda7432.c +index 72af644fa051..cf93021a6500 100644 +--- a/drivers/media/i2c/tda7432.c ++++ b/drivers/media/i2c/tda7432.c +@@ -293,7 +293,7 @@ static int tda7432_s_ctrl(struct v4l2_ctrl *ctrl) + if (t->mute->val) { + lf |= TDA7432_MUTE; + lr |= TDA7432_MUTE; +- lf |= TDA7432_MUTE; ++ rf |= TDA7432_MUTE; + rr |= TDA7432_MUTE; + } + /* Mute & update balance*/ +diff --git a/drivers/media/tuners/m88ts2022.c b/drivers/media/tuners/m88ts2022.c +index 40c42dec721b..7a62097aa9ea 100644 +--- a/drivers/media/tuners/m88ts2022.c ++++ b/drivers/media/tuners/m88ts2022.c +@@ -314,7 +314,7 @@ static int m88ts2022_set_params(struct dvb_frontend *fe) + div_min = gdiv28 * 78 / 100; + div_max = clamp_val(div_max, 0U, 63U); + +- f_3db_hz = c->symbol_rate * 135UL / 200UL; ++ f_3db_hz = mult_frac(c->symbol_rate, 135, 200); + f_3db_hz += 2000000U + (frequency_offset_khz * 1000U); + f_3db_hz = clamp(f_3db_hz, 7000000U, 40000000U); + +diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c +index 4d97a76cc3b0..c1a3f8f95750 100644 +--- a/drivers/media/usb/em28xx/em28xx-cards.c ++++ b/drivers/media/usb/em28xx/em28xx-cards.c +@@ -2993,16 +2993,6 @@ static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev, + } + } + +- if (dev->chip_id == CHIP_ID_EM2870 || +- dev->chip_id == CHIP_ID_EM2874 || +- dev->chip_id == CHIP_ID_EM28174 || +- dev->chip_id == CHIP_ID_EM28178) { +- /* Digital only device - don't load any alsa module */ +- dev->audio_mode.has_audio = false; +- dev->has_audio_class = false; +- dev->has_alsa_audio = false; +- } +- + if (chip_name != default_chip_name) + printk(KERN_INFO DRIVER_NAME + ": chip ID is %s\n", chip_name); +@@ -3272,7 +3262,6 @@ static int em28xx_usb_probe(struct usb_interface *interface, + dev->alt = -1; + dev->is_audio_only = has_audio && !(has_video || has_dvb); + dev->has_alsa_audio = has_audio; +- dev->audio_mode.has_audio = has_audio; + dev->has_video = has_video; + dev->ifnum = ifnum; + +diff --git a/drivers/media/usb/em28xx/em28xx-core.c b/drivers/media/usb/em28xx/em28xx-core.c +index 898fb9bd88a2..97fd881a4e7b 100644 +--- a/drivers/media/usb/em28xx/em28xx-core.c ++++ b/drivers/media/usb/em28xx/em28xx-core.c +@@ -506,8 +506,18 @@ int em28xx_audio_setup(struct em28xx *dev) + int vid1, vid2, feat, cfg; + u32 vid; + +- if (!dev->audio_mode.has_audio) ++ if (dev->chip_id == CHIP_ID_EM2870 || ++ dev->chip_id == CHIP_ID_EM2874 || ++ dev->chip_id == CHIP_ID_EM28174 || ++ dev->chip_id == CHIP_ID_EM28178) { ++ /* Digital only device - don't load any alsa module */ ++ dev->audio_mode.has_audio = false; ++ dev->has_audio_class = false; ++ dev->has_alsa_audio = false; + return 0; ++ } ++ ++ dev->audio_mode.has_audio = true; + + /* See how this device is configured */ + cfg = em28xx_read_reg(dev, EM28XX_R00_CHIPCFG); +diff --git a/drivers/media/usb/em28xx/em28xx-video.c b/drivers/media/usb/em28xx/em28xx-video.c +index c3c928937dcd..e24ee08e634e 100644 +--- a/drivers/media/usb/em28xx/em28xx-video.c ++++ b/drivers/media/usb/em28xx/em28xx-video.c +@@ -953,13 +953,16 @@ static int em28xx_stop_streaming(struct vb2_queue *vq) + } + + spin_lock_irqsave(&dev->slock, flags); ++ if (dev->usb_ctl.vid_buf != NULL) { ++ vb2_buffer_done(&dev->usb_ctl.vid_buf->vb, VB2_BUF_STATE_ERROR); ++ dev->usb_ctl.vid_buf = NULL; ++ } + while (!list_empty(&vidq->active)) { + struct em28xx_buffer *buf; + buf = 
list_entry(vidq->active.next, struct em28xx_buffer, list); + list_del(&buf->list); + vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR); + } +- dev->usb_ctl.vid_buf = NULL; + spin_unlock_irqrestore(&dev->slock, flags); + + return 0; +@@ -981,13 +984,16 @@ int em28xx_stop_vbi_streaming(struct vb2_queue *vq) + } + + spin_lock_irqsave(&dev->slock, flags); ++ if (dev->usb_ctl.vbi_buf != NULL) { ++ vb2_buffer_done(&dev->usb_ctl.vbi_buf->vb, VB2_BUF_STATE_ERROR); ++ dev->usb_ctl.vbi_buf = NULL; ++ } + while (!list_empty(&vbiq->active)) { + struct em28xx_buffer *buf; + buf = list_entry(vbiq->active.next, struct em28xx_buffer, list); + list_del(&buf->list); + vb2_buffer_done(&buf->vb, VB2_BUF_STATE_ERROR); + } +- dev->usb_ctl.vbi_buf = NULL; + spin_unlock_irqrestore(&dev->slock, flags); + + return 0; +diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c +index c3bb2502225b..753ad4cfc118 100644 +--- a/drivers/media/usb/uvc/uvc_driver.c ++++ b/drivers/media/usb/uvc/uvc_driver.c +@@ -2210,6 +2210,15 @@ static struct usb_device_id uvc_ids[] = { + .bInterfaceSubClass = 1, + .bInterfaceProtocol = 0, + .driver_info = UVC_QUIRK_PROBE_DEF }, ++ /* Dell XPS M1330 (OmniVision OV7670 webcam) */ ++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE ++ | USB_DEVICE_ID_MATCH_INT_INFO, ++ .idVendor = 0x05a9, ++ .idProduct = 0x7670, ++ .bInterfaceClass = USB_CLASS_VIDEO, ++ .bInterfaceSubClass = 1, ++ .bInterfaceProtocol = 0, ++ .driver_info = UVC_QUIRK_PROBE_DEF }, + /* Apple Built-In iSight */ + { .match_flags = USB_DEVICE_ID_MATCH_DEVICE + | USB_DEVICE_ID_MATCH_INT_INFO, +diff --git a/drivers/media/v4l2-core/v4l2-common.c b/drivers/media/v4l2-core/v4l2-common.c +index 433d6d77942e..c5521cec933b 100644 +--- a/drivers/media/v4l2-core/v4l2-common.c ++++ b/drivers/media/v4l2-core/v4l2-common.c +@@ -431,16 +431,13 @@ static unsigned int clamp_align(unsigned int x, unsigned int min, + /* Bits that must be zero to be aligned */ + unsigned int mask = ~((1 << align) - 1); + ++ /* Clamp to aligned min and max */ ++ x = clamp(x, (min + ~mask) & mask, max & mask); ++ + /* Round to nearest aligned value */ + if (align) + x = (x + (1 << (align - 1))) & mask; + +- /* Clamp to aligned value of min and max */ +- if (x < min) +- x = (min + ~mask) & mask; +- else if (x > max) +- x = max & mask; +- + return x; + } + +diff --git a/drivers/mfd/rtsx_pcr.c b/drivers/mfd/rtsx_pcr.c +index 1d15735f9ef9..89b4c4216d0c 100644 +--- a/drivers/mfd/rtsx_pcr.c ++++ b/drivers/mfd/rtsx_pcr.c +@@ -1177,7 +1177,7 @@ static int rtsx_pci_probe(struct pci_dev *pcidev, + pcr->msi_en = msi_en; + if (pcr->msi_en) { + ret = pci_enable_msi(pcidev); +- if (ret < 0) ++ if (ret) + pcr->msi_en = false; + } + +diff --git a/drivers/mfd/ti_am335x_tscadc.c b/drivers/mfd/ti_am335x_tscadc.c +index d4e860413bb5..e87a2485468f 100644 +--- a/drivers/mfd/ti_am335x_tscadc.c ++++ b/drivers/mfd/ti_am335x_tscadc.c +@@ -54,11 +54,11 @@ void am335x_tsc_se_set_cache(struct ti_tscadc_dev *tsadc, u32 val) + unsigned long flags; + + spin_lock_irqsave(&tsadc->reg_lock, flags); +- tsadc->reg_se_cache = val; ++ tsadc->reg_se_cache |= val; + if (tsadc->adc_waiting) + wake_up(&tsadc->reg_se_wait); + else if (!tsadc->adc_in_use) +- tscadc_writel(tsadc, REG_SE, val); ++ tscadc_writel(tsadc, REG_SE, tsadc->reg_se_cache); + + spin_unlock_irqrestore(&tsadc->reg_lock, flags); + } +@@ -97,6 +97,7 @@ static void am335x_tscadc_need_adc(struct ti_tscadc_dev *tsadc) + void am335x_tsc_se_set_once(struct ti_tscadc_dev *tsadc, u32 val) + { + spin_lock_irq(&tsadc->reg_lock); ++ 
tsadc->reg_se_cache |= val; + am335x_tscadc_need_adc(tsadc); + + tscadc_writel(tsadc, REG_SE, val); +diff --git a/drivers/mmc/host/rtsx_pci_sdmmc.c b/drivers/mmc/host/rtsx_pci_sdmmc.c +index 7e1866175e7b..ca297d741207 100644 +--- a/drivers/mmc/host/rtsx_pci_sdmmc.c ++++ b/drivers/mmc/host/rtsx_pci_sdmmc.c +@@ -342,6 +342,13 @@ static void sd_send_cmd_get_rsp(struct realtek_pci_sdmmc *host, + } + + if (rsp_type == SD_RSP_TYPE_R2) { ++ /* ++ * The controller offloads the last byte {CRC-7, end bit 1'b1} ++ * of response type R2. Assign dummy CRC, 0, and end bit to the ++ * byte(ptr[16], goes into the LSB of resp[3] later). ++ */ ++ ptr[16] = 1; ++ + for (i = 0; i < 4; i++) { + cmd->resp[i] = get_unaligned_be32(ptr + 1 + i * 4); + dev_dbg(sdmmc_dev(host), "cmd->resp[%d] = 0x%08x\n", +diff --git a/drivers/mmc/host/sdhci-pci.c b/drivers/mmc/host/sdhci-pci.c +index 0955777b6c7e..19bfa0ad70c4 100644 +--- a/drivers/mmc/host/sdhci-pci.c ++++ b/drivers/mmc/host/sdhci-pci.c +@@ -103,6 +103,10 @@ static const struct sdhci_pci_fixes sdhci_cafe = { + SDHCI_QUIRK_BROKEN_TIMEOUT_VAL, + }; + ++static const struct sdhci_pci_fixes sdhci_intel_qrk = { ++ .quirks = SDHCI_QUIRK_NO_HISPD_BIT, ++}; ++ + static int mrst_hc_probe_slot(struct sdhci_pci_slot *slot) + { + slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA; +@@ -733,6 +737,14 @@ static const struct pci_device_id pci_ids[] = { + + { + .vendor = PCI_VENDOR_ID_INTEL, ++ .device = PCI_DEVICE_ID_INTEL_QRK_SD, ++ .subvendor = PCI_ANY_ID, ++ .subdevice = PCI_ANY_ID, ++ .driver_data = (kernel_ulong_t)&sdhci_intel_qrk, ++ }, ++ ++ { ++ .vendor = PCI_VENDOR_ID_INTEL, + .device = PCI_DEVICE_ID_INTEL_MRST_SD0, + .subvendor = PCI_ANY_ID, + .subdevice = PCI_ANY_ID, +diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h +index 6d718719659e..c101477ef3be 100644 +--- a/drivers/mmc/host/sdhci-pci.h ++++ b/drivers/mmc/host/sdhci-pci.h +@@ -17,6 +17,7 @@ + #define PCI_DEVICE_ID_INTEL_CLV_SDIO2 0x08fb + #define PCI_DEVICE_ID_INTEL_CLV_EMMC0 0x08e5 + #define PCI_DEVICE_ID_INTEL_CLV_EMMC1 0x08e6 ++#define PCI_DEVICE_ID_INTEL_QRK_SD 0x08A7 + + /* + * PCI registers +diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c +index c5dad652614d..904b4517fc1e 100644 +--- a/drivers/mtd/ubi/fastmap.c ++++ b/drivers/mtd/ubi/fastmap.c +@@ -330,6 +330,7 @@ static int process_pool_aeb(struct ubi_device *ubi, struct ubi_attach_info *ai, + av = tmp_av; + else { + ubi_err("orphaned volume in fastmap pool!"); ++ kmem_cache_free(ai->aeb_slab_cache, new_aeb); + return UBI_BAD_FASTMAP; + } + +diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig +index 494b888a6568..7e5c6a8b89e7 100644 +--- a/drivers/net/Kconfig ++++ b/drivers/net/Kconfig +@@ -135,6 +135,7 @@ config MACVLAN + config MACVTAP + tristate "MAC-VLAN based tap driver" + depends on MACVLAN ++ depends on INET + help + This adds a specialized tap character device driver that is based + on the MAC-VLAN network interface, called macvtap. 
A macvtap device +@@ -205,6 +206,7 @@ config RIONET_RX_SIZE + + config TUN + tristate "Universal TUN/TAP device driver support" ++ depends on INET + select CRC32 + ---help--- + TUN/TAP provides packet reception and transmission for user space +diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c +index 0c6adaaf898c..f30ceb17d5fc 100644 +--- a/drivers/net/macvtap.c ++++ b/drivers/net/macvtap.c +@@ -16,6 +16,7 @@ + #include + #include + ++#include + #include + #include + #include +@@ -65,7 +66,7 @@ static struct cdev macvtap_cdev; + static const struct proto_ops macvtap_socket_ops; + + #define TUN_OFFLOADS (NETIF_F_HW_CSUM | NETIF_F_TSO_ECN | NETIF_F_TSO | \ +- NETIF_F_TSO6 | NETIF_F_UFO) ++ NETIF_F_TSO6) + #define RX_OFFLOADS (NETIF_F_GRO | NETIF_F_LRO) + #define TAP_FEATURES (NETIF_F_GSO | NETIF_F_SG) + +@@ -569,7 +570,11 @@ static int macvtap_skb_from_vnet_hdr(struct sk_buff *skb, + gso_type = SKB_GSO_TCPV6; + break; + case VIRTIO_NET_HDR_GSO_UDP: ++ pr_warn_once("macvtap: %s: using disabled UFO feature; please fix this program\n", ++ current->comm); + gso_type = SKB_GSO_UDP; ++ if (skb->protocol == htons(ETH_P_IPV6)) ++ ipv6_proxy_select_ident(skb); + break; + default: + return -EINVAL; +@@ -614,8 +619,6 @@ static void macvtap_skb_to_vnet_hdr(const struct sk_buff *skb, + vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4; + else if (sinfo->gso_type & SKB_GSO_TCPV6) + vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6; +- else if (sinfo->gso_type & SKB_GSO_UDP) +- vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP; + else + BUG(); + if (sinfo->gso_type & SKB_GSO_TCP_ECN) +@@ -950,9 +953,6 @@ static int set_offload(struct macvtap_queue *q, unsigned long arg) + if (arg & TUN_F_TSO6) + feature_mask |= NETIF_F_TSO6; + } +- +- if (arg & TUN_F_UFO) +- feature_mask |= NETIF_F_UFO; + } + + /* tun/tap driver inverts the usage for TSO offloads, where +@@ -963,7 +963,7 @@ static int set_offload(struct macvtap_queue *q, unsigned long arg) + * When user space turns off TSO, we turn off GSO/LRO so that + * user-space will not receive TSO frames. 
+ */ +- if (feature_mask & (NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_UFO)) ++ if (feature_mask & (NETIF_F_TSO | NETIF_F_TSO6)) + features |= RX_OFFLOADS; + else + features &= ~RX_OFFLOADS; +@@ -1064,7 +1064,7 @@ static long macvtap_ioctl(struct file *file, unsigned int cmd, + case TUNSETOFFLOAD: + /* let the user check for future flags */ + if (arg & ~(TUN_F_CSUM | TUN_F_TSO4 | TUN_F_TSO6 | +- TUN_F_TSO_ECN | TUN_F_UFO)) ++ TUN_F_TSO_ECN)) + return -EINVAL; + + rtnl_lock(); +diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c +index 72ff14b811c6..5a1897d86e94 100644 +--- a/drivers/net/ppp/ppp_generic.c ++++ b/drivers/net/ppp/ppp_generic.c +@@ -601,7 +601,7 @@ static long ppp_ioctl(struct file *file, unsigned int cmd, unsigned long arg) + if (file == ppp->owner) + ppp_shutdown_interface(ppp); + } +- if (atomic_long_read(&file->f_count) <= 2) { ++ if (atomic_long_read(&file->f_count) < 2) { + ppp_release(NULL, file); + err = 0; + } else +diff --git a/drivers/net/tun.c b/drivers/net/tun.c +index 26f8635b027d..2c8b1c21c452 100644 +--- a/drivers/net/tun.c ++++ b/drivers/net/tun.c +@@ -65,6 +65,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -174,7 +175,7 @@ struct tun_struct { + struct net_device *dev; + netdev_features_t set_features; + #define TUN_USER_FEATURES (NETIF_F_HW_CSUM|NETIF_F_TSO_ECN|NETIF_F_TSO| \ +- NETIF_F_TSO6|NETIF_F_UFO) ++ NETIF_F_TSO6) + + int vnet_hdr_sz; + int sndbuf; +@@ -1140,6 +1141,8 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile, + break; + } + ++ skb_reset_network_header(skb); ++ + if (gso.gso_type != VIRTIO_NET_HDR_GSO_NONE) { + pr_debug("GSO!\n"); + switch (gso.gso_type & ~VIRTIO_NET_HDR_GSO_ECN) { +@@ -1150,8 +1153,20 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile, + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + break; + case VIRTIO_NET_HDR_GSO_UDP: ++ { ++ static bool warned; ++ ++ if (!warned) { ++ warned = true; ++ netdev_warn(tun->dev, ++ "%s: using disabled UFO feature; please fix this program\n", ++ current->comm); ++ } + skb_shinfo(skb)->gso_type = SKB_GSO_UDP; ++ if (skb->protocol == htons(ETH_P_IPV6)) ++ ipv6_proxy_select_ident(skb); + break; ++ } + default: + tun->dev->stats.rx_frame_errors++; + kfree_skb(skb); +@@ -1180,7 +1195,6 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile, + skb_shinfo(skb)->tx_flags |= SKBTX_SHARED_FRAG; + } + +- skb_reset_network_header(skb); + skb_probe_transport_header(skb, 0); + + rxhash = skb_get_hash(skb); +@@ -1252,8 +1266,6 @@ static ssize_t tun_put_user(struct tun_struct *tun, + gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV4; + else if (sinfo->gso_type & SKB_GSO_TCPV6) + gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV6; +- else if (sinfo->gso_type & SKB_GSO_UDP) +- gso.gso_type = VIRTIO_NET_HDR_GSO_UDP; + else { + pr_err("unexpected GSO type: " + "0x%x, gso_size %d, hdr_len %d\n", +@@ -1783,11 +1795,6 @@ static int set_offload(struct tun_struct *tun, unsigned long arg) + features |= NETIF_F_TSO6; + arg &= ~(TUN_F_TSO4|TUN_F_TSO6); + } +- +- if (arg & TUN_F_UFO) { +- features |= NETIF_F_UFO; +- arg &= ~TUN_F_UFO; +- } + } + + /* This gives the user a way to test for new features in future by +diff --git a/drivers/net/usb/ax88179_178a.c b/drivers/net/usb/ax88179_178a.c +index 054e59ca6946..8cee173eefb2 100644 +--- a/drivers/net/usb/ax88179_178a.c ++++ b/drivers/net/usb/ax88179_178a.c +@@ -696,6 +696,7 @@ static int ax88179_set_mac_addr(struct net_device *net, void *p) + { + struct usbnet *dev = 
netdev_priv(net); + struct sockaddr *addr = p; ++ int ret; + + if (netif_running(net)) + return -EBUSY; +@@ -705,8 +706,12 @@ static int ax88179_set_mac_addr(struct net_device *net, void *p) + memcpy(net->dev_addr, addr->sa_data, ETH_ALEN); + + /* Set the MAC address */ +- return ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_NODE_ID, ETH_ALEN, ++ ret = ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_NODE_ID, ETH_ALEN, + ETH_ALEN, net->dev_addr); ++ if (ret < 0) ++ return ret; ++ ++ return 0; + } + + static const struct net_device_ops ax88179_netdev_ops = { +diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c +index 841b60831df1..07a3255fd3cc 100644 +--- a/drivers/net/virtio_net.c ++++ b/drivers/net/virtio_net.c +@@ -496,8 +496,17 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len) + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; + break; + case VIRTIO_NET_HDR_GSO_UDP: ++ { ++ static bool warned; ++ ++ if (!warned) { ++ warned = true; ++ netdev_warn(dev, ++ "host using disabled UFO feature; please fix it\n"); ++ } + skb_shinfo(skb)->gso_type = SKB_GSO_UDP; + break; ++ } + case VIRTIO_NET_HDR_GSO_TCPV6: + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + break; +@@ -836,8 +845,6 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) + hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4; + else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) + hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV6; +- else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP) +- hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_UDP; + else + BUG(); + if (skb_shinfo(skb)->gso_type & SKB_GSO_TCP_ECN) +@@ -1657,7 +1664,7 @@ static int virtnet_probe(struct virtio_device *vdev) + dev->features |= NETIF_F_HW_CSUM|NETIF_F_SG|NETIF_F_FRAGLIST; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) { +- dev->hw_features |= NETIF_F_TSO | NETIF_F_UFO ++ dev->hw_features |= NETIF_F_TSO + | NETIF_F_TSO_ECN | NETIF_F_TSO6; + } + /* Individual feature bits: what can host handle? */ +@@ -1667,11 +1674,9 @@ static int virtnet_probe(struct virtio_device *vdev) + dev->hw_features |= NETIF_F_TSO6; + if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN)) + dev->hw_features |= NETIF_F_TSO_ECN; +- if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UFO)) +- dev->hw_features |= NETIF_F_UFO; + + if (gso) +- dev->features |= dev->hw_features & (NETIF_F_ALL_TSO|NETIF_F_UFO); ++ dev->features |= dev->hw_features & NETIF_F_ALL_TSO; + /* (!csum && gso) case will be fixed by register_netdev() */ + } + if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM)) +@@ -1711,8 +1716,7 @@ static int virtnet_probe(struct virtio_device *vdev) + /* If we can receive ANY GSO packets, we must allocate large ones. 
*/ + if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || + virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) || +- virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) || +- virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO)) ++ virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN)) + vi->big_packets = true; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) +@@ -1903,9 +1907,9 @@ static struct virtio_device_id id_table[] = { + static unsigned int features[] = { + VIRTIO_NET_F_CSUM, VIRTIO_NET_F_GUEST_CSUM, + VIRTIO_NET_F_GSO, VIRTIO_NET_F_MAC, +- VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_UFO, VIRTIO_NET_F_HOST_TSO6, ++ VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_TSO6, + VIRTIO_NET_F_HOST_ECN, VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6, +- VIRTIO_NET_F_GUEST_ECN, VIRTIO_NET_F_GUEST_UFO, ++ VIRTIO_NET_F_GUEST_ECN, + VIRTIO_NET_F_MRG_RXBUF, VIRTIO_NET_F_STATUS, VIRTIO_NET_F_CTRL_VQ, + VIRTIO_NET_F_CTRL_RX, VIRTIO_NET_F_CTRL_VLAN, + VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, +diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c +index 9b40532041cb..0704a0402897 100644 +--- a/drivers/net/vxlan.c ++++ b/drivers/net/vxlan.c +@@ -1447,9 +1447,6 @@ static int neigh_reduce(struct net_device *dev, struct sk_buff *skb) + if (!in6_dev) + goto out; + +- if (!pskb_may_pull(skb, skb->len)) +- goto out; +- + iphdr = ipv6_hdr(skb); + saddr = &iphdr->saddr; + daddr = &iphdr->daddr; +@@ -1770,6 +1767,8 @@ static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan, + struct pcpu_sw_netstats *tx_stats, *rx_stats; + union vxlan_addr loopback; + union vxlan_addr *remote_ip = &dst_vxlan->default_dst.remote_ip; ++ struct net_device *dev = skb->dev; ++ int len = skb->len; + + tx_stats = this_cpu_ptr(src_vxlan->dev->tstats); + rx_stats = this_cpu_ptr(dst_vxlan->dev->tstats); +@@ -1793,16 +1792,16 @@ static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan, + + u64_stats_update_begin(&tx_stats->syncp); + tx_stats->tx_packets++; +- tx_stats->tx_bytes += skb->len; ++ tx_stats->tx_bytes += len; + u64_stats_update_end(&tx_stats->syncp); + + if (netif_rx(skb) == NET_RX_SUCCESS) { + u64_stats_update_begin(&rx_stats->syncp); + rx_stats->rx_packets++; +- rx_stats->rx_bytes += skb->len; ++ rx_stats->rx_bytes += len; + u64_stats_update_end(&rx_stats->syncp); + } else { +- skb->dev->stats.rx_dropped++; ++ dev->stats.rx_dropped++; + } + } + +@@ -1977,7 +1976,8 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev) + return arp_reduce(dev, skb); + #if IS_ENABLED(CONFIG_IPV6) + else if (ntohs(eth->h_proto) == ETH_P_IPV6 && +- skb->len >= sizeof(struct ipv6hdr) + sizeof(struct nd_msg) && ++ pskb_may_pull(skb, sizeof(struct ipv6hdr) ++ + sizeof(struct nd_msg)) && + ipv6_hdr(skb)->nexthdr == IPPROTO_ICMPV6) { + struct nd_msg *msg; + +@@ -1986,6 +1986,7 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev) + msg->icmph.icmp6_type == NDISC_NEIGHBOUR_SOLICITATION) + return neigh_reduce(dev, skb); + } ++ eth = eth_hdr(skb); + #endif + } + +diff --git a/drivers/net/wireless/iwlwifi/mvm/tx.c b/drivers/net/wireless/iwlwifi/mvm/tx.c +index 2ca62af3f81b..76ee486039d7 100644 +--- a/drivers/net/wireless/iwlwifi/mvm/tx.c ++++ b/drivers/net/wireless/iwlwifi/mvm/tx.c +@@ -173,14 +173,10 @@ static void iwl_mvm_set_tx_cmd_rate(struct iwl_mvm *mvm, + + /* + * for data packets, rate info comes from the table inside the fw. This +- * table is controlled by LINK_QUALITY commands. 
Exclude ctrl port +- * frames like EAPOLs which should be treated as mgmt frames. This +- * avoids them being sent initially in high rates which increases the +- * chances for completion of the 4-Way handshake. ++ * table is controlled by LINK_QUALITY commands + */ + +- if (ieee80211_is_data(fc) && sta && +- !(info->control.flags & IEEE80211_TX_CTRL_PORT_CTRL_PROTO)) { ++ if (ieee80211_is_data(fc) && sta) { + tx_cmd->initial_rate_index = 0; + tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE); + return; +diff --git a/drivers/net/wireless/rt2x00/rt2800.h b/drivers/net/wireless/rt2x00/rt2800.h +index 7cf6081a05a1..ebd5625d13f1 100644 +--- a/drivers/net/wireless/rt2x00/rt2800.h ++++ b/drivers/net/wireless/rt2x00/rt2800.h +@@ -52,6 +52,7 @@ + * RF5592 2.4G/5G 2T2R + * RF3070 2.4G 1T1R + * RF5360 2.4G 1T1R ++ * RF5362 2.4G 1T1R + * RF5370 2.4G 1T1R + * RF5390 2.4G 1T1R + */ +@@ -72,6 +73,7 @@ + #define RF3070 0x3070 + #define RF3290 0x3290 + #define RF5360 0x5360 ++#define RF5362 0x5362 + #define RF5370 0x5370 + #define RF5372 0x5372 + #define RF5390 0x5390 +@@ -2145,7 +2147,7 @@ struct mac_iveiv_entry { + /* Bits [7-4] for RF3320 (RT3370/RT3390), on other chipsets reserved */ + #define RFCSR3_PA1_BIAS_CCK FIELD8(0x70) + #define RFCSR3_PA2_CASCODE_BIAS_CCKK FIELD8(0x80) +-/* Bits for RF3290/RF5360/RF5370/RF5372/RF5390/RF5392 */ ++/* Bits for RF3290/RF5360/RF5362/RF5370/RF5372/RF5390/RF5392 */ + #define RFCSR3_VCOCAL_EN FIELD8(0x80) + /* Bits for RF3050 */ + #define RFCSR3_BIT1 FIELD8(0x02) +diff --git a/drivers/net/wireless/rt2x00/rt2800lib.c b/drivers/net/wireless/rt2x00/rt2800lib.c +index 41d4a8167dc3..4e16d4da9d82 100644 +--- a/drivers/net/wireless/rt2x00/rt2800lib.c ++++ b/drivers/net/wireless/rt2x00/rt2800lib.c +@@ -3142,6 +3142,7 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev, + break; + case RF3070: + case RF5360: ++ case RF5362: + case RF5370: + case RF5372: + case RF5390: +@@ -3159,6 +3160,7 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev, + rt2x00_rf(rt2x00dev, RF3290) || + rt2x00_rf(rt2x00dev, RF3322) || + rt2x00_rf(rt2x00dev, RF5360) || ++ rt2x00_rf(rt2x00dev, RF5362) || + rt2x00_rf(rt2x00dev, RF5370) || + rt2x00_rf(rt2x00dev, RF5372) || + rt2x00_rf(rt2x00dev, RF5390) || +@@ -4273,6 +4275,7 @@ void rt2800_vco_calibration(struct rt2x00_dev *rt2x00dev) + case RF3070: + case RF3290: + case RF5360: ++ case RF5362: + case RF5370: + case RF5372: + case RF5390: +@@ -7073,6 +7076,7 @@ static int rt2800_init_eeprom(struct rt2x00_dev *rt2x00dev) + case RF3320: + case RF3322: + case RF5360: ++ case RF5362: + case RF5370: + case RF5372: + case RF5390: +@@ -7529,6 +7533,7 @@ static int rt2800_probe_hw_mode(struct rt2x00_dev *rt2x00dev) + case RF3320: + case RF3322: + case RF5360: ++ case RF5362: + case RF5370: + case RF5372: + case RF5390: +@@ -7658,6 +7663,7 @@ static int rt2800_probe_hw_mode(struct rt2x00_dev *rt2x00dev) + case RF3070: + case RF3290: + case RF5360: ++ case RF5362: + case RF5370: + case RF5372: + case RF5390: +diff --git a/drivers/net/wireless/rt2x00/rt2800usb.c b/drivers/net/wireless/rt2x00/rt2800usb.c +index caddc1b427a9..57d3967de32f 100644 +--- a/drivers/net/wireless/rt2x00/rt2800usb.c ++++ b/drivers/net/wireless/rt2x00/rt2800usb.c +@@ -1062,6 +1062,7 @@ static struct usb_device_id rt2800usb_device_table[] = { + /* Ovislink */ + { USB_DEVICE(0x1b75, 0x3071) }, + { USB_DEVICE(0x1b75, 0x3072) }, ++ { USB_DEVICE(0x1b75, 0xa200) }, + /* Para */ + { USB_DEVICE(0x20b8, 0x8888) }, + /* Pegatron */ +@@ -1235,6 +1236,8 @@ static struct 
usb_device_id rt2800usb_device_table[] = { + /* Arcadyan */ + { USB_DEVICE(0x043e, 0x7a12) }, + { USB_DEVICE(0x043e, 0x7a32) }, ++ /* ASUS */ ++ { USB_DEVICE(0x0b05, 0x17e8) }, + /* Azurewave */ + { USB_DEVICE(0x13d3, 0x3329) }, + { USB_DEVICE(0x13d3, 0x3365) }, +@@ -1271,6 +1274,7 @@ static struct usb_device_id rt2800usb_device_table[] = { + { USB_DEVICE(0x057c, 0x8501) }, + /* Buffalo */ + { USB_DEVICE(0x0411, 0x0241) }, ++ { USB_DEVICE(0x0411, 0x0253) }, + /* D-Link */ + { USB_DEVICE(0x2001, 0x3c1a) }, + { USB_DEVICE(0x2001, 0x3c21) }, +@@ -1361,6 +1365,7 @@ static struct usb_device_id rt2800usb_device_table[] = { + { USB_DEVICE(0x0df6, 0x0053) }, + { USB_DEVICE(0x0df6, 0x0069) }, + { USB_DEVICE(0x0df6, 0x006f) }, ++ { USB_DEVICE(0x0df6, 0x0078) }, + /* SMC */ + { USB_DEVICE(0x083a, 0xa512) }, + { USB_DEVICE(0x083a, 0xc522) }, +diff --git a/drivers/of/base.c b/drivers/of/base.c +index 89e888a78899..3935614274eb 100644 +--- a/drivers/of/base.c ++++ b/drivers/of/base.c +@@ -1117,52 +1117,6 @@ int of_property_read_string(struct device_node *np, const char *propname, + EXPORT_SYMBOL_GPL(of_property_read_string); + + /** +- * of_property_read_string_index - Find and read a string from a multiple +- * strings property. +- * @np: device node from which the property value is to be read. +- * @propname: name of the property to be searched. +- * @index: index of the string in the list of strings +- * @out_string: pointer to null terminated return string, modified only if +- * return value is 0. +- * +- * Search for a property in a device tree node and retrieve a null +- * terminated string value (pointer to data, not a copy) in the list of strings +- * contained in that property. +- * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if +- * property does not have a value, and -EILSEQ if the string is not +- * null-terminated within the length of the property data. +- * +- * The out_string pointer is modified only if a valid string can be decoded. +- */ +-int of_property_read_string_index(struct device_node *np, const char *propname, +- int index, const char **output) +-{ +- struct property *prop = of_find_property(np, propname, NULL); +- int i = 0; +- size_t l = 0, total = 0; +- const char *p; +- +- if (!prop) +- return -EINVAL; +- if (!prop->value) +- return -ENODATA; +- if (strnlen(prop->value, prop->length) >= prop->length) +- return -EILSEQ; +- +- p = prop->value; +- +- for (i = 0; total < prop->length; total += l, p += l) { +- l = strlen(p) + 1; +- if (i++ == index) { +- *output = p; +- return 0; +- } +- } +- return -ENODATA; +-} +-EXPORT_SYMBOL_GPL(of_property_read_string_index); +- +-/** + * of_property_match_string() - Find string in a list and return index + * @np: pointer to node containing string list property + * @propname: string list property name +@@ -1188,7 +1142,7 @@ int of_property_match_string(struct device_node *np, const char *propname, + end = p + prop->length; + + for (i = 0; p < end; i++, p += l) { +- l = strlen(p) + 1; ++ l = strnlen(p, end - p) + 1; + if (p + l > end) + return -EILSEQ; + pr_debug("comparing %s with %s\n", string, p); +@@ -1200,39 +1154,41 @@ int of_property_match_string(struct device_node *np, const char *propname, + EXPORT_SYMBOL_GPL(of_property_match_string); + + /** +- * of_property_count_strings - Find and return the number of strings from a +- * multiple strings property. ++ * of_property_read_string_util() - Utility helper for parsing string properties + * @np: device node from which the property value is to be read. 
+ * @propname: name of the property to be searched. ++ * @out_strs: output array of string pointers. ++ * @sz: number of array elements to read. ++ * @skip: Number of strings to skip over at beginning of list. + * +- * Search for a property in a device tree node and retrieve the number of null +- * terminated string contain in it. Returns the number of strings on +- * success, -EINVAL if the property does not exist, -ENODATA if property +- * does not have a value, and -EILSEQ if the string is not null-terminated +- * within the length of the property data. ++ * Don't call this function directly. It is a utility helper for the ++ * of_property_read_string*() family of functions. + */ +-int of_property_count_strings(struct device_node *np, const char *propname) ++int of_property_read_string_helper(struct device_node *np, const char *propname, ++ const char **out_strs, size_t sz, int skip) + { + struct property *prop = of_find_property(np, propname, NULL); +- int i = 0; +- size_t l = 0, total = 0; +- const char *p; ++ int l = 0, i = 0; ++ const char *p, *end; + + if (!prop) + return -EINVAL; + if (!prop->value) + return -ENODATA; +- if (strnlen(prop->value, prop->length) >= prop->length) +- return -EILSEQ; +- + p = prop->value; ++ end = p + prop->length; + +- for (i = 0; total < prop->length; total += l, p += l, i++) +- l = strlen(p) + 1; +- +- return i; ++ for (i = 0; p < end && (!out_strs || i < skip + sz); i++, p += l) { ++ l = strnlen(p, end - p) + 1; ++ if (p + l > end) ++ return -EILSEQ; ++ if (out_strs && i >= skip) ++ *out_strs++ = p; ++ } ++ i -= skip; ++ return i <= 0 ? -ENODATA : i; + } +-EXPORT_SYMBOL_GPL(of_property_count_strings); ++EXPORT_SYMBOL_GPL(of_property_read_string_helper); + + void of_print_phandle_args(const char *msg, const struct of_phandle_args *args) + { +diff --git a/drivers/of/selftest.c b/drivers/of/selftest.c +index 6643d1920985..70c61d75b75e 100644 +--- a/drivers/of/selftest.c ++++ b/drivers/of/selftest.c +@@ -132,8 +132,9 @@ static void __init of_selftest_parse_phandle_with_args(void) + selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc); + } + +-static void __init of_selftest_property_match_string(void) ++static void __init of_selftest_property_string(void) + { ++ const char *strings[4]; + struct device_node *np; + int rc; + +@@ -150,13 +151,66 @@ static void __init of_selftest_property_match_string(void) + rc = of_property_match_string(np, "phandle-list-names", "third"); + selftest(rc == 2, "third expected:0 got:%i\n", rc); + rc = of_property_match_string(np, "phandle-list-names", "fourth"); +- selftest(rc == -ENODATA, "unmatched string; rc=%i", rc); ++ selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc); + rc = of_property_match_string(np, "missing-property", "blah"); +- selftest(rc == -EINVAL, "missing property; rc=%i", rc); ++ selftest(rc == -EINVAL, "missing property; rc=%i\n", rc); + rc = of_property_match_string(np, "empty-property", "blah"); +- selftest(rc == -ENODATA, "empty property; rc=%i", rc); ++ selftest(rc == -ENODATA, "empty property; rc=%i\n", rc); + rc = of_property_match_string(np, "unterminated-string", "blah"); +- selftest(rc == -EILSEQ, "unterminated string; rc=%i", rc); ++ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc); ++ ++ /* of_property_count_strings() tests */ ++ rc = of_property_count_strings(np, "string-property"); ++ selftest(rc == 1, "Incorrect string count; rc=%i\n", rc); ++ rc = of_property_count_strings(np, "phandle-list-names"); ++ selftest(rc == 3, "Incorrect string count; rc=%i\n", rc); ++ rc 
= of_property_count_strings(np, "unterminated-string"); ++ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc); ++ rc = of_property_count_strings(np, "unterminated-string-list"); ++ selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc); ++ ++ /* of_property_read_string_index() tests */ ++ rc = of_property_read_string_index(np, "string-property", 0, strings); ++ selftest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc); ++ strings[0] = NULL; ++ rc = of_property_read_string_index(np, "string-property", 1, strings); ++ selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc); ++ rc = of_property_read_string_index(np, "phandle-list-names", 0, strings); ++ selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc); ++ rc = of_property_read_string_index(np, "phandle-list-names", 1, strings); ++ selftest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc); ++ rc = of_property_read_string_index(np, "phandle-list-names", 2, strings); ++ selftest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc); ++ strings[0] = NULL; ++ rc = of_property_read_string_index(np, "phandle-list-names", 3, strings); ++ selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc); ++ strings[0] = NULL; ++ rc = of_property_read_string_index(np, "unterminated-string", 0, strings); ++ selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc); ++ rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings); ++ selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc); ++ strings[0] = NULL; ++ rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */ ++ selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc); ++ strings[1] = NULL; ++ ++ /* of_property_read_string_array() tests */ ++ rc = of_property_read_string_array(np, "string-property", strings, 4); ++ selftest(rc == 1, "Incorrect string count; rc=%i\n", rc); ++ rc = of_property_read_string_array(np, "phandle-list-names", strings, 4); ++ selftest(rc == 3, "Incorrect string count; rc=%i\n", rc); ++ rc = of_property_read_string_array(np, "unterminated-string", strings, 4); ++ selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc); ++ /* -- An incorrectly formed string should cause a failure */ ++ rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4); ++ selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc); ++ /* -- parsing the correctly formed strings should still work: */ ++ strings[2] = NULL; ++ rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2); ++ selftest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc); ++ strings[1] = NULL; ++ rc = of_property_read_string_array(np, "phandle-list-names", strings, 1); ++ selftest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]); + } + + static void __init of_selftest_parse_interrupts(void) +@@ -379,7 +433,7 @@ static int __init of_selftest(void) + + pr_info("start of selftest - you will see error messages\n"); + of_selftest_parse_phandle_with_args(); +- of_selftest_property_match_string(); ++ 
of_selftest_property_string(); + of_selftest_parse_interrupts(); + of_selftest_parse_interrupts_extended(); + of_selftest_match_node(); +diff --git a/drivers/of/testcase-data/tests-phandle.dtsi b/drivers/of/testcase-data/tests-phandle.dtsi +index 0007d3cd7dc2..eedee37d70d7 100644 +--- a/drivers/of/testcase-data/tests-phandle.dtsi ++++ b/drivers/of/testcase-data/tests-phandle.dtsi +@@ -32,7 +32,9 @@ + phandle-list-bad-args = <&provider2 1 0>, + <&provider3 0>; + empty-property; ++ string-property = "foobar"; + unterminated-string = [40 41 42 43]; ++ unterminated-string-list = "first", "second", [40 41 42 43]; + }; + }; + }; +diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c +index 39a207abaa10..a943c6c0f206 100644 +--- a/drivers/pci/pci-sysfs.c ++++ b/drivers/pci/pci-sysfs.c +@@ -186,9 +186,9 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *attr, + } + static DEVICE_ATTR_RO(modalias); + +-static ssize_t enabled_store(struct device *dev, +- struct device_attribute *attr, const char *buf, +- size_t count) ++static ssize_t enable_store(struct device *dev, ++ struct device_attribute *attr, const char *buf, ++ size_t count) + { + struct pci_dev *pdev = to_pci_dev(dev); + unsigned long val; +@@ -212,15 +212,15 @@ static ssize_t enabled_store(struct device *dev, + return result < 0 ? result : count; + } + +-static ssize_t enabled_show(struct device *dev, +- struct device_attribute *attr, char *buf) ++static ssize_t enable_show(struct device *dev, ++ struct device_attribute *attr, char *buf) + { + struct pci_dev *pdev; + + pdev = to_pci_dev (dev); + return sprintf (buf, "%u\n", atomic_read(&pdev->enable_cnt)); + } +-static DEVICE_ATTR_RW(enabled); ++static DEVICE_ATTR_RW(enable); + + #ifdef CONFIG_NUMA + static ssize_t +@@ -526,7 +526,7 @@ static struct attribute *pci_dev_attrs[] = { + #endif + &dev_attr_dma_mask_bits.attr, + &dev_attr_consistent_dma_mask_bits.attr, +- &dev_attr_enabled.attr, ++ &dev_attr_enable.attr, + &dev_attr_broken_parity_status.attr, + &dev_attr_msi_bus.attr, + #if defined(CONFIG_PM_RUNTIME) && defined(CONFIG_ACPI) +diff --git a/drivers/pinctrl/pinctrl-baytrail.c b/drivers/pinctrl/pinctrl-baytrail.c +index 665b96bc0c3a..eb9f1906952a 100644 +--- a/drivers/pinctrl/pinctrl-baytrail.c ++++ b/drivers/pinctrl/pinctrl-baytrail.c +@@ -263,7 +263,7 @@ static int byt_gpio_direction_output(struct gpio_chip *chip, + spin_lock_irqsave(&vg->lock, flags); + + reg_val = readl(reg) | BYT_DIR_MASK; +- reg_val &= ~BYT_OUTPUT_EN; ++ reg_val &= ~(BYT_OUTPUT_EN | BYT_INPUT_EN); + + if (value) + writel(reg_val | BYT_LEVEL, reg); +diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c +index c91f69b39db4..dcfcaea76048 100644 +--- a/drivers/platform/x86/acer-wmi.c ++++ b/drivers/platform/x86/acer-wmi.c +@@ -570,6 +570,17 @@ static const struct dmi_system_id video_vendor_dmi_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5750"), + }, + }, ++ { ++ /* ++ * Note no video_set_backlight_video_vendor, we must use the ++ * acer interface, as there is no native backlight interface. 
++ */ ++ .ident = "Acer KAV80", ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "KAV80"), ++ }, ++ }, + {} + }; + +diff --git a/drivers/power/charger-manager.c b/drivers/power/charger-manager.c +index 9e4dab46eefd..ef1f4c928431 100644 +--- a/drivers/power/charger-manager.c ++++ b/drivers/power/charger-manager.c +@@ -1720,6 +1720,11 @@ static int charger_manager_probe(struct platform_device *pdev) + return -EINVAL; + } + ++ if (!desc->psy_fuel_gauge) { ++ dev_err(&pdev->dev, "No fuel gauge power supply defined\n"); ++ return -EINVAL; ++ } ++ + /* Counting index only */ + while (desc->psy_charger_stat[i]) + i++; +diff --git a/drivers/regulator/max77693.c b/drivers/regulator/max77693.c +index 5fb899f461d0..24c926bfe6d4 100644 +--- a/drivers/regulator/max77693.c ++++ b/drivers/regulator/max77693.c +@@ -232,7 +232,7 @@ static int max77693_pmic_probe(struct platform_device *pdev) + struct max77693_pmic_dev *max77693_pmic; + struct max77693_regulator_data *rdata = NULL; + int num_rdata, i; +- struct regulator_config config; ++ struct regulator_config config = { }; + + num_rdata = max77693_pmic_init_rdata(&pdev->dev, &rdata); + if (!rdata || num_rdata <= 0) { +diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c +index 788c4fe2b0c9..9d81f7693f99 100644 +--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c ++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c +@@ -707,7 +707,16 @@ static void tcm_qla2xxx_clear_nacl_from_fcport_map(struct qla_tgt_sess *sess) + pr_debug("fc_rport domain: port_id 0x%06x\n", nacl->nport_id); + + node = btree_remove32(&lport->lport_fcport_map, nacl->nport_id); +- WARN_ON(node && (node != se_nacl)); ++ if (WARN_ON(node && (node != se_nacl))) { ++ /* ++ * The nacl no longer matches what we think it should be. ++ * Most likely a new dynamic acl has been added while ++ * someone dropped the hardware lock. It clearly is a ++ * bug elsewhere, but this bit can't make things worse. 
++ */ ++ btree_insert32(&lport->lport_fcport_map, nacl->nport_id, ++ node, GFP_ATOMIC); ++ } + + pr_debug("Removed from fcport_map: %p for WWNN: 0x%016LX, port_id: 0x%06x\n", + se_nacl, nacl->nport_wwnn, nacl->nport_id); +diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c +index a25392065d9b..a5db6f930fa3 100644 +--- a/drivers/spi/spi-fsl-dspi.c ++++ b/drivers/spi/spi-fsl-dspi.c +@@ -45,7 +45,7 @@ + + #define SPI_TCR 0x08 + +-#define SPI_CTAR(x) (0x0c + (x * 4)) ++#define SPI_CTAR(x) (0x0c + (((x) & 0x3) * 4)) + #define SPI_CTAR_FMSZ(x) (((x) & 0x0000000f) << 27) + #define SPI_CTAR_CPOL(x) ((x) << 26) + #define SPI_CTAR_CPHA(x) ((x) << 25) +@@ -69,7 +69,7 @@ + + #define SPI_PUSHR 0x34 + #define SPI_PUSHR_CONT (1 << 31) +-#define SPI_PUSHR_CTAS(x) (((x) & 0x00000007) << 28) ++#define SPI_PUSHR_CTAS(x) (((x) & 0x00000003) << 28) + #define SPI_PUSHR_EOQ (1 << 27) + #define SPI_PUSHR_CTCNT (1 << 26) + #define SPI_PUSHR_PCS(x) (((1 << x) & 0x0000003f) << 16) +diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c +index 2789b452e711..971855e859c7 100644 +--- a/drivers/spi/spi-pl022.c ++++ b/drivers/spi/spi-pl022.c +@@ -1075,7 +1075,7 @@ err_rxdesc: + pl022->sgt_tx.nents, DMA_TO_DEVICE); + err_tx_sgmap: + dma_unmap_sg(rxchan->device->dev, pl022->sgt_rx.sgl, +- pl022->sgt_tx.nents, DMA_FROM_DEVICE); ++ pl022->sgt_rx.nents, DMA_FROM_DEVICE); + err_rx_sgmap: + sg_free_table(&pl022->sgt_tx); + err_alloc_tx_sg: +diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c +index ced9ecffa163..7ab3ccb592eb 100644 +--- a/drivers/spi/spi-pxa2xx.c ++++ b/drivers/spi/spi-pxa2xx.c +@@ -1280,7 +1280,9 @@ static int pxa2xx_spi_suspend(struct device *dev) + if (status != 0) + return status; + write_SSCR0(0, drv_data->ioaddr); +- clk_disable_unprepare(ssp->clk); ++ ++ if (!pm_runtime_suspended(dev)) ++ clk_disable_unprepare(ssp->clk); + + return 0; + } +@@ -1294,7 +1296,8 @@ static int pxa2xx_spi_resume(struct device *dev) + pxa2xx_spi_dma_resume(drv_data); + + /* Enable the SSP clock */ +- clk_prepare_enable(ssp->clk); ++ if (!pm_runtime_suspended(dev)) ++ clk_prepare_enable(ssp->clk); + + /* Restore LPSS private register bits */ + lpss_ssp_setup(drv_data); +diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c +index 2b96665da8a2..97d4b3fb7e95 100644 +--- a/drivers/staging/iio/impedance-analyzer/ad5933.c ++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c +@@ -115,6 +115,7 @@ static const struct iio_chan_spec ad5933_channels[] = { + .channel = 0, + .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), + .address = AD5933_REG_TEMP_DATA, ++ .scan_index = -1, + .scan_type = { + .sign = 's', + .realbits = 14, +@@ -124,9 +125,7 @@ static const struct iio_chan_spec ad5933_channels[] = { + .type = IIO_VOLTAGE, + .indexed = 1, + .channel = 0, +- .extend_name = "real_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | +- BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "real", + .address = AD5933_REG_REAL_DATA, + .scan_index = 0, + .scan_type = { +@@ -138,9 +137,7 @@ static const struct iio_chan_spec ad5933_channels[] = { + .type = IIO_VOLTAGE, + .indexed = 1, + .channel = 0, +- .extend_name = "imag_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | +- BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "imag", + .address = AD5933_REG_IMAG_DATA, + .scan_index = 1, + .scan_type = { +@@ -748,14 +745,14 @@ static int ad5933_probe(struct i2c_client *client, + indio_dev->name = id->name; + indio_dev->modes = INDIO_DIRECT_MODE; + 
indio_dev->channels = ad5933_channels; +- indio_dev->num_channels = 1; /* only register temp0_input */ ++ indio_dev->num_channels = ARRAY_SIZE(ad5933_channels); + + ret = ad5933_register_ring_funcs_and_init(indio_dev); + if (ret) + goto error_disable_reg; + +- /* skip temp0_input, register in0_(real|imag)_raw */ +- ret = iio_buffer_register(indio_dev, &ad5933_channels[1], 2); ++ ret = iio_buffer_register(indio_dev, ad5933_channels, ++ ARRAY_SIZE(ad5933_channels)); + if (ret) + goto error_unreg_ring; + +diff --git a/drivers/staging/iio/meter/ade7758.h b/drivers/staging/iio/meter/ade7758.h +index 07318203a836..e8c98cf57070 100644 +--- a/drivers/staging/iio/meter/ade7758.h ++++ b/drivers/staging/iio/meter/ade7758.h +@@ -119,7 +119,6 @@ struct ade7758_state { + u8 *tx; + u8 *rx; + struct mutex buf_lock; +- const struct iio_chan_spec *ade7758_ring_channels; + struct spi_transfer ring_xfer[4]; + struct spi_message ring_msg; + /* +diff --git a/drivers/staging/iio/meter/ade7758_core.c b/drivers/staging/iio/meter/ade7758_core.c +index cba183e24838..94d9914a602c 100644 +--- a/drivers/staging/iio/meter/ade7758_core.c ++++ b/drivers/staging/iio/meter/ade7758_core.c +@@ -630,9 +630,6 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_VOLTAGE, + .indexed = 1, + .channel = 0, +- .extend_name = "raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), + .address = AD7758_WT(AD7758_PHASE_A, AD7758_VOLTAGE), + .scan_index = 0, + .scan_type = { +@@ -644,9 +641,6 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_CURRENT, + .indexed = 1, + .channel = 0, +- .extend_name = "raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), + .address = AD7758_WT(AD7758_PHASE_A, AD7758_CURRENT), + .scan_index = 1, + .scan_type = { +@@ -658,9 +652,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 0, +- .extend_name = "apparent_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "apparent", + .address = AD7758_WT(AD7758_PHASE_A, AD7758_APP_PWR), + .scan_index = 2, + .scan_type = { +@@ -672,9 +664,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 0, +- .extend_name = "active_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "active", + .address = AD7758_WT(AD7758_PHASE_A, AD7758_ACT_PWR), + .scan_index = 3, + .scan_type = { +@@ -686,9 +676,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 0, +- .extend_name = "reactive_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "reactive", + .address = AD7758_WT(AD7758_PHASE_A, AD7758_REACT_PWR), + .scan_index = 4, + .scan_type = { +@@ -700,9 +688,6 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_VOLTAGE, + .indexed = 1, + .channel = 1, +- .extend_name = "raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), + .address = AD7758_WT(AD7758_PHASE_B, AD7758_VOLTAGE), + .scan_index = 5, + .scan_type = { +@@ -714,9 +699,6 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_CURRENT, + .indexed = 1, + .channel = 1, +- 
.extend_name = "raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), + .address = AD7758_WT(AD7758_PHASE_B, AD7758_CURRENT), + .scan_index = 6, + .scan_type = { +@@ -728,9 +710,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 1, +- .extend_name = "apparent_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "apparent", + .address = AD7758_WT(AD7758_PHASE_B, AD7758_APP_PWR), + .scan_index = 7, + .scan_type = { +@@ -742,9 +722,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 1, +- .extend_name = "active_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "active", + .address = AD7758_WT(AD7758_PHASE_B, AD7758_ACT_PWR), + .scan_index = 8, + .scan_type = { +@@ -756,9 +734,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 1, +- .extend_name = "reactive_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "reactive", + .address = AD7758_WT(AD7758_PHASE_B, AD7758_REACT_PWR), + .scan_index = 9, + .scan_type = { +@@ -770,9 +746,6 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_VOLTAGE, + .indexed = 1, + .channel = 2, +- .extend_name = "raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), + .address = AD7758_WT(AD7758_PHASE_C, AD7758_VOLTAGE), + .scan_index = 10, + .scan_type = { +@@ -784,9 +757,6 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_CURRENT, + .indexed = 1, + .channel = 2, +- .extend_name = "raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), + .address = AD7758_WT(AD7758_PHASE_C, AD7758_CURRENT), + .scan_index = 11, + .scan_type = { +@@ -798,9 +768,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 2, +- .extend_name = "apparent_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "apparent", + .address = AD7758_WT(AD7758_PHASE_C, AD7758_APP_PWR), + .scan_index = 12, + .scan_type = { +@@ -812,9 +780,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 2, +- .extend_name = "active_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "active", + .address = AD7758_WT(AD7758_PHASE_C, AD7758_ACT_PWR), + .scan_index = 13, + .scan_type = { +@@ -826,9 +792,7 @@ static const struct iio_chan_spec ade7758_channels[] = { + .type = IIO_POWER, + .indexed = 1, + .channel = 2, +- .extend_name = "reactive_raw", +- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), +- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), ++ .extend_name = "reactive", + .address = AD7758_WT(AD7758_PHASE_C, AD7758_REACT_PWR), + .scan_index = 14, + .scan_type = { +@@ -869,13 +833,14 @@ static int ade7758_probe(struct spi_device *spi) + goto error_free_rx; + } + st->us = spi; +- st->ade7758_ring_channels = &ade7758_channels[0]; + mutex_init(&st->buf_lock); + + indio_dev->name = spi->dev.driver->name; + 
indio_dev->dev.parent = &spi->dev; + indio_dev->info = &ade7758_info; + indio_dev->modes = INDIO_DIRECT_MODE; ++ indio_dev->channels = ade7758_channels; ++ indio_dev->num_channels = ARRAY_SIZE(ade7758_channels); + + ret = ade7758_configure_ring(indio_dev); + if (ret) +diff --git a/drivers/staging/iio/meter/ade7758_ring.c b/drivers/staging/iio/meter/ade7758_ring.c +index c0accf8cce93..6e9006490742 100644 +--- a/drivers/staging/iio/meter/ade7758_ring.c ++++ b/drivers/staging/iio/meter/ade7758_ring.c +@@ -85,17 +85,16 @@ static irqreturn_t ade7758_trigger_handler(int irq, void *p) + **/ + static int ade7758_ring_preenable(struct iio_dev *indio_dev) + { +- struct ade7758_state *st = iio_priv(indio_dev); + unsigned channel; + +- if (!bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength)) ++ if (bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength)) + return -EINVAL; + + channel = find_first_bit(indio_dev->active_scan_mask, + indio_dev->masklength); + + ade7758_write_waveform_type(&indio_dev->dev, +- st->ade7758_ring_channels[channel].address); ++ indio_dev->channels[channel].address); + + return 0; + } +diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c +index 6ea95d216eb8..38b4be24d13f 100644 +--- a/drivers/target/target_core_device.c ++++ b/drivers/target/target_core_device.c +@@ -1409,7 +1409,8 @@ int core_dev_add_initiator_node_lun_acl( + * Check to see if there are any existing persistent reservation APTPL + * pre-registrations that need to be enabled for this LUN ACL.. + */ +- core_scsi3_check_aptpl_registration(lun->lun_se_dev, tpg, lun, lacl); ++ core_scsi3_check_aptpl_registration(lun->lun_se_dev, tpg, lun, nacl, ++ lacl->mapped_lun); + return 0; + } + +diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c +index 3013287a2aaa..1205dbd4f83d 100644 +--- a/drivers/target/target_core_pr.c ++++ b/drivers/target/target_core_pr.c +@@ -944,10 +944,10 @@ int core_scsi3_check_aptpl_registration( + struct se_device *dev, + struct se_portal_group *tpg, + struct se_lun *lun, +- struct se_lun_acl *lun_acl) ++ struct se_node_acl *nacl, ++ u32 mapped_lun) + { +- struct se_node_acl *nacl = lun_acl->se_lun_nacl; +- struct se_dev_entry *deve = nacl->device_list[lun_acl->mapped_lun]; ++ struct se_dev_entry *deve = nacl->device_list[mapped_lun]; + + if (dev->dev_reservation_flags & DRF_SPC2_RESERVATIONS) + return 0; +diff --git a/drivers/target/target_core_pr.h b/drivers/target/target_core_pr.h +index 2ee2936fa0bd..749fd7bb7510 100644 +--- a/drivers/target/target_core_pr.h ++++ b/drivers/target/target_core_pr.h +@@ -60,7 +60,7 @@ extern int core_scsi3_alloc_aptpl_registration( + unsigned char *, u16, u32, int, int, u8); + extern int core_scsi3_check_aptpl_registration(struct se_device *, + struct se_portal_group *, struct se_lun *, +- struct se_lun_acl *); ++ struct se_node_acl *, u32); + extern void core_scsi3_free_pr_reg_from_nacl(struct se_device *, + struct se_node_acl *); + extern void core_scsi3_free_all_registrations(struct se_device *); +diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c +index c036595b17cf..fb8a1a12dda9 100644 +--- a/drivers/target/target_core_tpg.c ++++ b/drivers/target/target_core_tpg.c +@@ -40,6 +40,7 @@ + #include + + #include "target_core_internal.h" ++#include "target_core_pr.h" + + extern struct se_device *g_lun0_dev; + +@@ -166,6 +167,13 @@ void core_tpg_add_node_to_devs( + + core_enable_device_list_for_node(lun, NULL, lun->unpacked_lun, + lun_access, acl, tpg); 
++ /* ++ * Check to see if there are any existing persistent reservation ++ * APTPL pre-registrations that need to be enabled for this dynamic ++ * LUN ACL now.. ++ */ ++ core_scsi3_check_aptpl_registration(dev, tpg, lun, acl, ++ lun->unpacked_lun); + spin_lock(&tpg->tpg_lun_lock); + } + spin_unlock(&tpg->tpg_lun_lock); +diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c +index 24f527977ddb..9232c7738ed1 100644 +--- a/drivers/target/target_core_transport.c ++++ b/drivers/target/target_core_transport.c +@@ -1855,8 +1855,7 @@ static void transport_complete_qf(struct se_cmd *cmd) + if (cmd->se_cmd_flags & SCF_TRANSPORT_TASK_SENSE) { + trace_target_cmd_complete(cmd); + ret = cmd->se_tfo->queue_status(cmd); +- if (ret) +- goto out; ++ goto out; + } + + switch (cmd->data_direction) { +diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c +index 25b8f6868788..27b5554e20d9 100644 +--- a/drivers/tty/serial/serial_core.c ++++ b/drivers/tty/serial/serial_core.c +@@ -353,7 +353,7 @@ uart_get_baud_rate(struct uart_port *port, struct ktermios *termios, + * The spd_hi, spd_vhi, spd_shi, spd_warp kludge... + * Die! Die! Die! + */ +- if (baud == 38400) ++ if (try == 0 && baud == 38400) + baud = altbaud; + + /* +diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c +index d3448a90f0f9..25d07412e08e 100644 +--- a/drivers/tty/tty_io.c ++++ b/drivers/tty/tty_io.c +@@ -1701,6 +1701,7 @@ int tty_release(struct inode *inode, struct file *filp) + int pty_master, tty_closing, o_tty_closing, do_sleep; + int idx; + char buf[64]; ++ long timeout = 0; + + if (tty_paranoia_check(tty, inode, __func__)) + return 0; +@@ -1785,7 +1786,11 @@ int tty_release(struct inode *inode, struct file *filp) + __func__, tty_name(tty, buf)); + tty_unlock_pair(tty, o_tty); + mutex_unlock(&tty_mutex); +- schedule(); ++ schedule_timeout_killable(timeout); ++ if (timeout < 120 * HZ) ++ timeout = 2 * timeout + 1; ++ else ++ timeout = MAX_SCHEDULE_TIMEOUT; + } + + /* +diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c +index eabccd45f4e8..331f06a91cc3 100644 +--- a/drivers/usb/class/cdc-acm.c ++++ b/drivers/usb/class/cdc-acm.c +@@ -965,11 +965,12 @@ static void acm_tty_set_termios(struct tty_struct *tty, + /* FIXME: Needs to clear unsupported bits in the termios */ + acm->clocal = ((termios->c_cflag & CLOCAL) != 0); + +- if (!newline.dwDTERate) { ++ if (C_BAUD(tty) == B0) { + newline.dwDTERate = acm->line.dwDTERate; + newctrl &= ~ACM_CTRL_DTR; +- } else ++ } else if (termios_old && (termios_old->c_cflag & CBAUD) == B0) { + newctrl |= ACM_CTRL_DTR; ++ } + + if (newctrl != acm->ctrlout) + acm_set_control(acm, acm->ctrlout = newctrl); +@@ -1672,6 +1673,7 @@ static const struct usb_device_id acm_ids[] = { + { USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */ + .driver_info = NO_UNION_NORMAL, /* has no union descriptor */ + }, ++ { USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */ + { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */ + }, + /* Motorola H24 HSPA module: */ +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c +index 2518c3250750..ef6ec13b6ae5 100644 +--- a/drivers/usb/core/hcd.c ++++ b/drivers/usb/core/hcd.c +@@ -2057,6 +2057,8 @@ int usb_alloc_streams(struct usb_interface *interface, + return -EINVAL; + if (dev->speed != USB_SPEED_SUPER) + return -EINVAL; ++ if (dev->state < USB_STATE_CONFIGURED) ++ return -ENODEV; + + /* Streams only apply to bulk endpoints. 
*/ + for (i = 0; i < num_eps; i++) +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 445d62a4316a..d2bd9d7c8f4b 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -4378,6 +4378,9 @@ check_highspeed (struct usb_hub *hub, struct usb_device *udev, int port1) + struct usb_qualifier_descriptor *qual; + int status; + ++ if (udev->quirks & USB_QUIRK_DEVICE_QUALIFIER) ++ return; ++ + qual = kmalloc (sizeof *qual, GFP_KERNEL); + if (qual == NULL) + return; +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index 5144d11d032c..c85459338991 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -93,6 +93,16 @@ static const struct usb_device_id usb_quirk_list[] = { + { USB_DEVICE(0x04e8, 0x6601), .driver_info = + USB_QUIRK_CONFIG_INTF_STRINGS }, + ++ /* Elan Touchscreen */ ++ { USB_DEVICE(0x04f3, 0x0089), .driver_info = ++ USB_QUIRK_DEVICE_QUALIFIER }, ++ ++ { USB_DEVICE(0x04f3, 0x009b), .driver_info = ++ USB_QUIRK_DEVICE_QUALIFIER }, ++ ++ { USB_DEVICE(0x04f3, 0x016f), .driver_info = ++ USB_QUIRK_DEVICE_QUALIFIER }, ++ + /* Roland SC-8820 */ + { USB_DEVICE(0x0582, 0x0007), .driver_info = USB_QUIRK_RESET_RESUME }, + +diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c +index 21a352079bc2..0985ff715c0c 100644 +--- a/drivers/usb/dwc3/ep0.c ++++ b/drivers/usb/dwc3/ep0.c +@@ -251,7 +251,7 @@ static void dwc3_ep0_stall_and_restart(struct dwc3 *dwc) + + /* stall is always issued on EP0 */ + dep = dwc->eps[0]; +- __dwc3_gadget_ep_set_halt(dep, 1); ++ __dwc3_gadget_ep_set_halt(dep, 1, false); + dep->flags = DWC3_EP_ENABLED; + dwc->delayed_status = false; + +@@ -461,7 +461,7 @@ static int dwc3_ep0_handle_feature(struct dwc3 *dwc, + return -EINVAL; + if (set == 0 && (dep->flags & DWC3_EP_WEDGE)) + break; +- ret = __dwc3_gadget_ep_set_halt(dep, set); ++ ret = __dwc3_gadget_ep_set_halt(dep, set, true); + if (ret) + return -EINVAL; + break; +diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c +index 09e9619ae381..d90c70c23adb 100644 +--- a/drivers/usb/dwc3/gadget.c ++++ b/drivers/usb/dwc3/gadget.c +@@ -532,12 +532,11 @@ static int __dwc3_gadget_ep_enable(struct dwc3_ep *dep, + if (!usb_endpoint_xfer_isoc(desc)) + return 0; + +- memset(&trb_link, 0, sizeof(trb_link)); +- + /* Link TRB for ISOC. 
The HWO bit is never reset */ + trb_st_hw = &dep->trb_pool[0]; + + trb_link = &dep->trb_pool[DWC3_TRB_NUM - 1]; ++ memset(trb_link, 0, sizeof(*trb_link)); + + trb_link->bpl = lower_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw)); + trb_link->bph = upper_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw)); +@@ -588,7 +587,7 @@ static int __dwc3_gadget_ep_disable(struct dwc3_ep *dep) + + /* make sure HW endpoint isn't stalled */ + if (dep->flags & DWC3_EP_STALL) +- __dwc3_gadget_ep_set_halt(dep, 0); ++ __dwc3_gadget_ep_set_halt(dep, 0, false); + + reg = dwc3_readl(dwc->regs, DWC3_DALEPENA); + reg &= ~DWC3_DALEPENA_EP(dep->number); +@@ -1186,7 +1185,7 @@ out0: + return ret; + } + +-int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value) ++int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol) + { + struct dwc3_gadget_ep_cmd_params params; + struct dwc3 *dwc = dep->dwc; +@@ -1195,6 +1194,14 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value) + memset(¶ms, 0x00, sizeof(params)); + + if (value) { ++ if (!protocol && ((dep->direction && dep->flags & DWC3_EP_BUSY) || ++ (!list_empty(&dep->req_queued) || ++ !list_empty(&dep->request_list)))) { ++ dev_dbg(dwc->dev, "%s: pending request, cannot halt\n", ++ dep->name); ++ return -EAGAIN; ++ } ++ + ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, + DWC3_DEPCMD_SETSTALL, ¶ms); + if (ret) +@@ -1234,7 +1241,7 @@ static int dwc3_gadget_ep_set_halt(struct usb_ep *ep, int value) + goto out; + } + +- ret = __dwc3_gadget_ep_set_halt(dep, value); ++ ret = __dwc3_gadget_ep_set_halt(dep, value, false); + out: + spin_unlock_irqrestore(&dwc->lock, flags); + +@@ -1254,7 +1261,7 @@ static int dwc3_gadget_ep_set_wedge(struct usb_ep *ep) + if (dep->number == 0 || dep->number == 1) + return dwc3_gadget_ep0_set_halt(ep, 1); + else +- return dwc3_gadget_ep_set_halt(ep, 1); ++ return __dwc3_gadget_ep_set_halt(dep, 1, false); + } + + /* -------------------------------------------------------------------------- */ +diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h +index a0ee75b68a80..ac62558231be 100644 +--- a/drivers/usb/dwc3/gadget.h ++++ b/drivers/usb/dwc3/gadget.h +@@ -85,7 +85,7 @@ void dwc3_ep0_out_start(struct dwc3 *dwc); + int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value); + int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request, + gfp_t gfp_flags); +-int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value); ++int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol); + + /** + * dwc3_gadget_ep_get_transfer_index - Gets transfer index from HW +diff --git a/drivers/usb/gadget/f_acm.c b/drivers/usb/gadget/f_acm.c +index ab1065afbbd0..3384486c2884 100644 +--- a/drivers/usb/gadget/f_acm.c ++++ b/drivers/usb/gadget/f_acm.c +@@ -430,11 +430,12 @@ static int acm_set_alt(struct usb_function *f, unsigned intf, unsigned alt) + if (acm->notify->driver_data) { + VDBG(cdev, "reset acm control interface %d\n", intf); + usb_ep_disable(acm->notify); +- } else { +- VDBG(cdev, "init acm ctrl interface %d\n", intf); ++ } ++ ++ if (!acm->notify->desc) + if (config_ep_by_speed(cdev->gadget, f, acm->notify)) + return -EINVAL; +- } ++ + usb_ep_enable(acm->notify); + acm->notify->driver_data = acm; + +diff --git a/drivers/usb/gadget/f_fs.c b/drivers/usb/gadget/f_fs.c +index 5bcf7d001259..afd0a159fe61 100644 +--- a/drivers/usb/gadget/f_fs.c ++++ b/drivers/usb/gadget/f_fs.c +@@ -1995,8 +1995,6 @@ static inline struct f_fs_opts *ffs_do_functionfs_bind(struct usb_function *f, + func->conf = c; + 
func->gadget = c->cdev->gadget; + +- ffs_data_get(func->ffs); +- + /* + * in drivers/usb/gadget/configfs.c:configfs_composite_bind() + * configurations are bound in sequence with list_for_each_entry, +diff --git a/drivers/usb/gadget/udc-core.c b/drivers/usb/gadget/udc-core.c +index 27768a7d986a..9ce0b135c8c8 100644 +--- a/drivers/usb/gadget/udc-core.c ++++ b/drivers/usb/gadget/udc-core.c +@@ -456,6 +456,11 @@ static ssize_t usb_udc_softconn_store(struct device *dev, + { + struct usb_udc *udc = container_of(dev, struct usb_udc, dev); + ++ if (!udc->driver) { ++ dev_err(dev, "soft-connect without a gadget driver\n"); ++ return -EOPNOTSUPP; ++ } ++ + if (sysfs_streq(buf, "connect")) { + usb_gadget_udc_start(udc->gadget, udc->driver); + usb_gadget_connect(udc->gadget); +diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c +index c2d5afc57e22..1d29bbfeb9d5 100644 +--- a/drivers/usb/musb/musb_cppi41.c ++++ b/drivers/usb/musb/musb_cppi41.c +@@ -190,7 +190,8 @@ static enum hrtimer_restart cppi41_recheck_tx_req(struct hrtimer *timer) + } + } + +- if (!list_empty(&controller->early_tx_list)) { ++ if (!list_empty(&controller->early_tx_list) && ++ !hrtimer_is_queued(&controller->early_tx)) { + ret = HRTIMER_RESTART; + hrtimer_forward_now(&controller->early_tx, + ktime_set(0, 150 * NSEC_PER_USEC)); +diff --git a/drivers/usb/musb/musb_dsps.c b/drivers/usb/musb/musb_dsps.c +index 85f5215871de..865243e818b7 100644 +--- a/drivers/usb/musb/musb_dsps.c ++++ b/drivers/usb/musb/musb_dsps.c +@@ -733,7 +733,9 @@ static int dsps_resume(struct device *dev) + dsps_writel(mbase, wrp->mode, glue->context.mode); + dsps_writel(mbase, wrp->tx_mode, glue->context.tx_mode); + dsps_writel(mbase, wrp->rx_mode, glue->context.rx_mode); +- setup_timer(&glue->timer, otg_timer, (unsigned long) musb); ++ if (musb->xceiv->state == OTG_STATE_B_IDLE && ++ musb->port_mode == MUSB_PORT_MODE_DUAL_ROLE) ++ mod_timer(&glue->timer, jiffies + wrp->poll_seconds * HZ); + + return 0; + } +diff --git a/drivers/usb/phy/phy.c b/drivers/usb/phy/phy.c +index 8afa813d690b..0180eef05656 100644 +--- a/drivers/usb/phy/phy.c ++++ b/drivers/usb/phy/phy.c +@@ -229,6 +229,9 @@ struct usb_phy *usb_get_phy_dev(struct device *dev, u8 index) + phy = __usb_find_phy_dev(dev, &phy_bind_list, index); + if (IS_ERR(phy) || !try_module_get(phy->dev->driver->owner)) { + dev_dbg(dev, "unable to find transceiver\n"); ++ if (!IS_ERR(phy)) ++ phy = ERR_PTR(-ENODEV); ++ + goto err0; + } + +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c +index 63b2af2a87c0..3beae723ad3a 100644 +--- a/drivers/usb/serial/cp210x.c ++++ b/drivers/usb/serial/cp210x.c +@@ -155,6 +155,7 @@ static const struct usb_device_id id_table[] = { + { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ + { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ ++ { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */ + { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ + { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */ + { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */ +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index 3614620e09e1..a523adad6380 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -145,6 +145,7 @@ static struct ftdi_sio_quirk ftdi_8u2232c_quirk = { + * /sys/bus/usb-serial/drivers/ftdi_sio/new_id and send a patch or report. 
+ */ + static const struct usb_device_id id_table_combined[] = { ++ { USB_DEVICE(FTDI_VID, FTDI_BRICK_PID) }, + { USB_DEVICE(FTDI_VID, FTDI_ZEITCONTROL_TAGTRACE_MIFARE_PID) }, + { USB_DEVICE(FTDI_VID, FTDI_CTI_MINI_PID) }, + { USB_DEVICE(FTDI_VID, FTDI_CTI_NANO_PID) }, +@@ -674,6 +675,8 @@ static const struct usb_device_id id_table_combined[] = { + { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_5_PID) }, + { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_6_PID) }, + { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_7_PID) }, ++ { USB_DEVICE(XSENS_VID, XSENS_AWINDA_DONGLE_PID) }, ++ { USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) }, + { USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) }, + { USB_DEVICE(XSENS_VID, XSENS_MTW_PID) }, + { USB_DEVICE(FTDI_VID, FTDI_OMNI1509) }, +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h +index 5937b2d242f2..6786b705ccf6 100644 +--- a/drivers/usb/serial/ftdi_sio_ids.h ++++ b/drivers/usb/serial/ftdi_sio_ids.h +@@ -30,6 +30,12 @@ + + /*** third-party PIDs (using FTDI_VID) ***/ + ++/* ++ * Certain versions of the official Windows FTDI driver reprogrammed ++ * counterfeit FTDI devices to PID 0. Support these devices anyway. ++ */ ++#define FTDI_BRICK_PID 0x0000 ++ + #define FTDI_LUMEL_PD12_PID 0x6002 + + /* +@@ -143,8 +149,12 @@ + * Xsens Technologies BV products (http://www.xsens.com). + */ + #define XSENS_VID 0x2639 +-#define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */ ++#define XSENS_AWINDA_STATION_PID 0x0101 ++#define XSENS_AWINDA_DONGLE_PID 0x0102 + #define XSENS_MTW_PID 0x0200 /* Xsens MTw */ ++#define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */ ++ ++/* Xsens devices using FTDI VID */ + #define XSENS_CONVERTER_0_PID 0xD388 /* Xsens USB converter */ + #define XSENS_CONVERTER_1_PID 0xD389 /* Xsens Wireless Receiver */ + #define XSENS_CONVERTER_2_PID 0xD38A +diff --git a/drivers/usb/serial/kobil_sct.c b/drivers/usb/serial/kobil_sct.c +index 618c1c1f227e..5cdb32b37e85 100644 +--- a/drivers/usb/serial/kobil_sct.c ++++ b/drivers/usb/serial/kobil_sct.c +@@ -335,7 +335,8 @@ static int kobil_write(struct tty_struct *tty, struct usb_serial_port *port, + port->interrupt_out_urb->transfer_buffer_length = length; + + priv->cur_pos = priv->cur_pos + length; +- result = usb_submit_urb(port->interrupt_out_urb, GFP_NOIO); ++ result = usb_submit_urb(port->interrupt_out_urb, ++ GFP_ATOMIC); + dev_dbg(&port->dev, "%s - Send write URB returns: %i\n", __func__, result); + todo = priv->filled - priv->cur_pos; + +@@ -350,7 +351,7 @@ static int kobil_write(struct tty_struct *tty, struct usb_serial_port *port, + if (priv->device_type == KOBIL_ADAPTER_B_PRODUCT_ID || + priv->device_type == KOBIL_ADAPTER_K_PRODUCT_ID) { + result = usb_submit_urb(port->interrupt_in_urb, +- GFP_NOIO); ++ GFP_ATOMIC); + dev_dbg(&port->dev, "%s - Send read URB returns: %i\n", __func__, result); + } + } +diff --git a/drivers/usb/serial/opticon.c b/drivers/usb/serial/opticon.c +index 4856fb7e637e..4b7bfb394a32 100644 +--- a/drivers/usb/serial/opticon.c ++++ b/drivers/usb/serial/opticon.c +@@ -215,7 +215,7 @@ static int opticon_write(struct tty_struct *tty, struct usb_serial_port *port, + + /* The connected devices do not have a bulk write endpoint, + * to transmit data to de barcode device the control endpoint is used */ +- dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_NOIO); ++ dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC); + if (!dr) { + count = -ENOMEM; + goto error_no_dr; +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 
e47aabe0c760..8b3484134ab0 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -269,6 +269,7 @@ static void option_instat_callback(struct urb *urb); + #define TELIT_PRODUCT_DE910_DUAL 0x1010 + #define TELIT_PRODUCT_UE910_V2 0x1012 + #define TELIT_PRODUCT_LE920 0x1200 ++#define TELIT_PRODUCT_LE910 0x1201 + + /* ZTE PRODUCTS */ + #define ZTE_VENDOR_ID 0x19d2 +@@ -361,6 +362,7 @@ static void option_instat_callback(struct urb *urb); + + /* Haier products */ + #define HAIER_VENDOR_ID 0x201e ++#define HAIER_PRODUCT_CE81B 0x10f8 + #define HAIER_PRODUCT_CE100 0x2009 + + /* Cinterion (formerly Siemens) products */ +@@ -588,6 +590,11 @@ static const struct option_blacklist_info zte_1255_blacklist = { + .reserved = BIT(3) | BIT(4), + }; + ++static const struct option_blacklist_info telit_le910_blacklist = { ++ .sendsetup = BIT(0), ++ .reserved = BIT(1) | BIT(2), ++}; ++ + static const struct option_blacklist_info telit_le920_blacklist = { + .sendsetup = BIT(0), + .reserved = BIT(1) | BIT(5), +@@ -1137,6 +1144,8 @@ static const struct usb_device_id option_ids[] = { + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) }, ++ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), ++ .driver_info = (kernel_ulong_t)&telit_le910_blacklist }, + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920), + .driver_info = (kernel_ulong_t)&telit_le920_blacklist }, + { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */ +@@ -1612,6 +1621,7 @@ static const struct usb_device_id option_ids[] = { + { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) }, + { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) }, + { USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) }, ++ { USB_DEVICE_AND_INTERFACE_INFO(HAIER_VENDOR_ID, HAIER_PRODUCT_CE81B, 0xff, 0xff, 0xff) }, + /* Pirelli */ + { USB_DEVICE_INTERFACE_CLASS(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_1, 0xff) }, + { USB_DEVICE_INTERFACE_CLASS(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_2, 0xff) }, +diff --git a/drivers/usb/storage/transport.c b/drivers/usb/storage/transport.c +index 22c7d4360fa2..b1d815eb6d0b 100644 +--- a/drivers/usb/storage/transport.c ++++ b/drivers/usb/storage/transport.c +@@ -1118,6 +1118,31 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us) + */ + if (result == USB_STOR_XFER_LONG) + fake_sense = 1; ++ ++ /* ++ * Sometimes a device will mistakenly skip the data phase ++ * and go directly to the status phase without sending a ++ * zero-length packet. If we get a 13-byte response here, ++ * check whether it really is a CSW. 
++ */ ++ if (result == USB_STOR_XFER_SHORT && ++ srb->sc_data_direction == DMA_FROM_DEVICE && ++ transfer_length - scsi_get_resid(srb) == ++ US_BULK_CS_WRAP_LEN) { ++ struct scatterlist *sg = NULL; ++ unsigned int offset = 0; ++ ++ if (usb_stor_access_xfer_buf((unsigned char *) bcs, ++ US_BULK_CS_WRAP_LEN, srb, &sg, ++ &offset, FROM_XFER_BUF) == ++ US_BULK_CS_WRAP_LEN && ++ bcs->Signature == ++ cpu_to_le32(US_BULK_CS_SIGN)) { ++ usb_stor_dbg(us, "Device skipped data phase\n"); ++ scsi_set_resid(srb, transfer_length); ++ goto skipped_data_phase; ++ } ++ } + } + + /* See flow chart on pg 15 of the Bulk Only Transport spec for +@@ -1153,6 +1178,7 @@ int usb_stor_Bulk_transport(struct scsi_cmnd *srb, struct us_data *us) + if (result != USB_STOR_XFER_GOOD) + return USB_STOR_TRANSPORT_ERROR; + ++ skipped_data_phase: + /* check bulk status */ + residue = le32_to_cpu(bcs->Residue); + usb_stor_dbg(us, "Bulk Status S 0x%x T 0x%x R %u Stat 0x%x\n", +diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c +index 61b182bf32a2..dbfe4eecf12e 100644 +--- a/drivers/video/console/bitblit.c ++++ b/drivers/video/console/bitblit.c +@@ -205,7 +205,6 @@ static void bit_putcs(struct vc_data *vc, struct fb_info *info, + static void bit_clear_margins(struct vc_data *vc, struct fb_info *info, + int bottom_only) + { +- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; + unsigned int cw = vc->vc_font.width; + unsigned int ch = vc->vc_font.height; + unsigned int rw = info->var.xres - (vc->vc_cols*cw); +@@ -214,7 +213,7 @@ static void bit_clear_margins(struct vc_data *vc, struct fb_info *info, + unsigned int bs = info->var.yres - bh; + struct fb_fillrect region; + +- region.color = attr_bgcol_ec(bgshift, vc, info); ++ region.color = 0; + region.rop = ROP_COPY; + + if (rw && !bottom_only) { +diff --git a/drivers/video/console/fbcon_ccw.c b/drivers/video/console/fbcon_ccw.c +index 41b32ae23dac..5a3cbf6dff4d 100644 +--- a/drivers/video/console/fbcon_ccw.c ++++ b/drivers/video/console/fbcon_ccw.c +@@ -197,9 +197,8 @@ static void ccw_clear_margins(struct vc_data *vc, struct fb_info *info, + unsigned int bh = info->var.xres - (vc->vc_rows*ch); + unsigned int bs = vc->vc_rows*ch; + struct fb_fillrect region; +- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; + +- region.color = attr_bgcol_ec(bgshift,vc,info); ++ region.color = 0; + region.rop = ROP_COPY; + + if (rw && !bottom_only) { +diff --git a/drivers/video/console/fbcon_cw.c b/drivers/video/console/fbcon_cw.c +index a93670ef7f89..e7ee44db4e98 100644 +--- a/drivers/video/console/fbcon_cw.c ++++ b/drivers/video/console/fbcon_cw.c +@@ -180,9 +180,8 @@ static void cw_clear_margins(struct vc_data *vc, struct fb_info *info, + unsigned int bh = info->var.xres - (vc->vc_rows*ch); + unsigned int rs = info->var.yres - rw; + struct fb_fillrect region; +- int bgshift = (vc->vc_hi_font_mask) ? 13 : 12; + +- region.color = attr_bgcol_ec(bgshift,vc,info); ++ region.color = 0; + region.rop = ROP_COPY; + + if (rw && !bottom_only) { +diff --git a/drivers/video/console/fbcon_ud.c b/drivers/video/console/fbcon_ud.c +index ff0872c0498b..19e3714abfe8 100644 +--- a/drivers/video/console/fbcon_ud.c ++++ b/drivers/video/console/fbcon_ud.c +@@ -227,9 +227,8 @@ static void ud_clear_margins(struct vc_data *vc, struct fb_info *info, + unsigned int rw = info->var.xres - (vc->vc_cols*cw); + unsigned int bh = info->var.yres - (vc->vc_rows*ch); + struct fb_fillrect region; +- int bgshift = (vc->vc_hi_font_mask) ? 
13 : 12; + +- region.color = attr_bgcol_ec(bgshift,vc,info); ++ region.color = 0; + region.rop = ROP_COPY; + + if (rw && !bottom_only) { +diff --git a/drivers/virtio/virtio_pci.c b/drivers/virtio/virtio_pci.c +index a416f9b2a7f6..827b5f8e6297 100644 +--- a/drivers/virtio/virtio_pci.c ++++ b/drivers/virtio/virtio_pci.c +@@ -791,6 +791,7 @@ static int virtio_pci_restore(struct device *dev) + struct pci_dev *pci_dev = to_pci_dev(dev); + struct virtio_pci_device *vp_dev = pci_get_drvdata(pci_dev); + struct virtio_driver *drv; ++ unsigned status = 0; + int ret; + + drv = container_of(vp_dev->vdev.dev.driver, +@@ -801,14 +802,40 @@ static int virtio_pci_restore(struct device *dev) + return ret; + + pci_set_master(pci_dev); ++ /* We always start by resetting the device, in case a previous ++ * driver messed it up. */ ++ vp_reset(&vp_dev->vdev); ++ ++ /* Acknowledge that we've seen the device. */ ++ status |= VIRTIO_CONFIG_S_ACKNOWLEDGE; ++ vp_set_status(&vp_dev->vdev, status); ++ ++ /* Maybe driver failed before freeze. ++ * Restore the failed status, for debugging. */ ++ status |= vp_dev->saved_status & VIRTIO_CONFIG_S_FAILED; ++ vp_set_status(&vp_dev->vdev, status); ++ ++ if (!drv) ++ return 0; ++ ++ /* We have a driver! */ ++ status |= VIRTIO_CONFIG_S_DRIVER; ++ vp_set_status(&vp_dev->vdev, status); ++ + vp_finalize_features(&vp_dev->vdev); + +- if (drv && drv->restore) ++ if (drv->restore) { + ret = drv->restore(&vp_dev->vdev); ++ if (ret) { ++ status |= VIRTIO_CONFIG_S_FAILED; ++ vp_set_status(&vp_dev->vdev, status); ++ return ret; ++ } ++ } + + /* Finally, tell the device we're all set */ +- if (!ret) +- vp_set_status(&vp_dev->vdev, vp_dev->saved_status); ++ status |= VIRTIO_CONFIG_S_DRIVER_OK; ++ vp_set_status(&vp_dev->vdev, status); + + return ret; + } +diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c +index ca248b0687f4..196b089b0052 100644 +--- a/fs/btrfs/file-item.c ++++ b/fs/btrfs/file-item.c +@@ -423,7 +423,7 @@ int btrfs_lookup_csums_range(struct btrfs_root *root, u64 start, u64 end, + ret = 0; + fail: + while (ret < 0 && !list_empty(&tmplist)) { +- sums = list_entry(&tmplist, struct btrfs_ordered_sum, list); ++ sums = list_entry(tmplist.next, struct btrfs_ordered_sum, list); + list_del(&sums->list); + kfree(sums); + } +diff --git a/fs/buffer.c b/fs/buffer.c +index 71e2d0ed8530..4d06a573d199 100644 +--- a/fs/buffer.c ++++ b/fs/buffer.c +@@ -2077,6 +2077,7 @@ int generic_write_end(struct file *file, struct address_space *mapping, + struct page *page, void *fsdata) + { + struct inode *inode = mapping->host; ++ loff_t old_size = inode->i_size; + int i_size_changed = 0; + + copied = block_write_end(file, mapping, pos, len, copied, page, fsdata); +@@ -2096,6 +2097,8 @@ int generic_write_end(struct file *file, struct address_space *mapping, + unlock_page(page); + page_cache_release(page); + ++ if (old_size < pos) ++ pagecache_isize_extended(inode, old_size, pos); + /* + * Don't mark the inode dirty under page lock. First, it unnecessarily + * makes the holding time of page lock longer. 
Second, it forces lock +@@ -2313,6 +2316,11 @@ static int cont_expand_zero(struct file *file, struct address_space *mapping, + err = 0; + + balance_dirty_pages_ratelimited(mapping); ++ ++ if (unlikely(fatal_signal_pending(current))) { ++ err = -EINTR; ++ goto out; ++ } + } + + /* page covers the boundary, find the boundary offset */ +diff --git a/fs/dcache.c b/fs/dcache.c +index 58d57da91d2a..436612777203 100644 +--- a/fs/dcache.c ++++ b/fs/dcache.c +@@ -2824,6 +2824,9 @@ static int prepend(char **buffer, int *buflen, const char *str, int namelen) + * the beginning of the name. The sequence number check at the caller will + * retry it again when a d_move() does happen. So any garbage in the buffer + * due to mismatched pointer and length will be discarded. ++ * ++ * Data dependency barrier is needed to make sure that we see that terminating ++ * NUL. Alpha strikes again, film at 11... + */ + static int prepend_name(char **buffer, int *buflen, struct qstr *name) + { +@@ -2831,6 +2834,8 @@ static int prepend_name(char **buffer, int *buflen, struct qstr *name) + u32 dlen = ACCESS_ONCE(name->len); + char *p; + ++ smp_read_barrier_depends(); ++ + *buflen -= dlen + 1; + if (*buflen < 0) + return -ENAMETOOLONG; +diff --git a/fs/ext3/super.c b/fs/ext3/super.c +index 37fd31ed16e7..0498390f309e 100644 +--- a/fs/ext3/super.c ++++ b/fs/ext3/super.c +@@ -1354,13 +1354,6 @@ set_qf_format: + "not specified."); + return 0; + } +- } else { +- if (sbi->s_jquota_fmt) { +- ext3_msg(sb, KERN_ERR, "error: journaled quota format " +- "specified with no journaling " +- "enabled."); +- return 0; +- } + } + #endif + return 1; +diff --git a/fs/ext4/bitmap.c b/fs/ext4/bitmap.c +index 3285aa5a706a..b610779a958c 100644 +--- a/fs/ext4/bitmap.c ++++ b/fs/ext4/bitmap.c +@@ -24,8 +24,7 @@ int ext4_inode_bitmap_csum_verify(struct super_block *sb, ext4_group_t group, + __u32 provided, calculated; + struct ext4_sb_info *sbi = EXT4_SB(sb); + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return 1; + + provided = le16_to_cpu(gdp->bg_inode_bitmap_csum_lo); +@@ -46,8 +45,7 @@ void ext4_inode_bitmap_csum_set(struct super_block *sb, ext4_group_t group, + __u32 csum; + struct ext4_sb_info *sbi = EXT4_SB(sb); + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return; + + csum = ext4_chksum(sbi, sbi->s_csum_seed, (__u8 *)bh->b_data, sz); +@@ -65,8 +63,7 @@ int ext4_block_bitmap_csum_verify(struct super_block *sb, ext4_group_t group, + struct ext4_sb_info *sbi = EXT4_SB(sb); + int sz = EXT4_CLUSTERS_PER_GROUP(sb) / 8; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return 1; + + provided = le16_to_cpu(gdp->bg_block_bitmap_csum_lo); +@@ -91,8 +88,7 @@ void ext4_block_bitmap_csum_set(struct super_block *sb, ext4_group_t group, + __u32 csum; + struct ext4_sb_info *sbi = EXT4_SB(sb); + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return; + + csum = ext4_chksum(sbi, sbi->s_csum_seed, (__u8 *)bh->b_data, sz); +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h +index 62f024c051ce..2a6830a7af33 100644 +--- a/fs/ext4/ext4.h ++++ b/fs/ext4/ext4.h +@@ -2110,6 +2110,7 @@ int do_journal_get_write_access(handle_t *handle, + #define CONVERT_INLINE_DATA 2 + + extern struct inode *ext4_iget(struct super_block *, unsigned long); ++extern struct inode *ext4_iget_normal(struct 
super_block *, unsigned long); + extern int ext4_write_inode(struct inode *, struct writeback_control *); + extern int ext4_setattr(struct dentry *, struct iattr *); + extern int ext4_getattr(struct vfsmount *mnt, struct dentry *dentry, +@@ -2340,10 +2341,18 @@ extern int ext4_register_li_request(struct super_block *sb, + static inline int ext4_has_group_desc_csum(struct super_block *sb) + { + return EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_GDT_CSUM | +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM); ++ EXT4_FEATURE_RO_COMPAT_GDT_CSUM) || ++ (EXT4_SB(sb)->s_chksum_driver != NULL); + } + ++static inline int ext4_has_metadata_csum(struct super_block *sb) ++{ ++ WARN_ON_ONCE(EXT4_HAS_RO_COMPAT_FEATURE(sb, ++ EXT4_FEATURE_RO_COMPAT_METADATA_CSUM) && ++ !EXT4_SB(sb)->s_chksum_driver); ++ ++ return (EXT4_SB(sb)->s_chksum_driver != NULL); ++} + static inline ext4_fsblk_t ext4_blocks_count(struct ext4_super_block *es) + { + return ((ext4_fsblk_t)le32_to_cpu(es->s_blocks_count_hi) << 32) | +diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c +index 47188916dd8d..96a1ce159f51 100644 +--- a/fs/ext4/extents.c ++++ b/fs/ext4/extents.c +@@ -74,8 +74,7 @@ static int ext4_extent_block_csum_verify(struct inode *inode, + { + struct ext4_extent_tail *et; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(inode->i_sb)) + return 1; + + et = find_ext4_extent_tail(eh); +@@ -89,8 +88,7 @@ static void ext4_extent_block_csum_set(struct inode *inode, + { + struct ext4_extent_tail *et; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(inode->i_sb)) + return; + + et = find_ext4_extent_tail(eh); +diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c +index 64bb32f17903..a8d1a64d8cb0 100644 +--- a/fs/ext4/ialloc.c ++++ b/fs/ext4/ialloc.c +@@ -864,6 +864,10 @@ got: + struct buffer_head *block_bitmap_bh; + + block_bitmap_bh = ext4_read_block_bitmap(sb, group); ++ if (!block_bitmap_bh) { ++ err = -EIO; ++ goto out; ++ } + BUFFER_TRACE(block_bitmap_bh, "get block bitmap access"); + err = ext4_journal_get_write_access(handle, block_bitmap_bh); + if (err) { +@@ -988,8 +992,7 @@ got: + spin_unlock(&sbi->s_next_gen_lock); + + /* Precompute checksum seed for inode metadata */ +- if (EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) { ++ if (ext4_has_metadata_csum(sb)) { + __u32 csum; + __le32 inum = cpu_to_le32(inode->i_ino); + __le32 gen = cpu_to_le32(inode->i_generation); +diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c +index 82edf5b93352..8c03b747021b 100644 +--- a/fs/ext4/inline.c ++++ b/fs/ext4/inline.c +@@ -1128,8 +1128,7 @@ static int ext4_finish_convert_inline_dir(handle_t *handle, + memcpy((void *)de, buf + EXT4_INLINE_DOTDOT_SIZE, + inline_size - EXT4_INLINE_DOTDOT_SIZE); + +- if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(inode->i_sb)) + csum_size = sizeof(struct ext4_dir_entry_tail); + + inode->i_size = inode->i_sb->s_blocksize; +diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c +index b56062dc8b62..3a7e0341447f 100644 +--- a/fs/ext4/inode.c ++++ b/fs/ext4/inode.c +@@ -83,8 +83,7 @@ static int ext4_inode_csum_verify(struct inode *inode, struct ext4_inode *raw, + + if (EXT4_SB(inode->i_sb)->s_es->s_creator_os != + cpu_to_le32(EXT4_OS_LINUX) || +- !EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ !ext4_has_metadata_csum(inode->i_sb)) + return 1; + + 
provided = le16_to_cpu(raw->i_checksum_lo); +@@ -105,8 +104,7 @@ static void ext4_inode_csum_set(struct inode *inode, struct ext4_inode *raw, + + if (EXT4_SB(inode->i_sb)->s_es->s_creator_os != + cpu_to_le32(EXT4_OS_LINUX) || +- !EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ !ext4_has_metadata_csum(inode->i_sb)) + return; + + csum = ext4_inode_csum(inode, raw, ei); +@@ -2633,6 +2631,20 @@ static int ext4_nonda_switch(struct super_block *sb) + return 0; + } + ++/* We always reserve for an inode update; the superblock could be there too */ ++static int ext4_da_write_credits(struct inode *inode, loff_t pos, unsigned len) ++{ ++ if (likely(EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, ++ EXT4_FEATURE_RO_COMPAT_LARGE_FILE))) ++ return 1; ++ ++ if (pos + len <= 0x7fffffffULL) ++ return 1; ++ ++ /* We might need to update the superblock to set LARGE_FILE */ ++ return 2; ++} ++ + static int ext4_da_write_begin(struct file *file, struct address_space *mapping, + loff_t pos, unsigned len, unsigned flags, + struct page **pagep, void **fsdata) +@@ -2683,7 +2695,8 @@ retry_grab: + * of file which has an already mapped buffer. + */ + retry_journal: +- handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, 1); ++ handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE, ++ ext4_da_write_credits(inode, pos, len)); + if (IS_ERR(handle)) { + page_cache_release(page); + return PTR_ERR(handle); +@@ -4061,8 +4074,7 @@ struct inode *ext4_iget(struct super_block *sb, unsigned long ino) + ei->i_extra_isize = 0; + + /* Precompute checksum seed for inode metadata */ +- if (EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) { ++ if (ext4_has_metadata_csum(sb)) { + struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); + __u32 csum; + __le32 inum = cpu_to_le32(inode->i_ino); +@@ -4250,6 +4262,13 @@ bad_inode: + return ERR_PTR(ret); + } + ++struct inode *ext4_iget_normal(struct super_block *sb, unsigned long ino) ++{ ++ if (ino < EXT4_FIRST_INO(sb) && ino != EXT4_ROOT_INO) ++ return ERR_PTR(-EIO); ++ return ext4_iget(sb, ino); ++} ++ + static int ext4_inode_blocks_set(handle_t *handle, + struct ext4_inode *raw_inode, + struct ext4_inode_info *ei) +@@ -4645,8 +4664,12 @@ int ext4_setattr(struct dentry *dentry, struct iattr *attr) + ext4_orphan_del(NULL, inode); + goto err_out; + } +- } else ++ } else { ++ loff_t oldsize = inode->i_size; ++ + i_size_write(inode, attr->ia_size); ++ pagecache_isize_extended(inode, oldsize, inode->i_size); ++ } + + /* + * Blocks are going to be removed from the inode. 
Wait +diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c +index a2a837f00407..dfe982dee0b3 100644 +--- a/fs/ext4/ioctl.c ++++ b/fs/ext4/ioctl.c +@@ -343,8 +343,7 @@ flags_out: + if (!inode_owner_or_capable(inode)) + return -EPERM; + +- if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) { ++ if (ext4_has_metadata_csum(inode->i_sb)) { + ext4_warning(sb, "Setting inode version is not " + "supported with metadata_csum enabled."); + return -ENOTTY; +@@ -544,9 +543,17 @@ group_add_out: + } + + case EXT4_IOC_SWAP_BOOT: ++ { ++ int err; + if (!(filp->f_mode & FMODE_WRITE)) + return -EBADF; +- return swap_inode_boot_loader(sb, inode); ++ err = mnt_want_write_file(filp); ++ if (err) ++ return err; ++ err = swap_inode_boot_loader(sb, inode); ++ mnt_drop_write_file(filp); ++ return err; ++ } + + case EXT4_IOC_RESIZE_FS: { + ext4_fsblk_t n_blocks_count; +diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c +index 04434ad3e8e0..1268a1b5afa9 100644 +--- a/fs/ext4/mmp.c ++++ b/fs/ext4/mmp.c +@@ -20,8 +20,7 @@ static __le32 ext4_mmp_csum(struct super_block *sb, struct mmp_struct *mmp) + + int ext4_mmp_csum_verify(struct super_block *sb, struct mmp_struct *mmp) + { +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return 1; + + return mmp->mmp_checksum == ext4_mmp_csum(sb, mmp); +@@ -29,8 +28,7 @@ int ext4_mmp_csum_verify(struct super_block *sb, struct mmp_struct *mmp) + + void ext4_mmp_csum_set(struct super_block *sb, struct mmp_struct *mmp) + { +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return; + + mmp->mmp_checksum = ext4_mmp_csum(sb, mmp); +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index d050e043e884..2dcbfb6245d8 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -123,8 +123,7 @@ static struct buffer_head *__ext4_read_dirblock(struct inode *inode, + "directory leaf block found instead of index block"); + return ERR_PTR(-EIO); + } +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM) || ++ if (!ext4_has_metadata_csum(inode->i_sb) || + buffer_verified(bh)) + return bh; + +@@ -339,8 +338,7 @@ int ext4_dirent_csum_verify(struct inode *inode, struct ext4_dir_entry *dirent) + { + struct ext4_dir_entry_tail *t; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(inode->i_sb)) + return 1; + + t = get_dirent_tail(inode, dirent); +@@ -361,8 +359,7 @@ static void ext4_dirent_csum_set(struct inode *inode, + { + struct ext4_dir_entry_tail *t; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(inode->i_sb)) + return; + + t = get_dirent_tail(inode, dirent); +@@ -437,8 +434,7 @@ static int ext4_dx_csum_verify(struct inode *inode, + struct dx_tail *t; + int count_offset, limit, count; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(inode->i_sb)) + return 1; + + c = get_dx_countlimit(inode, dirent, &count_offset); +@@ -467,8 +463,7 @@ static void ext4_dx_csum_set(struct inode *inode, struct ext4_dir_entry *dirent) + struct dx_tail *t; + int count_offset, limit, count; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(inode->i_sb)) + return; + + c = get_dx_countlimit(inode, dirent, &count_offset); +@@ -556,8 +551,7 @@ static 
inline unsigned dx_root_limit(struct inode *dir, unsigned infosize) + unsigned entry_space = dir->i_sb->s_blocksize - EXT4_DIR_REC_LEN(1) - + EXT4_DIR_REC_LEN(2) - infosize; + +- if (EXT4_HAS_RO_COMPAT_FEATURE(dir->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(dir->i_sb)) + entry_space -= sizeof(struct dx_tail); + return entry_space / sizeof(struct dx_entry); + } +@@ -566,8 +560,7 @@ static inline unsigned dx_node_limit(struct inode *dir) + { + unsigned entry_space = dir->i_sb->s_blocksize - EXT4_DIR_REC_LEN(0); + +- if (EXT4_HAS_RO_COMPAT_FEATURE(dir->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(dir->i_sb)) + entry_space -= sizeof(struct dx_tail); + return entry_space / sizeof(struct dx_entry); + } +@@ -1429,7 +1422,7 @@ static struct dentry *ext4_lookup(struct inode *dir, struct dentry *dentry, unsi + dentry); + return ERR_PTR(-EIO); + } +- inode = ext4_iget(dir->i_sb, ino); ++ inode = ext4_iget_normal(dir->i_sb, ino); + if (inode == ERR_PTR(-ESTALE)) { + EXT4_ERROR_INODE(dir, + "deleted inode referenced: %u", +@@ -1460,7 +1453,7 @@ struct dentry *ext4_get_parent(struct dentry *child) + return ERR_PTR(-EIO); + } + +- return d_obtain_alias(ext4_iget(child->d_inode->i_sb, ino)); ++ return d_obtain_alias(ext4_iget_normal(child->d_inode->i_sb, ino)); + } + + /* +@@ -1534,8 +1527,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir, + int csum_size = 0; + int err = 0, i; + +- if (EXT4_HAS_RO_COMPAT_FEATURE(dir->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(dir->i_sb)) + csum_size = sizeof(struct ext4_dir_entry_tail); + + bh2 = ext4_append(handle, dir, &newblock); +@@ -1704,8 +1696,7 @@ static int add_dirent_to_buf(handle_t *handle, struct dentry *dentry, + int csum_size = 0; + int err; + +- if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(inode->i_sb)) + csum_size = sizeof(struct ext4_dir_entry_tail); + + if (!de) { +@@ -1772,8 +1763,7 @@ static int make_indexed_dir(handle_t *handle, struct dentry *dentry, + struct fake_dirent *fde; + int csum_size = 0; + +- if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(inode->i_sb)) + csum_size = sizeof(struct ext4_dir_entry_tail); + + blocksize = dir->i_sb->s_blocksize; +@@ -1889,8 +1879,7 @@ static int ext4_add_entry(handle_t *handle, struct dentry *dentry, + ext4_lblk_t block, blocks; + int csum_size = 0; + +- if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(inode->i_sb)) + csum_size = sizeof(struct ext4_dir_entry_tail); + + sb = dir->i_sb; +@@ -2152,8 +2141,7 @@ static int ext4_delete_entry(handle_t *handle, + return err; + } + +- if (EXT4_HAS_RO_COMPAT_FEATURE(dir->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(dir->i_sb)) + csum_size = sizeof(struct ext4_dir_entry_tail); + + BUFFER_TRACE(bh, "get_write_access"); +@@ -2372,8 +2360,7 @@ static int ext4_init_new_dir(handle_t *handle, struct inode *dir, + int csum_size = 0; + int err; + +- if (EXT4_HAS_RO_COMPAT_FEATURE(dir->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(dir->i_sb)) + csum_size = sizeof(struct ext4_dir_entry_tail); + + if (ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA)) { +diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c +index f3b84cd9de56..2400ad1c3d12 100644 +--- a/fs/ext4/resize.c ++++ 
b/fs/ext4/resize.c +@@ -1071,7 +1071,7 @@ static void update_backups(struct super_block *sb, int blk_off, char *data, + break; + + if (meta_bg == 0) +- backup_block = group * bpg + blk_off; ++ backup_block = ((ext4_fsblk_t)group) * bpg + blk_off; + else + backup_block = (ext4_group_first_block_no(sb, group) + + ext4_bg_has_super(sb, group)); +@@ -1200,8 +1200,7 @@ static int ext4_set_bitmap_checksums(struct super_block *sb, + { + struct buffer_head *bh; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return 0; + + bh = ext4_get_bitmap(sb, group_data->inode_bitmap); +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index a46030d6b4af..9fb3e6c0c578 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -140,8 +140,7 @@ static __le32 ext4_superblock_csum(struct super_block *sb, + int ext4_superblock_csum_verify(struct super_block *sb, + struct ext4_super_block *es) + { +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return 1; + + return es->s_checksum == ext4_superblock_csum(sb, es); +@@ -151,8 +150,7 @@ void ext4_superblock_csum_set(struct super_block *sb) + { + struct ext4_super_block *es = EXT4_SB(sb)->s_es; + +- if (!EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(sb)) + return; + + es->s_checksum = ext4_superblock_csum(sb, es); +@@ -996,7 +994,7 @@ static struct inode *ext4_nfs_get_inode(struct super_block *sb, + * Currently we don't know the generation for parent directory, so + * a generation of 0 means "accept any" + */ +- inode = ext4_iget(sb, ino); ++ inode = ext4_iget_normal(sb, ino); + if (IS_ERR(inode)) + return ERR_CAST(inode); + if (generation && inode->i_generation != generation) { +@@ -1706,13 +1704,6 @@ static int parse_options(char *options, struct super_block *sb, + "not specified"); + return 0; + } +- } else { +- if (sbi->s_jquota_fmt) { +- ext4_msg(sb, KERN_ERR, "journaled quota format " +- "specified with no journaling " +- "enabled"); +- return 0; +- } + } + #endif + if (test_opt(sb, DIOREAD_NOLOCK)) { +@@ -2010,8 +2001,7 @@ static __le16 ext4_group_desc_csum(struct ext4_sb_info *sbi, __u32 block_group, + __u16 crc = 0; + __le32 le_group = cpu_to_le32(block_group); + +- if ((sbi->s_es->s_feature_ro_compat & +- cpu_to_le32(EXT4_FEATURE_RO_COMPAT_METADATA_CSUM))) { ++ if (ext4_has_metadata_csum(sbi->s_sb)) { + /* Use new metadata_csum algorithm */ + __le16 save_csum; + __u32 csum32; +@@ -2029,6 +2019,10 @@ static __le16 ext4_group_desc_csum(struct ext4_sb_info *sbi, __u32 block_group, + } + + /* old crc16 code */ ++ if (!(sbi->s_es->s_feature_ro_compat & ++ cpu_to_le32(EXT4_FEATURE_RO_COMPAT_GDT_CSUM))) ++ return 0; ++ + offset = offsetof(struct ext4_group_desc, bg_checksum); + + crc = crc16(~0, sbi->s_es->s_uuid, sizeof(sbi->s_es->s_uuid)); +@@ -3167,8 +3161,7 @@ static int set_journal_csum_feature_set(struct super_block *sb) + int compat, incompat; + struct ext4_sb_info *sbi = EXT4_SB(sb); + +- if (EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) { ++ if (ext4_has_metadata_csum(sb)) { + /* journal checksum v3 */ + compat = 0; + incompat = JBD2_FEATURE_INCOMPAT_CSUM_V3; +@@ -3475,8 +3468,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) + } + + /* Precompute checksum seed for all metadata */ +- if (EXT4_HAS_RO_COMPAT_FEATURE(sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (ext4_has_metadata_csum(sb)) + 
sbi->s_csum_seed = ext4_chksum(sbi, ~0, es->s_uuid, + sizeof(es->s_uuid)); + +@@ -3494,6 +3486,10 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) + #ifdef CONFIG_EXT4_FS_POSIX_ACL + set_opt(sb, POSIX_ACL); + #endif ++ /* don't forget to enable journal_csum when metadata_csum is enabled. */ ++ if (ext4_has_metadata_csum(sb)) ++ set_opt(sb, JOURNAL_CHECKSUM); ++ + if ((def_mount_opts & EXT4_DEFM_JMODE) == EXT4_DEFM_JMODE_DATA) + set_opt(sb, JOURNAL_DATA); + else if ((def_mount_opts & EXT4_DEFM_JMODE) == EXT4_DEFM_JMODE_ORDERED) +diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c +index 55e611c1513c..8825154b20b6 100644 +--- a/fs/ext4/xattr.c ++++ b/fs/ext4/xattr.c +@@ -141,8 +141,7 @@ static int ext4_xattr_block_csum_verify(struct inode *inode, + sector_t block_nr, + struct ext4_xattr_header *hdr) + { +- if (EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM) && ++ if (ext4_has_metadata_csum(inode->i_sb) && + (hdr->h_checksum != ext4_xattr_block_csum(inode, block_nr, hdr))) + return 0; + return 1; +@@ -152,8 +151,7 @@ static void ext4_xattr_block_csum_set(struct inode *inode, + sector_t block_nr, + struct ext4_xattr_header *hdr) + { +- if (!EXT4_HAS_RO_COMPAT_FEATURE(inode->i_sb, +- EXT4_FEATURE_RO_COMPAT_METADATA_CSUM)) ++ if (!ext4_has_metadata_csum(inode->i_sb)) + return; + + hdr->h_checksum = ext4_xattr_block_csum(inode, block_nr, hdr); +@@ -189,14 +187,28 @@ ext4_listxattr(struct dentry *dentry, char *buffer, size_t size) + } + + static int +-ext4_xattr_check_names(struct ext4_xattr_entry *entry, void *end) ++ext4_xattr_check_names(struct ext4_xattr_entry *entry, void *end, ++ void *value_start) + { +- while (!IS_LAST_ENTRY(entry)) { +- struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(entry); ++ struct ext4_xattr_entry *e = entry; ++ ++ while (!IS_LAST_ENTRY(e)) { ++ struct ext4_xattr_entry *next = EXT4_XATTR_NEXT(e); + if ((void *)next >= end) + return -EIO; +- entry = next; ++ e = next; + } ++ ++ while (!IS_LAST_ENTRY(entry)) { ++ if (entry->e_value_size != 0 && ++ (value_start + le16_to_cpu(entry->e_value_offs) < ++ (void *)e + sizeof(__u32) || ++ value_start + le16_to_cpu(entry->e_value_offs) + ++ le32_to_cpu(entry->e_value_size) > end)) ++ return -EIO; ++ entry = EXT4_XATTR_NEXT(entry); ++ } ++ + return 0; + } + +@@ -213,7 +225,8 @@ ext4_xattr_check_block(struct inode *inode, struct buffer_head *bh) + return -EIO; + if (!ext4_xattr_block_csum_verify(inode, bh->b_blocknr, BHDR(bh))) + return -EIO; +- error = ext4_xattr_check_names(BFIRST(bh), bh->b_data + bh->b_size); ++ error = ext4_xattr_check_names(BFIRST(bh), bh->b_data + bh->b_size, ++ bh->b_data); + if (!error) + set_buffer_verified(bh); + return error; +@@ -329,7 +342,7 @@ ext4_xattr_ibody_get(struct inode *inode, int name_index, const char *name, + header = IHDR(inode, raw_inode); + entry = IFIRST(header); + end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size; +- error = ext4_xattr_check_names(entry, end); ++ error = ext4_xattr_check_names(entry, end, entry); + if (error) + goto cleanup; + error = ext4_xattr_find_entry(&entry, name_index, name, +@@ -457,7 +470,7 @@ ext4_xattr_ibody_list(struct dentry *dentry, char *buffer, size_t buffer_size) + raw_inode = ext4_raw_inode(&iloc); + header = IHDR(inode, raw_inode); + end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size; +- error = ext4_xattr_check_names(IFIRST(header), end); ++ error = ext4_xattr_check_names(IFIRST(header), end, IFIRST(header)); + if (error) + goto cleanup; + error = 
ext4_xattr_list_entries(dentry, IFIRST(header), +@@ -972,7 +985,8 @@ int ext4_xattr_ibody_find(struct inode *inode, struct ext4_xattr_info *i, + is->s.here = is->s.first; + is->s.end = (void *)raw_inode + EXT4_SB(inode->i_sb)->s_inode_size; + if (ext4_test_inode_state(inode, EXT4_STATE_XATTR)) { +- error = ext4_xattr_check_names(IFIRST(header), is->s.end); ++ error = ext4_xattr_check_names(IFIRST(header), is->s.end, ++ IFIRST(header)); + if (error) + return error; + /* Find the named attribute. */ +diff --git a/fs/jbd2/recovery.c b/fs/jbd2/recovery.c +index 9b329b55ffe3..bcbef08a4d8f 100644 +--- a/fs/jbd2/recovery.c ++++ b/fs/jbd2/recovery.c +@@ -525,6 +525,7 @@ static int do_one_pass(journal_t *journal, + !jbd2_descr_block_csum_verify(journal, + bh->b_data)) { + err = -EIO; ++ brelse(bh); + goto failed; + } + +diff --git a/fs/jffs2/jffs2_fs_sb.h b/fs/jffs2/jffs2_fs_sb.h +index 413ef89c2d1b..046fee8b6e9b 100644 +--- a/fs/jffs2/jffs2_fs_sb.h ++++ b/fs/jffs2/jffs2_fs_sb.h +@@ -134,8 +134,6 @@ struct jffs2_sb_info { + struct rw_semaphore wbuf_sem; /* Protects the write buffer */ + + struct delayed_work wbuf_dwork; /* write-buffer write-out work */ +- int wbuf_queued; /* non-zero delayed work is queued */ +- spinlock_t wbuf_dwork_lock; /* protects wbuf_dwork and and wbuf_queued */ + + unsigned char *oobbuf; + int oobavail; /* How many bytes are available for JFFS2 in OOB */ +diff --git a/fs/jffs2/wbuf.c b/fs/jffs2/wbuf.c +index a6597d60d76d..09ed55190ee2 100644 +--- a/fs/jffs2/wbuf.c ++++ b/fs/jffs2/wbuf.c +@@ -1162,10 +1162,6 @@ static void delayed_wbuf_sync(struct work_struct *work) + struct jffs2_sb_info *c = work_to_sb(work); + struct super_block *sb = OFNI_BS_2SFFJ(c); + +- spin_lock(&c->wbuf_dwork_lock); +- c->wbuf_queued = 0; +- spin_unlock(&c->wbuf_dwork_lock); +- + if (!(sb->s_flags & MS_RDONLY)) { + jffs2_dbg(1, "%s()\n", __func__); + jffs2_flush_wbuf_gc(c, 0); +@@ -1180,14 +1176,9 @@ void jffs2_dirty_trigger(struct jffs2_sb_info *c) + if (sb->s_flags & MS_RDONLY) + return; + +- spin_lock(&c->wbuf_dwork_lock); +- if (!c->wbuf_queued) { ++ delay = msecs_to_jiffies(dirty_writeback_interval * 10); ++ if (queue_delayed_work(system_long_wq, &c->wbuf_dwork, delay)) + jffs2_dbg(1, "%s()\n", __func__); +- delay = msecs_to_jiffies(dirty_writeback_interval * 10); +- queue_delayed_work(system_long_wq, &c->wbuf_dwork, delay); +- c->wbuf_queued = 1; +- } +- spin_unlock(&c->wbuf_dwork_lock); + } + + int jffs2_nand_flash_setup(struct jffs2_sb_info *c) +@@ -1211,7 +1202,6 @@ int jffs2_nand_flash_setup(struct jffs2_sb_info *c) + + /* Initialise write buffer */ + init_rwsem(&c->wbuf_sem); +- spin_lock_init(&c->wbuf_dwork_lock); + INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync); + c->wbuf_pagesize = c->mtd->writesize; + c->wbuf_ofs = 0xFFFFFFFF; +@@ -1251,7 +1241,6 @@ int jffs2_dataflash_setup(struct jffs2_sb_info *c) { + + /* Initialize write buffer */ + init_rwsem(&c->wbuf_sem); +- spin_lock_init(&c->wbuf_dwork_lock); + INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync); + c->wbuf_pagesize = c->mtd->erasesize; + +@@ -1311,7 +1300,6 @@ int jffs2_nor_wbuf_flash_setup(struct jffs2_sb_info *c) { + + /* Initialize write buffer */ + init_rwsem(&c->wbuf_sem); +- spin_lock_init(&c->wbuf_dwork_lock); + INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync); + + c->wbuf_pagesize = c->mtd->writesize; +@@ -1346,7 +1334,6 @@ int jffs2_ubivol_setup(struct jffs2_sb_info *c) { + return 0; + + init_rwsem(&c->wbuf_sem); +- spin_lock_init(&c->wbuf_dwork_lock); + INIT_DELAYED_WORK(&c->wbuf_dwork, 
delayed_wbuf_sync); + + c->wbuf_pagesize = c->mtd->writesize; +diff --git a/fs/lockd/mon.c b/fs/lockd/mon.c +index 1812f026960c..6ae664b489af 100644 +--- a/fs/lockd/mon.c ++++ b/fs/lockd/mon.c +@@ -159,6 +159,12 @@ static int nsm_mon_unmon(struct nsm_handle *nsm, u32 proc, struct nsm_res *res, + + msg.rpc_proc = &clnt->cl_procinfo[proc]; + status = rpc_call_sync(clnt, &msg, RPC_TASK_SOFTCONN); ++ if (status == -ECONNREFUSED) { ++ dprintk("lockd: NSM upcall RPC failed, status=%d, forcing rebind\n", ++ status); ++ rpc_force_rebind(clnt); ++ status = rpc_call_sync(clnt, &msg, RPC_TASK_SOFTCONN); ++ } + if (status < 0) + dprintk("lockd: NSM upcall RPC failed, status=%d\n", + status); +diff --git a/fs/namei.c b/fs/namei.c +index dd2f2c5bda55..0dd72c8e65fd 100644 +--- a/fs/namei.c ++++ b/fs/namei.c +@@ -3128,7 +3128,8 @@ static int do_tmpfile(int dfd, struct filename *pathname, + if (error) + goto out2; + audit_inode(pathname, nd->path.dentry, 0); +- error = may_open(&nd->path, op->acc_mode, op->open_flag); ++ /* Don't check for other permissions, the inode was just created */ ++ error = may_open(&nd->path, MAY_OPEN, op->open_flag); + if (error) + goto out2; + file->f_path.mnt = nd->path.mnt; +diff --git a/fs/namespace.c b/fs/namespace.c +index c7d4a0ae2c65..d9bf3efbf040 100644 +--- a/fs/namespace.c ++++ b/fs/namespace.c +@@ -2831,6 +2831,9 @@ SYSCALL_DEFINE2(pivot_root, const char __user *, new_root, + /* make sure we can reach put_old from new_root */ + if (!is_path_reachable(old_mnt, old.dentry, &new)) + goto out4; ++ /* make certain new is below the root */ ++ if (!is_path_reachable(new_mnt, new.dentry, &root)) ++ goto out4; + root_mp->m_count++; /* pin it so it won't go away */ + lock_mount_hash(); + detach_mnt(new_mnt, &parent_path); +diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c +index f23a6ca37504..86f5d3e474bf 100644 +--- a/fs/nfsd/nfs4proc.c ++++ b/fs/nfsd/nfs4proc.c +@@ -1243,7 +1243,8 @@ static bool need_wrongsec_check(struct svc_rqst *rqstp) + */ + if (argp->opcnt == resp->opcnt) + return false; +- ++ if (next->opnum == OP_ILLEGAL) ++ return false; + nextd = OPDESC(next); + /* + * Rest of 2.6.3.1.1: certain operations will return WRONGSEC +diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c +index 12823845d324..14120a3c6195 100644 +--- a/fs/pstore/inode.c ++++ b/fs/pstore/inode.c +@@ -319,10 +319,10 @@ int pstore_mkfile(enum pstore_type_id type, char *psname, u64 id, int count, + compressed ? 
".enc.z" : ""); + break; + case PSTORE_TYPE_CONSOLE: +- sprintf(name, "console-%s", psname); ++ sprintf(name, "console-%s-%lld", psname, id); + break; + case PSTORE_TYPE_FTRACE: +- sprintf(name, "ftrace-%s", psname); ++ sprintf(name, "ftrace-%s-%lld", psname, id); + break; + case PSTORE_TYPE_MCE: + sprintf(name, "mce-%s-%lld", psname, id); +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c +index ce87c9007b0f..89da95700c69 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -637,7 +637,7 @@ int dquot_writeback_dquots(struct super_block *sb, int type) + dqstats_inc(DQST_LOOKUPS); + err = sb->dq_op->write_dquot(dquot); + if (!ret && err) +- err = ret; ++ ret = err; + dqput(dquot); + spin_lock(&dq_list_lock); + } +diff --git a/fs/super.c b/fs/super.c +index 7624267b2043..88a6bc6e3cc9 100644 +--- a/fs/super.c ++++ b/fs/super.c +@@ -81,6 +81,8 @@ static unsigned long super_cache_scan(struct shrinker *shrink, + inodes = list_lru_count_node(&sb->s_inode_lru, sc->nid); + dentries = list_lru_count_node(&sb->s_dentry_lru, sc->nid); + total_objects = dentries + inodes + fs_objects + 1; ++ if (!total_objects) ++ total_objects = 1; + + /* proportion the scan between the caches */ + dentries = mult_frac(sc->nr_to_scan, dentries, total_objects); +diff --git a/fs/ubifs/commit.c b/fs/ubifs/commit.c +index ff8229340cd5..26b69b2d4a45 100644 +--- a/fs/ubifs/commit.c ++++ b/fs/ubifs/commit.c +@@ -166,15 +166,10 @@ static int do_commit(struct ubifs_info *c) + err = ubifs_orphan_end_commit(c); + if (err) + goto out; +- old_ltail_lnum = c->ltail_lnum; +- err = ubifs_log_end_commit(c, new_ltail_lnum); +- if (err) +- goto out; + err = dbg_check_old_index(c, &zroot); + if (err) + goto out; + +- mutex_lock(&c->mst_mutex); + c->mst_node->cmt_no = cpu_to_le64(c->cmt_no); + c->mst_node->log_lnum = cpu_to_le32(new_ltail_lnum); + c->mst_node->root_lnum = cpu_to_le32(zroot.lnum); +@@ -203,8 +198,9 @@ static int do_commit(struct ubifs_info *c) + c->mst_node->flags |= cpu_to_le32(UBIFS_MST_NO_ORPHS); + else + c->mst_node->flags &= ~cpu_to_le32(UBIFS_MST_NO_ORPHS); +- err = ubifs_write_master(c); +- mutex_unlock(&c->mst_mutex); ++ ++ old_ltail_lnum = c->ltail_lnum; ++ err = ubifs_log_end_commit(c, new_ltail_lnum); + if (err) + goto out; + +diff --git a/fs/ubifs/log.c b/fs/ubifs/log.c +index a902c5919e42..8d59de86dc9a 100644 +--- a/fs/ubifs/log.c ++++ b/fs/ubifs/log.c +@@ -106,10 +106,14 @@ static inline long long empty_log_bytes(const struct ubifs_info *c) + h = (long long)c->lhead_lnum * c->leb_size + c->lhead_offs; + t = (long long)c->ltail_lnum * c->leb_size; + +- if (h >= t) ++ if (h > t) + return c->log_bytes - h + t; +- else ++ else if (h != t) + return t - h; ++ else if (c->lhead_lnum != c->ltail_lnum) ++ return 0; ++ else ++ return c->log_bytes; + } + + /** +@@ -447,9 +451,9 @@ out: + * @ltail_lnum: new log tail LEB number + * + * This function is called on when the commit operation was finished. It +- * moves log tail to new position and unmaps LEBs which contain obsolete data. +- * Returns zero in case of success and a negative error code in case of +- * failure. ++ * moves log tail to new position and updates the master node so that it stores ++ * the new log tail LEB number. Returns zero in case of success and a negative ++ * error code in case of failure. 
+ */ + int ubifs_log_end_commit(struct ubifs_info *c, int ltail_lnum) + { +@@ -477,7 +481,12 @@ int ubifs_log_end_commit(struct ubifs_info *c, int ltail_lnum) + spin_unlock(&c->buds_lock); + + err = dbg_check_bud_bytes(c); ++ if (err) ++ goto out; + ++ err = ubifs_write_master(c); ++ ++out: + mutex_unlock(&c->log_mutex); + return err; + } +diff --git a/fs/ubifs/master.c b/fs/ubifs/master.c +index ab83ace9910a..1a4bb9e8b3b8 100644 +--- a/fs/ubifs/master.c ++++ b/fs/ubifs/master.c +@@ -352,10 +352,9 @@ int ubifs_read_master(struct ubifs_info *c) + * ubifs_write_master - write master node. + * @c: UBIFS file-system description object + * +- * This function writes the master node. The caller has to take the +- * @c->mst_mutex lock before calling this function. Returns zero in case of +- * success and a negative error code in case of failure. The master node is +- * written twice to enable recovery. ++ * This function writes the master node. Returns zero in case of success and a ++ * negative error code in case of failure. The master node is written twice to ++ * enable recovery. + */ + int ubifs_write_master(struct ubifs_info *c) + { +diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c +index 5ded8490c0c6..94d9a64287b7 100644 +--- a/fs/ubifs/super.c ++++ b/fs/ubifs/super.c +@@ -1957,7 +1957,6 @@ static struct ubifs_info *alloc_ubifs_info(struct ubi_volume_desc *ubi) + mutex_init(&c->lp_mutex); + mutex_init(&c->tnc_mutex); + mutex_init(&c->log_mutex); +- mutex_init(&c->mst_mutex); + mutex_init(&c->umount_mutex); + mutex_init(&c->bu_mutex); + mutex_init(&c->write_reserve_mutex); +diff --git a/fs/ubifs/ubifs.h b/fs/ubifs/ubifs.h +index e8c8cfe1435c..7ab9c710c749 100644 +--- a/fs/ubifs/ubifs.h ++++ b/fs/ubifs/ubifs.h +@@ -1042,7 +1042,6 @@ struct ubifs_debug_info; + * + * @mst_node: master node + * @mst_offs: offset of valid master node +- * @mst_mutex: protects the master node area, @mst_node, and @mst_offs + * + * @max_bu_buf_len: maximum bulk-read buffer length + * @bu_mutex: protects the pre-allocated bulk-read buffer and @c->bu +@@ -1282,7 +1281,6 @@ struct ubifs_info { + + struct ubifs_mst_node *mst_node; + int mst_offs; +- struct mutex mst_mutex; + + int max_bu_buf_len; + struct mutex bu_mutex; +diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c +index c6ff3cf5a5bb..0eaaa2d296f0 100644 +--- a/fs/xfs/xfs_mount.c ++++ b/fs/xfs/xfs_mount.c +@@ -321,7 +321,6 @@ reread: + * Initialize the mount structure from the superblock. + */ + xfs_sb_from_disk(sbp, XFS_BUF_TO_SBP(bp)); +- xfs_sb_quota_from_disk(sbp); + + /* + * If we haven't validated the superblock, do so now before we try +diff --git a/fs/xfs/xfs_sb.c b/fs/xfs/xfs_sb.c +index 1e116794bb66..4afd393846d3 100644 +--- a/fs/xfs/xfs_sb.c ++++ b/fs/xfs/xfs_sb.c +@@ -397,10 +397,11 @@ xfs_sb_quota_from_disk(struct xfs_sb *sbp) + } + } + +-void +-xfs_sb_from_disk( ++static void ++__xfs_sb_from_disk( + struct xfs_sb *to, +- xfs_dsb_t *from) ++ xfs_dsb_t *from, ++ bool convert_xquota) + { + to->sb_magicnum = be32_to_cpu(from->sb_magicnum); + to->sb_blocksize = be32_to_cpu(from->sb_blocksize); +@@ -456,6 +457,17 @@ xfs_sb_from_disk( + to->sb_pad = 0; + to->sb_pquotino = be64_to_cpu(from->sb_pquotino); + to->sb_lsn = be64_to_cpu(from->sb_lsn); ++ /* Convert on-disk flags to in-memory flags? 
*/ ++ if (convert_xquota) ++ xfs_sb_quota_from_disk(to); ++} ++ ++void ++xfs_sb_from_disk( ++ struct xfs_sb *to, ++ xfs_dsb_t *from) ++{ ++ __xfs_sb_from_disk(to, from, true); + } + + static inline void +@@ -571,7 +583,11 @@ xfs_sb_verify( + struct xfs_mount *mp = bp->b_target->bt_mount; + struct xfs_sb sb; + +- xfs_sb_from_disk(&sb, XFS_BUF_TO_SBP(bp)); ++ /* ++ * Use call variant which doesn't convert quota flags from disk ++ * format, because xfs_mount_validate_sb checks the on-disk flags. ++ */ ++ __xfs_sb_from_disk(&sb, XFS_BUF_TO_SBP(bp), false); + + /* + * Only check the in progress field for the primary superblock as +diff --git a/include/drm/drm_pciids.h b/include/drm/drm_pciids.h +index bcec4c46cc2e..ca52de5a5c97 100644 +--- a/include/drm/drm_pciids.h ++++ b/include/drm/drm_pciids.h +@@ -74,7 +74,6 @@ + {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ + {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ + {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ +- {0x1002, 0x4C6E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|RADEON_IS_MOBILITY}, \ + {0x1002, 0x4E44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ + {0x1002, 0x4E45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ + {0x1002, 0x4E46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ +diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h +index 4afa4f8f6090..a693c6d29328 100644 +--- a/include/linux/blkdev.h ++++ b/include/linux/blkdev.h +@@ -1232,10 +1232,9 @@ static inline int queue_alignment_offset(struct request_queue *q) + static inline int queue_limit_alignment_offset(struct queue_limits *lim, sector_t sector) + { + unsigned int granularity = max(lim->physical_block_size, lim->io_min); +- unsigned int alignment = (sector << 9) & (granularity - 1); ++ unsigned int alignment = sector_div(sector, granularity >> 9) << 9; + +- return (granularity + lim->alignment_offset - alignment) +- & (granularity - 1); ++ return (granularity + lim->alignment_offset - alignment) % granularity; + } + + static inline int bdev_alignment_offset(struct block_device *bdev) +diff --git a/include/linux/hid.h b/include/linux/hid.h +index 31b9d299ef6c..00c88fccd162 100644 +--- a/include/linux/hid.h ++++ b/include/linux/hid.h +@@ -286,6 +286,7 @@ struct hid_item { + #define HID_QUIRK_HIDINPUT_FORCE 0x00000080 + #define HID_QUIRK_NO_EMPTY_INPUT 0x00000100 + #define HID_QUIRK_NO_INIT_INPUT_REPORTS 0x00000200 ++#define HID_QUIRK_ALWAYS_POLL 0x00000400 + #define HID_QUIRK_SKIP_OUTPUT_REPORTS 0x00010000 + #define HID_QUIRK_FULLSPEED_INTERVAL 0x10000000 + #define HID_QUIRK_NO_INIT_REPORTS 0x20000000 +diff --git a/include/linux/mm.h b/include/linux/mm.h +index c1b7414c7bef..0a0b024ec7e8 100644 +--- a/include/linux/mm.h ++++ b/include/linux/mm.h +@@ -1123,6 +1123,7 @@ static inline void unmap_shared_mapping_range(struct address_space *mapping, + + extern void truncate_pagecache(struct inode *inode, loff_t new); + extern void truncate_setsize(struct inode *inode, loff_t newsize); ++void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to); + void truncate_pagecache_range(struct inode *inode, loff_t offset, loff_t end); + int truncate_inode_page(struct address_space *mapping, struct page *page); + int generic_error_remove_page(struct address_space *mapping, struct page *page); +diff --git a/include/linux/of.h b/include/linux/of.h +index 435cb995904d..3f8144dadaef 100644 +--- a/include/linux/of.h ++++ b/include/linux/of.h +@@ -215,14 +215,12 @@ extern int 
of_property_read_u64(const struct device_node *np, + extern int of_property_read_string(struct device_node *np, + const char *propname, + const char **out_string); +-extern int of_property_read_string_index(struct device_node *np, +- const char *propname, +- int index, const char **output); + extern int of_property_match_string(struct device_node *np, + const char *propname, + const char *string); +-extern int of_property_count_strings(struct device_node *np, +- const char *propname); ++extern int of_property_read_string_helper(struct device_node *np, ++ const char *propname, ++ const char **out_strs, size_t sz, int index); + extern int of_device_is_compatible(const struct device_node *device, + const char *); + extern int of_device_is_available(const struct device_node *device); +@@ -422,15 +420,9 @@ static inline int of_property_read_string(struct device_node *np, + return -ENOSYS; + } + +-static inline int of_property_read_string_index(struct device_node *np, +- const char *propname, int index, +- const char **out_string) +-{ +- return -ENOSYS; +-} +- +-static inline int of_property_count_strings(struct device_node *np, +- const char *propname) ++static inline int of_property_read_string_helper(struct device_node *np, ++ const char *propname, ++ const char **out_strs, size_t sz, int index) + { + return -ENOSYS; + } +@@ -536,6 +528,70 @@ static inline struct device_node *of_find_matching_node( + } + + /** ++ * of_property_read_string_array() - Read an array of strings from a multiple ++ * strings property. ++ * @np: device node from which the property value is to be read. ++ * @propname: name of the property to be searched. ++ * @out_strs: output array of string pointers. ++ * @sz: number of array elements to read. ++ * ++ * Search for a property in a device tree node and retrieve a list of ++ * terminated string values (pointer to data, not a copy) in that property. ++ * ++ * If @out_strs is NULL, the number of strings in the property is returned. ++ */ ++static inline int of_property_read_string_array(struct device_node *np, ++ const char *propname, const char **out_strs, ++ size_t sz) ++{ ++ return of_property_read_string_helper(np, propname, out_strs, sz, 0); ++} ++ ++/** ++ * of_property_count_strings() - Find and return the number of strings from a ++ * multiple strings property. ++ * @np: device node from which the property value is to be read. ++ * @propname: name of the property to be searched. ++ * ++ * Search for a property in a device tree node and retrieve the number of null ++ * terminated string contain in it. Returns the number of strings on ++ * success, -EINVAL if the property does not exist, -ENODATA if property ++ * does not have a value, and -EILSEQ if the string is not null-terminated ++ * within the length of the property data. ++ */ ++static inline int of_property_count_strings(struct device_node *np, ++ const char *propname) ++{ ++ return of_property_read_string_helper(np, propname, NULL, 0, 0); ++} ++ ++/** ++ * of_property_read_string_index() - Find and read a string from a multiple ++ * strings property. ++ * @np: device node from which the property value is to be read. ++ * @propname: name of the property to be searched. ++ * @index: index of the string in the list of strings ++ * @out_string: pointer to null terminated return string, modified only if ++ * return value is 0. 
++ * ++ * Search for a property in a device tree node and retrieve a null ++ * terminated string value (pointer to data, not a copy) in the list of strings ++ * contained in that property. ++ * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if ++ * property does not have a value, and -EILSEQ if the string is not ++ * null-terminated within the length of the property data. ++ * ++ * The out_string pointer is modified only if a valid string can be decoded. ++ */ ++static inline int of_property_read_string_index(struct device_node *np, ++ const char *propname, ++ int index, const char **output) ++{ ++ int rc = of_property_read_string_helper(np, propname, output, 1, index); ++ return rc < 0 ? rc : 0; ++} ++ ++/** + * of_property_read_bool - Findfrom a property + * @np: device node from which the property value is to be read. + * @propname: name of the property to be searched. +diff --git a/include/linux/oom.h b/include/linux/oom.h +index 4cd62677feb9..17f0949bd822 100644 +--- a/include/linux/oom.h ++++ b/include/linux/oom.h +@@ -50,6 +50,9 @@ static inline bool oom_task_origin(const struct task_struct *p) + extern unsigned long oom_badness(struct task_struct *p, + struct mem_cgroup *memcg, const nodemask_t *nodemask, + unsigned long totalpages); ++ ++extern int oom_kills_count(void); ++extern void note_oom_kill(void); + extern void oom_kill_process(struct task_struct *p, gfp_t gfp_mask, int order, + unsigned int points, unsigned long totalpages, + struct mem_cgroup *memcg, nodemask_t *nodemask, +diff --git a/include/linux/string.h b/include/linux/string.h +index ac889c5ea11b..0ed878d0465c 100644 +--- a/include/linux/string.h ++++ b/include/linux/string.h +@@ -129,7 +129,7 @@ int bprintf(u32 *bin_buf, size_t size, const char *fmt, ...) __printf(3, 4); + #endif + + extern ssize_t memory_read_from_buffer(void *to, size_t count, loff_t *ppos, +- const void *from, size_t available); ++ const void *from, size_t available); + + /** + * strstarts - does @str start with @prefix? +@@ -141,7 +141,8 @@ static inline bool strstarts(const char *str, const char *prefix) + return strncmp(str, prefix, strlen(prefix)) == 0; + } + +-extern size_t memweight(const void *ptr, size_t bytes); ++size_t memweight(const void *ptr, size_t bytes); ++void memzero_explicit(void *s, size_t count); + + /** + * kbasename - return the last part of a pathname. 
+diff --git a/include/linux/sunrpc/xprt.h b/include/linux/sunrpc/xprt.h +index 8097b9df6773..51009d280ac7 100644 +--- a/include/linux/sunrpc/xprt.h ++++ b/include/linux/sunrpc/xprt.h +@@ -340,6 +340,7 @@ int xs_swapper(struct rpc_xprt *xprt, int enable); + #define XPRT_CONNECTION_ABORT (7) + #define XPRT_CONNECTION_CLOSE (8) + #define XPRT_CONGESTED (9) ++#define XPRT_CONNECTION_REUSE (10) + + static inline void xprt_set_connected(struct rpc_xprt *xprt) + { +diff --git a/include/linux/usb/quirks.h b/include/linux/usb/quirks.h +index 49587dc22f5d..8b96ae2a38fe 100644 +--- a/include/linux/usb/quirks.h ++++ b/include/linux/usb/quirks.h +@@ -33,4 +33,7 @@ + /* device generates spurious wakeup, ignore remote wakeup capability */ + #define USB_QUIRK_IGNORE_REMOTE_WAKEUP 0x00000200 + ++/* device can't handle device_qualifier descriptor requests */ ++#define USB_QUIRK_DEVICE_QUALIFIER 0x00000100 ++ + #endif /* __LINUX_USB_QUIRKS_H */ +diff --git a/include/net/ipv6.h b/include/net/ipv6.h +index 9ac65781d44b..a60948d7bcea 100644 +--- a/include/net/ipv6.h ++++ b/include/net/ipv6.h +@@ -660,6 +660,8 @@ static inline int ipv6_addr_diff(const struct in6_addr *a1, const struct in6_add + return __ipv6_addr_diff(a1, a2, sizeof(struct in6_addr)); + } + ++void ipv6_proxy_select_ident(struct sk_buff *skb); ++ + int ip6_dst_hoplimit(struct dst_entry *dst); + + /* +diff --git a/include/uapi/drm/vmwgfx_drm.h b/include/uapi/drm/vmwgfx_drm.h +index 87792a5fee3b..33b739522840 100644 +--- a/include/uapi/drm/vmwgfx_drm.h ++++ b/include/uapi/drm/vmwgfx_drm.h +@@ -29,7 +29,7 @@ + #define __VMWGFX_DRM_H__ + + #ifndef __KERNEL__ +-#include ++#include + #endif + + #define DRM_VMW_MAX_SURFACE_FACES 6 +diff --git a/kernel/freezer.c b/kernel/freezer.c +index aa6a8aadb911..8f9279b9c6d7 100644 +--- a/kernel/freezer.c ++++ b/kernel/freezer.c +@@ -42,6 +42,9 @@ bool freezing_slow_path(struct task_struct *p) + if (p->flags & (PF_NOFREEZE | PF_SUSPEND_TASK)) + return false; + ++ if (test_thread_flag(TIF_MEMDIE)) ++ return false; ++ + if (pm_nosig_freezing || cgroup_freezing(p)) + return true; + +diff --git a/kernel/module.c b/kernel/module.c +index 6716a1fa618b..1d679a6c942f 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -1841,7 +1841,9 @@ static void free_module(struct module *mod) + + /* We leave it in list to prevent duplicate loads, but make sure + * that noone uses it while it's being deconstructed. */ ++ mutex_lock(&module_mutex); + mod->state = MODULE_STATE_UNFORMED; ++ mutex_unlock(&module_mutex); + + /* Remove dynamic debug info */ + ddebug_remove_module(mod->name); +diff --git a/kernel/posix-timers.c b/kernel/posix-timers.c +index 424c2d4265c9..77e6b83c0431 100644 +--- a/kernel/posix-timers.c ++++ b/kernel/posix-timers.c +@@ -634,6 +634,7 @@ SYSCALL_DEFINE3(timer_create, const clockid_t, which_clock, + goto out; + } + } else { ++ memset(&event.sigev_value, 0, sizeof(event.sigev_value)); + event.sigev_notify = SIGEV_SIGNAL; + event.sigev_signo = SIGALRM; + event.sigev_value.sival_int = new_timer->it_id; +diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c +index 37170d4dd9a6..126586a31408 100644 +--- a/kernel/power/hibernate.c ++++ b/kernel/power/hibernate.c +@@ -492,8 +492,14 @@ int hibernation_restore(int platform_mode) + error = dpm_suspend_start(PMSG_QUIESCE); + if (!error) { + error = resume_target_kernel(platform_mode); +- dpm_resume_end(PMSG_RECOVER); ++ /* ++ * The above should either succeed and jump to the new kernel, ++ * or return with an error. 
Otherwise things are just ++ * undefined, so let's be paranoid. ++ */ ++ BUG_ON(!error); + } ++ dpm_resume_end(PMSG_RECOVER); + pm_restore_gfp_mask(); + ftrace_start(); + resume_console(); +diff --git a/kernel/power/process.c b/kernel/power/process.c +index 14f9a8d4725d..f1fe7ec110bb 100644 +--- a/kernel/power/process.c ++++ b/kernel/power/process.c +@@ -107,6 +107,28 @@ static int try_to_freeze_tasks(bool user_only) + return todo ? -EBUSY : 0; + } + ++/* ++ * Returns true if all freezable tasks (except for current) are frozen already ++ */ ++static bool check_frozen_processes(void) ++{ ++ struct task_struct *g, *p; ++ bool ret = true; ++ ++ read_lock(&tasklist_lock); ++ for_each_process_thread(g, p) { ++ if (p != current && !freezer_should_skip(p) && ++ !frozen(p)) { ++ ret = false; ++ goto done; ++ } ++ } ++done: ++ read_unlock(&tasklist_lock); ++ ++ return ret; ++} ++ + /** + * freeze_processes - Signal user space processes to enter the refrigerator. + * The current thread will not be frozen. The same process that calls +@@ -117,6 +139,7 @@ static int try_to_freeze_tasks(bool user_only) + int freeze_processes(void) + { + int error; ++ int oom_kills_saved; + + error = __usermodehelper_disable(UMH_FREEZING); + if (error) +@@ -130,12 +153,27 @@ int freeze_processes(void) + + printk("Freezing user space processes ... "); + pm_freezing = true; ++ oom_kills_saved = oom_kills_count(); + error = try_to_freeze_tasks(true); + if (!error) { +- printk("done."); + __usermodehelper_set_disable_depth(UMH_DISABLED); + oom_killer_disable(); ++ ++ /* ++ * There might have been an OOM kill while we were ++ * freezing tasks and the killed task might be still ++ * on the way out so we have to double check for race. ++ */ ++ if (oom_kills_count() != oom_kills_saved && ++ !check_frozen_processes()) { ++ __usermodehelper_set_disable_depth(UMH_ENABLED); ++ printk("OOM in progress."); ++ error = -EBUSY; ++ goto done; ++ } ++ printk("done."); + } ++done: + printk("\n"); + BUG_ON(in_atomic()); + +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index 677ebad70ce1..9a3f3c4e1f5a 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -1895,6 +1895,8 @@ unsigned long to_ratio(u64 period, u64 runtime) + #ifdef CONFIG_SMP + inline struct dl_bw *dl_bw_of(int i) + { ++ rcu_lockdep_assert(rcu_read_lock_sched_held(), ++ "sched RCU must be held"); + return &cpu_rq(i)->rd->dl_bw; + } + +@@ -1903,6 +1905,8 @@ static inline int dl_bw_cpus(int i) + struct root_domain *rd = cpu_rq(i)->rd; + int cpus = 0; + ++ rcu_lockdep_assert(rcu_read_lock_sched_held(), ++ "sched RCU must be held"); + for_each_cpu_and(i, rd->span, cpu_active_mask) + cpus++; + +@@ -3937,13 +3941,14 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask) + * root_domain. 
+ */ + #ifdef CONFIG_SMP +- if (task_has_dl_policy(p)) { +- const struct cpumask *span = task_rq(p)->rd->span; +- +- if (dl_bandwidth_enabled() && !cpumask_subset(span, new_mask)) { ++ if (task_has_dl_policy(p) && dl_bandwidth_enabled()) { ++ rcu_read_lock(); ++ if (!cpumask_subset(task_rq(p)->rd->span, new_mask)) { + retval = -EBUSY; ++ rcu_read_unlock(); + goto out_unlock; + } ++ rcu_read_unlock(); + } + #endif + again: +@@ -7458,6 +7463,8 @@ static int sched_dl_global_constraints(void) + int cpu, ret = 0; + unsigned long flags; + ++ rcu_read_lock(); ++ + /* + * Here we want to check the bandwidth not being set to some + * value smaller than the currently allocated bandwidth in +@@ -7479,6 +7486,8 @@ static int sched_dl_global_constraints(void) + break; + } + ++ rcu_read_unlock(); ++ + return ret; + } + +@@ -7494,6 +7503,7 @@ static void sched_dl_do_global(void) + if (global_rt_runtime() != RUNTIME_INF) + new_bw = to_ratio(global_rt_period(), global_rt_runtime()); + ++ rcu_read_lock(); + /* + * FIXME: As above... + */ +@@ -7504,6 +7514,7 @@ static void sched_dl_do_global(void) + dl_b->bw = new_bw; + raw_spin_unlock_irqrestore(&dl_b->lock, flags); + } ++ rcu_read_unlock(); + } + + static int sched_rt_global_validate(void) +diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c +index 759d5e004517..7e3cd7aaec83 100644 +--- a/kernel/trace/trace_syscalls.c ++++ b/kernel/trace/trace_syscalls.c +@@ -313,7 +313,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id) + int size; + + syscall_nr = trace_get_syscall_nr(current, regs); +- if (syscall_nr < 0) ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + return; + + /* Here we're inside tp handler's rcu_read_lock_sched (__DO_TRACE) */ +@@ -360,7 +360,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret) + int syscall_nr; + + syscall_nr = trace_get_syscall_nr(current, regs); +- if (syscall_nr < 0) ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + return; + + /* Here we're inside tp handler's rcu_read_lock_sched (__DO_TRACE()) */ +@@ -567,7 +567,7 @@ static void perf_syscall_enter(void *ignore, struct pt_regs *regs, long id) + int size; + + syscall_nr = trace_get_syscall_nr(current, regs); +- if (syscall_nr < 0) ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + return; + if (!test_bit(syscall_nr, enabled_perf_enter_syscalls)) + return; +@@ -641,7 +641,7 @@ static void perf_syscall_exit(void *ignore, struct pt_regs *regs, long ret) + int size; + + syscall_nr = trace_get_syscall_nr(current, regs); +- if (syscall_nr < 0) ++ if (syscall_nr < 0 || syscall_nr >= NR_syscalls) + return; + if (!test_bit(syscall_nr, enabled_perf_exit_syscalls)) + return; +diff --git a/lib/bitmap.c b/lib/bitmap.c +index 06f7e4fe8d2d..e5c4ebe586ba 100644 +--- a/lib/bitmap.c ++++ b/lib/bitmap.c +@@ -131,7 +131,9 @@ void __bitmap_shift_right(unsigned long *dst, + lower = src[off + k]; + if (left && off + k == lim - 1) + lower &= mask; +- dst[k] = upper << (BITS_PER_LONG - rem) | lower >> rem; ++ dst[k] = lower >> rem; ++ if (rem) ++ dst[k] |= upper << (BITS_PER_LONG - rem); + if (left && k == lim - 1) + dst[k] &= mask; + } +@@ -172,7 +174,9 @@ void __bitmap_shift_left(unsigned long *dst, + upper = src[k]; + if (left && k == lim - 1) + upper &= (1UL << left) - 1; +- dst[k + off] = lower >> (BITS_PER_LONG - rem) | upper << rem; ++ dst[k + off] = upper << rem; ++ if (rem) ++ dst[k + off] |= lower >> (BITS_PER_LONG - rem); + if (left && k + off == lim - 1) + dst[k + off] &= (1UL << left) - 1; + } 
+diff --git a/lib/string.c b/lib/string.c +index e5878de4f101..43d0781daf47 100644 +--- a/lib/string.c ++++ b/lib/string.c +@@ -586,6 +586,22 @@ void *memset(void *s, int c, size_t count) + EXPORT_SYMBOL(memset); + #endif + ++/** ++ * memzero_explicit - Fill a region of memory (e.g. sensitive ++ * keying data) with 0s. ++ * @s: Pointer to the start of the area. ++ * @count: The size of the area. ++ * ++ * memzero_explicit() doesn't need an arch-specific version as ++ * it just invokes the one of memset() implicitly. ++ */ ++void memzero_explicit(void *s, size_t count) ++{ ++ memset(s, 0, count); ++ OPTIMIZER_HIDE_VAR(s); ++} ++EXPORT_SYMBOL(memzero_explicit); ++ + #ifndef __HAVE_ARCH_MEMCPY + /** + * memcpy - Copy one area of memory to another +diff --git a/mm/huge_memory.c b/mm/huge_memory.c +index 718bfa16a36f..331faa5c0d5e 100644 +--- a/mm/huge_memory.c ++++ b/mm/huge_memory.c +@@ -199,7 +199,7 @@ retry: + preempt_disable(); + if (cmpxchg(&huge_zero_page, NULL, zero_page)) { + preempt_enable(); +- __free_page(zero_page); ++ __free_pages(zero_page, compound_order(zero_page)); + goto retry; + } + +@@ -231,7 +231,7 @@ static unsigned long shrink_huge_zero_page_scan(struct shrinker *shrink, + if (atomic_cmpxchg(&huge_zero_refcount, 1, 0) == 1) { + struct page *zero_page = xchg(&huge_zero_page, NULL); + BUG_ON(zero_page == NULL); +- __free_page(zero_page); ++ __free_pages(zero_page, compound_order(zero_page)); + return HPAGE_PMD_NR; + } + +diff --git a/mm/memcontrol.c b/mm/memcontrol.c +index 9b35da28b587..b58d4fbe6c48 100644 +--- a/mm/memcontrol.c ++++ b/mm/memcontrol.c +@@ -292,6 +292,9 @@ struct mem_cgroup { + /* vmpressure notifications */ + struct vmpressure vmpressure; + ++ /* css_online() has been completed */ ++ int initialized; ++ + /* + * the counter to account for mem+swap usage. + */ +@@ -1127,9 +1130,21 @@ skip_node: + * skipping css reference should be safe. + */ + if (next_css) { +- if ((next_css == &root->css) || +- ((next_css->flags & CSS_ONLINE) && css_tryget(next_css))) +- return mem_cgroup_from_css(next_css); ++ struct mem_cgroup *memcg = mem_cgroup_from_css(next_css); ++ ++ if (next_css == &root->css) ++ return memcg; ++ ++ if (css_tryget(next_css)) { ++ /* ++ * Make sure the memcg is initialized: ++ * mem_cgroup_css_online() orders the the ++ * initialization against setting the flag. ++ */ ++ if (smp_load_acquire(&memcg->initialized)) ++ return memcg; ++ css_put(next_css); ++ } + + prev_css = next_css; + goto skip_node; +@@ -6538,6 +6553,7 @@ mem_cgroup_css_online(struct cgroup_subsys_state *css) + { + struct mem_cgroup *memcg = mem_cgroup_from_css(css); + struct mem_cgroup *parent = mem_cgroup_from_css(css_parent(css)); ++ int ret; + + if (css->cgroup->id > MEM_CGROUP_ID_MAX) + return -ENOSPC; +@@ -6574,7 +6590,18 @@ mem_cgroup_css_online(struct cgroup_subsys_state *css) + } + mutex_unlock(&memcg_create_mutex); + +- return memcg_init_kmem(memcg, &mem_cgroup_subsys); ++ ret = memcg_init_kmem(memcg, &mem_cgroup_subsys); ++ if (ret) ++ return ret; ++ ++ /* ++ * Make sure the memcg is initialized: mem_cgroup_iter() ++ * orders reading memcg->initialized against its callers ++ * reading the memcg members. 
++ */ ++ smp_store_release(&memcg->initialized, 1); ++ ++ return 0; + } + + /* +diff --git a/mm/oom_kill.c b/mm/oom_kill.c +index 3291e82d4352..171c00f2e495 100644 +--- a/mm/oom_kill.c ++++ b/mm/oom_kill.c +@@ -406,6 +406,23 @@ static void dump_header(struct task_struct *p, gfp_t gfp_mask, int order, + dump_tasks(memcg, nodemask); + } + ++/* ++ * Number of OOM killer invocations (including memcg OOM killer). ++ * Primarily used by PM freezer to check for potential races with ++ * OOM killed frozen task. ++ */ ++static atomic_t oom_kills = ATOMIC_INIT(0); ++ ++int oom_kills_count(void) ++{ ++ return atomic_read(&oom_kills); ++} ++ ++void note_oom_kill(void) ++{ ++ atomic_inc(&oom_kills); ++} ++ + #define K(x) ((x) << (PAGE_SHIFT-10)) + /* + * Must be called while holding a reference to p, which will be released upon +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index ff0f6b13f32f..7b2611a055a7 100644 +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -1957,7 +1957,7 @@ zonelist_scan: + if (alloc_flags & ALLOC_FAIR) { + if (!zone_local(preferred_zone, zone)) + continue; +- if (zone_page_state(zone, NR_ALLOC_BATCH) <= 0) ++ if (atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH]) <= 0) + continue; + } + /* +@@ -2196,6 +2196,14 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order, + } + + /* ++ * PM-freezer should be notified that there might be an OOM killer on ++ * its way to kill and wake somebody up. This is too early and we might ++ * end up not killing anything but false positives are acceptable. ++ * See freeze_processes. ++ */ ++ note_oom_kill(); ++ ++ /* + * Go through the zonelist yet one more time, keep very high watermark + * here, this is only to catch a parallel oom killing, we must fail if + * we're still under heavy pressure. +@@ -5662,9 +5670,8 @@ static void __setup_per_zone_wmarks(void) + zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1); + + __mod_zone_page_state(zone, NR_ALLOC_BATCH, +- high_wmark_pages(zone) - +- low_wmark_pages(zone) - +- zone_page_state(zone, NR_ALLOC_BATCH)); ++ high_wmark_pages(zone) - low_wmark_pages(zone) - ++ atomic_long_read(&zone->vm_stat[NR_ALLOC_BATCH])); + + setup_zone_migrate_reserve(zone); + spin_unlock_irqrestore(&zone->lock, flags); +diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c +index cfd162882c00..0e9a319d5f8d 100644 +--- a/mm/page_cgroup.c ++++ b/mm/page_cgroup.c +@@ -171,6 +171,7 @@ static void free_page_cgroup(void *addr) + sizeof(struct page_cgroup) * PAGES_PER_SECTION; + + BUG_ON(PageReserved(page)); ++ kmemleak_free(addr); + free_pages_exact(addr, table_size); + } + } +diff --git a/mm/percpu.c b/mm/percpu.c +index 8cd4308471c3..a2a54a85f691 100644 +--- a/mm/percpu.c ++++ b/mm/percpu.c +@@ -1917,8 +1917,6 @@ void __init setup_per_cpu_areas(void) + + if (pcpu_setup_first_chunk(ai, fc) < 0) + panic("Failed to initialize percpu areas."); +- +- pcpu_free_alloc_info(ai); + } + + #endif /* CONFIG_SMP */ +diff --git a/mm/truncate.c b/mm/truncate.c +index 353b683afd6e..ac18edc30649 100644 +--- a/mm/truncate.c ++++ b/mm/truncate.c +@@ -20,6 +20,7 @@ + #include /* grr. 
try_to_release_page, + do_invalidatepage */ + #include ++#include + #include "internal.h" + + +@@ -613,12 +614,67 @@ EXPORT_SYMBOL(truncate_pagecache); + */ + void truncate_setsize(struct inode *inode, loff_t newsize) + { ++ loff_t oldsize = inode->i_size; ++ + i_size_write(inode, newsize); ++ if (newsize > oldsize) ++ pagecache_isize_extended(inode, oldsize, newsize); + truncate_pagecache(inode, newsize); + } + EXPORT_SYMBOL(truncate_setsize); + + /** ++ * pagecache_isize_extended - update pagecache after extension of i_size ++ * @inode: inode for which i_size was extended ++ * @from: original inode size ++ * @to: new inode size ++ * ++ * Handle extension of inode size either caused by extending truncate or by ++ * write starting after current i_size. We mark the page straddling current ++ * i_size RO so that page_mkwrite() is called on the nearest write access to ++ * the page. This way filesystem can be sure that page_mkwrite() is called on ++ * the page before user writes to the page via mmap after the i_size has been ++ * changed. ++ * ++ * The function must be called after i_size is updated so that page fault ++ * coming after we unlock the page will already see the new i_size. ++ * The function must be called while we still hold i_mutex - this not only ++ * makes sure i_size is stable but also that userspace cannot observe new ++ * i_size value before we are prepared to store mmap writes at new inode size. ++ */ ++void pagecache_isize_extended(struct inode *inode, loff_t from, loff_t to) ++{ ++ int bsize = 1 << inode->i_blkbits; ++ loff_t rounded_from; ++ struct page *page; ++ pgoff_t index; ++ ++ WARN_ON(to > inode->i_size); ++ ++ if (from >= to || bsize == PAGE_CACHE_SIZE) ++ return; ++ /* Page straddling @from will not have any hole block created? */ ++ rounded_from = round_up(from, bsize); ++ if (to <= rounded_from || !(rounded_from & (PAGE_CACHE_SIZE - 1))) ++ return; ++ ++ index = from >> PAGE_CACHE_SHIFT; ++ page = find_lock_page(inode->i_mapping, index); ++ /* Page not cached? Nothing to do */ ++ if (!page) ++ return; ++ /* ++ * See clear_page_dirty_for_io() for details why set_page_dirty() ++ * is needed. ++ */ ++ if (page_mkclean(page)) ++ set_page_dirty(page); ++ unlock_page(page); ++ page_cache_release(page); ++} ++EXPORT_SYMBOL(pagecache_isize_extended); ++ ++/** + * truncate_pagecache_range - unmap and remove pagecache that is hole-punched + * @inode: inode + * @lstart: offset of beginning of hole +diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c +index 0a31298737ac..2e87eecec8f6 100644 +--- a/net/ceph/messenger.c ++++ b/net/ceph/messenger.c +@@ -291,7 +291,11 @@ int ceph_msgr_init(void) + if (ceph_msgr_slab_init()) + return -ENOMEM; + +- ceph_msgr_wq = alloc_workqueue("ceph-msgr", 0, 0); ++ /* ++ * The number of active work items is limited by the number of ++ * connections, so leave @max_active at default. 
++ */ ++ ceph_msgr_wq = alloc_workqueue("ceph-msgr", WQ_MEM_RECLAIM, 0); + if (ceph_msgr_wq) + return 0; + +diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c +index 9d43468722ed..017fa5e17594 100644 +--- a/net/ipv4/fib_semantics.c ++++ b/net/ipv4/fib_semantics.c +@@ -535,7 +535,7 @@ int fib_nh_match(struct fib_config *cfg, struct fib_info *fi) + return 1; + + attrlen = rtnh_attrlen(rtnh); +- if (attrlen < 0) { ++ if (attrlen > 0) { + struct nlattr *nla, *attrs = rtnh_attrs(rtnh); + + nla = nla_find(attrs, attrlen, RTA_GATEWAY); +diff --git a/net/ipv4/gre_offload.c b/net/ipv4/gre_offload.c +index 2d24f293f977..8c8493ea6b1c 100644 +--- a/net/ipv4/gre_offload.c ++++ b/net/ipv4/gre_offload.c +@@ -50,7 +50,7 @@ static struct sk_buff *gre_gso_segment(struct sk_buff *skb, + + greh = (struct gre_base_hdr *)skb_transport_header(skb); + +- ghl = skb_inner_network_header(skb) - skb_transport_header(skb); ++ ghl = skb_inner_mac_header(skb) - skb_transport_header(skb); + if (unlikely(ghl < sizeof(*greh))) + goto out; + +diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c +index ed88d781248f..844323b6cfb9 100644 +--- a/net/ipv4/ip_output.c ++++ b/net/ipv4/ip_output.c +@@ -1487,6 +1487,7 @@ void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, + struct sk_buff *nskb; + struct sock *sk; + struct inet_sock *inet; ++ int err; + + if (ip_options_echo(&replyopts.opt.opt, skb)) + return; +@@ -1525,8 +1526,13 @@ void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, + sock_net_set(sk, net); + __skb_queue_head_init(&sk->sk_write_queue); + sk->sk_sndbuf = sysctl_wmem_default; +- ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base, len, 0, +- &ipc, &rt, MSG_DONTWAIT); ++ err = ip_append_data(sk, &fl4, ip_reply_glue_bits, arg->iov->iov_base, ++ len, 0, &ipc, &rt, MSG_DONTWAIT); ++ if (unlikely(err)) { ++ ip_flush_pending_frames(sk); ++ goto out; ++ } ++ + nskb = skb_peek(&sk->sk_write_queue); + if (nskb) { + if (arg->csumoffset >= 0) +@@ -1538,7 +1544,7 @@ void ip_send_unicast_reply(struct net *net, struct sk_buff *skb, __be32 daddr, + skb_set_queue_mapping(nskb, skb_get_queue_mapping(skb)); + ip_push_pending_frames(sk, &fl4); + } +- ++out: + put_cpu_var(unicast_sock); + + ip_rt_put(rt); +diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c +index 65b664d30fa1..791a419f0699 100644 +--- a/net/ipv4/ip_tunnel_core.c ++++ b/net/ipv4/ip_tunnel_core.c +@@ -91,11 +91,12 @@ int iptunnel_pull_header(struct sk_buff *skb, int hdr_len, __be16 inner_proto) + skb_pull_rcsum(skb, hdr_len); + + if (inner_proto == htons(ETH_P_TEB)) { +- struct ethhdr *eh = (struct ethhdr *)skb->data; ++ struct ethhdr *eh; + + if (unlikely(!pskb_may_pull(skb, ETH_HLEN))) + return -ENOMEM; + ++ eh = (struct ethhdr *)skb->data; + if (likely(ntohs(eh->h_proto) >= ETH_P_802_3_MIN)) + skb->protocol = eh->h_proto; + else +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index f7d71ec72a47..29d240b87af1 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -2954,61 +2954,42 @@ EXPORT_SYMBOL(compat_tcp_getsockopt); + #endif + + #ifdef CONFIG_TCP_MD5SIG +-static struct tcp_md5sig_pool __percpu *tcp_md5sig_pool __read_mostly; ++static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool); + static DEFINE_MUTEX(tcp_md5sig_mutex); +- +-static void __tcp_free_md5sig_pool(struct tcp_md5sig_pool __percpu *pool) +-{ +- int cpu; +- +- for_each_possible_cpu(cpu) { +- struct tcp_md5sig_pool *p = per_cpu_ptr(pool, cpu); +- +- if (p->md5_desc.tfm) +- 
crypto_free_hash(p->md5_desc.tfm); +- } +- free_percpu(pool); +-} ++static bool tcp_md5sig_pool_populated = false; + + static void __tcp_alloc_md5sig_pool(void) + { + int cpu; +- struct tcp_md5sig_pool __percpu *pool; +- +- pool = alloc_percpu(struct tcp_md5sig_pool); +- if (!pool) +- return; + + for_each_possible_cpu(cpu) { +- struct crypto_hash *hash; +- +- hash = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC); +- if (IS_ERR_OR_NULL(hash)) +- goto out_free; ++ if (!per_cpu(tcp_md5sig_pool, cpu).md5_desc.tfm) { ++ struct crypto_hash *hash; + +- per_cpu_ptr(pool, cpu)->md5_desc.tfm = hash; ++ hash = crypto_alloc_hash("md5", 0, CRYPTO_ALG_ASYNC); ++ if (IS_ERR_OR_NULL(hash)) ++ return; ++ per_cpu(tcp_md5sig_pool, cpu).md5_desc.tfm = hash; ++ } + } +- /* before setting tcp_md5sig_pool, we must commit all writes +- * to memory. See ACCESS_ONCE() in tcp_get_md5sig_pool() ++ /* before setting tcp_md5sig_pool_populated, we must commit all writes ++ * to memory. See smp_rmb() in tcp_get_md5sig_pool() + */ + smp_wmb(); +- tcp_md5sig_pool = pool; +- return; +-out_free: +- __tcp_free_md5sig_pool(pool); ++ tcp_md5sig_pool_populated = true; + } + + bool tcp_alloc_md5sig_pool(void) + { +- if (unlikely(!tcp_md5sig_pool)) { ++ if (unlikely(!tcp_md5sig_pool_populated)) { + mutex_lock(&tcp_md5sig_mutex); + +- if (!tcp_md5sig_pool) ++ if (!tcp_md5sig_pool_populated) + __tcp_alloc_md5sig_pool(); + + mutex_unlock(&tcp_md5sig_mutex); + } +- return tcp_md5sig_pool != NULL; ++ return tcp_md5sig_pool_populated; + } + EXPORT_SYMBOL(tcp_alloc_md5sig_pool); + +@@ -3022,13 +3003,13 @@ EXPORT_SYMBOL(tcp_alloc_md5sig_pool); + */ + struct tcp_md5sig_pool *tcp_get_md5sig_pool(void) + { +- struct tcp_md5sig_pool __percpu *p; +- + local_bh_disable(); +- p = ACCESS_ONCE(tcp_md5sig_pool); +- if (p) +- return __this_cpu_ptr(p); + ++ if (tcp_md5sig_pool_populated) { ++ /* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */ ++ smp_rmb(); ++ return this_cpu_ptr(&tcp_md5sig_pool); ++ } + local_bh_enable(); + return NULL; + } +diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c +index 798eb0f79078..ae4a06be14df 100644 +--- a/net/ipv6/output_core.c ++++ b/net/ipv6/output_core.c +@@ -3,10 +3,43 @@ + * not configured or static. These functions are needed by GSO/GRO implementation. + */ + #include ++#include + #include + #include + #include + ++/* This function exists only for tap drivers that must support broken ++ * clients requesting UFO without specifying an IPv6 fragment ID. ++ * ++ * This is similar to ipv6_select_ident() but we use an independent hash ++ * seed to limit information leakage. ++ * ++ * The network header must be set before calling this. 
++ */ ++void ipv6_proxy_select_ident(struct sk_buff *skb) ++{ ++ static u32 ip6_proxy_idents_hashrnd __read_mostly; ++ struct in6_addr buf[2]; ++ struct in6_addr *addrs; ++ u32 hash, id; ++ ++ addrs = skb_header_pointer(skb, ++ skb_network_offset(skb) + ++ offsetof(struct ipv6hdr, saddr), ++ sizeof(buf), buf); ++ if (!addrs) ++ return; ++ ++ net_get_random_once(&ip6_proxy_idents_hashrnd, ++ sizeof(ip6_proxy_idents_hashrnd)); ++ ++ hash = __ipv6_addr_jhash(&addrs[1], ip6_proxy_idents_hashrnd); ++ hash = __ipv6_addr_jhash(&addrs[0], hash); ++ ++ id = ip_idents_reserve(hash, 1); ++ skb_shinfo(skb)->ip6_frag_id = htonl(id); ++} ++EXPORT_SYMBOL_GPL(ipv6_proxy_select_ident); + + int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr) + { +diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c +index 22b223f13c9f..74350c3863b8 100644 +--- a/net/mac80211/rate.c ++++ b/net/mac80211/rate.c +@@ -462,7 +462,7 @@ static void rate_fixup_ratelist(struct ieee80211_vif *vif, + */ + if (!(rates[0].flags & IEEE80211_TX_RC_MCS)) { + u32 basic_rates = vif->bss_conf.basic_rates; +- s8 baserate = basic_rates ? ffs(basic_rates - 1) : 0; ++ s8 baserate = basic_rates ? ffs(basic_rates) - 1 : 0; + + rate = &sband->bitrates[rates[0].idx]; + +diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c +index c375d731587f..7c177bc43806 100644 +--- a/net/netlink/af_netlink.c ++++ b/net/netlink/af_netlink.c +@@ -707,7 +707,7 @@ static int netlink_mmap_sendmsg(struct sock *sk, struct msghdr *msg, + * after validation, the socket and the ring may only be used by a + * single process, otherwise we fall back to copying. + */ +- if (atomic_long_read(&sk->sk_socket->file->f_count) > 2 || ++ if (atomic_long_read(&sk->sk_socket->file->f_count) > 1 || + atomic_read(&nlk->mapped) > 1) + excl = false; + +diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c +index 3ea5cda787c7..5ff8b87c3d04 100644 +--- a/net/sunrpc/clnt.c ++++ b/net/sunrpc/clnt.c +@@ -533,6 +533,8 @@ struct rpc_clnt *rpc_create(struct rpc_create_args *args) + + if (args->flags & RPC_CLNT_CREATE_AUTOBIND) + clnt->cl_autobind = 1; ++ if (args->flags & RPC_CLNT_CREATE_NO_RETRANS_TIMEOUT) ++ clnt->cl_noretranstimeo = 1; + if (args->flags & RPC_CLNT_CREATE_DISCRTRY) + clnt->cl_discrtry = 1; + if (!(args->flags & RPC_CLNT_CREATE_QUIET)) +@@ -571,6 +573,7 @@ static struct rpc_clnt *__rpc_clone_client(struct rpc_create_args *args, + /* Turn off autobind on clones */ + new->cl_autobind = 0; + new->cl_softrtry = clnt->cl_softrtry; ++ new->cl_noretranstimeo = clnt->cl_noretranstimeo; + new->cl_discrtry = clnt->cl_discrtry; + new->cl_chatty = clnt->cl_chatty; + return new; +diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c +index 0addefca8e77..41c2f9d7a148 100644 +--- a/net/sunrpc/xprtsock.c ++++ b/net/sunrpc/xprtsock.c +@@ -842,6 +842,8 @@ static void xs_error_report(struct sock *sk) + dprintk("RPC: xs_error_report client %p, error=%d...\n", + xprt, -err); + trace_rpc_socket_error(xprt, sk->sk_socket, err); ++ if (test_bit(XPRT_CONNECTION_REUSE, &xprt->state)) ++ goto out; + xprt_wake_pending_tasks(xprt, err); + out: + read_unlock_bh(&sk->sk_callback_lock); +@@ -2251,7 +2253,9 @@ static void xs_tcp_setup_socket(struct work_struct *work) + abort_and_exit = test_and_clear_bit(XPRT_CONNECTION_ABORT, + &xprt->state); + /* "close" the socket, preserving the local port */ ++ set_bit(XPRT_CONNECTION_REUSE, &xprt->state); + xs_tcp_reuse_connection(transport); ++ clear_bit(XPRT_CONNECTION_REUSE, &xprt->state); + + if (abort_and_exit) + goto out_eagain; +diff --git 
a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c +index 3c5cbb977254..7e71e066198f 100644 +--- a/security/integrity/evm/evm_main.c ++++ b/security/integrity/evm/evm_main.c +@@ -269,6 +269,13 @@ static int evm_protect_xattr(struct dentry *dentry, const char *xattr_name, + goto out; + } + evm_status = evm_verify_current_integrity(dentry); ++ if (evm_status == INTEGRITY_NOXATTRS) { ++ struct integrity_iint_cache *iint; ++ ++ iint = integrity_iint_find(dentry->d_inode); ++ if (iint && (iint->flags & IMA_NEW_FILE)) ++ return 0; ++ } + out: + if (evm_status != INTEGRITY_PASS) + integrity_audit_msg(AUDIT_INTEGRITY_METADATA, dentry->d_inode, +@@ -296,9 +303,12 @@ int evm_inode_setxattr(struct dentry *dentry, const char *xattr_name, + { + const struct evm_ima_xattr_data *xattr_data = xattr_value; + +- if ((strcmp(xattr_name, XATTR_NAME_EVM) == 0) +- && (xattr_data->type == EVM_XATTR_HMAC)) +- return -EPERM; ++ if (strcmp(xattr_name, XATTR_NAME_EVM) == 0) { ++ if (!xattr_value_len) ++ return -EINVAL; ++ if (xattr_data->type != EVM_IMA_XATTR_DIGSIG) ++ return -EPERM; ++ } + return evm_protect_xattr(dentry, xattr_name, xattr_value, + xattr_value_len); + } +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c +index e294b86c8d88..47b5c69e4605 100644 +--- a/security/selinux/hooks.c ++++ b/security/selinux/hooks.c +@@ -470,6 +470,7 @@ next_inode: + list_entry(sbsec->isec_head.next, + struct inode_security_struct, list); + struct inode *inode = isec->inode; ++ list_del_init(&isec->list); + spin_unlock(&sbsec->isec_lock); + inode = igrab(inode); + if (inode) { +@@ -478,7 +479,6 @@ next_inode: + iput(inode); + } + spin_lock(&sbsec->isec_lock); +- list_del_init(&isec->list); + goto next_inode; + } + spin_unlock(&sbsec->isec_lock); +diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c +index af49721ba0e3..c4ac3c1e19af 100644 +--- a/sound/core/pcm_compat.c ++++ b/sound/core/pcm_compat.c +@@ -206,6 +206,8 @@ static int snd_pcm_status_user_compat(struct snd_pcm_substream *substream, + if (err < 0) + return err; + ++ if (clear_user(src, sizeof(*src))) ++ return -EFAULT; + if (put_user(status.state, &src->state) || + compat_put_timespec(&status.trigger_tstamp, &src->trigger_tstamp) || + compat_put_timespec(&status.tstamp, &src->tstamp) || +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 7ec91424ba22..103e85a13f35 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -4027,6 +4027,9 @@ static DEFINE_PCI_DEVICE_TABLE(azx_ids) = { + /* BayTrail */ + { PCI_DEVICE(0x8086, 0x0f04), + .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH_NOPM }, ++ /* Braswell */ ++ { PCI_DEVICE(0x8086, 0x2284), ++ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, + /* ICH */ + { PCI_DEVICE(0x8086, 0x2668), + .driver_data = AZX_DRIVER_ICH | AZX_DCAPS_OLD_SSYNC | +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c +index 8253b48a435b..611110a3f1a4 100644 +--- a/sound/pci/hda/patch_hdmi.c ++++ b/sound/pci/hda/patch_hdmi.c +@@ -3317,6 +3317,7 @@ static const struct hda_codec_preset snd_hda_preset_hdmi[] = { + { .id = 0x80862808, .name = "Broadwell HDMI", .patch = patch_generic_hdmi }, + { .id = 0x80862880, .name = "CedarTrail HDMI", .patch = patch_generic_hdmi }, + { .id = 0x80862882, .name = "Valleyview2 HDMI", .patch = patch_generic_hdmi }, ++{ .id = 0x80862883, .name = "Braswell HDMI", .patch = patch_generic_hdmi }, + { .id = 0x808629fb, .name = "Crestline HDMI", .patch = patch_generic_hdmi }, + {} /* terminator */ + }; +@@ 
-3373,6 +3374,7 @@ MODULE_ALIAS("snd-hda-codec-id:80862807"); + MODULE_ALIAS("snd-hda-codec-id:80862808"); + MODULE_ALIAS("snd-hda-codec-id:80862880"); + MODULE_ALIAS("snd-hda-codec-id:80862882"); ++MODULE_ALIAS("snd-hda-codec-id:80862883"); + MODULE_ALIAS("snd-hda-codec-id:808629fb"); + + MODULE_LICENSE("GPL"); +diff --git a/sound/soc/codecs/tlv320aic3x.c b/sound/soc/codecs/tlv320aic3x.c +index eb241c6571a9..fd53d37e1181 100644 +--- a/sound/soc/codecs/tlv320aic3x.c ++++ b/sound/soc/codecs/tlv320aic3x.c +@@ -1121,6 +1121,7 @@ static int aic3x_regulator_event(struct notifier_block *nb, + static int aic3x_set_power(struct snd_soc_codec *codec, int power) + { + struct aic3x_priv *aic3x = snd_soc_codec_get_drvdata(codec); ++ unsigned int pll_c, pll_d; + int ret; + + if (power) { +@@ -1138,6 +1139,18 @@ static int aic3x_set_power(struct snd_soc_codec *codec, int power) + /* Sync reg_cache with the hardware */ + regcache_cache_only(aic3x->regmap, false); + regcache_sync(aic3x->regmap); ++ ++ /* Rewrite paired PLL D registers in case cached sync skipped ++ * writing one of them and thus caused other one also not ++ * being written ++ */ ++ pll_c = snd_soc_read(codec, AIC3X_PLL_PROGC_REG); ++ pll_d = snd_soc_read(codec, AIC3X_PLL_PROGD_REG); ++ if (pll_c == aic3x_reg[AIC3X_PLL_PROGC_REG].def || ++ pll_d == aic3x_reg[AIC3X_PLL_PROGD_REG].def) { ++ snd_soc_write(codec, AIC3X_PLL_PROGC_REG, pll_c); ++ snd_soc_write(codec, AIC3X_PLL_PROGD_REG, pll_d); ++ } + } else { + /* + * Do soft reset to this codec instance in order to clear +diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c +index 731d47b64daa..e4da224d7253 100644 +--- a/sound/soc/soc-dapm.c ++++ b/sound/soc/soc-dapm.c +@@ -689,9 +689,9 @@ static int dapm_create_or_share_mixmux_kcontrol(struct snd_soc_dapm_widget *w, + int shared; + struct snd_kcontrol *kcontrol; + bool wname_in_long_name, kcname_in_long_name; +- char *long_name; ++ char *long_name = NULL; + const char *name; +- int ret; ++ int ret = 0; + + if (dapm->codec) + prefix = dapm->codec->name_prefix; +@@ -756,15 +756,17 @@ static int dapm_create_or_share_mixmux_kcontrol(struct snd_soc_dapm_widget *w, + + kcontrol = snd_soc_cnew(&w->kcontrol_news[kci], NULL, name, + prefix); +- kfree(long_name); +- if (!kcontrol) +- return -ENOMEM; ++ if (!kcontrol) { ++ ret = -ENOMEM; ++ goto exit_free; ++ } ++ + kcontrol->private_free = dapm_kcontrol_free; + + ret = dapm_kcontrol_data_alloc(w, kcontrol); + if (ret) { + snd_ctl_free_one(kcontrol); +- return ret; ++ goto exit_free; + } + + ret = snd_ctl_add(card, kcontrol); +@@ -772,17 +774,18 @@ static int dapm_create_or_share_mixmux_kcontrol(struct snd_soc_dapm_widget *w, + dev_err(dapm->dev, + "ASoC: failed to add widget %s dapm kcontrol %s: %d\n", + w->name, name, ret); +- return ret; ++ goto exit_free; + } + } + + ret = dapm_kcontrol_add_widget(kcontrol, w); +- if (ret) +- return ret; ++ if (ret == 0) ++ w->kcontrols[kci] = kcontrol; + +- w->kcontrols[kci] = kcontrol; ++exit_free: ++ kfree(long_name); + +- return 0; ++ return ret; + } + + /* create new dapm mixer control */ +diff --git a/sound/usb/card.c b/sound/usb/card.c +index af1956042c9e..ab433a02dbf1 100644 +--- a/sound/usb/card.c ++++ b/sound/usb/card.c +@@ -586,18 +586,19 @@ static void snd_usb_audio_disconnect(struct usb_device *dev, + { + struct snd_card *card; + struct list_head *p; ++ bool was_shutdown; + + if (chip == (void *)-1L) + return; + + card = chip->card; + down_write(&chip->shutdown_rwsem); ++ was_shutdown = chip->shutdown; + chip->shutdown = 1; + 
up_write(&chip->shutdown_rwsem); + + mutex_lock(®ister_mutex); +- chip->num_interfaces--; +- if (chip->num_interfaces <= 0) { ++ if (!was_shutdown) { + struct snd_usb_endpoint *ep; + + snd_card_disconnect(card); +@@ -617,6 +618,10 @@ static void snd_usb_audio_disconnect(struct usb_device *dev, + list_for_each(p, &chip->mixer_list) { + snd_usb_mixer_disconnect(p); + } ++ } ++ ++ chip->num_interfaces--; ++ if (chip->num_interfaces <= 0) { + usb_chip[chip->index] = NULL; + mutex_unlock(®ister_mutex); + snd_card_free_when_closed(card); +diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c +index 714b94932312..1f0dc1e5f1f0 100644 +--- a/virt/kvm/iommu.c ++++ b/virt/kvm/iommu.c +@@ -43,13 +43,13 @@ static void kvm_iommu_put_pages(struct kvm *kvm, + gfn_t base_gfn, unsigned long npages); + + static pfn_t kvm_pin_pages(struct kvm_memory_slot *slot, gfn_t gfn, +- unsigned long size) ++ unsigned long npages) + { + gfn_t end_gfn; + pfn_t pfn; + + pfn = gfn_to_pfn_memslot(slot, gfn); +- end_gfn = gfn + (size >> PAGE_SHIFT); ++ end_gfn = gfn + npages; + gfn += 1; + + if (is_error_noslot_pfn(pfn)) +@@ -119,7 +119,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) + * Pin all pages we are about to map in memory. This is + * important because we unmap and unpin in 4kb steps later. + */ +- pfn = kvm_pin_pages(slot, gfn, page_size); ++ pfn = kvm_pin_pages(slot, gfn, page_size >> PAGE_SHIFT); + if (is_error_noslot_pfn(pfn)) { + gfn += 1; + continue; +@@ -131,7 +131,7 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot) + if (r) { + printk(KERN_ERR "kvm_iommu_map_address:" + "iommu failed to map pfn=%llx\n", pfn); +- kvm_unpin_pages(kvm, pfn, page_size); ++ kvm_unpin_pages(kvm, pfn, page_size >> PAGE_SHIFT); + goto unmap_pages; + } +