From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 0ACE3138334 for ; Wed, 20 Feb 2019 11:20:34 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 067D4E0969; Wed, 20 Feb 2019 11:20:33 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id A1C06E0969 for ; Wed, 20 Feb 2019 11:20:32 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id E6C36335CCF for ; Wed, 20 Feb 2019 11:20:30 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 5FC5744B for ; Wed, 20 Feb 2019 11:20:29 +0000 (UTC)
From: "Mike Pagano" 
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" 
Message-ID: <1550661602.101885640cae489e3732b5d3f803620a1c092c4a.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.20 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1010_linux-4.20.11.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 101885640cae489e3732b5d3f803620a1c092c4a
X-VCS-Branch: 4.20
Date: Wed, 20 Feb 2019 11:20:29 +0000 (UTC)
Precedence: bulk
List-Post: 
List-Help: 
List-Unsubscribe: 
List-Subscribe: 
List-Id: Gentoo Linux mail 
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: adfd29a0-4042-425e-a1b5-5d64d70c90f7
X-Archives-Hash: 8ca9a32bf6e7015c12b5d6339854f09d

commit:     101885640cae489e3732b5d3f803620a1c092c4a
Author:     Mike Pagano  gentoo org>
AuthorDate: Wed Feb 20 11:20:02 2019 +0000
Commit:     Mike Pagano  gentoo org>
CommitDate: Wed Feb 20 11:20:02 2019 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=10188564

proj/linux-patches: Linux patch 4.20.11

Signed-off-by: Mike Pagano  gentoo.org>

 0000_README              |    4 +
 1010_linux-4.20.11.patch | 3570 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3574 insertions(+)

diff --git a/0000_README b/0000_README
index 35bccfa..068574e 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-4.20.10.patch
 From: http://www.kernel.org
 Desc: Linux 4.20.10
 
+Patch: 1010_linux-4.20.11.patch
+From: http://www.kernel.org
+Desc: Linux 4.20.11
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-4.20.11.patch b/1010_linux-4.20.11.patch new file mode 100644 index 0000000..5b451cc --- /dev/null +++ b/1010_linux-4.20.11.patch @@ -0,0 +1,3570 @@ +diff --git a/Documentation/devicetree/bindings/eeprom/at24.txt b/Documentation/devicetree/bindings/eeprom/at24.txt +index aededdbc262b..f9a7c984274c 100644 +--- a/Documentation/devicetree/bindings/eeprom/at24.txt ++++ b/Documentation/devicetree/bindings/eeprom/at24.txt +@@ -27,6 +27,7 @@ Required properties: + "atmel,24c256", + "atmel,24c512", + "atmel,24c1024", ++ "atmel,24c2048", + + If is not "atmel", then a fallback must be used + with the same and "atmel" as manufacturer. +diff --git a/Makefile b/Makefile +index 6f7a8172de44..193cfe3a3d70 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,7 +1,7 @@ + # SPDX-License-Identifier: GPL-2.0 + VERSION = 4 + PATCHLEVEL = 20 +-SUBLEVEL = 10 ++SUBLEVEL = 11 + EXTRAVERSION = + NAME = Shy Crocodile + +diff --git a/arch/alpha/include/asm/irq.h b/arch/alpha/include/asm/irq.h +index 4d17cacd1462..432402c8e47f 100644 +--- a/arch/alpha/include/asm/irq.h ++++ b/arch/alpha/include/asm/irq.h +@@ -56,15 +56,15 @@ + + #elif defined(CONFIG_ALPHA_DP264) || \ + defined(CONFIG_ALPHA_LYNX) || \ +- defined(CONFIG_ALPHA_SHARK) || \ +- defined(CONFIG_ALPHA_EIGER) ++ defined(CONFIG_ALPHA_SHARK) + # define NR_IRQS 64 + + #elif defined(CONFIG_ALPHA_TITAN) + #define NR_IRQS 80 + + #elif defined(CONFIG_ALPHA_RAWHIDE) || \ +- defined(CONFIG_ALPHA_TAKARA) ++ defined(CONFIG_ALPHA_TAKARA) || \ ++ defined(CONFIG_ALPHA_EIGER) + # define NR_IRQS 128 + + #elif defined(CONFIG_ALPHA_WILDFIRE) +diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c +index d73dc473fbb9..188fc9256baf 100644 +--- a/arch/alpha/mm/fault.c ++++ b/arch/alpha/mm/fault.c +@@ -78,7 +78,7 @@ __load_new_mm_context(struct mm_struct *next_mm) + /* Macro for exception fixup code to access integer registers. */ + #define dpf_reg(r) \ + (((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-16 : \ +- (r) <= 18 ? (r)+8 : (r)-10]) ++ (r) <= 18 ? 
(r)+10 : (r)-10]) + + asmlinkage void + do_page_fault(unsigned long address, unsigned long mmcsr, +diff --git a/arch/arm/boot/dts/da850-evm.dts b/arch/arm/boot/dts/da850-evm.dts +index a3c9b346721d..f04bc3e15332 100644 +--- a/arch/arm/boot/dts/da850-evm.dts ++++ b/arch/arm/boot/dts/da850-evm.dts +@@ -94,6 +94,28 @@ + regulator-boot-on; + }; + ++ baseboard_3v3: fixedregulator-3v3 { ++ /* TPS73701DCQ */ ++ compatible = "regulator-fixed"; ++ regulator-name = "baseboard_3v3"; ++ regulator-min-microvolt = <3300000>; ++ regulator-max-microvolt = <3300000>; ++ vin-supply = <&vbat>; ++ regulator-always-on; ++ regulator-boot-on; ++ }; ++ ++ baseboard_1v8: fixedregulator-1v8 { ++ /* TPS73701DCQ */ ++ compatible = "regulator-fixed"; ++ regulator-name = "baseboard_1v8"; ++ regulator-min-microvolt = <1800000>; ++ regulator-max-microvolt = <1800000>; ++ vin-supply = <&vbat>; ++ regulator-always-on; ++ regulator-boot-on; ++ }; ++ + backlight_lcd: backlight-regulator { + compatible = "regulator-fixed"; + regulator-name = "lcd_backlight_pwr"; +@@ -105,7 +127,7 @@ + + sound { + compatible = "simple-audio-card"; +- simple-audio-card,name = "DA850/OMAP-L138 EVM"; ++ simple-audio-card,name = "DA850-OMAPL138 EVM"; + simple-audio-card,widgets = + "Line", "Line In", + "Line", "Line Out"; +@@ -210,10 +232,9 @@ + + /* Regulators */ + IOVDD-supply = <&vdcdc2_reg>; +- /* Derived from VBAT: Baseboard 3.3V / 1.8V */ +- AVDD-supply = <&vbat>; +- DRVDD-supply = <&vbat>; +- DVDD-supply = <&vbat>; ++ AVDD-supply = <&baseboard_3v3>; ++ DRVDD-supply = <&baseboard_3v3>; ++ DVDD-supply = <&baseboard_1v8>; + }; + tca6416: gpio@20 { + compatible = "ti,tca6416"; +diff --git a/arch/arm/boot/dts/da850-lcdk.dts b/arch/arm/boot/dts/da850-lcdk.dts +index 0177e3ed20fe..3a2fa6e035a3 100644 +--- a/arch/arm/boot/dts/da850-lcdk.dts ++++ b/arch/arm/boot/dts/da850-lcdk.dts +@@ -39,9 +39,39 @@ + }; + }; + ++ vcc_5vd: fixedregulator-vcc_5vd { ++ compatible = "regulator-fixed"; ++ regulator-name = "vcc_5vd"; ++ regulator-min-microvolt = <5000000>; ++ regulator-max-microvolt = <5000000>; ++ regulator-boot-on; ++ }; ++ ++ vcc_3v3d: fixedregulator-vcc_3v3d { ++ /* TPS650250 - VDCDC1 */ ++ compatible = "regulator-fixed"; ++ regulator-name = "vcc_3v3d"; ++ regulator-min-microvolt = <3300000>; ++ regulator-max-microvolt = <3300000>; ++ vin-supply = <&vcc_5vd>; ++ regulator-always-on; ++ regulator-boot-on; ++ }; ++ ++ vcc_1v8d: fixedregulator-vcc_1v8d { ++ /* TPS650250 - VDCDC2 */ ++ compatible = "regulator-fixed"; ++ regulator-name = "vcc_1v8d"; ++ regulator-min-microvolt = <1800000>; ++ regulator-max-microvolt = <1800000>; ++ vin-supply = <&vcc_5vd>; ++ regulator-always-on; ++ regulator-boot-on; ++ }; ++ + sound { + compatible = "simple-audio-card"; +- simple-audio-card,name = "DA850/OMAP-L138 LCDK"; ++ simple-audio-card,name = "DA850-OMAPL138 LCDK"; + simple-audio-card,widgets = + "Line", "Line In", + "Line", "Line Out"; +@@ -221,6 +251,12 @@ + compatible = "ti,tlv320aic3106"; + reg = <0x18>; + status = "okay"; ++ ++ /* Regulators */ ++ IOVDD-supply = <&vcc_3v3d>; ++ AVDD-supply = <&vcc_3v3d>; ++ DRVDD-supply = <&vcc_3v3d>; ++ DVDD-supply = <&vcc_1v8d>; + }; + }; + +diff --git a/arch/arm/boot/dts/kirkwood-dnskw.dtsi b/arch/arm/boot/dts/kirkwood-dnskw.dtsi +index cbaf06f2f78e..eb917462b219 100644 +--- a/arch/arm/boot/dts/kirkwood-dnskw.dtsi ++++ b/arch/arm/boot/dts/kirkwood-dnskw.dtsi +@@ -36,8 +36,8 @@ + compatible = "gpio-fan"; + pinctrl-0 = <&pmx_fan_high_speed &pmx_fan_low_speed>; + pinctrl-names = "default"; +- gpios = <&gpio1 14 
GPIO_ACTIVE_LOW +- &gpio1 13 GPIO_ACTIVE_LOW>; ++ gpios = <&gpio1 14 GPIO_ACTIVE_HIGH ++ &gpio1 13 GPIO_ACTIVE_HIGH>; + gpio-fan,speed-map = <0 0 + 3000 1 + 6000 2>; +diff --git a/arch/arm/boot/dts/omap5-board-common.dtsi b/arch/arm/boot/dts/omap5-board-common.dtsi +index bf7ca00f4c21..c2dc4199b4ec 100644 +--- a/arch/arm/boot/dts/omap5-board-common.dtsi ++++ b/arch/arm/boot/dts/omap5-board-common.dtsi +@@ -317,7 +317,8 @@ + + palmas_sys_nirq_pins: pinmux_palmas_sys_nirq_pins { + pinctrl-single,pins = < +- OMAP5_IOPAD(0x068, PIN_INPUT_PULLUP | MUX_MODE0) /* sys_nirq1 */ ++ /* sys_nirq1 is pulled down as the SoC is inverting it for GIC */ ++ OMAP5_IOPAD(0x068, PIN_INPUT_PULLUP | MUX_MODE0) + >; + }; + +@@ -385,7 +386,8 @@ + + palmas: palmas@48 { + compatible = "ti,palmas"; +- interrupts = ; /* IRQ_SYS_1N */ ++ /* sys_nirq/ext_sys_irq pins get inverted at mpuss wakeupgen */ ++ interrupts = ; + reg = <0x48>; + interrupt-controller; + #interrupt-cells = <2>; +@@ -651,7 +653,8 @@ + pinctrl-names = "default"; + pinctrl-0 = <&twl6040_pins>; + +- interrupts = ; /* IRQ_SYS_2N cascaded to gic */ ++ /* sys_nirq/ext_sys_irq pins get inverted at mpuss wakeupgen */ ++ interrupts = ; + + /* audpwron gpio defined in the board specific dts */ + +diff --git a/arch/arm/boot/dts/omap5-cm-t54.dts b/arch/arm/boot/dts/omap5-cm-t54.dts +index 5e21fb430a65..e78d3718f145 100644 +--- a/arch/arm/boot/dts/omap5-cm-t54.dts ++++ b/arch/arm/boot/dts/omap5-cm-t54.dts +@@ -181,6 +181,13 @@ + OMAP5_IOPAD(0x0042, PIN_INPUT_PULLDOWN | MUX_MODE6) /* llib_wakereqin.gpio1_wk15 */ + >; + }; ++ ++ palmas_sys_nirq_pins: pinmux_palmas_sys_nirq_pins { ++ pinctrl-single,pins = < ++ /* sys_nirq1 is pulled down as the SoC is inverting it for GIC */ ++ OMAP5_IOPAD(0x068, PIN_INPUT_PULLUP | MUX_MODE0) ++ >; ++ }; + }; + + &omap5_pmx_core { +@@ -414,8 +421,11 @@ + + palmas: palmas@48 { + compatible = "ti,palmas"; +- interrupts = ; /* IRQ_SYS_1N */ + reg = <0x48>; ++ pinctrl-0 = <&palmas_sys_nirq_pins>; ++ pinctrl-names = "default"; ++ /* sys_nirq/ext_sys_irq pins get inverted at mpuss wakeupgen */ ++ interrupts = ; + interrupt-controller; + #interrupt-cells = <2>; + ti,system-power-controller; +diff --git a/arch/arm/mach-integrator/impd1.c b/arch/arm/mach-integrator/impd1.c +index a109f6482413..0f916c245a2e 100644 +--- a/arch/arm/mach-integrator/impd1.c ++++ b/arch/arm/mach-integrator/impd1.c +@@ -393,7 +393,11 @@ static int __ref impd1_probe(struct lm_device *dev) + sizeof(*lookup) + 3 * sizeof(struct gpiod_lookup), + GFP_KERNEL); + chipname = devm_kstrdup(&dev->dev, devname, GFP_KERNEL); +- mmciname = kasprintf(GFP_KERNEL, "lm%x:00700", dev->id); ++ mmciname = devm_kasprintf(&dev->dev, GFP_KERNEL, ++ "lm%x:00700", dev->id); ++ if (!lookup || !chipname || !mmciname) ++ return -ENOMEM; ++ + lookup->dev_id = mmciname; + /* + * Offsets on GPIO block 1: +diff --git a/arch/arm/mach-omap2/omap-wakeupgen.c b/arch/arm/mach-omap2/omap-wakeupgen.c +index fc5fb776a710..17558be4bf0a 100644 +--- a/arch/arm/mach-omap2/omap-wakeupgen.c ++++ b/arch/arm/mach-omap2/omap-wakeupgen.c +@@ -50,6 +50,9 @@ + #define OMAP4_NR_BANKS 4 + #define OMAP4_NR_IRQS 128 + ++#define SYS_NIRQ1_EXT_SYS_IRQ_1 7 ++#define SYS_NIRQ2_EXT_SYS_IRQ_2 119 ++ + static void __iomem *wakeupgen_base; + static void __iomem *sar_base; + static DEFINE_RAW_SPINLOCK(wakeupgen_lock); +@@ -153,6 +156,37 @@ static void wakeupgen_unmask(struct irq_data *d) + irq_chip_unmask_parent(d); + } + ++/* ++ * The sys_nirq pins bypass peripheral modules and are wired directly ++ * to MPUSS wakeupgen. 
They get automatically inverted for GIC. ++ */ ++static int wakeupgen_irq_set_type(struct irq_data *d, unsigned int type) ++{ ++ bool inverted = false; ++ ++ switch (type) { ++ case IRQ_TYPE_LEVEL_LOW: ++ type &= ~IRQ_TYPE_LEVEL_MASK; ++ type |= IRQ_TYPE_LEVEL_HIGH; ++ inverted = true; ++ break; ++ case IRQ_TYPE_EDGE_FALLING: ++ type &= ~IRQ_TYPE_EDGE_BOTH; ++ type |= IRQ_TYPE_EDGE_RISING; ++ inverted = true; ++ break; ++ default: ++ break; ++ } ++ ++ if (inverted && d->hwirq != SYS_NIRQ1_EXT_SYS_IRQ_1 && ++ d->hwirq != SYS_NIRQ2_EXT_SYS_IRQ_2) ++ pr_warn("wakeupgen: irq%li polarity inverted in dts\n", ++ d->hwirq); ++ ++ return irq_chip_set_type_parent(d, type); ++} ++ + #ifdef CONFIG_HOTPLUG_CPU + static DEFINE_PER_CPU(u32 [MAX_NR_REG_BANKS], irqmasks); + +@@ -446,7 +480,7 @@ static struct irq_chip wakeupgen_chip = { + .irq_mask = wakeupgen_mask, + .irq_unmask = wakeupgen_unmask, + .irq_retrigger = irq_chip_retrigger_hierarchy, +- .irq_set_type = irq_chip_set_type_parent, ++ .irq_set_type = wakeupgen_irq_set_type, + .flags = IRQCHIP_SKIP_SET_WAKE | IRQCHIP_MASK_ON_SUSPEND, + #ifdef CONFIG_SMP + .irq_set_affinity = irq_chip_set_affinity_parent, +diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts +index dc20145dd393..c6509a02480d 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts ++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts +@@ -40,6 +40,7 @@ + pinctrl-0 = <&usb30_host_drv>; + regulator-name = "vcc_host_5v"; + regulator-always-on; ++ regulator-boot-on; + vin-supply = <&vcc_sys>; + }; + +@@ -51,6 +52,7 @@ + pinctrl-0 = <&usb20_host_drv>; + regulator-name = "vcc_host1_5v"; + regulator-always-on; ++ regulator-boot-on; + vin-supply = <&vcc_sys>; + }; + +diff --git a/arch/csky/include/asm/io.h b/arch/csky/include/asm/io.h +index ecae6b358f95..c1dfa9c10e36 100644 +--- a/arch/csky/include/asm/io.h ++++ b/arch/csky/include/asm/io.h +@@ -15,6 +15,31 @@ extern void iounmap(void *addr); + extern int remap_area_pages(unsigned long address, phys_addr_t phys_addr, + size_t size, unsigned long flags); + ++/* ++ * I/O memory access primitives. Reads are ordered relative to any ++ * following Normal memory access. Writes are ordered relative to any prior ++ * Normal memory access. ++ * ++ * For CACHEV1 (807, 810), store instruction could fast retire, so we need ++ * another mb() to prevent st fast retire. ++ * ++ * For CACHEV2 (860), store instruction with PAGE_ATTR_NO_BUFFERABLE won't ++ * fast retire. 
++ */ ++#define readb(c) ({ u8 __v = readb_relaxed(c); rmb(); __v; }) ++#define readw(c) ({ u16 __v = readw_relaxed(c); rmb(); __v; }) ++#define readl(c) ({ u32 __v = readl_relaxed(c); rmb(); __v; }) ++ ++#ifdef CONFIG_CPU_HAS_CACHEV2 ++#define writeb(v,c) ({ wmb(); writeb_relaxed((v),(c)); }) ++#define writew(v,c) ({ wmb(); writew_relaxed((v),(c)); }) ++#define writel(v,c) ({ wmb(); writel_relaxed((v),(c)); }) ++#else ++#define writeb(v,c) ({ wmb(); writeb_relaxed((v),(c)); mb(); }) ++#define writew(v,c) ({ wmb(); writew_relaxed((v),(c)); mb(); }) ++#define writel(v,c) ({ wmb(); writel_relaxed((v),(c)); mb(); }) ++#endif ++ + #define ioremap_nocache(phy, sz) ioremap(phy, sz) + #define ioremap_wc ioremap_nocache + #define ioremap_wt ioremap_nocache +diff --git a/arch/csky/kernel/module.c b/arch/csky/kernel/module.c +index 65abab0c7a47..b5ad7d9de18c 100644 +--- a/arch/csky/kernel/module.c ++++ b/arch/csky/kernel/module.c +@@ -12,7 +12,7 @@ + #include + #include + +-#if defined(__CSKYABIV2__) ++#ifdef CONFIG_CPU_CK810 + #define IS_BSR32(hi16, lo16) (((hi16) & 0xFC00) == 0xE000) + #define IS_JSRI32(hi16, lo16) ((hi16) == 0xEAE0) + +@@ -25,6 +25,26 @@ + *(uint16_t *)(addr) = 0xE8Fa; \ + *((uint16_t *)(addr) + 1) = 0x0000; \ + } while (0) ++ ++static void jsri_2_lrw_jsr(uint32_t *location) ++{ ++ uint16_t *location_tmp = (uint16_t *)location; ++ ++ if (IS_BSR32(*location_tmp, *(location_tmp + 1))) ++ return; ++ ++ if (IS_JSRI32(*location_tmp, *(location_tmp + 1))) { ++ /* jsri 0x... --> lrw r26, 0x... */ ++ CHANGE_JSRI_TO_LRW(location); ++ /* lsli r0, r0 --> jsr r26 */ ++ SET_JSR32_R26(location + 1); ++ } ++} ++#else ++static void inline jsri_2_lrw_jsr(uint32_t *location) ++{ ++ return; ++} + #endif + + int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, +@@ -35,9 +55,6 @@ int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, + Elf32_Sym *sym; + uint32_t *location; + short *temp; +-#if defined(__CSKYABIV2__) +- uint16_t *location_tmp; +-#endif + + for (i = 0; i < sechdrs[relsec].sh_size / sizeof(*rel); i++) { + /* This is where to make the change */ +@@ -59,18 +76,7 @@ int apply_relocate_add(Elf32_Shdr *sechdrs, const char *strtab, + case R_CSKY_PCRELJSR_IMM11BY2: + break; + case R_CSKY_PCRELJSR_IMM26BY2: +-#if defined(__CSKYABIV2__) +- location_tmp = (uint16_t *)location; +- if (IS_BSR32(*location_tmp, *(location_tmp + 1))) +- break; +- +- if (IS_JSRI32(*location_tmp, *(location_tmp + 1))) { +- /* jsri 0x... --> lrw r26, 0x... 
*/ +- CHANGE_JSRI_TO_LRW(location); +- /* lsli r0, r0 --> jsr r26 */ +- SET_JSR32_R26(location + 1); +- } +-#endif ++ jsri_2_lrw_jsr(location); + break; + case R_CSKY_ADDR_HI16: + temp = ((short *)location) + 1; +diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h +index db706ffc4ca9..5ff63d53b31c 100644 +--- a/arch/powerpc/include/asm/book3s/64/pgtable.h ++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h +@@ -904,7 +904,7 @@ static inline int pud_none(pud_t pud) + + static inline int pud_present(pud_t pud) + { +- return (pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT)); ++ return !!(pud_raw(pud) & cpu_to_be64(_PAGE_PRESENT)); + } + + extern struct page *pud_page(pud_t pud); +@@ -951,7 +951,7 @@ static inline int pgd_none(pgd_t pgd) + + static inline int pgd_present(pgd_t pgd) + { +- return (pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT)); ++ return !!(pgd_raw(pgd) & cpu_to_be64(_PAGE_PRESENT)); + } + + static inline pte_t pgd_pte(pgd_t pgd) +diff --git a/arch/riscv/include/asm/pgtable-bits.h b/arch/riscv/include/asm/pgtable-bits.h +index 2fa2942be221..470755cb7558 100644 +--- a/arch/riscv/include/asm/pgtable-bits.h ++++ b/arch/riscv/include/asm/pgtable-bits.h +@@ -35,6 +35,12 @@ + #define _PAGE_SPECIAL _PAGE_SOFT + #define _PAGE_TABLE _PAGE_PRESENT + ++/* ++ * _PAGE_PROT_NONE is set on not-present pages (and ignored by the hardware) to ++ * distinguish them from swapped out pages ++ */ ++#define _PAGE_PROT_NONE _PAGE_READ ++ + #define _PAGE_PFN_SHIFT 10 + + /* Set of bits to preserve across pte_modify() */ +diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h +index 16301966d65b..a8179a8c1491 100644 +--- a/arch/riscv/include/asm/pgtable.h ++++ b/arch/riscv/include/asm/pgtable.h +@@ -44,7 +44,7 @@ + /* Page protection bits */ + #define _PAGE_BASE (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER) + +-#define PAGE_NONE __pgprot(0) ++#define PAGE_NONE __pgprot(_PAGE_PROT_NONE) + #define PAGE_READ __pgprot(_PAGE_BASE | _PAGE_READ) + #define PAGE_WRITE __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_WRITE) + #define PAGE_EXEC __pgprot(_PAGE_BASE | _PAGE_EXEC) +@@ -98,7 +98,7 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]; + + static inline int pmd_present(pmd_t pmd) + { +- return (pmd_val(pmd) & _PAGE_PRESENT); ++ return (pmd_val(pmd) & (_PAGE_PRESENT | _PAGE_PROT_NONE)); + } + + static inline int pmd_none(pmd_t pmd) +@@ -178,7 +178,7 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long addr) + + static inline int pte_present(pte_t pte) + { +- return (pte_val(pte) & _PAGE_PRESENT); ++ return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_PROT_NONE)); + } + + static inline int pte_none(pte_t pte) +@@ -380,7 +380,7 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma, + * + * Format of swap PTE: + * bit 0: _PAGE_PRESENT (zero) +- * bit 1: reserved for future use (zero) ++ * bit 1: _PAGE_PROT_NONE (zero) + * bits 2 to 6: swap type + * bits 7 to XLEN-1: swap offset + */ +diff --git a/arch/riscv/kernel/ptrace.c b/arch/riscv/kernel/ptrace.c +index 60f1e02eed36..6c898d540d9d 100644 +--- a/arch/riscv/kernel/ptrace.c ++++ b/arch/riscv/kernel/ptrace.c +@@ -172,6 +172,6 @@ void do_syscall_trace_exit(struct pt_regs *regs) + + #ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS + if (test_thread_flag(TIF_SYSCALL_TRACEPOINT)) +- trace_sys_exit(regs, regs->regs[0]); ++ trace_sys_exit(regs, regs_return_value(regs)); + #endif + } +diff --git a/arch/s390/kernel/swsusp.S b/arch/s390/kernel/swsusp.S +index 
537f97fde37f..b6796e616812 100644 +--- a/arch/s390/kernel/swsusp.S ++++ b/arch/s390/kernel/swsusp.S +@@ -30,10 +30,10 @@ + .section .text + ENTRY(swsusp_arch_suspend) + lg %r1,__LC_NODAT_STACK +- aghi %r1,-STACK_FRAME_OVERHEAD + stmg %r6,%r15,__SF_GPRS(%r1) ++ aghi %r1,-STACK_FRAME_OVERHEAD + stg %r15,__SF_BACKCHAIN(%r1) +- lgr %r1,%r15 ++ lgr %r15,%r1 + + /* Store FPU registers */ + brasl %r14,save_fpu_regs +diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c +index 374a19712e20..b684f0294f35 100644 +--- a/arch/x86/events/core.c ++++ b/arch/x86/events/core.c +@@ -2278,6 +2278,19 @@ void perf_check_microcode(void) + x86_pmu.check_microcode(); + } + ++static int x86_pmu_check_period(struct perf_event *event, u64 value) ++{ ++ if (x86_pmu.check_period && x86_pmu.check_period(event, value)) ++ return -EINVAL; ++ ++ if (value && x86_pmu.limit_period) { ++ if (x86_pmu.limit_period(event, value) > value) ++ return -EINVAL; ++ } ++ ++ return 0; ++} ++ + static struct pmu pmu = { + .pmu_enable = x86_pmu_enable, + .pmu_disable = x86_pmu_disable, +@@ -2302,6 +2315,7 @@ static struct pmu pmu = { + .event_idx = x86_pmu_event_idx, + .sched_task = x86_pmu_sched_task, + .task_ctx_size = sizeof(struct x86_perf_task_context), ++ .check_period = x86_pmu_check_period, + }; + + void arch_perf_update_userpage(struct perf_event *event, +diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c +index 90b6718ff861..ede20c44cc69 100644 +--- a/arch/x86/events/intel/core.c ++++ b/arch/x86/events/intel/core.c +@@ -3587,6 +3587,11 @@ static void intel_pmu_sched_task(struct perf_event_context *ctx, + intel_pmu_lbr_sched_task(ctx, sched_in); + } + ++static int intel_pmu_check_period(struct perf_event *event, u64 value) ++{ ++ return intel_pmu_has_bts_period(event, value) ? -EINVAL : 0; ++} ++ + PMU_FORMAT_ATTR(offcore_rsp, "config1:0-63"); + + PMU_FORMAT_ATTR(ldlat, "config1:0-15"); +@@ -3667,6 +3672,8 @@ static __initconst const struct x86_pmu core_pmu = { + .cpu_starting = intel_pmu_cpu_starting, + .cpu_dying = intel_pmu_cpu_dying, + .cpu_dead = intel_pmu_cpu_dead, ++ ++ .check_period = intel_pmu_check_period, + }; + + static struct attribute *intel_pmu_attrs[]; +@@ -3711,6 +3718,8 @@ static __initconst const struct x86_pmu intel_pmu = { + + .guest_get_msrs = intel_guest_get_msrs, + .sched_task = intel_pmu_sched_task, ++ ++ .check_period = intel_pmu_check_period, + }; + + static __init void intel_clovertown_quirk(void) +diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h +index 78d7b7031bfc..d46fd6754d92 100644 +--- a/arch/x86/events/perf_event.h ++++ b/arch/x86/events/perf_event.h +@@ -646,6 +646,11 @@ struct x86_pmu { + * Intel host/guest support (KVM) + */ + struct perf_guest_switch_msr *(*guest_get_msrs)(int *nr); ++ ++ /* ++ * Check period value for PERF_EVENT_IOC_PERIOD ioctl. 
++ */ ++ int (*check_period) (struct perf_event *event, u64 period); + }; + + struct x86_perf_task_context { +@@ -857,7 +862,7 @@ static inline int amd_pmu_init(void) + + #ifdef CONFIG_CPU_SUP_INTEL + +-static inline bool intel_pmu_has_bts(struct perf_event *event) ++static inline bool intel_pmu_has_bts_period(struct perf_event *event, u64 period) + { + struct hw_perf_event *hwc = &event->hw; + unsigned int hw_event, bts_event; +@@ -868,7 +873,14 @@ static inline bool intel_pmu_has_bts(struct perf_event *event) + hw_event = hwc->config & INTEL_ARCH_EVENT_MASK; + bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS); + +- return hw_event == bts_event && hwc->sample_period == 1; ++ return hw_event == bts_event && period == 1; ++} ++ ++static inline bool intel_pmu_has_bts(struct perf_event *event) ++{ ++ struct hw_perf_event *hwc = &event->hw; ++ ++ return intel_pmu_has_bts_period(event, hwc->sample_period); + } + + int intel_pmu_save_and_restart(struct perf_event *event); +diff --git a/arch/x86/ia32/ia32_aout.c b/arch/x86/ia32/ia32_aout.c +index 8e02b30cf08e..3ebd77770f98 100644 +--- a/arch/x86/ia32/ia32_aout.c ++++ b/arch/x86/ia32/ia32_aout.c +@@ -51,7 +51,7 @@ static unsigned long get_dr(int n) + /* + * fill in the user structure for a core dump.. + */ +-static void dump_thread32(struct pt_regs *regs, struct user32 *dump) ++static void fill_dump(struct pt_regs *regs, struct user32 *dump) + { + u32 fs, gs; + memset(dump, 0, sizeof(*dump)); +@@ -157,10 +157,12 @@ static int aout_core_dump(struct coredump_params *cprm) + fs = get_fs(); + set_fs(KERNEL_DS); + has_dumped = 1; ++ ++ fill_dump(cprm->regs, &dump); ++ + strncpy(dump.u_comm, current->comm, sizeof(current->comm)); + dump.u_ar0 = offsetof(struct user32, regs); + dump.signal = cprm->siginfo->si_signo; +- dump_thread32(cprm->regs, &dump); + + /* + * If the size of the dump file exceeds the rlimit, then see +diff --git a/arch/x86/include/asm/uv/bios.h b/arch/x86/include/asm/uv/bios.h +index e652a7cc6186..3f697a9e3f59 100644 +--- a/arch/x86/include/asm/uv/bios.h ++++ b/arch/x86/include/asm/uv/bios.h +@@ -48,7 +48,8 @@ enum { + BIOS_STATUS_SUCCESS = 0, + BIOS_STATUS_UNIMPLEMENTED = -ENOSYS, + BIOS_STATUS_EINVAL = -EINVAL, +- BIOS_STATUS_UNAVAIL = -EBUSY ++ BIOS_STATUS_UNAVAIL = -EBUSY, ++ BIOS_STATUS_ABORT = -EINTR, + }; + + /* Address map parameters */ +@@ -167,4 +168,9 @@ extern long system_serial_number; + + extern struct kobject *sgi_uv_kobj; /* /sys/firmware/sgi_uv */ + ++/* ++ * EFI runtime lock; cf. firmware/efi/runtime-wrappers.c for details ++ */ ++extern struct semaphore __efi_uv_runtime_lock; ++ + #endif /* _ASM_X86_UV_BIOS_H */ +diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c +index 11641d9e7f6f..13baba9d1cc1 100644 +--- a/arch/x86/kvm/svm.c ++++ b/arch/x86/kvm/svm.c +@@ -6255,6 +6255,9 @@ static int sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp) + int asid, ret; + + ret = -EBUSY; ++ if (unlikely(sev->active)) ++ return ret; ++ + asid = sev_asid_new(); + if (asid < 0) + return ret; +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index 4ce6595e454c..bbd0520867a8 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -2779,7 +2779,8 @@ static void add_atomic_switch_msr(struct vcpu_vmx *vmx, unsigned msr, + if (!entry_only) + j = find_msr(&m->host, msr); + +- if (i == NR_AUTOLOAD_MSRS || j == NR_AUTOLOAD_MSRS) { ++ if ((i < 0 && m->guest.nr == NR_AUTOLOAD_MSRS) || ++ (j < 0 && m->host.nr == NR_AUTOLOAD_MSRS)) { + printk_once(KERN_WARNING "Not enough msr switch entries. 
" + "Can't add msr %x\n", msr); + return; +@@ -3620,9 +3621,11 @@ static void nested_vmx_setup_ctls_msrs(struct nested_vmx_msrs *msrs, bool apicv) + * secondary cpu-based controls. Do not include those that + * depend on CPUID bits, they are added later by vmx_cpuid_update. + */ +- rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2, +- msrs->secondary_ctls_low, +- msrs->secondary_ctls_high); ++ if (msrs->procbased_ctls_high & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS) ++ rdmsr(MSR_IA32_VMX_PROCBASED_CTLS2, ++ msrs->secondary_ctls_low, ++ msrs->secondary_ctls_high); ++ + msrs->secondary_ctls_low = 0; + msrs->secondary_ctls_high &= + SECONDARY_EXEC_DESC | +diff --git a/arch/x86/platform/uv/bios_uv.c b/arch/x86/platform/uv/bios_uv.c +index 4a6a5a26c582..eb33432f2f24 100644 +--- a/arch/x86/platform/uv/bios_uv.c ++++ b/arch/x86/platform/uv/bios_uv.c +@@ -29,7 +29,8 @@ + + struct uv_systab *uv_systab; + +-s64 uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, u64 a4, u64 a5) ++static s64 __uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, ++ u64 a4, u64 a5) + { + struct uv_systab *tab = uv_systab; + s64 ret; +@@ -51,6 +52,19 @@ s64 uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, u64 a4, u64 a5) + + return ret; + } ++ ++s64 uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, u64 a4, u64 a5) ++{ ++ s64 ret; ++ ++ if (down_interruptible(&__efi_uv_runtime_lock)) ++ return BIOS_STATUS_ABORT; ++ ++ ret = __uv_bios_call(which, a1, a2, a3, a4, a5); ++ up(&__efi_uv_runtime_lock); ++ ++ return ret; ++} + EXPORT_SYMBOL_GPL(uv_bios_call); + + s64 uv_bios_call_irqsave(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, +@@ -59,10 +73,15 @@ s64 uv_bios_call_irqsave(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, + unsigned long bios_flags; + s64 ret; + ++ if (down_interruptible(&__efi_uv_runtime_lock)) ++ return BIOS_STATUS_ABORT; ++ + local_irq_save(bios_flags); +- ret = uv_bios_call(which, a1, a2, a3, a4, a5); ++ ret = __uv_bios_call(which, a1, a2, a3, a4, a5); + local_irq_restore(bios_flags); + ++ up(&__efi_uv_runtime_lock); ++ + return ret; + } + +diff --git a/block/blk-flush.c b/block/blk-flush.c +index 8b44b86779da..87fc49daa2b4 100644 +--- a/block/blk-flush.c ++++ b/block/blk-flush.c +@@ -424,7 +424,7 @@ static void mq_flush_data_end_io(struct request *rq, blk_status_t error) + blk_flush_complete_seq(rq, fq, REQ_FSEQ_DATA, error); + spin_unlock_irqrestore(&fq->mq_flush_lock, flags); + +- blk_mq_run_hw_queue(hctx, true); ++ blk_mq_sched_restart(hctx); + } + + /** +diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c +index 274699463b4f..7bbbf8256a41 100644 +--- a/drivers/acpi/numa.c ++++ b/drivers/acpi/numa.c +@@ -146,9 +146,9 @@ acpi_table_print_srat_entry(struct acpi_subtable_header *header) + { + struct acpi_srat_mem_affinity *p = + (struct acpi_srat_mem_affinity *)header; +- pr_debug("SRAT Memory (0x%lx length 0x%lx) in proximity domain %d %s%s%s\n", +- (unsigned long)p->base_address, +- (unsigned long)p->length, ++ pr_debug("SRAT Memory (0x%llx length 0x%llx) in proximity domain %d %s%s%s\n", ++ (unsigned long long)p->base_address, ++ (unsigned long long)p->length, + p->proximity_domain, + (p->flags & ACPI_SRAT_MEM_ENABLED) ? 
+ "enabled" : "disabled", +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 7aa3dcad2175..df34a12a388f 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -1530,17 +1530,16 @@ static unsigned int __cpufreq_get(struct cpufreq_policy *policy) + { + unsigned int ret_freq = 0; + +- if (!cpufreq_driver->get) ++ if (unlikely(policy_is_inactive(policy)) || !cpufreq_driver->get) + return ret_freq; + + ret_freq = cpufreq_driver->get(policy->cpu); + + /* +- * Updating inactive policies is invalid, so avoid doing that. Also +- * if fast frequency switching is used with the given policy, the check ++ * If fast frequency switching is used with the given policy, the check + * against policy->cur is pointless, so skip it in that case too. + */ +- if (unlikely(policy_is_inactive(policy)) || policy->fast_switch_enabled) ++ if (policy->fast_switch_enabled) + return ret_freq; + + if (ret_freq && policy->cur && +@@ -1569,10 +1568,7 @@ unsigned int cpufreq_get(unsigned int cpu) + + if (policy) { + down_read(&policy->rwsem); +- +- if (!policy_is_inactive(policy)) +- ret_freq = __cpufreq_get(policy); +- ++ ret_freq = __cpufreq_get(policy); + up_read(&policy->rwsem); + + cpufreq_cpu_put(policy); +diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c +index 1ff229c2aeab..186a2536fb8b 100644 +--- a/drivers/crypto/ccree/cc_driver.c ++++ b/drivers/crypto/ccree/cc_driver.c +@@ -364,7 +364,7 @@ static int init_cc_resources(struct platform_device *plat_dev) + rc = cc_ivgen_init(new_drvdata); + if (rc) { + dev_err(dev, "cc_ivgen_init failed\n"); +- goto post_power_mgr_err; ++ goto post_buf_mgr_err; + } + + /* Allocate crypto algs */ +@@ -387,6 +387,9 @@ static int init_cc_resources(struct platform_device *plat_dev) + goto post_hash_err; + } + ++ /* All set, we can allow autosuspend */ ++ cc_pm_go(new_drvdata); ++ + /* If we got here and FIPS mode is enabled + * it means all FIPS test passed, so let TEE + * know we're good. 
+@@ -401,8 +404,6 @@ post_cipher_err: + cc_cipher_free(new_drvdata); + post_ivgen_err: + cc_ivgen_fini(new_drvdata); +-post_power_mgr_err: +- cc_pm_fini(new_drvdata); + post_buf_mgr_err: + cc_buffer_mgr_fini(new_drvdata); + post_req_mgr_err: +diff --git a/drivers/crypto/ccree/cc_pm.c b/drivers/crypto/ccree/cc_pm.c +index d990f472e89f..6ff7e75ad90e 100644 +--- a/drivers/crypto/ccree/cc_pm.c ++++ b/drivers/crypto/ccree/cc_pm.c +@@ -100,20 +100,19 @@ int cc_pm_put_suspend(struct device *dev) + + int cc_pm_init(struct cc_drvdata *drvdata) + { +- int rc = 0; + struct device *dev = drvdata_to_dev(drvdata); + + /* must be before the enabling to avoid resdundent suspending */ + pm_runtime_set_autosuspend_delay(dev, CC_SUSPEND_TIMEOUT); + pm_runtime_use_autosuspend(dev); + /* activate the PM module */ +- rc = pm_runtime_set_active(dev); +- if (rc) +- return rc; +- /* enable the PM module*/ +- pm_runtime_enable(dev); ++ return pm_runtime_set_active(dev); ++} + +- return rc; ++/* enable the PM module*/ ++void cc_pm_go(struct cc_drvdata *drvdata) ++{ ++ pm_runtime_enable(drvdata_to_dev(drvdata)); + } + + void cc_pm_fini(struct cc_drvdata *drvdata) +diff --git a/drivers/crypto/ccree/cc_pm.h b/drivers/crypto/ccree/cc_pm.h +index 020a5403c58b..f62624357020 100644 +--- a/drivers/crypto/ccree/cc_pm.h ++++ b/drivers/crypto/ccree/cc_pm.h +@@ -16,6 +16,7 @@ + extern const struct dev_pm_ops ccree_pm; + + int cc_pm_init(struct cc_drvdata *drvdata); ++void cc_pm_go(struct cc_drvdata *drvdata); + void cc_pm_fini(struct cc_drvdata *drvdata); + int cc_pm_suspend(struct device *dev); + int cc_pm_resume(struct device *dev); +@@ -29,6 +30,8 @@ static inline int cc_pm_init(struct cc_drvdata *drvdata) + return 0; + } + ++static void cc_pm_go(struct cc_drvdata *drvdata) {} ++ + static inline void cc_pm_fini(struct cc_drvdata *drvdata) {} + + static inline int cc_pm_suspend(struct device *dev) +diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c +index 8903b9ccfc2b..e2abfdb5cee6 100644 +--- a/drivers/firmware/efi/runtime-wrappers.c ++++ b/drivers/firmware/efi/runtime-wrappers.c +@@ -146,6 +146,13 @@ void efi_call_virt_check_flags(unsigned long flags, const char *call) + */ + static DEFINE_SEMAPHORE(efi_runtime_lock); + ++/* ++ * Expose the EFI runtime lock to the UV platform ++ */ ++#ifdef CONFIG_X86_UV ++extern struct semaphore __efi_uv_runtime_lock __alias(efi_runtime_lock); ++#endif ++ + /* + * Calls the appropriate efi_runtime_service() with the appropriate + * arguments. 
+diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c +index 995cf0b9e0b1..2d1dfa1e0745 100644 +--- a/drivers/gpio/gpio-mxc.c ++++ b/drivers/gpio/gpio-mxc.c +@@ -17,6 +17,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -550,33 +551,38 @@ static void mxc_gpio_restore_regs(struct mxc_gpio_port *port) + writel(port->gpio_saved_reg.dr, port->base + GPIO_DR); + } + +-static int __maybe_unused mxc_gpio_noirq_suspend(struct device *dev) ++static int mxc_gpio_syscore_suspend(void) + { +- struct platform_device *pdev = to_platform_device(dev); +- struct mxc_gpio_port *port = platform_get_drvdata(pdev); ++ struct mxc_gpio_port *port; + +- mxc_gpio_save_regs(port); +- clk_disable_unprepare(port->clk); ++ /* walk through all ports */ ++ list_for_each_entry(port, &mxc_gpio_ports, node) { ++ mxc_gpio_save_regs(port); ++ clk_disable_unprepare(port->clk); ++ } + + return 0; + } + +-static int __maybe_unused mxc_gpio_noirq_resume(struct device *dev) ++static void mxc_gpio_syscore_resume(void) + { +- struct platform_device *pdev = to_platform_device(dev); +- struct mxc_gpio_port *port = platform_get_drvdata(pdev); ++ struct mxc_gpio_port *port; + int ret; + +- ret = clk_prepare_enable(port->clk); +- if (ret) +- return ret; +- mxc_gpio_restore_regs(port); +- +- return 0; ++ /* walk through all ports */ ++ list_for_each_entry(port, &mxc_gpio_ports, node) { ++ ret = clk_prepare_enable(port->clk); ++ if (ret) { ++ pr_err("mxc: failed to enable gpio clock %d\n", ret); ++ return; ++ } ++ mxc_gpio_restore_regs(port); ++ } + } + +-static const struct dev_pm_ops mxc_gpio_dev_pm_ops = { +- SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(mxc_gpio_noirq_suspend, mxc_gpio_noirq_resume) ++static struct syscore_ops mxc_gpio_syscore_ops = { ++ .suspend = mxc_gpio_syscore_suspend, ++ .resume = mxc_gpio_syscore_resume, + }; + + static struct platform_driver mxc_gpio_driver = { +@@ -584,7 +590,6 @@ static struct platform_driver mxc_gpio_driver = { + .name = "gpio-mxc", + .of_match_table = mxc_gpio_dt_ids, + .suppress_bind_attrs = true, +- .pm = &mxc_gpio_dev_pm_ops, + }, + .probe = mxc_gpio_probe, + .id_table = mxc_gpio_devtype, +@@ -592,6 +597,8 @@ static struct platform_driver mxc_gpio_driver = { + + static int __init gpio_mxc_init(void) + { ++ register_syscore_ops(&mxc_gpio_syscore_ops); ++ + return platform_driver_register(&mxc_gpio_driver); + } + subsys_initcall(gpio_mxc_init); +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +index 30bc345d6fdf..8547fdaf8273 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c +@@ -1684,8 +1684,10 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev) + amdgpu_xgmi_add_device(adev); + amdgpu_amdkfd_device_init(adev); + +- if (amdgpu_sriov_vf(adev)) ++ if (amdgpu_sriov_vf(adev)) { ++ amdgpu_virt_init_data_exchange(adev); + amdgpu_virt_release_full_gpu(adev, true); ++ } + + return 0; + } +@@ -2597,9 +2599,6 @@ fence_driver_init: + goto failed; + } + +- if (amdgpu_sriov_vf(adev)) +- amdgpu_virt_init_data_exchange(adev); +- + amdgpu_fbdev_init(adev); + + r = amdgpu_pm_sysfs_init(adev); +@@ -3271,6 +3270,7 @@ static int amdgpu_device_reset_sriov(struct amdgpu_device *adev, + r = amdgpu_ib_ring_tests(adev); + + error: ++ amdgpu_virt_init_data_exchange(adev); + amdgpu_virt_release_full_gpu(adev, true); + if (!r && adev->virt.gim_feature & AMDGIM_FEATURE_GIM_FLR_VRAMLOST) { + atomic_inc(&adev->vram_lost_counter); +diff --git 
a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +index 0877ff9a9594..8c9abaa7601a 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c +@@ -850,9 +850,6 @@ static void amdgpu_vm_bo_param(struct amdgpu_device *adev, struct amdgpu_vm *vm, + bp->size = amdgpu_vm_bo_size(adev, level); + bp->byte_align = AMDGPU_GPU_PAGE_SIZE; + bp->domain = AMDGPU_GEM_DOMAIN_VRAM; +- if (bp->size <= PAGE_SIZE && adev->asic_type >= CHIP_VEGA10 && +- adev->flags & AMD_IS_APU) +- bp->domain |= AMDGPU_GEM_DOMAIN_GTT; + bp->domain = amdgpu_bo_get_preferred_pin_domain(adev, bp->domain); + bp->flags = AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS | + AMDGPU_GEM_CREATE_CPU_GTT_USWC; +diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +index 21363b2b2ee5..88ed064b3585 100644 +--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c +@@ -112,7 +112,10 @@ static const struct soc15_reg_golden golden_settings_gc_9_0[] = + SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_HI, 0xffffffff, 0x4a2c0e68), + SOC15_REG_GOLDEN_VALUE(GC, 0, mmTCP_CHAN_STEER_LO, 0xffffffff, 0xb5d3f197), + SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_CACHE_INVALIDATION, 0x3fff3af3, 0x19200000), +- SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_GS_MAX_WAVE_ID, 0x00000fff, 0x000003ff) ++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmVGT_GS_MAX_WAVE_ID, 0x00000fff, 0x000003ff), ++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC1_F32_INT_DIS, 0x00000000, 0x00000800), ++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC2_F32_INT_DIS, 0x00000000, 0x00000800), ++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_DEBUG, 0x00000000, 0x00008000) + }; + + static const struct soc15_reg_golden golden_settings_gc_9_0_vg10[] = +@@ -134,10 +137,7 @@ static const struct soc15_reg_golden golden_settings_gc_9_0_vg10[] = + SOC15_REG_GOLDEN_VALUE(GC, 0, mmRMI_UTCL1_CNTL2, 0x00030000, 0x00020000), + SOC15_REG_GOLDEN_VALUE(GC, 0, mmSPI_CONFIG_CNTL_1, 0x0000000f, 0x01000107), + SOC15_REG_GOLDEN_VALUE(GC, 0, mmTD_CNTL, 0x00001800, 0x00000800), +- SOC15_REG_GOLDEN_VALUE(GC, 0, mmWD_UTCL1_CNTL, 0x08000000, 0x08000080), +- SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC1_F32_INT_DIS, 0x00000000, 0x00000800), +- SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_MEC2_F32_INT_DIS, 0x00000000, 0x00000800), +- SOC15_REG_GOLDEN_VALUE(GC, 0, mmCP_DEBUG, 0x00000000, 0x00008000) ++ SOC15_REG_GOLDEN_VALUE(GC, 0, mmWD_UTCL1_CNTL, 0x08000000, 0x08000080) + }; + + static const struct soc15_reg_golden golden_settings_gc_9_0_vg20[] = +diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c +index 8cbb4655896a..b11a1c17a7f2 100644 +--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c ++++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c +@@ -174,7 +174,7 @@ static int xgpu_ai_send_access_requests(struct amdgpu_device *adev, + return r; + } + /* Retrieve checksum from mailbox2 */ +- if (req == IDH_REQ_GPU_INIT_ACCESS) { ++ if (req == IDH_REQ_GPU_INIT_ACCESS || req == IDH_REQ_GPU_RESET_ACCESS) { + adev->virt.fw_reserve.checksum_key = + RREG32_NO_KIQ(SOC15_REG_OFFSET(NBIO, 0, + mmBIF_BX_PF0_MAILBOX_MSGBUF_RCV_DW2)); +diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +index 7a8c9172d30a..86d5dc5f8887 100644 +--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c +@@ -73,7 +73,6 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = { + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_RLC1_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000), + 
SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_PAGE, 0x000003ff, 0x000003c0), + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_UTCL1_WATERMK, 0xfc000000, 0x00000000), +- SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831f07), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CLK_CTRL, 0xffffffff, 0x3f000100), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_IB_CNTL, 0x800f0100, 0x00000100), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GFX_RB_WPTR_POLL_CNTL, 0x0000fff0, 0x00403000), +@@ -91,6 +90,7 @@ static const struct soc15_reg_golden golden_settings_sdma_4[] = { + static const struct soc15_reg_golden golden_settings_sdma_vg10[] = { + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0018773f, 0x00104002), + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104002), ++ SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0018773f, 0x00104002), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104002) + }; +@@ -98,6 +98,7 @@ static const struct soc15_reg_golden golden_settings_sdma_vg10[] = { + static const struct soc15_reg_golden golden_settings_sdma_vg12[] = { + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG, 0x0018773f, 0x00104001), + SOC15_REG_GOLDEN_VALUE(SDMA0, 0, mmSDMA0_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104001), ++ SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_CHICKEN_BITS, 0xfe931f07, 0x02831d07), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG, 0x0018773f, 0x00104001), + SOC15_REG_GOLDEN_VALUE(SDMA1, 0, mmSDMA1_GB_ADDR_CONFIG_READ, 0x0018773f, 0x00104001) + }; +diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c +index 3b7fce5d7258..b9e19b0eb905 100644 +--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c ++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega20_hwmgr.c +@@ -2244,6 +2244,13 @@ static int vega20_force_clock_level(struct pp_hwmgr *hwmgr, + soft_min_level = mask ? (ffs(mask) - 1) : 0; + soft_max_level = mask ? (fls(mask) - 1) : 0; + ++ if (soft_max_level >= data->dpm_table.gfx_table.count) { ++ pr_err("Clock level specified %d is over max allowed %d\n", ++ soft_max_level, ++ data->dpm_table.gfx_table.count - 1); ++ return -EINVAL; ++ } ++ + data->dpm_table.gfx_table.dpm_state.soft_min_level = + data->dpm_table.gfx_table.dpm_levels[soft_min_level].value; + data->dpm_table.gfx_table.dpm_state.soft_max_level = +@@ -2264,6 +2271,13 @@ static int vega20_force_clock_level(struct pp_hwmgr *hwmgr, + soft_min_level = mask ? (ffs(mask) - 1) : 0; + soft_max_level = mask ? 
(fls(mask) - 1) : 0; + ++ if (soft_max_level >= data->dpm_table.mem_table.count) { ++ pr_err("Clock level specified %d is over max allowed %d\n", ++ soft_max_level, ++ data->dpm_table.mem_table.count - 1); ++ return -EINVAL; ++ } ++ + data->dpm_table.mem_table.dpm_state.soft_min_level = + data->dpm_table.mem_table.dpm_levels[soft_min_level].value; + data->dpm_table.mem_table.dpm_state.soft_max_level = +diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c +index 8e28e738cb52..391547358756 100644 +--- a/drivers/gpu/drm/bridge/tc358767.c ++++ b/drivers/gpu/drm/bridge/tc358767.c +@@ -98,6 +98,8 @@ + #define DP0_STARTVAL 0x064c + #define DP0_ACTIVEVAL 0x0650 + #define DP0_SYNCVAL 0x0654 ++#define SYNCVAL_HS_POL_ACTIVE_LOW (1 << 15) ++#define SYNCVAL_VS_POL_ACTIVE_LOW (1 << 31) + #define DP0_MISC 0x0658 + #define TU_SIZE_RECOMMENDED (63) /* LSCLK cycles per TU */ + #define BPC_6 (0 << 5) +@@ -142,6 +144,8 @@ + #define DP0_LTLOOPCTRL 0x06d8 + #define DP0_SNKLTCTRL 0x06e4 + ++#define DP1_SRCCTRL 0x07a0 ++ + /* PHY */ + #define DP_PHY_CTRL 0x0800 + #define DP_PHY_RST BIT(28) /* DP PHY Global Soft Reset */ +@@ -150,6 +154,7 @@ + #define PHY_M1_RST BIT(12) /* Reset PHY1 Main Channel */ + #define PHY_RDY BIT(16) /* PHY Main Channels Ready */ + #define PHY_M0_RST BIT(8) /* Reset PHY0 Main Channel */ ++#define PHY_2LANE BIT(2) /* PHY Enable 2 lanes */ + #define PHY_A0_EN BIT(1) /* PHY Aux Channel0 Enable */ + #define PHY_M0_EN BIT(0) /* PHY Main Channel0 Enable */ + +@@ -540,6 +545,7 @@ static int tc_aux_link_setup(struct tc_data *tc) + unsigned long rate; + u32 value; + int ret; ++ u32 dp_phy_ctrl; + + rate = clk_get_rate(tc->refclk); + switch (rate) { +@@ -564,7 +570,10 @@ static int tc_aux_link_setup(struct tc_data *tc) + value |= SYSCLK_SEL_LSCLK | LSCLK_DIV_2; + tc_write(SYS_PLLPARAM, value); + +- tc_write(DP_PHY_CTRL, BGREN | PWR_SW_EN | BIT(2) | PHY_A0_EN); ++ dp_phy_ctrl = BGREN | PWR_SW_EN | PHY_A0_EN; ++ if (tc->link.base.num_lanes == 2) ++ dp_phy_ctrl |= PHY_2LANE; ++ tc_write(DP_PHY_CTRL, dp_phy_ctrl); + + /* + * Initially PLLs are in bypass. Force PLL parameter update, +@@ -719,7 +728,9 @@ static int tc_set_video_mode(struct tc_data *tc, struct drm_display_mode *mode) + + tc_write(DP0_ACTIVEVAL, (mode->vdisplay << 16) | (mode->hdisplay)); + +- tc_write(DP0_SYNCVAL, (vsync_len << 16) | (hsync_len << 0)); ++ tc_write(DP0_SYNCVAL, (vsync_len << 16) | (hsync_len << 0) | ++ ((mode->flags & DRM_MODE_FLAG_NHSYNC) ? SYNCVAL_HS_POL_ACTIVE_LOW : 0) | ++ ((mode->flags & DRM_MODE_FLAG_NVSYNC) ? SYNCVAL_VS_POL_ACTIVE_LOW : 0)); + + tc_write(DPIPXLFMT, VS_POL_ACTIVE_LOW | HS_POL_ACTIVE_LOW | + DE_POL_ACTIVE_HIGH | SUB_CFG_TYPE_CONFIG1 | DPI_BPP_RGB888); +@@ -829,12 +840,11 @@ static int tc_main_link_setup(struct tc_data *tc) + if (!tc->mode) + return -EINVAL; + +- /* from excel file - DP0_SrcCtrl */ +- tc_write(DP0_SRCCTRL, DP0_SRCCTRL_SCRMBLDIS | DP0_SRCCTRL_EN810B | +- DP0_SRCCTRL_LANESKEW | DP0_SRCCTRL_LANES_2 | +- DP0_SRCCTRL_BW27 | DP0_SRCCTRL_AUTOCORRECT); +- /* from excel file - DP1_SrcCtrl */ +- tc_write(0x07a0, 0x00003083); ++ tc_write(DP0_SRCCTRL, tc_srcctrl(tc)); ++ /* SSCG and BW27 on DP1 must be set to the same as on DP0 */ ++ tc_write(DP1_SRCCTRL, ++ (tc->link.spread ? DP0_SRCCTRL_SSCG : 0) | ++ ((tc->link.base.rate != 162000) ? 
DP0_SRCCTRL_BW27 : 0)); + + rate = clk_get_rate(tc->refclk); + switch (rate) { +@@ -855,8 +865,11 @@ static int tc_main_link_setup(struct tc_data *tc) + } + value |= SYSCLK_SEL_LSCLK | LSCLK_DIV_2; + tc_write(SYS_PLLPARAM, value); ++ + /* Setup Main Link */ +- dp_phy_ctrl = BGREN | PWR_SW_EN | BIT(2) | PHY_A0_EN | PHY_M0_EN; ++ dp_phy_ctrl = BGREN | PWR_SW_EN | PHY_A0_EN | PHY_M0_EN; ++ if (tc->link.base.num_lanes == 2) ++ dp_phy_ctrl |= PHY_2LANE; + tc_write(DP_PHY_CTRL, dp_phy_ctrl); + msleep(100); + +@@ -1105,10 +1118,20 @@ static bool tc_bridge_mode_fixup(struct drm_bridge *bridge, + static enum drm_mode_status tc_connector_mode_valid(struct drm_connector *connector, + struct drm_display_mode *mode) + { ++ struct tc_data *tc = connector_to_tc(connector); ++ u32 req, avail; ++ u32 bits_per_pixel = 24; ++ + /* DPI interface clock limitation: upto 154 MHz */ + if (mode->clock > 154000) + return MODE_CLOCK_HIGH; + ++ req = mode->clock * bits_per_pixel / 8; ++ avail = tc->link.base.num_lanes * tc->link.base.rate; ++ ++ if (req > avail) ++ return MODE_BAD; ++ + return MODE_OK; + } + +@@ -1195,6 +1218,10 @@ static int tc_bridge_attach(struct drm_bridge *bridge) + + drm_display_info_set_bus_formats(&tc->connector.display_info, + &bus_format, 1); ++ tc->connector.display_info.bus_flags = ++ DRM_BUS_FLAG_DE_HIGH | ++ DRM_BUS_FLAG_PIXDATA_NEGEDGE | ++ DRM_BUS_FLAG_SYNC_NEGEDGE; + drm_connector_attach_encoder(&tc->connector, tc->bridge.encoder); + + return 0; +diff --git a/drivers/gpu/drm/drm_lease.c b/drivers/gpu/drm/drm_lease.c +index c61680ad962d..6e59789e3316 100644 +--- a/drivers/gpu/drm/drm_lease.c ++++ b/drivers/gpu/drm/drm_lease.c +@@ -521,7 +521,8 @@ int drm_mode_create_lease_ioctl(struct drm_device *dev, + + object_count = cl->object_count; + +- object_ids = memdup_user(u64_to_user_ptr(cl->object_ids), object_count * sizeof(__u32)); ++ object_ids = memdup_user(u64_to_user_ptr(cl->object_ids), ++ array_size(object_count, sizeof(__u32))); + if (IS_ERR(object_ids)) + return PTR_ERR(object_ids); + +diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c +index 6ae9a6080cc8..296f9c0fe19b 100644 +--- a/drivers/gpu/drm/i915/i915_gem.c ++++ b/drivers/gpu/drm/i915/i915_gem.c +@@ -1826,6 +1826,16 @@ i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data, + return 0; + } + ++static inline bool ++__vma_matches(struct vm_area_struct *vma, struct file *filp, ++ unsigned long addr, unsigned long size) ++{ ++ if (vma->vm_file != filp) ++ return false; ++ ++ return vma->vm_start == addr && (vma->vm_end - vma->vm_start) == size; ++} ++ + /** + * i915_gem_mmap_ioctl - Maps the contents of an object, returning the address + * it is mapped to. 
+@@ -1884,7 +1894,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data, + return -EINTR; + } + vma = find_vma(mm, addr); +- if (vma) ++ if (vma && __vma_matches(vma, obj->base.filp, addr, args->size)) + vma->vm_page_prot = + pgprot_writecombine(vm_get_page_prot(vma->vm_flags)); + else +diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h +index db6fa1d0cbda..f35139b3ebc5 100644 +--- a/drivers/gpu/drm/i915/intel_drv.h ++++ b/drivers/gpu/drm/i915/intel_drv.h +@@ -209,6 +209,16 @@ struct intel_fbdev { + unsigned long vma_flags; + async_cookie_t cookie; + int preferred_bpp; ++ ++ /* Whether or not fbdev hpd processing is temporarily suspended */ ++ bool hpd_suspended : 1; ++ /* Set when a hotplug was received while HPD processing was ++ * suspended ++ */ ++ bool hpd_waiting : 1; ++ ++ /* Protects hpd_suspended */ ++ struct mutex hpd_lock; + }; + + struct intel_encoder { +diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c +index f99332972b7a..9e2e998b198f 100644 +--- a/drivers/gpu/drm/i915/intel_fbdev.c ++++ b/drivers/gpu/drm/i915/intel_fbdev.c +@@ -679,6 +679,7 @@ int intel_fbdev_init(struct drm_device *dev) + if (ifbdev == NULL) + return -ENOMEM; + ++ mutex_init(&ifbdev->hpd_lock); + drm_fb_helper_prepare(dev, &ifbdev->helper, &intel_fb_helper_funcs); + + if (!intel_fbdev_init_bios(dev, ifbdev)) +@@ -752,6 +753,26 @@ void intel_fbdev_fini(struct drm_i915_private *dev_priv) + intel_fbdev_destroy(ifbdev); + } + ++/* Suspends/resumes fbdev processing of incoming HPD events. When resuming HPD ++ * processing, fbdev will perform a full connector reprobe if a hotplug event ++ * was received while HPD was suspended. ++ */ ++static void intel_fbdev_hpd_set_suspend(struct intel_fbdev *ifbdev, int state) ++{ ++ bool send_hpd = false; ++ ++ mutex_lock(&ifbdev->hpd_lock); ++ ifbdev->hpd_suspended = state == FBINFO_STATE_SUSPENDED; ++ send_hpd = !ifbdev->hpd_suspended && ifbdev->hpd_waiting; ++ ifbdev->hpd_waiting = false; ++ mutex_unlock(&ifbdev->hpd_lock); ++ ++ if (send_hpd) { ++ DRM_DEBUG_KMS("Handling delayed fbcon HPD event\n"); ++ drm_fb_helper_hotplug_event(&ifbdev->helper); ++ } ++} ++ + void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous) + { + struct drm_i915_private *dev_priv = to_i915(dev); +@@ -773,6 +794,7 @@ void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous + */ + if (state != FBINFO_STATE_RUNNING) + flush_work(&dev_priv->fbdev_suspend_work); ++ + console_lock(); + } else { + /* +@@ -800,17 +822,26 @@ void intel_fbdev_set_suspend(struct drm_device *dev, int state, bool synchronous + + drm_fb_helper_set_suspend(&ifbdev->helper, state); + console_unlock(); ++ ++ intel_fbdev_hpd_set_suspend(ifbdev, state); + } + + void intel_fbdev_output_poll_changed(struct drm_device *dev) + { + struct intel_fbdev *ifbdev = to_i915(dev)->fbdev; ++ bool send_hpd; + + if (!ifbdev) + return; + + intel_fbdev_sync(ifbdev); +- if (ifbdev->vma || ifbdev->helper.deferred_setup) ++ ++ mutex_lock(&ifbdev->hpd_lock); ++ send_hpd = !ifbdev->hpd_suspended; ++ ifbdev->hpd_waiting = true; ++ mutex_unlock(&ifbdev->hpd_lock); ++ ++ if (send_hpd && (ifbdev->vma || ifbdev->helper.deferred_setup)) + drm_fb_helper_hotplug_event(&ifbdev->helper); + } + +diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/falcon.c b/drivers/gpu/drm/nouveau/nvkm/engine/falcon.c +index 816ccaedfc73..8675613e142b 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/engine/falcon.c ++++ b/drivers/gpu/drm/nouveau/nvkm/engine/falcon.c +@@ 
-22,6 +22,7 @@ + #include + + #include ++#include + #include + #include + +@@ -107,8 +108,10 @@ nvkm_falcon_fini(struct nvkm_engine *engine, bool suspend) + } + } + +- nvkm_mask(device, base + 0x048, 0x00000003, 0x00000000); +- nvkm_wr32(device, base + 0x014, 0xffffffff); ++ if (nvkm_mc_enabled(device, engine->subdev.index)) { ++ nvkm_mask(device, base + 0x048, 0x00000003, 0x00000000); ++ nvkm_wr32(device, base + 0x014, 0xffffffff); ++ } + return 0; + } + +diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c +index 3695cde669f8..07914e36939e 100644 +--- a/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c ++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/therm/base.c +@@ -132,11 +132,12 @@ nvkm_therm_update(struct nvkm_therm *therm, int mode) + duty = nvkm_therm_update_linear(therm); + break; + case NVBIOS_THERM_FAN_OTHER: +- if (therm->cstate) ++ if (therm->cstate) { + duty = therm->cstate; +- else ++ poll = false; ++ } else { + duty = nvkm_therm_update_linear_fallback(therm); +- poll = false; ++ } + break; + } + immd = false; +diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c +index 3e22a54a99c2..2c02f5b03db8 100644 +--- a/drivers/gpu/drm/scheduler/sched_entity.c ++++ b/drivers/gpu/drm/scheduler/sched_entity.c +@@ -434,13 +434,10 @@ struct drm_sched_job *drm_sched_entity_pop_job(struct drm_sched_entity *entity) + + while ((entity->dependency = + sched->ops->dependency(sched_job, entity))) { ++ trace_drm_sched_job_wait_dep(sched_job, entity->dependency); + +- if (drm_sched_entity_add_dependency_cb(entity)) { +- +- trace_drm_sched_job_wait_dep(sched_job, +- entity->dependency); ++ if (drm_sched_entity_add_dependency_cb(entity)) + return NULL; +- } + } + + /* skip jobs from entity that marked guilty */ +diff --git a/drivers/gpu/drm/vkms/vkms_crc.c b/drivers/gpu/drm/vkms/vkms_crc.c +index 9d9e8146db90..d7b409a3c0f8 100644 +--- a/drivers/gpu/drm/vkms/vkms_crc.c ++++ b/drivers/gpu/drm/vkms/vkms_crc.c +@@ -1,4 +1,5 @@ +-// SPDX-License-Identifier: GPL-2.0 ++// SPDX-License-Identifier: GPL-2.0+ ++ + #include "vkms_drv.h" + #include + #include +diff --git a/drivers/gpu/drm/vkms/vkms_crtc.c b/drivers/gpu/drm/vkms/vkms_crtc.c +index 177bbcb38306..eb56ee893761 100644 +--- a/drivers/gpu/drm/vkms/vkms_crtc.c ++++ b/drivers/gpu/drm/vkms/vkms_crtc.c +@@ -1,10 +1,4 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * This program is free software; you can redistribute it and/or modify +- * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation; either version 2 of the License, or +- * (at your option) any later version. +- */ ++// SPDX-License-Identifier: GPL-2.0+ + + #include "vkms_drv.h" + #include +diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c +index 07cfde1b4132..8048b2486b0e 100644 +--- a/drivers/gpu/drm/vkms/vkms_drv.c ++++ b/drivers/gpu/drm/vkms/vkms_drv.c +@@ -1,9 +1,4 @@ +-/* +- * This program is free software; you can redistribute it and/or modify +- * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation; either version 2 of the License, or +- * (at your option) any later version. 
+- */ ++// SPDX-License-Identifier: GPL-2.0+ + + /** + * DOC: vkms (Virtual Kernel Modesetting) +diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h +index 1c93990693e3..5adbc6fca41b 100644 +--- a/drivers/gpu/drm/vkms/vkms_drv.h ++++ b/drivers/gpu/drm/vkms/vkms_drv.h +@@ -1,3 +1,5 @@ ++/* SPDX-License-Identifier: GPL-2.0+ */ ++ + #ifndef _VKMS_DRV_H_ + #define _VKMS_DRV_H_ + +diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c +index d04e988b4cbe..8310b96d4a9c 100644 +--- a/drivers/gpu/drm/vkms/vkms_gem.c ++++ b/drivers/gpu/drm/vkms/vkms_gem.c +@@ -1,10 +1,4 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * This program is free software; you can redistribute it and/or modify +- * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation; either version 2 of the License, or +- * (at your option) any later version. +- */ ++// SPDX-License-Identifier: GPL-2.0+ + + #include + +diff --git a/drivers/gpu/drm/vkms/vkms_output.c b/drivers/gpu/drm/vkms/vkms_output.c +index 271a0eb9042c..4173e4f48334 100644 +--- a/drivers/gpu/drm/vkms/vkms_output.c ++++ b/drivers/gpu/drm/vkms/vkms_output.c +@@ -1,10 +1,4 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * This program is free software; you can redistribute it and/or modify +- * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation; either version 2 of the License, or +- * (at your option) any later version. +- */ ++// SPDX-License-Identifier: GPL-2.0+ + + #include "vkms_drv.h" + #include +diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c +index e3bcea4b4891..8ffc1dad6485 100644 +--- a/drivers/gpu/drm/vkms/vkms_plane.c ++++ b/drivers/gpu/drm/vkms/vkms_plane.c +@@ -1,10 +1,4 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * This program is free software; you can redistribute it and/or modify +- * it under the terms of the GNU General Public License as published by +- * the Free Software Foundation; either version 2 of the License, or +- * (at your option) any later version. 
+- */ ++// SPDX-License-Identifier: GPL-2.0+ + + #include "vkms_drv.h" + #include +diff --git a/drivers/input/misc/bma150.c b/drivers/input/misc/bma150.c +index 1efcfdf9f8a8..dd9dd4e40827 100644 +--- a/drivers/input/misc/bma150.c ++++ b/drivers/input/misc/bma150.c +@@ -481,13 +481,14 @@ static int bma150_register_input_device(struct bma150_data *bma150) + idev->close = bma150_irq_close; + input_set_drvdata(idev, bma150); + ++ bma150->input = idev; ++ + error = input_register_device(idev); + if (error) { + input_free_device(idev); + return error; + } + +- bma150->input = idev; + return 0; + } + +@@ -510,15 +511,15 @@ static int bma150_register_polled_device(struct bma150_data *bma150) + + bma150_init_input_device(bma150, ipoll_dev->input); + ++ bma150->input_polled = ipoll_dev; ++ bma150->input = ipoll_dev->input; ++ + error = input_register_polled_device(ipoll_dev); + if (error) { + input_free_polled_device(ipoll_dev); + return error; + } + +- bma150->input_polled = ipoll_dev; +- bma150->input = ipoll_dev->input; +- + return 0; + } + +diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c +index f322a1768fbb..225ae6980182 100644 +--- a/drivers/input/mouse/elan_i2c_core.c ++++ b/drivers/input/mouse/elan_i2c_core.c +@@ -1336,7 +1336,6 @@ MODULE_DEVICE_TABLE(i2c, elan_id); + static const struct acpi_device_id elan_acpi_id[] = { + { "ELAN0000", 0 }, + { "ELAN0100", 0 }, +- { "ELAN0501", 0 }, + { "ELAN0600", 0 }, + { "ELAN0602", 0 }, + { "ELAN0605", 0 }, +@@ -1346,6 +1345,7 @@ static const struct acpi_device_id elan_acpi_id[] = { + { "ELAN060C", 0 }, + { "ELAN0611", 0 }, + { "ELAN0612", 0 }, ++ { "ELAN0617", 0 }, + { "ELAN0618", 0 }, + { "ELAN061C", 0 }, + { "ELAN061D", 0 }, +diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c +index 9fe075c137dc..a7f8b1614559 100644 +--- a/drivers/input/mouse/elantech.c ++++ b/drivers/input/mouse/elantech.c +@@ -1119,6 +1119,8 @@ static int elantech_get_resolution_v4(struct psmouse *psmouse, + * Asus UX31 0x361f00 20, 15, 0e clickpad + * Asus UX32VD 0x361f02 00, 15, 0e clickpad + * Avatar AVIU-145A2 0x361f00 ? 
clickpad ++ * Fujitsu CELSIUS H760 0x570f02 40, 14, 0c 3 hw buttons (**) ++ * Fujitsu CELSIUS H780 0x5d0f02 41, 16, 0d 3 hw buttons (**) + * Fujitsu LIFEBOOK E544 0x470f00 d0, 12, 09 2 hw buttons + * Fujitsu LIFEBOOK E546 0x470f00 50, 12, 09 2 hw buttons + * Fujitsu LIFEBOOK E547 0x470f00 50, 12, 09 2 hw buttons +@@ -1171,6 +1173,13 @@ static const struct dmi_system_id elantech_dmi_has_middle_button[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H760"), + }, + }, ++ { ++ /* Fujitsu H780 also has a middle button */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "CELSIUS H780"), ++ }, ++ }, + #endif + { } + }; +diff --git a/drivers/irqchip/irq-csky-apb-intc.c b/drivers/irqchip/irq-csky-apb-intc.c +index 2543baba8b1f..5a2ec43b7ddd 100644 +--- a/drivers/irqchip/irq-csky-apb-intc.c ++++ b/drivers/irqchip/irq-csky-apb-intc.c +@@ -95,7 +95,7 @@ static inline void setup_irq_channel(u32 magic, void __iomem *reg_addr) + + /* Setup 64 channel slots */ + for (i = 0; i < INTC_IRQS; i += 4) +- writel_relaxed(build_channel_val(i, magic), reg_addr + i); ++ writel(build_channel_val(i, magic), reg_addr + i); + } + + static int __init +@@ -135,16 +135,10 @@ ck_intc_init_comm(struct device_node *node, struct device_node *parent) + static inline bool handle_irq_perbit(struct pt_regs *regs, u32 hwirq, + u32 irq_base) + { +- u32 irq; +- + if (hwirq == 0) + return 0; + +- while (hwirq) { +- irq = __ffs(hwirq); +- hwirq &= ~BIT(irq); +- handle_domain_irq(root_domain, irq_base + irq, regs); +- } ++ handle_domain_irq(root_domain, irq_base + __fls(hwirq), regs); + + return 1; + } +@@ -154,12 +148,16 @@ static void gx_irq_handler(struct pt_regs *regs) + { + bool ret; + +- do { +- ret = handle_irq_perbit(regs, +- readl_relaxed(reg_base + GX_INTC_PEN31_00), 0); +- ret |= handle_irq_perbit(regs, +- readl_relaxed(reg_base + GX_INTC_PEN63_32), 32); +- } while (ret); ++retry: ++ ret = handle_irq_perbit(regs, ++ readl(reg_base + GX_INTC_PEN63_32), 32); ++ if (ret) ++ goto retry; ++ ++ ret = handle_irq_perbit(regs, ++ readl(reg_base + GX_INTC_PEN31_00), 0); ++ if (ret) ++ goto retry; + } + + static int __init +@@ -174,14 +172,14 @@ gx_intc_init(struct device_node *node, struct device_node *parent) + /* + * Initial enable reg to disable all interrupts + */ +- writel_relaxed(0x0, reg_base + GX_INTC_NEN31_00); +- writel_relaxed(0x0, reg_base + GX_INTC_NEN63_32); ++ writel(0x0, reg_base + GX_INTC_NEN31_00); ++ writel(0x0, reg_base + GX_INTC_NEN63_32); + + /* + * Initial mask reg with all unmasked, because we only use enalbe reg + */ +- writel_relaxed(0x0, reg_base + GX_INTC_NMASK31_00); +- writel_relaxed(0x0, reg_base + GX_INTC_NMASK63_32); ++ writel(0x0, reg_base + GX_INTC_NMASK31_00); ++ writel(0x0, reg_base + GX_INTC_NMASK63_32); + + setup_irq_channel(0x03020100, reg_base + GX_INTC_SOURCE); + +@@ -204,20 +202,29 @@ static void ck_irq_handler(struct pt_regs *regs) + void __iomem *reg_pen_lo = reg_base + CK_INTC_PEN31_00; + void __iomem *reg_pen_hi = reg_base + CK_INTC_PEN63_32; + +- do { +- /* handle 0 - 31 irqs */ +- ret = handle_irq_perbit(regs, readl_relaxed(reg_pen_lo), 0); +- ret |= handle_irq_perbit(regs, readl_relaxed(reg_pen_hi), 32); ++retry: ++ /* handle 0 - 63 irqs */ ++ ret = handle_irq_perbit(regs, readl(reg_pen_hi), 32); ++ if (ret) ++ goto retry; + +- if (nr_irq == INTC_IRQS) +- continue; ++ ret = handle_irq_perbit(regs, readl(reg_pen_lo), 0); ++ if (ret) ++ goto retry; ++ ++ if (nr_irq == INTC_IRQS) ++ return; + +- /* handle 64 - 127 irqs */ +- ret |= 
handle_irq_perbit(regs, +- readl_relaxed(reg_pen_lo + CK_INTC_DUAL_BASE), 64); +- ret |= handle_irq_perbit(regs, +- readl_relaxed(reg_pen_hi + CK_INTC_DUAL_BASE), 96); +- } while (ret); ++ /* handle 64 - 127 irqs */ ++ ret = handle_irq_perbit(regs, ++ readl(reg_pen_hi + CK_INTC_DUAL_BASE), 96); ++ if (ret) ++ goto retry; ++ ++ ret = handle_irq_perbit(regs, ++ readl(reg_pen_lo + CK_INTC_DUAL_BASE), 64); ++ if (ret) ++ goto retry; + } + + static int __init +@@ -230,11 +237,11 @@ ck_intc_init(struct device_node *node, struct device_node *parent) + return ret; + + /* Initial enable reg to disable all interrupts */ +- writel_relaxed(0, reg_base + CK_INTC_NEN31_00); +- writel_relaxed(0, reg_base + CK_INTC_NEN63_32); ++ writel(0, reg_base + CK_INTC_NEN31_00); ++ writel(0, reg_base + CK_INTC_NEN63_32); + + /* Enable irq intc */ +- writel_relaxed(BIT(31), reg_base + CK_INTC_ICR); ++ writel(BIT(31), reg_base + CK_INTC_ICR); + + ck_set_gc(node, reg_base, CK_INTC_NEN31_00, 0); + ck_set_gc(node, reg_base, CK_INTC_NEN63_32, 32); +@@ -260,8 +267,8 @@ ck_dual_intc_init(struct device_node *node, struct device_node *parent) + return ret; + + /* Initial enable reg to disable all interrupts */ +- writel_relaxed(0, reg_base + CK_INTC_NEN31_00 + CK_INTC_DUAL_BASE); +- writel_relaxed(0, reg_base + CK_INTC_NEN63_32 + CK_INTC_DUAL_BASE); ++ writel(0, reg_base + CK_INTC_NEN31_00 + CK_INTC_DUAL_BASE); ++ writel(0, reg_base + CK_INTC_NEN63_32 + CK_INTC_DUAL_BASE); + + ck_set_gc(node, reg_base + CK_INTC_DUAL_BASE, CK_INTC_NEN31_00, 64); + ck_set_gc(node, reg_base + CK_INTC_DUAL_BASE, CK_INTC_NEN63_32, 96); +diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c +index 1ef828575fae..9fc5423f83c1 100644 +--- a/drivers/md/dm-crypt.c ++++ b/drivers/md/dm-crypt.c +@@ -932,7 +932,7 @@ static int dm_crypt_integrity_io_alloc(struct dm_crypt_io *io, struct bio *bio) + if (IS_ERR(bip)) + return PTR_ERR(bip); + +- tag_len = io->cc->on_disk_tag_size * bio_sectors(bio); ++ tag_len = io->cc->on_disk_tag_size * (bio_sectors(bio) >> io->cc->sector_shift); + + bip->bip_iter.bi_size = tag_len; + bip->bip_iter.bi_sector = io->cc->start + io->sector; +diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c +index ca8af21bf644..e83b63608262 100644 +--- a/drivers/md/dm-thin.c ++++ b/drivers/md/dm-thin.c +@@ -257,6 +257,7 @@ struct pool { + + spinlock_t lock; + struct bio_list deferred_flush_bios; ++ struct bio_list deferred_flush_completions; + struct list_head prepared_mappings; + struct list_head prepared_discards; + struct list_head prepared_discards_pt2; +@@ -956,6 +957,39 @@ static void process_prepared_mapping_fail(struct dm_thin_new_mapping *m) + mempool_free(m, &m->tc->pool->mapping_pool); + } + ++static void complete_overwrite_bio(struct thin_c *tc, struct bio *bio) ++{ ++ struct pool *pool = tc->pool; ++ unsigned long flags; ++ ++ /* ++ * If the bio has the REQ_FUA flag set we must commit the metadata ++ * before signaling its completion. ++ */ ++ if (!bio_triggers_commit(tc, bio)) { ++ bio_endio(bio); ++ return; ++ } ++ ++ /* ++ * Complete bio with an error if earlier I/O caused changes to the ++ * metadata that can't be committed, e.g, due to I/O errors on the ++ * metadata device. ++ */ ++ if (dm_thin_aborted_changes(tc->td)) { ++ bio_io_error(bio); ++ return; ++ } ++ ++ /* ++ * Batch together any bios that trigger commits and then issue a ++ * single commit for them in process_deferred_bios(). 
++ */ ++ spin_lock_irqsave(&pool->lock, flags); ++ bio_list_add(&pool->deferred_flush_completions, bio); ++ spin_unlock_irqrestore(&pool->lock, flags); ++} ++ + static void process_prepared_mapping(struct dm_thin_new_mapping *m) + { + struct thin_c *tc = m->tc; +@@ -988,7 +1022,7 @@ static void process_prepared_mapping(struct dm_thin_new_mapping *m) + */ + if (bio) { + inc_remap_and_issue_cell(tc, m->cell, m->data_block); +- bio_endio(bio); ++ complete_overwrite_bio(tc, bio); + } else { + inc_all_io_entry(tc->pool, m->cell->holder); + remap_and_issue(tc, m->cell->holder, m->data_block); +@@ -2317,7 +2351,7 @@ static void process_deferred_bios(struct pool *pool) + { + unsigned long flags; + struct bio *bio; +- struct bio_list bios; ++ struct bio_list bios, bio_completions; + struct thin_c *tc; + + tc = get_first_thin(pool); +@@ -2328,26 +2362,36 @@ static void process_deferred_bios(struct pool *pool) + } + + /* +- * If there are any deferred flush bios, we must commit +- * the metadata before issuing them. ++ * If there are any deferred flush bios, we must commit the metadata ++ * before issuing them or signaling their completion. + */ + bio_list_init(&bios); ++ bio_list_init(&bio_completions); ++ + spin_lock_irqsave(&pool->lock, flags); + bio_list_merge(&bios, &pool->deferred_flush_bios); + bio_list_init(&pool->deferred_flush_bios); ++ ++ bio_list_merge(&bio_completions, &pool->deferred_flush_completions); ++ bio_list_init(&pool->deferred_flush_completions); + spin_unlock_irqrestore(&pool->lock, flags); + +- if (bio_list_empty(&bios) && ++ if (bio_list_empty(&bios) && bio_list_empty(&bio_completions) && + !(dm_pool_changed_this_transaction(pool->pmd) && need_commit_due_to_time(pool))) + return; + + if (commit(pool)) { ++ bio_list_merge(&bios, &bio_completions); ++ + while ((bio = bio_list_pop(&bios))) + bio_io_error(bio); + return; + } + pool->last_commit_jiffies = jiffies; + ++ while ((bio = bio_list_pop(&bio_completions))) ++ bio_endio(bio); ++ + while ((bio = bio_list_pop(&bios))) + generic_make_request(bio); + } +@@ -2954,6 +2998,7 @@ static struct pool *pool_create(struct mapped_device *pool_md, + INIT_DELAYED_WORK(&pool->no_space_timeout, do_no_space_timeout); + spin_lock_init(&pool->lock); + bio_list_init(&pool->deferred_flush_bios); ++ bio_list_init(&pool->deferred_flush_completions); + INIT_LIST_HEAD(&pool->prepared_mappings); + INIT_LIST_HEAD(&pool->prepared_discards); + INIT_LIST_HEAD(&pool->prepared_discards_pt2); +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c +index 1d54109071cc..fa47249fa3e4 100644 +--- a/drivers/md/raid1.c ++++ b/drivers/md/raid1.c +@@ -1863,6 +1863,20 @@ static void end_sync_read(struct bio *bio) + reschedule_retry(r1_bio); + } + ++static void abort_sync_write(struct mddev *mddev, struct r1bio *r1_bio) ++{ ++ sector_t sync_blocks = 0; ++ sector_t s = r1_bio->sector; ++ long sectors_to_go = r1_bio->sectors; ++ ++ /* make sure these bits don't get cleared. */ ++ do { ++ md_bitmap_end_sync(mddev->bitmap, s, &sync_blocks, 1); ++ s += sync_blocks; ++ sectors_to_go -= sync_blocks; ++ } while (sectors_to_go > 0); ++} ++ + static void end_sync_write(struct bio *bio) + { + int uptodate = !bio->bi_status; +@@ -1874,15 +1888,7 @@ static void end_sync_write(struct bio *bio) + struct md_rdev *rdev = conf->mirrors[find_bio_disk(r1_bio, bio)].rdev; + + if (!uptodate) { +- sector_t sync_blocks = 0; +- sector_t s = r1_bio->sector; +- long sectors_to_go = r1_bio->sectors; +- /* make sure these bits doesn't get cleared. 
*/ +- do { +- md_bitmap_end_sync(mddev->bitmap, s, &sync_blocks, 1); +- s += sync_blocks; +- sectors_to_go -= sync_blocks; +- } while (sectors_to_go > 0); ++ abort_sync_write(mddev, r1_bio); + set_bit(WriteErrorSeen, &rdev->flags); + if (!test_and_set_bit(WantReplacement, &rdev->flags)) + set_bit(MD_RECOVERY_NEEDED, & +@@ -2172,8 +2178,10 @@ static void sync_request_write(struct mddev *mddev, struct r1bio *r1_bio) + (i == r1_bio->read_disk || + !test_bit(MD_RECOVERY_SYNC, &mddev->recovery)))) + continue; +- if (test_bit(Faulty, &conf->mirrors[i].rdev->flags)) ++ if (test_bit(Faulty, &conf->mirrors[i].rdev->flags)) { ++ abort_sync_write(mddev, r1_bio); + continue; ++ } + + bio_set_op_attrs(wbio, REQ_OP_WRITE, 0); + if (test_bit(FailFast, &conf->mirrors[i].rdev->flags)) +diff --git a/drivers/misc/eeprom/Kconfig b/drivers/misc/eeprom/Kconfig +index fe7a1d27a017..a846faefa210 100644 +--- a/drivers/misc/eeprom/Kconfig ++++ b/drivers/misc/eeprom/Kconfig +@@ -13,7 +13,7 @@ config EEPROM_AT24 + ones like at24c64, 24lc02 or fm24c04: + + 24c00, 24c01, 24c02, spd (readonly 24c02), 24c04, 24c08, +- 24c16, 24c32, 24c64, 24c128, 24c256, 24c512, 24c1024 ++ 24c16, 24c32, 24c64, 24c128, 24c256, 24c512, 24c1024, 24c2048 + + Unless you like data loss puzzles, always be sure that any chip + you configure as a 24c32 (32 kbit) or larger is NOT really a +diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c +index 636ed7149793..ddfcf4ade7bf 100644 +--- a/drivers/misc/eeprom/at24.c ++++ b/drivers/misc/eeprom/at24.c +@@ -156,6 +156,7 @@ AT24_CHIP_DATA(at24_data_24c128, 131072 / 8, AT24_FLAG_ADDR16); + AT24_CHIP_DATA(at24_data_24c256, 262144 / 8, AT24_FLAG_ADDR16); + AT24_CHIP_DATA(at24_data_24c512, 524288 / 8, AT24_FLAG_ADDR16); + AT24_CHIP_DATA(at24_data_24c1024, 1048576 / 8, AT24_FLAG_ADDR16); ++AT24_CHIP_DATA(at24_data_24c2048, 2097152 / 8, AT24_FLAG_ADDR16); + /* identical to 24c08 ? 
*/ + AT24_CHIP_DATA(at24_data_INT3499, 8192 / 8, 0); + +@@ -182,6 +183,7 @@ static const struct i2c_device_id at24_ids[] = { + { "24c256", (kernel_ulong_t)&at24_data_24c256 }, + { "24c512", (kernel_ulong_t)&at24_data_24c512 }, + { "24c1024", (kernel_ulong_t)&at24_data_24c1024 }, ++ { "24c2048", (kernel_ulong_t)&at24_data_24c2048 }, + { "at24", 0 }, + { /* END OF LIST */ } + }; +@@ -210,6 +212,7 @@ static const struct of_device_id at24_of_match[] = { + { .compatible = "atmel,24c256", .data = &at24_data_24c256 }, + { .compatible = "atmel,24c512", .data = &at24_data_24c512 }, + { .compatible = "atmel,24c1024", .data = &at24_data_24c1024 }, ++ { .compatible = "atmel,24c2048", .data = &at24_data_24c2048 }, + { /* END OF LIST */ }, + }; + MODULE_DEVICE_TABLE(of, at24_of_match); +diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c +index 111934838da2..4f1912a1e071 100644 +--- a/drivers/mmc/core/block.c ++++ b/drivers/mmc/core/block.c +@@ -2114,7 +2114,7 @@ static void mmc_blk_mq_req_done(struct mmc_request *mrq) + if (waiting) + wake_up(&mq->wait); + else +- kblockd_schedule_work(&mq->complete_work); ++ queue_work(mq->card->complete_wq, &mq->complete_work); + + return; + } +@@ -2928,6 +2928,13 @@ static int mmc_blk_probe(struct mmc_card *card) + + mmc_fixup_device(card, mmc_blk_fixups); + ++ card->complete_wq = alloc_workqueue("mmc_complete", ++ WQ_MEM_RECLAIM | WQ_HIGHPRI, 0); ++ if (unlikely(!card->complete_wq)) { ++ pr_err("Failed to create mmc completion workqueue"); ++ return -ENOMEM; ++ } ++ + md = mmc_blk_alloc(card); + if (IS_ERR(md)) + return PTR_ERR(md); +@@ -2991,6 +2998,7 @@ static void mmc_blk_remove(struct mmc_card *card) + pm_runtime_put_noidle(&card->dev); + mmc_blk_remove_req(md); + dev_set_drvdata(&card->dev, NULL); ++ destroy_workqueue(card->complete_wq); + } + + static int _mmc_blk_suspend(struct mmc_card *card) +diff --git a/drivers/mmc/host/sunxi-mmc.c b/drivers/mmc/host/sunxi-mmc.c +index 279e326e397e..70fadc976795 100644 +--- a/drivers/mmc/host/sunxi-mmc.c ++++ b/drivers/mmc/host/sunxi-mmc.c +@@ -1399,13 +1399,37 @@ static int sunxi_mmc_probe(struct platform_device *pdev) + mmc->caps |= MMC_CAP_MMC_HIGHSPEED | MMC_CAP_SD_HIGHSPEED | + MMC_CAP_ERASE | MMC_CAP_SDIO_IRQ; + +- if (host->cfg->clk_delays || host->use_new_timings) ++ /* ++ * Some H5 devices do not have signal traces precise enough to ++ * use HS DDR mode for their eMMC chips. ++ * ++ * We still enable HS DDR modes for all the other controller ++ * variants that support them. ++ */ ++ if ((host->cfg->clk_delays || host->use_new_timings) && ++ !of_device_is_compatible(pdev->dev.of_node, ++ "allwinner,sun50i-h5-emmc")) + mmc->caps |= MMC_CAP_1_8V_DDR | MMC_CAP_3_3V_DDR; + + ret = mmc_of_parse(mmc); + if (ret) + goto error_free_dma; + ++ /* ++ * If we don't support delay chains in the SoC, we can't use any ++ * of the higher speed modes. Mask them out in case the device ++ * tree specifies the properties for them, which gets added to ++ * the caps by mmc_of_parse() above. 
++ */ ++ if (!(host->cfg->clk_delays || host->use_new_timings)) { ++ mmc->caps &= ~(MMC_CAP_3_3V_DDR | MMC_CAP_1_8V_DDR | ++ MMC_CAP_1_2V_DDR | MMC_CAP_UHS); ++ mmc->caps2 &= ~MMC_CAP2_HS200; ++ } ++ ++ /* TODO: This driver doesn't support HS400 mode yet */ ++ mmc->caps2 &= ~MMC_CAP2_HS400; ++ + ret = sunxi_mmc_init_host(host); + if (ret) + goto error_free_dma; +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 962012135b62..5f9a5ef93969 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -2084,18 +2084,20 @@ static void nvme_init_subnqn(struct nvme_subsystem *subsys, struct nvme_ctrl *ct + size_t nqnlen; + int off; + +- nqnlen = strnlen(id->subnqn, NVMF_NQN_SIZE); +- if (nqnlen > 0 && nqnlen < NVMF_NQN_SIZE) { +- strlcpy(subsys->subnqn, id->subnqn, NVMF_NQN_SIZE); +- return; +- } ++ if(!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) { ++ nqnlen = strnlen(id->subnqn, NVMF_NQN_SIZE); ++ if (nqnlen > 0 && nqnlen < NVMF_NQN_SIZE) { ++ strlcpy(subsys->subnqn, id->subnqn, NVMF_NQN_SIZE); ++ return; ++ } + +- if (ctrl->vs >= NVME_VS(1, 2, 1)) +- dev_warn(ctrl->device, "missing or invalid SUBNQN field.\n"); ++ if (ctrl->vs >= NVME_VS(1, 2, 1)) ++ dev_warn(ctrl->device, "missing or invalid SUBNQN field.\n"); ++ } + + /* Generate a "fake" NQN per Figure 254 in NVMe 1.3 + ECN 001 */ + off = snprintf(subsys->subnqn, NVMF_NQN_SIZE, +- "nqn.2014.08.org.nvmexpress:%4x%4x", ++ "nqn.2014.08.org.nvmexpress:%04x%04x", + le16_to_cpu(id->vid), le16_to_cpu(id->ssvid)); + memcpy(subsys->subnqn + off, id->sn, sizeof(id->sn)); + off += sizeof(id->sn); +diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c +index 9901afd804ce..2b1d1f066efa 100644 +--- a/drivers/nvme/host/multipath.c ++++ b/drivers/nvme/host/multipath.c +@@ -586,6 +586,7 @@ int nvme_mpath_init(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id) + return 0; + out_free_ana_log_buf: + kfree(ctrl->ana_log_buf); ++ ctrl->ana_log_buf = NULL; + out: + return error; + } +@@ -593,5 +594,6 @@ out: + void nvme_mpath_uninit(struct nvme_ctrl *ctrl) + { + kfree(ctrl->ana_log_buf); ++ ctrl->ana_log_buf = NULL; + } + +diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h +index 081cbdcce880..6ffa99a10a60 100644 +--- a/drivers/nvme/host/nvme.h ++++ b/drivers/nvme/host/nvme.h +@@ -90,6 +90,11 @@ enum nvme_quirks { + * Set MEDIUM priority on SQ creation + */ + NVME_QUIRK_MEDIUM_PRIO_SQ = (1 << 7), ++ ++ /* ++ * Ignore device provided subnqn. 
++ */ ++ NVME_QUIRK_IGNORE_DEV_SUBNQN = (1 << 8), + }; + + /* +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index c33bb201b884..c0d01048ce4d 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -913,9 +913,11 @@ static void nvme_complete_cqes(struct nvme_queue *nvmeq, u16 start, u16 end) + + static inline void nvme_update_cq_head(struct nvme_queue *nvmeq) + { +- if (++nvmeq->cq_head == nvmeq->q_depth) { ++ if (nvmeq->cq_head == nvmeq->q_depth - 1) { + nvmeq->cq_head = 0; + nvmeq->cq_phase = !nvmeq->cq_phase; ++ } else { ++ nvmeq->cq_head++; + } + } + +@@ -1748,8 +1750,9 @@ static void nvme_free_host_mem(struct nvme_dev *dev) + struct nvme_host_mem_buf_desc *desc = &dev->host_mem_descs[i]; + size_t size = le32_to_cpu(desc->size) * dev->ctrl.page_size; + +- dma_free_coherent(dev->dev, size, dev->host_mem_desc_bufs[i], +- le64_to_cpu(desc->addr)); ++ dma_free_attrs(dev->dev, size, dev->host_mem_desc_bufs[i], ++ le64_to_cpu(desc->addr), ++ DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_NO_WARN); + } + + kfree(dev->host_mem_desc_bufs); +@@ -1815,8 +1818,9 @@ out_free_bufs: + while (--i >= 0) { + size_t size = le32_to_cpu(descs[i].size) * dev->ctrl.page_size; + +- dma_free_coherent(dev->dev, size, bufs[i], +- le64_to_cpu(descs[i].addr)); ++ dma_free_attrs(dev->dev, size, bufs[i], ++ le64_to_cpu(descs[i].addr), ++ DMA_ATTR_NO_KERNEL_MAPPING | DMA_ATTR_NO_WARN); + } + + kfree(bufs); +@@ -2696,6 +2700,8 @@ static const struct pci_device_id nvme_id_table[] = { + { PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */ + .driver_data = NVME_QUIRK_NO_DEEPEST_PS | + NVME_QUIRK_MEDIUM_PRIO_SQ }, ++ { PCI_VDEVICE(INTEL, 0xf1a6), /* Intel 760p/Pro 7600p */ ++ .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, + { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */ + .driver_data = NVME_QUIRK_IDENTIFY_CNS, }, + { PCI_DEVICE(0x1bb1, 0x0100), /* Seagate Nytro Flash Storage */ +diff --git a/drivers/s390/crypto/ap_bus.c b/drivers/s390/crypto/ap_bus.c +index 9f5a201c4c87..02b52cacde33 100644 +--- a/drivers/s390/crypto/ap_bus.c ++++ b/drivers/s390/crypto/ap_bus.c +@@ -248,7 +248,8 @@ static inline int ap_test_config(unsigned int *field, unsigned int nr) + static inline int ap_test_config_card_id(unsigned int id) + { + if (!ap_configuration) /* QCI not supported */ +- return 1; ++ /* only ids 0...3F may be probed */ ++ return id < 0x40 ? 1 : 0; + return ap_test_config(ap_configuration->apm, id); + } + +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c +index ba4b8b3ce8cf..c0e21433b1d8 100644 +--- a/drivers/scsi/sd.c ++++ b/drivers/scsi/sd.c +@@ -2960,9 +2960,6 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp) + if (rot == 1) { + blk_queue_flag_set(QUEUE_FLAG_NONROT, q); + blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q); +- } else { +- blk_queue_flag_clear(QUEUE_FLAG_NONROT, q); +- blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q); + } + + if (sdkp->device->type == TYPE_ZBC) { +@@ -3099,6 +3096,15 @@ static int sd_revalidate_disk(struct gendisk *disk) + if (sdkp->media_present) { + sd_read_capacity(sdkp, buffer); + ++ /* ++ * set the default to rotational. All non-rotational devices ++ * support the block characteristics VPD page, which will ++ * cause this to be updated correctly and any device which ++ * doesn't support it should be treated as rotational. 
++ */ ++ blk_queue_flag_clear(QUEUE_FLAG_NONROT, q); ++ blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q); ++ + if (scsi_device_supports_vpd(sdp)) { + sd_read_block_provisioning(sdkp); + sd_read_block_limits(sdkp); +diff --git a/drivers/soc/renesas/r8a774c0-sysc.c b/drivers/soc/renesas/r8a774c0-sysc.c +index e1ac4c0f6640..11050e17ea81 100644 +--- a/drivers/soc/renesas/r8a774c0-sysc.c ++++ b/drivers/soc/renesas/r8a774c0-sysc.c +@@ -28,19 +28,6 @@ static struct rcar_sysc_area r8a774c0_areas[] __initdata = { + { "3dg-b", 0x100, 1, R8A774C0_PD_3DG_B, R8A774C0_PD_3DG_A }, + }; + +-static void __init rcar_sysc_fix_parent(struct rcar_sysc_area *areas, +- unsigned int num_areas, u8 id, +- int new_parent) +-{ +- unsigned int i; +- +- for (i = 0; i < num_areas; i++) +- if (areas[i].isr_bit == id) { +- areas[i].parent = new_parent; +- return; +- } +-} +- + /* Fixups for RZ/G2E ES1.0 revision */ + static const struct soc_device_attribute r8a774c0[] __initconst = { + { .soc_id = "r8a774c0", .revision = "ES1.0" }, +@@ -50,12 +37,10 @@ static const struct soc_device_attribute r8a774c0[] __initconst = { + static int __init r8a774c0_sysc_init(void) + { + if (soc_device_match(r8a774c0)) { +- rcar_sysc_fix_parent(r8a774c0_areas, +- ARRAY_SIZE(r8a774c0_areas), +- R8A774C0_PD_3DG_A, R8A774C0_PD_3DG_B); +- rcar_sysc_fix_parent(r8a774c0_areas, +- ARRAY_SIZE(r8a774c0_areas), +- R8A774C0_PD_3DG_B, R8A774C0_PD_ALWAYS_ON); ++ /* Fix incorrect 3DG hierarchy */ ++ swap(r8a774c0_areas[6], r8a774c0_areas[7]); ++ r8a774c0_areas[6].parent = R8A774C0_PD_ALWAYS_ON; ++ r8a774c0_areas[7].parent = R8A774C0_PD_3DG_B; + } + + return 0; +diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h +index 38ab0fca49e1..373639199291 100644 +--- a/fs/cifs/cifsglob.h ++++ b/fs/cifs/cifsglob.h +@@ -1426,6 +1426,7 @@ struct mid_q_entry { + int mid_state; /* wish this were enum but can not pass to wait_event */ + unsigned int mid_flags; + __le16 command; /* smb command code */ ++ unsigned int optype; /* operation type */ + bool large_buf:1; /* if valid response, is pointer to large buf */ + bool multiRsp:1; /* multiple trans2 responses for one request */ + bool multiEnd:1; /* both received */ +@@ -1562,6 +1563,25 @@ static inline void free_dfs_info_array(struct dfs_info3_param *param, + kfree(param); + } + ++static inline bool is_interrupt_error(int error) ++{ ++ switch (error) { ++ case -EINTR: ++ case -ERESTARTSYS: ++ case -ERESTARTNOHAND: ++ case -ERESTARTNOINTR: ++ return true; ++ } ++ return false; ++} ++ ++static inline bool is_retryable_error(int error) ++{ ++ if (is_interrupt_error(error) || error == -EAGAIN) ++ return true; ++ return false; ++} ++ + #define MID_FREE 0 + #define MID_REQUEST_ALLOCATED 1 + #define MID_REQUEST_SUBMITTED 2 +diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c +index fce610f6cd24..327a101f7894 100644 +--- a/fs/cifs/cifssmb.c ++++ b/fs/cifs/cifssmb.c +@@ -2043,7 +2043,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata) + + for (j = 0; j < nr_pages; j++) { + unlock_page(wdata2->pages[j]); +- if (rc != 0 && rc != -EAGAIN) { ++ if (rc != 0 && !is_retryable_error(rc)) { + SetPageError(wdata2->pages[j]); + end_page_writeback(wdata2->pages[j]); + put_page(wdata2->pages[j]); +@@ -2052,7 +2052,7 @@ cifs_writev_requeue(struct cifs_writedata *wdata) + + if (rc) { + kref_put(&wdata2->refcount, cifs_writedata_release); +- if (rc == -EAGAIN) ++ if (is_retryable_error(rc)) + continue; + break; + } +@@ -2061,7 +2061,8 @@ cifs_writev_requeue(struct cifs_writedata *wdata) + i += nr_pages; + } while (i < wdata->nr_pages); + +- 
mapping_set_error(inode->i_mapping, rc); ++ if (rc != 0 && !is_retryable_error(rc)) ++ mapping_set_error(inode->i_mapping, rc); + kref_put(&wdata->refcount, cifs_writedata_release); + } + +diff --git a/fs/cifs/file.c b/fs/cifs/file.c +index 8431854b129f..c13effbaadba 100644 +--- a/fs/cifs/file.c ++++ b/fs/cifs/file.c +@@ -732,7 +732,8 @@ reopen_success: + + if (can_flush) { + rc = filemap_write_and_wait(inode->i_mapping); +- mapping_set_error(inode->i_mapping, rc); ++ if (!is_interrupt_error(rc)) ++ mapping_set_error(inode->i_mapping, rc); + + if (tcon->unix_ext) + rc = cifs_get_inode_info_unix(&inode, full_path, +@@ -1139,6 +1140,10 @@ cifs_push_mandatory_locks(struct cifsFileInfo *cfile) + return -EINVAL; + } + ++ BUILD_BUG_ON(sizeof(struct smb_hdr) + sizeof(LOCKING_ANDX_RANGE) > ++ PAGE_SIZE); ++ max_buf = min_t(unsigned int, max_buf - sizeof(struct smb_hdr), ++ PAGE_SIZE); + max_num = (max_buf - sizeof(struct smb_hdr)) / + sizeof(LOCKING_ANDX_RANGE); + buf = kcalloc(max_num, sizeof(LOCKING_ANDX_RANGE), GFP_KERNEL); +@@ -1477,6 +1482,10 @@ cifs_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock, + if (max_buf < (sizeof(struct smb_hdr) + sizeof(LOCKING_ANDX_RANGE))) + return -EINVAL; + ++ BUILD_BUG_ON(sizeof(struct smb_hdr) + sizeof(LOCKING_ANDX_RANGE) > ++ PAGE_SIZE); ++ max_buf = min_t(unsigned int, max_buf - sizeof(struct smb_hdr), ++ PAGE_SIZE); + max_num = (max_buf - sizeof(struct smb_hdr)) / + sizeof(LOCKING_ANDX_RANGE); + buf = kcalloc(max_num, sizeof(LOCKING_ANDX_RANGE), GFP_KERNEL); +@@ -2109,6 +2118,7 @@ static int cifs_writepages(struct address_space *mapping, + pgoff_t end, index; + struct cifs_writedata *wdata; + int rc = 0; ++ int saved_rc = 0; + unsigned int xid; + + /* +@@ -2137,8 +2147,10 @@ retry: + + rc = server->ops->wait_mtu_credits(server, cifs_sb->wsize, + &wsize, &credits); +- if (rc) ++ if (rc != 0) { ++ done = true; + break; ++ } + + tofind = min((wsize / PAGE_SIZE) - 1, end - index) + 1; + +@@ -2146,6 +2158,7 @@ retry: + &found_pages); + if (!wdata) { + rc = -ENOMEM; ++ done = true; + add_credits_and_wake_if(server, credits, 0); + break; + } +@@ -2174,7 +2187,7 @@ retry: + if (rc != 0) { + add_credits_and_wake_if(server, wdata->credits, 0); + for (i = 0; i < nr_pages; ++i) { +- if (rc == -EAGAIN) ++ if (is_retryable_error(rc)) + redirty_page_for_writepage(wbc, + wdata->pages[i]); + else +@@ -2182,7 +2195,7 @@ retry: + end_page_writeback(wdata->pages[i]); + put_page(wdata->pages[i]); + } +- if (rc != -EAGAIN) ++ if (!is_retryable_error(rc)) + mapping_set_error(mapping, rc); + } + kref_put(&wdata->refcount, cifs_writedata_release); +@@ -2192,6 +2205,15 @@ retry: + continue; + } + ++ /* Return immediately if we received a signal during writing */ ++ if (is_interrupt_error(rc)) { ++ done = true; ++ break; ++ } ++ ++ if (rc != 0 && saved_rc == 0) ++ saved_rc = rc; ++ + wbc->nr_to_write -= nr_pages; + if (wbc->nr_to_write <= 0) + done = true; +@@ -2209,6 +2231,9 @@ retry: + goto retry; + } + ++ if (saved_rc != 0) ++ rc = saved_rc; ++ + if (wbc->range_cyclic || (range_whole && wbc->nr_to_write > 0)) + mapping->writeback_index = index; + +@@ -2241,8 +2266,8 @@ cifs_writepage_locked(struct page *page, struct writeback_control *wbc) + set_page_writeback(page); + retry_write: + rc = cifs_partialpagewrite(page, 0, PAGE_SIZE); +- if (rc == -EAGAIN) { +- if (wbc->sync_mode == WB_SYNC_ALL) ++ if (is_retryable_error(rc)) { ++ if (wbc->sync_mode == WB_SYNC_ALL && rc == -EAGAIN) + goto retry_write; + redirty_page_for_writepage(wbc, page); + } else if (rc != 0) { 
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c +index a81a9df997c1..84d51ca91ef7 100644 +--- a/fs/cifs/inode.c ++++ b/fs/cifs/inode.c +@@ -2261,6 +2261,11 @@ cifs_setattr_unix(struct dentry *direntry, struct iattr *attrs) + * the flush returns error? + */ + rc = filemap_write_and_wait(inode->i_mapping); ++ if (is_interrupt_error(rc)) { ++ rc = -ERESTARTSYS; ++ goto out; ++ } ++ + mapping_set_error(inode->i_mapping, rc); + rc = 0; + +@@ -2404,6 +2409,11 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs) + * the flush returns error? + */ + rc = filemap_write_and_wait(inode->i_mapping); ++ if (is_interrupt_error(rc)) { ++ rc = -ERESTARTSYS; ++ goto cifs_setattr_exit; ++ } ++ + mapping_set_error(inode->i_mapping, rc); + rc = 0; + +diff --git a/fs/cifs/smb2file.c b/fs/cifs/smb2file.c +index 2fc3d31967ee..b204e84b87fb 100644 +--- a/fs/cifs/smb2file.c ++++ b/fs/cifs/smb2file.c +@@ -128,6 +128,8 @@ smb2_unlock_range(struct cifsFileInfo *cfile, struct file_lock *flock, + if (max_buf < sizeof(struct smb2_lock_element)) + return -EINVAL; + ++ BUILD_BUG_ON(sizeof(struct smb2_lock_element) > PAGE_SIZE); ++ max_buf = min_t(unsigned int, max_buf, PAGE_SIZE); + max_num = max_buf / sizeof(struct smb2_lock_element); + buf = kcalloc(max_num, sizeof(struct smb2_lock_element), GFP_KERNEL); + if (!buf) +@@ -264,6 +266,8 @@ smb2_push_mandatory_locks(struct cifsFileInfo *cfile) + return -EINVAL; + } + ++ BUILD_BUG_ON(sizeof(struct smb2_lock_element) > PAGE_SIZE); ++ max_buf = min_t(unsigned int, max_buf, PAGE_SIZE); + max_num = max_buf / sizeof(struct smb2_lock_element); + buf = kcalloc(max_num, sizeof(struct smb2_lock_element), GFP_KERNEL); + if (!buf) { +diff --git a/fs/cifs/smb2inode.c b/fs/cifs/smb2inode.c +index a8999f930b22..057d2034209f 100644 +--- a/fs/cifs/smb2inode.c ++++ b/fs/cifs/smb2inode.c +@@ -294,6 +294,8 @@ smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon, + int rc; + struct smb2_file_all_info *smb2_data; + __u32 create_options = 0; ++ struct cifs_fid fid; ++ bool no_cached_open = tcon->nohandlecache; + + *adjust_tz = false; + *symlink = false; +@@ -302,6 +304,21 @@ smb2_query_path_info(const unsigned int xid, struct cifs_tcon *tcon, + GFP_KERNEL); + if (smb2_data == NULL) + return -ENOMEM; ++ ++ /* If it is a root and its handle is cached then use it */ ++ if (!strlen(full_path) && !no_cached_open) { ++ rc = open_shroot(xid, tcon, &fid); ++ if (rc) ++ goto out; ++ rc = SMB2_query_info(xid, tcon, fid.persistent_fid, ++ fid.volatile_fid, smb2_data); ++ close_shroot(&tcon->crfid); ++ if (rc) ++ goto out; ++ move_smb2_info_to_cifs(data, smb2_data); ++ goto out; ++ } ++ + if (backup_cred(cifs_sb)) + create_options |= CREATE_OPEN_BACKUP_INTENT; + +diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c +index d7dd7d38fad6..aa71e620f3cd 100644 +--- a/fs/cifs/smb2ops.c ++++ b/fs/cifs/smb2ops.c +@@ -154,7 +154,11 @@ smb2_get_credits(struct mid_q_entry *mid) + { + struct smb2_sync_hdr *shdr = (struct smb2_sync_hdr *)mid->resp_buf; + +- return le16_to_cpu(shdr->CreditRequest); ++ if (mid->mid_state == MID_RESPONSE_RECEIVED ++ || mid->mid_state == MID_RESPONSE_MALFORMED) ++ return le16_to_cpu(shdr->CreditRequest); ++ ++ return 0; + } + + static int +diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c +index c393ac255af7..28712080add9 100644 +--- a/fs/cifs/smb2pdu.c ++++ b/fs/cifs/smb2pdu.c +@@ -2826,9 +2826,10 @@ smb2_echo_callback(struct mid_q_entry *mid) + { + struct TCP_Server_Info *server = mid->callback_data; + struct smb2_echo_rsp *rsp = (struct smb2_echo_rsp 
*)mid->resp_buf; +- unsigned int credits_received = 1; ++ unsigned int credits_received = 0; + +- if (mid->mid_state == MID_RESPONSE_RECEIVED) ++ if (mid->mid_state == MID_RESPONSE_RECEIVED ++ || mid->mid_state == MID_RESPONSE_MALFORMED) + credits_received = le16_to_cpu(rsp->sync_hdr.CreditRequest); + + DeleteMidQEntry(mid); +@@ -3085,7 +3086,7 @@ smb2_readv_callback(struct mid_q_entry *mid) + struct TCP_Server_Info *server = tcon->ses->server; + struct smb2_sync_hdr *shdr = + (struct smb2_sync_hdr *)rdata->iov[0].iov_base; +- unsigned int credits_received = 1; ++ unsigned int credits_received = 0; + struct smb_rqst rqst = { .rq_iov = rdata->iov, + .rq_nvec = 2, + .rq_pages = rdata->pages, +@@ -3124,6 +3125,9 @@ smb2_readv_callback(struct mid_q_entry *mid) + task_io_account_read(rdata->got_bytes); + cifs_stats_bytes_read(tcon, rdata->got_bytes); + break; ++ case MID_RESPONSE_MALFORMED: ++ credits_received = le16_to_cpu(shdr->CreditRequest); ++ /* fall through */ + default: + if (rdata->result != -ENODATA) + rdata->result = -EIO; +@@ -3317,7 +3321,7 @@ smb2_writev_callback(struct mid_q_entry *mid) + struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink); + unsigned int written; + struct smb2_write_rsp *rsp = (struct smb2_write_rsp *)mid->resp_buf; +- unsigned int credits_received = 1; ++ unsigned int credits_received = 0; + + switch (mid->mid_state) { + case MID_RESPONSE_RECEIVED: +@@ -3345,6 +3349,9 @@ smb2_writev_callback(struct mid_q_entry *mid) + case MID_RETRY_NEEDED: + wdata->result = -EAGAIN; + break; ++ case MID_RESPONSE_MALFORMED: ++ credits_received = le16_to_cpu(rsp->sync_hdr.CreditRequest); ++ /* fall through */ + default: + wdata->result = -EIO; + break; +diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c +index d51064c1ba42..6f937e826910 100644 +--- a/fs/cifs/transport.c ++++ b/fs/cifs/transport.c +@@ -781,8 +781,25 @@ cifs_setup_request(struct cifs_ses *ses, struct smb_rqst *rqst) + } + + static void +-cifs_noop_callback(struct mid_q_entry *mid) ++cifs_compound_callback(struct mid_q_entry *mid) + { ++ struct TCP_Server_Info *server = mid->server; ++ ++ add_credits(server, server->ops->get_credits(mid), mid->optype); ++} ++ ++static void ++cifs_compound_last_callback(struct mid_q_entry *mid) ++{ ++ cifs_compound_callback(mid); ++ cifs_wake_up_task(mid); ++} ++ ++static void ++cifs_cancelled_callback(struct mid_q_entry *mid) ++{ ++ cifs_compound_callback(mid); ++ DeleteMidQEntry(mid); + } + + int +@@ -860,12 +877,16 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses, + } + + midQ[i]->mid_state = MID_REQUEST_SUBMITTED; ++ midQ[i]->optype = optype; + /* +- * We don't invoke the callback compounds unless it is the last +- * request. ++ * Invoke callback for every part of the compound chain ++ * to calculate credits properly. Wake up this thread only when ++ * the last element is received. 
+ */ + if (i < num_rqst - 1) +- midQ[i]->callback = cifs_noop_callback; ++ midQ[i]->callback = cifs_compound_callback; ++ else ++ midQ[i]->callback = cifs_compound_last_callback; + } + cifs_in_send_inc(ses->server); + rc = smb_send_rqst(ses->server, num_rqst, rqst, flags); +@@ -879,8 +900,20 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses, + + mutex_unlock(&ses->server->srv_mutex); + +- if (rc < 0) ++ if (rc < 0) { ++ /* Sending failed for some reason - return credits back */ ++ for (i = 0; i < num_rqst; i++) ++ add_credits(ses->server, credits[i], optype); + goto out; ++ } ++ ++ /* ++ * At this point the request is passed to the network stack - we assume ++ * that any credits taken from the server structure on the client have ++ * been spent and we can't return them back. Once we receive responses ++ * we will collect credits granted by the server in the mid callbacks ++ * and add those credits to the server structure. ++ */ + + /* + * Compounding is never used during session establish. +@@ -894,25 +927,25 @@ compound_send_recv(const unsigned int xid, struct cifs_ses *ses, + + for (i = 0; i < num_rqst; i++) { + rc = wait_for_response(ses->server, midQ[i]); +- if (rc != 0) { ++ if (rc != 0) ++ break; ++ } ++ if (rc != 0) { ++ for (; i < num_rqst; i++) { + cifs_dbg(VFS, "Cancelling wait for mid %llu cmd: %d\n", + midQ[i]->mid, le16_to_cpu(midQ[i]->command)); + send_cancel(ses->server, &rqst[i], midQ[i]); + spin_lock(&GlobalMid_Lock); + if (midQ[i]->mid_state == MID_REQUEST_SUBMITTED) { + midQ[i]->mid_flags |= MID_WAIT_CANCELLED; +- midQ[i]->callback = DeleteMidQEntry; ++ midQ[i]->callback = cifs_cancelled_callback; + cancelled_mid[i] = true; ++ credits[i] = 0; + } + spin_unlock(&GlobalMid_Lock); + } + } + +- for (i = 0; i < num_rqst; i++) +- if (!cancelled_mid[i] && midQ[i]->resp_buf +- && (midQ[i]->mid_state == MID_RESPONSE_RECEIVED)) +- credits[i] = ses->server->ops->get_credits(midQ[i]); +- + for (i = 0; i < num_rqst; i++) { + if (rc < 0) + goto out; +@@ -971,7 +1004,6 @@ out: + for (i = 0; i < num_rqst; i++) { + if (!cancelled_mid[i]) + cifs_delete_mid(midQ[i]); +- add_credits(ses->server, credits[i], optype); + } + + return rc; +diff --git a/fs/inode.c b/fs/inode.c +index 35d2108d567c..9e198f00b64c 100644 +--- a/fs/inode.c ++++ b/fs/inode.c +@@ -730,11 +730,8 @@ static enum lru_status inode_lru_isolate(struct list_head *item, + return LRU_REMOVED; + } + +- /* +- * Recently referenced inodes and inodes with many attached pages +- * get one more pass. 
+- */ +- if (inode->i_state & I_REFERENCED || inode->i_data.nrpages > 1) { ++ /* recently referenced inodes get one more pass */ ++ if (inode->i_state & I_REFERENCED) { + inode->i_state &= ~I_REFERENCED; + spin_unlock(&inode->i_lock); + return LRU_ROTATE; +diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c +index b33f9785b756..72a7681f4046 100644 +--- a/fs/nfsd/nfsctl.c ++++ b/fs/nfsd/nfsctl.c +@@ -1239,8 +1239,8 @@ static __net_init int nfsd_init_net(struct net *net) + retval = nfsd_idmap_init(net); + if (retval) + goto out_idmap_error; +- nn->nfsd4_lease = 45; /* default lease time */ +- nn->nfsd4_grace = 45; ++ nn->nfsd4_lease = 90; /* default lease time */ ++ nn->nfsd4_grace = 90; + nn->somebody_reclaimed = false; + nn->clverifier_counter = prandom_u32(); + nn->clientid_counter = prandom_u32(); +diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c +index 47c3764c469b..7bcf5714ca24 100644 +--- a/fs/proc/task_mmu.c ++++ b/fs/proc/task_mmu.c +@@ -423,7 +423,7 @@ struct mem_size_stats { + }; + + static void smaps_account(struct mem_size_stats *mss, struct page *page, +- bool compound, bool young, bool dirty) ++ bool compound, bool young, bool dirty, bool locked) + { + int i, nr = compound ? 1 << compound_order(page) : 1; + unsigned long size = nr * PAGE_SIZE; +@@ -450,24 +450,31 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page, + else + mss->private_clean += size; + mss->pss += (u64)size << PSS_SHIFT; ++ if (locked) ++ mss->pss_locked += (u64)size << PSS_SHIFT; + return; + } + + for (i = 0; i < nr; i++, page++) { + int mapcount = page_mapcount(page); ++ unsigned long pss = (PAGE_SIZE << PSS_SHIFT); + + if (mapcount >= 2) { + if (dirty || PageDirty(page)) + mss->shared_dirty += PAGE_SIZE; + else + mss->shared_clean += PAGE_SIZE; +- mss->pss += (PAGE_SIZE << PSS_SHIFT) / mapcount; ++ mss->pss += pss / mapcount; ++ if (locked) ++ mss->pss_locked += pss / mapcount; + } else { + if (dirty || PageDirty(page)) + mss->private_dirty += PAGE_SIZE; + else + mss->private_clean += PAGE_SIZE; +- mss->pss += PAGE_SIZE << PSS_SHIFT; ++ mss->pss += pss; ++ if (locked) ++ mss->pss_locked += pss; + } + } + } +@@ -490,6 +497,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr, + { + struct mem_size_stats *mss = walk->private; + struct vm_area_struct *vma = walk->vma; ++ bool locked = !!(vma->vm_flags & VM_LOCKED); + struct page *page = NULL; + + if (pte_present(*pte)) { +@@ -532,7 +540,7 @@ static void smaps_pte_entry(pte_t *pte, unsigned long addr, + if (!page) + return; + +- smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte)); ++ smaps_account(mss, page, false, pte_young(*pte), pte_dirty(*pte), locked); + } + + #ifdef CONFIG_TRANSPARENT_HUGEPAGE +@@ -541,6 +549,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, + { + struct mem_size_stats *mss = walk->private; + struct vm_area_struct *vma = walk->vma; ++ bool locked = !!(vma->vm_flags & VM_LOCKED); + struct page *page; + + /* FOLL_DUMP will return -EFAULT on huge zero page */ +@@ -555,7 +564,7 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, + /* pass */; + else + VM_BUG_ON_PAGE(1, page); +- smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd)); ++ smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), locked); + } + #else + static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, +@@ -737,11 +746,8 @@ static void smap_gather_stats(struct vm_area_struct *vma, + } + } + #endif +- + /* mmap_sem is held in m_start */ + walk_page_vma(vma, &smaps_walk); +- if 
(vma->vm_flags & VM_LOCKED) +- mss->pss_locked += mss->pss; + } + + #define SEQ_PUT_DEC(str, val) \ +diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h +index de7377815b6b..8ef330027b13 100644 +--- a/include/linux/mmc/card.h ++++ b/include/linux/mmc/card.h +@@ -308,6 +308,7 @@ struct mmc_card { + unsigned int nr_parts; + + unsigned int bouncesz; /* Bounce buffer size */ ++ struct workqueue_struct *complete_wq; /* Private workqueue */ + }; + + static inline bool mmc_large_sector(struct mmc_card *card) +diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h +index 53c500f0ca79..c2876e740514 100644 +--- a/include/linux/perf_event.h ++++ b/include/linux/perf_event.h +@@ -447,6 +447,11 @@ struct pmu { + * Filter events for PMU-specific reasons. + */ + int (*filter_match) (struct perf_event *event); /* optional */ ++ ++ /* ++ * Check period value for PERF_EVENT_IOC_PERIOD ioctl. ++ */ ++ int (*check_period) (struct perf_event *event, u64 value); /* optional */ + }; + + enum perf_addr_filter_action_t { +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 84530ab358c3..699bc25d6204 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -4963,6 +4963,11 @@ static void __perf_event_period(struct perf_event *event, + } + } + ++static int perf_event_check_period(struct perf_event *event, u64 value) ++{ ++ return event->pmu->check_period(event, value); ++} ++ + static int perf_event_period(struct perf_event *event, u64 __user *arg) + { + u64 value; +@@ -4979,6 +4984,9 @@ static int perf_event_period(struct perf_event *event, u64 __user *arg) + if (event->attr.freq && value > sysctl_perf_event_sample_rate) + return -EINVAL; + ++ if (perf_event_check_period(event, value)) ++ return -EINVAL; ++ + event_function_call(event, __perf_event_period, &value); + + return 0; +@@ -9391,6 +9399,11 @@ static int perf_pmu_nop_int(struct pmu *pmu) + return 0; + } + ++static int perf_event_nop_int(struct perf_event *event, u64 value) ++{ ++ return 0; ++} ++ + static DEFINE_PER_CPU(unsigned int, nop_txn_flags); + + static void perf_pmu_start_txn(struct pmu *pmu, unsigned int flags) +@@ -9691,6 +9704,9 @@ got_cpu_context: + pmu->pmu_disable = perf_pmu_nop_void; + } + ++ if (!pmu->check_period) ++ pmu->check_period = perf_event_nop_int; ++ + if (!pmu->event_idx) + pmu->event_idx = perf_event_idx_default; + +diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c +index 309ef5a64af5..5ab4fe3b1dcc 100644 +--- a/kernel/events/ring_buffer.c ++++ b/kernel/events/ring_buffer.c +@@ -734,7 +734,7 @@ struct ring_buffer *rb_alloc(int nr_pages, long watermark, int cpu, int flags) + size = sizeof(struct ring_buffer); + size += nr_pages * sizeof(void *); + +- if (order_base_2(size) >= MAX_ORDER) ++ if (order_base_2(size) >= PAGE_SHIFT+MAX_ORDER) + goto fail; + + rb = kzalloc(size, GFP_KERNEL); +diff --git a/kernel/signal.c b/kernel/signal.c +index cf4cf68c3ea8..ac969af3e9a0 100644 +--- a/kernel/signal.c ++++ b/kernel/signal.c +@@ -2436,9 +2436,12 @@ relock: + } + + /* Has this task already been marked for death? 
*/ +- ksig->info.si_signo = signr = SIGKILL; +- if (signal_group_exit(signal)) ++ if (signal_group_exit(signal)) { ++ ksig->info.si_signo = signr = SIGKILL; ++ sigdelset(¤t->pending.signal, SIGKILL); ++ recalc_sigpending(); + goto fatal; ++ } + + for (;;) { + struct k_sigaction *ka; +diff --git a/kernel/trace/trace_probe_tmpl.h b/kernel/trace/trace_probe_tmpl.h +index 5c56afc17cf8..4737bb8c07a3 100644 +--- a/kernel/trace/trace_probe_tmpl.h ++++ b/kernel/trace/trace_probe_tmpl.h +@@ -180,10 +180,12 @@ store_trace_args(void *data, struct trace_probe *tp, struct pt_regs *regs, + if (unlikely(arg->dynamic)) + *dl = make_data_loc(maxlen, dyndata - base); + ret = process_fetch_insn(arg->code, regs, dl, base); +- if (unlikely(ret < 0 && arg->dynamic)) ++ if (unlikely(ret < 0 && arg->dynamic)) { + *dl = make_data_loc(0, dyndata - base); +- else ++ } else { + dyndata += ret; ++ maxlen -= ret; ++ } + } + } + +diff --git a/mm/vmscan.c b/mm/vmscan.c +index 62ac0c488624..8e377bbac3a6 100644 +--- a/mm/vmscan.c ++++ b/mm/vmscan.c +@@ -487,16 +487,6 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl, + delta = freeable / 2; + } + +- /* +- * Make sure we apply some minimal pressure on default priority +- * even on small cgroups. Stale objects are not only consuming memory +- * by themselves, but can also hold a reference to a dying cgroup, +- * preventing it from being reclaimed. A dying cgroup with all +- * corresponding structures like per-cpu stats and kmem caches +- * can be really big, so it may lead to a significant waste of memory. +- */ +- delta = max_t(unsigned long long, delta, min(freeable, batch_size)); +- + total_scan += delta; + if (total_scan < 0) { + pr_err("shrink_slab: %pF negative objects to delete nr=%ld\n", +diff --git a/net/sunrpc/auth_gss/gss_krb5_seqnum.c b/net/sunrpc/auth_gss/gss_krb5_seqnum.c +index fb6656295204..507105127095 100644 +--- a/net/sunrpc/auth_gss/gss_krb5_seqnum.c ++++ b/net/sunrpc/auth_gss/gss_krb5_seqnum.c +@@ -44,7 +44,7 @@ krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum, + unsigned char *cksum, unsigned char *buf) + { + struct crypto_sync_skcipher *cipher; +- unsigned char plain[8]; ++ unsigned char *plain; + s32 code; + + dprintk("RPC: %s:\n", __func__); +@@ -52,6 +52,10 @@ krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum, + if (IS_ERR(cipher)) + return PTR_ERR(cipher); + ++ plain = kmalloc(8, GFP_NOFS); ++ if (!plain) ++ return -ENOMEM; ++ + plain[0] = (unsigned char) ((seqnum >> 24) & 0xff); + plain[1] = (unsigned char) ((seqnum >> 16) & 0xff); + plain[2] = (unsigned char) ((seqnum >> 8) & 0xff); +@@ -67,6 +71,7 @@ krb5_make_rc4_seq_num(struct krb5_ctx *kctx, int direction, s32 seqnum, + + code = krb5_encrypt(cipher, cksum, plain, buf, 8); + out: ++ kfree(plain); + crypto_free_sync_skcipher(cipher); + return code; + } +@@ -77,12 +82,17 @@ krb5_make_seq_num(struct krb5_ctx *kctx, + u32 seqnum, + unsigned char *cksum, unsigned char *buf) + { +- unsigned char plain[8]; ++ unsigned char *plain; ++ s32 code; + + if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) + return krb5_make_rc4_seq_num(kctx, direction, seqnum, + cksum, buf); + ++ plain = kmalloc(8, GFP_NOFS); ++ if (!plain) ++ return -ENOMEM; ++ + plain[0] = (unsigned char) (seqnum & 0xff); + plain[1] = (unsigned char) ((seqnum >> 8) & 0xff); + plain[2] = (unsigned char) ((seqnum >> 16) & 0xff); +@@ -93,7 +103,9 @@ krb5_make_seq_num(struct krb5_ctx *kctx, + plain[6] = direction; + plain[7] = direction; + +- return krb5_encrypt(key, cksum, plain, buf, 
8); ++ code = krb5_encrypt(key, cksum, plain, buf, 8); ++ kfree(plain); ++ return code; + } + + static s32 +@@ -101,7 +113,7 @@ krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum, + unsigned char *buf, int *direction, s32 *seqnum) + { + struct crypto_sync_skcipher *cipher; +- unsigned char plain[8]; ++ unsigned char *plain; + s32 code; + + dprintk("RPC: %s:\n", __func__); +@@ -113,20 +125,28 @@ krb5_get_rc4_seq_num(struct krb5_ctx *kctx, unsigned char *cksum, + if (code) + goto out; + ++ plain = kmalloc(8, GFP_NOFS); ++ if (!plain) { ++ code = -ENOMEM; ++ goto out; ++ } ++ + code = krb5_decrypt(cipher, cksum, buf, plain, 8); + if (code) +- goto out; ++ goto out_plain; + + if ((plain[4] != plain[5]) || (plain[4] != plain[6]) + || (plain[4] != plain[7])) { + code = (s32)KG_BAD_SEQ; +- goto out; ++ goto out_plain; + } + + *direction = plain[4]; + + *seqnum = ((plain[0] << 24) | (plain[1] << 16) | + (plain[2] << 8) | (plain[3])); ++out_plain: ++ kfree(plain); + out: + crypto_free_sync_skcipher(cipher); + return code; +@@ -139,7 +159,7 @@ krb5_get_seq_num(struct krb5_ctx *kctx, + int *direction, u32 *seqnum) + { + s32 code; +- unsigned char plain[8]; ++ unsigned char *plain; + struct crypto_sync_skcipher *key = kctx->seq; + + dprintk("RPC: krb5_get_seq_num:\n"); +@@ -147,18 +167,25 @@ krb5_get_seq_num(struct krb5_ctx *kctx, + if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) + return krb5_get_rc4_seq_num(kctx, cksum, buf, + direction, seqnum); ++ plain = kmalloc(8, GFP_NOFS); ++ if (!plain) ++ return -ENOMEM; + + if ((code = krb5_decrypt(key, cksum, buf, plain, 8))) +- return code; ++ goto out; + + if ((plain[4] != plain[5]) || (plain[4] != plain[6]) || +- (plain[4] != plain[7])) +- return (s32)KG_BAD_SEQ; ++ (plain[4] != plain[7])) { ++ code = (s32)KG_BAD_SEQ; ++ goto out; ++ } + + *direction = plain[4]; + + *seqnum = ((plain[0]) | + (plain[1] << 8) | (plain[2] << 16) | (plain[3] << 24)); + +- return 0; ++out: ++ kfree(plain); ++ return code; + } +diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c +index 6c99fa8ac5fa..6c0b30391ba9 100644 +--- a/sound/core/pcm_lib.c ++++ b/sound/core/pcm_lib.c +@@ -2112,13 +2112,6 @@ int pcm_lib_apply_appl_ptr(struct snd_pcm_substream *substream, + return 0; + } + +-/* allow waiting for a capture stream that hasn't been started */ +-#if IS_ENABLED(CONFIG_SND_PCM_OSS) +-#define wait_capture_start(substream) ((substream)->oss.oss) +-#else +-#define wait_capture_start(substream) false +-#endif +- + /* the common loop for read/write data */ + snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream, + void *data, bool interleaved, +@@ -2184,16 +2177,11 @@ snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream, + snd_pcm_update_hw_ptr(substream); + + if (!is_playback && +- runtime->status->state == SNDRV_PCM_STATE_PREPARED) { +- if (size >= runtime->start_threshold) { +- err = snd_pcm_start(substream); +- if (err < 0) +- goto _end_unlock; +- } else if (!wait_capture_start(substream)) { +- /* nothing to do */ +- err = 0; ++ runtime->status->state == SNDRV_PCM_STATE_PREPARED && ++ size >= runtime->start_threshold) { ++ err = snd_pcm_start(substream); ++ if (err < 0) + goto _end_unlock; +- } + } + + avail = snd_pcm_avail(substream); +diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c +index 152f54137082..a4ee7656d9ee 100644 +--- a/sound/pci/hda/patch_conexant.c ++++ b/sound/pci/hda/patch_conexant.c +@@ -924,6 +924,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = { + SND_PCI_QUIRK(0x103c, 
0x807C, "HP EliteBook 820 G3", CXT_FIXUP_HP_DOCK), + SND_PCI_QUIRK(0x103c, 0x80FD, "HP ProBook 640 G2", CXT_FIXUP_HP_DOCK), + SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK), ++ SND_PCI_QUIRK(0x103c, 0x83b2, "HP EliteBook 840 G5", CXT_FIXUP_HP_DOCK), + SND_PCI_QUIRK(0x103c, 0x83b3, "HP EliteBook 830 G5", CXT_FIXUP_HP_DOCK), + SND_PCI_QUIRK(0x103c, 0x83d3, "HP ProBook 640 G4", CXT_FIXUP_HP_DOCK), + SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE), +diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c +index d00734d31e04..e5b6769b9797 100644 +--- a/sound/soc/codecs/hdmi-codec.c ++++ b/sound/soc/codecs/hdmi-codec.c +@@ -795,6 +795,8 @@ static int hdmi_codec_probe(struct platform_device *pdev) + if (hcd->spdif) + hcp->daidrv[i] = hdmi_spdif_dai; + ++ dev_set_drvdata(dev, hcp); ++ + ret = devm_snd_soc_register_component(dev, &hdmi_driver, hcp->daidrv, + dai_count); + if (ret) { +@@ -802,8 +804,6 @@ static int hdmi_codec_probe(struct platform_device *pdev) + __func__, ret); + return ret; + } +- +- dev_set_drvdata(dev, hcp); + return 0; + } + +diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c +index 382847154227..db114f3977e0 100644 +--- a/sound/usb/pcm.c ++++ b/sound/usb/pcm.c +@@ -314,6 +314,9 @@ static int search_roland_implicit_fb(struct usb_device *dev, int ifnum, + return 0; + } + ++/* Setup an implicit feedback endpoint from a quirk. Returns 0 if no quirk ++ * applies. Returns 1 if a quirk was found. ++ */ + static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs, + struct usb_device *dev, + struct usb_interface_descriptor *altsd, +@@ -384,7 +387,7 @@ add_sync_ep: + + subs->data_endpoint->sync_master = subs->sync_endpoint; + +- return 0; ++ return 1; + } + + static int set_sync_endpoint(struct snd_usb_substream *subs, +@@ -423,6 +426,10 @@ static int set_sync_endpoint(struct snd_usb_substream *subs, + if (err < 0) + return err; + ++ /* endpoint set by quirk */ ++ if (err > 0) ++ return 0; ++ + if (altsd->bNumEndpoints < 2) + return 0; + +diff --git a/tools/arch/riscv/include/uapi/asm/bitsperlong.h b/tools/arch/riscv/include/uapi/asm/bitsperlong.h +new file mode 100644 +index 000000000000..0b3cb52fd29d +--- /dev/null ++++ b/tools/arch/riscv/include/uapi/asm/bitsperlong.h +@@ -0,0 +1,25 @@ ++/* ++ * Copyright (C) 2012 ARM Ltd. ++ * Copyright (C) 2015 Regents of the University of California ++ * ++ * This program is free software; you can redistribute it and/or modify ++ * it under the terms of the GNU General Public License version 2 as ++ * published by the Free Software Foundation. ++ * ++ * This program is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ++ * GNU General Public License for more details. ++ * ++ * You should have received a copy of the GNU General Public License ++ * along with this program. If not, see . 
++ */
++
++#ifndef _UAPI_ASM_RISCV_BITSPERLONG_H
++#define _UAPI_ASM_RISCV_BITSPERLONG_H
++
++#define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)
++
++#include <asm-generic/bitsperlong.h>
++
++#endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
+diff --git a/tools/include/uapi/asm/bitsperlong.h b/tools/include/uapi/asm/bitsperlong.h
+index 8dd6aefdafa4..57aaeaf8e192 100644
+--- a/tools/include/uapi/asm/bitsperlong.h
++++ b/tools/include/uapi/asm/bitsperlong.h
+@@ -13,6 +13,10 @@
+ #include "../../arch/mips/include/uapi/asm/bitsperlong.h"
+ #elif defined(__ia64__)
+ #include "../../arch/ia64/include/uapi/asm/bitsperlong.h"
++#elif defined(__riscv)
++#include "../../arch/riscv/include/uapi/asm/bitsperlong.h"
++#elif defined(__alpha__)
++#include "../../arch/alpha/include/uapi/asm/bitsperlong.h"
+ #else
+ #include <asm-generic/bitsperlong.h>
+ #endif
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 1410d66192f7..63a3afc7f32b 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -561,7 +561,8 @@ try_again:
+ break;
+ }
+ }
+- wait4(child_pid, &status, 0, &stat_config.ru_data);
++ if (child_pid != -1)
++ wait4(child_pid, &status, 0, &stat_config.ru_data);
+ 
+ if (workload_exec_errno) {
+ const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
+diff --git a/tools/perf/tests/shell/lib/probe_vfs_getname.sh b/tools/perf/tests/shell/lib/probe_vfs_getname.sh
+index 1c16e56cd93e..7cb99b433888 100644
+--- a/tools/perf/tests/shell/lib/probe_vfs_getname.sh
++++ b/tools/perf/tests/shell/lib/probe_vfs_getname.sh
+@@ -13,7 +13,8 @@ add_probe_vfs_getname() {
+ local verbose=$1
+ if [ $had_vfs_getname -eq 1 ] ; then
+ line=$(perf probe -L getname_flags 2>&1 | egrep 'result.*=.*filename;' | sed -r 's/[[:space:]]+([[:digit:]]+)[[:space:]]+result->uptr.*/\1/')
+- perf probe $verbose "vfs_getname=getname_flags:${line} pathname=result->name:string"
++ perf probe -q "vfs_getname=getname_flags:${line} pathname=result->name:string" || \
++ perf probe $verbose "vfs_getname=getname_flags:${line} pathname=filename:string"
+ fi
+ }
+ 
+diff --git a/tools/perf/util/callchain.c b/tools/perf/util/callchain.c
+index 32ef7bdca1cf..dc2212e12184 100644
+--- a/tools/perf/util/callchain.c
++++ b/tools/perf/util/callchain.c
+@@ -766,6 +766,7 @@ static enum match_result match_chain(struct callchain_cursor_node *node,
+ cnode->cycles_count += node->branch_flags.cycles;
+ cnode->iter_count += node->nr_loop_iter;
+ cnode->iter_cycles += node->iter_cycles;
++ cnode->from_count++;
+ }
+ }
+ 
+@@ -1345,10 +1346,10 @@ static int branch_to_str(char *bf, int bfsize,
+ static int branch_from_str(char *bf, int bfsize,
+ u64 branch_count,
+ u64 cycles_count, u64 iter_count,
+- u64 iter_cycles)
++ u64 iter_cycles, u64 from_count)
+ {
+ int printed = 0, i = 0;
+- u64 cycles;
++ u64 cycles, v = 0;
+ 
+ cycles = cycles_count / branch_count;
+ if (cycles) {
+@@ -1357,14 +1358,16 @@ static int branch_from_str(char *bf, int bfsize,
+ bf + printed, bfsize - printed);
+ }
+ 
+- if (iter_count) {
+- printed += count_pri64_printf(i++, "iter",
+- iter_count,
+- bf + printed, bfsize - printed);
++ if (iter_count && from_count) {
++ v = iter_count / from_count;
++ if (v) {
++ printed += count_pri64_printf(i++, "iter",
++ v, bf + printed, bfsize - printed);
+ 
+- printed += count_pri64_printf(i++, "avg_cycles",
+- iter_cycles / iter_count,
+- bf + printed, bfsize - printed);
++ printed += count_pri64_printf(i++, "avg_cycles",
++ iter_cycles / iter_count,
++ bf + printed, bfsize - printed);
++ }
+ }
+ 
+ if (i)
+@@ -1377,6 +1380,7 @@ static int counts_str_build(char *bf, int bfsize,
+ u64 branch_count, u64 predicted_count,
+ u64 abort_count, u64 cycles_count,
+ u64 iter_count, u64 iter_cycles,
++ u64 from_count,
+ struct branch_type_stat *brtype_stat)
+ {
+ int printed;
+@@ -1389,7 +1393,8 @@ static int counts_str_build(char *bf, int bfsize,
+ predicted_count, abort_count, brtype_stat);
+ } else {
+ printed = branch_from_str(bf, bfsize, branch_count,
+- cycles_count, iter_count, iter_cycles);
++ cycles_count, iter_count, iter_cycles,
++ from_count);
+ }
+ 
+ if (!printed)
+@@ -1402,13 +1407,14 @@ static int callchain_counts_printf(FILE *fp, char *bf, int bfsize,
+ u64 branch_count, u64 predicted_count,
+ u64 abort_count, u64 cycles_count,
+ u64 iter_count, u64 iter_cycles,
++ u64 from_count,
+ struct branch_type_stat *brtype_stat)
+ {
+ char str[256];
+ 
+ counts_str_build(str, sizeof(str), branch_count,
+ predicted_count, abort_count, cycles_count,
+- iter_count, iter_cycles, brtype_stat);
++ iter_count, iter_cycles, from_count, brtype_stat);
+ 
+ if (fp)
+ return fprintf(fp, "%s", str);
+@@ -1422,6 +1428,7 @@ int callchain_list_counts__printf_value(struct callchain_list *clist,
+ u64 branch_count, predicted_count;
+ u64 abort_count, cycles_count;
+ u64 iter_count, iter_cycles;
++ u64 from_count;
+ 
+ branch_count = clist->branch_count;
+ predicted_count = clist->predicted_count;
+@@ -1429,11 +1436,12 @@ int callchain_list_counts__printf_value(struct callchain_list *clist,
+ cycles_count = clist->cycles_count;
+ iter_count = clist->iter_count;
+ iter_cycles = clist->iter_cycles;
++ from_count = clist->from_count;
+ 
+ return callchain_counts_printf(fp, bf, bfsize, branch_count,
+ predicted_count, abort_count,
+ cycles_count, iter_count, iter_cycles,
+- &clist->brtype_stat);
++ from_count, &clist->brtype_stat);
+ }
+ 
+ static void free_callchain_node(struct callchain_node *node)
+diff --git a/tools/perf/util/callchain.h b/tools/perf/util/callchain.h
+index 154560b1eb65..99d38ac019b8 100644
+--- a/tools/perf/util/callchain.h
++++ b/tools/perf/util/callchain.h
+@@ -118,6 +118,7 @@ struct callchain_list {
+ bool has_children;
+ };
+ u64 branch_count;
++ u64 from_count;
+ u64 predicted_count;
+ u64 abort_count;
+ u64 cycles_count;
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 9397e3f2444d..ea228dd0a187 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -2005,7 +2005,7 @@ static void save_iterations(struct iterations *iter,
+ {
+ int i;
+ 
+- iter->nr_loop_iter = nr;
++ iter->nr_loop_iter++;
+ iter->cycles = 0;
+ 
+ for (i = 0; i < nr; i++)