From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id E2D811382C5
	for ; Wed, 10 Feb 2021 10:17:58 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id 230D8E089C;
	Wed, 10 Feb 2021 10:17:58 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [IPv6:2001:470:ea4a:1:5054:ff:fec7:86e4])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id F0806E089C
	for ; Wed, 10 Feb 2021 10:17:57 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id A9584335DD0
	for ; Wed, 10 Feb 2021 10:17:56 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 318B34A6
	for ; Wed, 10 Feb 2021 10:17:55 +0000 (UTC)
From: "Alice Ferrazzi"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Alice Ferrazzi"
Message-ID: <1612952261.346dc70cb8b10bf114f95ed45924706a3f159ffe.alicef@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.4 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1256_linux-4.4.257.patch
X-VCS-Directories: /
X-VCS-Committer: alicef
X-VCS-Committer-Name: Alice Ferrazzi
X-VCS-Revision: 346dc70cb8b10bf114f95ed45924706a3f159ffe
X-VCS-Branch: 4.4
Date: Wed, 10 Feb 2021 10:17:55 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 37d0b055-2f4d-4d1e-895f-3e43f76ae6db
X-Archives-Hash: 16c09329a702ab58ccc192c926c90b38

commit:     346dc70cb8b10bf114f95ed45924706a3f159ffe
Author:     Alice Ferrazzi  gentoo org>
AuthorDate: Wed Feb 10 10:17:29 2021 +0000
Commit:     Alice Ferrazzi  gentoo org>
CommitDate: Wed Feb 10 10:17:41 2021 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=346dc70c

Linux patch 4.4.257

Signed-off-by: Alice Ferrazzi  gentoo.org>

 0000_README              |    4 +
 1256_linux-4.4.257.patch | 1682 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1686 insertions(+)

diff --git a/0000_README b/0000_README
index e5f13f4..269cc08 100644
--- a/0000_README
+++ b/0000_README
@@ -1067,6 +1067,10 @@ Patch: 1255_linux-4.4.256.patch
 From: http://www.kernel.org
 Desc: Linux 4.4.256
 
+Patch: 1256_linux-4.4.257.patch
+From: http://www.kernel.org
+Desc: Linux 4.4.257
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1256_linux-4.4.257.patch b/1256_linux-4.4.257.patch new file mode 100644 index 0000000..e42f5ea --- /dev/null +++ b/1256_linux-4.4.257.patch @@ -0,0 +1,1682 @@ +diff --git a/Makefile b/Makefile +index 0057587d2cbe2..8de8f9ac32795 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 4 + PATCHLEVEL = 4 +-SUBLEVEL = 256 ++SUBLEVEL = 257 + EXTRAVERSION = + NAME = Blurry Fish Butt + +@@ -830,12 +830,6 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=strict-prototypes) + # Prohibit date/time macros, which would make the build non-deterministic + KBUILD_CFLAGS += $(call cc-option,-Werror=date-time) + +-# ensure -fcf-protection is disabled when using retpoline as it is +-# incompatible with -mindirect-branch=thunk-extern +-ifdef CONFIG_RETPOLINE +-KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none) +-endif +- + # use the deterministic mode of AR if available + KBUILD_ARFLAGS := $(call ar-option,D) + +@@ -1068,7 +1062,7 @@ endef + + define filechk_version.h + (echo \#define LINUX_VERSION_CODE $(shell \ +- expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 0$(SUBLEVEL)); \ ++ expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 255); \ + echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))';) + endef + +diff --git a/arch/arm/mach-footbridge/dc21285.c b/arch/arm/mach-footbridge/dc21285.c +index 96a3d73ef4bf4..fd6c9169fa78e 100644 +--- a/arch/arm/mach-footbridge/dc21285.c ++++ b/arch/arm/mach-footbridge/dc21285.c +@@ -69,15 +69,15 @@ dc21285_read_config(struct pci_bus *bus, unsigned int devfn, int where, + if (addr) + switch (size) { + case 1: +- asm("ldrb %0, [%1, %2]" ++ asm volatile("ldrb %0, [%1, %2]" + : "=r" (v) : "r" (addr), "r" (where) : "cc"); + break; + case 2: +- asm("ldrh %0, [%1, %2]" ++ asm volatile("ldrh %0, [%1, %2]" + : "=r" (v) : "r" (addr), "r" (where) : "cc"); + break; + case 4: +- asm("ldr %0, [%1, %2]" ++ asm volatile("ldr %0, [%1, %2]" + : "=r" (v) : "r" (addr), "r" (where) : "cc"); + break; + } +@@ -103,17 
+103,17 @@ dc21285_write_config(struct pci_bus *bus, unsigned int devfn, int where, + if (addr) + switch (size) { + case 1: +- asm("strb %0, [%1, %2]" ++ asm volatile("strb %0, [%1, %2]" + : : "r" (value), "r" (addr), "r" (where) + : "cc"); + break; + case 2: +- asm("strh %0, [%1, %2]" ++ asm volatile("strh %0, [%1, %2]" + : : "r" (value), "r" (addr), "r" (where) + : "cc"); + break; + case 4: +- asm("str %0, [%1, %2]" ++ asm volatile("str %0, [%1, %2]" + : : "r" (value), "r" (addr), "r" (where) + : "cc"); + break; +diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig +index 9d8bc19edc48e..9f1376788820e 100644 +--- a/arch/mips/Kconfig ++++ b/arch/mips/Kconfig +@@ -2990,6 +2990,7 @@ config MIPS32_N32 + config BINFMT_ELF32 + bool + default y if MIPS32_O32 || MIPS32_N32 ++ select ELFCORE + + endmenu + +diff --git a/arch/x86/Makefile b/arch/x86/Makefile +index 8b4d022ce0cbc..e59dc138b24ea 100644 +--- a/arch/x86/Makefile ++++ b/arch/x86/Makefile +@@ -137,6 +137,9 @@ else + KBUILD_CFLAGS += -mno-red-zone + KBUILD_CFLAGS += -mcmodel=kernel + ++ # Intel CET isn't enabled in the kernel ++ KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none) ++ + # -funit-at-a-time shrinks the kernel .text considerably + # unfortunately it makes reading oopses harder. + KBUILD_CFLAGS += $(call cc-option,-funit-at-a-time) +diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h +index 3328a37ddc75c..34f11bc42d9b7 100644 +--- a/arch/x86/include/asm/apic.h ++++ b/arch/x86/include/asm/apic.h +@@ -168,16 +168,6 @@ static inline void disable_local_APIC(void) { } + #endif /* !CONFIG_X86_LOCAL_APIC */ + + #ifdef CONFIG_X86_X2APIC +-/* +- * Make previous memory operations globally visible before +- * sending the IPI through x2apic wrmsr. We need a serializing instruction or +- * mfence for this. 
+- */ +-static inline void x2apic_wrmsr_fence(void) +-{ +- asm volatile("mfence" : : : "memory"); +-} +- + static inline void native_apic_msr_write(u32 reg, u32 v) + { + if (reg == APIC_DFR || reg == APIC_ID || reg == APIC_LDR || +diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h +index b2a5bef742822..134d7ffc662e8 100644 +--- a/arch/x86/include/asm/barrier.h ++++ b/arch/x86/include/asm/barrier.h +@@ -119,4 +119,22 @@ do { \ + #define smp_mb__before_atomic() do { } while (0) + #define smp_mb__after_atomic() do { } while (0) + ++/* ++ * Make previous memory operations globally visible before ++ * a WRMSR. ++ * ++ * MFENCE makes writes visible, but only affects load/store ++ * instructions. WRMSR is unfortunately not a load/store ++ * instruction and is unaffected by MFENCE. The LFENCE ensures ++ * that the WRMSR is not reordered. ++ * ++ * Most WRMSRs are full serializing instructions themselves and ++ * do not require this barrier. This is only required for the ++ * IA32_TSC_DEADLINE and X2APIC MSRs. 
++ */ ++static inline void weak_wrmsr_fence(void) ++{ ++ asm volatile("mfence; lfence" : : : "memory"); ++} ++ + #endif /* _ASM_X86_BARRIER_H */ +diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c +index 4dcf71c26d647..f53849f3f7fbf 100644 +--- a/arch/x86/kernel/apic/apic.c ++++ b/arch/x86/kernel/apic/apic.c +@@ -41,6 +41,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -464,6 +465,9 @@ static int lapic_next_deadline(unsigned long delta, + { + u64 tsc; + ++ /* This MSR is special and need a special fence: */ ++ weak_wrmsr_fence(); ++ + tsc = rdtsc(); + wrmsrl(MSR_IA32_TSC_DEADLINE, tsc + (((u64) delta) * TSC_DIVISOR)); + return 0; +diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c +index cc8311c4d2985..f474756fc151e 100644 +--- a/arch/x86/kernel/apic/x2apic_cluster.c ++++ b/arch/x86/kernel/apic/x2apic_cluster.c +@@ -32,7 +32,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest) + unsigned long flags; + u32 dest; + +- x2apic_wrmsr_fence(); ++ /* x2apic MSRs are special and need a special fence: */ ++ weak_wrmsr_fence(); + + local_irq_save(flags); + +diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c +index 662e9150ea6f2..ad7c3544b07f9 100644 +--- a/arch/x86/kernel/apic/x2apic_phys.c ++++ b/arch/x86/kernel/apic/x2apic_phys.c +@@ -43,7 +43,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest) + unsigned long this_cpu; + unsigned long flags; + +- x2apic_wrmsr_fence(); ++ /* x2apic MSRs are special and need a special fence: */ ++ weak_wrmsr_fence(); + + local_irq_save(flags); + +diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c +index 82707f9824cae..b4826335ad0b3 100644 +--- a/drivers/acpi/thermal.c ++++ b/drivers/acpi/thermal.c +@@ -188,6 +188,8 @@ struct acpi_thermal { + int tz_enabled; + int kelvin_offset; + struct work_struct thermal_check_work; ++ struct mutex 
thermal_check_lock; ++ atomic_t thermal_check_count; + }; + + /* -------------------------------------------------------------------------- +@@ -513,16 +515,6 @@ static int acpi_thermal_get_trip_points(struct acpi_thermal *tz) + return 0; + } + +-static void acpi_thermal_check(void *data) +-{ +- struct acpi_thermal *tz = data; +- +- if (!tz->tz_enabled) +- return; +- +- thermal_zone_device_update(tz->thermal_zone); +-} +- + /* sys I/F for generic thermal sysfs support */ + + static int thermal_get_temp(struct thermal_zone_device *thermal, int *temp) +@@ -556,6 +548,8 @@ static int thermal_get_mode(struct thermal_zone_device *thermal, + return 0; + } + ++static void acpi_thermal_check_fn(struct work_struct *work); ++ + static int thermal_set_mode(struct thermal_zone_device *thermal, + enum thermal_device_mode mode) + { +@@ -581,7 +575,7 @@ static int thermal_set_mode(struct thermal_zone_device *thermal, + ACPI_DEBUG_PRINT((ACPI_DB_INFO, + "%s kernel ACPI thermal control\n", + tz->tz_enabled ? 
"Enable" : "Disable")); +- acpi_thermal_check(tz); ++ acpi_thermal_check_fn(&tz->thermal_check_work); + } + return 0; + } +@@ -950,6 +944,12 @@ static void acpi_thermal_unregister_thermal_zone(struct acpi_thermal *tz) + Driver Interface + -------------------------------------------------------------------------- */ + ++static void acpi_queue_thermal_check(struct acpi_thermal *tz) ++{ ++ if (!work_pending(&tz->thermal_check_work)) ++ queue_work(acpi_thermal_pm_queue, &tz->thermal_check_work); ++} ++ + static void acpi_thermal_notify(struct acpi_device *device, u32 event) + { + struct acpi_thermal *tz = acpi_driver_data(device); +@@ -960,17 +960,17 @@ static void acpi_thermal_notify(struct acpi_device *device, u32 event) + + switch (event) { + case ACPI_THERMAL_NOTIFY_TEMPERATURE: +- acpi_thermal_check(tz); ++ acpi_queue_thermal_check(tz); + break; + case ACPI_THERMAL_NOTIFY_THRESHOLDS: + acpi_thermal_trips_update(tz, ACPI_TRIPS_REFRESH_THRESHOLDS); +- acpi_thermal_check(tz); ++ acpi_queue_thermal_check(tz); + acpi_bus_generate_netlink_event(device->pnp.device_class, + dev_name(&device->dev), event, 0); + break; + case ACPI_THERMAL_NOTIFY_DEVICES: + acpi_thermal_trips_update(tz, ACPI_TRIPS_REFRESH_DEVICES); +- acpi_thermal_check(tz); ++ acpi_queue_thermal_check(tz); + acpi_bus_generate_netlink_event(device->pnp.device_class, + dev_name(&device->dev), event, 0); + break; +@@ -1070,7 +1070,27 @@ static void acpi_thermal_check_fn(struct work_struct *work) + { + struct acpi_thermal *tz = container_of(work, struct acpi_thermal, + thermal_check_work); +- acpi_thermal_check(tz); ++ ++ if (!tz->tz_enabled) ++ return; ++ /* ++ * In general, it is not sufficient to check the pending bit, because ++ * subsequent instances of this function may be queued after one of them ++ * has started running (e.g. if _TMP sleeps). 
Avoid bailing out if just ++ * one of them is running, though, because it may have done the actual ++ * check some time ago, so allow at least one of them to block on the ++ * mutex while another one is running the update. ++ */ ++ if (!atomic_add_unless(&tz->thermal_check_count, -1, 1)) ++ return; ++ ++ mutex_lock(&tz->thermal_check_lock); ++ ++ thermal_zone_device_update(tz->thermal_zone); ++ ++ atomic_inc(&tz->thermal_check_count); ++ ++ mutex_unlock(&tz->thermal_check_lock); + } + + static int acpi_thermal_add(struct acpi_device *device) +@@ -1102,6 +1122,8 @@ static int acpi_thermal_add(struct acpi_device *device) + if (result) + goto free_memory; + ++ atomic_set(&tz->thermal_check_count, 3); ++ mutex_init(&tz->thermal_check_lock); + INIT_WORK(&tz->thermal_check_work, acpi_thermal_check_fn); + + pr_info(PREFIX "%s [%s] (%ld C)\n", acpi_device_name(device), +@@ -1167,7 +1189,7 @@ static int acpi_thermal_resume(struct device *dev) + tz->state.active |= tz->trips.active[i].flags.enabled; + } + +- queue_work(acpi_thermal_pm_queue, &tz->thermal_check_work); ++ acpi_queue_thermal_check(tz); + + return AE_OK; + } +diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c +index 637f1347cd13d..815b69d35722c 100644 +--- a/drivers/input/joystick/xpad.c ++++ b/drivers/input/joystick/xpad.c +@@ -232,9 +232,17 @@ static const struct xpad_device { + { 0x0e6f, 0x0213, "Afterglow Gamepad for Xbox 360", 0, XTYPE_XBOX360 }, + { 0x0e6f, 0x021f, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 }, + { 0x0e6f, 0x0246, "Rock Candy Gamepad for Xbox One 2015", 0, XTYPE_XBOXONE }, +- { 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02a0, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02a1, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02a2, "PDP Wired Controller for Xbox One - Crimson Red", 0, XTYPE_XBOXONE }, + { 0x0e6f, 0x02a4, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE 
}, + { 0x0e6f, 0x02a6, "PDP Wired Controller for Xbox One - Camo Series", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02a7, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02a8, "PDP Xbox One Controller", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02ab, "PDP Controller for Xbox One", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02ad, "PDP Wired Controller for Xbox One - Stealth Series", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02b3, "Afterglow Prismatic Wired Controller", 0, XTYPE_XBOXONE }, ++ { 0x0e6f, 0x02b8, "Afterglow Prismatic Wired Controller", 0, XTYPE_XBOXONE }, + { 0x0e6f, 0x0301, "Logic3 Controller", 0, XTYPE_XBOX360 }, + { 0x0e6f, 0x0346, "Rock Candy Gamepad for Xbox One 2016", 0, XTYPE_XBOXONE }, + { 0x0e6f, 0x0401, "Logic3 Controller", 0, XTYPE_XBOX360 }, +@@ -313,6 +321,9 @@ static const struct xpad_device { + { 0x1bad, 0xfa01, "MadCatz GamePad", 0, XTYPE_XBOX360 }, + { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 }, + { 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 }, ++ { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE }, ++ { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 }, ++ { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE }, + { 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 }, + { 0x24c6, 0x5300, "PowerA MINI PROEX Controller", 0, XTYPE_XBOX360 }, + { 0x24c6, 0x5303, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 }, +@@ -446,8 +457,12 @@ static const struct usb_device_id xpad_table[] = { + XPAD_XBOX360_VENDOR(0x162e), /* Joytech X-Box 360 controllers */ + XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */ + XPAD_XBOX360_VENDOR(0x1bad), /* Harminix Rock Band Guitar and Drums */ ++ XPAD_XBOX360_VENDOR(0x20d6), /* PowerA Controllers */ ++ XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA Controllers */ + XPAD_XBOX360_VENDOR(0x24c6), /* PowerA Controllers */ + XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */ ++ XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke X-Box One 
pad */ ++ XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */ + { } + }; + +diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h +index fa07be0b4500e..2317f8d3fef6f 100644 +--- a/drivers/input/serio/i8042-x86ia64io.h ++++ b/drivers/input/serio/i8042-x86ia64io.h +@@ -223,6 +223,8 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = { + DMI_MATCH(DMI_SYS_VENDOR, "PEGATRON CORPORATION"), + DMI_MATCH(DMI_PRODUCT_NAME, "C15B"), + }, ++ }, ++ { + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "ByteSpeed LLC"), + DMI_MATCH(DMI_PRODUCT_NAME, "ByteSpeed Laptop C15B"), +diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c +index 8651bd30863d4..f9416535f79d8 100644 +--- a/drivers/mmc/core/sdio_cis.c ++++ b/drivers/mmc/core/sdio_cis.c +@@ -24,6 +24,8 @@ + #include "sdio_cis.h" + #include "sdio_ops.h" + ++#define SDIO_READ_CIS_TIMEOUT_MS (10 * 1000) /* 10s */ ++ + static int cistpl_vers_1(struct mmc_card *card, struct sdio_func *func, + const unsigned char *buf, unsigned size) + { +@@ -263,6 +265,8 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func) + + do { + unsigned char tpl_code, tpl_link; ++ unsigned long timeout = jiffies + ++ msecs_to_jiffies(SDIO_READ_CIS_TIMEOUT_MS); + + ret = mmc_io_rw_direct(card, 0, 0, ptr++, 0, &tpl_code); + if (ret) +@@ -315,6 +319,8 @@ static int sdio_read_cis(struct mmc_card *card, struct sdio_func *func) + prev = &this->next; + + if (ret == -ENOENT) { ++ if (time_after(jiffies, timeout)) ++ break; + /* warn about unknown tuples */ + pr_warn_ratelimited("%s: queuing unknown" + " CIS tuple 0x%02x (%u bytes)\n", +diff --git a/drivers/scsi/ibmvscsi/ibmvfc.c b/drivers/scsi/ibmvscsi/ibmvfc.c +index db80ab8335dfb..aa74f72e582ab 100644 +--- a/drivers/scsi/ibmvscsi/ibmvfc.c ++++ b/drivers/scsi/ibmvscsi/ibmvfc.c +@@ -2883,8 +2883,10 @@ static int ibmvfc_slave_configure(struct scsi_device *sdev) + unsigned long flags = 0; + + 
spin_lock_irqsave(shost->host_lock, flags); +- if (sdev->type == TYPE_DISK) ++ if (sdev->type == TYPE_DISK) { + sdev->allow_restart = 1; ++ blk_queue_rq_timeout(sdev->request_queue, 120 * HZ); ++ } + spin_unlock_irqrestore(shost->host_lock, flags); + return 0; + } +diff --git a/drivers/scsi/libfc/fc_exch.c b/drivers/scsi/libfc/fc_exch.c +index b20c575564e43..a088f74a157c7 100644 +--- a/drivers/scsi/libfc/fc_exch.c ++++ b/drivers/scsi/libfc/fc_exch.c +@@ -1577,8 +1577,13 @@ static void fc_exch_recv_seq_resp(struct fc_exch_mgr *mp, struct fc_frame *fp) + rc = fc_exch_done_locked(ep); + WARN_ON(fc_seq_exch(sp) != ep); + spin_unlock_bh(&ep->ex_lock); +- if (!rc) ++ if (!rc) { + fc_exch_delete(ep); ++ } else { ++ FC_EXCH_DBG(ep, "ep is completed already," ++ "hence skip calling the resp\n"); ++ goto skip_resp; ++ } + } + + /* +@@ -1597,6 +1602,7 @@ static void fc_exch_recv_seq_resp(struct fc_exch_mgr *mp, struct fc_frame *fp) + if (!fc_invoke_resp(ep, sp, fp)) + fc_frame_free(fp); + ++skip_resp: + fc_exch_release(ep); + return; + rel: +@@ -1841,10 +1847,16 @@ static void fc_exch_reset(struct fc_exch *ep) + + fc_exch_hold(ep); + +- if (!rc) ++ if (!rc) { + fc_exch_delete(ep); ++ } else { ++ FC_EXCH_DBG(ep, "ep is completed already," ++ "hence skip calling the resp\n"); ++ goto skip_resp; ++ } + + fc_invoke_resp(ep, sp, ERR_PTR(-FC_EX_CLOSED)); ++skip_resp: + fc_seq_set_resp(sp, NULL, ep->arg); + fc_exch_release(ep); + } +diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c +index 76701d6ce92c3..582099f4f449f 100644 +--- a/drivers/usb/class/usblp.c ++++ b/drivers/usb/class/usblp.c +@@ -1349,14 +1349,17 @@ static int usblp_set_protocol(struct usblp *usblp, int protocol) + if (protocol < USBLP_FIRST_PROTOCOL || protocol > USBLP_LAST_PROTOCOL) + return -EINVAL; + +- alts = usblp->protocol[protocol].alt_setting; +- if (alts < 0) +- return -EINVAL; +- r = usb_set_interface(usblp->dev, usblp->ifnum, alts); +- if (r < 0) { +- printk(KERN_ERR "usblp: can't set 
desired altsetting %d on interface %d\n", +- alts, usblp->ifnum); +- return r; ++ /* Don't unnecessarily set the interface if there's a single alt. */ ++ if (usblp->intf->num_altsetting > 1) { ++ alts = usblp->protocol[protocol].alt_setting; ++ if (alts < 0) ++ return -EINVAL; ++ r = usb_set_interface(usblp->dev, usblp->ifnum, alts); ++ if (r < 0) { ++ printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n", ++ alts, usblp->ifnum); ++ return r; ++ } + } + + usblp->bidir = (usblp->protocol[protocol].epread != NULL); +diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c +index e5ad717cba22f..135e97310f118 100644 +--- a/drivers/usb/dwc2/gadget.c ++++ b/drivers/usb/dwc2/gadget.c +@@ -871,7 +871,6 @@ static void dwc2_hsotg_complete_oursetup(struct usb_ep *ep, + static struct dwc2_hsotg_ep *ep_from_windex(struct dwc2_hsotg *hsotg, + u32 windex) + { +- struct dwc2_hsotg_ep *ep; + int dir = (windex & USB_DIR_IN) ? 1 : 0; + int idx = windex & 0x7F; + +@@ -881,12 +880,7 @@ static struct dwc2_hsotg_ep *ep_from_windex(struct dwc2_hsotg *hsotg, + if (idx > hsotg->num_of_eps) + return NULL; + +- ep = index_to_ep(hsotg, idx, dir); +- +- if (idx && ep->dir_in != dir) +- return NULL; +- +- return ep; ++ return index_to_ep(hsotg, idx, dir); + } + + /** +diff --git a/drivers/usb/gadget/legacy/ether.c b/drivers/usb/gadget/legacy/ether.c +index 31e9160223e9a..0b7229678b530 100644 +--- a/drivers/usb/gadget/legacy/ether.c ++++ b/drivers/usb/gadget/legacy/ether.c +@@ -407,8 +407,10 @@ static int eth_bind(struct usb_composite_dev *cdev) + struct usb_descriptor_header *usb_desc; + + usb_desc = usb_otg_descriptor_alloc(gadget); +- if (!usb_desc) ++ if (!usb_desc) { ++ status = -ENOMEM; + goto fail1; ++ } + usb_otg_descriptor_init(gadget, usb_desc); + otg_desc[0] = usb_desc; + otg_desc[1] = NULL; +diff --git a/drivers/usb/gadget/udc/udc-core.c b/drivers/usb/gadget/udc/udc-core.c +index a6a1678cb9276..c6859fdd74bc2 100644 +--- 
a/drivers/usb/gadget/udc/udc-core.c ++++ b/drivers/usb/gadget/udc/udc-core.c +@@ -612,10 +612,13 @@ static ssize_t usb_udc_softconn_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t n) + { + struct usb_udc *udc = container_of(dev, struct usb_udc, dev); ++ ssize_t ret; + ++ mutex_lock(&udc_lock); + if (!udc->driver) { + dev_err(dev, "soft-connect without a gadget driver\n"); +- return -EOPNOTSUPP; ++ ret = -EOPNOTSUPP; ++ goto out; + } + + if (sysfs_streq(buf, "connect")) { +@@ -627,10 +630,14 @@ static ssize_t usb_udc_softconn_store(struct device *dev, + usb_gadget_udc_stop(udc); + } else { + dev_err(dev, "unsupported command '%s'\n", buf); +- return -EINVAL; ++ ret = -EINVAL; ++ goto out; + } + +- return n; ++ ret = n; ++out: ++ mutex_unlock(&udc_lock); ++ return ret; + } + static DEVICE_ATTR(soft_connect, S_IWUSR, NULL, usb_udc_softconn_store); + +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c +index 13c718ebaee5b..ded4c8f2bba4e 100644 +--- a/drivers/usb/serial/cp210x.c ++++ b/drivers/usb/serial/cp210x.c +@@ -57,6 +57,7 @@ static const struct usb_device_id id_table[] = { + { USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */ + { USB_DEVICE(0x08FD, 0x000A) }, /* Digianswer A/S , ZigBee/802.15.4 MAC Device */ + { USB_DEVICE(0x0908, 0x01FF) }, /* Siemens RUGGEDCOM USB Serial Console */ ++ { USB_DEVICE(0x0988, 0x0578) }, /* Teraoka AD2000 */ + { USB_DEVICE(0x0B00, 0x3070) }, /* Ingenico 3070 */ + { USB_DEVICE(0x0BED, 0x1100) }, /* MEI (TM) Cashflow-SC Bill/Voucher Acceptor */ + { USB_DEVICE(0x0BED, 0x1101) }, /* MEI series 2000 Combo Acceptor */ +@@ -197,6 +198,7 @@ static const struct usb_device_id id_table[] = { + { USB_DEVICE(0x1901, 0x0194) }, /* GE Healthcare Remote Alarm Box */ + { USB_DEVICE(0x1901, 0x0195) }, /* GE B850/B650/B450 CP2104 DP UART interface */ + { USB_DEVICE(0x1901, 0x0196) }, /* GE B850 CP2105 DP UART interface */ ++ { USB_DEVICE(0x199B, 0xBA30) }, /* LORD 
WSDA-200-USB */ + { USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */ + { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ + { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 1998b314368e0..3c536eed07541 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -425,6 +425,8 @@ static void option_instat_callback(struct urb *urb); + #define CINTERION_PRODUCT_AHXX_2RMNET 0x0084 + #define CINTERION_PRODUCT_AHXX_AUDIO 0x0085 + #define CINTERION_PRODUCT_CLS8 0x00b0 ++#define CINTERION_PRODUCT_MV31_MBIM 0x00b3 ++#define CINTERION_PRODUCT_MV31_RMNET 0x00b7 + + /* Olivetti products */ + #define OLIVETTI_VENDOR_ID 0x0b3c +@@ -1896,6 +1898,10 @@ static const struct usb_device_id option_ids[] = { + { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC25_MDMNET) }, + { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDM) }, /* HC28 enumerates with Siemens or Cinterion VID depending on FW revision */ + { USB_DEVICE(SIEMENS_VENDOR_ID, CINTERION_PRODUCT_HC28_MDMNET) }, ++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_MBIM, 0xff), ++ .driver_info = RSVD(3)}, ++ { USB_DEVICE_INTERFACE_CLASS(CINTERION_VENDOR_ID, CINTERION_PRODUCT_MV31_RMNET, 0xff), ++ .driver_info = RSVD(0)}, + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100), + .driver_info = RSVD(4) }, + { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD120), +diff --git a/fs/Kconfig.binfmt b/fs/Kconfig.binfmt +index 2d0cbbd14cfc8..72c03354c14bf 100644 +--- a/fs/Kconfig.binfmt ++++ b/fs/Kconfig.binfmt +@@ -1,6 +1,7 @@ + config BINFMT_ELF + bool "Kernel support for ELF binaries" + depends on MMU && (BROKEN || !FRV) ++ select ELFCORE + default y + ---help--- + ELF (Executable and Linkable Format) is a format for libraries and +@@ -26,6 +27,7 @@ config BINFMT_ELF + config COMPAT_BINFMT_ELF + bool + depends on COMPAT && BINFMT_ELF ++ 
select ELFCORE + + config ARCH_BINFMT_ELF_STATE + bool +@@ -34,6 +36,7 @@ config BINFMT_ELF_FDPIC + bool "Kernel support for FDPIC ELF binaries" + default y + depends on (FRV || BLACKFIN || (SUPERH32 && !MMU) || C6X) ++ select ELFCORE + help + ELF FDPIC binaries are based on ELF, but allow the individual load + segments of a binary to be located in memory independently of each +@@ -43,6 +46,11 @@ config BINFMT_ELF_FDPIC + + It is also possible to run FDPIC ELF binaries on MMU linux also. + ++config ELFCORE ++ bool ++ help ++ This option enables kernel/elfcore.o. ++ + config CORE_DUMP_DEFAULT_ELF_HEADERS + bool "Write ELF core dumps with partial segments" + default y +diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c +index be16da31cbccf..9f1641324a811 100644 +--- a/fs/cifs/dir.c ++++ b/fs/cifs/dir.c +@@ -831,6 +831,7 @@ static int + cifs_d_revalidate(struct dentry *direntry, unsigned int flags) + { + struct inode *inode; ++ int rc; + + if (flags & LOOKUP_RCU) + return -ECHILD; +@@ -840,8 +841,25 @@ cifs_d_revalidate(struct dentry *direntry, unsigned int flags) + if ((flags & LOOKUP_REVAL) && !CIFS_CACHE_READ(CIFS_I(inode))) + CIFS_I(inode)->time = 0; /* force reval */ + +- if (cifs_revalidate_dentry(direntry)) +- return 0; ++ rc = cifs_revalidate_dentry(direntry); ++ if (rc) { ++ cifs_dbg(FYI, "cifs_revalidate_dentry failed with rc=%d", rc); ++ switch (rc) { ++ case -ENOENT: ++ case -ESTALE: ++ /* ++ * Those errors mean the dentry is invalid ++ * (file was deleted or recreated) ++ */ ++ return 0; ++ default: ++ /* ++ * Otherwise some unexpected error happened ++ * report it as-is to VFS layer ++ */ ++ return rc; ++ } ++ } + else { + /* + * If the inode wasn't known to be a dfs entry when +diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c +index 937c6ee1786f9..b743aa5bce0d2 100644 +--- a/fs/hugetlbfs/inode.c ++++ b/fs/hugetlbfs/inode.c +@@ -661,8 +661,9 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset, + + 
mutex_unlock(&hugetlb_fault_mutex_table[hash]); + ++ set_page_huge_active(page); + /* +- * page_put due to reference from alloc_huge_page() ++ * put_page() due to reference from alloc_huge_page() + * unlock_page because locked by add_to_page_cache() + */ + put_page(page); +diff --git a/include/linux/elfcore.h b/include/linux/elfcore.h +index 698d51a0eea3f..4adf7faeaeb59 100644 +--- a/include/linux/elfcore.h ++++ b/include/linux/elfcore.h +@@ -55,6 +55,7 @@ static inline int elf_core_copy_task_xfpregs(struct task_struct *t, elf_fpxregse + } + #endif + ++#if defined(CONFIG_UM) || defined(CONFIG_IA64) + /* + * These functions parameterize elf_core_dump in fs/binfmt_elf.c to write out + * extra segments containing the gate DSO contents. Dumping its +@@ -69,5 +70,26 @@ elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset); + extern int + elf_core_write_extra_data(struct coredump_params *cprm); + extern size_t elf_core_extra_data_size(void); ++#else ++static inline Elf_Half elf_core_extra_phdrs(void) ++{ ++ return 0; ++} ++ ++static inline int elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset) ++{ ++ return 1; ++} ++ ++static inline int elf_core_write_extra_data(struct coredump_params *cprm) ++{ ++ return 1; ++} ++ ++static inline size_t elf_core_extra_data_size(void) ++{ ++ return 0; ++} ++#endif + + #endif /* _LINUX_ELFCORE_H */ +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h +index cc185525a94ba..c4a4a39a458dc 100644 +--- a/include/linux/hugetlb.h ++++ b/include/linux/hugetlb.h +@@ -506,6 +506,9 @@ static inline void hugetlb_count_sub(long l, struct mm_struct *mm) + { + atomic_long_sub(l, &mm->hugetlb_usage); + } ++ ++void set_page_huge_active(struct page *page); ++ + #else /* CONFIG_HUGETLB_PAGE */ + struct hstate {}; + #define alloc_huge_page(v, a, r) NULL +diff --git a/kernel/Makefile b/kernel/Makefile +index a672bece1f499..8b73d57804f23 100644 +--- a/kernel/Makefile ++++ b/kernel/Makefile +@@ -77,9 +77,6 @@ 
obj-$(CONFIG_TASK_DELAY_ACCT) += delayacct.o + obj-$(CONFIG_TASKSTATS) += taskstats.o tsacct.o + obj-$(CONFIG_TRACEPOINTS) += tracepoint.o + obj-$(CONFIG_LATENCYTOP) += latencytop.o +-obj-$(CONFIG_BINFMT_ELF) += elfcore.o +-obj-$(CONFIG_COMPAT_BINFMT_ELF) += elfcore.o +-obj-$(CONFIG_BINFMT_ELF_FDPIC) += elfcore.o + obj-$(CONFIG_FUNCTION_TRACER) += trace/ + obj-$(CONFIG_TRACING) += trace/ + obj-$(CONFIG_TRACE_CLOCK) += trace/ +diff --git a/kernel/elfcore.c b/kernel/elfcore.c +deleted file mode 100644 +index a2b29b9bdfcb2..0000000000000 +--- a/kernel/elfcore.c ++++ /dev/null +@@ -1,25 +0,0 @@ +-#include +-#include +-#include +-#include +-#include +- +-Elf_Half __weak elf_core_extra_phdrs(void) +-{ +- return 0; +-} +- +-int __weak elf_core_write_extra_phdrs(struct coredump_params *cprm, loff_t offset) +-{ +- return 1; +-} +- +-int __weak elf_core_write_extra_data(struct coredump_params *cprm) +-{ +- return 1; +-} +- +-size_t __weak elf_core_extra_data_size(void) +-{ +- return 0; +-} +diff --git a/kernel/futex.c b/kernel/futex.c +index f1990e2a51e5a..199e63c5b6120 100644 +--- a/kernel/futex.c ++++ b/kernel/futex.c +@@ -835,6 +835,29 @@ static struct futex_pi_state * alloc_pi_state(void) + return pi_state; + } + ++static void pi_state_update_owner(struct futex_pi_state *pi_state, ++ struct task_struct *new_owner) ++{ ++ struct task_struct *old_owner = pi_state->owner; ++ ++ lockdep_assert_held(&pi_state->pi_mutex.wait_lock); ++ ++ if (old_owner) { ++ raw_spin_lock(&old_owner->pi_lock); ++ WARN_ON(list_empty(&pi_state->list)); ++ list_del_init(&pi_state->list); ++ raw_spin_unlock(&old_owner->pi_lock); ++ } ++ ++ if (new_owner) { ++ raw_spin_lock(&new_owner->pi_lock); ++ WARN_ON(!list_empty(&pi_state->list)); ++ list_add(&pi_state->list, &new_owner->pi_state_list); ++ pi_state->owner = new_owner; ++ raw_spin_unlock(&new_owner->pi_lock); ++ } ++} ++ + /* + * Must be called with the hb lock held. 
+ */ +@@ -851,11 +874,8 @@ static void free_pi_state(struct futex_pi_state *pi_state) + * and has cleaned up the pi_state already + */ + if (pi_state->owner) { +- raw_spin_lock_irq(&pi_state->owner->pi_lock); +- list_del_init(&pi_state->list); +- raw_spin_unlock_irq(&pi_state->owner->pi_lock); +- +- rt_mutex_proxy_unlock(&pi_state->pi_mutex, pi_state->owner); ++ pi_state_update_owner(pi_state, NULL); ++ rt_mutex_proxy_unlock(&pi_state->pi_mutex); + } + + if (current->pi_state_cache) +@@ -936,7 +956,7 @@ static void exit_pi_state_list(struct task_struct *curr) + pi_state->owner = NULL; + raw_spin_unlock_irq(&curr->pi_lock); + +- rt_mutex_unlock(&pi_state->pi_mutex); ++ rt_mutex_futex_unlock(&pi_state->pi_mutex); + + spin_unlock(&hb->lock); + +@@ -992,7 +1012,8 @@ static void exit_pi_state_list(struct task_struct *curr) + * FUTEX_OWNER_DIED bit. See [4] + * + * [10] There is no transient state which leaves owner and user space +- * TID out of sync. ++ * TID out of sync. Except one error case where the kernel is denied ++ * write access to the user address, see fixup_pi_state_owner(). + */ + + /* +@@ -1389,12 +1410,19 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this, + new_owner = rt_mutex_next_owner(&pi_state->pi_mutex); + + /* +- * It is possible that the next waiter (the one that brought +- * this owner to the kernel) timed out and is no longer +- * waiting on the lock. ++ * When we interleave with futex_lock_pi() where it does ++ * rt_mutex_timed_futex_lock(), we might observe @this futex_q waiter, ++ * but the rt_mutex's wait_list can be empty (either still, or again, ++ * depending on which side we land). ++ * ++ * When this happens, give up our locks and try again, giving the ++ * futex_lock_pi() instance time to complete, either by waiting on the ++ * rtmutex or removing itself from the futex queue. 
+ */ +- if (!new_owner) +- new_owner = this->task; ++ if (!new_owner) { ++ raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); ++ return -EAGAIN; ++ } + + /* + * We pass it to the next owner. The WAITERS bit is always +@@ -1420,36 +1448,24 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this, + else + ret = -EINVAL; + } +- if (ret) { +- raw_spin_unlock(&pi_state->pi_mutex.wait_lock); +- return ret; +- } +- +- raw_spin_lock_irq(&pi_state->owner->pi_lock); +- WARN_ON(list_empty(&pi_state->list)); +- list_del_init(&pi_state->list); +- raw_spin_unlock_irq(&pi_state->owner->pi_lock); +- +- raw_spin_lock_irq(&new_owner->pi_lock); +- WARN_ON(!list_empty(&pi_state->list)); +- list_add(&pi_state->list, &new_owner->pi_state_list); +- pi_state->owner = new_owner; +- raw_spin_unlock_irq(&new_owner->pi_lock); +- +- raw_spin_unlock(&pi_state->pi_mutex.wait_lock); + +- deboost = rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q); ++ if (!ret) { ++ /* ++ * This is a point of no return; once we modified the uval ++ * there is no going back and subsequent operations must ++ * not fail. ++ */ ++ pi_state_update_owner(pi_state, new_owner); ++ deboost = __rt_mutex_futex_unlock(&pi_state->pi_mutex, &wake_q); ++ } + +- /* +- * First unlock HB so the waiter does not spin on it once he got woken +- * up. Second wake up the waiter before the priority is adjusted. If we +- * deboost first (and lose our higher priority), then the task might get +- * scheduled away before the wake up can take place. +- */ ++ raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); + spin_unlock(&hb->lock); +- wake_up_q(&wake_q); +- if (deboost) ++ ++ if (deboost) { ++ wake_up_q(&wake_q); + rt_mutex_adjust_prio(current); ++ } + + return 0; + } +@@ -2222,30 +2238,32 @@ static void unqueue_me_pi(struct futex_q *q) + spin_unlock(q->lock_ptr); + } + +-/* +- * Fixup the pi_state owner with the new owner. 
+- * +- * Must be called with hash bucket lock held and mm->sem held for non +- * private futexes. +- */ +-static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, +- struct task_struct *newowner) ++static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, ++ struct task_struct *argowner) + { +- u32 newtid = task_pid_vnr(newowner) | FUTEX_WAITERS; + struct futex_pi_state *pi_state = q->pi_state; +- struct task_struct *oldowner = pi_state->owner; +- u32 uval, uninitialized_var(curval), newval; +- int ret; ++ struct task_struct *oldowner, *newowner; ++ u32 uval, curval, newval, newtid; ++ int err = 0; ++ ++ oldowner = pi_state->owner; + + /* Owner died? */ + if (!pi_state->owner) + newtid |= FUTEX_OWNER_DIED; + + /* +- * We are here either because we stole the rtmutex from the +- * previous highest priority waiter or we are the highest priority +- * waiter but failed to get the rtmutex the first time. +- * We have to replace the newowner TID in the user space variable. ++ * We are here because either: ++ * ++ * - we stole the lock and pi_state->owner needs updating to reflect ++ * that (@argowner == current), ++ * ++ * or: ++ * ++ * - someone stole our lock and we need to fix things to point to the ++ * new owner (@argowner == NULL). ++ * ++ * Either way, we have to replace the TID in the user space variable. + * This must be atomic as we have to preserve the owner died bit here. + * + * Note: We write the user space value _before_ changing the pi_state +@@ -2259,6 +2277,39 @@ static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, + * in lookup_pi_state. + */ + retry: ++ if (!argowner) { ++ if (oldowner != current) { ++ /* ++ * We raced against a concurrent self; things are ++ * already fixed up. Nothing to do. ++ */ ++ return 0; ++ } ++ ++ if (__rt_mutex_futex_trylock(&pi_state->pi_mutex)) { ++ /* We got the lock after all, nothing to fix. 
*/ ++ return 1; ++ } ++ ++ /* ++ * Since we just failed the trylock; there must be an owner. ++ */ ++ newowner = rt_mutex_owner(&pi_state->pi_mutex); ++ BUG_ON(!newowner); ++ } else { ++ WARN_ON_ONCE(argowner != current); ++ if (oldowner == current) { ++ /* ++ * We raced against a concurrent self; things are ++ * already fixed up. Nothing to do. ++ */ ++ return 1; ++ } ++ newowner = argowner; ++ } ++ ++ newtid = task_pid_vnr(newowner) | FUTEX_WAITERS; ++ + if (get_futex_value_locked(&uval, uaddr)) + goto handle_fault; + +@@ -2276,19 +2327,8 @@ retry: + * We fixed up user space. Now we need to fix the pi_state + * itself. + */ +- if (pi_state->owner != NULL) { +- raw_spin_lock_irq(&pi_state->owner->pi_lock); +- WARN_ON(list_empty(&pi_state->list)); +- list_del_init(&pi_state->list); +- raw_spin_unlock_irq(&pi_state->owner->pi_lock); +- } +- +- pi_state->owner = newowner; ++ pi_state_update_owner(pi_state, newowner); + +- raw_spin_lock_irq(&newowner->pi_lock); +- WARN_ON(!list_empty(&pi_state->list)); +- list_add(&pi_state->list, &newowner->pi_state_list); +- raw_spin_unlock_irq(&newowner->pi_lock); + return 0; + + /* +@@ -2304,7 +2344,7 @@ retry: + handle_fault: + spin_unlock(q->lock_ptr); + +- ret = fault_in_user_writeable(uaddr); ++ err = fault_in_user_writeable(uaddr); + + spin_lock(q->lock_ptr); + +@@ -2312,12 +2352,45 @@ handle_fault: + * Check if someone else fixed it for us: + */ + if (pi_state->owner != oldowner) +- return 0; ++ return argowner == current; + +- if (ret) +- return ret; ++ /* Retry if err was -EAGAIN or the fault in succeeded */ ++ if (!err) ++ goto retry; + +- goto retry; ++ /* ++ * fault_in_user_writeable() failed so user state is immutable. At ++ * best we can make the kernel state consistent but user state will ++ * be most likely hosed and any subsequent unlock operation will be ++ * rejected due to PI futex rule [10]. 
++ * ++ * Ensure that the rtmutex owner is also the pi_state owner despite ++ * the user space value claiming something different. There is no ++ * point in unlocking the rtmutex if current is the owner as it ++ * would need to wait until the next waiter has taken the rtmutex ++ * to guarantee consistent state. Keep it simple. Userspace asked ++ * for this wreckaged state. ++ * ++ * The rtmutex has an owner - either current or some other ++ * task. See the EAGAIN loop above. ++ */ ++ pi_state_update_owner(pi_state, rt_mutex_owner(&pi_state->pi_mutex)); ++ ++ return err; ++} ++ ++static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q, ++ struct task_struct *argowner) ++{ ++ struct futex_pi_state *pi_state = q->pi_state; ++ int ret; ++ ++ lockdep_assert_held(q->lock_ptr); ++ ++ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock); ++ ret = __fixup_pi_state_owner(uaddr, q, argowner); ++ raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); ++ return ret; + } + + static long futex_wait_restart(struct restart_block *restart); +@@ -2339,13 +2412,16 @@ static long futex_wait_restart(struct restart_block *restart); + */ + static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked) + { +- struct task_struct *owner; + int ret = 0; + + if (locked) { + /* + * Got the lock. We might not be the anticipated owner if we + * did a lock-steal - fix up the PI-state in that case: ++ * ++ * Speculative pi_state->owner read (we don't hold wait_lock); ++ * since we own the lock pi_state->owner == current is the ++ * stable state, anything else needs more attention. + */ + if (q->pi_state->owner != current) + ret = fixup_pi_state_owner(uaddr, q, current); +@@ -2353,43 +2429,24 @@ static int fixup_owner(u32 __user *uaddr, struct futex_q *q, int locked) + } + + /* +- * Catch the rare case, where the lock was released when we were on the +- * way back before we locked the hash bucket. ++ * If we didn't get the lock; check if anybody stole it from us. 
In ++ * that case, we need to fix up the uval to point to them instead of ++ * us, otherwise bad things happen. [10] ++ * ++ * Another speculative read; pi_state->owner == current is unstable ++ * but needs our attention. + */ + if (q->pi_state->owner == current) { +- /* +- * Try to get the rt_mutex now. This might fail as some other +- * task acquired the rt_mutex after we removed ourself from the +- * rt_mutex waiters list. +- */ +- if (rt_mutex_trylock(&q->pi_state->pi_mutex)) { +- locked = 1; +- goto out; +- } +- +- /* +- * pi_state is incorrect, some other task did a lock steal and +- * we returned due to timeout or signal without taking the +- * rt_mutex. Too late. +- */ +- raw_spin_lock(&q->pi_state->pi_mutex.wait_lock); +- owner = rt_mutex_owner(&q->pi_state->pi_mutex); +- if (!owner) +- owner = rt_mutex_next_owner(&q->pi_state->pi_mutex); +- raw_spin_unlock(&q->pi_state->pi_mutex.wait_lock); +- ret = fixup_pi_state_owner(uaddr, q, owner); ++ ret = fixup_pi_state_owner(uaddr, q, NULL); + goto out; + } + + /* + * Paranoia check. If we did not take the lock, then we should not be +- * the owner of the rt_mutex. ++ * the owner of the rt_mutex. Warn and establish consistent state. + */ +- if (rt_mutex_owner(&q->pi_state->pi_mutex) == current) +- printk(KERN_ERR "fixup_owner: ret = %d pi-mutex: %p " +- "pi-state %p\n", ret, +- q->pi_state->pi_mutex.owner, +- q->pi_state->owner); ++ if (WARN_ON_ONCE(rt_mutex_owner(&q->pi_state->pi_mutex) == current)) ++ return fixup_pi_state_owner(uaddr, q, current); + + out: + return ret ? ret : locked; +@@ -2686,7 +2743,7 @@ retry_private: + if (!trylock) { + ret = rt_mutex_timed_futex_lock(&q.pi_state->pi_mutex, to); + } else { +- ret = rt_mutex_trylock(&q.pi_state->pi_mutex); ++ ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex); + /* Fixup the trylock return value: */ + ret = ret ? 0 : -EWOULDBLOCK; + } +@@ -2704,13 +2761,6 @@ retry_private: + if (res) + ret = (res < 0) ? 
res : 0; + +- /* +- * If fixup_owner() faulted and was unable to handle the fault, unlock +- * it and return the fault to userspace. +- */ +- if (ret && (rt_mutex_owner(&q.pi_state->pi_mutex) == current)) +- rt_mutex_unlock(&q.pi_state->pi_mutex); +- + /* Unqueue and drop the lock */ + unqueue_me_pi(&q); + +@@ -3015,8 +3065,6 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, + if (q.pi_state && (q.pi_state->owner != current)) { + spin_lock(q.lock_ptr); + ret = fixup_pi_state_owner(uaddr2, &q, current); +- if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current) +- rt_mutex_unlock(&q.pi_state->pi_mutex); + /* + * Drop the reference to the pi state which + * the requeue_pi() code acquired for us. +@@ -3053,14 +3101,6 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, + if (res) + ret = (res < 0) ? res : 0; + +- /* +- * If fixup_pi_state_owner() faulted and was unable to handle +- * the fault, unlock the rt_mutex and return the fault to +- * userspace. +- */ +- if (ret && rt_mutex_owner(pi_mutex) == current) +- rt_mutex_unlock(pi_mutex); +- + /* Unqueue and drop the lock. 
*/ + unqueue_me_pi(&q); + } +diff --git a/kernel/kprobes.c b/kernel/kprobes.c +index 33c37dbc56a05..90f46c8aa9007 100644 +--- a/kernel/kprobes.c ++++ b/kernel/kprobes.c +@@ -1884,6 +1884,10 @@ int register_kretprobe(struct kretprobe *rp) + int i; + void *addr; + ++ /* If only rp->kp.addr is specified, check reregistering kprobes */ ++ if (rp->kp.addr && check_kprobe_rereg(&rp->kp)) ++ return -EINVAL; ++ + if (kretprobe_blacklist_size) { + addr = kprobe_addr(&rp->kp); + if (IS_ERR(addr)) +diff --git a/kernel/locking/rtmutex-debug.c b/kernel/locking/rtmutex-debug.c +index 62b6cee8ea7f9..0613c4b1d0596 100644 +--- a/kernel/locking/rtmutex-debug.c ++++ b/kernel/locking/rtmutex-debug.c +@@ -173,12 +173,3 @@ void debug_rt_mutex_init(struct rt_mutex *lock, const char *name) + lock->name = name; + } + +-void +-rt_mutex_deadlock_account_lock(struct rt_mutex *lock, struct task_struct *task) +-{ +-} +- +-void rt_mutex_deadlock_account_unlock(struct task_struct *task) +-{ +-} +- +diff --git a/kernel/locking/rtmutex-debug.h b/kernel/locking/rtmutex-debug.h +index d0519c3432b67..b585af9a1b508 100644 +--- a/kernel/locking/rtmutex-debug.h ++++ b/kernel/locking/rtmutex-debug.h +@@ -9,9 +9,6 @@ + * This file contains macros used solely by rtmutex.c. Debug version. 
+ */ + +-extern void +-rt_mutex_deadlock_account_lock(struct rt_mutex *lock, struct task_struct *task); +-extern void rt_mutex_deadlock_account_unlock(struct task_struct *task); + extern void debug_rt_mutex_init_waiter(struct rt_mutex_waiter *waiter); + extern void debug_rt_mutex_free_waiter(struct rt_mutex_waiter *waiter); + extern void debug_rt_mutex_init(struct rt_mutex *lock, const char *name); +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index dd173df9ee5e5..1c0cb5c3c6ad6 100644 +--- a/kernel/locking/rtmutex.c ++++ b/kernel/locking/rtmutex.c +@@ -937,8 +937,6 @@ takeit: + */ + rt_mutex_set_owner(lock, task); + +- rt_mutex_deadlock_account_lock(lock, task); +- + return 1; + } + +@@ -1286,6 +1284,19 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state, + return ret; + } + ++static inline int __rt_mutex_slowtrylock(struct rt_mutex *lock) ++{ ++ int ret = try_to_take_rt_mutex(lock, current, NULL); ++ ++ /* ++ * try_to_take_rt_mutex() sets the lock waiters bit ++ * unconditionally. Clean this up. ++ */ ++ fixup_rt_mutex_waiters(lock); ++ ++ return ret; ++} ++ + /* + * Slow path try-lock function: + */ +@@ -1307,13 +1318,7 @@ static inline int rt_mutex_slowtrylock(struct rt_mutex *lock) + */ + raw_spin_lock(&lock->wait_lock); + +- ret = try_to_take_rt_mutex(lock, current, NULL); +- +- /* +- * try_to_take_rt_mutex() sets the lock waiters bit +- * unconditionally. Clean this up. +- */ +- fixup_rt_mutex_waiters(lock); ++ ret = __rt_mutex_slowtrylock(lock); + + raw_spin_unlock(&lock->wait_lock); + +@@ -1331,8 +1336,6 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock, + + debug_rt_mutex_unlock(lock); + +- rt_mutex_deadlock_account_unlock(current); +- + /* + * We must be careful here if the fast path is enabled. 
If we + * have no waiters queued we cannot set owner to NULL here +@@ -1398,11 +1401,10 @@ rt_mutex_fastlock(struct rt_mutex *lock, int state, + struct hrtimer_sleeper *timeout, + enum rtmutex_chainwalk chwalk)) + { +- if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) { +- rt_mutex_deadlock_account_lock(lock, current); ++ if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) + return 0; +- } else +- return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK); ++ ++ return slowfn(lock, state, NULL, RT_MUTEX_MIN_CHAINWALK); + } + + static inline int +@@ -1414,21 +1416,19 @@ rt_mutex_timed_fastlock(struct rt_mutex *lock, int state, + enum rtmutex_chainwalk chwalk)) + { + if (chwalk == RT_MUTEX_MIN_CHAINWALK && +- likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) { +- rt_mutex_deadlock_account_lock(lock, current); ++ likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) + return 0; +- } else +- return slowfn(lock, state, timeout, chwalk); ++ ++ return slowfn(lock, state, timeout, chwalk); + } + + static inline int + rt_mutex_fasttrylock(struct rt_mutex *lock, + int (*slowfn)(struct rt_mutex *lock)) + { +- if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) { +- rt_mutex_deadlock_account_lock(lock, current); ++ if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current))) + return 1; +- } ++ + return slowfn(lock); + } + +@@ -1438,19 +1438,18 @@ rt_mutex_fastunlock(struct rt_mutex *lock, + struct wake_q_head *wqh)) + { + WAKE_Q(wake_q); ++ bool deboost; + +- if (likely(rt_mutex_cmpxchg_release(lock, current, NULL))) { +- rt_mutex_deadlock_account_unlock(current); ++ if (likely(rt_mutex_cmpxchg_release(lock, current, NULL))) ++ return; + +- } else { +- bool deboost = slowfn(lock, &wake_q); ++ deboost = slowfn(lock, &wake_q); + +- wake_up_q(&wake_q); ++ wake_up_q(&wake_q); + +- /* Undo pi boosting if necessary: */ +- if (deboost) +- rt_mutex_adjust_prio(current); +- } ++ /* Undo pi boosting if necessary: */ ++ if (deboost) ++ 
rt_mutex_adjust_prio(current); + } + + /** +@@ -1485,15 +1484,28 @@ EXPORT_SYMBOL_GPL(rt_mutex_lock_interruptible); + + /* + * Futex variant with full deadlock detection. ++ * Futex variants must not use the fast-path, see __rt_mutex_futex_unlock(). + */ +-int rt_mutex_timed_futex_lock(struct rt_mutex *lock, ++int __sched rt_mutex_timed_futex_lock(struct rt_mutex *lock, + struct hrtimer_sleeper *timeout) + { + might_sleep(); + +- return rt_mutex_timed_fastlock(lock, TASK_INTERRUPTIBLE, timeout, +- RT_MUTEX_FULL_CHAINWALK, +- rt_mutex_slowlock); ++ return rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, ++ timeout, RT_MUTEX_FULL_CHAINWALK); ++} ++ ++/* ++ * Futex variant, must not use fastpath. ++ */ ++int __sched rt_mutex_futex_trylock(struct rt_mutex *lock) ++{ ++ return rt_mutex_slowtrylock(lock); ++} ++ ++int __sched __rt_mutex_futex_trylock(struct rt_mutex *lock) ++{ ++ return __rt_mutex_slowtrylock(lock); + } + + /** +@@ -1552,20 +1564,38 @@ void __sched rt_mutex_unlock(struct rt_mutex *lock) + EXPORT_SYMBOL_GPL(rt_mutex_unlock); + + /** +- * rt_mutex_futex_unlock - Futex variant of rt_mutex_unlock +- * @lock: the rt_mutex to be unlocked +- * +- * Returns: true/false indicating whether priority adjustment is +- * required or not. ++ * Futex variant, that since futex variants do not use the fast-path, can be ++ * simple and will not need to retry. 
+ */ +-bool __sched rt_mutex_futex_unlock(struct rt_mutex *lock, +- struct wake_q_head *wqh) ++bool __sched __rt_mutex_futex_unlock(struct rt_mutex *lock, ++ struct wake_q_head *wake_q) + { +- if (likely(rt_mutex_cmpxchg_release(lock, current, NULL))) { +- rt_mutex_deadlock_account_unlock(current); +- return false; ++ lockdep_assert_held(&lock->wait_lock); ++ ++ debug_rt_mutex_unlock(lock); ++ ++ if (!rt_mutex_has_waiters(lock)) { ++ lock->owner = NULL; ++ return false; /* done */ ++ } ++ ++ mark_wakeup_next_waiter(wake_q, lock); ++ return true; /* deboost and wakeups */ ++} ++ ++void __sched rt_mutex_futex_unlock(struct rt_mutex *lock) ++{ ++ WAKE_Q(wake_q); ++ bool deboost; ++ ++ raw_spin_lock_irq(&lock->wait_lock); ++ deboost = __rt_mutex_futex_unlock(lock, &wake_q); ++ raw_spin_unlock_irq(&lock->wait_lock); ++ ++ if (deboost) { ++ wake_up_q(&wake_q); ++ rt_mutex_adjust_prio(current); + } +- return rt_mutex_slowunlock(lock, wqh); + } + + /** +@@ -1622,7 +1652,6 @@ void rt_mutex_init_proxy_locked(struct rt_mutex *lock, + __rt_mutex_init(lock, NULL); + debug_rt_mutex_proxy_lock(lock, proxy_owner); + rt_mutex_set_owner(lock, proxy_owner); +- rt_mutex_deadlock_account_lock(lock, proxy_owner); + } + + /** +@@ -1633,12 +1662,10 @@ void rt_mutex_init_proxy_locked(struct rt_mutex *lock, + * No locking. 
Caller has to do serializing itself + * Special API call for PI-futex support + */ +-void rt_mutex_proxy_unlock(struct rt_mutex *lock, +- struct task_struct *proxy_owner) ++void rt_mutex_proxy_unlock(struct rt_mutex *lock) + { + debug_rt_mutex_proxy_unlock(lock); + rt_mutex_set_owner(lock, NULL); +- rt_mutex_deadlock_account_unlock(proxy_owner); + } + + /** +diff --git a/kernel/locking/rtmutex.h b/kernel/locking/rtmutex.h +index c4060584c4076..6607802efa8bd 100644 +--- a/kernel/locking/rtmutex.h ++++ b/kernel/locking/rtmutex.h +@@ -11,8 +11,6 @@ + */ + + #define rt_mutex_deadlock_check(l) (0) +-#define rt_mutex_deadlock_account_lock(m, t) do { } while (0) +-#define rt_mutex_deadlock_account_unlock(l) do { } while (0) + #define debug_rt_mutex_init_waiter(w) do { } while (0) + #define debug_rt_mutex_free_waiter(w) do { } while (0) + #define debug_rt_mutex_lock(l) do { } while (0) +diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h +index 6f8f68edb700c..4584db96265d4 100644 +--- a/kernel/locking/rtmutex_common.h ++++ b/kernel/locking/rtmutex_common.h +@@ -101,8 +101,7 @@ enum rtmutex_chainwalk { + extern struct task_struct *rt_mutex_next_owner(struct rt_mutex *lock); + extern void rt_mutex_init_proxy_locked(struct rt_mutex *lock, + struct task_struct *proxy_owner); +-extern void rt_mutex_proxy_unlock(struct rt_mutex *lock, +- struct task_struct *proxy_owner); ++extern void rt_mutex_proxy_unlock(struct rt_mutex *lock); + extern int rt_mutex_start_proxy_lock(struct rt_mutex *lock, + struct rt_mutex_waiter *waiter, + struct task_struct *task); +@@ -112,8 +111,13 @@ extern int rt_mutex_wait_proxy_lock(struct rt_mutex *lock, + extern bool rt_mutex_cleanup_proxy_lock(struct rt_mutex *lock, + struct rt_mutex_waiter *waiter); + extern int rt_mutex_timed_futex_lock(struct rt_mutex *l, struct hrtimer_sleeper *to); +-extern bool rt_mutex_futex_unlock(struct rt_mutex *lock, +- struct wake_q_head *wqh); ++extern int rt_mutex_futex_trylock(struct 
rt_mutex *l); ++extern int __rt_mutex_futex_trylock(struct rt_mutex *l); ++ ++extern void rt_mutex_futex_unlock(struct rt_mutex *lock); ++extern bool __rt_mutex_futex_unlock(struct rt_mutex *lock, ++ struct wake_q_head *wqh); ++ + extern void rt_mutex_adjust_prio(struct task_struct *task); + + #ifdef CONFIG_DEBUG_RT_MUTEXES +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 7a23792230854..dc877712ef1f3 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -1184,12 +1184,11 @@ struct hstate *size_to_hstate(unsigned long size) + */ + bool page_huge_active(struct page *page) + { +- VM_BUG_ON_PAGE(!PageHuge(page), page); +- return PageHead(page) && PagePrivate(&page[1]); ++ return PageHeadHuge(page) && PagePrivate(&page[1]); + } + + /* never called for tail page */ +-static void set_page_huge_active(struct page *page) ++void set_page_huge_active(struct page *page) + { + VM_BUG_ON_PAGE(!PageHeadHuge(page), page); + SetPagePrivate(&page[1]); +@@ -4544,9 +4543,9 @@ bool isolate_huge_page(struct page *page, struct list_head *list) + { + bool ret = true; + +- VM_BUG_ON_PAGE(!PageHead(page), page); + spin_lock(&hugetlb_lock); +- if (!page_huge_active(page) || !get_page_unless_zero(page)) { ++ if (!PageHeadHuge(page) || !page_huge_active(page) || ++ !get_page_unless_zero(page)) { + ret = false; + goto unlock; + } +diff --git a/net/lapb/lapb_out.c b/net/lapb/lapb_out.c +index ba4d015bd1a67..7cbb77b7479a6 100644 +--- a/net/lapb/lapb_out.c ++++ b/net/lapb/lapb_out.c +@@ -87,7 +87,8 @@ void lapb_kick(struct lapb_cb *lapb) + skb = skb_dequeue(&lapb->write_queue); + + do { +- if ((skbn = skb_clone(skb, GFP_ATOMIC)) == NULL) { ++ skbn = skb_copy(skb, GFP_ATOMIC); ++ if (!skbn) { + skb_queue_head(&lapb->write_queue, skb); + break; + } +diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c +index df2e4e3112177..5d097ae26b70e 100644 +--- a/net/mac80211/driver-ops.c ++++ b/net/mac80211/driver-ops.c +@@ -128,8 +128,11 @@ int drv_sta_state(struct ieee80211_local *local, + } else 
if (old_state == IEEE80211_STA_AUTH && + new_state == IEEE80211_STA_ASSOC) { + ret = drv_sta_add(local, sdata, &sta->sta); +- if (ret == 0) ++ if (ret == 0) { + sta->uploaded = true; ++ if (rcu_access_pointer(sta->sta.rates)) ++ drv_sta_rate_tbl_update(local, sdata, &sta->sta); ++ } + } else if (old_state == IEEE80211_STA_ASSOC && + new_state == IEEE80211_STA_AUTH) { + drv_sta_remove(local, sdata, &sta->sta); +diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c +index a4e2f4e67f941..a4d9e9ee06bee 100644 +--- a/net/mac80211/rate.c ++++ b/net/mac80211/rate.c +@@ -888,7 +888,8 @@ int rate_control_set_rates(struct ieee80211_hw *hw, + if (old) + kfree_rcu(old, rcu_head); + +- drv_sta_rate_tbl_update(hw_to_local(hw), sta->sdata, pubsta); ++ if (sta->uploaded) ++ drv_sta_rate_tbl_update(hw_to_local(hw), sta->sdata, pubsta); + + return 0; + } +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c +index b379c330a3388..5e9ab343c062b 100644 +--- a/net/sched/sch_api.c ++++ b/net/sched/sch_api.c +@@ -391,7 +391,8 @@ struct qdisc_rate_table *qdisc_get_rtab(struct tc_ratespec *r, struct nlattr *ta + { + struct qdisc_rate_table *rtab; + +- if (tab == NULL || r->rate == 0 || r->cell_log == 0 || ++ if (tab == NULL || r->rate == 0 || ++ r->cell_log == 0 || r->cell_log >= 32 || + nla_len(tab) != TC_RTAB_SIZE) + return NULL; + +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 854d2da02cc98..c7061a5dd809a 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -6211,7 +6211,7 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = { + SND_HDA_PIN_QUIRK(0x10ec0299, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE, + ALC225_STANDARD_PINS, + {0x12, 0xb7a60130}, +- {0x13, 0xb8a60140}, ++ {0x13, 0xb8a61140}, + {0x17, 0x90170110}), + {} + };