From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.9 commit in: /
Date: Fri, 20 Mar 2020 11:54:55 +0000 (UTC)
Message-ID: <1584705273.aa82544079131e7dd335b996c29c4a43c6b3545d.mpagano@gentoo>
commit: aa82544079131e7dd335b996c29c4a43c6b3545d
Author: Mike Pagano <mpagano@gentoo.org>
AuthorDate: Fri Mar 20 11:54:33 2020 +0000
Commit: Mike Pagano <mpagano@gentoo.org>
CommitDate: Fri Mar 20 11:54:33 2020 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=aa825440

Linux patch 4.9.217

Signed-off-by: Mike Pagano <mpagano@gentoo.org>

 0000_README              |    4 +
 1216_linux-4.9.217.patch | 3105 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3109 insertions(+)

diff --git a/0000_README b/0000_README
index bb20a18..785a9a7 100644
--- a/0000_README
+++ b/0000_README
@@ -907,6 +907,10 @@ Patch: 1215_linux-4.9.216.patch
From: http://www.kernel.org
Desc: Linux 4.9.216
+Patch: 1216_linux-4.9.217.patch
+From: http://www.kernel.org
+Desc: Linux 4.9.217
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1216_linux-4.9.217.patch b/1216_linux-4.9.217.patch
new file mode 100644
index 0000000..7989681
--- /dev/null
+++ b/1216_linux-4.9.217.patch
@@ -0,0 +1,3105 @@
+diff --git a/Documentation/filesystems/porting b/Documentation/filesystems/porting
+index bdd025ceb763..85ed3450099a 100644
+--- a/Documentation/filesystems/porting
++++ b/Documentation/filesystems/porting
+@@ -596,3 +596,10 @@ in your dentry operations instead.
+ [mandatory]
+ ->rename() has an added flags argument. Any flags not handled by the
+ filesystem should result in EINVAL being returned.
++--
++[mandatory]
++
++ [should've been added in 2016] stale comment in finish_open()
++ nonwithstanding, failure exits in ->atomic_open() instances should
++ *NOT* fput() the file, no matter what. Everything is handled by the
++ caller.
+diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
+index b2d2f4539a3f..e05d65d6fcb6 100644
+--- a/Documentation/kernel-parameters.txt
++++ b/Documentation/kernel-parameters.txt
+@@ -335,6 +335,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+ dynamic table installation which will install SSDT
+ tables to /sys/firmware/acpi/tables/dynamic.
+
++ acpi_no_watchdog [HW,ACPI,WDT]
++ Ignore the ACPI-based watchdog interface (WDAT) and let
++ a native driver control the watchdog device instead.
++
+ acpi_rsdp= [ACPI,EFI,KEXEC]
+ Pass the RSDP address to the kernel, mostly used
+ on machines running EFI runtime service to boot the
+diff --git a/Makefile b/Makefile
+index f0290097784a..96b230200cbe 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 9
+-SUBLEVEL = 216
++SUBLEVEL = 217
+ EXTRAVERSION =
+ NAME = Roaring Lionus
+
+diff --git a/arch/arc/include/asm/linkage.h b/arch/arc/include/asm/linkage.h
+index b29f1a9fd6f7..07c8e1a6c56e 100644
+--- a/arch/arc/include/asm/linkage.h
++++ b/arch/arc/include/asm/linkage.h
+@@ -14,6 +14,8 @@
+ #ifdef __ASSEMBLY__
+
+ #define ASM_NL ` /* use '`' to mark new line in macro */
++#define __ALIGN .align 4
++#define __ALIGN_STR __stringify(__ALIGN)
+
+ /* annotation for data we want in DCCM - if enabled in .config */
+ .macro ARCFP_DATA nm
+diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
+index 890439737374..bf6e45dec017 100644
+--- a/arch/arm/kernel/vdso.c
++++ b/arch/arm/kernel/vdso.c
+@@ -85,6 +85,8 @@ static bool __init cntvct_functional(void)
+ * this.
+ */
+ np = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
++ if (!np)
++ np = of_find_compatible_node(NULL, NULL, "arm,armv8-timer");
+ if (!np)
+ goto out_put;
+
+diff --git a/arch/arm/lib/copy_from_user.S b/arch/arm/lib/copy_from_user.S
+index 6709a8d33963..f1e34f16cfab 100644
+--- a/arch/arm/lib/copy_from_user.S
++++ b/arch/arm/lib/copy_from_user.S
+@@ -100,7 +100,7 @@ ENTRY(arm_copy_from_user)
+
+ ENDPROC(arm_copy_from_user)
+
+- .pushsection .fixup,"ax"
++ .pushsection .text.fixup,"ax"
+ .align 0
+ copy_abort_preamble
+ ldmfd sp!, {r1, r2, r3}
+diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
+index c16c99bc2a10..6bfb9a68134c 100644
+--- a/arch/x86/events/amd/uncore.c
++++ b/arch/x86/events/amd/uncore.c
+@@ -185,20 +185,18 @@ static int amd_uncore_event_init(struct perf_event *event)
+
+ /*
+ * NB and Last level cache counters (MSRs) are shared across all cores
+- * that share the same NB / Last level cache. Interrupts can be directed
+- * to a single target core, however, event counts generated by processes
+- * running on other cores cannot be masked out. So we do not support
+- * sampling and per-thread events.
++ * that share the same NB / Last level cache. On family 16h and below,
++ * Interrupts can be directed to a single target core, however, event
++ * counts generated by processes running on other cores cannot be masked
++ * out. So we do not support sampling and per-thread events via
++ * CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts:
+ */
+- if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
+- return -EINVAL;
+
+ /* NB and Last level cache counters do not have usr/os/guest/host bits */
+ if (event->attr.exclude_user || event->attr.exclude_kernel ||
+ event->attr.exclude_host || event->attr.exclude_guest)
+ return -EINVAL;
+
+- /* and we do not enable counter overflow interrupts */
+ hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
+ hwc->idx = -1;
+
+@@ -275,6 +273,7 @@ static struct pmu amd_nb_pmu = {
+ .start = amd_uncore_start,
+ .stop = amd_uncore_stop,
+ .read = amd_uncore_read,
++ .capabilities = PERF_PMU_CAP_NO_INTERRUPT,
+ };
+
+ static struct pmu amd_llc_pmu = {
+@@ -287,6 +286,7 @@ static struct pmu amd_llc_pmu = {
+ .start = amd_uncore_start,
+ .stop = amd_uncore_stop,
+ .read = amd_uncore_read,
++ .capabilities = PERF_PMU_CAP_NO_INTERRUPT,
+ };
+
+ static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index e9c7090858d6..da3cd734dee1 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -5022,6 +5022,7 @@ int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len)
+ ctxt->fetch.ptr = ctxt->fetch.data;
+ ctxt->fetch.end = ctxt->fetch.data + insn_len;
+ ctxt->opcode_len = 1;
++ ctxt->intercept = x86_intercept_none;
+ if (insn_len > 0)
+ memcpy(ctxt->fetch.data, insn, insn_len);
+ else {
+diff --git a/drivers/acpi/acpi_watchdog.c b/drivers/acpi/acpi_watchdog.c
+index 7ef0a0e105e1..4296f4932294 100644
+--- a/drivers/acpi/acpi_watchdog.c
++++ b/drivers/acpi/acpi_watchdog.c
+@@ -58,12 +58,14 @@ static bool acpi_watchdog_uses_rtc(const struct acpi_table_wdat *wdat)
+ }
+ #endif
+
++static bool acpi_no_watchdog;
++
+ static const struct acpi_table_wdat *acpi_watchdog_get_wdat(void)
+ {
+ const struct acpi_table_wdat *wdat = NULL;
+ acpi_status status;
+
+- if (acpi_disabled)
++ if (acpi_disabled || acpi_no_watchdog)
+ return NULL;
+
+ status = acpi_get_table(ACPI_SIG_WDAT, 0,
+@@ -91,6 +93,14 @@ bool acpi_has_watchdog(void)
+ }
+ EXPORT_SYMBOL_GPL(acpi_has_watchdog);
+
++/* ACPI watchdog can be disabled on boot command line */
++static int __init disable_acpi_watchdog(char *str)
++{
++ acpi_no_watchdog = true;
++ return 1;
++}
++__setup("acpi_no_watchdog", disable_acpi_watchdog);
++
+ void __init acpi_watchdog_init(void)
+ {
+ const struct acpi_wdat_entry *entries;
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 44ef1d66caa6..f287eec36b28 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -215,10 +215,12 @@ static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx,
+ err = __virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
+ if (err) {
+ virtqueue_kick(vblk->vqs[qid].vq);
+- blk_mq_stop_hw_queue(hctx);
++ /* Don't stop the queue if -ENOMEM: we may have failed to
++ * bounce the buffer due to global resource outage.
++ */
++ if (err == -ENOSPC)
++ blk_mq_stop_hw_queue(hctx);
+ spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
+- /* Out of mem doesn't actually happen, since we fall back
+- * to direct descriptors */
+ if (err == -ENOMEM || err == -ENOSPC)
+ return BLK_MQ_RQ_QUEUE_BUSY;
+ return BLK_MQ_RQ_QUEUE_ERROR;
+diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c
+index 3e626fd9bd4e..1c65f5ac4368 100644
+--- a/drivers/firmware/efi/efivars.c
++++ b/drivers/firmware/efi/efivars.c
+@@ -139,13 +139,16 @@ static ssize_t
+ efivar_attr_read(struct efivar_entry *entry, char *buf)
+ {
+ struct efi_variable *var = &entry->var;
++ unsigned long size = sizeof(var->Data);
+ char *str = buf;
++ int ret;
+
+ if (!entry || !buf)
+ return -EINVAL;
+
+- var->DataSize = 1024;
+- if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
++ ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
++ var->DataSize = size;
++ if (ret)
+ return -EIO;
+
+ if (var->Attributes & EFI_VARIABLE_NON_VOLATILE)
+@@ -172,13 +175,16 @@ static ssize_t
+ efivar_size_read(struct efivar_entry *entry, char *buf)
+ {
+ struct efi_variable *var = &entry->var;
++ unsigned long size = sizeof(var->Data);
+ char *str = buf;
++ int ret;
+
+ if (!entry || !buf)
+ return -EINVAL;
+
+- var->DataSize = 1024;
+- if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
++ ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
++ var->DataSize = size;
++ if (ret)
+ return -EIO;
+
+ str += sprintf(str, "0x%lx\n", var->DataSize);
+@@ -189,12 +195,15 @@ static ssize_t
+ efivar_data_read(struct efivar_entry *entry, char *buf)
+ {
+ struct efi_variable *var = &entry->var;
++ unsigned long size = sizeof(var->Data);
++ int ret;
+
+ if (!entry || !buf)
+ return -EINVAL;
+
+- var->DataSize = 1024;
+- if (efivar_entry_get(entry, &var->Attributes, &var->DataSize, var->Data))
++ ret = efivar_entry_get(entry, &var->Attributes, &size, var->Data);
++ var->DataSize = size;
++ if (ret)
+ return -EIO;
+
+ memcpy(buf, var->Data, var->DataSize);
+@@ -263,6 +272,9 @@ efivar_store_raw(struct efivar_entry *entry, const char *buf, size_t count)
+ u8 *data;
+ int err;
+
++ if (!entry || !buf)
++ return -EINVAL;
++
+ if (is_compat()) {
+ struct compat_efi_variable *compat;
+
+@@ -314,14 +326,16 @@ efivar_show_raw(struct efivar_entry *entry, char *buf)
+ {
+ struct efi_variable *var = &entry->var;
+ struct compat_efi_variable *compat;
++ unsigned long datasize = sizeof(var->Data);
+ size_t size;
++ int ret;
+
+ if (!entry || !buf)
+ return 0;
+
+- var->DataSize = 1024;
+- if (efivar_entry_get(entry, &entry->var.Attributes,
+- &entry->var.DataSize, entry->var.Data))
++ ret = efivar_entry_get(entry, &var->Attributes, &datasize, var->Data);
++ var->DataSize = datasize;
++ if (ret)
+ return -EIO;
+
+ if (is_compat()) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+index ac8885562919..0c2ed1254585 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+@@ -363,8 +363,7 @@ bool amdgpu_atombios_get_connector_info_from_object_table(struct amdgpu_device *
+ router.ddc_valid = false;
+ router.cd_valid = false;
+ for (j = 0; j < ((le16_to_cpu(path->usSize) - 8) / 2); j++) {
+- uint8_t grph_obj_type=
+- grph_obj_type =
++ uint8_t grph_obj_type =
+ (le16_to_cpu(path->usGraphicObjIds[j]) &
+ OBJECT_TYPE_MASK) >> OBJECT_TYPE_SHIFT;
+
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 31c087e1746d..197eb75d10ef 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -341,7 +341,8 @@ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ unsigned long **bit, int *max)
+ {
+ if (usage->hid == (HID_UP_CUSTOM | 0x0003) ||
+- usage->hid == (HID_UP_MSVENDOR | 0x0003)) {
++ usage->hid == (HID_UP_MSVENDOR | 0x0003) ||
++ usage->hid == (HID_UP_HPVENDOR2 | 0x0003)) {
+ /* The fn key on Apple USB keyboards */
+ set_bit(EV_REP, hi->input->evbit);
+ hid_map_usage_clear(hi, usage, bit, max, EV_KEY, KEY_FN);
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index 10af8585c820..95052373a828 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -341,6 +341,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ },
+ .driver_data = (void *)&sipodev_desc
+ },
++ {
++ .ident = "Trekstor SURFBOOK E11B",
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TREKSTOR"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SURFBOOK E11B"),
++ },
++ .driver_data = (void *)&sipodev_desc
++ },
+ {
+ .ident = "Direkt-Tek DTLAPY116-2",
+ .matches = {
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index d51734e0c350..977070ce4fe9 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -39,6 +39,7 @@
+ #include <linux/dmi.h>
+ #include <linux/slab.h>
+ #include <linux/iommu.h>
++#include <linux/limits.h>
+ #include <asm/irq_remapping.h>
+ #include <asm/iommu_table.h>
+
+@@ -138,6 +139,13 @@ dmar_alloc_pci_notify_info(struct pci_dev *dev, unsigned long event)
+
+ BUG_ON(dev->is_virtfn);
+
++ /*
++ * Ignore devices that have a domain number higher than what can
++ * be looked up in DMAR, e.g. VMD subdevices with domain 0x10000
++ */
++ if (pci_domain_nr(dev->bus) > U16_MAX)
++ return NULL;
++
+ /* Only generate path[] for device addition event */
+ if (event == BUS_NOTIFY_ADD_DEVICE)
+ for (tmp = dev; tmp; tmp = tmp->bus->self)
+@@ -450,12 +458,13 @@ static int __init dmar_parse_one_andd(struct acpi_dmar_header *header,
+
+ /* Check for NUL termination within the designated length */
+ if (strnlen(andd->device_name, header->length - 8) == header->length - 8) {
+- WARN_TAINT(1, TAINT_FIRMWARE_WORKAROUND,
++ pr_warn(FW_BUG
+ "Your BIOS is broken; ANDD object name is not NUL-terminated\n"
+ "BIOS vendor: %s; Ver: %s; Product Version: %s\n",
+ dmi_get_system_info(DMI_BIOS_VENDOR),
+ dmi_get_system_info(DMI_BIOS_VERSION),
+ dmi_get_system_info(DMI_PRODUCT_VERSION));
++ add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
+ return -EINVAL;
+ }
+ pr_info("ANDD device: %x name: %s\n", andd->device_number,
+@@ -481,14 +490,14 @@ static int dmar_parse_one_rhsa(struct acpi_dmar_header *header, void *arg)
+ return 0;
+ }
+ }
+- WARN_TAINT(
+- 1, TAINT_FIRMWARE_WORKAROUND,
++ pr_warn(FW_BUG
+ "Your BIOS is broken; RHSA refers to non-existent DMAR unit at %llx\n"
+ "BIOS vendor: %s; Ver: %s; Product Version: %s\n",
+- drhd->reg_base_addr,
++ rhsa->base_address,
+ dmi_get_system_info(DMI_BIOS_VENDOR),
+ dmi_get_system_info(DMI_BIOS_VERSION),
+ dmi_get_system_info(DMI_PRODUCT_VERSION));
++ add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
+
+ return 0;
+ }
+@@ -834,14 +843,14 @@ int __init dmar_table_init(void)
+
+ static void warn_invalid_dmar(u64 addr, const char *message)
+ {
+- WARN_TAINT_ONCE(
+- 1, TAINT_FIRMWARE_WORKAROUND,
++ pr_warn_once(FW_BUG
+ "Your BIOS is broken; DMAR reported at address %llx%s!\n"
+ "BIOS vendor: %s; Ver: %s; Product Version: %s\n",
+ addr, message,
+ dmi_get_system_info(DMI_BIOS_VENDOR),
+ dmi_get_system_info(DMI_BIOS_VERSION),
+ dmi_get_system_info(DMI_PRODUCT_VERSION));
++ add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
+ }
+
+ static int __ref
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 5c6e0a9fd2f3..593a4bfcba42 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -4085,10 +4085,11 @@ static void quirk_ioat_snb_local_iommu(struct pci_dev *pdev)
+
+ /* we know that the this iommu should be at offset 0xa000 from vtbar */
+ drhd = dmar_find_matched_drhd_unit(pdev);
+- if (WARN_TAINT_ONCE(!drhd || drhd->reg_base_addr - vtbar != 0xa000,
+- TAINT_FIRMWARE_WORKAROUND,
+- "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n"))
++ if (!drhd || drhd->reg_base_addr - vtbar != 0xa000) {
++ pr_warn_once(FW_BUG "BIOS assigned incorrect VT-d unit for Intel(R) QuickData Technology device\n");
++ add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_STILL_OK);
+ pdev->dev.archdata.iommu = DUMMY_DEVICE_DOMAIN_INFO;
++ }
+ }
+ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IOAT_SNB, quirk_ioat_snb_local_iommu);
+
+@@ -5192,8 +5193,10 @@ static phys_addr_t intel_iommu_iova_to_phys(struct iommu_domain *domain,
+ u64 phys = 0;
+
+ pte = pfn_to_dma_pte(dmar_domain, iova >> VTD_PAGE_SHIFT, &level);
+- if (pte)
+- phys = dma_pte_addr(pte);
++ if (pte && dma_pte_present(pte))
++ phys = dma_pte_addr(pte) +
++ (iova & (BIT_MASK(level_to_offset_bits(level) +
++ VTD_PAGE_SHIFT) - 1));
+
+ return phys;
+ }
+diff --git a/drivers/net/bonding/bond_alb.c b/drivers/net/bonding/bond_alb.c
+index 9834d28d52e8..1f8fbd7776fb 100644
+--- a/drivers/net/bonding/bond_alb.c
++++ b/drivers/net/bonding/bond_alb.c
+@@ -71,11 +71,6 @@ struct arp_pkt {
+ };
+ #pragma pack()
+
+-static inline struct arp_pkt *arp_pkt(const struct sk_buff *skb)
+-{
+- return (struct arp_pkt *)skb_network_header(skb);
+-}
+-
+ /* Forward declaration */
+ static void alb_send_learning_packets(struct slave *slave, u8 mac_addr[],
+ bool strict_match);
+@@ -574,10 +569,11 @@ static void rlb_req_update_subnet_clients(struct bonding *bond, __be32 src_ip)
+ spin_unlock(&bond->mode_lock);
+ }
+
+-static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bond)
++static struct slave *rlb_choose_channel(struct sk_buff *skb,
++ struct bonding *bond,
++ const struct arp_pkt *arp)
+ {
+ struct alb_bond_info *bond_info = &(BOND_ALB_INFO(bond));
+- struct arp_pkt *arp = arp_pkt(skb);
+ struct slave *assigned_slave, *curr_active_slave;
+ struct rlb_client_info *client_info;
+ u32 hash_index = 0;
+@@ -674,8 +670,12 @@ static struct slave *rlb_choose_channel(struct sk_buff *skb, struct bonding *bon
+ */
+ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
+ {
+- struct arp_pkt *arp = arp_pkt(skb);
+ struct slave *tx_slave = NULL;
++ struct arp_pkt *arp;
++
++ if (!pskb_network_may_pull(skb, sizeof(*arp)))
++ return NULL;
++ arp = (struct arp_pkt *)skb_network_header(skb);
+
+ /* Don't modify or load balance ARPs that do not originate locally
+ * (e.g.,arrive via a bridge).
+@@ -685,7 +685,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
+
+ if (arp->op_code == htons(ARPOP_REPLY)) {
+ /* the arp must be sent on the selected rx channel */
+- tx_slave = rlb_choose_channel(skb, bond);
++ tx_slave = rlb_choose_channel(skb, bond, arp);
+ if (tx_slave)
+ ether_addr_copy(arp->mac_src, tx_slave->dev->dev_addr);
+ netdev_dbg(bond->dev, "Server sent ARP Reply packet\n");
+@@ -695,7 +695,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
+ * When the arp reply is received the entry will be updated
+ * with the correct unicast address of the client.
+ */
+- rlb_choose_channel(skb, bond);
++ rlb_choose_channel(skb, bond, arp);
+
+ /* The ARP reply packets must be delayed so that
+ * they can cancel out the influence of the ARP request.
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index fbe3c2c114f9..736e550163e1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -6439,13 +6439,13 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu)
+ return -EINVAL;
+
+ if (netif_running(dev))
+- bnxt_close_nic(bp, false, false);
++ bnxt_close_nic(bp, true, false);
+
+ dev->mtu = new_mtu;
+ bnxt_set_ring_params(bp);
+
+ if (netif_running(dev))
+- return bnxt_open_nic(bp, false, false);
++ return bnxt_open_nic(bp, true, false);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 1b07c6216e2a..8df32398d343 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -2470,15 +2470,15 @@ fec_enet_set_coalesce(struct net_device *ndev, struct ethtool_coalesce *ec)
+ return -EINVAL;
+ }
+
+- cycle = fec_enet_us_to_itr_clock(ndev, fep->rx_time_itr);
++ cycle = fec_enet_us_to_itr_clock(ndev, ec->rx_coalesce_usecs);
+ if (cycle > 0xFFFF) {
+ pr_err("Rx coalesced usec exceed hardware limitation\n");
+ return -EINVAL;
+ }
+
+- cycle = fec_enet_us_to_itr_clock(ndev, fep->tx_time_itr);
++ cycle = fec_enet_us_to_itr_clock(ndev, ec->tx_coalesce_usecs);
+ if (cycle > 0xFFFF) {
+- pr_err("Rx coalesced usec exceed hardware limitation\n");
++ pr_err("Tx coalesced usec exceed hardware limitation\n");
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/ethernet/micrel/ks8851_mll.c b/drivers/net/ethernet/micrel/ks8851_mll.c
+index d94e151cff12..d4747caf1e7c 100644
+--- a/drivers/net/ethernet/micrel/ks8851_mll.c
++++ b/drivers/net/ethernet/micrel/ks8851_mll.c
+@@ -831,14 +831,17 @@ static irqreturn_t ks_irq(int irq, void *pw)
+ {
+ struct net_device *netdev = pw;
+ struct ks_net *ks = netdev_priv(netdev);
++ unsigned long flags;
+ u16 status;
+
++ spin_lock_irqsave(&ks->statelock, flags);
+ /*this should be the first in IRQ handler */
+ ks_save_cmd_reg(ks);
+
+ status = ks_rdreg16(ks, KS_ISR);
+ if (unlikely(!status)) {
+ ks_restore_cmd_reg(ks);
++ spin_unlock_irqrestore(&ks->statelock, flags);
+ return IRQ_NONE;
+ }
+
+@@ -864,6 +867,7 @@ static irqreturn_t ks_irq(int irq, void *pw)
+ ks->netdev->stats.rx_over_errors++;
+ /* this should be the last in IRQ handler*/
+ ks_restore_cmd_reg(ks);
++ spin_unlock_irqrestore(&ks->statelock, flags);
+ return IRQ_HANDLED;
+ }
+
+@@ -933,6 +937,7 @@ static int ks_net_stop(struct net_device *netdev)
+
+ /* shutdown RX/TX QMU */
+ ks_disable_qmu(ks);
++ ks_disable_int(ks);
+
+ /* set powermode to soft power down to save power */
+ ks_set_powermode(ks, PMECR_PM_SOFTDOWN);
+@@ -989,10 +994,9 @@ static netdev_tx_t ks_start_xmit(struct sk_buff *skb, struct net_device *netdev)
+ {
+ netdev_tx_t retv = NETDEV_TX_OK;
+ struct ks_net *ks = netdev_priv(netdev);
++ unsigned long flags;
+
+- disable_irq(netdev->irq);
+- ks_disable_int(ks);
+- spin_lock(&ks->statelock);
++ spin_lock_irqsave(&ks->statelock, flags);
+
+ /* Extra space are required:
+ * 4 byte for alignment, 4 for status/length, 4 for CRC
+@@ -1006,9 +1010,7 @@ static netdev_tx_t ks_start_xmit(struct sk_buff *skb, struct net_device *netdev)
+ dev_kfree_skb(skb);
+ } else
+ retv = NETDEV_TX_BUSY;
+- spin_unlock(&ks->statelock);
+- ks_enable_int(ks);
+- enable_irq(netdev->irq);
++ spin_unlock_irqrestore(&ks->statelock, flags);
+ return retv;
+ }
+
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index c747ab652665..6c0982a39486 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -251,6 +251,7 @@ acct:
+ } else {
+ kfree_skb(skb);
+ }
++ cond_resched();
+ }
+ }
+
+@@ -443,19 +444,21 @@ static int ipvlan_process_outbound(struct sk_buff *skb)
+ struct ethhdr *ethh = eth_hdr(skb);
+ int ret = NET_XMIT_DROP;
+
+- /* In this mode we dont care about multicast and broadcast traffic */
+- if (is_multicast_ether_addr(ethh->h_dest)) {
+- pr_warn_ratelimited("Dropped {multi|broad}cast of type= [%x]\n",
+- ntohs(skb->protocol));
+- kfree_skb(skb);
+- goto out;
+- }
+-
+ /* The ipvlan is a pseudo-L2 device, so the packets that we receive
+ * will have L2; which need to discarded and processed further
+ * in the net-ns of the main-device.
+ */
+ if (skb_mac_header_was_set(skb)) {
++ /* In this mode we dont care about
++ * multicast and broadcast traffic */
++ if (is_multicast_ether_addr(ethh->h_dest)) {
++ pr_debug_ratelimited(
++ "Dropped {multi|broad}cast of type=[%x]\n",
++ ntohs(skb->protocol));
++ kfree_skb(skb);
++ goto out;
++ }
++
+ skb_pull(skb, sizeof(*ethh));
+ skb->mac_header = (typeof(skb->mac_header))~0U;
+ skb_reset_network_header(skb);
+diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
+index 72fb55ca27f3..72f37e546ed2 100644
+--- a/drivers/net/ipvlan/ipvlan_main.c
++++ b/drivers/net/ipvlan/ipvlan_main.c
+@@ -217,7 +217,6 @@ static void ipvlan_uninit(struct net_device *dev)
+ static int ipvlan_open(struct net_device *dev)
+ {
+ struct ipvl_dev *ipvlan = netdev_priv(dev);
+- struct net_device *phy_dev = ipvlan->phy_dev;
+ struct ipvl_addr *addr;
+
+ if (ipvlan->port->mode == IPVLAN_MODE_L3 ||
+@@ -229,7 +228,7 @@ static int ipvlan_open(struct net_device *dev)
+ list_for_each_entry(addr, &ipvlan->addrs, anode)
+ ipvlan_ht_addr_add(ipvlan, addr);
+
+- return dev_uc_add(phy_dev, phy_dev->dev_addr);
++ return 0;
+ }
+
+ static int ipvlan_stop(struct net_device *dev)
+@@ -241,8 +240,6 @@ static int ipvlan_stop(struct net_device *dev)
+ dev_uc_unsync(phy_dev, dev);
+ dev_mc_unsync(phy_dev, dev);
+
+- dev_uc_del(phy_dev, phy_dev->dev_addr);
+-
+ list_for_each_entry(addr, &ipvlan->addrs, anode)
+ ipvlan_ht_addr_del(addr);
+
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index a48ed0873cc7..8c64b06cb98c 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -2871,6 +2871,11 @@ static void macsec_dev_set_rx_mode(struct net_device *dev)
+ dev_uc_sync(real_dev, dev);
+ }
+
++static sci_t dev_to_sci(struct net_device *dev, __be16 port)
++{
++ return make_sci(dev->dev_addr, port);
++}
++
+ static int macsec_set_mac_address(struct net_device *dev, void *p)
+ {
+ struct macsec_dev *macsec = macsec_priv(dev);
+@@ -2892,6 +2897,7 @@ static int macsec_set_mac_address(struct net_device *dev, void *p)
+
+ out:
+ ether_addr_copy(dev->dev_addr, addr->sa_data);
++ macsec->secy.sci = dev_to_sci(dev, MACSEC_PORT_ES);
+ return 0;
+ }
+
+@@ -2976,6 +2982,7 @@ static const struct device_type macsec_type = {
+
+ static const struct nla_policy macsec_rtnl_policy[IFLA_MACSEC_MAX + 1] = {
+ [IFLA_MACSEC_SCI] = { .type = NLA_U64 },
++ [IFLA_MACSEC_PORT] = { .type = NLA_U16 },
+ [IFLA_MACSEC_ICV_LEN] = { .type = NLA_U8 },
+ [IFLA_MACSEC_CIPHER_SUITE] = { .type = NLA_U64 },
+ [IFLA_MACSEC_WINDOW] = { .type = NLA_U32 },
+@@ -3160,11 +3167,6 @@ static bool sci_exists(struct net_device *dev, sci_t sci)
+ return false;
+ }
+
+-static sci_t dev_to_sci(struct net_device *dev, __be16 port)
+-{
+- return make_sci(dev->dev_addr, port);
+-}
+-
+ static int macsec_add_dev(struct net_device *dev, sci_t sci, u8 icv_len)
+ {
+ struct macsec_dev *macsec = macsec_priv(dev);
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index e2b3d3c4d4df..294881621430 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -309,6 +309,8 @@ static void macvlan_process_broadcast(struct work_struct *w)
+ if (src)
+ dev_put(src->dev);
+ kfree_skb(skb);
++
++ cond_resched();
+ }
+ }
+
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 487d0372a444..2f5587306022 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -80,7 +80,7 @@ static LIST_HEAD(phy_fixup_list);
+ static DEFINE_MUTEX(phy_fixup_lock);
+
+ #ifdef CONFIG_PM
+-static bool mdio_bus_phy_may_suspend(struct phy_device *phydev, bool suspend)
++static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
+ {
+ struct device_driver *drv = phydev->mdio.dev.driver;
+ struct phy_driver *phydrv = to_phy_driver(drv);
+@@ -92,11 +92,10 @@ static bool mdio_bus_phy_may_suspend(struct phy_device *phydev, bool suspend)
+ /* PHY not attached? May suspend if the PHY has not already been
+ * suspended as part of a prior call to phy_disconnect() ->
+ * phy_detach() -> phy_suspend() because the parent netdev might be the
+- * MDIO bus driver and clock gated at this point. Also may resume if
+- * PHY is not attached.
++ * MDIO bus driver and clock gated at this point.
+ */
+ if (!netdev)
+- return suspend ? !phydev->suspended : phydev->suspended;
++ goto out;
+
+ /* Don't suspend PHY if the attached netdev parent may wakeup.
+ * The parent may point to a PCI device, as in tg3 driver.
+@@ -111,7 +110,8 @@ static bool mdio_bus_phy_may_suspend(struct phy_device *phydev, bool suspend)
+ if (device_may_wakeup(&netdev->dev))
+ return false;
+
+- return true;
++out:
++ return !phydev->suspended;
+ }
+
+ static int mdio_bus_phy_suspend(struct device *dev)
+@@ -126,9 +126,11 @@ static int mdio_bus_phy_suspend(struct device *dev)
+ if (phydev->attached_dev && phydev->adjust_link)
+ phy_stop_machine(phydev);
+
+- if (!mdio_bus_phy_may_suspend(phydev, true))
++ if (!mdio_bus_phy_may_suspend(phydev))
+ return 0;
+
++ phydev->suspended_by_mdio_bus = true;
++
+ return phy_suspend(phydev);
+ }
+
+@@ -137,9 +139,11 @@ static int mdio_bus_phy_resume(struct device *dev)
+ struct phy_device *phydev = to_phy_device(dev);
+ int ret;
+
+- if (!mdio_bus_phy_may_suspend(phydev, false))
++ if (!phydev->suspended_by_mdio_bus)
+ goto no_resume;
+
++ phydev->suspended_by_mdio_bus = false;
++
+ ret = phy_resume(phydev);
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/net/slip/slhc.c b/drivers/net/slip/slhc.c
+index ddceed3c5a4a..a516470da015 100644
+--- a/drivers/net/slip/slhc.c
++++ b/drivers/net/slip/slhc.c
+@@ -232,7 +232,7 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
+ register struct cstate *cs = lcs->next;
+ register unsigned long deltaS, deltaA;
+ register short changes = 0;
+- int hlen;
++ int nlen, hlen;
+ unsigned char new_seq[16];
+ register unsigned char *cp = new_seq;
+ struct iphdr *ip;
+@@ -248,6 +248,8 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
+ return isize;
+
+ ip = (struct iphdr *) icp;
++ if (ip->version != 4 || ip->ihl < 5)
++ return isize;
+
+ /* Bail if this packet isn't TCP, or is an IP fragment */
+ if (ip->protocol != IPPROTO_TCP || (ntohs(ip->frag_off) & 0x3fff)) {
+@@ -258,10 +260,14 @@ slhc_compress(struct slcompress *comp, unsigned char *icp, int isize,
+ comp->sls_o_tcp++;
+ return isize;
+ }
+- /* Extract TCP header */
++ nlen = ip->ihl * 4;
++ if (isize < nlen + sizeof(*th))
++ return isize;
+
+- th = (struct tcphdr *)(((unsigned char *)ip) + ip->ihl*4);
+- hlen = ip->ihl*4 + th->doff*4;
++ th = (struct tcphdr *)(icp + nlen);
++ if (th->doff < sizeof(struct tcphdr) / 4)
++ return isize;
++ hlen = nlen + th->doff * 4;
+
+ /* Bail if the TCP packet isn't `compressible' (i.e., ACK isn't set or
+ * some other control bit is set). Also uncompressible if
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index fd2573cca803..d0c18e3557f1 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2216,6 +2216,8 @@ team_nl_option_policy[TEAM_ATTR_OPTION_MAX + 1] = {
+ [TEAM_ATTR_OPTION_CHANGED] = { .type = NLA_FLAG },
+ [TEAM_ATTR_OPTION_TYPE] = { .type = NLA_U8 },
+ [TEAM_ATTR_OPTION_DATA] = { .type = NLA_BINARY },
++ [TEAM_ATTR_OPTION_PORT_IFINDEX] = { .type = NLA_U32 },
++ [TEAM_ATTR_OPTION_ARRAY_INDEX] = { .type = NLA_U32 },
+ };
+
+ static int team_nl_cmd_noop(struct sk_buff *skb, struct genl_info *info)
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index ba7cfc089516..6e74965d26a0 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -3423,7 +3423,10 @@ static void r8153_init(struct r8152 *tp)
+ if (ocp_read_word(tp, MCU_TYPE_PLA, PLA_BOOT_CTRL) &
+ AUTOLOAD_DONE)
+ break;
++
+ msleep(20);
++ if (test_bit(RTL8152_UNPLUG, &tp->flags))
++ break;
+ }
+
+ for (i = 0; i < 500; i++) {
+@@ -3447,7 +3450,10 @@ static void r8153_init(struct r8152 *tp)
+ ocp_data = ocp_reg_read(tp, OCP_PHY_STATUS) & PHY_STAT_MASK;
+ if (ocp_data == PHY_STAT_LAN_ON)
+ break;
++
+ msleep(20);
++ if (test_bit(RTL8152_UNPLUG, &tp->flags))
++ break;
+ }
+
+ usb_disable_lpm(tp->udev);
+diff --git a/drivers/net/wireless/marvell/mwifiex/tdls.c b/drivers/net/wireless/marvell/mwifiex/tdls.c
+index df9704de0715..c6fc09d17462 100644
+--- a/drivers/net/wireless/marvell/mwifiex/tdls.c
++++ b/drivers/net/wireless/marvell/mwifiex/tdls.c
+@@ -917,59 +917,117 @@ void mwifiex_process_tdls_action_frame(struct mwifiex_private *priv,
+
+ switch (*pos) {
+ case WLAN_EID_SUPP_RATES:
++ if (pos[1] > 32)
++ return;
+ sta_ptr->tdls_cap.rates_len = pos[1];
+ for (i = 0; i < pos[1]; i++)
+ sta_ptr->tdls_cap.rates[i] = pos[i + 2];
+ break;
+
+ case WLAN_EID_EXT_SUPP_RATES:
++ if (pos[1] > 32)
++ return;
+ basic = sta_ptr->tdls_cap.rates_len;
++ if (pos[1] > 32 - basic)
++ return;
+ for (i = 0; i < pos[1]; i++)
+ sta_ptr->tdls_cap.rates[basic + i] = pos[i + 2];
+ sta_ptr->tdls_cap.rates_len += pos[1];
+ break;
+ case WLAN_EID_HT_CAPABILITY:
+- memcpy((u8 *)&sta_ptr->tdls_cap.ht_capb, pos,
++ if (pos > end - sizeof(struct ieee80211_ht_cap) - 2)
++ return;
++ if (pos[1] != sizeof(struct ieee80211_ht_cap))
++ return;
++ /* copy the ie's value into ht_capb*/
++ memcpy((u8 *)&sta_ptr->tdls_cap.ht_capb, pos + 2,
+ sizeof(struct ieee80211_ht_cap));
+ sta_ptr->is_11n_enabled = 1;
+ break;
+ case WLAN_EID_HT_OPERATION:
+- memcpy(&sta_ptr->tdls_cap.ht_oper, pos,
++ if (pos > end -
++ sizeof(struct ieee80211_ht_operation) - 2)
++ return;
++ if (pos[1] != sizeof(struct ieee80211_ht_operation))
++ return;
++ /* copy the ie's value into ht_oper*/
++ memcpy(&sta_ptr->tdls_cap.ht_oper, pos + 2,
+ sizeof(struct ieee80211_ht_operation));
+ break;
+ case WLAN_EID_BSS_COEX_2040:
++ if (pos > end - 3)
++ return;
++ if (pos[1] != 1)
++ return;
+ sta_ptr->tdls_cap.coex_2040 = pos[2];
+ break;
+ case WLAN_EID_EXT_CAPABILITY:
++ if (pos > end - sizeof(struct ieee_types_header))
++ return;
++ if (pos[1] < sizeof(struct ieee_types_header))
++ return;
++ if (pos[1] > 8)
++ return;
+ memcpy((u8 *)&sta_ptr->tdls_cap.extcap, pos,
+ sizeof(struct ieee_types_header) +
+ min_t(u8, pos[1], 8));
+ break;
+ case WLAN_EID_RSN:
++ if (pos > end - sizeof(struct ieee_types_header))
++ return;
++ if (pos[1] < sizeof(struct ieee_types_header))
++ return;
++ if (pos[1] > IEEE_MAX_IE_SIZE -
++ sizeof(struct ieee_types_header))
++ return;
+ memcpy((u8 *)&sta_ptr->tdls_cap.rsn_ie, pos,
+ sizeof(struct ieee_types_header) +
+ min_t(u8, pos[1], IEEE_MAX_IE_SIZE -
+ sizeof(struct ieee_types_header)));
+ break;
+ case WLAN_EID_QOS_CAPA:
++ if (pos > end - 3)
++ return;
++ if (pos[1] != 1)
++ return;
+ sta_ptr->tdls_cap.qos_info = pos[2];
+ break;
+ case WLAN_EID_VHT_OPERATION:
+- if (priv->adapter->is_hw_11ac_capable)
+- memcpy(&sta_ptr->tdls_cap.vhtoper, pos,
++ if (priv->adapter->is_hw_11ac_capable) {
++ if (pos > end -
++ sizeof(struct ieee80211_vht_operation) - 2)
++ return;
++ if (pos[1] !=
++ sizeof(struct ieee80211_vht_operation))
++ return;
++ /* copy the ie's value into vhtoper*/
++ memcpy(&sta_ptr->tdls_cap.vhtoper, pos + 2,
+ sizeof(struct ieee80211_vht_operation));
++ }
+ break;
+ case WLAN_EID_VHT_CAPABILITY:
+ if (priv->adapter->is_hw_11ac_capable) {
+- memcpy((u8 *)&sta_ptr->tdls_cap.vhtcap, pos,
++ if (pos > end -
++ sizeof(struct ieee80211_vht_cap) - 2)
++ return;
++ if (pos[1] != sizeof(struct ieee80211_vht_cap))
++ return;
++ /* copy the ie's value into vhtcap*/
++ memcpy((u8 *)&sta_ptr->tdls_cap.vhtcap, pos + 2,
+ sizeof(struct ieee80211_vht_cap));
+ sta_ptr->is_11ac_enabled = 1;
+ }
+ break;
+ case WLAN_EID_AID:
+- if (priv->adapter->is_hw_11ac_capable)
++ if (priv->adapter->is_hw_11ac_capable) {
++ if (pos > end - 4)
++ return;
++ if (pos[1] != 2)
++ return;
+ sta_ptr->tdls_cap.aid =
+ le16_to_cpu(*(__le16 *)(pos + 2));
++ }
++ break;
+ default:
+ break;
+ }
+diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
+index d6475dcce9df..0262c8f7e7c7 100644
+--- a/fs/cifs/dir.c
++++ b/fs/cifs/dir.c
+@@ -551,7 +551,6 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry,
+ if (server->ops->close)
+ server->ops->close(xid, tcon, &fid);
+ cifs_del_pending_open(&open);
+- fput(file);
+ rc = -ENOMEM;
+ }
+
+diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
+index bd6202b70447..daad7b04f88c 100644
+--- a/fs/gfs2/inode.c
++++ b/fs/gfs2/inode.c
+@@ -1248,7 +1248,7 @@ static int gfs2_atomic_open(struct inode *dir, struct dentry *dentry,
+ if (!(*opened & FILE_OPENED))
+ return finish_no_open(file, d);
+ dput(d);
+- return 0;
++ return excl && (flags & O_CREAT) ? -EEXIST : 0;
+ }
+
+ BUG_ON(d != NULL);
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 04dd0652bb5c..8de458d64134 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1037,8 +1037,8 @@ static bool jbd2_write_access_granted(handle_t *handle, struct buffer_head *bh,
+ /* For undo access buffer must have data copied */
+ if (undo && !jh->b_committed_data)
+ goto out;
+- if (jh->b_transaction != handle->h_transaction &&
+- jh->b_next_transaction != handle->h_transaction)
++ if (READ_ONCE(jh->b_transaction) != handle->h_transaction &&
++ READ_ONCE(jh->b_next_transaction) != handle->h_transaction)
+ goto out;
+ /*
+ * There are two reasons for the barrier here:
+@@ -2448,8 +2448,8 @@ void __jbd2_journal_refile_buffer(struct journal_head *jh)
+ * our jh reference and thus __jbd2_journal_file_buffer() must not
+ * take a new one.
+ */
+- jh->b_transaction = jh->b_next_transaction;
+- jh->b_next_transaction = NULL;
++ WRITE_ONCE(jh->b_transaction, jh->b_next_transaction);
++ WRITE_ONCE(jh->b_next_transaction, NULL);
+ if (buffer_freed(bh))
+ jlist = BJ_Forget;
+ else if (jh->b_modified)
+diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
+index c2665d920cf8..2517fcd423b6 100644
+--- a/fs/nfs/dir.c
++++ b/fs/nfs/dir.c
+@@ -678,8 +678,6 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page,
+ goto out_label_free;
+ }
+
+- array = kmap(page);
+-
+ status = nfs_readdir_alloc_pages(pages, array_size);
+ if (status < 0)
+ goto out_release_array;
+diff --git a/fs/open.c b/fs/open.c
+index 8db6e3a5fc10..e17cc79bd88a 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -824,9 +824,6 @@ cleanup_file:
+ * the return value of d_splice_alias(), then the caller needs to perform dput()
+ * on it after finish_open().
+ *
+- * On successful return @file is a fully instantiated open file. After this, if
+- * an error occurs in ->atomic_open(), it needs to clean up with fput().
+- *
+ * Returns zero on success or -errno if the open failed.
+ */
+ int finish_open(struct file *file, struct dentry *dentry,
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index 867110c9d707..8eafced47540 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -333,6 +333,7 @@ struct phy_c45_device_ids {
+ * is_pseudo_fixed_link: Set to true if this phy is an Ethernet switch, etc.
+ * has_fixups: Set to true if this phy has fixups/quirks.
+ * suspended: Set to true if this phy has been suspended successfully.
++ * suspended_by_mdio_bus: Set to true if this phy was suspended by MDIO bus.
+ * state: state of the PHY for management purposes
+ * dev_flags: Device-specific flags used by the PHY driver.
+ * link_timeout: The number of timer firings to wait before the
+@@ -369,6 +370,7 @@ struct phy_device {
+ bool is_pseudo_fixed_link;
+ bool has_fixups;
+ bool suspended;
++ bool suspended_by_mdio_bus;
+
+ enum phy_state state;
+
+diff --git a/include/net/fib_rules.h b/include/net/fib_rules.h
+index 456e4a6006ab..0b0ad792dd5c 100644
+--- a/include/net/fib_rules.h
++++ b/include/net/fib_rules.h
+@@ -87,6 +87,7 @@ struct fib_rules_ops {
+ [FRA_OIFNAME] = { .type = NLA_STRING, .len = IFNAMSIZ - 1 }, \
+ [FRA_PRIORITY] = { .type = NLA_U32 }, \
+ [FRA_FWMARK] = { .type = NLA_U32 }, \
++ [FRA_TUN_ID] = { .type = NLA_U64 }, \
+ [FRA_FWMASK] = { .type = NLA_U32 }, \
+ [FRA_TABLE] = { .type = NLA_U32 }, \
+ [FRA_SUPPRESS_PREFIXLEN] = { .type = NLA_U32 }, \
+diff --git a/kernel/cgroup.c b/kernel/cgroup.c
+index bb0cf1caf1cd..2d7a4fc42a88 100644
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -6335,6 +6335,10 @@ void cgroup_sk_alloc(struct sock_cgroup_data *skcd)
+ return;
+ }
+
++ /* Don't associate the sock with unrelated interrupted task's cgroup. */
++ if (in_interrupt())
++ return;
++
+ rcu_read_lock();
+
+ while (true) {
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 57fadbe69c2e..d90ccbeb909d 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -373,27 +373,32 @@ __sigqueue_alloc(int sig, struct task_struct *t, gfp_t flags, int override_rlimi
+ {
+ struct sigqueue *q = NULL;
+ struct user_struct *user;
++ int sigpending;
+
+ /*
+ * Protect access to @t credentials. This can go away when all
+ * callers hold rcu read lock.
++ *
++ * NOTE! A pending signal will hold on to the user refcount,
++ * and we get/put the refcount only when the sigpending count
++ * changes from/to zero.
+ */
+ rcu_read_lock();
+- user = get_uid(__task_cred(t)->user);
+- atomic_inc(&user->sigpending);
++ user = __task_cred(t)->user;
++ sigpending = atomic_inc_return(&user->sigpending);
++ if (sigpending == 1)
++ get_uid(user);
+ rcu_read_unlock();
+
+- if (override_rlimit ||
+- atomic_read(&user->sigpending) <=
+- task_rlimit(t, RLIMIT_SIGPENDING)) {
++ if (override_rlimit || likely(sigpending <= task_rlimit(t, RLIMIT_SIGPENDING))) {
+ q = kmem_cache_alloc(sigqueue_cachep, flags);
+ } else {
+ print_dropped_signal(sig);
+ }
+
+ if (unlikely(q == NULL)) {
+- atomic_dec(&user->sigpending);
+- free_uid(user);
++ if (atomic_dec_and_test(&user->sigpending))
++ free_uid(user);
+ } else {
+ INIT_LIST_HEAD(&q->list);
+ q->flags = 0;
+@@ -407,8 +412,8 @@ static void __sigqueue_free(struct sigqueue *q)
+ {
+ if (q->flags & SIGQUEUE_PREALLOC)
+ return;
+- atomic_dec(&q->user->sigpending);
+- free_uid(q->user);
++ if (atomic_dec_and_test(&q->user->sigpending))
++ free_uid(q->user);
+ kmem_cache_free(sigqueue_cachep, q);
+ }
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 7d970b565c4d..00c295d3104b 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1384,14 +1384,16 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
+ WARN_ON_ONCE(!is_chained_work(wq)))
+ return;
+ retry:
+- if (req_cpu == WORK_CPU_UNBOUND)
+- cpu = wq_select_unbound_cpu(raw_smp_processor_id());
+-
+ /* pwq which will be used unless @work is executing elsewhere */
+- if (!(wq->flags & WQ_UNBOUND))
+- pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
+- else
++ if (wq->flags & WQ_UNBOUND) {
++ if (req_cpu == WORK_CPU_UNBOUND)
++ cpu = wq_select_unbound_cpu(raw_smp_processor_id());
+ pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));
++ } else {
++ if (req_cpu == WORK_CPU_UNBOUND)
++ cpu = raw_smp_processor_id();
++ pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);
++ }
+
+ /*
+ * If @work was previously on a different pool, it might still be
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 0f8422239dea..b85a1c040bc9 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5726,6 +5726,10 @@ void mem_cgroup_sk_alloc(struct sock *sk)
+ return;
+ }
+
++ /* Do not associate the sock with unrelated interrupted task's memcg. */
++ if (in_interrupt())
++ return;
++
+ rcu_read_lock();
+ memcg = mem_cgroup_from_task(current);
+ if (memcg == root_mem_cgroup)
+diff --git a/mm/slub.c b/mm/slub.c
+index fa6d62d559eb..4a5b2a0f9360 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -3114,6 +3114,15 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
+ void *object = c->freelist;
+
+ if (unlikely(!object)) {
++ /*
++ * We may have removed an object from c->freelist using
++ * the fastpath in the previous iteration; in that case,
++ * c->tid has not been bumped yet.
++ * Since ___slab_alloc() may reenable interrupts while
++ * allocating memory, we should bump c->tid now.
++ */
++ c->tid = next_tid(c->tid);
++
+ /*
+ * Invoking slow path likely have side-effect
+ * of re-populating per CPU c->freelist
+diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
+index 780700fcbe63..2b663622bdb4 100644
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -34,6 +34,7 @@
+ #include <linux/kref.h>
+ #include <linux/list.h>
+ #include <linux/lockdep.h>
++#include <linux/mutex.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/pkt_sched.h>
+@@ -149,7 +150,7 @@ static void batadv_iv_ogm_orig_free(struct batadv_orig_node *orig_node)
+ * Return: 0 on success, a negative error code otherwise.
+ */
+ static int batadv_iv_ogm_orig_add_if(struct batadv_orig_node *orig_node,
+- int max_if_num)
++ unsigned int max_if_num)
+ {
+ void *data_ptr;
+ size_t old_size;
+@@ -193,7 +194,8 @@ unlock:
+ */
+ static void
+ batadv_iv_ogm_drop_bcast_own_entry(struct batadv_orig_node *orig_node,
+- int max_if_num, int del_if_num)
++ unsigned int max_if_num,
++ unsigned int del_if_num)
+ {
+ size_t chunk_size;
+ size_t if_offset;
+@@ -231,7 +233,8 @@ batadv_iv_ogm_drop_bcast_own_entry(struct batadv_orig_node *orig_node,
+ */
+ static void
+ batadv_iv_ogm_drop_bcast_own_sum_entry(struct batadv_orig_node *orig_node,
+- int max_if_num, int del_if_num)
++ unsigned int max_if_num,
++ unsigned int del_if_num)
+ {
+ size_t if_offset;
+ void *data_ptr;
+@@ -268,7 +271,8 @@ batadv_iv_ogm_drop_bcast_own_sum_entry(struct batadv_orig_node *orig_node,
+ * Return: 0 on success, a negative error code otherwise.
+ */
+ static int batadv_iv_ogm_orig_del_if(struct batadv_orig_node *orig_node,
+- int max_if_num, int del_if_num)
++ unsigned int max_if_num,
++ unsigned int del_if_num)
+ {
+ spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock);
+
+@@ -302,7 +306,8 @@ static struct batadv_orig_node *
+ batadv_iv_ogm_orig_get(struct batadv_priv *bat_priv, const u8 *addr)
+ {
+ struct batadv_orig_node *orig_node;
+- int size, hash_added;
++ int hash_added;
++ size_t size;
+
+ orig_node = batadv_orig_hash_find(bat_priv, addr);
+ if (orig_node)
+@@ -366,14 +371,18 @@ static int batadv_iv_ogm_iface_enable(struct batadv_hard_iface *hard_iface)
+ unsigned char *ogm_buff;
+ u32 random_seqno;
+
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
+ /* randomize initial seqno to avoid collision */
+ get_random_bytes(&random_seqno, sizeof(random_seqno));
+ atomic_set(&hard_iface->bat_iv.ogm_seqno, random_seqno);
+
+ hard_iface->bat_iv.ogm_buff_len = BATADV_OGM_HLEN;
+ ogm_buff = kmalloc(hard_iface->bat_iv.ogm_buff_len, GFP_ATOMIC);
+- if (!ogm_buff)
++ if (!ogm_buff) {
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ return -ENOMEM;
++ }
+
+ hard_iface->bat_iv.ogm_buff = ogm_buff;
+
+@@ -385,35 +394,59 @@ static int batadv_iv_ogm_iface_enable(struct batadv_hard_iface *hard_iface)
+ batadv_ogm_packet->reserved = 0;
+ batadv_ogm_packet->tq = BATADV_TQ_MAX_VALUE;
+
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
++
+ return 0;
+ }
+
+ static void batadv_iv_ogm_iface_disable(struct batadv_hard_iface *hard_iface)
+ {
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
+ kfree(hard_iface->bat_iv.ogm_buff);
+ hard_iface->bat_iv.ogm_buff = NULL;
++
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ }
+
+ static void batadv_iv_ogm_iface_update_mac(struct batadv_hard_iface *hard_iface)
+ {
+ struct batadv_ogm_packet *batadv_ogm_packet;
+- unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff;
++ void *ogm_buff;
+
+- batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
++ ogm_buff = hard_iface->bat_iv.ogm_buff;
++ if (!ogm_buff)
++ goto unlock;
++
++ batadv_ogm_packet = ogm_buff;
+ ether_addr_copy(batadv_ogm_packet->orig,
+ hard_iface->net_dev->dev_addr);
+ ether_addr_copy(batadv_ogm_packet->prev_sender,
+ hard_iface->net_dev->dev_addr);
++
++unlock:
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ }
+
+ static void
+ batadv_iv_ogm_primary_iface_set(struct batadv_hard_iface *hard_iface)
+ {
+ struct batadv_ogm_packet *batadv_ogm_packet;
+- unsigned char *ogm_buff = hard_iface->bat_iv.ogm_buff;
++ void *ogm_buff;
+
+- batadv_ogm_packet = (struct batadv_ogm_packet *)ogm_buff;
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++
++ ogm_buff = hard_iface->bat_iv.ogm_buff;
++ if (!ogm_buff)
++ goto unlock;
++
++ batadv_ogm_packet = ogm_buff;
+ batadv_ogm_packet->ttl = BATADV_TTL;
++
++unlock:
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
+ }
+
+ /* when do we schedule our own ogm to be sent */
+@@ -898,7 +931,7 @@ batadv_iv_ogm_slide_own_bcast_window(struct batadv_hard_iface *hard_iface)
+ u32 i;
+ size_t word_index;
+ u8 *w;
+- int if_num;
++ unsigned int if_num;
+
+ for (i = 0; i < hash->size; i++) {
+ head = &hash->table[i];
+@@ -919,7 +952,11 @@ batadv_iv_ogm_slide_own_bcast_window(struct batadv_hard_iface *hard_iface)
+ }
+ }
+
+-static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
++/**
++ * batadv_iv_ogm_schedule_buff() - schedule submission of hardif ogm buffer
++ * @hard_iface: interface whose ogm buffer should be transmitted
++ */
++static void batadv_iv_ogm_schedule_buff(struct batadv_hard_iface *hard_iface)
+ {
+ struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
+ unsigned char **ogm_buff = &hard_iface->bat_iv.ogm_buff;
+@@ -930,8 +967,10 @@ static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
+ u16 tvlv_len = 0;
+ unsigned long send_time;
+
+- if ((hard_iface->if_status == BATADV_IF_NOT_IN_USE) ||
+- (hard_iface->if_status == BATADV_IF_TO_BE_REMOVED))
++ lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex);
++
++ /* interface already disabled by batadv_iv_ogm_iface_disable */
++ if (!*ogm_buff)
+ return;
+
+ /* the interface gets activated here to avoid race conditions between
+@@ -1000,6 +1039,17 @@ out:
+ batadv_hardif_put(primary_if);
+ }
+
++static void batadv_iv_ogm_schedule(struct batadv_hard_iface *hard_iface)
++{
++ if (hard_iface->if_status == BATADV_IF_NOT_IN_USE ||
++ hard_iface->if_status == BATADV_IF_TO_BE_REMOVED)
++ return;
++
++ mutex_lock(&hard_iface->bat_iv.ogm_buff_mutex);
++ batadv_iv_ogm_schedule_buff(hard_iface);
++ mutex_unlock(&hard_iface->bat_iv.ogm_buff_mutex);
++}
++
+ /**
+ * batadv_iv_ogm_orig_update - use OGM to update corresponding data in an
+ * originator
+@@ -1028,7 +1078,7 @@ batadv_iv_ogm_orig_update(struct batadv_priv *bat_priv,
+ struct batadv_neigh_node *tmp_neigh_node = NULL;
+ struct batadv_neigh_node *router = NULL;
+ struct batadv_orig_node *orig_node_tmp;
+- int if_num;
++ unsigned int if_num;
+ u8 sum_orig, sum_neigh;
+ u8 *neigh_addr;
+ u8 tq_avg;
+@@ -1186,7 +1236,7 @@ static bool batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node,
+ u8 total_count;
+ u8 orig_eq_count, neigh_rq_count, neigh_rq_inv, tq_own;
+ unsigned int neigh_rq_inv_cube, neigh_rq_max_cube;
+- int if_num;
++ unsigned int if_num;
+ unsigned int tq_asym_penalty, inv_asym_penalty;
+ unsigned int combined_tq;
+ unsigned int tq_iface_penalty;
+@@ -1227,7 +1277,7 @@ static bool batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node,
+ orig_node->last_seen = jiffies;
+
+ /* find packet count of corresponding one hop neighbor */
+- spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock);
++ spin_lock_bh(&orig_neigh_node->bat_iv.ogm_cnt_lock);
+ if_num = if_incoming->if_num;
+ orig_eq_count = orig_neigh_node->bat_iv.bcast_own_sum[if_num];
+ neigh_ifinfo = batadv_neigh_ifinfo_new(neigh_node, if_outgoing);
+@@ -1237,7 +1287,7 @@ static bool batadv_iv_ogm_calc_tq(struct batadv_orig_node *orig_node,
+ } else {
+ neigh_rq_count = 0;
+ }
+- spin_unlock_bh(&orig_node->bat_iv.ogm_cnt_lock);
++ spin_unlock_bh(&orig_neigh_node->bat_iv.ogm_cnt_lock);
+
+ /* pay attention to not get a value bigger than 100 % */
+ if (orig_eq_count > neigh_rq_count)
+@@ -1705,9 +1755,9 @@ static void batadv_iv_ogm_process(const struct sk_buff *skb, int ogm_offset,
+
+ if (is_my_orig) {
+ unsigned long *word;
+- int offset;
++ size_t offset;
+ s32 bit_pos;
+- s16 if_num;
++ unsigned int if_num;
+ u8 *weight;
+
+ orig_neigh_node = batadv_iv_ogm_orig_get(bat_priv,
+@@ -2473,12 +2523,22 @@ batadv_iv_ogm_neigh_is_sob(struct batadv_neigh_node *neigh1,
+ return ret;
+ }
+
+-static void batadv_iv_iface_activate(struct batadv_hard_iface *hard_iface)
++static void batadv_iv_iface_enabled(struct batadv_hard_iface *hard_iface)
+ {
+ /* begin scheduling originator messages on that interface */
+ batadv_iv_ogm_schedule(hard_iface);
+ }
+
++/**
++ * batadv_iv_init_sel_class - initialize GW selection class
++ * @bat_priv: the bat priv with all the soft interface information
++ */
++static void batadv_iv_init_sel_class(struct batadv_priv *bat_priv)
++{
++ /* set default TQ difference threshold to 20 */
++ atomic_set(&bat_priv->gw.sel_class, 20);
++}
++
+ static struct batadv_gw_node *
+ batadv_iv_gw_get_best_gw_node(struct batadv_priv *bat_priv)
+ {
+@@ -2803,8 +2863,8 @@ unlock:
+ static struct batadv_algo_ops batadv_batman_iv __read_mostly = {
+ .name = "BATMAN_IV",
+ .iface = {
+- .activate = batadv_iv_iface_activate,
+ .enable = batadv_iv_ogm_iface_enable,
++ .enabled = batadv_iv_iface_enabled,
+ .disable = batadv_iv_ogm_iface_disable,
+ .update_mac = batadv_iv_ogm_iface_update_mac,
+ .primary_set = batadv_iv_ogm_primary_iface_set,
+@@ -2827,6 +2887,7 @@ static struct batadv_algo_ops batadv_batman_iv __read_mostly = {
+ .del_if = batadv_iv_ogm_orig_del_if,
+ },
+ .gw = {
++ .init_sel_class = batadv_iv_init_sel_class,
+ .get_best_gw_node = batadv_iv_gw_get_best_gw_node,
+ .is_eligible = batadv_iv_gw_is_eligible,
+ #ifdef CONFIG_BATMAN_ADV_DEBUGFS
+diff --git a/net/batman-adv/bat_v.c b/net/batman-adv/bat_v.c
+index 4348118e7eac..18fa602e5fc6 100644
+--- a/net/batman-adv/bat_v.c
++++ b/net/batman-adv/bat_v.c
+@@ -19,7 +19,6 @@
+ #include "main.h"
+
+ #include <linux/atomic.h>
+-#include <linux/bug.h>
+ #include <linux/cache.h>
+ #include <linux/errno.h>
+ #include <linux/if_ether.h>
+@@ -623,11 +622,11 @@ static int batadv_v_neigh_cmp(struct batadv_neigh_node *neigh1,
+ int ret = 0;
+
+ ifinfo1 = batadv_neigh_ifinfo_get(neigh1, if_outgoing1);
+- if (WARN_ON(!ifinfo1))
++ if (!ifinfo1)
+ goto err_ifinfo1;
+
+ ifinfo2 = batadv_neigh_ifinfo_get(neigh2, if_outgoing2);
+- if (WARN_ON(!ifinfo2))
++ if (!ifinfo2)
+ goto err_ifinfo2;
+
+ ret = ifinfo1->bat_v.throughput - ifinfo2->bat_v.throughput;
+@@ -649,11 +648,11 @@ static bool batadv_v_neigh_is_sob(struct batadv_neigh_node *neigh1,
+ bool ret = false;
+
+ ifinfo1 = batadv_neigh_ifinfo_get(neigh1, if_outgoing1);
+- if (WARN_ON(!ifinfo1))
++ if (!ifinfo1)
+ goto err_ifinfo1;
+
+ ifinfo2 = batadv_neigh_ifinfo_get(neigh2, if_outgoing2);
+- if (WARN_ON(!ifinfo2))
++ if (!ifinfo2)
+ goto err_ifinfo2;
+
+ threshold = ifinfo1->bat_v.throughput / 4;
+@@ -668,6 +667,16 @@ err_ifinfo1:
+ return ret;
+ }
+
++/**
++ * batadv_v_init_sel_class - initialize GW selection class
++ * @bat_priv: the bat priv with all the soft interface information
++ */
++static void batadv_v_init_sel_class(struct batadv_priv *bat_priv)
++{
++ /* set default throughput difference threshold to 5Mbps */
++ atomic_set(&bat_priv->gw.sel_class, 50);
++}
++
+ static ssize_t batadv_v_store_sel_class(struct batadv_priv *bat_priv,
+ char *buff, size_t count)
+ {
+@@ -805,7 +814,7 @@ static bool batadv_v_gw_is_eligible(struct batadv_priv *bat_priv,
+ }
+
+ orig_gw = batadv_gw_node_get(bat_priv, orig_node);
+- if (!orig_node)
++ if (!orig_gw)
+ goto out;
+
+ if (batadv_v_gw_throughput_get(orig_gw, &orig_throughput) < 0)
+@@ -1054,6 +1063,7 @@ static struct batadv_algo_ops batadv_batman_v __read_mostly = {
+ .dump = batadv_v_orig_dump,
+ },
+ .gw = {
++ .init_sel_class = batadv_v_init_sel_class,
+ .store_sel_class = batadv_v_store_sel_class,
+ .show_sel_class = batadv_v_show_sel_class,
+ .get_best_gw_node = batadv_v_gw_get_best_gw_node,
+@@ -1094,9 +1104,6 @@ int batadv_v_mesh_init(struct batadv_priv *bat_priv)
+ if (ret < 0)
+ return ret;
+
+- /* set default throughput difference threshold to 5Mbps */
+- atomic_set(&bat_priv->gw.sel_class, 50);
+-
+ return 0;
+ }
+
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 5d79004de25c..62df763b2aae 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -19,6 +19,7 @@
+ #include "main.h"
+
+ #include <linux/atomic.h>
++#include <linux/bitops.h>
+ #include <linux/byteorder/generic.h>
+ #include <linux/errno.h>
+ #include <linux/etherdevice.h>
+@@ -29,6 +30,7 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/netdevice.h>
++#include <linux/nl80211.h>
+ #include <linux/random.h>
+ #include <linux/rculist.h>
+ #include <linux/rcupdate.h>
+@@ -100,8 +102,12 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ */
+ return 0;
+ }
+- if (!ret)
+- return sinfo.expected_throughput / 100;
++ if (ret)
++ goto default_throughput;
++ if (!(sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)))
++ goto default_throughput;
++
++ return sinfo.expected_throughput / 100;
+ }
+
+ /* unsupported WiFi driver version */
+@@ -185,6 +191,7 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
+ struct sk_buff *skb;
+ int probe_len, i;
+ int elp_skb_len;
++ void *tmp;
+
+ /* this probing routine is for Wifi neighbours only */
+ if (!batadv_is_wifi_netdev(hard_iface->net_dev))
+@@ -216,7 +223,8 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
+ * the packet to be exactly of that size to make the link
+ * throughput estimation effective.
+ */
+- skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);
++ tmp = skb_put(skb, probe_len - hard_iface->bat_v.elp_skb->len);
++ memset(tmp, 0, probe_len - hard_iface->bat_v.elp_skb->len);
+
+ batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ "Sending unicast (probe) ELP packet on interface %s to %pM\n",
+@@ -327,21 +335,23 @@ out:
+ */
+ int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface)
+ {
++ static const size_t tvlv_padding = sizeof(__be32);
+ struct batadv_elp_packet *elp_packet;
+ unsigned char *elp_buff;
+ u32 random_seqno;
+ size_t size;
+ int res = -ENOMEM;
+
+- size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN;
++ size = ETH_HLEN + NET_IP_ALIGN + BATADV_ELP_HLEN + tvlv_padding;
+ hard_iface->bat_v.elp_skb = dev_alloc_skb(size);
+ if (!hard_iface->bat_v.elp_skb)
+ goto out;
+
+ skb_reserve(hard_iface->bat_v.elp_skb, ETH_HLEN + NET_IP_ALIGN);
+- elp_buff = skb_put(hard_iface->bat_v.elp_skb, BATADV_ELP_HLEN);
++ elp_buff = skb_put(hard_iface->bat_v.elp_skb,
++ BATADV_ELP_HLEN + tvlv_padding);
+ elp_packet = (struct batadv_elp_packet *)elp_buff;
+- memset(elp_packet, 0, BATADV_ELP_HLEN);
++ memset(elp_packet, 0, BATADV_ELP_HLEN + tvlv_padding);
+
+ elp_packet->packet_type = BATADV_ELP;
+ elp_packet->version = BATADV_COMPAT_VERSION;
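
The ELP hunks above fix two related problems: the probe padding appended with skb_put() was never cleared (skb_put() only advances the tail pointer, it does not zero the new bytes), and the ELP buffer is now sized and zeroed up front to leave room for TVLV padding. A minimal user-space sketch of the same grow-then-zero idiom, where buf_put() is a hypothetical stand-in for skb_put():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct buf {
	unsigned char *data;
	size_t len;
	size_t cap;
};

/* Append n bytes and return a pointer to the new tail, like skb_put().
 * The appended bytes are NOT initialized. */
static unsigned char *buf_put(struct buf *b, size_t n)
{
	unsigned char *tail;

	if (b->len + n > b->cap)
		return NULL;	/* caller must have reserved enough room */

	tail = b->data + b->len;
	b->len += n;
	return tail;
}

int main(void)
{
	struct buf b = { .len = 14, .cap = 128 };
	size_t pad = 100 - b.len;	/* grow the frame to probe size */
	unsigned char *tmp;

	b.data = malloc(b.cap);		/* fresh memory: indeterminate bytes */
	if (!b.data)
		return 1;

	tmp = buf_put(&b, pad);
	if (tmp)
		memset(tmp, 0, pad);	/* the step the patch adds */

	printf("frame is now %zu bytes, padding zeroed\n", b.len);
	free(b.data);
	return 0;
}
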
+diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c
+index f435435b447e..b0cae59bd327 100644
+--- a/net/batman-adv/bat_v_ogm.c
++++ b/net/batman-adv/bat_v_ogm.c
+@@ -28,6 +28,8 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/list.h>
++#include <linux/lockdep.h>
++#include <linux/mutex.h>
+ #include <linux/netdevice.h>
+ #include <linux/random.h>
+ #include <linux/rculist.h>
+@@ -127,22 +129,19 @@ static void batadv_v_ogm_send_to_if(struct sk_buff *skb,
+ }
+
+ /**
+- * batadv_v_ogm_send - periodic worker broadcasting the own OGM
+- * @work: work queue item
++ * batadv_v_ogm_send_softif() - periodic worker broadcasting the own OGM
++ * @bat_priv: the bat priv with all the soft interface information
+ */
+-static void batadv_v_ogm_send(struct work_struct *work)
++static void batadv_v_ogm_send_softif(struct batadv_priv *bat_priv)
+ {
+ struct batadv_hard_iface *hard_iface;
+- struct batadv_priv_bat_v *bat_v;
+- struct batadv_priv *bat_priv;
+ struct batadv_ogm2_packet *ogm_packet;
+ struct sk_buff *skb, *skb_tmp;
+ unsigned char *ogm_buff, *pkt_buff;
+ int ogm_buff_len;
+ u16 tvlv_len = 0;
+
+- bat_v = container_of(work, struct batadv_priv_bat_v, ogm_wq.work);
+- bat_priv = container_of(bat_v, struct batadv_priv, bat_v);
++ lockdep_assert_held(&bat_priv->bat_v.ogm_buff_mutex);
+
+ if (atomic_read(&bat_priv->mesh_state) == BATADV_MESH_DEACTIVATING)
+ goto out;
+@@ -209,6 +208,23 @@ out:
+ return;
+ }
+
++/**
++ * batadv_v_ogm_send() - periodic worker broadcasting the own OGM
++ * @work: work queue item
++ */
++static void batadv_v_ogm_send(struct work_struct *work)
++{
++ struct batadv_priv_bat_v *bat_v;
++ struct batadv_priv *bat_priv;
++
++ bat_v = container_of(work, struct batadv_priv_bat_v, ogm_wq.work);
++ bat_priv = container_of(bat_v, struct batadv_priv, bat_v);
++
++ mutex_lock(&bat_priv->bat_v.ogm_buff_mutex);
++ batadv_v_ogm_send_softif(bat_priv);
++ mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex);
++}
++
+ /**
+ * batadv_v_ogm_iface_enable - prepare an interface for B.A.T.M.A.N. V
+ * @hard_iface: the interface to prepare
+@@ -235,11 +251,15 @@ void batadv_v_ogm_primary_iface_set(struct batadv_hard_iface *primary_iface)
+ struct batadv_priv *bat_priv = netdev_priv(primary_iface->soft_iface);
+ struct batadv_ogm2_packet *ogm_packet;
+
++ mutex_lock(&bat_priv->bat_v.ogm_buff_mutex);
+ if (!bat_priv->bat_v.ogm_buff)
+- return;
++ goto unlock;
+
+ ogm_packet = (struct batadv_ogm2_packet *)bat_priv->bat_v.ogm_buff;
+ ether_addr_copy(ogm_packet->orig, primary_iface->net_dev->dev_addr);
++
++unlock:
++ mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex);
+ }
+
+ /**
+@@ -827,6 +847,8 @@ int batadv_v_ogm_init(struct batadv_priv *bat_priv)
+ atomic_set(&bat_priv->bat_v.ogm_seqno, random_seqno);
+ INIT_DELAYED_WORK(&bat_priv->bat_v.ogm_wq, batadv_v_ogm_send);
+
++ mutex_init(&bat_priv->bat_v.ogm_buff_mutex);
++
+ return 0;
+ }
+
+@@ -838,7 +860,11 @@ void batadv_v_ogm_free(struct batadv_priv *bat_priv)
+ {
+ cancel_delayed_work_sync(&bat_priv->bat_v.ogm_wq);
+
++ mutex_lock(&bat_priv->bat_v.ogm_buff_mutex);
++
+ kfree(bat_priv->bat_v.ogm_buff);
+ bat_priv->bat_v.ogm_buff = NULL;
+ bat_priv->bat_v.ogm_buff_len = 0;
++
++ mutex_unlock(&bat_priv->bat_v.ogm_buff_mutex);
+ }
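
The OGM rework above serializes every access to ogm_buff behind a new ogm_buff_mutex, splitting the worker into a thin wrapper that takes the lock and a helper that asserts it is held. A user-space sketch of that split, using pthreads in place of the kernel mutex API and invented names throughout:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct priv {
	pthread_mutex_t ogm_buff_mutex;
	unsigned char *ogm_buff;
	int ogm_buff_len;
};

/* Must be called with ogm_buff_mutex held (the kernel version enforces
 * this with lockdep_assert_held()). */
static void ogm_send_locked(struct priv *p)
{
	if (!p->ogm_buff)
		return;
	printf("sending %d byte OGM\n", p->ogm_buff_len);
}

/* Entry point analogous to batadv_v_ogm_send(): take the lock, then
 * delegate to the locked helper. */
static void ogm_send(struct priv *p)
{
	pthread_mutex_lock(&p->ogm_buff_mutex);
	ogm_send_locked(p);
	pthread_mutex_unlock(&p->ogm_buff_mutex);
}

/* Teardown mirrors batadv_v_ogm_free(): free and NULL the buffer under
 * the same mutex so a concurrent sender never sees a dangling pointer. */
static void ogm_free(struct priv *p)
{
	pthread_mutex_lock(&p->ogm_buff_mutex);
	free(p->ogm_buff);
	p->ogm_buff = NULL;
	p->ogm_buff_len = 0;
	pthread_mutex_unlock(&p->ogm_buff_mutex);
}

int main(void)
{
	struct priv p = { .ogm_buff_len = 20 };

	pthread_mutex_init(&p.ogm_buff_mutex, NULL);
	p.ogm_buff = calloc(1, p.ogm_buff_len);
	ogm_send(&p);
	ogm_free(&p);
	ogm_send(&p);	/* safe: helper re-checks the buffer under the lock */
	pthread_mutex_destroy(&p.ogm_buff_mutex);
	return 0;
}

Compile with -lpthread; the second ogm_send() is a no-op because the helper re-checks the buffer pointer under the lock.
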
+diff --git a/net/batman-adv/debugfs.c b/net/batman-adv/debugfs.c
+index b4ffba7dd583..e0ab277db503 100644
+--- a/net/batman-adv/debugfs.c
++++ b/net/batman-adv/debugfs.c
+@@ -18,6 +18,7 @@
+ #include "debugfs.h"
+ #include "main.h"
+
++#include <linux/dcache.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/errno.h>
+@@ -339,6 +340,25 @@ out:
+ return -ENOMEM;
+ }
+
++/**
++ * batadv_debugfs_rename_hardif() - Fix debugfs path for renamed hardif
++ * @hard_iface: hard interface which was renamed
++ */
++void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
++{
++ const char *name = hard_iface->net_dev->name;
++ struct dentry *dir;
++ struct dentry *d;
++
++ dir = hard_iface->debug_dir;
++ if (!dir)
++ return;
++
++ d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
++ if (!d)
++ pr_err("Can't rename debugfs dir to %s\n", name);
++}
++
+ /**
+ * batadv_debugfs_del_hardif - delete the base directory for a hard interface
+ * in debugfs.
+@@ -403,6 +423,26 @@ out:
+ return -ENOMEM;
+ }
+
++/**
++ * batadv_debugfs_rename_meshif() - Fix debugfs path for renamed softif
++ * @dev: net_device which was renamed
++ */
++void batadv_debugfs_rename_meshif(struct net_device *dev)
++{
++ struct batadv_priv *bat_priv = netdev_priv(dev);
++ const char *name = dev->name;
++ struct dentry *dir;
++ struct dentry *d;
++
++ dir = bat_priv->debug_dir;
++ if (!dir)
++ return;
++
++ d = debugfs_rename(dir->d_parent, dir, dir->d_parent, name);
++ if (!d)
++ pr_err("Can't rename debugfs dir to %s\n", name);
++}
++
+ void batadv_debugfs_del_meshif(struct net_device *dev)
+ {
+ struct batadv_priv *bat_priv = netdev_priv(dev);
+diff --git a/net/batman-adv/debugfs.h b/net/batman-adv/debugfs.h
+index e49121ee55f6..59a0d6d70ecd 100644
+--- a/net/batman-adv/debugfs.h
++++ b/net/batman-adv/debugfs.h
+@@ -29,8 +29,10 @@ struct net_device;
+ void batadv_debugfs_init(void);
+ void batadv_debugfs_destroy(void);
+ int batadv_debugfs_add_meshif(struct net_device *dev);
++void batadv_debugfs_rename_meshif(struct net_device *dev);
+ void batadv_debugfs_del_meshif(struct net_device *dev);
+ int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface);
++void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface);
+ void batadv_debugfs_del_hardif(struct batadv_hard_iface *hard_iface);
+
+ #else
+@@ -48,6 +50,10 @@ static inline int batadv_debugfs_add_meshif(struct net_device *dev)
+ return 0;
+ }
+
++static inline void batadv_debugfs_rename_meshif(struct net_device *dev)
++{
++}
++
+ static inline void batadv_debugfs_del_meshif(struct net_device *dev)
+ {
+ }
+@@ -58,6 +64,11 @@ int batadv_debugfs_add_hardif(struct batadv_hard_iface *hard_iface)
+ return 0;
+ }
+
++static inline
++void batadv_debugfs_rename_hardif(struct batadv_hard_iface *hard_iface)
++{
++}
++
+ static inline
+ void batadv_debugfs_del_hardif(struct batadv_hard_iface *hard_iface)
+ {
+diff --git a/net/batman-adv/distributed-arp-table.c b/net/batman-adv/distributed-arp-table.c
+index 3b440b8d7c05..83c7009b0da1 100644
+--- a/net/batman-adv/distributed-arp-table.c
++++ b/net/batman-adv/distributed-arp-table.c
+@@ -1025,8 +1025,9 @@ bool batadv_dat_snoop_outgoing_arp_request(struct batadv_priv *bat_priv,
+ skb_reset_mac_header(skb_new);
+ skb_new->protocol = eth_type_trans(skb_new,
+ bat_priv->soft_iface);
+- bat_priv->stats.rx_packets++;
+- bat_priv->stats.rx_bytes += skb->len + ETH_HLEN + hdr_size;
++ batadv_inc_counter(bat_priv, BATADV_CNT_RX);
++ batadv_add_counter(bat_priv, BATADV_CNT_RX_BYTES,
++ skb->len + ETH_HLEN + hdr_size);
+ bat_priv->soft_iface->last_rx = jiffies;
+
+ netif_rx(skb_new);
+diff --git a/net/batman-adv/fragmentation.c b/net/batman-adv/fragmentation.c
+index a06b6041f3e0..fef21f75892e 100644
+--- a/net/batman-adv/fragmentation.c
++++ b/net/batman-adv/fragmentation.c
+@@ -232,8 +232,10 @@ err_unlock:
+ spin_unlock_bh(&chain->lock);
+
+ err:
+- if (!ret)
++ if (!ret) {
+ kfree(frag_entry_new);
++ kfree_skb(skb);
++ }
+
+ return ret;
+ }
+@@ -305,7 +307,7 @@ free:
+ *
+ * There are three possible outcomes: 1) Packet is merged: Return true and
+ * set *skb to merged packet; 2) Packet is buffered: Return true and set *skb
+- * to NULL; 3) Error: Return false and leave skb as is.
++ * to NULL; 3) Error: Return false and free skb.
+ *
+ * Return: true when packet is merged or buffered, false when skb is not
+ * used.
+@@ -330,9 +332,9 @@ bool batadv_frag_skb_buffer(struct sk_buff **skb,
+ goto out_err;
+
+ out:
+- *skb = skb_out;
+ ret = true;
+ out_err:
++ *skb = skb_out;
+ return ret;
+ }
+
+@@ -482,12 +484,20 @@ int batadv_frag_send_packet(struct sk_buff *skb,
+ */
+ if (skb->priority >= 256 && skb->priority <= 263)
+ frag_header.priority = skb->priority - 256;
++ else
++ frag_header.priority = 0;
+
+ ether_addr_copy(frag_header.orig, primary_if->net_dev->dev_addr);
+ ether_addr_copy(frag_header.dest, orig_node->orig);
+
+ /* Eat and send fragments from the tail of skb */
+ while (skb->len > max_fragment_size) {
++ /* The initial check in this function should cover this case */
++ if (frag_header.no == BATADV_FRAG_MAX_FRAGMENTS - 1) {
++ ret = -1;
++ goto out;
++ }
++
+ skb_fragment = batadv_frag_create(skb, &frag_header, mtu);
+ if (!skb_fragment)
+ goto out;
+@@ -505,12 +515,6 @@ int batadv_frag_send_packet(struct sk_buff *skb,
+ }
+
+ frag_header.no++;
+-
+- /* The initial check in this function should cover this case */
+- if (frag_header.no == BATADV_FRAG_MAX_FRAGMENTS - 1) {
+- ret = -1;
+- goto out;
+- }
+ }
+
+ /* Make room for the fragment header. */
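
The fragmentation hunk moves the BATADV_FRAG_MAX_FRAGMENTS test ahead of batadv_frag_create(), so the fragment counter is validated before it is used rather than after it has already run past the limit. Reduced to its essentials, with invented constants and sizes:

#include <stdio.h>

#define MAX_FRAGMENTS 16	/* assumed limit for the example */

static int send_fragments(int payload_len, int max_fragment_size)
{
	int frag_no = 0;

	while (payload_len > max_fragment_size) {
		/* Validate the counter before producing a fragment. */
		if (frag_no == MAX_FRAGMENTS - 1)
			return -1;	/* would exceed the on-wire field */

		printf("fragment %d: %d bytes\n", frag_no, max_fragment_size);
		payload_len -= max_fragment_size;
		frag_no++;
	}

	printf("fragment %d (last): %d bytes\n", frag_no, payload_len);
	return 0;
}

int main(void)
{
	if (send_fragments(100000, 1400) < 0)
		fprintf(stderr, "payload needs too many fragments\n");
	return 0;
}
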
+diff --git a/net/batman-adv/gateway_client.c b/net/batman-adv/gateway_client.c
+index ed9aaf30fbcf..3bd7ed6b6b3e 100644
+--- a/net/batman-adv/gateway_client.c
++++ b/net/batman-adv/gateway_client.c
+@@ -31,6 +31,7 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/list.h>
++#include <linux/lockdep.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/rculist.h>
+@@ -325,6 +326,9 @@ out:
+ * @bat_priv: the bat priv with all the soft interface information
+ * @orig_node: originator announcing gateway capabilities
+ * @gateway: announced bandwidth information
++ *
++ * Has to be called with the appropriate lock held
++ * (gw.list_lock).
+ */
+ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ struct batadv_orig_node *orig_node,
+@@ -332,6 +336,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ {
+ struct batadv_gw_node *gw_node;
+
++ lockdep_assert_held(&bat_priv->gw.list_lock);
++
+ if (gateway->bandwidth_down == 0)
+ return;
+
+@@ -346,10 +352,8 @@ static void batadv_gw_node_add(struct batadv_priv *bat_priv,
+ gw_node->bandwidth_down = ntohl(gateway->bandwidth_down);
+ gw_node->bandwidth_up = ntohl(gateway->bandwidth_up);
+
+- spin_lock_bh(&bat_priv->gw.list_lock);
+ kref_get(&gw_node->refcount);
+ hlist_add_head_rcu(&gw_node->list, &bat_priv->gw.list);
+- spin_unlock_bh(&bat_priv->gw.list_lock);
+
+ batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
+ "Found new gateway %pM -> gw bandwidth: %u.%u/%u.%u MBit\n",
+@@ -404,11 +408,14 @@ void batadv_gw_node_update(struct batadv_priv *bat_priv,
+ {
+ struct batadv_gw_node *gw_node, *curr_gw = NULL;
+
++ spin_lock_bh(&bat_priv->gw.list_lock);
+ gw_node = batadv_gw_node_get(bat_priv, orig_node);
+ if (!gw_node) {
+ batadv_gw_node_add(bat_priv, orig_node, gateway);
++ spin_unlock_bh(&bat_priv->gw.list_lock);
+ goto out;
+ }
++ spin_unlock_bh(&bat_priv->gw.list_lock);
+
+ if ((gw_node->bandwidth_down == ntohl(gateway->bandwidth_down)) &&
+ (gw_node->bandwidth_up == ntohl(gateway->bandwidth_up)))
+diff --git a/net/batman-adv/gateway_common.c b/net/batman-adv/gateway_common.c
+index 21184810d89f..3e3f91ab694f 100644
+--- a/net/batman-adv/gateway_common.c
++++ b/net/batman-adv/gateway_common.c
+@@ -253,6 +253,11 @@ static void batadv_gw_tvlv_ogm_handler_v1(struct batadv_priv *bat_priv,
+ */
+ void batadv_gw_init(struct batadv_priv *bat_priv)
+ {
++ if (bat_priv->algo_ops->gw.init_sel_class)
++ bat_priv->algo_ops->gw.init_sel_class(bat_priv);
++ else
++ atomic_set(&bat_priv->gw.sel_class, 1);
++
+ batadv_tvlv_handler_register(bat_priv, batadv_gw_tvlv_ogm_handler_v1,
+ NULL, BATADV_TVLV_GW, 1,
+ BATADV_TVLV_HANDLER_OGM_CIFNOTFND);
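
batadv_gw_init() above dispatches through an optional per-algorithm init_sel_class hook and falls back to a generic default when the hook is absent. A sketch of that optional-callback-with-fallback shape, with invented names and values:

#include <stdio.h>

struct priv;

struct gw_ops {
	void (*init_sel_class)(struct priv *priv);	/* optional; may be NULL */
};

struct priv {
	const struct gw_ops *ops;
	int sel_class;
};

static void batman_v_init_sel_class(struct priv *priv)
{
	priv->sel_class = 50;	/* 5.0 Mbit/s in units of 0.1 Mbit/s */
}

static void gw_init(struct priv *priv)
{
	if (priv->ops && priv->ops->init_sel_class)
		priv->ops->init_sel_class(priv);
	else
		priv->sel_class = 1;	/* generic default */
}

int main(void)
{
	static const struct gw_ops batman_v_ops = {
		.init_sel_class = batman_v_init_sel_class,
	};
	struct priv with_hook = { .ops = &batman_v_ops };
	struct priv without_hook = { .ops = NULL };

	gw_init(&with_hook);
	gw_init(&without_hook);
	printf("sel_class: %d vs %d\n", with_hook.sel_class, without_hook.sel_class);
	return 0;
}
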
+diff --git a/net/batman-adv/hard-interface.c b/net/batman-adv/hard-interface.c
+index 8f7883b7d717..f528761674df 100644
+--- a/net/batman-adv/hard-interface.c
++++ b/net/batman-adv/hard-interface.c
+@@ -28,6 +28,7 @@
+ #include <linux/kernel.h>
+ #include <linux/kref.h>
+ #include <linux/list.h>
++#include <linux/mutex.h>
+ #include <linux/netdevice.h>
+ #include <linux/printk.h>
+ #include <linux/rculist.h>
+@@ -539,6 +540,11 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface,
+ hard_iface->soft_iface = soft_iface;
+ bat_priv = netdev_priv(hard_iface->soft_iface);
+
++ if (bat_priv->num_ifaces >= UINT_MAX) {
++ ret = -ENOSPC;
++ goto err_dev;
++ }
++
+ ret = netdev_master_upper_dev_link(hard_iface->net_dev,
+ soft_iface, NULL, NULL);
+ if (ret)
+@@ -591,6 +597,9 @@ int batadv_hardif_enable_interface(struct batadv_hard_iface *hard_iface,
+
+ batadv_hardif_recalc_extra_skbroom(soft_iface);
+
++ if (bat_priv->algo_ops->iface.enabled)
++ bat_priv->algo_ops->iface.enabled(hard_iface);
++
+ out:
+ return 0;
+
+@@ -646,7 +655,7 @@ void batadv_hardif_disable_interface(struct batadv_hard_iface *hard_iface,
+ batadv_hardif_recalc_extra_skbroom(hard_iface->soft_iface);
+
+ /* nobody uses this interface anymore */
+- if (!bat_priv->num_ifaces) {
++ if (bat_priv->num_ifaces == 0) {
+ batadv_gw_check_client_stop(bat_priv);
+
+ if (autodel == BATADV_IF_CLEANUP_AUTO)
+@@ -682,7 +691,7 @@ batadv_hardif_add_interface(struct net_device *net_dev)
+ if (ret)
+ goto free_if;
+
+- hard_iface->if_num = -1;
++ hard_iface->if_num = 0;
+ hard_iface->net_dev = net_dev;
+ hard_iface->soft_iface = NULL;
+ hard_iface->if_status = BATADV_IF_NOT_IN_USE;
+@@ -694,6 +703,7 @@ batadv_hardif_add_interface(struct net_device *net_dev)
+ INIT_LIST_HEAD(&hard_iface->list);
+ INIT_HLIST_HEAD(&hard_iface->neigh_list);
+
++ mutex_init(&hard_iface->bat_iv.ogm_buff_mutex);
+ spin_lock_init(&hard_iface->neigh_list_lock);
+ kref_init(&hard_iface->refcount);
+
+@@ -750,6 +760,32 @@ void batadv_hardif_remove_interfaces(void)
+ rtnl_unlock();
+ }
+
++/**
++ * batadv_hard_if_event_softif() - Handle events for soft interfaces
++ * @event: NETDEV_* event to handle
++ * @net_dev: net_device which generated an event
++ *
++ * Return: NOTIFY_* result
++ */
++static int batadv_hard_if_event_softif(unsigned long event,
++ struct net_device *net_dev)
++{
++ struct batadv_priv *bat_priv;
++
++ switch (event) {
++ case NETDEV_REGISTER:
++ batadv_sysfs_add_meshif(net_dev);
++ bat_priv = netdev_priv(net_dev);
++ batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
++ break;
++ case NETDEV_CHANGENAME:
++ batadv_debugfs_rename_meshif(net_dev);
++ break;
++ }
++
++ return NOTIFY_DONE;
++}
++
+ static int batadv_hard_if_event(struct notifier_block *this,
+ unsigned long event, void *ptr)
+ {
+@@ -758,12 +794,8 @@ static int batadv_hard_if_event(struct notifier_block *this,
+ struct batadv_hard_iface *primary_if = NULL;
+ struct batadv_priv *bat_priv;
+
+- if (batadv_softif_is_valid(net_dev) && event == NETDEV_REGISTER) {
+- batadv_sysfs_add_meshif(net_dev);
+- bat_priv = netdev_priv(net_dev);
+- batadv_softif_create_vlan(bat_priv, BATADV_NO_FLAGS);
+- return NOTIFY_DONE;
+- }
++ if (batadv_softif_is_valid(net_dev))
++ return batadv_hard_if_event_softif(event, net_dev);
+
+ hard_iface = batadv_hardif_get_by_netdev(net_dev);
+ if (!hard_iface && (event == NETDEV_REGISTER ||
+@@ -807,6 +839,9 @@ static int batadv_hard_if_event(struct notifier_block *this,
+ if (hard_iface == primary_if)
+ batadv_primary_if_update_addr(bat_priv, NULL);
+ break;
++ case NETDEV_CHANGENAME:
++ batadv_debugfs_rename_hardif(hard_iface);
++ break;
+ default:
+ break;
+ }
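
The notifier rework routes every event for a batman-adv soft interface through one helper, so NETDEV_REGISTER and the newly handled NETDEV_CHANGENAME sit side by side instead of being special-cased inline in the main notifier. A toy dispatcher in the same shape, with a hypothetical event enum:

#include <stdio.h>

enum netdev_event { EV_REGISTER, EV_CHANGENAME, EV_UNREGISTER };

static int softif_event(enum netdev_event event, const char *name)
{
	switch (event) {
	case EV_REGISTER:
		printf("%s: create sysfs entries and default VLAN\n", name);
		break;
	case EV_CHANGENAME:
		printf("%s: rename debugfs directory\n", name);
		break;
	default:
		break;
	}
	return 0;	/* NOTIFY_DONE */
}

static int hard_if_event(enum netdev_event event, const char *name,
			 int is_softif)
{
	if (is_softif)
		return softif_event(event, name);

	printf("%s: regular hard-interface handling\n", name);
	return 0;
}

int main(void)
{
	hard_if_event(EV_REGISTER, "bat0", 1);
	hard_if_event(EV_CHANGENAME, "bat0", 1);
	hard_if_event(EV_REGISTER, "eth0", 0);
	return 0;
}
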
+diff --git a/net/batman-adv/originator.c b/net/batman-adv/originator.c
+index 7c8d16086f0f..8466f83fc32f 100644
+--- a/net/batman-adv/originator.c
++++ b/net/batman-adv/originator.c
+@@ -1495,7 +1495,7 @@ int batadv_orig_dump(struct sk_buff *msg, struct netlink_callback *cb)
+ }
+
+ int batadv_orig_hash_add_if(struct batadv_hard_iface *hard_iface,
+- int max_if_num)
++ unsigned int max_if_num)
+ {
+ struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
+ struct batadv_algo_ops *bao = bat_priv->algo_ops;
+@@ -1530,7 +1530,7 @@ err:
+ }
+
+ int batadv_orig_hash_del_if(struct batadv_hard_iface *hard_iface,
+- int max_if_num)
++ unsigned int max_if_num)
+ {
+ struct batadv_priv *bat_priv = netdev_priv(hard_iface->soft_iface);
+ struct batadv_hashtable *hash = bat_priv->orig_hash;
+diff --git a/net/batman-adv/originator.h b/net/batman-adv/originator.h
+index ebc56183f358..fab0b2cc141d 100644
+--- a/net/batman-adv/originator.h
++++ b/net/batman-adv/originator.h
+@@ -78,9 +78,9 @@ int batadv_orig_seq_print_text(struct seq_file *seq, void *offset);
+ int batadv_orig_dump(struct sk_buff *msg, struct netlink_callback *cb);
+ int batadv_orig_hardif_seq_print_text(struct seq_file *seq, void *offset);
+ int batadv_orig_hash_add_if(struct batadv_hard_iface *hard_iface,
+- int max_if_num);
++ unsigned int max_if_num);
+ int batadv_orig_hash_del_if(struct batadv_hard_iface *hard_iface,
+- int max_if_num);
++ unsigned int max_if_num);
+ struct batadv_orig_node_vlan *
+ batadv_orig_node_vlan_new(struct batadv_orig_node *orig_node,
+ unsigned short vid);
+diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c
+index 8b98609ebc1e..19059ae26e51 100644
+--- a/net/batman-adv/routing.c
++++ b/net/batman-adv/routing.c
+@@ -930,7 +930,6 @@ int batadv_recv_unicast_packet(struct sk_buff *skb,
+ bool is4addr;
+
+ unicast_packet = (struct batadv_unicast_packet *)skb->data;
+- unicast_4addr_packet = (struct batadv_unicast_4addr_packet *)skb->data;
+
+ is4addr = unicast_packet->packet_type == BATADV_UNICAST_4ADDR;
+ /* the caller function should have already pulled 2 bytes */
+@@ -951,9 +950,13 @@ int batadv_recv_unicast_packet(struct sk_buff *skb,
+ if (!batadv_check_unicast_ttvn(bat_priv, skb, hdr_size))
+ return NET_RX_DROP;
+
++ unicast_packet = (struct batadv_unicast_packet *)skb->data;
++
+ /* packet for me */
+ if (batadv_is_my_mac(bat_priv, unicast_packet->dest)) {
+ if (is4addr) {
++ unicast_4addr_packet =
++ (struct batadv_unicast_4addr_packet *)skb->data;
+ subtype = unicast_4addr_packet->subtype;
+ batadv_dat_inc_counter(bat_priv, subtype);
+
+@@ -1080,6 +1083,12 @@ int batadv_recv_frag_packet(struct sk_buff *skb,
+ batadv_inc_counter(bat_priv, BATADV_CNT_FRAG_RX);
+ batadv_add_counter(bat_priv, BATADV_CNT_FRAG_RX_BYTES, skb->len);
+
++ /* batadv_frag_skb_buffer will always consume the skb and
++ * the caller should therefore never try to free the
++ * skb after this point
++ */
++ ret = NET_RX_SUCCESS;
++
+ /* Add fragment to buffer and merge if possible. */
+ if (!batadv_frag_skb_buffer(&skb, orig_node_src))
+ goto out;
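
The routing fix re-reads unicast_packet from skb->data after batadv_check_unicast_ttvn(), since that call may reallocate the packet and leave the earlier pointer dangling. Plain realloc() shows the same hazard in a few lines (user-space sketch, not the kernel API):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct packet {
	unsigned char dest[6];
};

/* May move the buffer, invalidating any pointer into it -- much like a
 * helper that pulls or expands an skb can move skb->data. */
static unsigned char *maybe_expand(unsigned char *buf, size_t *len)
{
	*len += 64;
	return realloc(buf, *len);
}

int main(void)
{
	size_t len = sizeof(struct packet);
	unsigned char *buf = calloc(1, len);
	struct packet *pkt;

	if (!buf)
		return 1;

	pkt = (struct packet *)buf;
	memset(pkt->dest, 0xab, sizeof(pkt->dest));

	buf = maybe_expand(buf, &len);
	if (!buf)
		return 1;	/* sketch only; a real program would keep and free the old buffer */

	/* Wrong: keep using the old 'pkt'.  Right: re-derive it from the
	 * (possibly moved) buffer, just as the patch re-derives
	 * unicast_packet from skb->data. */
	pkt = (struct packet *)buf;
	printf("dest starts with %02x\n", pkt->dest[0]);

	free(buf);
	return 0;
}
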
+diff --git a/net/batman-adv/soft-interface.c b/net/batman-adv/soft-interface.c
+index a92512a46e91..99d2c453c872 100644
+--- a/net/batman-adv/soft-interface.c
++++ b/net/batman-adv/soft-interface.c
+@@ -808,7 +808,6 @@ static int batadv_softif_init_late(struct net_device *dev)
+ atomic_set(&bat_priv->mcast.num_want_all_ipv6, 0);
+ #endif
+ atomic_set(&bat_priv->gw.mode, BATADV_GW_MODE_OFF);
+- atomic_set(&bat_priv->gw.sel_class, 20);
+ atomic_set(&bat_priv->gw.bandwidth_down, 100);
+ atomic_set(&bat_priv->gw.bandwidth_up, 20);
+ atomic_set(&bat_priv->orig_interval, 1000);
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 1fab9bcf535d..d40d83949b00 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -867,7 +867,7 @@ batadv_tt_prepare_tvlv_global_data(struct batadv_orig_node *orig_node,
+ struct batadv_orig_node_vlan *vlan;
+ u8 *tt_change_ptr;
+
+- rcu_read_lock();
++ spin_lock_bh(&orig_node->vlan_list_lock);
+ hlist_for_each_entry_rcu(vlan, &orig_node->vlan_list, list) {
+ num_vlan++;
+ num_entries += atomic_read(&vlan->tt.num_entries);
+@@ -905,7 +905,7 @@ batadv_tt_prepare_tvlv_global_data(struct batadv_orig_node *orig_node,
+ *tt_change = (struct batadv_tvlv_tt_change *)tt_change_ptr;
+
+ out:
+- rcu_read_unlock();
++ spin_unlock_bh(&orig_node->vlan_list_lock);
+ return tvlv_len;
+ }
+
+@@ -936,15 +936,20 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
+ struct batadv_tvlv_tt_vlan_data *tt_vlan;
+ struct batadv_softif_vlan *vlan;
+ u16 num_vlan = 0;
+- u16 num_entries = 0;
++ u16 vlan_entries = 0;
++ u16 total_entries = 0;
+ u16 tvlv_len;
+ u8 *tt_change_ptr;
+ int change_offset;
+
+- rcu_read_lock();
++ spin_lock_bh(&bat_priv->softif_vlan_list_lock);
+ hlist_for_each_entry_rcu(vlan, &bat_priv->softif_vlan_list, list) {
++ vlan_entries = atomic_read(&vlan->tt.num_entries);
++ if (vlan_entries < 1)
++ continue;
++
+ num_vlan++;
+- num_entries += atomic_read(&vlan->tt.num_entries);
++ total_entries += vlan_entries;
+ }
+
+ change_offset = sizeof(**tt_data);
+@@ -952,7 +957,7 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
+
+ /* if tt_len is negative, allocate the space needed by the full table */
+ if (*tt_len < 0)
+- *tt_len = batadv_tt_len(num_entries);
++ *tt_len = batadv_tt_len(total_entries);
+
+ tvlv_len = *tt_len;
+ tvlv_len += change_offset;
+@@ -969,6 +974,10 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
+
+ tt_vlan = (struct batadv_tvlv_tt_vlan_data *)(*tt_data + 1);
+ hlist_for_each_entry_rcu(vlan, &bat_priv->softif_vlan_list, list) {
++ vlan_entries = atomic_read(&vlan->tt.num_entries);
++ if (vlan_entries < 1)
++ continue;
++
+ tt_vlan->vid = htons(vlan->vid);
+ tt_vlan->crc = htonl(vlan->tt.crc);
+
+@@ -979,7 +988,7 @@ batadv_tt_prepare_tvlv_local_data(struct batadv_priv *bat_priv,
+ *tt_change = (struct batadv_tvlv_tt_change *)tt_change_ptr;
+
+ out:
+- rcu_read_unlock();
++ spin_unlock_bh(&bat_priv->softif_vlan_list_lock);
+ return tvlv_len;
+ }
+
+@@ -1539,6 +1548,8 @@ batadv_tt_global_orig_entry_find(const struct batadv_tt_global_entry *entry,
+ * by a given originator
+ * @entry: the TT global entry to check
+ * @orig_node: the originator to search in the list
++ * @flags: a pointer to store TT flags for the given @entry received
++ * from @orig_node
+ *
+ * find out if an orig_node is already in the list of a tt_global_entry.
+ *
+@@ -1546,7 +1557,8 @@ batadv_tt_global_orig_entry_find(const struct batadv_tt_global_entry *entry,
+ */
+ static bool
+ batadv_tt_global_entry_has_orig(const struct batadv_tt_global_entry *entry,
+- const struct batadv_orig_node *orig_node)
++ const struct batadv_orig_node *orig_node,
++ u8 *flags)
+ {
+ struct batadv_tt_orig_list_entry *orig_entry;
+ bool found = false;
+@@ -1554,15 +1566,51 @@ batadv_tt_global_entry_has_orig(const struct batadv_tt_global_entry *entry,
+ orig_entry = batadv_tt_global_orig_entry_find(entry, orig_node);
+ if (orig_entry) {
+ found = true;
++
++ if (flags)
++ *flags = orig_entry->flags;
++
+ batadv_tt_orig_list_entry_put(orig_entry);
+ }
+
+ return found;
+ }
+
++/**
++ * batadv_tt_global_sync_flags - update TT sync flags
++ * @tt_global: the TT global entry to update sync flags in
++ *
++ * Updates the sync flag bits in the tt_global flag attribute with a logical
++ * OR of all sync flags from any of its TT orig entries.
++ */
++static void
++batadv_tt_global_sync_flags(struct batadv_tt_global_entry *tt_global)
++{
++ struct batadv_tt_orig_list_entry *orig_entry;
++ const struct hlist_head *head;
++ u16 flags = BATADV_NO_FLAGS;
++
++ rcu_read_lock();
++ head = &tt_global->orig_list;
++ hlist_for_each_entry_rcu(orig_entry, head, list)
++ flags |= orig_entry->flags;
++ rcu_read_unlock();
++
++ flags |= tt_global->common.flags & (~BATADV_TT_SYNC_MASK);
++ tt_global->common.flags = flags;
++}
++
++/**
++ * batadv_tt_global_orig_entry_add - add or update a TT orig entry
++ * @tt_global: the TT global entry to add an orig entry in
++ * @orig_node: the originator to add an orig entry for
++ * @ttvn: translation table version number of this changeset
++ * @flags: TT sync flags
++ */
+ static void
+ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+- struct batadv_orig_node *orig_node, int ttvn)
++ struct batadv_orig_node *orig_node, int ttvn,
++ u8 flags)
+ {
+ struct batadv_tt_orig_list_entry *orig_entry;
+
+@@ -1574,7 +1622,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ * was added during a "temporary client detection"
+ */
+ orig_entry->ttvn = ttvn;
+- goto out;
++ orig_entry->flags = flags;
++ goto sync_flags;
+ }
+
+ orig_entry = kmem_cache_zalloc(batadv_tt_orig_cache, GFP_ATOMIC);
+@@ -1586,6 +1635,7 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ batadv_tt_global_size_inc(orig_node, tt_global->common.vid);
+ orig_entry->orig_node = orig_node;
+ orig_entry->ttvn = ttvn;
++ orig_entry->flags = flags;
+ kref_init(&orig_entry->refcount);
+
+ kref_get(&orig_entry->refcount);
+@@ -1593,6 +1643,8 @@ batadv_tt_global_orig_entry_add(struct batadv_tt_global_entry *tt_global,
+ &tt_global->orig_list);
+ atomic_inc(&tt_global->orig_list_count);
+
++sync_flags:
++ batadv_tt_global_sync_flags(tt_global);
+ out:
+ if (orig_entry)
+ batadv_tt_orig_list_entry_put(orig_entry);
+@@ -1656,7 +1708,9 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
+ ether_addr_copy(common->addr, tt_addr);
+ common->vid = vid;
+
+- common->flags = flags;
++ if (!is_multicast_ether_addr(common->addr))
++ common->flags = flags & (~BATADV_TT_SYNC_MASK);
++
+ tt_global_entry->roam_at = 0;
+ /* node must store current time in case of roaming. This is
+ * needed to purge this entry out on timeout (if nobody claims
+@@ -1698,7 +1752,7 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
+ if (!(common->flags & BATADV_TT_CLIENT_TEMP))
+ goto out;
+ if (batadv_tt_global_entry_has_orig(tt_global_entry,
+- orig_node))
++ orig_node, NULL))
+ goto out_remove;
+ batadv_tt_global_del_orig_list(tt_global_entry);
+ goto add_orig_entry;
+@@ -1716,10 +1770,11 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
+ }
+
+ /* the change can carry possible "attribute" flags like the
+- * TT_CLIENT_WIFI, therefore they have to be copied in the
++ * TT_CLIENT_TEMP, therefore they have to be copied in the
+ * client entry
+ */
+- common->flags |= flags;
++ if (!is_multicast_ether_addr(common->addr))
++ common->flags |= flags & (~BATADV_TT_SYNC_MASK);
+
+ /* If there is the BATADV_TT_CLIENT_ROAM flag set, there is only
+ * one originator left in the list and we previously received a
+@@ -1736,7 +1791,8 @@ static bool batadv_tt_global_add(struct batadv_priv *bat_priv,
+ }
+ add_orig_entry:
+ /* add the new orig_entry (if needed) or update it */
+- batadv_tt_global_orig_entry_add(tt_global_entry, orig_node, ttvn);
++ batadv_tt_global_orig_entry_add(tt_global_entry, orig_node, ttvn,
++ flags & BATADV_TT_SYNC_MASK);
+
+ batadv_dbg(BATADV_DBG_TT, bat_priv,
+ "Creating new global tt entry: %pM (vid: %d, via %pM)\n",
+@@ -1959,6 +2015,7 @@ batadv_tt_global_dump_subentry(struct sk_buff *msg, u32 portid, u32 seq,
+ struct batadv_tt_orig_list_entry *orig,
+ bool best)
+ {
++ u16 flags = (common->flags & (~BATADV_TT_SYNC_MASK)) | orig->flags;
+ void *hdr;
+ struct batadv_orig_node_vlan *vlan;
+ u8 last_ttvn;
+@@ -1988,7 +2045,7 @@ batadv_tt_global_dump_subentry(struct sk_buff *msg, u32 portid, u32 seq,
+ nla_put_u8(msg, BATADV_ATTR_TT_LAST_TTVN, last_ttvn) ||
+ nla_put_u32(msg, BATADV_ATTR_TT_CRC32, crc) ||
+ nla_put_u16(msg, BATADV_ATTR_TT_VID, common->vid) ||
+- nla_put_u32(msg, BATADV_ATTR_TT_FLAGS, common->flags))
++ nla_put_u32(msg, BATADV_ATTR_TT_FLAGS, flags))
+ goto nla_put_failure;
+
+ if (best && nla_put_flag(msg, BATADV_ATTR_FLAG_BEST))
+@@ -2602,6 +2659,7 @@ static u32 batadv_tt_global_crc(struct batadv_priv *bat_priv,
+ unsigned short vid)
+ {
+ struct batadv_hashtable *hash = bat_priv->tt.global_hash;
++ struct batadv_tt_orig_list_entry *tt_orig;
+ struct batadv_tt_common_entry *tt_common;
+ struct batadv_tt_global_entry *tt_global;
+ struct hlist_head *head;
+@@ -2640,8 +2698,9 @@ static u32 batadv_tt_global_crc(struct batadv_priv *bat_priv,
+ /* find out if this global entry is announced by this
+ * originator
+ */
+- if (!batadv_tt_global_entry_has_orig(tt_global,
+- orig_node))
++ tt_orig = batadv_tt_global_orig_entry_find(tt_global,
++ orig_node);
++ if (!tt_orig)
+ continue;
+
+ /* use network order to read the VID: this ensures that
+@@ -2653,10 +2712,12 @@ static u32 batadv_tt_global_crc(struct batadv_priv *bat_priv,
+ /* compute the CRC on flags that have to be kept in sync
+ * among nodes
+ */
+- flags = tt_common->flags & BATADV_TT_SYNC_MASK;
++ flags = tt_orig->flags;
+ crc_tmp = crc32c(crc_tmp, &flags, sizeof(flags));
+
+ crc ^= crc32c(crc_tmp, tt_common->addr, ETH_ALEN);
++
++ batadv_tt_orig_list_entry_put(tt_orig);
+ }
+ rcu_read_unlock();
+ }
+@@ -2834,23 +2895,46 @@ unlock:
+ }
+
+ /**
+- * batadv_tt_local_valid - verify that given tt entry is a valid one
++ * batadv_tt_local_valid() - verify local tt entry and get flags
+ * @entry_ptr: to be checked local tt entry
+ * @data_ptr: not used but definition required to satisfy the callback prototype
++ * @flags: a pointer to store the TT flags for this client in
++ *
++ * Checks the validity of the given local TT entry. If it is valid, the
++ * provided flags pointer is updated.
+ *
+ * Return: true if the entry is valid, false otherwise.
+ */
+-static bool batadv_tt_local_valid(const void *entry_ptr, const void *data_ptr)
++static bool batadv_tt_local_valid(const void *entry_ptr,
++ const void *data_ptr,
++ u8 *flags)
+ {
+ const struct batadv_tt_common_entry *tt_common_entry = entry_ptr;
+
+ if (tt_common_entry->flags & BATADV_TT_CLIENT_NEW)
+ return false;
++
++ if (flags)
++ *flags = tt_common_entry->flags;
++
+ return true;
+ }
+
++/**
++ * batadv_tt_global_valid() - verify global tt entry and get flags
++ * @entry_ptr: to be checked global tt entry
++ * @data_ptr: an orig_node object (may be NULL)
++ * @flags: a pointer to store the TT flags for this client in
++ *
++ * Checks the validity of the given global TT entry. If it is valid, the
++ * provided flags pointer is updated, either with the common (summed) TT
++ * flags if data_ptr is NULL or with the specific, per-originator TT flags
++ * otherwise.
++ *
++ * Return: true if the entry is valid, false otherwise.
++ */
+ static bool batadv_tt_global_valid(const void *entry_ptr,
+- const void *data_ptr)
++ const void *data_ptr,
++ u8 *flags)
+ {
+ const struct batadv_tt_common_entry *tt_common_entry = entry_ptr;
+ const struct batadv_tt_global_entry *tt_global_entry;
+@@ -2864,7 +2948,8 @@ static bool batadv_tt_global_valid(const void *entry_ptr,
+ struct batadv_tt_global_entry,
+ common);
+
+- return batadv_tt_global_entry_has_orig(tt_global_entry, orig_node);
++ return batadv_tt_global_entry_has_orig(tt_global_entry, orig_node,
++ flags);
+ }
+
+ /**
+@@ -2874,25 +2959,34 @@ static bool batadv_tt_global_valid(const void *entry_ptr,
+ * @hash: hash table containing the tt entries
+ * @tt_len: expected tvlv tt data buffer length in number of bytes
+ * @tvlv_buff: pointer to the buffer to fill with the TT data
+- * @valid_cb: function to filter tt change entries
++ * @valid_cb: function to filter tt change entries and to return TT flags
+ * @cb_data: data passed to the filter function as argument
++ *
++ * Fills the tvlv buff with the tt entries from the specified hash. If valid_cb
++ * is not provided then this becomes a no-op.
+ */
+ static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+ struct batadv_hashtable *hash,
+ void *tvlv_buff, u16 tt_len,
+ bool (*valid_cb)(const void *,
+- const void *),
++ const void *,
++ u8 *flags),
+ void *cb_data)
+ {
+ struct batadv_tt_common_entry *tt_common_entry;
+ struct batadv_tvlv_tt_change *tt_change;
+ struct hlist_head *head;
+ u16 tt_tot, tt_num_entries = 0;
++ u8 flags;
++ bool ret;
+ u32 i;
+
+ tt_tot = batadv_tt_entries(tt_len);
+ tt_change = (struct batadv_tvlv_tt_change *)tvlv_buff;
+
++ if (!valid_cb)
++ return;
++
+ rcu_read_lock();
+ for (i = 0; i < hash->size; i++) {
+ head = &hash->table[i];
+@@ -2902,11 +2996,12 @@ static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+ if (tt_tot == tt_num_entries)
+ break;
+
+- if ((valid_cb) && (!valid_cb(tt_common_entry, cb_data)))
++ ret = valid_cb(tt_common_entry, cb_data, &flags);
++ if (!ret)
+ continue;
+
+ ether_addr_copy(tt_change->addr, tt_common_entry->addr);
+- tt_change->flags = tt_common_entry->flags;
++ tt_change->flags = flags;
+ tt_change->vid = htons(tt_common_entry->vid);
+ memset(tt_change->reserved, 0,
+ sizeof(tt_change->reserved));
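
batadv_tt_global_sync_flags() above recomputes the sync portion of a global entry's flags as the logical OR of all per-originator flags while preserving the non-sync bits. In miniature, with an assumed mask value:

#include <stdio.h>

#define TT_SYNC_MASK 0x00F0	/* assumed bit layout for the example */

struct orig_entry {
	unsigned short flags;
};

static unsigned short
recompute_flags(unsigned short common_flags,
		const struct orig_entry *entries, int n)
{
	unsigned short flags = 0;
	int i;

	for (i = 0; i < n; i++)
		flags |= entries[i].flags;	/* OR of all sync flags */

	/* keep the non-sync bits from the common entry untouched */
	flags |= common_flags & ~TT_SYNC_MASK;
	return flags;
}

int main(void)
{
	struct orig_entry entries[] = { { 0x0010 }, { 0x0040 } };
	unsigned short flags = recompute_flags(0x0003, entries, 2);

	printf("combined flags: 0x%04x\n", flags);	/* prints 0x0053 */
	return 0;
}
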
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index b3dd1a381aad..c17b74e51fe9 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -27,6 +27,7 @@
+ #include <linux/compiler.h>
+ #include <linux/if_ether.h>
+ #include <linux/kref.h>
++#include <linux/mutex.h>
+ #include <linux/netdevice.h>
+ #include <linux/netlink.h>
+ #include <linux/sched.h> /* for linux/wait.h */
+@@ -81,11 +82,13 @@ enum batadv_dhcp_recipient {
+ * @ogm_buff: buffer holding the OGM packet
+ * @ogm_buff_len: length of the OGM packet buffer
+ * @ogm_seqno: OGM sequence number - used to identify each OGM
++ * @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len
+ */
+ struct batadv_hard_iface_bat_iv {
+ unsigned char *ogm_buff;
+ int ogm_buff_len;
+ atomic_t ogm_seqno;
++ struct mutex ogm_buff_mutex;
+ };
+
+ /**
+@@ -139,7 +142,7 @@ struct batadv_hard_iface_bat_v {
+ */
+ struct batadv_hard_iface {
+ struct list_head list;
+- s16 if_num;
++ unsigned int if_num;
+ char if_status;
+ struct net_device *net_dev;
+ u8 num_bcasts;
+@@ -966,12 +969,14 @@ struct batadv_softif_vlan {
+ * @ogm_buff: buffer holding the OGM packet
+ * @ogm_buff_len: length of the OGM packet buffer
+ * @ogm_seqno: OGM sequence number - used to identify each OGM
++ * @ogm_buff_mutex: lock protecting ogm_buff and ogm_buff_len
+ * @ogm_wq: workqueue used to schedule OGM transmissions
+ */
+ struct batadv_priv_bat_v {
+ unsigned char *ogm_buff;
+ int ogm_buff_len;
+ atomic_t ogm_seqno;
++ struct mutex ogm_buff_mutex;
+ struct delayed_work ogm_wq;
+ };
+
+@@ -1060,7 +1065,7 @@ struct batadv_priv {
+ atomic_t bcast_seqno;
+ atomic_t bcast_queue_left;
+ atomic_t batman_queue_left;
+- char num_ifaces;
++ unsigned int num_ifaces;
+ struct kobject *mesh_obj;
+ struct dentry *debug_dir;
+ struct hlist_head forw_bat_list;
+@@ -1241,6 +1246,7 @@ struct batadv_tt_global_entry {
+ * struct batadv_tt_orig_list_entry - orig node announcing a non-mesh client
+ * @orig_node: pointer to orig node announcing this non-mesh client
+ * @ttvn: translation table version number which added the non-mesh client
++ * @flags: per orig entry TT sync flags
+ * @list: list node for batadv_tt_global_entry::orig_list
+ * @refcount: number of contexts the object is used
+ * @rcu: struct used for freeing in an RCU-safe manner
+@@ -1248,6 +1254,7 @@ struct batadv_tt_global_entry {
+ struct batadv_tt_orig_list_entry {
+ struct batadv_orig_node *orig_node;
+ u8 ttvn;
++ u8 flags;
+ struct hlist_node list;
+ struct kref refcount;
+ struct rcu_head rcu;
+@@ -1397,6 +1404,7 @@ struct batadv_forw_packet {
+ * @activate: start routing mechanisms when hard-interface is brought up
+ * (optional)
+ * @enable: init routing info when hard-interface is enabled
++ * @enabled: notification when hard-interface was enabled (optional)
+ * @disable: de-init routing info when hard-interface is disabled
+ * @update_mac: (re-)init mac addresses of the protocol information
+ * belonging to this hard-interface
+@@ -1405,6 +1413,7 @@ struct batadv_forw_packet {
+ struct batadv_algo_iface_ops {
+ void (*activate)(struct batadv_hard_iface *hard_iface);
+ int (*enable)(struct batadv_hard_iface *hard_iface);
++ void (*enabled)(struct batadv_hard_iface *hard_iface);
+ void (*disable)(struct batadv_hard_iface *hard_iface);
+ void (*update_mac)(struct batadv_hard_iface *hard_iface);
+ void (*primary_set)(struct batadv_hard_iface *hard_iface);
+@@ -1452,9 +1461,10 @@ struct batadv_algo_neigh_ops {
+ */
+ struct batadv_algo_orig_ops {
+ void (*free)(struct batadv_orig_node *orig_node);
+- int (*add_if)(struct batadv_orig_node *orig_node, int max_if_num);
+- int (*del_if)(struct batadv_orig_node *orig_node, int max_if_num,
+- int del_if_num);
++ int (*add_if)(struct batadv_orig_node *orig_node,
++ unsigned int max_if_num);
++ int (*del_if)(struct batadv_orig_node *orig_node,
++ unsigned int max_if_num, unsigned int del_if_num);
+ #ifdef CONFIG_BATMAN_ADV_DEBUGFS
+ void (*print)(struct batadv_priv *priv, struct seq_file *seq,
+ struct batadv_hard_iface *hard_iface);
+@@ -1466,6 +1476,7 @@ struct batadv_algo_orig_ops {
+
+ /**
+ * struct batadv_algo_gw_ops - mesh algorithm callbacks (GW specific)
++ * @init_sel_class: initialize GW selection class (optional)
+ * @store_sel_class: parse and stores a new GW selection class (optional)
+ * @show_sel_class: prints the current GW selection class (optional)
+ * @get_best_gw_node: select the best GW from the list of available nodes
+@@ -1476,6 +1487,7 @@ struct batadv_algo_orig_ops {
+ * @dump: dump gateways to a netlink socket (optional)
+ */
+ struct batadv_algo_gw_ops {
++ void (*init_sel_class)(struct batadv_priv *bat_priv);
+ ssize_t (*store_sel_class)(struct batadv_priv *bat_priv, char *buff,
+ size_t count);
+ ssize_t (*show_sel_class)(struct batadv_priv *bat_priv, char *buff);
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index 2e4eef71471d..db65b0cdfc4c 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -55,30 +55,60 @@ static void cgrp_css_free(struct cgroup_subsys_state *css)
+ kfree(css_cls_state(css));
+ }
+
++/*
++ * To avoid stalling socket creation for tasks with a large number of
++ * threads and open sockets, release file_lock every 1000 iterated
++ * descriptors. New sockets will already have been created with the new
++ * classid.
++ */
++
++struct update_classid_context {
++ u32 classid;
++ unsigned int batch;
++};
++
++#define UPDATE_CLASSID_BATCH 1000
++
+ static int update_classid_sock(const void *v, struct file *file, unsigned n)
+ {
+ int err;
++ struct update_classid_context *ctx = (void *)v;
+ struct socket *sock = sock_from_file(file, &err);
+
+ if (sock) {
+ spin_lock(&cgroup_sk_update_lock);
+- sock_cgroup_set_classid(&sock->sk->sk_cgrp_data,
+- (unsigned long)v);
++ sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid);
+ spin_unlock(&cgroup_sk_update_lock);
+ }
++ if (--ctx->batch == 0) {
++ ctx->batch = UPDATE_CLASSID_BATCH;
++ return n + 1;
++ }
+ return 0;
+ }
+
++static void update_classid_task(struct task_struct *p, u32 classid)
++{
++ struct update_classid_context ctx = {
++ .classid = classid,
++ .batch = UPDATE_CLASSID_BATCH
++ };
++ unsigned int fd = 0;
++
++ do {
++ task_lock(p);
++ fd = iterate_fd(p->files, fd, update_classid_sock, &ctx);
++ task_unlock(p);
++ cond_resched();
++ } while (fd);
++}
++
+ static void cgrp_attach(struct cgroup_taskset *tset)
+ {
+ struct cgroup_subsys_state *css;
+ struct task_struct *p;
+
+ cgroup_taskset_for_each(p, css, tset) {
+- task_lock(p);
+- iterate_fd(p->files, 0, update_classid_sock,
+- (void *)(unsigned long)css_cls_state(css)->classid);
+- task_unlock(p);
++ update_classid_task(p, css_cls_state(css)->classid);
+ }
+ }
+
+@@ -100,10 +130,7 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+
+ css_task_iter_start(css, &it);
+ while ((p = css_task_iter_next(&it))) {
+- task_lock(p);
+- iterate_fd(p->files, 0, update_classid_sock,
+- (void *)(unsigned long)cs->classid);
+- task_unlock(p);
++ update_classid_task(p, cs->classid);
+ cond_resched();
+ }
+ css_task_iter_end(&it);
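
The cgroup classid fix walks file descriptors in batches of 1000, returning the next index from the callback so the caller can drop the lock, reschedule and resume where it stopped. A resumable-batch sketch with hypothetical locking stubs left as comments:

#include <stdio.h>

#define BATCH 1000

struct walk_ctx {
	unsigned int batch;
	unsigned long done;
};

/* Visit one descriptor; a non-zero return value means "stop and resume
 * from this index", mirroring iterate_fd()'s contract. */
static unsigned int visit_fd(struct walk_ctx *ctx, unsigned int n)
{
	ctx->done++;
	if (--ctx->batch == 0) {
		ctx->batch = BATCH;
		return n + 1;	/* resume point */
	}
	return 0;
}

int main(void)
{
	struct walk_ctx ctx = { .batch = BATCH };
	unsigned int fd = 0, nfds = 2500;

	do {
		/* lock(); */
		while (fd < nfds) {
			unsigned int next = visit_fd(&ctx, fd);

			fd++;
			if (next)
				break;	/* drop lock, cond_resched(), resume */
		}
		/* unlock(); cond_resched(); */
	} while (fd < nfds);

	printf("visited %lu descriptors in batches of %d\n", ctx.done, BATCH);
	return 0;
}

The n + 1 return value is what lets the walk resume mid-table instead of restarting from descriptor 0 after every batch.
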
+diff --git a/net/ieee802154/nl_policy.c b/net/ieee802154/nl_policy.c
+index 35c432668454..040983fc15da 100644
+--- a/net/ieee802154/nl_policy.c
++++ b/net/ieee802154/nl_policy.c
+@@ -30,7 +30,13 @@ const struct nla_policy ieee802154_policy[IEEE802154_ATTR_MAX + 1] = {
+ [IEEE802154_ATTR_HW_ADDR] = { .type = NLA_HW_ADDR, },
+ [IEEE802154_ATTR_PAN_ID] = { .type = NLA_U16, },
+ [IEEE802154_ATTR_CHANNEL] = { .type = NLA_U8, },
++ [IEEE802154_ATTR_BCN_ORD] = { .type = NLA_U8, },
++ [IEEE802154_ATTR_SF_ORD] = { .type = NLA_U8, },
++ [IEEE802154_ATTR_PAN_COORD] = { .type = NLA_U8, },
++ [IEEE802154_ATTR_BAT_EXT] = { .type = NLA_U8, },
++ [IEEE802154_ATTR_COORD_REALIGN] = { .type = NLA_U8, },
+ [IEEE802154_ATTR_PAGE] = { .type = NLA_U8, },
++ [IEEE802154_ATTR_DEV_TYPE] = { .type = NLA_U8, },
+ [IEEE802154_ATTR_COORD_SHORT_ADDR] = { .type = NLA_U16, },
+ [IEEE802154_ATTR_COORD_HW_ADDR] = { .type = NLA_HW_ADDR, },
+ [IEEE802154_ATTR_COORD_PAN_ID] = { .type = NLA_U16, },
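
This hunk, like several later ones (nfnetlink_cthelper, nfc netlink, sch_fq, nl80211), adds missing nla_policy entries so every attribute a netlink interface accepts has a declared type and gets length-checked during validation. A toy table-driven validator in the same spirit, with invented attribute names:

#include <stdio.h>

enum attr_type { T_UNSPEC, T_U8, T_U16, T_U32 };

struct policy { enum attr_type type; };

enum { ATTR_CHANNEL, ATTR_PAN_ID, ATTR_BCN_ORD, ATTR_MAX };

static const struct policy policy[ATTR_MAX] = {
	[ATTR_CHANNEL] = { .type = T_U8 },
	[ATTR_PAN_ID]  = { .type = T_U16 },
	[ATTR_BCN_ORD] = { .type = T_U8 },	/* the kind of entry the patch adds */
};

static size_t type_len(enum attr_type t)
{
	switch (t) {
	case T_U8:  return 1;
	case T_U16: return 2;
	case T_U32: return 4;
	default:    return 0;
	}
}

/* Reject attributes whose payload does not match the declared type. */
static int validate(int attr, size_t payload_len)
{
	if (attr < 0 || attr >= ATTR_MAX || policy[attr].type == T_UNSPEC)
		return -1;	/* unknown or unpolicied attribute */
	return payload_len == type_len(policy[attr].type) ? 0 : -1;
}

int main(void)
{
	printf("BCN_ORD u8 payload: %s\n",
	       validate(ATTR_BCN_ORD, 1) ? "reject" : "ok");
	printf("BCN_ORD 4-byte payload: %s\n",
	       validate(ATTR_BCN_ORD, 4) ? "reject" : "ok");
	return 0;
}
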
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 71bcab94c5c7..0a6f72763beb 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1738,6 +1738,7 @@ void cipso_v4_error(struct sk_buff *skb, int error, u32 gateway)
+ {
+ unsigned char optbuf[sizeof(struct ip_options) + 40];
+ struct ip_options *opt = (struct ip_options *)optbuf;
++ int res;
+
+ if (ip_hdr(skb)->protocol == IPPROTO_ICMP || error != -EACCES)
+ return;
+@@ -1749,7 +1750,11 @@ void cipso_v4_error(struct sk_buff *skb, int error, u32 gateway)
+
+ memset(opt, 0, sizeof(struct ip_options));
+ opt->optlen = ip_hdr(skb)->ihl*4 - sizeof(struct iphdr);
+- if (__ip_options_compile(dev_net(skb->dev), opt, skb, NULL))
++ rcu_read_lock();
++ res = __ip_options_compile(dev_net(skb->dev), opt, skb, NULL);
++ rcu_read_unlock();
++
++ if (res)
+ return;
+
+ if (gateway)
+diff --git a/net/ipv4/gre_demux.c b/net/ipv4/gre_demux.c
+index 7efe740c06eb..4a5e55e94a9e 100644
+--- a/net/ipv4/gre_demux.c
++++ b/net/ipv4/gre_demux.c
+@@ -60,7 +60,9 @@ int gre_del_protocol(const struct gre_protocol *proto, u8 version)
+ }
+ EXPORT_SYMBOL_GPL(gre_del_protocol);
+
+-/* Fills in tpi and returns header length to be pulled. */
++/* Fills in tpi and returns header length to be pulled.
++ * Note that the caller must use pskb_may_pull() before pulling the GRE header.
++ */
+ int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ bool *csum_err, __be16 proto, int nhs)
+ {
+@@ -114,8 +116,14 @@ int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ * - When dealing with WCCPv2, Skip extra 4 bytes in GRE header
+ */
+ if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
++ u8 _val, *val;
++
++ val = skb_header_pointer(skb, nhs + hdr_len,
++ sizeof(_val), &_val);
++ if (!val)
++ return -EINVAL;
+ tpi->proto = proto;
+- if ((*(u8 *)options & 0xF0) != 0x40)
++ if ((*val & 0xF0) != 0x40)
+ hdr_len += 4;
+ }
+ tpi->hdr_len = hdr_len;
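
The GRE fix replaces a raw dereference of the options area with skb_header_pointer(), which either returns a direct pointer or copies the requested bytes into a caller-supplied scratch variable when they are not linearly accessible. A simplified analogue over a two-fragment buffer, where header_pointer() is a hypothetical stand-in:

#include <stdio.h>
#include <string.h>

struct frag_buf {
	const unsigned char *part[2];
	size_t part_len[2];
};

/* Return a pointer to 'len' bytes at 'offset', using 'scratch' if the
 * range is not directly accessible; NULL if out of bounds. */
static const void *header_pointer(const struct frag_buf *b, size_t offset,
				  size_t len, void *scratch)
{
	size_t first;

	if (offset + len <= b->part_len[0])
		return b->part[0] + offset;	/* directly accessible */

	if (offset + len > b->part_len[0] + b->part_len[1])
		return NULL;			/* would read past the packet */

	/* Assemble the requested range into the caller's scratch buffer. */
	first = offset < b->part_len[0] ? b->part_len[0] - offset : 0;
	if (first)
		memcpy(scratch, b->part[0] + offset, first);
	memcpy((unsigned char *)scratch + first,
	       b->part[1] + (offset + first - b->part_len[0]),
	       len - first);
	return scratch;
}

int main(void)
{
	const unsigned char a[] = { 0x45, 0x00 }, c[] = { 0x40, 0x11 };
	struct frag_buf b = { { a, c }, { 2, 2 } };
	unsigned char _val;
	const unsigned char *val = header_pointer(&b, 2, 1, &_val);

	if (val)
		printf("first options byte: 0x%02x\n", *val);
	return 0;
}
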
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 6b1310d5e808..a4c00242a90b 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3189,6 +3189,10 @@ static void addrconf_dev_config(struct net_device *dev)
+ (dev->type != ARPHRD_6LOWPAN) &&
+ (dev->type != ARPHRD_NONE)) {
+ /* Alas, we support only Ethernet autoconfiguration. */
++ idev = __in6_dev_get(dev);
++ if (!IS_ERR_OR_NULL(idev) && dev->flags & IFF_UP &&
++ dev->flags & IFF_MULTICAST)
++ ipv6_mc_up(idev);
+ return;
+ }
+
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 81fd35ed8732..1080770b5eaf 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -184,9 +184,15 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ retv = -EBUSY;
+ break;
+ }
+- } else if (sk->sk_protocol != IPPROTO_TCP)
++ } else if (sk->sk_protocol == IPPROTO_TCP) {
++ if (sk->sk_prot != &tcpv6_prot) {
++ retv = -EBUSY;
++ break;
++ }
+ break;
+-
++ } else {
++ break;
++ }
+ if (sk->sk_state != TCP_ESTABLISHED) {
+ retv = -ENOTCONN;
+ break;
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 74652eb2f90f..a6f265262f15 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -3841,7 +3841,7 @@ void __ieee80211_check_fast_rx_iface(struct ieee80211_sub_if_data *sdata)
+
+ lockdep_assert_held(&local->sta_mtx);
+
+- list_for_each_entry_rcu(sta, &local->sta_list, list) {
++ list_for_each_entry(sta, &local->sta_list, list) {
+ if (sdata != sta->sdata &&
+ (!sta->sdata->bss || sta->sdata->bss != sdata->bss))
+ continue;
+diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c
+index 3f499126727c..8396dc8ee247 100644
+--- a/net/netfilter/nfnetlink_cthelper.c
++++ b/net/netfilter/nfnetlink_cthelper.c
+@@ -711,6 +711,8 @@ static const struct nla_policy nfnl_cthelper_policy[NFCTH_MAX+1] = {
+ [NFCTH_NAME] = { .type = NLA_NUL_STRING,
+ .len = NF_CT_HELPER_NAME_LEN-1 },
+ [NFCTH_QUEUE_NUM] = { .type = NLA_U32, },
++ [NFCTH_PRIV_DATA_LEN] = { .type = NLA_U32, },
++ [NFCTH_STATUS] = { .type = NLA_U32, },
+ };
+
+ static const struct nfnl_callback nfnl_cthelper_cb[NFNL_MSG_CTHELPER_MAX] = {
+diff --git a/net/nfc/hci/core.c b/net/nfc/hci/core.c
+index 5a58f9f38095..291f24fef19a 100644
+--- a/net/nfc/hci/core.c
++++ b/net/nfc/hci/core.c
+@@ -193,13 +193,20 @@ exit:
+ void nfc_hci_cmd_received(struct nfc_hci_dev *hdev, u8 pipe, u8 cmd,
+ struct sk_buff *skb)
+ {
+- u8 gate = hdev->pipes[pipe].gate;
+ u8 status = NFC_HCI_ANY_OK;
+ struct hci_create_pipe_resp *create_info;
+ struct hci_delete_pipe_noti *delete_info;
+ struct hci_all_pipe_cleared_noti *cleared_info;
++ u8 gate;
+
+- pr_debug("from gate %x pipe %x cmd %x\n", gate, pipe, cmd);
++ pr_debug("from pipe %x cmd %x\n", pipe, cmd);
++
++ if (pipe >= NFC_HCI_MAX_PIPES) {
++ status = NFC_HCI_ANY_E_NOK;
++ goto exit;
++ }
++
++ gate = hdev->pipes[pipe].gate;
+
+ switch (cmd) {
+ case NFC_HCI_ADM_NOTIFY_PIPE_CREATED:
+@@ -387,8 +394,14 @@ void nfc_hci_event_received(struct nfc_hci_dev *hdev, u8 pipe, u8 event,
+ struct sk_buff *skb)
+ {
+ int r = 0;
+- u8 gate = hdev->pipes[pipe].gate;
++ u8 gate;
++
++ if (pipe >= NFC_HCI_MAX_PIPES) {
++ pr_err("Discarded event %x to invalid pipe %x\n", event, pipe);
++ goto exit;
++ }
+
++ gate = hdev->pipes[pipe].gate;
+ if (gate == NFC_HCI_INVALID_GATE) {
+ pr_err("Discarded event %x to unopened pipe %x\n", event, pipe);
+ goto exit;
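
Both NFC hunks validate the pipe index against NFC_HCI_MAX_PIPES before it is used to index hdev->pipes[], instead of reading the gate first and checking later. The validate-before-index pattern in miniature, with invented sizes:

#include <stdio.h>

#define MAX_PIPES 128
#define INVALID_GATE 0xff

struct pipe_info {
	unsigned char gate;
};

static struct pipe_info pipes[MAX_PIPES];

static int lookup_gate(unsigned int pipe, unsigned char *gate)
{
	/* Reject attacker-controlled indexes before touching the array. */
	if (pipe >= MAX_PIPES)
		return -1;

	*gate = pipes[pipe].gate;
	if (*gate == INVALID_GATE)
		return -1;

	return 0;
}

int main(void)
{
	unsigned char gate;

	pipes[3].gate = 0x05;
	if (lookup_gate(3, &gate) == 0)
		printf("pipe 3 -> gate %u\n", gate);
	if (lookup_gate(200, &gate) < 0)
		printf("pipe 200 rejected before any array access\n");
	return 0;
}
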
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index d3c8dd5dc817..e79a49fe61e8 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -62,7 +62,10 @@ static const struct nla_policy nfc_genl_policy[NFC_ATTR_MAX + 1] = {
+ [NFC_ATTR_LLC_SDP] = { .type = NLA_NESTED },
+ [NFC_ATTR_FIRMWARE_NAME] = { .type = NLA_STRING,
+ .len = NFC_FIRMWARE_NAME_MAXSIZE },
++ [NFC_ATTR_SE_INDEX] = { .type = NLA_U32 },
+ [NFC_ATTR_SE_APDU] = { .type = NLA_BINARY },
++ [NFC_ATTR_VENDOR_ID] = { .type = NLA_U32 },
++ [NFC_ATTR_VENDOR_SUBCMD] = { .type = NLA_U32 },
+ [NFC_ATTR_VENDOR_DATA] = { .type = NLA_BINARY },
+
+ };
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 7e7eba33bbdb..9f53d4ec0e37 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -697,6 +697,7 @@ static const struct nla_policy fq_policy[TCA_FQ_MAX + 1] = {
+ [TCA_FQ_FLOW_MAX_RATE] = { .type = NLA_U32 },
+ [TCA_FQ_BUCKETS_LOG] = { .type = NLA_U32 },
+ [TCA_FQ_FLOW_REFILL_DELAY] = { .type = NLA_U32 },
++ [TCA_FQ_ORPHAN_MASK] = { .type = NLA_U32 },
+ [TCA_FQ_LOW_RATE_THRESHOLD] = { .type = NLA_U32 },
+ };
+
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 9823bef65e5e..0048f90944dd 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -359,6 +359,8 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ [NL80211_ATTR_KEY_DEFAULT_TYPES] = { .type = NLA_NESTED },
+ [NL80211_ATTR_WOWLAN_TRIGGERS] = { .type = NLA_NESTED },
+ [NL80211_ATTR_STA_PLINK_STATE] = { .type = NLA_U8 },
++ [NL80211_ATTR_MEASUREMENT_DURATION] = { .type = NLA_U16 },
++ [NL80211_ATTR_MEASUREMENT_DURATION_MANDATORY] = { .type = NLA_FLAG },
+ [NL80211_ATTR_SCHED_SCAN_INTERVAL] = { .type = NLA_U32 },
+ [NL80211_ATTR_REKEY_DATA] = { .type = NLA_NESTED },
+ [NL80211_ATTR_SCAN_SUPP_RATES] = { .type = NLA_NESTED },
+@@ -407,6 +409,8 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ [NL80211_ATTR_MDID] = { .type = NLA_U16 },
+ [NL80211_ATTR_IE_RIC] = { .type = NLA_BINARY,
+ .len = IEEE80211_MAX_DATA_LEN },
++ [NL80211_ATTR_CRIT_PROT_ID] = { .type = NLA_U16 },
++ [NL80211_ATTR_MAX_CRIT_PROT_DURATION] = { .type = NLA_U16 },
+ [NL80211_ATTR_PEER_AID] = { .type = NLA_U16 },
+ [NL80211_ATTR_CH_SWITCH_COUNT] = { .type = NLA_U32 },
+ [NL80211_ATTR_CH_SWITCH_BLOCK_TX] = { .type = NLA_FLAG },
+@@ -432,6 +436,7 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ [NL80211_ATTR_USER_PRIO] = { .type = NLA_U8 },
+ [NL80211_ATTR_ADMITTED_TIME] = { .type = NLA_U16 },
+ [NL80211_ATTR_SMPS_MODE] = { .type = NLA_U8 },
++ [NL80211_ATTR_OPER_CLASS] = { .type = NLA_U8 },
+ [NL80211_ATTR_MAC_MASK] = { .len = ETH_ALEN },
+ [NL80211_ATTR_WIPHY_SELF_MANAGED_REG] = { .type = NLA_FLAG },
+ [NL80211_ATTR_NETNS_FD] = { .type = NLA_U32 },
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 0e66768427ba..6d5f3f737207 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -1730,7 +1730,7 @@ static void handle_channel_custom(struct wiphy *wiphy,
+ break;
+ }
+
+- if (IS_ERR(reg_rule)) {
++ if (IS_ERR_OR_NULL(reg_rule)) {
+ pr_debug("Disabling freq %d MHz as custom regd has no rule that fits it\n",
+ chan->center_freq);
+ if (wiphy->regulatory_flags & REGULATORY_WIPHY_SELF_MANAGED) {
2017-10-05 11:38 Mike Pagano
2017-09-27 16:38 Mike Pagano
2017-09-20 10:11 Mike Pagano
2017-09-14 11:39 Mike Pagano
2017-09-13 22:28 Mike Pagano
2017-09-13 16:25 Mike Pagano
2017-09-10 14:38 Mike Pagano
2017-09-07 22:43 Mike Pagano
2017-09-02 17:45 Mike Pagano
2017-08-30 10:06 Mike Pagano
2017-08-25 10:59 Mike Pagano
2017-08-16 22:29 Mike Pagano
2017-08-13 16:51 Mike Pagano
2017-08-11 17:41 Mike Pagano
2017-08-07 10:26 Mike Pagano
2017-05-14 13:31 Mike Pagano
2017-05-08 10:43 Mike Pagano
2017-05-03 17:45 Mike Pagano
2017-04-27 9:05 Alice Ferrazzi
2017-04-22 17:01 Mike Pagano
2017-04-18 10:23 Mike Pagano
2017-04-12 18:01 Mike Pagano
2017-04-08 13:53 Mike Pagano
2017-03-31 10:44 Mike Pagano
2017-03-30 18:15 Mike Pagano
2017-03-26 11:54 Mike Pagano
2017-03-23 18:38 Mike Pagano
2017-03-22 12:42 Mike Pagano
2017-03-18 14:34 Mike Pagano
2017-03-15 19:21 Mike Pagano
2017-03-12 12:22 Mike Pagano
2017-03-02 16:23 Mike Pagano
2017-02-26 20:38 Mike Pagano
2017-02-26 20:36 Mike Pagano
2017-02-23 20:34 Mike Pagano
2017-02-23 20:11 Mike Pagano
2017-02-18 20:37 Mike Pagano
2017-02-18 16:13 Alice Ferrazzi
2017-02-15 16:02 Alice Ferrazzi
2017-02-14 23:08 Mike Pagano
2017-02-09 11:11 Alice Ferrazzi
2017-02-04 11:34 Alice Ferrazzi
2017-02-01 13:07 Alice Ferrazzi
2017-01-29 23:08 Alice Ferrazzi
2017-01-26 8:51 Alice Ferrazzi
2017-01-20 11:33 Alice Ferrazzi
2017-01-15 22:59 Mike Pagano
2017-01-12 22:53 Mike Pagano
2017-01-09 12:41 Mike Pagano
2017-01-07 0:55 Mike Pagano
2017-01-06 23:09 Mike Pagano
2016-12-31 19:39 Mike Pagano
2016-12-11 23:20 Mike Pagano
Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the mbox file for this message (the "mbox" link on the archive
  page), import it into your mail client, and reply-to-all from there;
  see the sketch after this list.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1584705273.aa82544079131e7dd335b996c29c4a43c6b3545d.mpagano@gentoo \
    --to=mpagano@gentoo.org \
    --cc=gentoo-commits@lists.gentoo.org \
    --cc=gentoo-dev@lists.gentoo.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header via
  mailto: links, use the mailto: link from the archive page; a sketch
  of its likely shape follows this list.
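
For illustration, here are hedged sketches of the first and third
methods; the exact commands and URI depend on your mail client and on
the archive's actual links, so treat these as assumptions rather than
verbatim instructions. With mutt, for example, you could open the
saved mbox and reply-to-all from inside it:

  # path is hypothetical; use wherever you saved the "mbox" download
  mutt -f /path/to/thread.mbox

A mailto: URI carrying the In-Reply-To header (built from the
Message-ID shown in the git-send-email example above, using the
RFC 6068 header syntax) would look roughly like:

  mailto:mpagano@gentoo.org?In-Reply-To=%3C1584705273.aa82544079131e7dd335b996c29c4a43c6b3545d.mpagano@gentoo%3E&Cc=gentoo-commits@lists.gentoo.org
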
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see the mirroring instructions for how to
clone and mirror all data and code used for this inbox. A hedged
cloning sketch follows.
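
The archive behind a public inbox is itself a set of bare git
repositories (one per "epoch" under public-inbox v2), so mirroring is
an ordinary git clone. A minimal sketch, assuming a placeholder
archive URL (the mirroring page lists the real ones):

  # clone epoch 0 of the inbox; URL and target directory are placeholders
  git clone --mirror https://archive.example.org/gentoo-commits/0 \
    gentoo-commits/git/0.git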