From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 3271015827B
	for ; Thu, 21 Aug 2025 05:21:23 +0000 (UTC)
Received: from lists.gentoo.org (bobolink.gentoo.org [140.211.166.189])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange x25519)
	(No client certificate requested)
	(Authenticated sender: relay-lists.gentoo.org@gentoo.org)
	by smtp.gentoo.org (Postfix) with ESMTPSA id 1C869340DC8
	for ; Thu, 21 Aug 2025 05:21:23 +0000 (UTC)
Received: from bobolink.gentoo.org (localhost [127.0.0.1])
	by bobolink.gentoo.org (Postfix) with ESMTP id 8EF9B11055E;
	Thu, 21 Aug 2025 05:21:21 +0000 (UTC)
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange x25519)
	(No client certificate requested)
	by bobolink.gentoo.org (Postfix) with ESMTPS id 76D1111055E
	for ; Thu, 21 Aug 2025 05:21:21 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange x25519)
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 9BAF6340DC8
	for ; Thu, 21 Aug 2025 05:21:20 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 3C7843222
	for ; Thu, 21 Aug 2025 05:21:19 +0000 (UTC)
From: "Arisu Tachibana"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Arisu Tachibana"
Message-ID: <1755753665.7a9021e933f9ce4c1f18c8cc896cff7a8afb4c98.alicef@gentoo>
Subject: [gentoo-commits] proj/linux-patches:5.4 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1283_linux-5.4.284.patch
X-VCS-Directories: /
X-VCS-Committer: alicef
X-VCS-Committer-Name: Arisu Tachibana
X-VCS-Revision: 7a9021e933f9ce4c1f18c8cc896cff7a8afb4c98
X-VCS-Branch: 5.4
Date: Thu, 21 Aug 2025 05:21:19 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: b3fd0c0a-ea6d-4b06-b86e-805657740162
X-Archives-Hash: 4b6556f247c2a94aaf8ba8ec27a57a48

commit:     7a9021e933f9ce4c1f18c8cc896cff7a8afb4c98
Author:     Arisu Tachibana gentoo org>
AuthorDate: Thu Aug 21 05:21:05 2025 +0000
Commit:     Arisu Tachibana gentoo org>
CommitDate: Thu Aug 21 05:21:05 2025 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7a9021e9

Linux patch 5.4.284

Signed-off-by: Arisu Tachibana gentoo.org>

 0000_README              |    4 +
 1283_linux-5.4.284.patch | 4051 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4055 insertions(+)

diff --git a/0000_README b/0000_README
index dc1c2615..caac3acb 100644
--- a/0000_README
+++ b/0000_README
@@ -1175,6 +1175,10 @@ Patch:  1282_linux-5.4.283.patch
 From:   https://www.kernel.org
 Desc:   Linux 5.4.283
 
+Patch:  1283_linux-5.4.284.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.4.284
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1283_linux-5.4.284.patch b/1283_linux-5.4.284.patch
new file mode 100644
index 00000000..c4475f53
--- /dev/null
+++ b/1283_linux-5.4.284.patch
@@ -0,0 +1,4051 @@
+diff --git a/Makefile b/Makefile
+index 362e593d630fe..6a4587f51d008 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 4
+-SUBLEVEL = 283
++SUBLEVEL = 284
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index a45366c3909be..cbf97e648d19c 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -110,6 +110,18 @@ static inline u32 get_acpi_id_for_cpu(unsigned int cpu)
+ 	return acpi_cpu_get_madt_gicc(cpu)->uid;
+ }
+ 
++static inline int get_cpu_for_acpi_id(u32 uid)
++{
++	int cpu;
++
++	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
++		if (acpi_cpu_get_madt_gicc(cpu) &&
++		    uid == get_acpi_id_for_cpu(cpu))
++			return cpu;
++
++	return -EINVAL;
++}
++
+ static inline void arch_fix_phys_package_id(int num, u32 slot) { }
+ void __init acpi_init_cpus(void);
+ int apei_claim_sea(struct pt_regs *regs);
+diff --git a/arch/arm64/kernel/acpi_numa.c b/arch/arm64/kernel/acpi_numa.c
+index 048b75cadd2fd..c5feac18c238a 100644
+--- a/arch/arm64/kernel/acpi_numa.c
++++ b/arch/arm64/kernel/acpi_numa.c
+@@ -34,17 +34,6 @@ int __init acpi_numa_get_nid(unsigned int cpu)
+ 	return acpi_early_node_map[cpu];
+ }
+ 
+-static inline int get_cpu_for_acpi_id(u32 uid)
+-{
+-	int cpu;
+-
+-	for (cpu = 0; cpu < nr_cpu_ids; cpu++)
+-		if (uid == get_acpi_id_for_cpu(cpu))
+-			return cpu;
+-
+-	return -EINVAL;
+-}
+-
+ static int __init acpi_parse_gicc_pxm(union acpi_subtable_headers *header,
+ 				      const unsigned long end)
+ {
+diff --git a/arch/parisc/kernel/irq.c b/arch/parisc/kernel/irq.c
+index c4c06bcd04835..b4aa5af943ba5 100644
+--- a/arch/parisc/kernel/irq.c
++++ b/arch/parisc/kernel/irq.c
+@@ -520,7 +520,7 @@ void do_cpu_irq_mask(struct pt_regs *regs)
+ 
+ 	old_regs = set_irq_regs(regs);
+ 	local_irq_disable();
+-	irq_enter_rcu();
++	irq_enter();
+ 
+ 	eirr_val = mfctl(23) & cpu_eiem & per_cpu(local_ack_eiem, cpu);
+ 	if (!eirr_val)
+@@ -555,7 +555,7 @@ void do_cpu_irq_mask(struct pt_regs *regs)
+ #endif /* CONFIG_IRQSTACKS */
+ 
+  out:
+-	irq_exit_rcu();
++	irq_exit();
+ 	set_irq_regs(old_regs);
+ 	return;
+ 
+diff --git a/arch/um/drivers/line.c b/arch/um/drivers/line.c
+index d6a78c3548a55..de0ab2e455b03 100644
+--- a/arch/um/drivers/line.c
++++ b/arch/um/drivers/line.c
+@@ -383,6 +383,7 @@ int setup_one_line(struct line *lines, int n, char *init,
+ 			parse_chan_pair(NULL, line, n, opts, error_out);
+ 			err = 0;
+ 		}
++		*error_out = "configured as 'none'";
+ 	} else {
+ 		char *new = kstrdup(init, GFP_KERNEL);
+ 		if (!new) {
+@@ -406,6 +407,7 @@ int setup_one_line(struct line *lines, int n, char *init,
+ 		}
+ 	}
+ 	if (err) {
++		*error_out = "failed to parse channel pair";
+ 		line->init_str = NULL;
+ 		line->valid = 0;
+ 		kfree(new);
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 247f7c480e662..619ed4d84d50f 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -212,6 +212,7 @@ bool bio_integrity_prep(struct bio *bio)
+ 	unsigned int bytes, offset, i;
+ 	unsigned int intervals;
+ 	blk_status_t status;
++	gfp_t gfp = GFP_NOIO;
+ 
+ 	if (!bi)
+ 		return true;
+@@ -234,12 +235,20 @@ bool bio_integrity_prep(struct bio *bio)
+ 		if (!bi->profile->generate_fn ||
+ 		    !(bi->flags & BLK_INTEGRITY_GENERATE))
+ 			return true;
++
++		/*
++		 * Zero the memory allocated to not leak uninitialized kernel
++		 * memory to disk.  For PI this only affects the app tag, but
++		 * for non-integrity metadata it affects the entire metadata
++		 * buffer.
++		 */
++		gfp |= __GFP_ZERO;
+ 	}
+ 	intervals = bio_integrity_intervals(bi, bio_sectors(bio));
+ 
+ 	/* Allocate kernel buffer for protection data */
+ 	len = intervals * bi->tuple_size;
+-	buf = kmalloc(len, GFP_NOIO | q->bounce_gfp);
++	buf = kmalloc(len, gfp | q->bounce_gfp);
+ 	status = BLK_STS_RESOURCE;
+ 	if (unlikely(buf == NULL)) {
+ 		printk(KERN_ERR "could not allocate integrity buffer\n");
+diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
+index 2c4dda0787e84..c8338d6278578 100644
+--- a/drivers/acpi/acpi_processor.c
++++ b/drivers/acpi/acpi_processor.c
+@@ -388,7 +388,7 @@ static int acpi_processor_add(struct acpi_device *device,
+ 
+ 	result = acpi_processor_get_info(device);
+ 	if (result) /* Processor is not physically present or unavailable */
+-		return 0;
++		goto err_clear_driver_data;
+ 
+ 	BUG_ON(pr->id >= nr_cpu_ids);
+ 
+@@ -403,7 +403,7 @@ static int acpi_processor_add(struct acpi_device *device,
+ 			"BIOS reported wrong ACPI id %d for the processor\n",
+ 			pr->id);
+ 		/* Give up, but do not abort the namespace scan. */
+-		goto err;
++		goto err_clear_driver_data;
+ 	}
+ 	/*
+ 	 * processor_device_array is not cleared on errors to allow buggy BIOS
+@@ -415,12 +415,12 @@ static int acpi_processor_add(struct acpi_device *device,
+ 	dev = get_cpu_device(pr->id);
+ 	if (!dev) {
+ 		result = -ENODEV;
+-		goto err;
++		goto err_clear_per_cpu;
+ 	}
+ 
+ 	result = acpi_bind_one(dev, device);
+ 	if (result)
+-		goto err;
++		goto err_clear_per_cpu;
+ 
+ 	pr->dev = dev;
+ 
+@@ -431,10 +431,11 @@ static int acpi_processor_add(struct acpi_device *device,
+ 	dev_err(dev, "Processor driver could not be attached\n");
+ 	acpi_unbind_one(dev);
+ 
+- err:
+-	free_cpumask_var(pr->throttling.shared_cpu_map);
+-	device->driver_data = NULL;
++ err_clear_per_cpu:
+ 	per_cpu(processors, pr->id) = NULL;
++ err_clear_driver_data:
++	device->driver_data = NULL;
++	free_cpumask_var(pr->throttling.shared_cpu_map);
+  err_free_pr:
+ 	kfree(pr);
+ 	return result;
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 7a8cdecaf3483..30d71b928f0db 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -3534,6 +3534,7 @@ static void binder_transaction(struct binder_proc *proc,
+ 			 */
+ 			copy_size = object_offset - user_offset;
+ 			if (copy_size && (user_offset > object_offset ||
++					  object_offset > tr->data_size ||
+ 					  binder_alloc_copy_user_to_buffer(
+ 						&target_proc->alloc,
+ 						t->buffer, user_offset,
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 2ee0ee2b4752a..37b300b0857a9 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -6203,8 +6203,10 @@ struct ata_host *ata_host_alloc(struct device *dev, int max_ports)
+ 	}
+ 
+ 	dr = devres_alloc(ata_devres_release, 0, GFP_KERNEL);
+-	if (!dr)
++	if (!dr) {
++		kfree(host);
+ 		goto err_out;
++	}
+ 
+ 	devres_add(dev, dr);
+ 	dev_set_drvdata(dev, host);
+diff --git a/drivers/ata/pata_macio.c b/drivers/ata/pata_macio.c
+index 1bfd0154dad5d..a601768956e8e 100644
+--- a/drivers/ata/pata_macio.c
++++ b/drivers/ata/pata_macio.c
+@@ -540,7 +540,8 @@ static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
+ 
+ 		while (sg_len) {
+ 			/* table overflow should never happen */
+-			BUG_ON (pi++ >= MAX_DCMDS);
++			if (WARN_ON_ONCE(pi >= MAX_DCMDS))
++				return AC_ERR_SYSTEM;
+ 
+ 			len = (sg_len < MAX_DBDMA_SEG) ? sg_len : MAX_DBDMA_SEG;
+ 			table->command = cpu_to_le16(write ? OUTPUT_MORE: INPUT_MORE);
+@@ -552,11 +553,13 @@ static enum ata_completion_errors pata_macio_qc_prep(struct ata_queued_cmd *qc)
+ 			addr += len;
+ 			sg_len -= len;
+ 			++table;
++			++pi;
+ 		}
+ 	}
+ 
+ 	/* Should never happen according to Tejun */
+-	BUG_ON(!pi);
++	if (WARN_ON_ONCE(!pi))
++		return AC_ERR_SYSTEM;
+ 
+ 	/* Convert the last command to an input/output */
+ 	table--;
+diff --git a/drivers/base/devres.c b/drivers/base/devres.c
+index 5a84bafae3288..be87133d2cf1e 100644
+--- a/drivers/base/devres.c
++++ b/drivers/base/devres.c
+@@ -561,6 +561,7 @@ void * devres_open_group(struct device *dev, void *id, gfp_t gfp)
+ 	grp->id = grp;
+ 	if (id)
+ 		grp->id = id;
++	grp->color = 0;
+ 
+ 	spin_lock_irqsave(&dev->devres_lock, flags);
+ 	add_dr(dev, &grp->node[0]);
+diff --git a/drivers/clk/hisilicon/clk-hi6220.c b/drivers/clk/hisilicon/clk-hi6220.c
+index b2c5b6bbb1c13..e7cdf72d4b062 100644
+--- a/drivers/clk/hisilicon/clk-hi6220.c
++++ b/drivers/clk/hisilicon/clk-hi6220.c
+@@ -86,7 +86,8 @@ static void __init hi6220_clk_ao_init(struct device_node *np)
+ 	hisi_clk_register_gate_sep(hi6220_separated_gate_clks_ao,
+ 				ARRAY_SIZE(hi6220_separated_gate_clks_ao), clk_data_ao);
+ }
+-CLK_OF_DECLARE(hi6220_clk_ao, "hisilicon,hi6220-aoctrl", hi6220_clk_ao_init);
++/* Allow reset driver to probe as well */
++CLK_OF_DECLARE_DRIVER(hi6220_clk_ao, "hisilicon,hi6220-aoctrl", hi6220_clk_ao_init);
+ 
+ 
+ /* clocks in sysctrl */
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index a69f53e435ed5..6bf33ba6493c8 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -38,7 +38,7 @@
+ 
+ #define PLL_USER_CTL(p)		((p)->offset + (p)->regs[PLL_OFF_USER_CTL])
+ # define PLL_POST_DIV_SHIFT	8
+-# define PLL_POST_DIV_MASK(p)	GENMASK((p)->width, 0)
++# define PLL_POST_DIV_MASK(p)	GENMASK((p)->width - 1, 0)
+ # define PLL_ALPHA_EN		BIT(24)
+ # define PLL_ALPHA_MODE		BIT(25)
+ # define PLL_VCO_SHIFT		20
+@@ -1257,8 +1257,8 @@ clk_trion_pll_postdiv_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	}
+ 
+ 	return regmap_update_bits(regmap, PLL_USER_CTL(pll),
+-				  PLL_POST_DIV_MASK(pll) << PLL_POST_DIV_SHIFT,
+-				  val << PLL_POST_DIV_SHIFT);
++				  PLL_POST_DIV_MASK(pll) << pll->post_div_shift,
++				  val << pll->post_div_shift);
+ }
+ 
+ const struct clk_ops clk_trion_pll_postdiv_ops = {
+diff --git a/drivers/clocksource/timer-imx-tpm.c b/drivers/clocksource/timer-imx-tpm.c
+index c1d52d5264c27..75292e487d430 100644
+--- a/drivers/clocksource/timer-imx-tpm.c
++++ b/drivers/clocksource/timer-imx-tpm.c
+@@ -83,20 +83,28 @@ static u64 notrace tpm_read_sched_clock(void)
+ static int tpm_set_next_event(unsigned long delta,
+ 				struct clock_event_device *evt)
+ {
+-	unsigned long next, now;
++	unsigned long next, prev, now;
+ 
+-	next = tpm_read_counter();
+-	next += delta;
++	prev = tpm_read_counter();
++	next = prev + delta;
+ 	writel(next, timer_base + TPM_C0V);
+ 	now = tpm_read_counter();
+ 
++	/*
++	 * Need to wait CNT increase at least 1 cycle to make sure
++	 * the C0V has been updated into HW.
++	 */
++	if ((next & 0xffffffff) != readl(timer_base + TPM_C0V))
++		while (now == tpm_read_counter())
++			;
++
+ 	/*
+ 	 * NOTE: We observed in a very small probability, the bus fabric
+ 	 * contention between GPU and A7 may results a few cycles delay
+ 	 * of writing CNT registers which may cause the min_delta event got
+ 	 * missed, so we need add a ETIME check here in case it happened.
+ 	 */
+-	return (int)(next - now) <= 0 ? -ETIME : 0;
++	return (now - prev) >= delta ? -ETIME : 0;
+ }
+ 
+ static int tpm_set_state_oneshot(struct clock_event_device *evt)
+diff --git a/drivers/clocksource/timer-of.c b/drivers/clocksource/timer-of.c
+index bf2a6f64ba0c5..377afa0bdcaa6 100644
+--- a/drivers/clocksource/timer-of.c
++++ b/drivers/clocksource/timer-of.c
+@@ -25,10 +25,7 @@ static __init void timer_of_irq_exit(struct of_timer_irq *of_irq)
+ 
+ 	struct clock_event_device *clkevt = &to->clkevt;
+ 
+-	if (of_irq->percpu)
+-		free_percpu_irq(of_irq->irq, clkevt);
+-	else
+-		free_irq(of_irq->irq, clkevt);
++	free_irq(of_irq->irq, clkevt);
+ }
+ 
+ /**
+@@ -42,9 +39,6 @@ static __init void timer_of_irq_exit(struct of_timer_irq *of_irq)
+  * - Get interrupt number by name
+  * - Get interrupt number by index
+  *
+- * When the interrupt is per CPU, 'request_percpu_irq()' is called,
+- * otherwise 'request_irq()' is used.
+- *
+  * Returns 0 on success, < 0 otherwise
+  */
+ static __init int timer_of_irq_init(struct device_node *np,
+@@ -69,12 +63,9 @@ static __init int timer_of_irq_init(struct device_node *np,
+ 		return -EINVAL;
+ 	}
+ 
+-	ret = of_irq->percpu ?
+-		request_percpu_irq(of_irq->irq, of_irq->handler,
+-				   np->full_name, clkevt) :
+-		request_irq(of_irq->irq, of_irq->handler,
+-			    of_irq->flags ? of_irq->flags : IRQF_TIMER,
+-			    np->full_name, clkevt);
++	ret = request_irq(of_irq->irq, of_irq->handler,
++			  of_irq->flags ? of_irq->flags : IRQF_TIMER,
++			  np->full_name, clkevt);
+ 	if (ret) {
+ 		pr_err("Failed to request irq %d for %pOF\n", of_irq->irq, np);
+ 		return ret;
+diff --git a/drivers/clocksource/timer-of.h b/drivers/clocksource/timer-of.h
+index a5478f3e8589d..01a2c6b7db065 100644
+--- a/drivers/clocksource/timer-of.h
++++ b/drivers/clocksource/timer-of.h
+@@ -11,7 +11,6 @@
+ struct of_timer_irq {
+ 	int		irq;
+ 	int		index;
+-	int		percpu;
+ 	const char	*name;
+ 	unsigned long	flags;
+ 	irq_handler_t	handler;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
+index a4d65973bf7cf..80771b1480fff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_afmt.c
+@@ -100,6 +100,7 @@ struct amdgpu_afmt_acr amdgpu_afmt_acr(uint32_t clock)
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_32khz, &res.n_32khz, 32000);
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_44_1khz, &res.n_44_1khz, 44100);
+ 	amdgpu_afmt_calc_cts(clock, &res.cts_48khz, &res.n_48khz, 48000);
++	res.clock = clock;
+ 
+ 	return res;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+index c687432da4262..89930a38b63eb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atombios.c
+@@ -1626,6 +1626,8 @@ int amdgpu_atombios_init_mc_reg_table(struct amdgpu_device *adev,
+ 					(u32)le32_to_cpu(*((u32 *)reg_data + j));
+ 				j++;
+ 			} else if ((reg_table->mc_reg_address[i].pre_reg_data & LOW_NIBBLE_MASK) == DATA_EQU_PREV) {
++				if (i == 0)
++					continue;
+ 				reg_table->mc_reg_table_entry[num_ranges].mc_data[i] =
+ 					reg_table->mc_reg_table_entry[num_ranges].mc_data[i - 1];
+ 			}
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+index 031b094607bdd..3ce4447052b9b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cgs.c
+@@ -213,6 +213,9 @@ static int amdgpu_cgs_get_firmware_info(struct cgs_device *cgs_device,
+ 		struct amdgpu_firmware_info *ucode;
+ 
+ 		id = fw_type_convert(cgs_device, type);
++		if (id >= AMDGPU_UCODE_ID_MAXIMUM)
++			return -EINVAL;
++
+ 		ucode = &adev->firmware.ucode[id];
+ 		if (ucode->fw == NULL)
+ 			return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+index e5c83e164d82a..8fafda87d4ce8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c
+@@ -437,8 +437,9 @@ static ssize_t amdgpu_debugfs_ring_read(struct file *f, char __user *buf,
+ 					size_t size, loff_t *pos)
+ {
+ 	struct amdgpu_ring *ring = file_inode(f)->i_private;
+-	int r, i;
+ 	uint32_t value, result, early[3];
++	loff_t i;
++	int r;
+ 
+ 	if (*pos & 3 || size & 3)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+index d54ceebd346b7..30c70b3ab17f1 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.h
+@@ -42,8 +42,6 @@
+ #define CRAT_OEMTABLEID_LENGTH	8
+ #define CRAT_RESERVED_LENGTH	6
+ 
+-#define CRAT_OEMID_64BIT_MASK ((1ULL << (CRAT_OEMID_LENGTH * 8)) - 1)
+-
+ /* Compute Unit flags */
+ #define COMPUTE_UNIT_CPU	(1 << 0)  /* Create Virtual CRAT for CPU */
+ #define COMPUTE_UNIT_GPU	(1 << 1)  /* Create Virtual CRAT for GPU */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index a49e2ab071d68..de892ee147dea 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -883,8 +883,7 @@ static void kfd_update_system_properties(void)
+ 	dev = list_last_entry(&topology_device_list,
+ 			struct kfd_topology_device, list);
+ 	if (dev) {
+-		sys_props.platform_id =
+-			(*((uint64_t *)dev->oem_id)) & CRAT_OEMID_64BIT_MASK;
++		sys_props.platform_id = dev->oem_id64;
+ 		sys_props.platform_oem = *((uint64_t *)dev->oem_table_id);
+ 		sys_props.platform_rev = dev->oem_revision;
+ 	}
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+index d4718d58d0f24..7230b5b5bfe5a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.h
+@@ -172,7 +172,10 @@ struct kfd_topology_device {
+ 	struct attribute	attr_gpuid;
+ 	struct attribute	attr_name;
+ 	struct attribute	attr_props;
+-	uint8_t			oem_id[CRAT_OEMID_LENGTH];
++	union {
++		uint8_t		oem_id[CRAT_OEMID_LENGTH];
++		uint64_t	oem_id64;
++	};
+ 	uint8_t			oem_table_id[CRAT_OEMTABLEID_LENGTH];
+ 	uint32_t		oem_revision;
+ };
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 3bfc4aa328c6f..869b38908b28d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -2263,7 +2263,10 @@ static int amdgpu_dm_initialize_drm_device(struct amdgpu_device *adev)
+ 
+ 	/* There is one primary plane per CRTC */
+ 	primary_planes = dm->dc->caps.max_streams;
+-	ASSERT(primary_planes <= AMDGPU_MAX_PLANES);
++	if (primary_planes > AMDGPU_MAX_PLANES) {
++		DRM_ERROR("DM: Plane nums out of 6 planes\n");
++		return -EINVAL;
++	}
+ 
+ 	/*
+ 	 * Initialize primary planes, implicit planes for legacy IOCTLS.
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 9f301f8575a54..fec3ca955b264 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -453,7 +453,8 @@ void build_watermark_ranges(struct clk_bw_params *bw_params, struct pp_smu_wm_ra
+ 			ranges->reader_wm_sets[num_valid_sets].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
+ 
+ 			/* Modify previous watermark range to cover up to max */
+-			ranges->reader_wm_sets[num_valid_sets - 1].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
++			if (num_valid_sets > 0)
++				ranges->reader_wm_sets[num_valid_sets - 1].max_fill_clk_mhz = PP_SMU_WM_SET_RANGE_CLK_UNCONSTRAINED_MAX;
+ 		}
+ 		num_valid_sets++;
+ 	}
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
+index cd8bc92ce3ba9..4058a4fd6b224 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dwb_scl.c
+@@ -690,6 +690,9 @@ static void wbscl_set_scaler_filter(
+ 	int pair;
+ 	uint16_t odd_coef, even_coef;
+ 
++	if (!filter)
++		return;
++
+ 	for (phase = 0; phase < (NUM_PHASES / 2 + 1); phase++) {
+ 		for (pair = 0; pair < tap_pairs; pair++) {
+ 			even_coef = filter[phase * taps + 2 * pair];
+diff --git a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+index 0be817f8cae6b..a61cec470d28c 100644
+--- a/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
++++ b/drivers/gpu/drm/amd/display/dc/gpio/gpio_service.c
+@@ -58,7 +58,7 @@ struct gpio_service *dal_gpio_service_create(
+ 	struct dc_context *ctx)
+ {
+ 	struct gpio_service *service;
+-	uint32_t index_of_id;
++	int32_t index_of_id;
+ 
+ 	service = kzalloc(sizeof(struct gpio_service), GFP_KERNEL);
+ 
+@@ -114,7 +114,7 @@ struct gpio_service *dal_gpio_service_create(
+ 	return service;
+ 
+ failure_2:
+-	while (index_of_id) {
++	while (index_of_id > 0) {
+ 		--index_of_id;
+ 		kfree(service->busyness[index_of_id]);
+ 	}
+@@ -242,6 +242,9 @@ static bool is_pin_busy(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return false;
++
+ 	return service->busyness[id][en];
+ }
+ 
+@@ -250,6 +253,9 @@ static void set_pin_busy(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return;
++
+ 	service->busyness[id][en] = true;
+ }
+ 
+@@ -258,6 +264,9 @@ static void set_pin_free(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
++	if (id == GPIO_ID_UNKNOWN)
++		return;
++
+ 	service->busyness[id][en] = false;
+ }
+ 
+@@ -266,7 +275,7 @@ enum gpio_result dal_gpio_service_lock(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
+-	if (!service->busyness[id]) {
++	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
+ 		ASSERT_CRITICAL(false);
+ 		return GPIO_RESULT_OPEN_FAILED;
+ 	}
+@@ -280,7 +289,7 @@ enum gpio_result dal_gpio_service_unlock(
+ 	enum gpio_id id,
+ 	uint32_t en)
+ {
+-	if (!service->busyness[id]) {
++	if (id != GPIO_ID_UNKNOWN && !service->busyness[id]) {
+ 		ASSERT_CRITICAL(false);
+ 		return GPIO_RESULT_OPEN_FAILED;
+ 	}
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 43de9dfcba19a..f1091cb87de0c 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -318,6 +318,12 @@ static const struct dmi_system_id orientation_data[] = {
+ 		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
+ 		},
+ 		.driver_data = (void *)&lcd1600x2560_leftside_up,
++	}, {	/* OrangePi Neo */
++		.matches = {
++		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "OrangePi"),
++		  DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "NEO-01"),
++		},
++		.driver_data = (void *)&lcd1200x1920_rightside_up,
+ 	}, {	/* Samsung GalaxyBook 10.6 */
+ 		.matches = {
+ 		  DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
+index b3fd6ff665dae..8f20668a860e4 100644
+--- a/drivers/gpu/drm/i915/i915_sw_fence.c
++++ b/drivers/gpu/drm/i915/i915_sw_fence.c
+@@ -38,7 +38,7 @@ static inline void debug_fence_init(struct i915_sw_fence *fence)
+ 	debug_object_init(fence, &i915_sw_fence_debug_descr);
+ }
+ 
+-static inline void debug_fence_init_onstack(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence)
+ {
+ 	debug_object_init_on_stack(fence, &i915_sw_fence_debug_descr);
+ }
+@@ -64,7 +64,7 @@ static inline void debug_fence_destroy(struct i915_sw_fence *fence)
+ 	debug_object_destroy(fence, &i915_sw_fence_debug_descr);
+ }
+ 
+-static inline void debug_fence_free(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence)
+ {
+ 	debug_object_free(fence, &i915_sw_fence_debug_descr);
+ 	smp_wmb(); /* flush the change in state before reallocation */
+@@ -81,7 +81,7 @@ static inline void debug_fence_init(struct i915_sw_fence *fence)
+ {
+ }
+ 
+-static inline void debug_fence_init_onstack(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence)
+ {
+ }
+ 
+@@ -102,7 +102,7 @@ static inline void debug_fence_destroy(struct i915_sw_fence *fence)
+ {
+ }
+ 
+-static inline void debug_fence_free(struct i915_sw_fence *fence)
++static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence)
+ {
+ }
+ 
+diff --git a/drivers/hid/hid-cougar.c b/drivers/hid/hid-cougar.c
+index 4ff3bc1d25e23..5294299afb263 100644
+--- a/drivers/hid/hid-cougar.c
++++ b/drivers/hid/hid-cougar.c
+@@ -106,7 +106,7 @@ static void cougar_fix_g6_mapping(void)
+ static __u8 *cougar_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ 				 unsigned int *rsize)
+ {
+-	if (rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
++	if (*rsize >= 117 && rdesc[2] == 0x09 && rdesc[3] == 0x02 &&
+ 	    (rdesc[115] | rdesc[116] << 8) >= HID_MAX_USAGES) {
+ 		hid_info(hdev,
+ 			 "usage count exceeds max: fixing up report descriptor\n");
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index a1cfa75968535..01a2eeb2ec961 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1908,6 +1908,7 @@ void vmbus_device_unregister(struct hv_device *device_obj)
+ 	 */
+ 	device_unregister(&device_obj->device);
+ }
++EXPORT_SYMBOL_GPL(vmbus_device_unregister);
+ 
+ 
+ /*
+diff --git a/drivers/hwmon/adc128d818.c b/drivers/hwmon/adc128d818.c
+index f9edec195c352..08d8bd72ec0e1 100644
+--- a/drivers/hwmon/adc128d818.c
++++ b/drivers/hwmon/adc128d818.c
+@@ -176,7 +176,7 @@ static ssize_t adc128_in_store(struct device *dev,
+ 
+ 	mutex_lock(&data->update_lock);
+ 	/* 10 mV LSB on limit registers */
+-	regval = clamp_val(DIV_ROUND_CLOSEST(val, 10), 0, 255);
++	regval = DIV_ROUND_CLOSEST(clamp_val(val, 0, 2550), 10);
+ 	data->in[index][nr] = regval << 4;
+ 	reg = index == 1 ? ADC128_REG_IN_MIN(nr) : ADC128_REG_IN_MAX(nr);
+ 	i2c_smbus_write_byte_data(data->client, reg, regval);
+@@ -214,7 +214,7 @@ static ssize_t adc128_temp_store(struct device *dev,
+ 		return err;
+ 
+ 	mutex_lock(&data->update_lock);
+-	regval = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -128, 127);
++	regval = DIV_ROUND_CLOSEST(clamp_val(val, -128000, 127000), 1000);
+ 	data->temp[index] = regval << 1;
+ 	i2c_smbus_write_byte_data(data->client,
+ 				  index == 1 ? ADC128_REG_TEMP_MAX
+diff --git a/drivers/hwmon/lm95234.c b/drivers/hwmon/lm95234.c
+index 8a2a2a4904969..c49aaf0d710fa 100644
+--- a/drivers/hwmon/lm95234.c
++++ b/drivers/hwmon/lm95234.c
+@@ -301,7 +301,8 @@ static ssize_t tcrit2_store(struct device *dev, struct device_attribute *attr,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, index ? 255 : 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, (index ? 255 : 127) * 1000),
++				1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->tcrit2[index] = val;
+@@ -350,7 +351,7 @@ static ssize_t tcrit1_store(struct device *dev, struct device_attribute *attr,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->tcrit1[index] = val;
+@@ -391,7 +392,7 @@ static ssize_t tcrit1_hyst_store(struct device *dev,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	val = DIV_ROUND_CLOSEST(val, 1000);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -255000, 255000), 1000);
+ 	val = clamp_val((int)data->tcrit1[index] - val, 0, 31);
+ 
+ 	mutex_lock(&data->update_lock);
+@@ -431,7 +432,7 @@ static ssize_t offset_store(struct device *dev, struct device_attribute *attr,
+ 		return ret;
+ 
+ 	/* Accuracy is 1/2 degrees C */
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 500), -128, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -64000, 63500), 500);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->toffset[index] = val;
+diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c
+index ba9b96973e808..da6bbfca15fee 100644
+--- a/drivers/hwmon/nct6775.c
++++ b/drivers/hwmon/nct6775.c
+@@ -2374,7 +2374,7 @@ store_temp_offset(struct device *dev, struct device_attribute *attr,
+ 	if (err < 0)
+ 		return err;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), -128, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, -128000, 127000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->temp_offset[nr] = val;
+diff --git a/drivers/hwmon/w83627ehf.c b/drivers/hwmon/w83627ehf.c
+index eb171d15ac489..e4e5bb9115584 100644
+--- a/drivers/hwmon/w83627ehf.c
++++ b/drivers/hwmon/w83627ehf.c
+@@ -1506,7 +1506,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr,
+ 	if (err < 0)
+ 		return err;
+ 
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 127);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 127000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	data->target_temp[nr] = val;
+@@ -1532,7 +1532,7 @@ store_tolerance(struct device *dev, struct device_attribute *attr,
+ 		return err;
+ 
+ 	/* Limit the temp to 0C - 15C */
+-	val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 15);
++	val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 15000), 1000);
+ 
+ 	mutex_lock(&data->update_lock);
+ 	if (sio_data->kind == nct6775 || sio_data->kind == nct6776) {
+diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+index bea4a75e92f13..19acca31af508 100644
+--- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
++++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
+@@ -158,7 +158,7 @@ struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
+ 
+ 	ret = dma_get_slave_caps(chan, &caps);
+ 	if (ret < 0)
+-		goto err_free;
++		goto err_release;
+ 
+ 	/* Needs to be aligned to the maximum of the minimums */
+ 	if (caps.src_addr_widths)
+@@ -183,6 +183,8 @@ struct iio_buffer *iio_dmaengine_buffer_alloc(struct device *dev,
+ 
+ 	return &dmaengine_buffer->queue.buffer;
+ 
++err_release:
++	dma_release_channel(chan);
+ err_free:
+ 	kfree(dmaengine_buffer);
+ 	return ERR_PTR(ret);
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index d00f3045557cf..730c821b52214 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -637,17 +637,17 @@ static int iio_convert_raw_to_processed_unlocked(struct iio_channel *chan,
+ 		break;
+ 	case IIO_VAL_INT_PLUS_MICRO:
+ 		if (scale_val2 < 0)
+-			*processed = -raw64 * scale_val;
++			*processed = -raw64 * scale_val * scale;
+ 		else
+-			*processed = raw64 * scale_val;
++			*processed = raw64 * scale_val * scale;
+ 		*processed += div_s64(raw64 * (s64)scale_val2 * scale,
+ 				      1000000LL);
+ 		break;
+ 	case IIO_VAL_INT_PLUS_NANO:
+ 		if (scale_val2 < 0)
+-			*processed = -raw64 * scale_val;
++			*processed = -raw64 * scale_val * scale;
+ 		else
+-			*processed = raw64 * scale_val;
++			*processed = raw64 * scale_val * scale;
+ 		*processed += div_s64(raw64 * (s64)scale_val2 * scale,
+ 				      1000000000LL);
+ 		break;
+diff --git a/drivers/input/misc/uinput.c b/drivers/input/misc/uinput.c
+index 002654ec70404..e707da0b1fe2e 100644
+--- a/drivers/input/misc/uinput.c
++++ b/drivers/input/misc/uinput.c
+@@ -416,6 +416,20 @@ static int uinput_validate_absinfo(struct input_dev *dev, unsigned int code,
+ 		return -EINVAL;
+ 	}
+ 
++	/*
++	 * Limit number of contacts to a reasonable value (100). This
++	 * ensures that we need less than 2 pages for struct input_mt
++	 * (we are not using in-kernel slot assignment so not going to
++	 * allocate memory for the "red" table), and we should have no
++	 * trouble getting this much memory.
++	 */
++	if (code == ABS_MT_SLOT && max > 99) {
++		printk(KERN_DEBUG
++		       "%s: unreasonably large number of slots requested: %d\n",
++		       UINPUT_NAME, max);
++		return -EINVAL;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
+index 36900d65386fc..a4805d17317d4 100644
+--- a/drivers/iommu/dmar.c
++++ b/drivers/iommu/dmar.c
+@@ -1294,7 +1294,7 @@ int qi_submit_sync(struct qi_desc *desc, struct intel_iommu *iommu)
+ 	 */
+ 	writel(qi->free_head << shift, iommu->reg + DMAR_IQT_REG);
+ 
+-	while (qi->desc_status[wait_index] != QI_DONE) {
++	while (READ_ONCE(qi->desc_status[wait_index]) != QI_DONE) {
+ 		/*
+ 		 * We will leave the interrupts disabled, to prevent interrupt
+ 		 * context to queue another cmd while a cmd is already submitted
+diff --git a/drivers/irqchip/irq-armada-370-xp.c b/drivers/irqchip/irq-armada-370-xp.c
+index 0fd428db3aa4d..73c386aba3683 100644
+--- a/drivers/irqchip/irq-armada-370-xp.c
++++ b/drivers/irqchip/irq-armada-370-xp.c
+@@ -346,6 +346,10 @@ static struct irq_chip armada_370_xp_irq_chip = {
+ static int armada_370_xp_mpic_irq_map(struct irq_domain *h,
+ 				      unsigned int virq, irq_hw_number_t hw)
+ {
++	/* IRQs 0 and 1 cannot be mapped, they are handled internally */
++	if (hw <= 1)
++		return -EINVAL;
++
+ 	armada_370_xp_irq_mask(irq_get_irq_data(virq));
+ 	if (!is_percpu_irq(hw))
+ 		writel(hw, per_cpu_int_base +
+diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
+index e88e75c22b6a0..11efd6c6b111b 100644
+--- a/drivers/irqchip/irq-gic-v2m.c
++++ b/drivers/irqchip/irq-gic-v2m.c
+@@ -441,12 +441,12 @@ static int __init gicv2m_of_init(struct fwnode_handle *parent_handle,
+ 
+ 		ret = gicv2m_init_one(&child->fwnode, spi_start, nr_spis,
+ 				      &res, 0);
+-		if (ret) {
+-			of_node_put(child);
++		if (ret)
+ 			break;
+-		}
+ 	}
+ 
++	if (ret && child)
++		of_node_put(child);
+ 	if (!ret)
+ 		ret = gicv2m_allocate_domains(parent);
+ 	if (ret)
+diff --git a/drivers/md/dm-init.c b/drivers/md/dm-init.c
+index b869316d37229..4a8bbe0391a26 100644
+--- a/drivers/md/dm-init.c
++++ b/drivers/md/dm-init.c
+@@ -207,8 +207,10 @@ static char __init *dm_parse_device_entry(struct dm_device *dev, char *str)
+ 	strscpy(dev->dmi.uuid, field[1], sizeof(dev->dmi.uuid));
+ 	/* minor */
+ 	if (strlen(field[2])) {
+-		if (kstrtoull(field[2], 0, &dev->dmi.dev))
++		if (kstrtoull(field[2], 0, &dev->dmi.dev) ||
++		    dev->dmi.dev >= (1 << MINORBITS))
+ 			return ERR_PTR(-EINVAL);
++		dev->dmi.dev = huge_encode_dev((dev_t)dev->dmi.dev);
+ 		dev->dmi.flags |= DM_PERSISTENT_DEV_FLAG;
+ 	}
+ 	/* flags */
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index 2483641799dfb..2db9229d5601a 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -431,8 +431,11 @@ static int camss_of_parse_endpoint_node(struct device *dev,
+ 	struct v4l2_fwnode_bus_mipi_csi2 *mipi_csi2;
+ 	struct v4l2_fwnode_endpoint vep = { { 0 } };
+ 	unsigned int i;
++	int ret;
+ 
+-	v4l2_fwnode_endpoint_parse(of_fwnode_handle(node), &vep);
++	ret = v4l2_fwnode_endpoint_parse(of_fwnode_handle(node), &vep);
++	if (ret)
++		return ret;
+ 
+ 	csd->interface.csiphy_id = vep.base.port;
+ 
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 0caa57a6782aa..6d1a7e02da51f
100644 +--- a/drivers/media/usb/uvc/uvc_driver.c ++++ b/drivers/media/usb/uvc/uvc_driver.c +@@ -887,16 +887,26 @@ static int uvc_parse_streaming(struct uvc_device *dev, + goto error; + } + +- size = nformats * sizeof(*format) + nframes * sizeof(*frame) ++ /* ++ * Allocate memory for the formats, the frames and the intervals, ++ * plus any required padding to guarantee that everything has the ++ * correct alignment. ++ */ ++ size = nformats * sizeof(*format); ++ size = ALIGN(size, __alignof__(*frame)) + nframes * sizeof(*frame); ++ size = ALIGN(size, __alignof__(*interval)) + + nintervals * sizeof(*interval); ++ + format = kzalloc(size, GFP_KERNEL); +- if (format == NULL) { ++ if (!format) { + ret = -ENOMEM; + goto error; + } + +- frame = (struct uvc_frame *)&format[nformats]; +- interval = (u32 *)&frame[nframes]; ++ frame = (void *)format + nformats * sizeof(*format); ++ frame = PTR_ALIGN(frame, __alignof__(*frame)); ++ interval = (void *)frame + nframes * sizeof(*frame); ++ interval = PTR_ALIGN(interval, __alignof__(*interval)); + + streaming->format = format; + streaming->nformats = nformats; +diff --git a/drivers/misc/vmw_vmci/vmci_resource.c b/drivers/misc/vmw_vmci/vmci_resource.c +index 692daa9eff341..19c9d2cdd277b 100644 +--- a/drivers/misc/vmw_vmci/vmci_resource.c ++++ b/drivers/misc/vmw_vmci/vmci_resource.c +@@ -144,7 +144,8 @@ void vmci_resource_remove(struct vmci_resource *resource) + spin_lock(&vmci_resource_table.lock); + + hlist_for_each_entry(r, &vmci_resource_table.entries[idx], node) { +- if (vmci_handle_is_equal(r->handle, resource->handle)) { ++ if (vmci_handle_is_equal(r->handle, resource->handle) && ++ resource->type == r->type) { + hlist_del_init_rcu(&r->node); + break; + } +diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c +index 78e20327828ad..7c5ce9c96841a 100644 +--- a/drivers/mmc/host/dw_mmc.c ++++ b/drivers/mmc/host/dw_mmc.c +@@ -2831,8 +2831,8 @@ static int dw_mci_init_slot(struct dw_mci *host) + if (host->use_dma == 
TRANS_MODE_IDMAC) { + mmc->max_segs = host->ring_size; + mmc->max_blk_size = 65535; +- mmc->max_seg_size = 0x1000; +- mmc->max_req_size = mmc->max_seg_size * host->ring_size; ++ mmc->max_req_size = DW_MCI_DESC_DATA_LENGTH * host->ring_size; ++ mmc->max_seg_size = mmc->max_req_size; + mmc->max_blk_count = mmc->max_req_size / 512; + } else if (host->use_dma == TRANS_MODE_EDMAC) { + mmc->max_segs = 64; +diff --git a/drivers/mmc/host/sdhci-of-aspeed.c b/drivers/mmc/host/sdhci-of-aspeed.c +index 47ddded570001..792806a925ab9 100644 +--- a/drivers/mmc/host/sdhci-of-aspeed.c ++++ b/drivers/mmc/host/sdhci-of-aspeed.c +@@ -224,6 +224,7 @@ static const struct of_device_id aspeed_sdhci_of_match[] = { + { .compatible = "aspeed,ast2600-sdhci", }, + { } + }; ++MODULE_DEVICE_TABLE(of, aspeed_sdhci_of_match); + + static struct platform_driver aspeed_sdhci_driver = { + .driver = { +diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c +index a1dd82d25ce3c..b95e7920f2731 100644 +--- a/drivers/net/dsa/vitesse-vsc73xx-core.c ++++ b/drivers/net/dsa/vitesse-vsc73xx-core.c +@@ -34,7 +34,7 @@ + #define VSC73XX_BLOCK_ANALYZER 0x2 /* Only subblock 0 */ + #define VSC73XX_BLOCK_MII 0x3 /* Subblocks 0 and 1 */ + #define VSC73XX_BLOCK_MEMINIT 0x3 /* Only subblock 2 */ +-#define VSC73XX_BLOCK_CAPTURE 0x4 /* Only subblock 2 */ ++#define VSC73XX_BLOCK_CAPTURE 0x4 /* Subblocks 0-4, 6, 7 */ + #define VSC73XX_BLOCK_ARBITER 0x5 /* Only subblock 0 */ + #define VSC73XX_BLOCK_SYSTEM 0x7 /* Only subblock 0 */ + +@@ -360,13 +360,19 @@ int vsc73xx_is_addr_valid(u8 block, u8 subblock) + break; + + case VSC73XX_BLOCK_MII: +- case VSC73XX_BLOCK_CAPTURE: + case VSC73XX_BLOCK_ARBITER: + switch (subblock) { + case 0 ... 1: + return 1; + } + break; ++ case VSC73XX_BLOCK_CAPTURE: ++ switch (subblock) { ++ case 0 ... 4: ++ case 6 ... 
7: ++ return 1; ++ } ++ break; + } + + return 0; +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index cceff1515ea1d..884beeb67a1fb 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -6522,10 +6522,20 @@ static void igb_extts(struct igb_adapter *adapter, int tsintr_tt) + + static void igb_tsync_interrupt(struct igb_adapter *adapter) + { ++ const u32 mask = (TSINTR_SYS_WRAP | E1000_TSICR_TXTS | ++ TSINTR_TT0 | TSINTR_TT1 | ++ TSINTR_AUTT0 | TSINTR_AUTT1); + struct e1000_hw *hw = &adapter->hw; + u32 tsicr = rd32(E1000_TSICR); + struct ptp_clock_event event; + ++ if (hw->mac.type == e1000_82580) { ++ /* 82580 has a hardware bug that requires an explicit ++ * write to clear the TimeSync interrupt cause. ++ */ ++ wr32(E1000_TSICR, tsicr & mask); ++ } ++ + if (tsicr & TSINTR_SYS_WRAP) { + event.type = PTP_CLOCK_PPS; + if (adapter->ptp_caps.pps) +diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c +index 7adad91617d8c..20e5e0406c881 100644 +--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c ++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c +@@ -143,7 +143,7 @@ static int ionic_request_irq(struct ionic_lif *lif, struct ionic_qcq *qcq) + name = dev_name(dev); + + snprintf(intr->name, sizeof(intr->name), +- "%s-%s-%s", IONIC_DRV_NAME, name, q->name); ++ "%.5s-%.16s-%.8s", IONIC_DRV_NAME, name, q->name); + + return devm_request_irq(dev, intr->vector, ionic_isr, + 0, intr->name, &qcq->napi); +diff --git a/drivers/net/usb/ch9200.c b/drivers/net/usb/ch9200.c +index 9df3c1ffff355..6ed8da85b0811 100644 +--- a/drivers/net/usb/ch9200.c ++++ b/drivers/net/usb/ch9200.c +@@ -338,6 +338,7 @@ static int ch9200_bind(struct usbnet *dev, struct usb_interface *intf) + { + int retval = 0; + unsigned char data[2]; ++ u8 addr[ETH_ALEN]; + + retval = usbnet_get_endpoints(dev, intf); + if (retval) +@@ -385,7 +386,8 @@ 
static int ch9200_bind(struct usbnet *dev, struct usb_interface *intf) + retval = control_write(dev, REQUEST_WRITE, 0, MAC_REG_CTRL, data, 0x02, + CONTROL_TIMEOUT_MS); + +- retval = get_mac_address(dev, dev->net->dev_addr); ++ retval = get_mac_address(dev, addr); ++ eth_hw_addr_set(dev->net, addr); + + return retval; + } +diff --git a/drivers/net/usb/cx82310_eth.c b/drivers/net/usb/cx82310_eth.c +index 32b08b18e1208..a7356c5e750c6 100644 +--- a/drivers/net/usb/cx82310_eth.c ++++ b/drivers/net/usb/cx82310_eth.c +@@ -40,6 +40,11 @@ enum cx82310_status { + #define CX82310_MTU 1514 + #define CMD_EP 0x01 + ++struct cx82310_priv { ++ struct work_struct reenable_work; ++ struct usbnet *dev; ++}; ++ + /* + * execute control command + * - optionally send some data (command parameters) +@@ -115,6 +120,23 @@ static int cx82310_cmd(struct usbnet *dev, enum cx82310_cmd cmd, bool reply, + return ret; + } + ++static int cx82310_enable_ethernet(struct usbnet *dev) ++{ ++ int ret = cx82310_cmd(dev, CMD_ETHERNET_MODE, true, "\x01", 1, NULL, 0); ++ ++ if (ret) ++ netdev_err(dev->net, "unable to enable ethernet mode: %d\n", ++ ret); ++ return ret; ++} ++ ++static void cx82310_reenable_work(struct work_struct *work) ++{ ++ struct cx82310_priv *priv = container_of(work, struct cx82310_priv, ++ reenable_work); ++ cx82310_enable_ethernet(priv->dev); ++} ++ + #define partial_len data[0] /* length of partial packet data */ + #define partial_rem data[1] /* remaining (missing) data length */ + #define partial_data data[2] /* partial packet data */ +@@ -126,6 +148,8 @@ static int cx82310_bind(struct usbnet *dev, struct usb_interface *intf) + struct usb_device *udev = dev->udev; + u8 link[3]; + int timeout = 50; ++ struct cx82310_priv *priv; ++ u8 addr[ETH_ALEN]; + + /* avoid ADSL modems - continue only if iProduct is "USB NET CARD" */ + if (usb_string(udev, udev->descriptor.iProduct, buf, sizeof(buf)) > 0 +@@ -152,6 +176,15 @@ static int cx82310_bind(struct usbnet *dev, struct usb_interface 
*intf) + if (!dev->partial_data) + return -ENOMEM; + ++ priv = kzalloc(sizeof(*priv), GFP_KERNEL); ++ if (!priv) { ++ ret = -ENOMEM; ++ goto err_partial; ++ } ++ dev->driver_priv = priv; ++ INIT_WORK(&priv->reenable_work, cx82310_reenable_work); ++ priv->dev = dev; ++ + /* wait for firmware to become ready (indicated by the link being up) */ + while (--timeout) { + ret = cx82310_cmd(dev, CMD_GET_LINK_STATUS, true, NULL, 0, +@@ -168,20 +201,17 @@ static int cx82310_bind(struct usbnet *dev, struct usb_interface *intf) + } + + /* enable ethernet mode (?) */ +- ret = cx82310_cmd(dev, CMD_ETHERNET_MODE, true, "\x01", 1, NULL, 0); +- if (ret) { +- dev_err(&udev->dev, "unable to enable ethernet mode: %d\n", +- ret); ++ ret = cx82310_enable_ethernet(dev); ++ if (ret) + goto err; +- } + + /* get the MAC address */ +- ret = cx82310_cmd(dev, CMD_GET_MAC_ADDR, true, NULL, 0, +- dev->net->dev_addr, ETH_ALEN); ++ ret = cx82310_cmd(dev, CMD_GET_MAC_ADDR, true, NULL, 0, addr, ETH_ALEN); + if (ret) { + dev_err(&udev->dev, "unable to read MAC address: %d\n", ret); + goto err; + } ++ eth_hw_addr_set(dev->net, addr); + + /* start (does not seem to have any effect?) 
*/ + ret = cx82310_cmd(dev, CMD_START, false, NULL, 0, NULL, 0); +@@ -190,13 +220,19 @@ static int cx82310_bind(struct usbnet *dev, struct usb_interface *intf) + + return 0; + err: ++ kfree(dev->driver_priv); ++err_partial: + kfree((void *)dev->partial_data); + return ret; + } + + static void cx82310_unbind(struct usbnet *dev, struct usb_interface *intf) + { ++ struct cx82310_priv *priv = dev->driver_priv; ++ + kfree((void *)dev->partial_data); ++ cancel_work_sync(&priv->reenable_work); ++ kfree(dev->driver_priv); + } + + /* +@@ -211,6 +247,7 @@ static int cx82310_rx_fixup(struct usbnet *dev, struct sk_buff *skb) + { + int len; + struct sk_buff *skb2; ++ struct cx82310_priv *priv = dev->driver_priv; + + /* + * If the last skb ended with an incomplete packet, this skb contains +@@ -245,7 +282,10 @@ static int cx82310_rx_fixup(struct usbnet *dev, struct sk_buff *skb) + break; + } + +- if (len > CX82310_MTU) { ++ if (len == 0xffff) { ++ netdev_info(dev->net, "router was rebooted, re-enabling ethernet mode"); ++ schedule_work(&priv->reenable_work); ++ } else if (len > CX82310_MTU) { + dev_err(&dev->udev->dev, "RX packet too long: %d B\n", + len); + return 0; +diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c +index 73ad78f47763c..05576f66f73da 100644 +--- a/drivers/net/usb/ipheth.c ++++ b/drivers/net/usb/ipheth.c +@@ -353,8 +353,8 @@ static int ipheth_close(struct net_device *net) + { + struct ipheth_device *dev = netdev_priv(net); + +- cancel_delayed_work_sync(&dev->carrier_work); + netif_stop_queue(net); ++ cancel_delayed_work_sync(&dev->carrier_work); + return 0; + } + +@@ -443,7 +443,7 @@ static int ipheth_probe(struct usb_interface *intf, + + netdev->netdev_ops = &ipheth_netdev_ops; + netdev->watchdog_timeo = IPHETH_TX_TIMEOUT; +- strcpy(netdev->name, "eth%d"); ++ strscpy(netdev->name, "eth%d", sizeof(netdev->name)); + + dev = netdev_priv(netdev); + dev->udev = udev; +diff --git a/drivers/net/usb/kaweth.c b/drivers/net/usb/kaweth.c +index 
8e210ba4a3134..243e2b55aabe2 100644 +--- a/drivers/net/usb/kaweth.c ++++ b/drivers/net/usb/kaweth.c +@@ -1127,8 +1127,7 @@ static int kaweth_probe( + goto err_all_but_rxbuf; + + memcpy(netdev->broadcast, &bcast_addr, sizeof(bcast_addr)); +- memcpy(netdev->dev_addr, &kaweth->configuration.hw_addr, +- sizeof(kaweth->configuration.hw_addr)); ++ eth_hw_addr_set(netdev, (u8 *)&kaweth->configuration.hw_addr); + + netdev->netdev_ops = &kaweth_netdev_ops; + netdev->watchdog_timeo = KAWETH_TX_TIMEOUT; +diff --git a/drivers/net/usb/mcs7830.c b/drivers/net/usb/mcs7830.c +index 7e40e2e2f3723..57281296ba2ca 100644 +--- a/drivers/net/usb/mcs7830.c ++++ b/drivers/net/usb/mcs7830.c +@@ -480,17 +480,19 @@ static const struct net_device_ops mcs7830_netdev_ops = { + static int mcs7830_bind(struct usbnet *dev, struct usb_interface *udev) + { + struct net_device *net = dev->net; ++ u8 addr[ETH_ALEN]; + int ret; + int retry; + + /* Initial startup: Gather MAC address setting from EEPROM */ + ret = -EINVAL; + for (retry = 0; retry < 5 && ret; retry++) +- ret = mcs7830_hif_get_mac_address(dev, net->dev_addr); ++ ret = mcs7830_hif_get_mac_address(dev, addr); + if (ret) { + dev_warn(&dev->udev->dev, "Cannot read MAC address\n"); + goto out; + } ++ eth_hw_addr_set(net, addr); + + mcs7830_data_set_multicast(net); + +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index 3e219cf4dd851..cce5ee84d29d3 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -1387,6 +1387,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x2692, 0x9025, 4)}, /* Cellient MPL200 (rebranded Qualcomm 05c6:9025) */ + {QMI_QUIRK_SET_DTR(0x1546, 0x1342, 4)}, /* u-blox LARA-L6 */ + {QMI_QUIRK_SET_DTR(0x33f8, 0x0104, 4)}, /* Rolling RW101 RMNET */ ++ {QMI_FIXED_INTF(0x2dee, 0x4d22, 5)}, /* MeiG Smart SRM825L */ + + /* 4. 
Gobi 1000 devices */ + {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */ +diff --git a/drivers/net/usb/sierra_net.c b/drivers/net/usb/sierra_net.c +index 34c1eaba536c0..6f9ec5ce61dcc 100644 +--- a/drivers/net/usb/sierra_net.c ++++ b/drivers/net/usb/sierra_net.c +@@ -674,6 +674,7 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf) + 0x00, 0x00, SIERRA_NET_HIP_MSYNC_ID, 0x00}; + static const u8 shdwn_tmplate[sizeof(priv->shdwn_msg)] = { + 0x00, 0x00, SIERRA_NET_HIP_SHUTD_ID, 0x00}; ++ u8 mod[2]; + + dev_dbg(&dev->udev->dev, "%s", __func__); + +@@ -703,8 +704,9 @@ static int sierra_net_bind(struct usbnet *dev, struct usb_interface *intf) + dev->net->netdev_ops = &sierra_net_device_ops; + + /* change MAC addr to include, ifacenum, and to be unique */ +- dev->net->dev_addr[ETH_ALEN-2] = atomic_inc_return(&iface_counter); +- dev->net->dev_addr[ETH_ALEN-1] = ifacenum; ++ mod[0] = atomic_inc_return(&iface_counter); ++ mod[1] = ifacenum; ++ dev_addr_mod(dev->net, ETH_ALEN - 2, mod, 2); + + /* prepare shutdown message template */ + memcpy(priv->shdwn_msg, shdwn_tmplate, sizeof(priv->shdwn_msg)); +diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c +index 8d2e3daf03cf2..1ec11a08820d4 100644 +--- a/drivers/net/usb/sr9700.c ++++ b/drivers/net/usb/sr9700.c +@@ -326,6 +326,7 @@ static int sr9700_bind(struct usbnet *dev, struct usb_interface *intf) + { + struct net_device *netdev; + struct mii_if_info *mii; ++ u8 addr[ETH_ALEN]; + int ret; + + ret = usbnet_get_endpoints(dev, intf); +@@ -356,11 +357,12 @@ static int sr9700_bind(struct usbnet *dev, struct usb_interface *intf) + * EEPROM automatically to PAR. In case there is no EEPROM externally, + * a default MAC address is stored in PAR for making chip work properly. 
+ */ +- if (sr_read(dev, SR_PAR, ETH_ALEN, netdev->dev_addr) < 0) { ++ if (sr_read(dev, SR_PAR, ETH_ALEN, addr) < 0) { + netdev_err(netdev, "Error reading MAC address\n"); + ret = -ENODEV; + goto out; + } ++ eth_hw_addr_set(netdev, addr); + + /* power up and reset phy */ + sr_write_reg(dev, SR_PRR, PRR_PHY_RST); +diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c +index a5332e99102a5..351e0edcda2af 100644 +--- a/drivers/net/usb/sr9800.c ++++ b/drivers/net/usb/sr9800.c +@@ -731,6 +731,7 @@ static int sr9800_bind(struct usbnet *dev, struct usb_interface *intf) + struct sr_data *data = (struct sr_data *)&dev->data; + u16 led01_mux, led23_mux; + int ret, embd_phy; ++ u8 addr[ETH_ALEN]; + u32 phyid; + u16 rx_ctl; + +@@ -756,12 +757,12 @@ static int sr9800_bind(struct usbnet *dev, struct usb_interface *intf) + } + + /* Get the MAC address */ +- ret = sr_read_cmd(dev, SR_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, +- dev->net->dev_addr); ++ ret = sr_read_cmd(dev, SR_CMD_READ_NODE_ID, 0, 0, ETH_ALEN, addr); + if (ret < 0) { + netdev_dbg(dev->net, "Failed to read MAC address: %d\n", ret); + return ret; + } ++ eth_hw_addr_set(dev->net, addr); + netdev_dbg(dev->net, "mac addr : %pM\n", dev->net->dev_addr); + + /* Initialize MII structure */ +diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c +index bc37e268a15e7..240511b4246db 100644 +--- a/drivers/net/usb/usbnet.c ++++ b/drivers/net/usb/usbnet.c +@@ -67,9 +67,6 @@ + + /*-------------------------------------------------------------------------*/ + +-// randomly generated ethernet address +-static u8 node_id [ETH_ALEN]; +- + /* use ethtool to change the level for any given device */ + static int msg_level = -1; + module_param (msg_level, int, 0); +@@ -151,12 +148,13 @@ EXPORT_SYMBOL_GPL(usbnet_get_endpoints); + + int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress) + { ++ u8 addr[ETH_ALEN]; + int tmp = -1, ret; + unsigned char buf [13]; + + ret = usb_string(dev->udev, iMACAddress, buf, sizeof 
buf); + if (ret == 12) +- tmp = hex2bin(dev->net->dev_addr, buf, 6); ++ tmp = hex2bin(addr, buf, 6); + if (tmp < 0) { + dev_dbg(&dev->udev->dev, + "bad MAC string %d fetch, %d\n", iMACAddress, tmp); +@@ -164,6 +162,7 @@ int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress) + ret = -EINVAL; + return ret; + } ++ eth_hw_addr_set(dev->net, addr); + return 0; + } + EXPORT_SYMBOL_GPL(usbnet_get_ethernet_addr); +@@ -1711,8 +1710,7 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod) + dev->interrupt_count = 0; + + dev->net = net; +- strcpy (net->name, "usb%d"); +- memcpy (net->dev_addr, node_id, sizeof node_id); ++ strscpy(net->name, "usb%d", sizeof(net->name)); + + /* rx and tx sides can use different message sizes; + * bind() should set rx_urb_size in that case. +@@ -1738,13 +1736,13 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod) + if ((dev->driver_info->flags & FLAG_ETHER) != 0 && + ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 || + (net->dev_addr [0] & 0x02) == 0)) +- strcpy (net->name, "eth%d"); ++ strscpy(net->name, "eth%d", sizeof(net->name)); + /* WLAN devices should always be named "wlan%d" */ + if ((dev->driver_info->flags & FLAG_WLAN) != 0) +- strcpy(net->name, "wlan%d"); ++ strscpy(net->name, "wlan%d", sizeof(net->name)); + /* WWAN devices should always be named "wwan%d" */ + if ((dev->driver_info->flags & FLAG_WWAN) != 0) +- strcpy(net->name, "wwan%d"); ++ strscpy(net->name, "wwan%d", sizeof(net->name)); + + /* devices that cannot do ARP */ + if ((dev->driver_info->flags & FLAG_NOARP) != 0) +@@ -1786,9 +1784,9 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod) + goto out4; + } + +- /* let userspace know we have a random address */ +- if (ether_addr_equal(net->dev_addr, node_id)) +- net->addr_assign_type = NET_ADDR_RANDOM; ++ /* this flags the device for user space */ ++ if (!is_valid_ether_addr(net->dev_addr)) ++ eth_hw_addr_random(net); + + if 
((dev->driver_info->flags & FLAG_WLAN) != 0) + SET_NETDEV_DEVTYPE(net, &wlan_type); +@@ -2198,7 +2196,6 @@ static int __init usbnet_init(void) + BUILD_BUG_ON( + FIELD_SIZEOF(struct sk_buff, cb) < sizeof(struct skb_data)); + +- eth_random_addr(node_id); + return 0; + } + module_init(usbnet_init); +diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c +index ef8770093c48c..182b672700442 100644 +--- a/drivers/net/virtio_net.c ++++ b/drivers/net/virtio_net.c +@@ -1479,7 +1479,7 @@ static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q) + return false; + } + +-static void virtnet_poll_cleantx(struct receive_queue *rq) ++static void virtnet_poll_cleantx(struct receive_queue *rq, int budget) + { + struct virtnet_info *vi = rq->vq->vdev->priv; + unsigned int index = vq2rxq(rq->vq); +@@ -1490,7 +1490,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq) + return; + + if (__netif_tx_trylock(txq)) { +- free_old_xmit_skbs(sq, true); ++ free_old_xmit_skbs(sq, !!budget); + __netif_tx_unlock(txq); + } + +@@ -1507,7 +1507,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget) + unsigned int received; + unsigned int xdp_xmit = 0; + +- virtnet_poll_cleantx(rq); ++ virtnet_poll_cleantx(rq, budget); + + received = virtnet_receive(rq, budget, &xdp_xmit); + +@@ -1580,7 +1580,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) + txq = netdev_get_tx_queue(vi->dev, index); + __netif_tx_lock(txq, raw_smp_processor_id()); + virtqueue_disable_cb(sq->vq); +- free_old_xmit_skbs(sq, true); ++ free_old_xmit_skbs(sq, !!budget); + + opaque = virtqueue_enable_cb_prepare(sq->vq); + +diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c +index 288d4d4d44548..eb735b0547905 100644 +--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c ++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c +@@ -1091,6 +1091,7 @@ static int 
ieee_hw_init(struct ieee80211_hw *hw) + ieee80211_hw_set(hw, AMPDU_AGGREGATION); + ieee80211_hw_set(hw, SIGNAL_DBM); + ieee80211_hw_set(hw, REPORTS_TX_ACK_STATUS); ++ ieee80211_hw_set(hw, MFP_CAPABLE); + + hw->extra_tx_headroom = brcms_c_get_header_len(); + hw->queues = N_TX_QUEUES; +diff --git a/drivers/net/wireless/marvell/mwifiex/main.h b/drivers/net/wireless/marvell/mwifiex/main.h +index fa5634af40f7c..2e7f31bf38002 100644 +--- a/drivers/net/wireless/marvell/mwifiex/main.h ++++ b/drivers/net/wireless/marvell/mwifiex/main.h +@@ -1307,6 +1307,9 @@ mwifiex_get_priv_by_id(struct mwifiex_adapter *adapter, + + for (i = 0; i < adapter->priv_num; i++) { + if (adapter->priv[i]) { ++ if (adapter->priv[i]->bss_mode == NL80211_IFTYPE_UNSPECIFIED) ++ continue; ++ + if ((adapter->priv[i]->bss_num == bss_num) && + (adapter->priv[i]->bss_type == bss_type)) + break; +diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c +index fa6e7fbf356e7..11c8506e04ca3 100644 +--- a/drivers/nvme/target/tcp.c ++++ b/drivers/nvme/target/tcp.c +@@ -1751,8 +1751,10 @@ static u16 nvmet_tcp_install_queue(struct nvmet_sq *sq) + } + + queue->nr_cmds = sq->size * 2; +- if (nvmet_tcp_alloc_cmds(queue)) ++ if (nvmet_tcp_alloc_cmds(queue)) { ++ queue->nr_cmds = 0; + return NVME_SC_INTERNAL; ++ } + return 0; + } + +diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c +index cde1d77845fee..01a17505a2fca 100644 +--- a/drivers/nvmem/core.c ++++ b/drivers/nvmem/core.c +@@ -684,13 +684,13 @@ void nvmem_device_put(struct nvmem_device *nvmem) + EXPORT_SYMBOL_GPL(nvmem_device_put); + + /** +- * devm_nvmem_device_get() - Get nvmem cell of device form a given id ++ * devm_nvmem_device_get() - Get nvmem device of device form a given id + * + * @dev: Device that requests the nvmem device. + * @id: name id for the requested nvmem device. + * +- * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_cell +- * on success. 
The nvmem_cell will be freed by the automatically once the ++ * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_device ++ * on success. The nvmem_device will be freed by the automatically once the + * device is freed. + */ + struct nvmem_device *devm_nvmem_device_get(struct device *dev, const char *id) +diff --git a/drivers/of/irq.c b/drivers/of/irq.c +index 352e14b007e78..ad0cb49e233ac 100644 +--- a/drivers/of/irq.c ++++ b/drivers/of/irq.c +@@ -288,7 +288,8 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar + struct device_node *p; + const __be32 *addr; + u32 intsize; +- int i, res; ++ int i, res, addr_len; ++ __be32 addr_buf[3] = { 0 }; + + pr_debug("of_irq_parse_one: dev=%pOF, index=%d\n", device, index); + +@@ -297,13 +298,19 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar + return of_irq_parse_oldworld(device, index, out_irq); + + /* Get the reg property (if any) */ +- addr = of_get_property(device, "reg", NULL); ++ addr = of_get_property(device, "reg", &addr_len); ++ ++ /* Prevent out-of-bounds read in case of longer interrupt parent address size */ ++ if (addr_len > (3 * sizeof(__be32))) ++ addr_len = 3 * sizeof(__be32); ++ if (addr) ++ memcpy(addr_buf, addr, addr_len); + + /* Try the new-style interrupts-extended first */ + res = of_parse_phandle_with_args(device, "interrupts-extended", + "#interrupt-cells", index, out_irq); + if (!res) +- return of_irq_parse_raw(addr, out_irq); ++ return of_irq_parse_raw(addr_buf, out_irq); + + /* Look for the interrupt parent. 
*/ + p = of_irq_find_parent(device); +@@ -333,7 +340,7 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar + + + /* Check if there are any interrupt-map translations to process */ +- res = of_irq_parse_raw(addr, out_irq); ++ res = of_irq_parse_raw(addr_buf, out_irq); + out: + of_node_put(p); + return res; +diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c +index b18ddb2b9ef8b..a16fe2a558c7a 100644 +--- a/drivers/pci/controller/dwc/pci-keystone.c ++++ b/drivers/pci/controller/dwc/pci-keystone.c +@@ -35,6 +35,11 @@ + #define PCIE_DEVICEID_SHIFT 16 + + /* Application registers */ ++#define PID 0x000 ++#define RTL GENMASK(15, 11) ++#define RTL_SHIFT 11 ++#define AM6_PCI_PG1_RTL_VER 0x15 ++ + #define CMD_STATUS 0x004 + #define LTSSM_EN_VAL BIT(0) + #define OB_XLAT_EN_VAL BIT(1) +@@ -107,6 +112,8 @@ + + #define to_keystone_pcie(x) dev_get_drvdata((x)->dev) + ++#define PCI_DEVICE_ID_TI_AM654X 0xb00c ++ + struct ks_pcie_of_data { + enum dw_pcie_device_mode mode; + const struct dw_pcie_host_ops *host_ops; +@@ -534,7 +541,11 @@ static int ks_pcie_start_link(struct dw_pcie *pci) + static void ks_pcie_quirk(struct pci_dev *dev) + { + struct pci_bus *bus = dev->bus; ++ struct keystone_pcie *ks_pcie; ++ struct device *bridge_dev; + struct pci_dev *bridge; ++ u32 val; ++ + static const struct pci_device_id rc_pci_devids[] = { + { PCI_DEVICE(PCI_VENDOR_ID_TI, PCIE_RC_K2HK), + .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, +@@ -546,6 +557,11 @@ static void ks_pcie_quirk(struct pci_dev *dev) + .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, + { 0, }, + }; ++ static const struct pci_device_id am6_pci_devids[] = { ++ { PCI_DEVICE(PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_AM654X), ++ .class = PCI_CLASS_BRIDGE_PCI << 8, .class_mask = ~0, }, ++ { 0, }, ++ }; + + if (pci_is_root_bus(bus)) + bridge = dev; +@@ -567,10 +583,36 @@ static void ks_pcie_quirk(struct pci_dev *dev) + */ + if 
(pci_match_id(rc_pci_devids, bridge)) { + if (pcie_get_readrq(dev) > 256) { +- dev_info(&dev->dev, "limiting MRRS to 256\n"); ++ dev_info(&dev->dev, "limiting MRRS to 256 bytes\n"); + pcie_set_readrq(dev, 256); + } + } ++ ++ /* ++ * Memory transactions fail with PCI controller in AM654 PG1.0 ++ * when MRRS is set to more than 128 bytes. Force the MRRS to ++ * 128 bytes in all downstream devices. ++ */ ++ if (pci_match_id(am6_pci_devids, bridge)) { ++ bridge_dev = pci_get_host_bridge_device(dev); ++ if (!bridge_dev && !bridge_dev->parent) ++ return; ++ ++ ks_pcie = dev_get_drvdata(bridge_dev->parent); ++ if (!ks_pcie) ++ return; ++ ++ val = ks_pcie_app_readl(ks_pcie, PID); ++ val &= RTL; ++ val >>= RTL_SHIFT; ++ if (val != AM6_PCI_PG1_RTL_VER) ++ return; ++ ++ if (pcie_get_readrq(dev) > 128) { ++ dev_info(&dev->dev, "limiting MRRS to 128 bytes\n"); ++ pcie_set_readrq(dev, 128); ++ } ++ } + } + DECLARE_PCI_FIXUP_ENABLE(PCI_ANY_ID, PCI_ANY_ID, ks_pcie_quirk); + +diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c +index d7b2b47bc33eb..3824942618305 100644 +--- a/drivers/pci/hotplug/pnv_php.c ++++ b/drivers/pci/hotplug/pnv_php.c +@@ -35,7 +35,6 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot, + bool disable_device) + { + struct pci_dev *pdev = php_slot->pdev; +- int irq = php_slot->irq; + u16 ctrl; + + if (php_slot->irq > 0) { +@@ -54,7 +53,7 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot, + php_slot->wq = NULL; + } + +- if (disable_device || irq > 0) { ++ if (disable_device) { + if (pdev->msix_enabled) + pci_disable_msix(pdev); + else if (pdev->msi_enabled) +diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c +index deafd229ef8b4..41050a35631fa 100644 +--- a/drivers/pci/pci.c ++++ b/drivers/pci/pci.c +@@ -5200,10 +5200,12 @@ static void pci_bus_lock(struct pci_bus *bus) + { + struct pci_dev *dev; + ++ pci_dev_lock(bus->self); + list_for_each_entry(dev, &bus->devices, bus_list) { +- pci_dev_lock(dev); + if 
(dev->subordinate) + pci_bus_lock(dev->subordinate); ++ else ++ pci_dev_lock(dev); + } + } + +@@ -5215,8 +5217,10 @@ static void pci_bus_unlock(struct pci_bus *bus) + list_for_each_entry(dev, &bus->devices, bus_list) { + if (dev->subordinate) + pci_bus_unlock(dev->subordinate); +- pci_dev_unlock(dev); ++ else ++ pci_dev_unlock(dev); + } ++ pci_dev_unlock(bus->self); + } + + /* Return 1 on successful lock, 0 on contention */ +@@ -5224,15 +5228,15 @@ static int pci_bus_trylock(struct pci_bus *bus) + { + struct pci_dev *dev; + ++ if (!pci_dev_trylock(bus->self)) ++ return 0; ++ + list_for_each_entry(dev, &bus->devices, bus_list) { +- if (!pci_dev_trylock(dev)) +- goto unlock; + if (dev->subordinate) { +- if (!pci_bus_trylock(dev->subordinate)) { +- pci_dev_unlock(dev); ++ if (!pci_bus_trylock(dev->subordinate)) + goto unlock; +- } +- } ++ } else if (!pci_dev_trylock(dev)) ++ goto unlock; + } + return 1; + +@@ -5240,8 +5244,10 @@ static int pci_bus_trylock(struct pci_bus *bus) + list_for_each_entry_continue_reverse(dev, &bus->devices, bus_list) { + if (dev->subordinate) + pci_bus_unlock(dev->subordinate); +- pci_dev_unlock(dev); ++ else ++ pci_dev_unlock(dev); + } ++ pci_dev_unlock(bus->self); + return 0; + } + +@@ -5273,9 +5279,10 @@ static void pci_slot_lock(struct pci_slot *slot) + list_for_each_entry(dev, &slot->bus->devices, bus_list) { + if (!dev->slot || dev->slot != slot) + continue; +- pci_dev_lock(dev); + if (dev->subordinate) + pci_bus_lock(dev->subordinate); ++ else ++ pci_dev_lock(dev); + } + } + +@@ -5301,14 +5308,13 @@ static int pci_slot_trylock(struct pci_slot *slot) + list_for_each_entry(dev, &slot->bus->devices, bus_list) { + if (!dev->slot || dev->slot != slot) + continue; +- if (!pci_dev_trylock(dev)) +- goto unlock; + if (dev->subordinate) { + if (!pci_bus_trylock(dev->subordinate)) { + pci_dev_unlock(dev); + goto unlock; + } +- } ++ } else if (!pci_dev_trylock(dev)) ++ goto unlock; + } + return 1; + +@@ -5319,7 +5325,8 @@ static int 
pci_slot_trylock(struct pci_slot *slot) + continue; + if (dev->subordinate) + pci_bus_unlock(dev->subordinate); +- pci_dev_unlock(dev); ++ else ++ pci_dev_unlock(dev); + } + return 0; + } +diff --git a/drivers/pcmcia/yenta_socket.c b/drivers/pcmcia/yenta_socket.c +index 810761ab8e9d2..ba82ccb40db75 100644 +--- a/drivers/pcmcia/yenta_socket.c ++++ b/drivers/pcmcia/yenta_socket.c +@@ -637,11 +637,11 @@ static int yenta_search_one_res(struct resource *root, struct resource *res, + start = PCIBIOS_MIN_CARDBUS_IO; + end = ~0U; + } else { +- unsigned long avail = root->end - root->start; ++ unsigned long avail = resource_size(root); + int i; + size = BRIDGE_MEM_MAX; +- if (size > avail/8) { +- size = (avail+1)/8; ++ if (size > (avail - 1) / 8) { ++ size = avail / 8; + /* round size down to next power of 2 */ + i = 0; + while ((size /= 2) != 0) +diff --git a/drivers/platform/x86/dell-smbios-base.c b/drivers/platform/x86/dell-smbios-base.c +index ceb8e701028df..2f9c3c1f76f1c 100644 +--- a/drivers/platform/x86/dell-smbios-base.c ++++ b/drivers/platform/x86/dell-smbios-base.c +@@ -610,7 +610,10 @@ static int __init dell_smbios_init(void) + return 0; + + fail_sysfs: +- free_group(platform_device); ++ if (!wmi) ++ exit_dell_smbios_wmi(); ++ if (!smm) ++ exit_dell_smbios_smm(); + + fail_create_group: + platform_device_del(platform_device); +diff --git a/drivers/reset/hisilicon/hi6220_reset.c b/drivers/reset/hisilicon/hi6220_reset.c +index 84e761f454b63..2a7688fa9b9ba 100644 +--- a/drivers/reset/hisilicon/hi6220_reset.c ++++ b/drivers/reset/hisilicon/hi6220_reset.c +@@ -33,6 +33,7 @@ + enum hi6220_reset_ctrl_type { + PERIPHERAL, + MEDIA, ++ AO, + }; + + struct hi6220_reset_data { +@@ -92,6 +93,65 @@ static const struct reset_control_ops hi6220_media_reset_ops = { + .deassert = hi6220_media_deassert, + }; + ++#define AO_SCTRL_SC_PW_CLKEN0 0x800 ++#define AO_SCTRL_SC_PW_CLKDIS0 0x804 ++ ++#define AO_SCTRL_SC_PW_RSTEN0 0x810 ++#define AO_SCTRL_SC_PW_RSTDIS0 0x814 ++ ++#define 
AO_SCTRL_SC_PW_ISOEN0 0x820 ++#define AO_SCTRL_SC_PW_ISODIS0 0x824 ++#define AO_MAX_INDEX 12 ++ ++static int hi6220_ao_assert(struct reset_controller_dev *rc_dev, ++ unsigned long idx) ++{ ++ struct hi6220_reset_data *data = to_reset_data(rc_dev); ++ struct regmap *regmap = data->regmap; ++ int ret; ++ ++ ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTEN0, BIT(idx)); ++ if (ret) ++ return ret; ++ ++ ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISOEN0, BIT(idx)); ++ if (ret) ++ return ret; ++ ++ ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKDIS0, BIT(idx)); ++ return ret; ++} ++ ++static int hi6220_ao_deassert(struct reset_controller_dev *rc_dev, ++ unsigned long idx) ++{ ++ struct hi6220_reset_data *data = to_reset_data(rc_dev); ++ struct regmap *regmap = data->regmap; ++ int ret; ++ ++ /* ++ * It was suggested to disable isolation before enabling ++ * the clocks and deasserting reset, to avoid glitches. ++ * But this order is preserved to keep it matching the ++ * vendor code. ++ */ ++ ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTDIS0, BIT(idx)); ++ if (ret) ++ return ret; ++ ++ ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISODIS0, BIT(idx)); ++ if (ret) ++ return ret; ++ ++ ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKEN0, BIT(idx)); ++ return ret; ++} ++ ++static const struct reset_control_ops hi6220_ao_reset_ops = { ++ .assert = hi6220_ao_assert, ++ .deassert = hi6220_ao_deassert, ++}; ++ + static int hi6220_reset_probe(struct platform_device *pdev) + { + struct device_node *np = pdev->dev.of_node; +@@ -117,9 +177,12 @@ static int hi6220_reset_probe(struct platform_device *pdev) + if (type == MEDIA) { + data->rc_dev.ops = &hi6220_media_reset_ops; + data->rc_dev.nr_resets = MEDIA_MAX_INDEX; +- } else { ++ } else if (type == PERIPHERAL) { + data->rc_dev.ops = &hi6220_peripheral_reset_ops; + data->rc_dev.nr_resets = PERIPH_MAX_INDEX; ++ } else { ++ data->rc_dev.ops = &hi6220_ao_reset_ops; ++ data->rc_dev.nr_resets = AO_MAX_INDEX; + } + + return 
reset_controller_register(&data->rc_dev); +@@ -134,6 +197,10 @@ static const struct of_device_id hi6220_reset_match[] = { + .compatible = "hisilicon,hi6220-mediactrl", + .data = (void *)MEDIA, + }, ++ { ++ .compatible = "hisilicon,hi6220-aoctrl", ++ .data = (void *)AO, ++ }, + { /* sentinel */ }, + }; + MODULE_DEVICE_TABLE(of, hi6220_reset_match); +diff --git a/drivers/staging/iio/frequency/ad9834.c b/drivers/staging/iio/frequency/ad9834.c +index 23026978a5a5f..2e661a905a577 100644 +--- a/drivers/staging/iio/frequency/ad9834.c ++++ b/drivers/staging/iio/frequency/ad9834.c +@@ -115,7 +115,7 @@ static int ad9834_write_frequency(struct ad9834_state *st, + + clk_freq = clk_get_rate(st->mclk); + +- if (fout > (clk_freq / 2)) ++ if (!clk_freq || fout > (clk_freq / 2)) + return -EINVAL; + + regval = ad9834_calc_freqreg(clk_freq, fout); +diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c +index bdf3932ee3f20..8ea5ae954243c 100644 +--- a/drivers/uio/uio_hv_generic.c ++++ b/drivers/uio/uio_hv_generic.c +@@ -104,10 +104,11 @@ static void hv_uio_channel_cb(void *context) + + /* + * Callback from vmbus_event when channel is rescinded. ++ * It is meant for rescind of primary channels only. + */ + static void hv_uio_rescind(struct vmbus_channel *channel) + { +- struct hv_device *hv_dev = channel->primary_channel->device_obj; ++ struct hv_device *hv_dev = channel->device_obj; + struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); + + /* +@@ -118,6 +119,14 @@ static void hv_uio_rescind(struct vmbus_channel *channel) + + /* Wake up reader */ + uio_event_notify(&pdata->info); ++ ++ /* ++ * With rescind callback registered, rescind path will not unregister the device ++ * from vmbus when the primary channel is rescinded. ++ * Without it, rescind handling is incomplete and next onoffer msg does not come. ++ * Unregister the device from vmbus here. 
++ */ ++ vmbus_device_unregister(channel->device_obj); + } + + /* Sysfs API to allow mmap of the ring buffers +diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c +index 678903d1ce4da..7493b4d9d1f58 100644 +--- a/drivers/usb/storage/uas.c ++++ b/drivers/usb/storage/uas.c +@@ -424,6 +424,7 @@ static void uas_data_cmplt(struct urb *urb) + uas_log_cmd_state(cmnd, "data cmplt err", status); + /* error: no data transfered */ + scsi_set_resid(cmnd, sdb->length); ++ set_host_byte(cmnd, DID_ERROR); + } else { + scsi_set_resid(cmnd, sdb->length - urb->actual_length); + } +diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h +index de87d0b8319d0..179ad343f42f0 100644 +--- a/drivers/usb/typec/ucsi/ucsi.h ++++ b/drivers/usb/typec/ucsi/ucsi.h +@@ -446,7 +446,7 @@ ucsi_register_displayport(struct ucsi_connector *con, + bool override, int offset, + struct typec_altmode_desc *desc) + { +- return NULL; ++ return typec_port_register_altmode(con->port, desc); + } + + static inline void +diff --git a/drivers/usb/usbip/stub_rx.c b/drivers/usb/usbip/stub_rx.c +index d3d360ff0d24e..6be5cd87e7cbb 100644 +--- a/drivers/usb/usbip/stub_rx.c ++++ b/drivers/usb/usbip/stub_rx.c +@@ -144,53 +144,62 @@ static int tweak_set_configuration_cmd(struct urb *urb) + if (err && err != -ENODEV) + dev_err(&sdev->udev->dev, "can't set config #%d, error %d\n", + config, err); +- return 0; ++ return err; + } + + static int tweak_reset_device_cmd(struct urb *urb) + { + struct stub_priv *priv = (struct stub_priv *) urb->context; + struct stub_device *sdev = priv->sdev; ++ int err; + + dev_info(&urb->dev->dev, "usb_queue_reset_device\n"); + +- if (usb_lock_device_for_reset(sdev->udev, NULL) < 0) { ++ err = usb_lock_device_for_reset(sdev->udev, NULL); ++ if (err < 0) { + dev_err(&urb->dev->dev, "could not obtain lock to reset device\n"); +- return 0; ++ return err; + } +- usb_reset_device(sdev->udev); ++ err = usb_reset_device(sdev->udev); + usb_unlock_device(sdev->udev); + +- 
return 0; ++ return err; + } + + /* + * clear_halt, set_interface, and set_configuration require special tricks. ++ * Returns 1 if request was tweaked, 0 otherwise. + */ +-static void tweak_special_requests(struct urb *urb) ++static int tweak_special_requests(struct urb *urb) + { ++ int err; ++ + if (!urb || !urb->setup_packet) +- return; ++ return 0; + + if (usb_pipetype(urb->pipe) != PIPE_CONTROL) +- return; ++ return 0; + + if (is_clear_halt_cmd(urb)) + /* tweak clear_halt */ +- tweak_clear_halt_cmd(urb); ++ err = tweak_clear_halt_cmd(urb); + + else if (is_set_interface_cmd(urb)) + /* tweak set_interface */ +- tweak_set_interface_cmd(urb); ++ err = tweak_set_interface_cmd(urb); + + else if (is_set_configuration_cmd(urb)) + /* tweak set_configuration */ +- tweak_set_configuration_cmd(urb); ++ err = tweak_set_configuration_cmd(urb); + + else if (is_reset_device_cmd(urb)) +- tweak_reset_device_cmd(urb); +- else ++ err = tweak_reset_device_cmd(urb); ++ else { + usbip_dbg_stub_rx("no need to tweak\n"); ++ return 0; ++ } ++ ++ return !err; + } + + /* +@@ -468,6 +477,7 @@ static void stub_recv_cmd_submit(struct stub_device *sdev, + int support_sg = 1; + int np = 0; + int ret, i; ++ int is_tweaked; + + if (pipe == -1) + return; +@@ -580,8 +590,11 @@ static void stub_recv_cmd_submit(struct stub_device *sdev, + priv->urbs[i]->pipe = pipe; + priv->urbs[i]->complete = stub_complete; + +- /* no need to submit an intercepted request, but harmless? 
*/ +- tweak_special_requests(priv->urbs[i]); ++ /* ++ * all URBs belong to a single PDU, so a global is_tweaked flag is ++ * enough ++ */ ++ is_tweaked = tweak_special_requests(priv->urbs[i]); + + masking_bogus_flags(priv->urbs[i]); + } +@@ -594,22 +607,32 @@ static void stub_recv_cmd_submit(struct stub_device *sdev, + + /* urb is now ready to submit */ + for (i = 0; i < priv->num_urbs; i++) { +- ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL); ++ if (!is_tweaked) { ++ ret = usb_submit_urb(priv->urbs[i], GFP_KERNEL); + +- if (ret == 0) +- usbip_dbg_stub_rx("submit urb ok, seqnum %u\n", +- pdu->base.seqnum); +- else { +- dev_err(&udev->dev, "submit_urb error, %d\n", ret); +- usbip_dump_header(pdu); +- usbip_dump_urb(priv->urbs[i]); ++ if (ret == 0) ++ usbip_dbg_stub_rx("submit urb ok, seqnum %u\n", ++ pdu->base.seqnum); ++ else { ++ dev_err(&udev->dev, "submit_urb error, %d\n", ret); ++ usbip_dump_header(pdu); ++ usbip_dump_urb(priv->urbs[i]); + ++ /* ++ * Pessimistic. ++ * This connection will be discarded. ++ */ ++ usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT); ++ break; ++ } ++ } else { + /* +- * Pessimistic. +- * This connection will be discarded. ++ * An identical URB was already submitted in ++ * tweak_special_requests(). Skip submitting this URB to not ++ * duplicate the request. + */ +- usbip_event_add(ud, SDEV_EVENT_ERROR_SUBMIT); +- break; ++ priv->urbs[i]->status = 0; ++ stub_complete(priv->urbs[i]); + } + } + +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index a28b0eafb65ad..7e5ac187463ed 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -4684,7 +4684,15 @@ static noinline void reada_walk_down(struct btrfs_trans_handle *trans, + /* We don't care about errors in readahead. 
*/ + if (ret < 0) + continue; +- BUG_ON(refs == 0); ++ ++ /* ++ * This could be racey, it's conceivable that we raced and end ++ * up with a bogus refs count, if that's the case just skip, if ++ * we are actually corrupt we will notice when we look up ++ * everything again with our locks. ++ */ ++ if (refs == 0) ++ continue; + + if (wc->stage == DROP_REFERENCE) { + if (refs == 1) +@@ -4743,7 +4751,7 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans, + if (lookup_info && + ((wc->stage == DROP_REFERENCE && wc->refs[level] != 1) || + (wc->stage == UPDATE_BACKREF && !(wc->flags[level] & flag)))) { +- BUG_ON(!path->locks[level]); ++ ASSERT(path->locks[level]); + ret = btrfs_lookup_extent_info(trans, fs_info, + eb->start, level, 1, + &wc->refs[level], +@@ -4751,7 +4759,11 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans, + BUG_ON(ret == -ENOMEM); + if (ret) + return ret; +- BUG_ON(wc->refs[level] == 0); ++ if (unlikely(wc->refs[level] == 0)) { ++ btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0", ++ eb->start); ++ return -EUCLEAN; ++ } + } + + if (wc->stage == DROP_REFERENCE) { +@@ -4767,7 +4779,7 @@ static noinline int walk_down_proc(struct btrfs_trans_handle *trans, + + /* wc->stage == UPDATE_BACKREF */ + if (!(wc->flags[level] & flag)) { +- BUG_ON(!path->locks[level]); ++ ASSERT(path->locks[level]); + ret = btrfs_inc_ref(trans, root, eb, 1); + BUG_ON(ret); /* -ENOMEM */ + ret = btrfs_dec_ref(trans, root, eb, 0); +@@ -4885,8 +4897,9 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans, + goto out_unlock; + + if (unlikely(wc->refs[level - 1] == 0)) { +- btrfs_err(fs_info, "Missing references."); +- ret = -EIO; ++ btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0", ++ bytenr); ++ ret = -EUCLEAN; + goto out_unlock; + } + *lookup_info = 0; +@@ -5088,7 +5101,12 @@ static noinline int walk_up_proc(struct btrfs_trans_handle *trans, + path->locks[level] = 0; + return ret; + } +- 
BUG_ON(wc->refs[level] == 0); ++ if (unlikely(wc->refs[level] == 0)) { ++ btrfs_tree_unlock_rw(eb, path->locks[level]); ++ btrfs_err(fs_info, "bytenr %llu has 0 references, expect > 0", ++ eb->start); ++ return -EUCLEAN; ++ } + if (wc->refs[level] == 1) { + btrfs_tree_unlock_rw(eb, path->locks[level]); + path->locks[level] = 0; +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index d2a988bf9c895..cd72409ccc943 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -6087,7 +6087,7 @@ struct inode *btrfs_lookup_dentry(struct inode *dir, struct dentry *dentry) + struct inode *inode; + struct btrfs_root *root = BTRFS_I(dir)->root; + struct btrfs_root *sub_root = root; +- struct btrfs_key location; ++ struct btrfs_key location = { 0 }; + u8 di_type = 0; + int index; + int ret = 0; +diff --git a/fs/fuse/file.c b/fs/fuse/file.c +index f6c499dbfd065..d157eef3ede2d 100644 +--- a/fs/fuse/file.c ++++ b/fs/fuse/file.c +@@ -1694,10 +1694,16 @@ __acquires(fi->lock) + fuse_writepage_finish(fc, wpa); + spin_unlock(&fi->lock); + +- /* After fuse_writepage_finish() aux request list is private */ ++ /* After rb_erase() aux request list is private */ + for (aux = wpa->next; aux; aux = next) { ++ struct backing_dev_info *bdi = inode_to_bdi(aux->inode); ++ + next = aux->next; + aux->next = NULL; ++ ++ dec_wb_stat(&bdi->wb, WB_WRITEBACK); ++ dec_node_page_state(aux->ia.ap.pages[0], NR_WRITEBACK_TEMP); ++ wb_writeout_inc(&bdi->wb); + fuse_writepage_free(aux); + } + +diff --git a/fs/fuse/xattr.c b/fs/fuse/xattr.c +index 28fed52957707..cd93c5369068e 100644 +--- a/fs/fuse/xattr.c ++++ b/fs/fuse/xattr.c +@@ -79,7 +79,7 @@ ssize_t fuse_getxattr(struct inode *inode, const char *name, void *value, + } + ret = fuse_simple_request(fc, &args); + if (!ret && !size) +- ret = min_t(ssize_t, outarg.size, XATTR_SIZE_MAX); ++ ret = min_t(size_t, outarg.size, XATTR_SIZE_MAX); + if (ret == -ENOSYS) { + fc->no_getxattr = 1; + ret = -EOPNOTSUPP; +@@ -141,7 +141,7 @@ ssize_t fuse_listxattr(struct dentry 
*entry, char *list, size_t size) + } + ret = fuse_simple_request(fc, &args); + if (!ret && !size) +- ret = min_t(ssize_t, outarg.size, XATTR_LIST_MAX); ++ ret = min_t(size_t, outarg.size, XATTR_LIST_MAX); + if (ret > 0 && size) + ret = fuse_verify_xattr_list(list, ret); + if (ret == -ENOSYS) { +diff --git a/fs/nfs/super.c b/fs/nfs/super.c +index c7ca8cdc8801e..98fbd2c5d7b74 100644 +--- a/fs/nfs/super.c ++++ b/fs/nfs/super.c +@@ -47,6 +47,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -454,6 +455,7 @@ static int __nfs_list_for_each_server(struct list_head *head, + ret = fn(server, data); + if (ret) + goto out; ++ cond_resched(); + rcu_read_lock(); + } + rcu_read_unlock(); +diff --git a/fs/nilfs2/recovery.c b/fs/nilfs2/recovery.c +index 0923231e9e605..f9390e5a7fce5 100644 +--- a/fs/nilfs2/recovery.c ++++ b/fs/nilfs2/recovery.c +@@ -708,6 +708,33 @@ static void nilfs_finish_roll_forward(struct the_nilfs *nilfs, + brelse(bh); + } + ++/** ++ * nilfs_abort_roll_forward - cleaning up after a failed rollforward recovery ++ * @nilfs: nilfs object ++ */ ++static void nilfs_abort_roll_forward(struct the_nilfs *nilfs) ++{ ++ struct nilfs_inode_info *ii, *n; ++ LIST_HEAD(head); ++ ++ /* Abandon inodes that have read recovery data */ ++ spin_lock(&nilfs->ns_inode_lock); ++ list_splice_init(&nilfs->ns_dirty_files, &head); ++ spin_unlock(&nilfs->ns_inode_lock); ++ if (list_empty(&head)) ++ return; ++ ++ set_nilfs_purging(nilfs); ++ list_for_each_entry_safe(ii, n, &head, i_dirty) { ++ spin_lock(&nilfs->ns_inode_lock); ++ list_del_init(&ii->i_dirty); ++ spin_unlock(&nilfs->ns_inode_lock); ++ ++ iput(&ii->vfs_inode); ++ } ++ clear_nilfs_purging(nilfs); ++} ++ + /** + * nilfs_salvage_orphan_logs - salvage logs written after the latest checkpoint + * @nilfs: nilfs object +@@ -766,15 +793,19 @@ int nilfs_salvage_orphan_logs(struct the_nilfs *nilfs, + if (unlikely(err)) { + nilfs_err(sb, "error %d writing segment for recovery", + err); +- goto 
failed; ++ goto put_root; + } + + nilfs_finish_roll_forward(nilfs, ri); + } + +- failed: ++put_root: + nilfs_put_root(root); + return err; ++ ++failed: ++ nilfs_abort_roll_forward(nilfs); ++ goto put_root; + } + + /** +diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c +index 3c4272762779c..9a5dd4106c3d2 100644 +--- a/fs/nilfs2/segment.c ++++ b/fs/nilfs2/segment.c +@@ -1828,6 +1828,9 @@ static void nilfs_segctor_abort_construction(struct nilfs_sc_info *sci, + nilfs_abort_logs(&logs, ret ? : err); + + list_splice_tail_init(&sci->sc_segbufs, &logs); ++ if (list_empty(&logs)) ++ return; /* if the first segment buffer preparation failed */ ++ + nilfs_cancel_segusage(&logs, nilfs->ns_sufile); + nilfs_free_incomplete_logs(&logs, nilfs); + +@@ -2072,7 +2075,7 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode) + + err = nilfs_segctor_begin_construction(sci, nilfs); + if (unlikely(err)) +- goto out; ++ goto failed; + + /* Update time stamp */ + sci->sc_seg_ctime = ktime_get_real_seconds(); +@@ -2135,10 +2138,9 @@ static int nilfs_segctor_do_construct(struct nilfs_sc_info *sci, int mode) + return err; + + failed_to_write: +- if (sci->sc_stage.flags & NILFS_CF_IFILE_STARTED) +- nilfs_redirty_inodes(&sci->sc_dirty_files); +- + failed: ++ if (mode == SC_LSEG_SR && nilfs_sc_cstage_get(sci) >= NILFS_ST_IFILE) ++ nilfs_redirty_inodes(&sci->sc_dirty_files); + if (nilfs_doing_gc()) + nilfs_redirty_inodes(&sci->sc_gc_inodes); + nilfs_segctor_abort_construction(sci, nilfs, err); +diff --git a/fs/nilfs2/sysfs.c b/fs/nilfs2/sysfs.c +index 57afd06db62de..64ea44be0a646 100644 +--- a/fs/nilfs2/sysfs.c ++++ b/fs/nilfs2/sysfs.c +@@ -108,7 +108,7 @@ static ssize_t + nilfs_snapshot_inodes_count_show(struct nilfs_snapshot_attr *attr, + struct nilfs_root *root, char *buf) + { +- return snprintf(buf, PAGE_SIZE, "%llu\n", ++ return sysfs_emit(buf, "%llu\n", + (unsigned long long)atomic64_read(&root->inodes_count)); + } + +@@ -116,7 +116,7 @@ static ssize_t + 
nilfs_snapshot_blocks_count_show(struct nilfs_snapshot_attr *attr, + struct nilfs_root *root, char *buf) + { +- return snprintf(buf, PAGE_SIZE, "%llu\n", ++ return sysfs_emit(buf, "%llu\n", + (unsigned long long)atomic64_read(&root->blocks_count)); + } + +@@ -129,7 +129,7 @@ static ssize_t + nilfs_snapshot_README_show(struct nilfs_snapshot_attr *attr, + struct nilfs_root *root, char *buf) + { +- return snprintf(buf, PAGE_SIZE, snapshot_readme_str); ++ return sysfs_emit(buf, snapshot_readme_str); + } + + NILFS_SNAPSHOT_RO_ATTR(inodes_count); +@@ -230,7 +230,7 @@ static ssize_t + nilfs_mounted_snapshots_README_show(struct nilfs_mounted_snapshots_attr *attr, + struct the_nilfs *nilfs, char *buf) + { +- return snprintf(buf, PAGE_SIZE, mounted_snapshots_readme_str); ++ return sysfs_emit(buf, mounted_snapshots_readme_str); + } + + NILFS_MOUNTED_SNAPSHOTS_RO_ATTR(README); +@@ -268,7 +268,7 @@ nilfs_checkpoints_checkpoints_number_show(struct nilfs_checkpoints_attr *attr, + + ncheckpoints = cpstat.cs_ncps; + +- return snprintf(buf, PAGE_SIZE, "%llu\n", ncheckpoints); ++ return sysfs_emit(buf, "%llu\n", ncheckpoints); + } + + static ssize_t +@@ -291,7 +291,7 @@ nilfs_checkpoints_snapshots_number_show(struct nilfs_checkpoints_attr *attr, + + nsnapshots = cpstat.cs_nsss; + +- return snprintf(buf, PAGE_SIZE, "%llu\n", nsnapshots); ++ return sysfs_emit(buf, "%llu\n", nsnapshots); + } + + static ssize_t +@@ -305,7 +305,7 @@ nilfs_checkpoints_last_seg_checkpoint_show(struct nilfs_checkpoints_attr *attr, + last_cno = nilfs->ns_last_cno; + spin_unlock(&nilfs->ns_last_segment_lock); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", last_cno); ++ return sysfs_emit(buf, "%llu\n", last_cno); + } + + static ssize_t +@@ -319,7 +319,7 @@ nilfs_checkpoints_next_checkpoint_show(struct nilfs_checkpoints_attr *attr, + cno = nilfs->ns_cno; + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", cno); ++ return sysfs_emit(buf, "%llu\n", cno); + } + + static const char 
checkpoints_readme_str[] = +@@ -335,7 +335,7 @@ static ssize_t + nilfs_checkpoints_README_show(struct nilfs_checkpoints_attr *attr, + struct the_nilfs *nilfs, char *buf) + { +- return snprintf(buf, PAGE_SIZE, checkpoints_readme_str); ++ return sysfs_emit(buf, checkpoints_readme_str); + } + + NILFS_CHECKPOINTS_RO_ATTR(checkpoints_number); +@@ -366,7 +366,7 @@ nilfs_segments_segments_number_show(struct nilfs_segments_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- return snprintf(buf, PAGE_SIZE, "%lu\n", nilfs->ns_nsegments); ++ return sysfs_emit(buf, "%lu\n", nilfs->ns_nsegments); + } + + static ssize_t +@@ -374,7 +374,7 @@ nilfs_segments_blocks_per_segment_show(struct nilfs_segments_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- return snprintf(buf, PAGE_SIZE, "%lu\n", nilfs->ns_blocks_per_segment); ++ return sysfs_emit(buf, "%lu\n", nilfs->ns_blocks_per_segment); + } + + static ssize_t +@@ -388,7 +388,7 @@ nilfs_segments_clean_segments_show(struct nilfs_segments_attr *attr, + ncleansegs = nilfs_sufile_get_ncleansegs(nilfs->ns_sufile); + up_read(&NILFS_MDT(nilfs->ns_dat)->mi_sem); + +- return snprintf(buf, PAGE_SIZE, "%lu\n", ncleansegs); ++ return sysfs_emit(buf, "%lu\n", ncleansegs); + } + + static ssize_t +@@ -408,7 +408,7 @@ nilfs_segments_dirty_segments_show(struct nilfs_segments_attr *attr, + return err; + } + +- return snprintf(buf, PAGE_SIZE, "%llu\n", sustat.ss_ndirtysegs); ++ return sysfs_emit(buf, "%llu\n", sustat.ss_ndirtysegs); + } + + static const char segments_readme_str[] = +@@ -424,7 +424,7 @@ nilfs_segments_README_show(struct nilfs_segments_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- return snprintf(buf, PAGE_SIZE, segments_readme_str); ++ return sysfs_emit(buf, segments_readme_str); + } + + NILFS_SEGMENTS_RO_ATTR(segments_number); +@@ -461,7 +461,7 @@ nilfs_segctor_last_pseg_block_show(struct nilfs_segctor_attr *attr, + last_pseg = nilfs->ns_last_pseg; + spin_unlock(&nilfs->ns_last_segment_lock); + +- return 
snprintf(buf, PAGE_SIZE, "%llu\n", ++ return sysfs_emit(buf, "%llu\n", + (unsigned long long)last_pseg); + } + +@@ -476,7 +476,7 @@ nilfs_segctor_last_seg_sequence_show(struct nilfs_segctor_attr *attr, + last_seq = nilfs->ns_last_seq; + spin_unlock(&nilfs->ns_last_segment_lock); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", last_seq); ++ return sysfs_emit(buf, "%llu\n", last_seq); + } + + static ssize_t +@@ -490,7 +490,7 @@ nilfs_segctor_last_seg_checkpoint_show(struct nilfs_segctor_attr *attr, + last_cno = nilfs->ns_last_cno; + spin_unlock(&nilfs->ns_last_segment_lock); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", last_cno); ++ return sysfs_emit(buf, "%llu\n", last_cno); + } + + static ssize_t +@@ -504,7 +504,7 @@ nilfs_segctor_current_seg_sequence_show(struct nilfs_segctor_attr *attr, + seg_seq = nilfs->ns_seg_seq; + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", seg_seq); ++ return sysfs_emit(buf, "%llu\n", seg_seq); + } + + static ssize_t +@@ -518,7 +518,7 @@ nilfs_segctor_current_last_full_seg_show(struct nilfs_segctor_attr *attr, + segnum = nilfs->ns_segnum; + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", segnum); ++ return sysfs_emit(buf, "%llu\n", segnum); + } + + static ssize_t +@@ -532,7 +532,7 @@ nilfs_segctor_next_full_seg_show(struct nilfs_segctor_attr *attr, + nextnum = nilfs->ns_nextnum; + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", nextnum); ++ return sysfs_emit(buf, "%llu\n", nextnum); + } + + static ssize_t +@@ -546,7 +546,7 @@ nilfs_segctor_next_pseg_offset_show(struct nilfs_segctor_attr *attr, + pseg_offset = nilfs->ns_pseg_offset; + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%lu\n", pseg_offset); ++ return sysfs_emit(buf, "%lu\n", pseg_offset); + } + + static ssize_t +@@ -560,7 +560,7 @@ nilfs_segctor_next_checkpoint_show(struct nilfs_segctor_attr *attr, + cno = nilfs->ns_cno; + up_read(&nilfs->ns_segctor_sem); + 
+- return snprintf(buf, PAGE_SIZE, "%llu\n", cno); ++ return sysfs_emit(buf, "%llu\n", cno); + } + + static ssize_t +@@ -588,7 +588,7 @@ nilfs_segctor_last_seg_write_time_secs_show(struct nilfs_segctor_attr *attr, + ctime = nilfs->ns_ctime; + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", ctime); ++ return sysfs_emit(buf, "%llu\n", ctime); + } + + static ssize_t +@@ -616,7 +616,7 @@ nilfs_segctor_last_nongc_write_time_secs_show(struct nilfs_segctor_attr *attr, + nongc_ctime = nilfs->ns_nongc_ctime; + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", nongc_ctime); ++ return sysfs_emit(buf, "%llu\n", nongc_ctime); + } + + static ssize_t +@@ -630,7 +630,7 @@ nilfs_segctor_dirty_data_blocks_count_show(struct nilfs_segctor_attr *attr, + ndirtyblks = atomic_read(&nilfs->ns_ndirtyblks); + up_read(&nilfs->ns_segctor_sem); + +- return snprintf(buf, PAGE_SIZE, "%u\n", ndirtyblks); ++ return sysfs_emit(buf, "%u\n", ndirtyblks); + } + + static const char segctor_readme_str[] = +@@ -667,7 +667,7 @@ static ssize_t + nilfs_segctor_README_show(struct nilfs_segctor_attr *attr, + struct the_nilfs *nilfs, char *buf) + { +- return snprintf(buf, PAGE_SIZE, segctor_readme_str); ++ return sysfs_emit(buf, segctor_readme_str); + } + + NILFS_SEGCTOR_RO_ATTR(last_pseg_block); +@@ -736,7 +736,7 @@ nilfs_superblock_sb_write_time_secs_show(struct nilfs_superblock_attr *attr, + sbwtime = nilfs->ns_sbwtime; + up_read(&nilfs->ns_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", sbwtime); ++ return sysfs_emit(buf, "%llu\n", sbwtime); + } + + static ssize_t +@@ -750,7 +750,7 @@ nilfs_superblock_sb_write_count_show(struct nilfs_superblock_attr *attr, + sbwcount = nilfs->ns_sbwcount; + up_read(&nilfs->ns_sem); + +- return snprintf(buf, PAGE_SIZE, "%u\n", sbwcount); ++ return sysfs_emit(buf, "%u\n", sbwcount); + } + + static ssize_t +@@ -764,7 +764,7 @@ nilfs_superblock_sb_update_frequency_show(struct nilfs_superblock_attr *attr, + 
sb_update_freq = nilfs->ns_sb_update_freq; + up_read(&nilfs->ns_sem); + +- return snprintf(buf, PAGE_SIZE, "%u\n", sb_update_freq); ++ return sysfs_emit(buf, "%u\n", sb_update_freq); + } + + static ssize_t +@@ -812,7 +812,7 @@ static ssize_t + nilfs_superblock_README_show(struct nilfs_superblock_attr *attr, + struct the_nilfs *nilfs, char *buf) + { +- return snprintf(buf, PAGE_SIZE, sb_readme_str); ++ return sysfs_emit(buf, sb_readme_str); + } + + NILFS_SUPERBLOCK_RO_ATTR(sb_write_time); +@@ -843,11 +843,17 @@ ssize_t nilfs_dev_revision_show(struct nilfs_dev_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- struct nilfs_super_block **sbp = nilfs->ns_sbp; +- u32 major = le32_to_cpu(sbp[0]->s_rev_level); +- u16 minor = le16_to_cpu(sbp[0]->s_minor_rev_level); ++ struct nilfs_super_block *raw_sb; ++ u32 major; ++ u16 minor; + +- return snprintf(buf, PAGE_SIZE, "%d.%d\n", major, minor); ++ down_read(&nilfs->ns_sem); ++ raw_sb = nilfs->ns_sbp[0]; ++ major = le32_to_cpu(raw_sb->s_rev_level); ++ minor = le16_to_cpu(raw_sb->s_minor_rev_level); ++ up_read(&nilfs->ns_sem); ++ ++ return sysfs_emit(buf, "%d.%d\n", major, minor); + } + + static +@@ -855,7 +861,7 @@ ssize_t nilfs_dev_blocksize_show(struct nilfs_dev_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- return snprintf(buf, PAGE_SIZE, "%u\n", nilfs->ns_blocksize); ++ return sysfs_emit(buf, "%u\n", nilfs->ns_blocksize); + } + + static +@@ -863,10 +869,15 @@ ssize_t nilfs_dev_device_size_show(struct nilfs_dev_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- struct nilfs_super_block **sbp = nilfs->ns_sbp; +- u64 dev_size = le64_to_cpu(sbp[0]->s_dev_size); ++ struct nilfs_super_block *raw_sb; ++ u64 dev_size; ++ ++ down_read(&nilfs->ns_sem); ++ raw_sb = nilfs->ns_sbp[0]; ++ dev_size = le64_to_cpu(raw_sb->s_dev_size); ++ up_read(&nilfs->ns_sem); + +- return snprintf(buf, PAGE_SIZE, "%llu\n", dev_size); ++ return sysfs_emit(buf, "%llu\n", dev_size); + } + + static +@@ -877,7 +888,7 @@ ssize_t 
nilfs_dev_free_blocks_show(struct nilfs_dev_attr *attr, + sector_t free_blocks = 0; + + nilfs_count_free_blocks(nilfs, &free_blocks); +- return snprintf(buf, PAGE_SIZE, "%llu\n", ++ return sysfs_emit(buf, "%llu\n", + (unsigned long long)free_blocks); + } + +@@ -886,9 +897,15 @@ ssize_t nilfs_dev_uuid_show(struct nilfs_dev_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- struct nilfs_super_block **sbp = nilfs->ns_sbp; ++ struct nilfs_super_block *raw_sb; ++ ssize_t len; + +- return snprintf(buf, PAGE_SIZE, "%pUb\n", sbp[0]->s_uuid); ++ down_read(&nilfs->ns_sem); ++ raw_sb = nilfs->ns_sbp[0]; ++ len = sysfs_emit(buf, "%pUb\n", raw_sb->s_uuid); ++ up_read(&nilfs->ns_sem); ++ ++ return len; + } + + static +@@ -896,10 +913,16 @@ ssize_t nilfs_dev_volume_name_show(struct nilfs_dev_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- struct nilfs_super_block **sbp = nilfs->ns_sbp; ++ struct nilfs_super_block *raw_sb; ++ ssize_t len; ++ ++ down_read(&nilfs->ns_sem); ++ raw_sb = nilfs->ns_sbp[0]; ++ len = scnprintf(buf, sizeof(raw_sb->s_volume_name), "%s\n", ++ raw_sb->s_volume_name); ++ up_read(&nilfs->ns_sem); + +- return scnprintf(buf, sizeof(sbp[0]->s_volume_name), "%s\n", +- sbp[0]->s_volume_name); ++ return len; + } + + static const char dev_readme_str[] = +@@ -916,7 +939,7 @@ static ssize_t nilfs_dev_README_show(struct nilfs_dev_attr *attr, + struct the_nilfs *nilfs, + char *buf) + { +- return snprintf(buf, PAGE_SIZE, dev_readme_str); ++ return sysfs_emit(buf, dev_readme_str); + } + + NILFS_DEV_RO_ATTR(revision); +@@ -1060,7 +1083,7 @@ void nilfs_sysfs_delete_device_group(struct the_nilfs *nilfs) + static ssize_t nilfs_feature_revision_show(struct kobject *kobj, + struct attribute *attr, char *buf) + { +- return snprintf(buf, PAGE_SIZE, "%d.%d\n", ++ return sysfs_emit(buf, "%d.%d\n", + NILFS_CURRENT_REV, NILFS_MINOR_REV); + } + +@@ -1073,7 +1096,7 @@ static ssize_t nilfs_feature_README_show(struct kobject *kobj, + struct attribute *attr, + char *buf) 
+ { +- return snprintf(buf, PAGE_SIZE, features_readme_str); ++ return sysfs_emit(buf, features_readme_str); + } + + NILFS_FEATURE_RO_ATTR(revision); +diff --git a/fs/squashfs/inode.c b/fs/squashfs/inode.c +index 24463145b3513..f31649080a881 100644 +--- a/fs/squashfs/inode.c ++++ b/fs/squashfs/inode.c +@@ -276,8 +276,13 @@ int squashfs_read_inode(struct inode *inode, long long ino) + if (err < 0) + goto failed_read; + +- set_nlink(inode, le32_to_cpu(sqsh_ino->nlink)); + inode->i_size = le32_to_cpu(sqsh_ino->symlink_size); ++ if (inode->i_size > PAGE_SIZE) { ++ ERROR("Corrupted symlink\n"); ++ return -EINVAL; ++ } ++ ++ set_nlink(inode, le32_to_cpu(sqsh_ino->nlink)); + inode->i_op = &squashfs_symlink_inode_ops; + inode_nohighmem(inode); + inode->i_data.a_ops = &squashfs_symlink_aops; +diff --git a/fs/udf/super.c b/fs/udf/super.c +index 46384284e7e06..a1962c93bd262 100644 +--- a/fs/udf/super.c ++++ b/fs/udf/super.c +@@ -86,6 +86,13 @@ enum { + #define UDF_MAX_LVID_NESTING 1000 + + enum { UDF_MAX_LINKS = 0xffff }; ++/* ++ * We limit filesize to 4TB. This is arbitrary as the on-disk format supports ++ * more but because the file space is described by a linked list of extents, ++ * each of which can have at most 1GB, the creation and handling of extents ++ * gets unusably slow beyond certain point... 
++ */ ++#define UDF_MAX_FILESIZE (1ULL << 42) + + /* These are the "meat" - everything else is stuffing */ + static int udf_fill_super(struct super_block *, void *, int); +@@ -1083,12 +1090,19 @@ static int udf_fill_partdesc_info(struct super_block *sb, + struct udf_part_map *map; + struct udf_sb_info *sbi = UDF_SB(sb); + struct partitionHeaderDesc *phd; ++ u32 sum; + int err; + + map = &sbi->s_partmaps[p_index]; + + map->s_partition_len = le32_to_cpu(p->partitionLength); /* blocks */ + map->s_partition_root = le32_to_cpu(p->partitionStartingLocation); ++ if (check_add_overflow(map->s_partition_root, map->s_partition_len, ++ &sum)) { ++ udf_err(sb, "Partition %d has invalid location %u + %u\n", ++ p_index, map->s_partition_root, map->s_partition_len); ++ return -EFSCORRUPTED; ++ } + + if (p->accessType == cpu_to_le32(PD_ACCESS_TYPE_READ_ONLY)) + map->s_partition_flags |= UDF_PART_FLAG_READ_ONLY; +@@ -1144,6 +1158,14 @@ static int udf_fill_partdesc_info(struct super_block *sb, + bitmap->s_extPosition = le32_to_cpu( + phd->unallocSpaceBitmap.extPosition); + map->s_partition_flags |= UDF_PART_FLAG_UNALLOC_BITMAP; ++ /* Check whether math over bitmap won't overflow. 
*/ ++ if (check_add_overflow(map->s_partition_len, ++ sizeof(struct spaceBitmapDesc) << 3, ++ &sum)) { ++ udf_err(sb, "Partition %d is too long (%u)\n", p_index, ++ map->s_partition_len); ++ return -EFSCORRUPTED; ++ } + udf_debug("unallocSpaceBitmap (part %d) @ %u\n", + p_index, bitmap->s_extPosition); + } +@@ -2308,7 +2330,7 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent) + ret = -ENOMEM; + goto error_out; + } +- sb->s_maxbytes = MAX_LFS_FILESIZE; ++ sb->s_maxbytes = UDF_MAX_FILESIZE; + sb->s_max_links = UDF_MAX_LINKS; + return 0; + +diff --git a/include/linux/i2c.h b/include/linux/i2c.h +index af2b799d7a665..e622d09d9eb80 100644 +--- a/include/linux/i2c.h ++++ b/include/linux/i2c.h +@@ -979,7 +979,7 @@ static inline int of_i2c_get_board_info(struct device *dev, + struct acpi_resource; + struct acpi_resource_i2c_serialbus; + +-#if IS_ENABLED(CONFIG_ACPI) ++#if IS_REACHABLE(CONFIG_ACPI) && IS_REACHABLE(CONFIG_I2C) + bool i2c_acpi_get_i2c_resource(struct acpi_resource *ares, + struct acpi_resource_i2c_serialbus **i2c); + u32 i2c_acpi_find_bus_speed(struct device *dev); +diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h +index b739507722998..12492c8a22b39 100644 +--- a/include/linux/ring_buffer.h ++++ b/include/linux/ring_buffer.h +@@ -135,8 +135,7 @@ void ring_buffer_read_finish(struct ring_buffer_iter *iter); + + struct ring_buffer_event * + ring_buffer_iter_peek(struct ring_buffer_iter *iter, u64 *ts); +-struct ring_buffer_event * +-ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts); ++void ring_buffer_iter_advance(struct ring_buffer_iter *iter); + void ring_buffer_iter_reset(struct ring_buffer_iter *iter); + int ring_buffer_iter_empty(struct ring_buffer_iter *iter); + +diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c +index 62a7a50750149..16ae868941211 100644 +--- a/kernel/cgroup/cgroup.c ++++ b/kernel/cgroup/cgroup.c +@@ -1783,9 +1783,9 @@ int rebind_subsystems(struct cgroup_root *dst_root, 
u16 ss_mask) + RCU_INIT_POINTER(scgrp->subsys[ssid], NULL); + rcu_assign_pointer(dcgrp->subsys[ssid], css); + ss->root = dst_root; +- css->cgroup = dcgrp; + + spin_lock_irq(&css_set_lock); ++ css->cgroup = dcgrp; + WARN_ON(!list_empty(&dcgrp->e_csets[ss->id])); + list_for_each_entry_safe(cset, cset_pos, &scgrp->e_csets[ss->id], + e_cset_node[ss->id]) { +diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c +index 1799427a384ae..c812575f04619 100644 +--- a/kernel/events/uprobes.c ++++ b/kernel/events/uprobes.c +@@ -1491,7 +1491,7 @@ static struct xol_area *__create_xol_area(unsigned long vaddr) + uprobe_opcode_t insn = UPROBE_SWBP_INSN; + struct xol_area *area; + +- area = kmalloc(sizeof(*area), GFP_KERNEL); ++ area = kzalloc(sizeof(*area), GFP_KERNEL); + if (unlikely(!area)) + goto out; + +@@ -1501,7 +1501,6 @@ static struct xol_area *__create_xol_area(unsigned long vaddr) + goto free_area; + + area->xol_mapping.name = "[uprobes]"; +- area->xol_mapping.fault = NULL; + area->xol_mapping.pages = area->pages; + area->pages[0] = alloc_page(GFP_HIGHUSER); + if (!area->pages[0]) +diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c +index 734698aec5f9e..7c727c4944e62 100644 +--- a/kernel/locking/rtmutex.c ++++ b/kernel/locking/rtmutex.c +@@ -1204,6 +1204,7 @@ __rt_mutex_slowlock(struct rt_mutex *lock, int state, + } + + static void rt_mutex_handle_deadlock(int res, int detect_deadlock, ++ struct rt_mutex *lock, + struct rt_mutex_waiter *w) + { + /* +@@ -1213,6 +1214,7 @@ static void rt_mutex_handle_deadlock(int res, int detect_deadlock, + if (res != -EDEADLOCK || detect_deadlock) + return; + ++ raw_spin_unlock_irq(&lock->wait_lock); + /* + * Yell lowdly and stop the task right here. 
+ */ +@@ -1268,7 +1270,7 @@ rt_mutex_slowlock(struct rt_mutex *lock, int state, + if (unlikely(ret)) { + __set_current_state(TASK_RUNNING); + remove_waiter(lock, &waiter); +- rt_mutex_handle_deadlock(ret, chwalk, &waiter); ++ rt_mutex_handle_deadlock(ret, chwalk, lock, &waiter); + } + + /* +diff --git a/kernel/smp.c b/kernel/smp.c +index be65b76cb8036..76de88dc16996 100644 +--- a/kernel/smp.c ++++ b/kernel/smp.c +@@ -813,6 +813,7 @@ int smp_call_on_cpu(unsigned int cpu, int (*func)(void *), void *par, bool phys) + + queue_work_on(cpu, system_wq, &sscs.work); + wait_for_completion(&sscs.done); ++ destroy_work_on_stack(&sscs.work); + + return sscs.ret; + } +diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c +index ad97515cd5a16..2011219c11a91 100644 +--- a/kernel/trace/ring_buffer.c ++++ b/kernel/trace/ring_buffer.c +@@ -4495,35 +4495,24 @@ ring_buffer_read_finish(struct ring_buffer_iter *iter) + EXPORT_SYMBOL_GPL(ring_buffer_read_finish); + + /** +- * ring_buffer_read - read the next item in the ring buffer by the iterator ++ * ring_buffer_iter_advance - advance the iterator to the next location + * @iter: The ring buffer iterator +- * @ts: The time stamp of the event read. + * +- * This reads the next event in the ring buffer and increments the iterator. ++ * Move the location of the iterator such that the next read will ++ * be the next location of the iterator. 
+ */ +-struct ring_buffer_event * +-ring_buffer_read(struct ring_buffer_iter *iter, u64 *ts) ++void ring_buffer_iter_advance(struct ring_buffer_iter *iter) + { +- struct ring_buffer_event *event; + struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer; + unsigned long flags; + + raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags); +- again: +- event = rb_iter_peek(iter, ts); +- if (!event) +- goto out; +- +- if (event->type_len == RINGBUF_TYPE_PADDING) +- goto again; + + rb_advance_iter(iter); +- out: +- raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); + +- return event; ++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); + } +-EXPORT_SYMBOL_GPL(ring_buffer_read); ++EXPORT_SYMBOL_GPL(ring_buffer_iter_advance); + + /** + * ring_buffer_size - return the size of the ring buffer (in bytes) +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 6fd7dca57dd91..67466563d86f1 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -3326,7 +3326,7 @@ static void trace_iterator_increment(struct trace_iterator *iter) + + iter->idx++; + if (buf_iter) +- ring_buffer_read(buf_iter, NULL); ++ ring_buffer_iter_advance(buf_iter); + } + + static struct trace_entry * +@@ -3486,7 +3486,9 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu) + if (ts >= iter->trace_buffer->time_start) + break; + entries++; +- ring_buffer_read(buf_iter, NULL); ++ ring_buffer_iter_advance(buf_iter); ++ /* This could be a big loop */ ++ cond_resched(); + } + + per_cpu_ptr(iter->trace_buffer->data, cpu)->skipped_entries = entries; +diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c +index 78af97163147b..f577c11720a42 100644 +--- a/kernel/trace/trace_functions_graph.c ++++ b/kernel/trace/trace_functions_graph.c +@@ -482,7 +482,7 @@ get_return_for_leaf(struct trace_iterator *iter, + + /* this is a leaf, now advance the iterator */ + if (ring_iter) +- ring_buffer_read(ring_iter, NULL); ++ 
ring_buffer_iter_advance(ring_iter); + + return next; + } +diff --git a/lib/generic-radix-tree.c b/lib/generic-radix-tree.c +index f25eb111c0516..34d3ac52de894 100644 +--- a/lib/generic-radix-tree.c ++++ b/lib/generic-radix-tree.c +@@ -131,6 +131,8 @@ void *__genradix_ptr_alloc(struct __genradix *radix, size_t offset, + if ((v = cmpxchg_release(&radix->root, r, new_root)) == r) { + v = new_root; + new_node = NULL; ++ } else { ++ new_node->children[0] = NULL; + } + } + +diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c +index b1d3248c02521..89e0a6808d301 100644 +--- a/net/bridge/br_fdb.c ++++ b/net/bridge/br_fdb.c +@@ -75,8 +75,9 @@ static inline unsigned long hold_time(const struct net_bridge *br) + static inline int has_expired(const struct net_bridge *br, + const struct net_bridge_fdb_entry *fdb) + { +- return !fdb->is_static && !fdb->added_by_external_learn && +- time_before_eq(fdb->updated + hold_time(br), jiffies); ++ return !test_bit(BR_FDB_STATIC, &fdb->flags) && ++ !test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags) && ++ time_before_eq(fdb->updated + hold_time(br), jiffies); + } + + static void fdb_rcu_free(struct rcu_head *head) +@@ -197,7 +198,7 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f, + { + trace_fdb_delete(br, f); + +- if (f->is_static) ++ if (test_bit(BR_FDB_STATIC, &f->flags)) + fdb_del_hw_addr(br, f->key.addr.addr); + + hlist_del_init_rcu(&f->fdb_node); +@@ -224,7 +225,7 @@ static void fdb_delete_local(struct net_bridge *br, + if (op != p && ether_addr_equal(op->dev->dev_addr, addr) && + (!vid || br_vlan_find(vg, vid))) { + f->dst = op; +- f->added_by_user = 0; ++ clear_bit(BR_FDB_ADDED_BY_USER, &f->flags); + return; + } + } +@@ -235,7 +236,7 @@ static void fdb_delete_local(struct net_bridge *br, + if (p && ether_addr_equal(br->dev->dev_addr, addr) && + (!vid || (v && br_vlan_should_use(v)))) { + f->dst = NULL; +- f->added_by_user = 0; ++ clear_bit(BR_FDB_ADDED_BY_USER, &f->flags); + return; + } + +@@ 
-250,7 +251,8 @@ void br_fdb_find_delete_local(struct net_bridge *br, + + spin_lock_bh(&br->hash_lock); + f = br_fdb_find(br, addr, vid); +- if (f && f->is_local && !f->added_by_user && f->dst == p) ++ if (f && test_bit(BR_FDB_LOCAL, &f->flags) && ++ !test_bit(BR_FDB_ADDED_BY_USER, &f->flags) && f->dst == p) + fdb_delete_local(br, p, f); + spin_unlock_bh(&br->hash_lock); + } +@@ -265,7 +267,8 @@ void br_fdb_changeaddr(struct net_bridge_port *p, const unsigned char *newaddr) + spin_lock_bh(&br->hash_lock); + vg = nbp_vlan_group(p); + hlist_for_each_entry(f, &br->fdb_list, fdb_node) { +- if (f->dst == p && f->is_local && !f->added_by_user) { ++ if (f->dst == p && test_bit(BR_FDB_LOCAL, &f->flags) && ++ !test_bit(BR_FDB_ADDED_BY_USER, &f->flags)) { + /* delete old one */ + fdb_delete_local(br, p, f); + +@@ -306,7 +309,8 @@ void br_fdb_change_mac_address(struct net_bridge *br, const u8 *newaddr) + + /* If old entry was unassociated with any port, then delete it. */ + f = br_fdb_find(br, br->dev->dev_addr, 0); +- if (f && f->is_local && !f->dst && !f->added_by_user) ++ if (f && test_bit(BR_FDB_LOCAL, &f->flags) && ++ !f->dst && !test_bit(BR_FDB_ADDED_BY_USER, &f->flags)) + fdb_delete_local(br, NULL, f); + + fdb_insert(br, NULL, newaddr, 0); +@@ -321,7 +325,8 @@ void br_fdb_change_mac_address(struct net_bridge *br, const u8 *newaddr) + if (!br_vlan_should_use(v)) + continue; + f = br_fdb_find(br, br->dev->dev_addr, v->vid); +- if (f && f->is_local && !f->dst && !f->added_by_user) ++ if (f && test_bit(BR_FDB_LOCAL, &f->flags) && ++ !f->dst && !test_bit(BR_FDB_ADDED_BY_USER, &f->flags)) + fdb_delete_local(br, NULL, f); + fdb_insert(br, NULL, newaddr, v->vid); + } +@@ -346,7 +351,8 @@ void br_fdb_cleanup(struct work_struct *work) + hlist_for_each_entry_rcu(f, &br->fdb_list, fdb_node) { + unsigned long this_timer; + +- if (f->is_static || f->added_by_external_learn) ++ if (test_bit(BR_FDB_STATIC, &f->flags) || ++ test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &f->flags)) + continue; + 
this_timer = f->updated + delay; + if (time_after(this_timer, now)) { +@@ -373,7 +379,7 @@ void br_fdb_flush(struct net_bridge *br) + + spin_lock_bh(&br->hash_lock); + hlist_for_each_entry_safe(f, tmp, &br->fdb_list, fdb_node) { +- if (!f->is_static) ++ if (!test_bit(BR_FDB_STATIC, &f->flags)) + fdb_delete(br, f, true); + } + spin_unlock_bh(&br->hash_lock); +@@ -397,10 +403,11 @@ void br_fdb_delete_by_port(struct net_bridge *br, + continue; + + if (!do_all) +- if (f->is_static || (vid && f->key.vlan_id != vid)) ++ if (test_bit(BR_FDB_STATIC, &f->flags) || ++ (vid && f->key.vlan_id != vid)) + continue; + +- if (f->is_local) ++ if (test_bit(BR_FDB_LOCAL, &f->flags)) + fdb_delete_local(br, p, f); + else + fdb_delete(br, f, true); +@@ -469,8 +476,8 @@ int br_fdb_fillbuf(struct net_bridge *br, void *buf, + fe->port_no = f->dst->port_no; + fe->port_hi = f->dst->port_no >> 8; + +- fe->is_local = f->is_local; +- if (!f->is_static) ++ fe->is_local = test_bit(BR_FDB_LOCAL, &f->flags); ++ if (!test_bit(BR_FDB_STATIC, &f->flags)) + fe->ageing_timer_value = jiffies_delta_to_clock_t(jiffies - f->updated); + ++fe; + ++num; +@@ -494,12 +501,12 @@ static struct net_bridge_fdb_entry *fdb_create(struct net_bridge *br, + memcpy(fdb->key.addr.addr, addr, ETH_ALEN); + fdb->dst = source; + fdb->key.vlan_id = vid; +- fdb->is_local = is_local; +- fdb->is_static = is_static; +- fdb->added_by_user = 0; +- fdb->added_by_external_learn = 0; ++ fdb->flags = 0; ++ if (is_local) ++ set_bit(BR_FDB_LOCAL, &fdb->flags); ++ if (is_static) ++ set_bit(BR_FDB_STATIC, &fdb->flags); + fdb->offloaded = 0; +- fdb->is_sticky = 0; + fdb->updated = fdb->used = jiffies; + if (rhashtable_lookup_insert_fast(&br->fdb_hash_tbl, + &fdb->rhnode, +@@ -526,7 +533,7 @@ static int fdb_insert(struct net_bridge *br, struct net_bridge_port *source, + /* it is okay to have multiple ports with same + * address, just use the first one. 
+ */ +- if (fdb->is_local) ++ if (test_bit(BR_FDB_LOCAL, &fdb->flags)) + return 0; + br_warn(br, "adding interface %s with same address as a received packet (addr:%pM, vlan:%u)\n", + source ? source->dev->name : br->dev->name, addr, vid); +@@ -572,7 +579,7 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source, + fdb = fdb_find_rcu(&br->fdb_hash_tbl, addr, vid); + if (likely(fdb)) { + /* attempt to update an entry for a local interface */ +- if (unlikely(fdb->is_local)) { ++ if (unlikely(test_bit(BR_FDB_LOCAL, &fdb->flags))) { + if (net_ratelimit()) + br_warn(br, "received packet on %s with own address as source address (addr:%pM, vlan:%u)\n", + source->dev->name, addr, vid); +@@ -580,17 +587,18 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source, + unsigned long now = jiffies; + + /* fastpath: update of existing entry */ +- if (unlikely(source != fdb->dst && !fdb->is_sticky)) { ++ if (unlikely(source != fdb->dst && ++ !test_bit(BR_FDB_STICKY, &fdb->flags))) { + fdb->dst = source; + fdb_modified = true; + /* Take over HW learned entry */ +- if (unlikely(fdb->added_by_external_learn)) +- fdb->added_by_external_learn = 0; ++ test_and_clear_bit(BR_FDB_ADDED_BY_EXT_LEARN, ++ &fdb->flags); + } + if (now != fdb->updated) + fdb->updated = now; + if (unlikely(added_by_user)) +- fdb->added_by_user = 1; ++ set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); + if (unlikely(fdb_modified)) { + trace_br_fdb_update(br, source, addr, vid, added_by_user); + fdb_notify(br, fdb, RTM_NEWNEIGH, true); +@@ -601,7 +609,7 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source, + fdb = fdb_create(br, source, addr, vid, 0, 0); + if (fdb) { + if (unlikely(added_by_user)) +- fdb->added_by_user = 1; ++ set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); + trace_br_fdb_update(br, source, addr, vid, + added_by_user); + fdb_notify(br, fdb, RTM_NEWNEIGH, true); +@@ -616,9 +624,9 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port 
*source, + static int fdb_to_nud(const struct net_bridge *br, + const struct net_bridge_fdb_entry *fdb) + { +- if (fdb->is_local) ++ if (test_bit(BR_FDB_LOCAL, &fdb->flags)) + return NUD_PERMANENT; +- else if (fdb->is_static) ++ else if (test_bit(BR_FDB_STATIC, &fdb->flags)) + return NUD_NOARP; + else if (has_expired(br, fdb)) + return NUD_STALE; +@@ -650,9 +658,9 @@ static int fdb_fill_info(struct sk_buff *skb, const struct net_bridge *br, + + if (fdb->offloaded) + ndm->ndm_flags |= NTF_OFFLOADED; +- if (fdb->added_by_external_learn) ++ if (test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) + ndm->ndm_flags |= NTF_EXT_LEARNED; +- if (fdb->is_sticky) ++ if (test_bit(BR_FDB_STICKY, &fdb->flags)) + ndm->ndm_flags |= NTF_STICKY; + + if (nla_put(skb, NDA_LLADDR, ETH_ALEN, &fdb->key.addr)) +@@ -799,7 +807,7 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source, + const u8 *addr, u16 state, u16 flags, u16 vid, + u8 ndm_flags) + { +- u8 is_sticky = !!(ndm_flags & NTF_STICKY); ++ bool is_sticky = !!(ndm_flags & NTF_STICKY); + struct net_bridge_fdb_entry *fdb; + bool modified = false; + +@@ -840,34 +848,28 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source, + + if (fdb_to_nud(br, fdb) != state) { + if (state & NUD_PERMANENT) { +- fdb->is_local = 1; +- if (!fdb->is_static) { +- fdb->is_static = 1; ++ set_bit(BR_FDB_LOCAL, &fdb->flags); ++ if (!test_and_set_bit(BR_FDB_STATIC, &fdb->flags)) + fdb_add_hw_addr(br, addr); +- } + } else if (state & NUD_NOARP) { +- fdb->is_local = 0; +- if (!fdb->is_static) { +- fdb->is_static = 1; ++ clear_bit(BR_FDB_LOCAL, &fdb->flags); ++ if (!test_and_set_bit(BR_FDB_STATIC, &fdb->flags)) + fdb_add_hw_addr(br, addr); +- } + } else { +- fdb->is_local = 0; +- if (fdb->is_static) { +- fdb->is_static = 0; ++ clear_bit(BR_FDB_LOCAL, &fdb->flags); ++ if (test_and_clear_bit(BR_FDB_STATIC, &fdb->flags)) + fdb_del_hw_addr(br, addr); +- } + } + + modified = true; + } + +- if (is_sticky != 
fdb->is_sticky) { +- fdb->is_sticky = is_sticky; ++ if (is_sticky != test_bit(BR_FDB_STICKY, &fdb->flags)) { ++ change_bit(BR_FDB_STICKY, &fdb->flags); + modified = true; + } + +- fdb->added_by_user = 1; ++ set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); + + fdb->used = jiffies; + if (modified) { +@@ -1064,7 +1066,7 @@ int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p) + rcu_read_lock(); + hlist_for_each_entry_rcu(f, &br->fdb_list, fdb_node) { + /* We only care for static entries */ +- if (!f->is_static) ++ if (!test_bit(BR_FDB_STATIC, &f->flags)) + continue; + err = dev_uc_add(p->dev, f->key.addr.addr); + if (err) +@@ -1078,7 +1080,7 @@ int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p) + rollback: + hlist_for_each_entry_rcu(tmp, &br->fdb_list, fdb_node) { + /* We only care for static entries */ +- if (!tmp->is_static) ++ if (!test_bit(BR_FDB_STATIC, &tmp->flags)) + continue; + if (tmp == f) + break; +@@ -1097,7 +1099,7 @@ void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p) + rcu_read_lock(); + hlist_for_each_entry_rcu(f, &br->fdb_list, fdb_node) { + /* We only care for static entries */ +- if (!f->is_static) ++ if (!test_bit(BR_FDB_STATIC, &f->flags)) + continue; + + dev_uc_del(p->dev, f->key.addr.addr); +@@ -1125,8 +1127,8 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, + goto err_unlock; + } + if (swdev_notify) +- fdb->added_by_user = 1; +- fdb->added_by_external_learn = 1; ++ set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); ++ set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags); + fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify); + } else { + fdb->updated = jiffies; +@@ -1136,17 +1138,15 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p, + modified = true; + } + +- if (fdb->added_by_external_learn) { ++ if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) { + /* Refresh entry */ + fdb->used = jiffies; +- } else if 
(!fdb->added_by_user) { +- /* Take over SW learned entry */ +- fdb->added_by_external_learn = 1; ++ } else { + modified = true; + } + + if (swdev_notify) +- fdb->added_by_user = 1; ++ set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags); + + if (modified) + fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify); +@@ -1168,7 +1168,7 @@ int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p, + spin_lock_bh(&br->hash_lock); + + fdb = br_fdb_find(br, addr, vid); +- if (fdb && fdb->added_by_external_learn) ++ if (fdb && test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) + fdb_delete(br, fdb, swdev_notify); + else + err = -ENOENT; +diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c +index 3d07dedd93bd0..22271b279063d 100644 +--- a/net/bridge/br_input.c ++++ b/net/bridge/br_input.c +@@ -158,7 +158,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb + if (dst) { + unsigned long now = jiffies; + +- if (dst->is_local) ++ if (test_bit(BR_FDB_LOCAL, &dst->flags)) + return br_pass_frame_up(skb); + + if (now != dst->used) +diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h +index c83d3a954b5f3..5ba4620727a7e 100644 +--- a/net/bridge/br_private.h ++++ b/net/bridge/br_private.h +@@ -172,6 +172,15 @@ struct net_bridge_vlan_group { + u16 pvid; + }; + ++/* bridge fdb flags */ ++enum { ++ BR_FDB_LOCAL, ++ BR_FDB_STATIC, ++ BR_FDB_STICKY, ++ BR_FDB_ADDED_BY_USER, ++ BR_FDB_ADDED_BY_EXT_LEARN, ++}; ++ + struct net_bridge_fdb_key { + mac_addr addr; + u16 vlan_id; +@@ -183,12 +192,8 @@ struct net_bridge_fdb_entry { + + struct net_bridge_fdb_key key; + struct hlist_node fdb_node; +- unsigned char is_local:1, +- is_static:1, +- is_sticky:1, +- added_by_user:1, +- added_by_external_learn:1, +- offloaded:1; ++ unsigned long flags; ++ unsigned char offloaded:1; + + /* write-heavy members should not affect lookups */ + unsigned long updated ____cacheline_aligned_in_smp; +diff --git a/net/bridge/br_switchdev.c 
b/net/bridge/br_switchdev.c +index 921310d3cbae4..5010fbf747783 100644 +--- a/net/bridge/br_switchdev.c ++++ b/net/bridge/br_switchdev.c +@@ -129,14 +129,16 @@ br_switchdev_fdb_notify(const struct net_bridge_fdb_entry *fdb, int type) + br_switchdev_fdb_call_notifiers(false, fdb->key.addr.addr, + fdb->key.vlan_id, + fdb->dst->dev, +- fdb->added_by_user, ++ test_bit(BR_FDB_ADDED_BY_USER, ++ &fdb->flags), + fdb->offloaded); + break; + case RTM_NEWNEIGH: + br_switchdev_fdb_call_notifiers(true, fdb->key.addr.addr, + fdb->key.vlan_id, + fdb->dst->dev, +- fdb->added_by_user, ++ test_bit(BR_FDB_ADDED_BY_USER, ++ &fdb->flags), + fdb->offloaded); + break; + } +diff --git a/net/can/bcm.c b/net/can/bcm.c +index 5cb4b6129263c..cc7fb30eafc0b 100644 +--- a/net/can/bcm.c ++++ b/net/can/bcm.c +@@ -1425,6 +1425,10 @@ static void bcm_notify(struct bcm_sock *bo, unsigned long msg, + + /* remove device reference, if this is our bound device */ + if (bo->bound && bo->ifindex == dev->ifindex) { ++#if IS_ENABLED(CONFIG_PROC_FS) ++ if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read) ++ remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir); ++#endif + bo->bound = 0; + bo->ifindex = 0; + notify_enodev = 1; +diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c +index e4f2790fd6410..cd2cbf5c52a30 100644 +--- a/net/ipv4/inet_hashtables.c ++++ b/net/ipv4/inet_hashtables.c +@@ -609,6 +609,7 @@ int __inet_hash(struct sock *sk, struct sock *osk) + if (err) + goto unlock; + } ++ sock_set_flag(sk, SOCK_RCU_FREE); + if (IS_ENABLED(CONFIG_IPV6) && sk->sk_reuseport && + sk->sk_family == AF_INET6) + __sk_nulls_add_node_tail_rcu(sk, &ilb->nulls_head); +@@ -616,7 +617,6 @@ int __inet_hash(struct sock *sk, struct sock *osk) + __sk_nulls_add_node_rcu(sk, &ilb->nulls_head); + inet_hash2(hashinfo, sk); + ilb->count++; +- sock_set_flag(sk, SOCK_RCU_FREE); + sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1); + unlock: + spin_unlock(&ilb->lock); +diff --git a/net/ipv4/tcp_bpf.c 
b/net/ipv4/tcp_bpf.c +index ca49d68a0e048..0002a6730d2e9 100644 +--- a/net/ipv4/tcp_bpf.c ++++ b/net/ipv4/tcp_bpf.c +@@ -506,7 +506,7 @@ static int tcp_bpf_sendmsg(struct sock *sk, struct msghdr *msg, size_t size) + err = sk_stream_error(sk, msg->msg_flags, err); + release_sock(sk); + sk_psock_put(sk, psock); +- return copied ? copied : err; ++ return copied > 0 ? copied : err; + } + + static int tcp_bpf_sendpage(struct sock *sk, struct page *page, int offset, +diff --git a/net/ipv6/ila/ila.h b/net/ipv6/ila/ila.h +index bb6fc0d54dae0..5c41f007c2354 100644 +--- a/net/ipv6/ila/ila.h ++++ b/net/ipv6/ila/ila.h +@@ -113,6 +113,7 @@ int ila_lwt_init(void); + void ila_lwt_fini(void); + + int ila_xlat_init_net(struct net *net); ++void ila_xlat_pre_exit_net(struct net *net); + void ila_xlat_exit_net(struct net *net); + + int ila_xlat_nl_cmd_add_mapping(struct sk_buff *skb, struct genl_info *info); +diff --git a/net/ipv6/ila/ila_main.c b/net/ipv6/ila/ila_main.c +index 257d2b681246a..8306f89d8e2a2 100644 +--- a/net/ipv6/ila/ila_main.c ++++ b/net/ipv6/ila/ila_main.c +@@ -71,6 +71,11 @@ static __net_init int ila_init_net(struct net *net) + return err; + } + ++static __net_exit void ila_pre_exit_net(struct net *net) ++{ ++ ila_xlat_pre_exit_net(net); ++} ++ + static __net_exit void ila_exit_net(struct net *net) + { + ila_xlat_exit_net(net); +@@ -78,6 +83,7 @@ static __net_exit void ila_exit_net(struct net *net) + + static struct pernet_operations ila_net_ops = { + .init = ila_init_net, ++ .pre_exit = ila_pre_exit_net, + .exit = ila_exit_net, + .id = &ila_net_id, + .size = sizeof(struct ila_net), +diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c +index 10f1367eb4ca0..1a0f580da6527 100644 +--- a/net/ipv6/ila/ila_xlat.c ++++ b/net/ipv6/ila/ila_xlat.c +@@ -618,6 +618,15 @@ int ila_xlat_init_net(struct net *net) + return 0; + } + ++void ila_xlat_pre_exit_net(struct net *net) ++{ ++ struct ila_net *ilan = net_generic(net, ila_net_id); ++ ++ if 
(ilan->xlat.hooks_registered) ++ nf_unregister_net_hooks(net, ila_nf_hook_ops, ++ ARRAY_SIZE(ila_nf_hook_ops)); ++} ++ + void ila_xlat_exit_net(struct net *net) + { + struct ila_net *ilan = net_generic(net, ila_net_id); +@@ -625,10 +634,6 @@ void ila_xlat_exit_net(struct net *net) + rhashtable_free_and_destroy(&ilan->xlat.rhash_table, ila_free_cb, NULL); + + free_bucket_spinlocks(ilan->xlat.locks); +- +- if (ilan->xlat.hooks_registered) +- nf_unregister_net_hooks(net, ila_nf_hook_ops, +- ARRAY_SIZE(ila_nf_hook_ops)); + } + + static int ila_xlat_addr(struct sk_buff *skb, bool sir2ila) +diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c +index 82f36beb2e766..0ce12a33ffda4 100644 +--- a/net/netfilter/nf_conncount.c ++++ b/net/netfilter/nf_conncount.c +@@ -310,7 +310,6 @@ insert_tree(struct net *net, + struct nf_conncount_rb *rbconn; + struct nf_conncount_tuple *conn; + unsigned int count = 0, gc_count = 0; +- u8 keylen = data->keylen; + bool do_gc = true; + + spin_lock_bh(&nf_conncount_locks[hash]); +@@ -322,7 +321,7 @@ insert_tree(struct net *net, + rbconn = rb_entry(*rbnode, struct nf_conncount_rb, node); + + parent = *rbnode; +- diff = key_diff(key, rbconn->key, keylen); ++ diff = key_diff(key, rbconn->key, data->keylen); + if (diff < 0) { + rbnode = &((*rbnode)->rb_left); + } else if (diff > 0) { +@@ -367,7 +366,7 @@ insert_tree(struct net *net, + + conn->tuple = *tuple; + conn->zone = *zone; +- memcpy(rbconn->key, key, sizeof(u32) * keylen); ++ memcpy(rbconn->key, key, sizeof(u32) * data->keylen); + + nf_conncount_list_init(&rbconn->list); + list_add(&conn->node, &rbconn->list.head); +@@ -392,7 +391,6 @@ count_tree(struct net *net, + struct rb_node *parent; + struct nf_conncount_rb *rbconn; + unsigned int hash; +- u8 keylen = data->keylen; + + hash = jhash2(key, data->keylen, conncount_rnd) % CONNCOUNT_SLOTS; + root = &data->root[hash]; +@@ -403,7 +401,7 @@ count_tree(struct net *net, + + rbconn = rb_entry(parent, struct nf_conncount_rb, 
node);
+
+-		diff = key_diff(key, rbconn->key, keylen);
++		diff = key_diff(key, rbconn->key, data->keylen);
+ 		if (diff < 0) {
+ 			parent = rcu_dereference_raw(parent->rb_left);
+ 		} else if (diff > 0) {
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 7ee0b57b3ed4c..9b4a9bdbeafd9 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -749,12 +749,15 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ 		 * queue, accept the collision, update the host tags.
+ 		 */
+ 		q->way_collisions++;
+-		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
+-			q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
+-			q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
+-		}
+ 		allocate_src = cake_dsrc(flow_mode);
+ 		allocate_dst = cake_ddst(flow_mode);
++
++		if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
++			if (allocate_src)
++				q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
++			if (allocate_dst)
++				q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
++		}
+ found:
+ 		/* reserve queue for future packets in same flow */
+ 		reduced_hash = outer_hash + k;
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 591ca93e2a01d..9913bf87e5989 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -733,11 +733,10 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+
+ 			err = qdisc_enqueue(skb, q->qdisc, &to_free);
+ 			kfree_skb_list(to_free);
+-			if (err != NET_XMIT_SUCCESS &&
+-			    net_xmit_drop_count(err)) {
+-				qdisc_qstats_drop(sch);
+-				qdisc_tree_reduce_backlog(sch, 1,
+-							  pkt_len);
++			if (err != NET_XMIT_SUCCESS) {
++				if (net_xmit_drop_count(err))
++					qdisc_qstats_drop(sch);
++				qdisc_tree_reduce_backlog(sch, 1, pkt_len);
+ 			}
+ 			goto tfifo_dequeue;
+ 		}
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 39d0a3c434829..e88b9e53dcb65 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2437,6 +2437,13 @@ static void xs_tcp_setup_socket(struct work_struct *work)
+ 	case -EALREADY:
+ 		xprt_unlock_connect(xprt, transport);
+ 		return;
++	case -EPERM:
++		/* Happens, for instance, if a BPF program is preventing
++		 * the connect. Remap the error so upper layers can better
++		 * deal with it.
++		 */
++		status = -ECONNREFUSED;
++		fallthrough;
+ 	case -EINVAL:
+ 		/* Happens, for instance, if the user specified a link
+ 		 * local IPv6 address without a scope-id.
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index ae6aae983b8cb..c47a734e1f2da 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -605,9 +605,6 @@ static void init_peercred(struct sock *sk)
+
+ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ {
+-	const struct cred *old_cred;
+-	struct pid *old_pid;
+-
+ 	if (sk < peersk) {
+ 		spin_lock(&sk->sk_peer_lock);
+ 		spin_lock_nested(&peersk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+@@ -615,16 +612,12 @@ static void copy_peercred(struct sock *sk, struct sock *peersk)
+ 		spin_lock(&peersk->sk_peer_lock);
+ 		spin_lock_nested(&sk->sk_peer_lock, SINGLE_DEPTH_NESTING);
+ 	}
+-	old_pid = sk->sk_peer_pid;
+-	old_cred = sk->sk_peer_cred;
++
+ 	sk->sk_peer_pid = get_pid(peersk->sk_peer_pid);
+ 	sk->sk_peer_cred = get_cred(peersk->sk_peer_cred);
+
+ 	spin_unlock(&sk->sk_peer_lock);
+ 	spin_unlock(&peersk->sk_peer_lock);
+-
+-	put_pid(old_pid);
+-	put_cred(old_cred);
+ }
+
+ static int unix_listen(struct socket *sock, int backlog)
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index c74882e3c3096..b28e652514e80 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -1003,7 +1003,7 @@ struct cfg80211_bss *cfg80211_get_bss(struct wiphy *wiphy,
+ }
+ EXPORT_SYMBOL(cfg80211_get_bss);
+
+-static void rb_insert_bss(struct cfg80211_registered_device *rdev,
++static bool rb_insert_bss(struct cfg80211_registered_device *rdev,
+ 			  struct cfg80211_internal_bss *bss)
+ {
+ 	struct rb_node **p = &rdev->bss_tree.rb_node;
+@@ -1019,7 +1019,7 @@ static void rb_insert_bss(struct cfg80211_registered_device *rdev,
+
+ 		if (WARN_ON(!cmp)) {
+ 			/* will sort of leak this BSS */
+-			return;
++			return false;
+ 		}
+
+ 		if (cmp < 0)
+@@ -1030,6 +1030,7 @@ static void rb_insert_bss(struct cfg80211_registered_device *rdev,
+
+ 	rb_link_node(&bss->rbn, parent, p);
+ 	rb_insert_color(&bss->rbn, &rdev->bss_tree);
++	return true;
+ }
+
+ static struct cfg80211_internal_bss *
+@@ -1056,6 +1057,34 @@ rb_find_bss(struct cfg80211_registered_device *rdev,
+ 	return NULL;
+ }
+
++static void cfg80211_insert_bss(struct cfg80211_registered_device *rdev,
++				struct cfg80211_internal_bss *bss)
++{
++	lockdep_assert_held(&rdev->bss_lock);
++
++	if (!rb_insert_bss(rdev, bss))
++		return;
++	list_add_tail(&bss->list, &rdev->bss_list);
++	rdev->bss_entries++;
++}
++
++static void cfg80211_rehash_bss(struct cfg80211_registered_device *rdev,
++				struct cfg80211_internal_bss *bss)
++{
++	lockdep_assert_held(&rdev->bss_lock);
++
++	rb_erase(&bss->rbn, &rdev->bss_tree);
++	if (!rb_insert_bss(rdev, bss)) {
++		list_del(&bss->list);
++		if (!list_empty(&bss->hidden_list))
++			list_del_init(&bss->hidden_list);
++		if (!list_empty(&bss->pub.nontrans_list))
++			list_del_init(&bss->pub.nontrans_list);
++		rdev->bss_entries--;
++	}
++	rdev->bss_generation++;
++}
++
+ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *rdev,
+ 				   struct cfg80211_internal_bss *new)
+ {
+@@ -1331,9 +1360,7 @@ cfg80211_bss_update(struct cfg80211_registered_device *rdev,
+ 			bss_ref_get(rdev, pbss);
+ 		}
+
+-		list_add_tail(&new->list, &rdev->bss_list);
+-		rdev->bss_entries++;
+-		rb_insert_bss(rdev, new);
++		cfg80211_insert_bss(rdev, new);
+ 		found = new;
+ 	}
+
+@@ -2142,10 +2169,7 @@ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
+ 		if (!WARN_ON(!__cfg80211_unlink_bss(rdev, new)))
+ 			rdev->bss_generation++;
+ 	}
+-
+-	rb_erase(&cbss->rbn, &rdev->bss_tree);
+-	rb_insert_bss(rdev, cbss);
+-	rdev->bss_generation++;
++	cfg80211_rehash_bss(rdev, cbss);
+
+ 	list_for_each_entry_safe(nontrans_bss, tmp,
+ 				 &cbss->pub.nontrans_list,
+@@ -2153,9 +2177,7 @@ void cfg80211_update_assoc_bss_entry(struct wireless_dev *wdev,
+ 		bss = container_of(nontrans_bss,
+ 				   struct cfg80211_internal_bss, pub);
+ 		bss->pub.channel = chan;
+-		rb_erase(&bss->rbn, &rdev->bss_tree);
+-		rb_insert_bss(rdev, bss);
+-		rdev->bss_generation++;
++		cfg80211_rehash_bss(rdev, bss);
+ 	}
+
+ done:
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 62736465ac82a..efe04f54be9ed 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -1593,6 +1593,10 @@ int __aafs_profile_mkdir(struct aa_profile *profile, struct dentry *parent)
+ 		struct aa_profile *p;
+ 		p = aa_deref_parent(profile);
+ 		dent = prof_dir(p);
++		if (!dent) {
++			error = -ENOENT;
++			goto fail2;
++		}
+ 		/* adding to parent that previously didn't have children */
+ 		dent = aafs_create_dir("profiles", dent);
+ 		if (IS_ERR(dent))
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 072ce1ef6efb7..a9582737c2301 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -3640,12 +3640,18 @@ static int smack_unix_stream_connect(struct sock *sock,
+ 		}
+ 	}
+
+-	/*
+-	 * Cross reference the peer labels for SO_PEERSEC.
+-	 */
+ 	if (rc == 0) {
++		/*
++		 * Cross reference the peer labels for SO_PEERSEC.
++		 */
+ 		nsp->smk_packet = ssp->smk_out;
+ 		ssp->smk_packet = osp->smk_out;
++
++		/*
++		 * new/child/established socket must inherit listening socket labels
++		 */
++		nsp->smk_out = osp->smk_out;
++		nsp->smk_in = osp->smk_in;
+ 	}
+
+ 	return rc;
+@@ -4196,7 +4202,7 @@ static int smack_inet_conn_request(struct sock *sk, struct sk_buff *skb,
+ 	rcu_read_unlock();
+
+ 	if (hskp == NULL)
+-		rc = netlbl_req_setattr(req, &skp->smk_netlabel);
++		rc = netlbl_req_setattr(req, &ssp->smk_out->smk_netlabel);
+ 	else
+ 		netlbl_req_delattr(req);
+
+diff --git a/sound/hda/hdmi_chmap.c b/sound/hda/hdmi_chmap.c
+index 2efee794cac67..79ccec2da387c 100644
+--- a/sound/hda/hdmi_chmap.c
++++ b/sound/hda/hdmi_chmap.c
+@@ -753,6 +753,20 @@ static int hdmi_chmap_ctl_get(struct snd_kcontrol *kcontrol,
+ 	return 0;
+ }
+
++/* a simple sanity check for input values to chmap kcontrol */
++static int chmap_value_check(struct hdac_chmap *hchmap,
++			     const struct snd_ctl_elem_value *ucontrol)
++{
++	int i;
++
++	for (i = 0; i < hchmap->channels_max; i++) {
++		if (ucontrol->value.integer.value[i] < 0 ||
++		    ucontrol->value.integer.value[i] > SNDRV_CHMAP_LAST)
++			return -EINVAL;
++	}
++	return 0;
++}
++
+ static int hdmi_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ 			      struct snd_ctl_elem_value *ucontrol)
+ {
+@@ -764,6 +778,10 @@ static int hdmi_chmap_ctl_put(struct snd_kcontrol *kcontrol,
+ 	unsigned char chmap[8], per_pin_chmap[8];
+ 	int i, err, ca, prepared = 0;
+
++	err = chmap_value_check(hchmap, ucontrol);
++	if (err < 0)
++		return err;
++
+ 	/* No monitor is connected in dyn_pcm_assign.
+ 	 * It's invalid to setup the chmap
+ 	 */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index ba70e053c3a2c..d0f26d7ed8611 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -217,6 +217,7 @@ enum {
+ 	CXT_FIXUP_HEADSET_MIC,
+ 	CXT_FIXUP_HP_MIC_NO_PRESENCE,
+ 	CXT_PINCFG_SWS_JS201D,
++	CXT_PINCFG_TOP_SPEAKER,
+ };
+
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -871,6 +872,13 @@ static const struct hda_fixup cxt_fixups[] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = cxt_pincfg_sws_js201d,
+ 	},
++	[CXT_PINCFG_TOP_SPEAKER] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x1d, 0x82170111 },
++			{ }
++		},
++	},
+ };
+
+ static const struct snd_pci_quirk cxt5045_fixups[] = {
+@@ -965,6 +973,8 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ 	SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+ 	SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
+ 	SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205),
++	SND_PCI_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
++	SND_PCI_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
+ 	{}
+ };
+
+@@ -983,6 +993,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ 	{ .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" },
+ 	{ .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
+ 	{ .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" },
++	{ .id = CXT_PINCFG_TOP_SPEAKER, .name = "sirius-top-speaker" },
+ 	{}
+ };
+
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 56c9c4189f269..9f764d92469e6 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -4003,6 +4003,7 @@ static int snd_soc_dai_link_event(struct snd_soc_dapm_widget *w,
+
+ 	case SND_SOC_DAPM_POST_PMD:
+ 		kfree(substream->runtime);
++		substream->runtime = NULL;
+ 		break;
+
+ 	default:
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 870b002293535..df8a1cd091930 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -993,6 +993,8 @@ static int soc_tplg_denum_create_values(struct soc_enum *se,
+ 		se->dobj.control.dvalues[i] = le32_to_cpu(ec->values[i]);
+ 	}
+
++	se->items = le32_to_cpu(ec->items);
++	se->values = (const unsigned int *)se->dobj.control.dvalues;
+ 	return 0;
+ }
+
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index b8849812449c3..98e34c5172673 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -4754,7 +4754,7 @@ __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
+ struct bpf_map *
+ bpf_map__next(const struct bpf_map *prev, const struct bpf_object *obj)
+ {
+-	if (prev == NULL)
++	if (prev == NULL && obj != NULL)
+ 		return obj->maps;
+
+ 	return __bpf_map__iter(prev, obj, 1);
+@@ -4763,7 +4763,7 @@ bpf_map__next(const struct bpf_map *prev, const struct bpf_object *obj)
+ struct bpf_map *
+ bpf_map__prev(const struct bpf_map *next, const struct bpf_object *obj)
+ {
+-	if (next == NULL) {
++	if (next == NULL && obj != NULL) {
+ 		if (!obj->nr_maps)
+ 			return NULL;
+ 		return obj->maps + obj->nr_maps - 1;