From: "Arisu Tachibana" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:5.10 commit in: /
Date: Fri, 18 Jul 2025 12:07:29 +0000 (UTC)
Message-ID: <1752840434.634810f32fe586502d38e8ed3d78e4d20d4a01d9.alicef@gentoo>
commit: 634810f32fe586502d38e8ed3d78e4d20d4a01d9
Author: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Jul 18 12:07:14 2025 +0000
Commit: Arisu Tachibana <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Jul 18 12:07:14 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=634810f3
Linux patch 5.10.240
Signed-off-by: Arisu Tachibana <alicef <AT> gentoo.org>
0000_README | 4 +
1239_linux-5.10.240.patch | 7977 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7981 insertions(+)
diff --git a/0000_README b/0000_README
index 68ba8aa5..ae0aa0e6 100644
--- a/0000_README
+++ b/0000_README
@@ -999,6 +999,10 @@ Patch: 1238_linux-5.10.239.patch
From: https://www.kernel.org
Desc: Linux 5.10.239
+Patch: 1239_linux-5.10.240.patch
+From: https://www.kernel.org
+Desc: Linux 5.10.240
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1239_linux-5.10.240.patch b/1239_linux-5.10.240.patch
new file mode 100644
index 00000000..1313c76c
--- /dev/null
+++ b/1239_linux-5.10.240.patch
@@ -0,0 +1,7977 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 2a273bfebed057..bf2b83e9c07d1f 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -502,6 +502,7 @@ Description: information about CPUs heterogeneity.
+
+ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/gather_data_sampling
++ /sys/devices/system/cpu/vulnerabilities/indirect_target_selection
+ /sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ /sys/devices/system/cpu/vulnerabilities/l1tf
+ /sys/devices/system/cpu/vulnerabilities/mds
+@@ -513,6 +514,7 @@ What: /sys/devices/system/cpu/vulnerabilities
+ /sys/devices/system/cpu/vulnerabilities/spectre_v1
+ /sys/devices/system/cpu/vulnerabilities/spectre_v2
+ /sys/devices/system/cpu/vulnerabilities/srbds
++ /sys/devices/system/cpu/vulnerabilities/tsa
+ /sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ Date: January 2018
+ Contact: Linux kernel mailing list <linux-kernel@vger.kernel.org>
+diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs
+index adc0d0e9160780..00d8bd574d1a08 100644
+--- a/Documentation/ABI/testing/sysfs-driver-ufs
++++ b/Documentation/ABI/testing/sysfs-driver-ufs
+@@ -655,7 +655,7 @@ Description: This file shows the thin provisioning type. This is one of
+
+ The file is read only.
+
+-What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resourse_count
++What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resource_count
+ Date: February 2018
+ Contact: Stanislav Nijnikov <stanislav.nijnikov@wdc.com>
+ Description: This file shows the total physical memory resources. This is
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index e020d1637e1c48..04a7f9fea3f21a 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -19,3 +19,4 @@ are configurable at compile, boot or run time.
+ gather_data_sampling.rst
+ srso
+ reg-file-data-sampling
++ indirect-target-selection
+diff --git a/Documentation/admin-guide/hw-vuln/indirect-target-selection.rst b/Documentation/admin-guide/hw-vuln/indirect-target-selection.rst
+new file mode 100644
+index 00000000000000..4788e14ebce09a
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/indirect-target-selection.rst
+@@ -0,0 +1,156 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++Indirect Target Selection (ITS)
++===============================
++
++ITS is a vulnerability in some Intel CPUs that support Enhanced IBRS and were
++released before Alder Lake. ITS may allow an attacker to control the prediction
++of indirect branches and RETs located in the lower half of a cacheline.
++
++ITS is assigned CVE-2024-28956 with a CVSS score of 4.7 (Medium).
++
++Scope of Impact
++---------------
++- **eIBRS Guest/Host Isolation**: Indirect branches in KVM/kernel may still be
++ predicted with unintended target corresponding to a branch in the guest.
++
++- **Intra-Mode BTI**: In-kernel training such as through cBPF or other native
++ gadgets.
++
++- **Indirect Branch Prediction Barrier (IBPB)**: After an IBPB, indirect
++ branches may still be predicted with targets corresponding to direct branches
++ executed prior to the IBPB. This is fixed by the IPU 2025.1 microcode, which
++ should be available via distro updates. Alternatively microcode can be
++ obtained from Intel's github repository [#f1]_.
++
++Affected CPUs
++-------------
++Below is the list of ITS affected CPUs [#f2]_ [#f3]_:
++
++ ======================== ============ ==================== ===============
++ Common name Family_Model eIBRS Intra-mode BTI
++ Guest/Host Isolation
++ ======================== ============ ==================== ===============
++ SKYLAKE_X (step >= 6) 06_55H Affected Affected
++ ICELAKE_X 06_6AH Not affected Affected
++ ICELAKE_D 06_6CH Not affected Affected
++ ICELAKE_L 06_7EH Not affected Affected
++ TIGERLAKE_L 06_8CH Not affected Affected
++ TIGERLAKE 06_8DH Not affected Affected
++ KABYLAKE_L (step >= 12) 06_8EH Affected Affected
++ KABYLAKE (step >= 13) 06_9EH Affected Affected
++ COMETLAKE 06_A5H Affected Affected
++ COMETLAKE_L 06_A6H Affected Affected
++ ROCKETLAKE 06_A7H Not affected Affected
++ ======================== ============ ==================== ===============
++
++- All affected CPUs enumerate Enhanced IBRS feature.
++- IBPB isolation is affected on all ITS affected CPUs, and need a microcode
++ update for mitigation.
++- None of the affected CPUs enumerate BHI_CTRL which was introduced in Golden
++ Cove (Alder Lake and Sapphire Rapids). This can help guests to determine the
++ host's affected status.
++- Intel Atom CPUs are not affected by ITS.
++
++Mitigation
++----------
++As only the indirect branches and RETs that have their last byte of instruction
++in the lower half of the cacheline are vulnerable to ITS, the basic idea behind
++the mitigation is to not allow indirect branches in the lower half.
++
++This is achieved by relying on existing retpoline support in the kernel, and in
++compilers. ITS-vulnerable retpoline sites are runtime patched to point to newly
++added ITS-safe thunks. These safe thunks consists of indirect branch in the
++second half of the cacheline. Not all retpoline sites are patched to thunks, if
++a retpoline site is evaluated to be ITS-safe, it is replaced with an inline
++indirect branch.
++
++Dynamic thunks
++~~~~~~~~~~~~~~
++From a dynamically allocated pool of safe-thunks, each vulnerable site is
++replaced with a new thunk, such that they get a unique address. This could
++improve the branch prediction accuracy. Also, it is a defense-in-depth measure
++against aliasing.
++
++Note, for simplicity, indirect branches in eBPF programs are always replaced
++with a jump to a static thunk in __x86_indirect_its_thunk_array. If required,
++in future this can be changed to use dynamic thunks.
++
++All vulnerable RETs are replaced with a static thunk, they do not use dynamic
++thunks. This is because RETs get their prediction from RSB mostly that does not
++depend on source address. RETs that underflow RSB may benefit from dynamic
++thunks. But, RETs significantly outnumber indirect branches, and any benefit
++from a unique source address could be outweighed by the increased icache
++footprint and iTLB pressure.
++
++Retpoline
++~~~~~~~~~
++Retpoline sequence also mitigates ITS-unsafe indirect branches. For this
++reason, when retpoline is enabled, ITS mitigation only relocates the RETs to
++safe thunks. Unless user requested the RSB-stuffing mitigation.
++
++Mitigation in guests
++^^^^^^^^^^^^^^^^^^^^
++All guests deploy ITS mitigation by default, irrespective of eIBRS enumeration
++and Family/Model of the guest. This is because eIBRS feature could be hidden
++from a guest. One exception to this is when a guest enumerates BHI_DIS_S, which
++indicates that the guest is running on an unaffected host.
++
++To prevent guests from unnecessarily deploying the mitigation on unaffected
++platforms, Intel has defined ITS_NO bit(62) in MSR IA32_ARCH_CAPABILITIES. When
++a guest sees this bit set, it should not enumerate the ITS bug. Note, this bit
++is not set by any hardware, but is **intended for VMMs to synthesize** it for
++guests as per the host's affected status.
++
++Mitigation options
++^^^^^^^^^^^^^^^^^^
++The ITS mitigation can be controlled using the "indirect_target_selection"
++kernel parameter. The available options are:
++
++ ======== ===================================================================
++ on (default) Deploy the "Aligned branch/return thunks" mitigation.
++ If spectre_v2 mitigation enables retpoline, aligned-thunks are only
++ deployed for the affected RET instructions. Retpoline mitigates
++ indirect branches.
++
++ off Disable ITS mitigation.
++
++ vmexit Equivalent to "=on" if the CPU is affected by guest/host isolation
++ part of ITS. Otherwise, mitigation is not deployed. This option is
++ useful when host userspace is not in the threat model, and only
++ attacks from guest to host are considered.
++
++ force Force the ITS bug and deploy the default mitigation.
++ ======== ===================================================================
++
++Sysfs reporting
++---------------
++
++The sysfs file showing ITS mitigation status is:
++
++ /sys/devices/system/cpu/vulnerabilities/indirect_target_selection
++
++Note, microcode mitigation status is not reported in this file.
++
++The possible values in this file are:
++
++.. list-table::
++
++ * - Not affected
++ - The processor is not vulnerable.
++ * - Vulnerable
++ - System is vulnerable and no mitigation has been applied.
++ * - Vulnerable, KVM: Not affected
++ - System is vulnerable to intra-mode BTI, but not affected by eIBRS
++ guest/host isolation.
++ * - Mitigation: Aligned branch/return thunks
++ - The mitigation is enabled, affected indirect branches and RETs are
++ relocated to safe thunks.
++
++References
++----------
++.. [#f1] Microcode repository - https://github.com/intel/Intel-Linux-Processor-Microcode-Data-Files
++
++.. [#f2] Affected Processors list - https://www.intel.com/content/www/us/en/developer/topic-technology/software-security-guidance/processors-affected-consolidated-product-cpu-model.html
++
++.. [#f3] Affected Processors list (machine readable) - https://github.com/intel/Intel-affected-processor-list
+diff --git a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+index c98fd11907cc87..e916dc232b0f0c 100644
+--- a/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
++++ b/Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst
+@@ -157,9 +157,7 @@ This is achieved by using the otherwise unused and obsolete VERW instruction in
+ combination with a microcode update. The microcode clears the affected CPU
+ buffers when the VERW instruction is executed.
+
+-Kernel reuses the MDS function to invoke the buffer clearing:
+-
+- mds_clear_cpu_buffers()
++Kernel does the buffer clearing with x86_clear_cpu_buffers().
+
+ On MDS affected CPUs, the kernel already invokes CPU buffer clear on
+ kernel/userspace, hypervisor/guest and C-state (idle) transitions. No
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 1eff151699830d..bbe6c23a577860 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1851,6 +1851,20 @@
+ different crypto accelerators. This option can be used
+ to achieve best performance for particular HW.
+
++ indirect_target_selection= [X86,Intel] Mitigation control for Indirect
++ Target Selection(ITS) bug in Intel CPUs. Updated
++ microcode is also required for a fix in IBPB.
++
++ on: Enable mitigation (default).
++ off: Disable mitigation.
++ force: Force the ITS bug and deploy default
++ mitigation.
++ vmexit: Only deploy mitigation if CPU is affected by
++ guest/host isolation part of ITS.
++
++ For details see:
++ Documentation/admin-guide/hw-vuln/indirect-target-selection.rst
++
+ init= [KNL]
+ Format: <full_path>
+ Run specified binary instead of /sbin/init as init
+@@ -2938,6 +2952,7 @@
+ improves system performance, but it may also
+ expose users to several CPU vulnerabilities.
+ Equivalent to: gather_data_sampling=off [X86]
++ indirect_target_selection=off [X86]
+ kpti=0 [ARM64]
+ kvm.nx_huge_pages=off [X86]
+ l1tf=off [X86]
+@@ -5604,6 +5619,19 @@
+ See Documentation/admin-guide/mm/transhuge.rst
+ for more details.
+
++ tsa= [X86] Control mitigation for Transient Scheduler
++ Attacks on AMD CPUs. Search the following in your
++ favourite search engine for more details:
++
++ "Technical guidance for mitigating transient scheduler
++ attacks".
++
++ off - disable the mitigation
++ on - enable the mitigation (default)
++ user - mitigate only user/kernel transitions
++ vm - mitigate only guest/host transitions
++
++
+ tsc= Disable clocksource stability checks for TSC.
+ Format: <string>
+ [x86] reliable: mark tsc clocksource as reliable, this
+diff --git a/Documentation/devicetree/bindings/serial/8250.yaml b/Documentation/devicetree/bindings/serial/8250.yaml
+index 460cb546c54a90..de1a869dd96f8e 100644
+--- a/Documentation/devicetree/bindings/serial/8250.yaml
++++ b/Documentation/devicetree/bindings/serial/8250.yaml
+@@ -39,7 +39,7 @@ allOf:
+ - ns16550
+ - ns16550a
+ then:
+- anyOf:
++ oneOf:
+ - required: [ clock-frequency ]
+ - required: [ clocks ]
+
+diff --git a/Makefile b/Makefile
+index 9ffd9397399dd8..cff26a5d22bbed 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 10
+-SUBLEVEL = 239
++SUBLEVEL = 240
+ EXTRAVERSION =
+ NAME = Dare mighty things
+
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 78f9fb638c9cd0..b584bf200619f3 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1459,7 +1459,8 @@ int pud_free_pmd_page(pud_t *pudp, unsigned long addr)
+ next = addr;
+ end = addr + PUD_SIZE;
+ do {
+- pmd_free_pte_page(pmdp, next);
++ if (pmd_present(READ_ONCE(*pmdp)))
++ pmd_free_pte_page(pmdp, next);
+ } while (pmdp++, next += PMD_SIZE, next != end);
+
+ pud_clear(pudp);
+diff --git a/arch/powerpc/include/uapi/asm/ioctls.h b/arch/powerpc/include/uapi/asm/ioctls.h
+index 2c145da3b774a1..b5211e413829a2 100644
+--- a/arch/powerpc/include/uapi/asm/ioctls.h
++++ b/arch/powerpc/include/uapi/asm/ioctls.h
+@@ -23,10 +23,10 @@
+ #define TCSETSW _IOW('t', 21, struct termios)
+ #define TCSETSF _IOW('t', 22, struct termios)
+
+-#define TCGETA _IOR('t', 23, struct termio)
+-#define TCSETA _IOW('t', 24, struct termio)
+-#define TCSETAW _IOW('t', 25, struct termio)
+-#define TCSETAF _IOW('t', 28, struct termio)
++#define TCGETA 0x40147417 /* _IOR('t', 23, struct termio) */
++#define TCSETA 0x80147418 /* _IOW('t', 24, struct termio) */
++#define TCSETAW 0x80147419 /* _IOW('t', 25, struct termio) */
++#define TCSETAF 0x8014741c /* _IOW('t', 28, struct termio) */
+
+ #define TCSBRK _IO('t', 29)
+ #define TCXONC _IO('t', 30)
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index 39ffcd4389f10c..92f2426d87970c 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -23,7 +23,7 @@ endif
+ aflags_dwarf := -Wa,-gdwarf-2
+ KBUILD_AFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -D__ASSEMBLY__
+ KBUILD_AFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),$(aflags_dwarf))
+-KBUILD_CFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -O2
++KBUILD_CFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -O2 -std=gnu11
+ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-asynchronous-unwind-tables
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index a93c9aba834be1..955f113cf3200b 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -20,7 +20,7 @@ GCOV_PROFILE := n
+ UBSAN_SANITIZE := n
+ KASAN_SANITIZE := n
+
+-KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes
++KBUILD_CFLAGS := -std=gnu11 -fno-strict-aliasing -Wall -Wstrict-prototypes
+ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -c -MD -Os -m64 -msoft-float -fno-common
+diff --git a/arch/um/drivers/ubd_user.c b/arch/um/drivers/ubd_user.c
+index a1afe414ce4814..fb5b1e7c133d86 100644
+--- a/arch/um/drivers/ubd_user.c
++++ b/arch/um/drivers/ubd_user.c
+@@ -41,7 +41,7 @@ int start_io_thread(unsigned long sp, int *fd_out)
+ *fd_out = fds[1];
+
+ err = os_set_fd_block(*fd_out, 0);
+- err = os_set_fd_block(kernel_fd, 0);
++ err |= os_set_fd_block(kernel_fd, 0);
+ if (err) {
+ printk("start_io_thread - failed to set nonblocking I/O.\n");
+ goto out_close;
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index da05bfdaeb1dbf..a37007e42265ac 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -1600,35 +1600,19 @@ static void vector_eth_configure(
+
+ device->dev = dev;
+
+- *vp = ((struct vector_private)
+- {
+- .list = LIST_HEAD_INIT(vp->list),
+- .dev = dev,
+- .unit = n,
+- .options = get_transport_options(def),
+- .rx_irq = 0,
+- .tx_irq = 0,
+- .parsed = def,
+- .max_packet = get_mtu(def) + ETH_HEADER_OTHER,
+- /* TODO - we need to calculate headroom so that ip header
+- * is 16 byte aligned all the time
+- */
+- .headroom = get_headroom(def),
+- .form_header = NULL,
+- .verify_header = NULL,
+- .header_rxbuffer = NULL,
+- .header_txbuffer = NULL,
+- .header_size = 0,
+- .rx_header_size = 0,
+- .rexmit_scheduled = false,
+- .opened = false,
+- .transport_data = NULL,
+- .in_write_poll = false,
+- .coalesce = 2,
+- .req_size = get_req_size(def),
+- .in_error = false,
+- .bpf = NULL
+- });
++ INIT_LIST_HEAD(&vp->list);
++ vp->dev = dev;
++ vp->unit = n;
++ vp->options = get_transport_options(def);
++ vp->parsed = def;
++ vp->max_packet = get_mtu(def) + ETH_HEADER_OTHER;
++ /*
++ * TODO - we need to calculate headroom so that ip header
++ * is 16 byte aligned all the time
++ */
++ vp->headroom = get_headroom(def);
++ vp->coalesce = 2;
++ vp->req_size = get_req_size(def);
+
+ dev->features = dev->hw_features = (NETIF_F_SG | NETIF_F_FRAGLIST);
+ tasklet_init(&vp->tx_poll, vector_tx_poll, (unsigned long)vp);
+diff --git a/arch/um/include/asm/asm-prototypes.h b/arch/um/include/asm/asm-prototypes.h
+index 5898a26daa0dd4..408b31d591279d 100644
+--- a/arch/um/include/asm/asm-prototypes.h
++++ b/arch/um/include/asm/asm-prototypes.h
+@@ -1 +1,6 @@
+ #include <asm-generic/asm-prototypes.h>
++#include <asm/checksum.h>
++
++#ifdef CONFIG_UML_X86
++extern void cmpxchg8b_emu(void);
++#endif
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 93a1f9937a9bb2..77efb16cbc2424 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -102,7 +102,7 @@ config X86
+ select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+ select ARCH_WANT_DEFAULT_BPF_JIT if X86_64
+ select ARCH_WANTS_DYNAMIC_TASK_STRUCT
+- select ARCH_WANT_HUGE_PMD_SHARE
++ select ARCH_WANT_HUGE_PMD_SHARE if X86_64
+ select ARCH_WANT_LD_ORPHAN_WARN
+ select ARCH_WANTS_THP_SWAP if X86_64
+ select BUILDTIME_TABLE_SORT
+@@ -2521,6 +2521,26 @@ config MITIGATION_RFDS
+ stored in floating point, vector and integer registers.
+ See also <file:Documentation/admin-guide/hw-vuln/reg-file-data-sampling.rst>
+
++config MITIGATION_ITS
++ bool "Enable Indirect Target Selection mitigation"
++ depends on CPU_SUP_INTEL && X86_64
++ depends on RETPOLINE && RETHUNK
++ default y
++ help
++ Enable Indirect Target Selection (ITS) mitigation. ITS is a bug in
++ BPU on some Intel CPUs that may allow Spectre V2 style attacks. If
++ disabled, mitigation cannot be enabled via cmdline.
++ See <file:Documentation/admin-guide/hw-vuln/indirect-target-selection.rst>
++
++config MITIGATION_TSA
++ bool "Mitigate Transient Scheduler Attacks"
++ depends on CPU_SUP_AMD
++ default y
++ help
++ Enable mitigation for Transient Scheduler Attacks. TSA is a hardware
++ security vulnerability on AMD CPUs which can lead to forwarding of
++ invalid info to subsequent instructions and thus can affect their
++ timing and thereby cause a leakage.
+ endif
+
+ config ARCH_HAS_ADD_PAGES
+diff --git a/arch/x86/entry/entry.S b/arch/x86/entry/entry.S
+index bda217961172ba..057eeb4eda4e72 100644
+--- a/arch/x86/entry/entry.S
++++ b/arch/x86/entry/entry.S
+@@ -31,20 +31,20 @@ EXPORT_SYMBOL_GPL(entry_ibpb);
+
+ /*
+ * Define the VERW operand that is disguised as entry code so that
+- * it can be referenced with KPTI enabled. This ensure VERW can be
++ * it can be referenced with KPTI enabled. This ensures VERW can be
+ * used late in exit-to-user path after page tables are switched.
+ */
+ .pushsection .entry.text, "ax"
+
+ .align L1_CACHE_BYTES, 0xcc
+-SYM_CODE_START_NOALIGN(mds_verw_sel)
++SYM_CODE_START_NOALIGN(x86_verw_sel)
+ UNWIND_HINT_EMPTY
+ ANNOTATE_NOENDBR
+ .word __KERNEL_DS
+ .align L1_CACHE_BYTES, 0xcc
+-SYM_CODE_END(mds_verw_sel);
++SYM_CODE_END(x86_verw_sel);
+ /* For KVM */
+-EXPORT_SYMBOL_GPL(mds_verw_sel);
++EXPORT_SYMBOL_GPL(x86_verw_sel);
+
+ .popsection
+
+diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
+index 0e777b27972be3..f2cce0cd87a65c 100644
+--- a/arch/x86/include/asm/alternative.h
++++ b/arch/x86/include/asm/alternative.h
+@@ -80,6 +80,32 @@ extern void apply_returns(s32 *start, s32 *end);
+
+ struct module;
+
++extern u8 *its_static_thunk(int reg);
++
++#ifdef CONFIG_MITIGATION_ITS
++extern void its_init_mod(struct module *mod);
++extern void its_fini_mod(struct module *mod);
++extern void its_free_mod(struct module *mod);
++#else /* CONFIG_MITIGATION_ITS */
++static inline void its_init_mod(struct module *mod) { }
++static inline void its_fini_mod(struct module *mod) { }
++static inline void its_free_mod(struct module *mod) { }
++#endif
++
++#if defined(CONFIG_RETHUNK) && defined(CONFIG_STACK_VALIDATION)
++extern bool cpu_wants_rethunk(void);
++extern bool cpu_wants_rethunk_at(void *addr);
++#else
++static __always_inline bool cpu_wants_rethunk(void)
++{
++ return false;
++}
++static __always_inline bool cpu_wants_rethunk_at(void *addr)
++{
++ return false;
++}
++#endif
++
+ #ifdef CONFIG_SMP
+ extern void alternatives_smp_module_add(struct module *mod, char *name,
+ void *locks, void *locks_end,
+diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
+index da78ccbd493b77..96ed5f1ceef5c5 100644
+--- a/arch/x86/include/asm/cpu.h
++++ b/arch/x86/include/asm/cpu.h
+@@ -63,4 +63,17 @@ void init_ia32_feat_ctl(struct cpuinfo_x86 *c);
+ #else
+ static inline void init_ia32_feat_ctl(struct cpuinfo_x86 *c) {}
+ #endif
++
++union zen_patch_rev {
++ struct {
++ __u32 rev : 8,
++ stepping : 4,
++ model : 4,
++ __reserved : 4,
++ ext_model : 4,
++ ext_fam : 8;
++ };
++ __u32 ucode_rev;
++};
++
+ #endif /* _ASM_X86_CPU_H */
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index 955ca6b13e35f2..c8e966ed7aa4ff 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -34,6 +34,7 @@ enum cpuid_leafs
+ CPUID_8000_001F_EAX,
+ CPUID_8000_0021_EAX,
+ CPUID_LNX_5,
++ CPUID_8000_0021_ECX,
+ NR_CPUID_WORDS,
+ };
+
+@@ -97,7 +98,7 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 20, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 21, feature_bit) || \
+ REQUIRED_MASK_CHECK || \
+- BUILD_BUG_ON_ZERO(NCAPINTS != 22))
++ BUILD_BUG_ON_ZERO(NCAPINTS != 23))
+
+ #define DISABLED_MASK_BIT_SET(feature_bit) \
+ ( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
+@@ -123,7 +124,7 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 20, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 21, feature_bit) || \
+ DISABLED_MASK_CHECK || \
+- BUILD_BUG_ON_ZERO(NCAPINTS != 22))
++ BUILD_BUG_ON_ZERO(NCAPINTS != 23))
+
+ #define cpu_has(c, bit) \
+ (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index f3365ec973763b..ae0fbb16865694 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -13,7 +13,7 @@
+ /*
+ * Defines x86 CPU feature bits
+ */
+-#define NCAPINTS 22 /* N 32-bit words worth of info */
++#define NCAPINTS 23 /* N 32-bit words worth of info */
+ #define NBUGINTS 2 /* N 32-bit bug flags */
+
+ /*
+@@ -289,8 +289,8 @@
+ #define X86_FEATURE_FENCE_SWAPGS_KERNEL (11*32+ 5) /* "" LFENCE in kernel entry SWAPGS path */
+ #define X86_FEATURE_SPLIT_LOCK_DETECT (11*32+ 6) /* #AC for split lock */
+ #define X86_FEATURE_PER_THREAD_MBA (11*32+ 7) /* "" Per-thread Memory Bandwidth Allocation */
+-/* FREE! (11*32+ 8) */
+-/* FREE! (11*32+ 9) */
++#define X86_FEATURE_BHI_CTRL (11*32+ 8) /* "" BHI_DIS_S HW control available */
++#define X86_FEATURE_INDIRECT_THUNK_ITS (11*32+ 9) /* "" Use thunk for indirect branches in lower half of cacheline */
+ #define X86_FEATURE_ENTRY_IBPB (11*32+10) /* "" Issue an IBPB on kernel entry */
+ #define X86_FEATURE_RRSBA_CTRL (11*32+11) /* "" RET prediction control */
+ #define X86_FEATURE_RETPOLINE (11*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
+@@ -406,11 +406,16 @@
+ #define X86_FEATURE_SEV_ES (19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */
+ #define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */
+
++#define X86_FEATURE_VERW_CLEAR (20*32+ 5) /* "" The memory form of VERW mitigates TSA */
+ #define X86_FEATURE_AUTOIBRS (20*32+ 8) /* "" Automatic IBRS */
+ #define X86_FEATURE_SBPB (20*32+27) /* "" Selective Branch Prediction Barrier */
+ #define X86_FEATURE_IBPB_BRTYPE (20*32+28) /* "" MSR_PRED_CMD[IBPB] flushes all branch type predictions */
+ #define X86_FEATURE_SRSO_NO (20*32+29) /* "" CPU is not affected by SRSO */
+
++#define X86_FEATURE_TSA_SQ_NO (22*32+11) /* "" AMD CPU not vulnerable to TSA-SQ */
++#define X86_FEATURE_TSA_L1_NO (22*32+12) /* "" AMD CPU not vulnerable to TSA-L1 */
++#define X86_FEATURE_CLEAR_CPU_BUF_VM (22*32+13) /* "" Clear CPU buffers using VERW before VMRUN */
++
+ /*
+ * BUG word(s)
+ */
+@@ -459,4 +464,7 @@
+ #define X86_BUG_RFDS X86_BUG(1*32 + 2) /* CPU is vulnerable to Register File Data Sampling */
+ #define X86_BUG_BHI X86_BUG(1*32 + 3) /* CPU is affected by Branch History Injection */
+ #define X86_BUG_IBPB_NO_RET X86_BUG(1*32 + 4) /* "ibpb_no_ret" IBPB omits return target predictions */
++#define X86_BUG_ITS X86_BUG(1*32 + 5) /* CPU is affected by Indirect Target Selection */
++#define X86_BUG_ITS_NATIVE_ONLY X86_BUG(1*32 + 6) /* CPU is affected by ITS, VMX is not affected */
++#define X86_BUG_TSA X86_BUG(1*32 + 9) /* "tsa" CPU is affected by Transient Scheduler Attacks */
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
+index e5f44a3e275c1b..170c8725334038 100644
+--- a/arch/x86/include/asm/disabled-features.h
++++ b/arch/x86/include/asm/disabled-features.h
+@@ -104,6 +104,6 @@
+ #define DISABLED_MASK19 0
+ #define DISABLED_MASK20 0
+ #define DISABLED_MASK21 0
+-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22)
++#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 23)
+
+ #endif /* _ASM_X86_DISABLED_FEATURES_H */
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index e585a4705b8ddc..62b29995e51c81 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -56,13 +56,13 @@ static __always_inline void native_irq_enable(void)
+
+ static inline __cpuidle void native_safe_halt(void)
+ {
+- mds_idle_clear_cpu_buffers();
++ x86_idle_clear_cpu_buffers();
+ asm volatile("sti; hlt": : :"memory");
+ }
+
+ static inline __cpuidle void native_halt(void)
+ {
+- mds_idle_clear_cpu_buffers();
++ x86_idle_clear_cpu_buffers();
+ asm volatile("hlt": : :"memory");
+ }
+
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 7fd03f4ff9ed29..a479530e59abe0 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -55,10 +55,13 @@
+ #define SPEC_CTRL_SSBD BIT(SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */
+ #define SPEC_CTRL_RRSBA_DIS_S_SHIFT 6 /* Disable RRSBA behavior */
+ #define SPEC_CTRL_RRSBA_DIS_S BIT(SPEC_CTRL_RRSBA_DIS_S_SHIFT)
++#define SPEC_CTRL_BHI_DIS_S_SHIFT 10 /* Disable Branch History Injection behavior */
++#define SPEC_CTRL_BHI_DIS_S BIT(SPEC_CTRL_BHI_DIS_S_SHIFT)
+
+ /* A mask for bits which the kernel toggles when controlling mitigations */
+ #define SPEC_CTRL_MITIGATIONS_MASK (SPEC_CTRL_IBRS | SPEC_CTRL_STIBP | SPEC_CTRL_SSBD \
+- | SPEC_CTRL_RRSBA_DIS_S)
++ | SPEC_CTRL_RRSBA_DIS_S \
++ | SPEC_CTRL_BHI_DIS_S)
+
+ #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
+ #define PRED_CMD_IBPB BIT(0) /* Indirect Branch Prediction Barrier */
+@@ -176,6 +179,14 @@
+ * VERW clears CPU Register
+ * File.
+ */
++#define ARCH_CAP_ITS_NO BIT_ULL(62) /*
++ * Not susceptible to
++ * Indirect Target Selection.
++ * This bit is not set by
++ * HW, but is synthesized by
++ * VMMs for guests to know
++ * their affected status.
++ */
+
+ #define MSR_IA32_FLUSH_CMD 0x0000010b
+ #define L1D_FLUSH BIT(0) /*
+diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
+index 29dd27b5a339db..2a2de4f3cb204e 100644
+--- a/arch/x86/include/asm/mwait.h
++++ b/arch/x86/include/asm/mwait.h
+@@ -43,8 +43,6 @@ static inline void __monitorx(const void *eax, unsigned long ecx,
+
+ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ {
+- mds_idle_clear_cpu_buffers();
+-
+ /* "mwait %eax, %ecx;" */
+ asm volatile(".byte 0x0f, 0x01, 0xc9;"
+ :: "a" (eax), "c" (ecx));
+@@ -79,7 +77,7 @@ static inline void __mwait(unsigned long eax, unsigned long ecx)
+ static inline void __mwaitx(unsigned long eax, unsigned long ebx,
+ unsigned long ecx)
+ {
+- /* No MDS buffer clear as this is AMD/HYGON only */
++ /* No need for TSA buffer clearing on AMD */
+
+ /* "mwaitx %eax, %ebx, %ecx;" */
+ asm volatile(".byte 0x0f, 0x01, 0xfb;"
+@@ -88,7 +86,7 @@ static inline void __mwaitx(unsigned long eax, unsigned long ebx,
+
+ static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ {
+- mds_idle_clear_cpu_buffers();
++
+ /* "mwait %eax, %ecx;" */
+ asm volatile("sti; .byte 0x0f, 0x01, 0xc9;"
+ :: "a" (eax), "c" (ecx));
+@@ -106,6 +104,11 @@ static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+ */
+ static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
+ {
++ if (need_resched())
++ return;
++
++ x86_idle_clear_cpu_buffers();
++
+ if (static_cpu_has_bug(X86_BUG_MONITOR) || !current_set_polling_and_test()) {
+ if (static_cpu_has_bug(X86_BUG_CLFLUSH_MONITOR)) {
+ mb();
+@@ -114,9 +117,13 @@ static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
+ }
+
+ __monitor((void *)&current_thread_info()->flags, 0, 0);
+- if (!need_resched())
+- __mwait(eax, ecx);
++ if (need_resched())
++ goto out;
++
++ __mwait(eax, ecx);
+ }
++
++out:
+ current_clr_polling();
+ }
+
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 7978d5fe1ce6e4..ce5e6e70d2a48f 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -191,27 +191,33 @@
+ .endm
+
+ /*
+- * Macro to execute VERW instruction that mitigate transient data sampling
+- * attacks such as MDS. On affected systems a microcode update overloaded VERW
+- * instruction to also clear the CPU buffers. VERW clobbers CFLAGS.ZF.
+- *
++ * Macro to execute VERW insns that mitigate transient data sampling
++ * attacks such as MDS or TSA. On affected systems a microcode update
++ * overloaded VERW insns to also clear the CPU buffers. VERW clobbers
++ * CFLAGS.ZF.
+ * Note: Only the memory operand variant of VERW clears the CPU buffers.
+ */
+-.macro CLEAR_CPU_BUFFERS
+- ALTERNATIVE "jmp .Lskip_verw_\@", "", X86_FEATURE_CLEAR_CPU_BUF
++.macro __CLEAR_CPU_BUFFERS feature
++ ALTERNATIVE "jmp .Lskip_verw_\@", "", \feature
+ #ifdef CONFIG_X86_64
+- verw mds_verw_sel(%rip)
++ verw x86_verw_sel(%rip)
+ #else
+ /*
+ * In 32bit mode, the memory operand must be a %cs reference. The data
+ * segments may not be usable (vm86 mode), and the stack segment may not
+ * be flat (ESPFIX32).
+ */
+- verw %cs:mds_verw_sel
++ verw %cs:x86_verw_sel
+ #endif
+ .Lskip_verw_\@:
+ .endm
+
++#define CLEAR_CPU_BUFFERS \
++ __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF
++
++#define VM_CLEAR_CPU_BUFFERS \
++ __CLEAR_CPU_BUFFERS X86_FEATURE_CLEAR_CPU_BUF_VM
++
+ #else /* __ASSEMBLY__ */
+
+ #define ANNOTATE_RETPOLINE_SAFE \
+@@ -226,6 +232,12 @@ extern void __x86_return_thunk(void);
+ static inline void __x86_return_thunk(void) {}
+ #endif
+
++#ifdef CONFIG_MITIGATION_ITS
++extern void its_return_thunk(void);
++#else
++static inline void its_return_thunk(void) {}
++#endif
++
+ extern void retbleed_return_thunk(void);
+ extern void srso_return_thunk(void);
+ extern void srso_alias_return_thunk(void);
+@@ -243,6 +255,11 @@ extern void (*x86_return_thunk)(void);
+
+ typedef u8 retpoline_thunk_t[RETPOLINE_THUNK_SIZE];
+
++#define ITS_THUNK_SIZE 64
++typedef u8 its_thunk_t[ITS_THUNK_SIZE];
++
++extern its_thunk_t __x86_indirect_its_thunk_array[];
++
+ #define GEN(reg) \
+ extern retpoline_thunk_t __x86_indirect_thunk_ ## reg;
+ #include <asm/GEN-for-each-reg.h>
+@@ -387,22 +404,22 @@ DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+
+-DECLARE_STATIC_KEY_FALSE(mds_idle_clear);
++DECLARE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
+
+ DECLARE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+
+-extern u16 mds_verw_sel;
++extern u16 x86_verw_sel;
+
+ #include <asm/segment.h>
+
+ /**
+- * mds_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
++ * x86_clear_cpu_buffers - Buffer clearing support for different x86 CPU vulns
+ *
+ * This uses the otherwise unused and obsolete VERW instruction in
+ * combination with microcode which triggers a CPU buffer flush when the
+ * instruction is executed.
+ */
+-static __always_inline void mds_clear_cpu_buffers(void)
++static __always_inline void x86_clear_cpu_buffers(void)
+ {
+ static const u16 ds = __KERNEL_DS;
+
+@@ -419,14 +436,15 @@ static __always_inline void mds_clear_cpu_buffers(void)
+ }
+
+ /**
+- * mds_idle_clear_cpu_buffers - Mitigation for MDS vulnerability
++ * x86_idle_clear_cpu_buffers - Buffer clearing support in idle for the MDS
++ * and TSA vulnerabilities.
+ *
+ * Clear CPU buffers if the corresponding static key is enabled
+ */
+-static inline void mds_idle_clear_cpu_buffers(void)
++static __always_inline void x86_idle_clear_cpu_buffers(void)
+ {
+- if (static_branch_likely(&mds_idle_clear))
+- mds_clear_cpu_buffers();
++ if (static_branch_likely(&cpu_buf_idle_clear))
++ x86_clear_cpu_buffers();
+ }
+
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
+index 1fbe53583e9529..4e3cd318323b2a 100644
+--- a/arch/x86/include/asm/required-features.h
++++ b/arch/x86/include/asm/required-features.h
+@@ -104,6 +104,6 @@
+ #define REQUIRED_MASK19 0
+ #define REQUIRED_MASK20 0
+ #define REQUIRED_MASK21 0
+-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 22)
++#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 23)
+
+ #endif /* _ASM_X86_REQUIRED_FEATURES_H */
+diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
+index c6015b4074614c..7281ce64e99d8a 100644
+--- a/arch/x86/include/asm/text-patching.h
++++ b/arch/x86/include/asm/text-patching.h
+@@ -181,6 +181,37 @@ void int3_emulate_ret(struct pt_regs *regs)
+ unsigned long ip = int3_emulate_pop(regs);
+ int3_emulate_jmp(regs, ip);
+ }
++
++static __always_inline
++void int3_emulate_jcc(struct pt_regs *regs, u8 cc, unsigned long ip, unsigned long disp)
++{
++ static const unsigned long jcc_mask[6] = {
++ [0] = X86_EFLAGS_OF,
++ [1] = X86_EFLAGS_CF,
++ [2] = X86_EFLAGS_ZF,
++ [3] = X86_EFLAGS_CF | X86_EFLAGS_ZF,
++ [4] = X86_EFLAGS_SF,
++ [5] = X86_EFLAGS_PF,
++ };
++
++ bool invert = cc & 1;
++ bool match;
++
++ if (cc < 0xc) {
++ match = regs->flags & jcc_mask[cc >> 1];
++ } else {
++ match = ((regs->flags & X86_EFLAGS_SF) >> X86_EFLAGS_SF_BIT) ^
++ ((regs->flags & X86_EFLAGS_OF) >> X86_EFLAGS_OF_BIT);
++ if (cc >= 0xe)
++ match = match || (regs->flags & X86_EFLAGS_ZF);
++ }
++
++ if ((match && !invert) || (!match && invert))
++ ip += disp;
++
++ int3_emulate_jmp(regs, ip);
++}
++
+ #endif /* !CONFIG_UML_X86 */
+
+ #endif /* _ASM_X86_TEXT_PATCHING_H */
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 9ceef8515c0312..30dc73210c2ea4 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -18,6 +18,7 @@
+ #include <linux/mmu_context.h>
+ #include <linux/bsearch.h>
+ #include <linux/sync_core.h>
++#include <linux/moduleloader.h>
+ #include <asm/text-patching.h>
+ #include <asm/alternative.h>
+ #include <asm/sections.h>
+@@ -29,6 +30,7 @@
+ #include <asm/io.h>
+ #include <asm/fixmap.h>
+ #include <asm/asm-prototypes.h>
++#include <asm/set_memory.h>
+
+ int __read_mostly alternatives_patched;
+
+@@ -506,6 +508,12 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ kasan_enable_current();
+ }
+
++static inline bool is_jcc32(struct insn *insn)
++{
++ /* Jcc.d32 second opcode byte is in the range: 0x80-0x8f */
++ return insn->opcode.bytes[0] == 0x0f && (insn->opcode.bytes[1] & 0xf0) == 0x80;
++}
++
+ #if defined(CONFIG_RETPOLINE) && defined(CONFIG_STACK_VALIDATION)
+
+ /*
+@@ -544,6 +552,225 @@ static int emit_indirect(int op, int reg, u8 *bytes)
+ return i;
+ }
+
++#ifdef CONFIG_MITIGATION_ITS
++
++#ifdef CONFIG_MODULES
++static struct module *its_mod;
++static void *its_page;
++static unsigned int its_offset;
++
++/* Initialize a thunk with the "jmp *reg; int3" instructions. */
++static void *its_init_thunk(void *thunk, int reg)
++{
++ u8 *bytes = thunk;
++ int i = 0;
++
++ if (reg >= 8) {
++ bytes[i++] = 0x41; /* REX.B prefix */
++ reg -= 8;
++ }
++ bytes[i++] = 0xff;
++ bytes[i++] = 0xe0 + reg; /* jmp *reg */
++ bytes[i++] = 0xcc;
++
++ return thunk;
++}
++
++void its_init_mod(struct module *mod)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return;
++
++ mutex_lock(&text_mutex);
++ its_mod = mod;
++ its_page = NULL;
++}
++
++void its_fini_mod(struct module *mod)
++{
++ int i;
++
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return;
++
++ WARN_ON_ONCE(its_mod != mod);
++
++ its_mod = NULL;
++ its_page = NULL;
++ mutex_unlock(&text_mutex);
++
++ for (i = 0; i < mod->its_num_pages; i++) {
++ void *page = mod->its_page_array[i];
++ set_memory_ro((unsigned long)page, 1);
++ set_memory_x((unsigned long)page, 1);
++ }
++}
++
++void its_free_mod(struct module *mod)
++{
++ int i;
++
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return;
++
++ for (i = 0; i < mod->its_num_pages; i++) {
++ void *page = mod->its_page_array[i];
++ module_memfree(page);
++ }
++ kfree(mod->its_page_array);
++}
++
++static void *its_alloc(void)
++{
++ void *page = module_alloc(PAGE_SIZE);
++
++ if (!page)
++ return NULL;
++
++ if (its_mod) {
++ void *tmp = krealloc(its_mod->its_page_array,
++ (its_mod->its_num_pages+1) * sizeof(void *),
++ GFP_KERNEL);
++ if (!tmp) {
++ module_memfree(page);
++ return NULL;
++ }
++
++ its_mod->its_page_array = tmp;
++ its_mod->its_page_array[its_mod->its_num_pages++] = page;
++ }
++
++ return page;
++}
++
++static void *its_allocate_thunk(int reg)
++{
++ int size = 3 + (reg / 8);
++ void *thunk;
++
++ if (!its_page || (its_offset + size - 1) >= PAGE_SIZE) {
++ its_page = its_alloc();
++ if (!its_page) {
++ pr_err("ITS page allocation failed\n");
++ return NULL;
++ }
++ memset(its_page, INT3_INSN_OPCODE, PAGE_SIZE);
++ its_offset = 32;
++ }
++
++ /*
++ * If the indirect branch instruction will be in the lower half
++ * of a cacheline, then update the offset to reach the upper half.
++ */
++ if ((its_offset + size - 1) % 64 < 32)
++ its_offset = ((its_offset - 1) | 0x3F) + 33;
++
++ thunk = its_page + its_offset;
++ its_offset += size;
++
++ set_memory_rw((unsigned long)its_page, 1);
++ thunk = its_init_thunk(thunk, reg);
++ set_memory_ro((unsigned long)its_page, 1);
++ set_memory_x((unsigned long)its_page, 1);
++
++ return thunk;
++}
++#else /* CONFIG_MODULES */
++
++static void *its_allocate_thunk(int reg)
++{
++ return NULL;
++}
++
++#endif /* CONFIG_MODULES */
++
++static int __emit_trampoline(void *addr, struct insn *insn, u8 *bytes,
++ void *call_dest, void *jmp_dest)
++{
++ u8 op = insn->opcode.bytes[0];
++ int i = 0;
++
++ /*
++ * Clang does 'weird' Jcc __x86_indirect_thunk_r11 conditional
++ * tail-calls. Deal with them.
++ */
++ if (is_jcc32(insn)) {
++ bytes[i++] = op;
++ op = insn->opcode.bytes[1];
++ goto clang_jcc;
++ }
++
++ if (insn->length == 6)
++ bytes[i++] = 0x2e; /* CS-prefix */
++
++ switch (op) {
++ case CALL_INSN_OPCODE:
++ __text_gen_insn(bytes+i, op, addr+i,
++ call_dest,
++ CALL_INSN_SIZE);
++ i += CALL_INSN_SIZE;
++ break;
++
++ case JMP32_INSN_OPCODE:
++clang_jcc:
++ __text_gen_insn(bytes+i, op, addr+i,
++ jmp_dest,
++ JMP32_INSN_SIZE);
++ i += JMP32_INSN_SIZE;
++ break;
++
++ default:
++ WARN(1, "%pS %px %*ph\n", addr, addr, 6, addr);
++ return -1;
++ }
++
++ WARN_ON_ONCE(i != insn->length);
++
++ return i;
++}
++
++static int emit_its_trampoline(void *addr, struct insn *insn, int reg, u8 *bytes)
++{
++ u8 *thunk = __x86_indirect_its_thunk_array[reg];
++ u8 *tmp = its_allocate_thunk(reg);
++
++ if (tmp)
++ thunk = tmp;
++
++ return __emit_trampoline(addr, insn, bytes, thunk, thunk);
++}
++
++/* Check if an indirect branch is at ITS-unsafe address */
++static bool cpu_wants_indirect_its_thunk_at(unsigned long addr, int reg)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS))
++ return false;
++
++ /* Indirect branch opcode is 2 or 3 bytes depending on reg */
++ addr += 1 + reg / 8;
++
++ /* Lower-half of the cacheline? */
++ return !(addr & 0x20);
++}
++
++u8 *its_static_thunk(int reg)
++{
++ u8 *thunk = __x86_indirect_its_thunk_array[reg];
++
++ return thunk;
++}
++
++#else /* CONFIG_MITIGATION_ITS */
++
++u8 *its_static_thunk(int reg)
++{
++ WARN_ONCE(1, "ITS not compiled in");
++
++ return NULL;
++}
++
++#endif /* CONFIG_MITIGATION_ITS */
++
+ /*
+ * Rewrite the compiler generated retpoline thunk calls.
+ *
+@@ -615,6 +842,15 @@ static int patch_retpoline(void *addr, struct insn *insn, u8 *bytes)
+ bytes[i++] = 0xe8; /* LFENCE */
+ }
+
++#ifdef CONFIG_MITIGATION_ITS
++ /*
++ * Check if the address of last byte of emitted-indirect is in
++ * lower-half of the cacheline. Such branches need ITS mitigation.
++ */
++ if (cpu_wants_indirect_its_thunk_at((unsigned long)addr + i, reg))
++ return emit_its_trampoline(addr, insn, reg, bytes);
++#endif
++
+ ret = emit_indirect(op, reg, bytes + i);
+ if (ret < 0)
+ return ret;
+@@ -677,6 +913,21 @@ void __init_or_module noinline apply_retpolines(s32 *start, s32 *end)
+
+ #ifdef CONFIG_RETHUNK
+
++bool cpu_wants_rethunk(void)
++{
++ return cpu_feature_enabled(X86_FEATURE_RETHUNK);
++}
++
++bool cpu_wants_rethunk_at(void *addr)
++{
++ if (!cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ return false;
++ if (x86_return_thunk != its_return_thunk)
++ return true;
++
++ return !((unsigned long)addr & 0x20);
++}
++
+ /*
+ * Rewrite the compiler generated return thunk tail-calls.
+ *
+@@ -692,13 +943,12 @@ static int patch_return(void *addr, struct insn *insn, u8 *bytes)
+ {
+ int i = 0;
+
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
+- if (x86_return_thunk == __x86_return_thunk)
+- return -1;
+-
++ /* Patch the custom return thunks... */
++ if (cpu_wants_rethunk_at(addr)) {
+ i = JMP32_INSN_SIZE;
+ __text_gen_insn(bytes, JMP32_INSN_OPCODE, addr, x86_return_thunk, i);
+ } else {
++ /* ... or patch them out if not needed. */
+ bytes[i++] = RET_INSN_OPCODE;
+ }
+
+@@ -1331,6 +1581,11 @@ void text_poke_sync(void)
+ on_each_cpu(do_sync_core, NULL, 1);
+ }
+
++/*
++ * NOTE: crazy scheme to allow patching Jcc.d32 but not increase the size of
++ * this thing. When len == 6 everything is prefixed with 0x0f and we map
++ * opcode to Jcc.d8, using len to distinguish.
++ */
+ struct text_poke_loc {
+ /* addr := _stext + rel_addr */
+ s32 rel_addr;
+@@ -1452,6 +1707,10 @@ noinstr int poke_int3_handler(struct pt_regs *regs)
+ int3_emulate_jmp(regs, (long)ip + tp->disp);
+ break;
+
++ case 0x70 ... 0x7f: /* Jcc */
++ int3_emulate_jcc(regs, tp->opcode & 0xf, (long)ip, tp->disp);
++ break;
++
+ default:
+ BUG();
+ }
+@@ -1525,16 +1784,26 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
+ * Second step: update all but the first byte of the patched range.
+ */
+ for (do_sync = 0, i = 0; i < nr_entries; i++) {
+- u8 old[POKE_MAX_OPCODE_SIZE] = { tp[i].old, };
++ u8 old[POKE_MAX_OPCODE_SIZE+1] = { tp[i].old, };
++ u8 _new[POKE_MAX_OPCODE_SIZE+1];
++ const u8 *new = tp[i].text;
+ int len = tp[i].len;
+
+ if (len - INT3_INSN_SIZE > 0) {
+ memcpy(old + INT3_INSN_SIZE,
+ text_poke_addr(&tp[i]) + INT3_INSN_SIZE,
+ len - INT3_INSN_SIZE);
++
++ if (len == 6) {
++ _new[0] = 0x0f;
++ memcpy(_new + 1, new, 5);
++ new = _new;
++ }
++
+ text_poke(text_poke_addr(&tp[i]) + INT3_INSN_SIZE,
+- (const char *)tp[i].text + INT3_INSN_SIZE,
++ new + INT3_INSN_SIZE,
+ len - INT3_INSN_SIZE);
++
+ do_sync++;
+ }
+
+@@ -1562,8 +1831,7 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
+ * The old instruction is recorded so that the event can be
+ * processed forwards or backwards.
+ */
+- perf_event_text_poke(text_poke_addr(&tp[i]), old, len,
+- tp[i].text, len);
++ perf_event_text_poke(text_poke_addr(&tp[i]), old, len, new, len);
+ }
+
+ if (do_sync) {
+@@ -1580,10 +1848,15 @@ static void text_poke_bp_batch(struct text_poke_loc *tp, unsigned int nr_entries
+ * replacing opcode.
+ */
+ for (do_sync = 0, i = 0; i < nr_entries; i++) {
+- if (tp[i].text[0] == INT3_INSN_OPCODE)
++ u8 byte = tp[i].text[0];
++
++ if (tp[i].len == 6)
++ byte = 0x0f;
++
++ if (byte == INT3_INSN_OPCODE)
+ continue;
+
+- text_poke(text_poke_addr(&tp[i]), tp[i].text, INT3_INSN_SIZE);
++ text_poke(text_poke_addr(&tp[i]), &byte, INT3_INSN_SIZE);
+ do_sync++;
+ }
+
+@@ -1601,9 +1874,11 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+ const void *opcode, size_t len, const void *emulate)
+ {
+ struct insn insn;
+- int ret, i;
++ int ret, i = 0;
+
+- memcpy((void *)tp->text, opcode, len);
++ if (len == 6)
++ i = 1;
++ memcpy((void *)tp->text, opcode+i, len-i);
+ if (!emulate)
+ emulate = opcode;
+
+@@ -1614,6 +1889,13 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+ tp->len = len;
+ tp->opcode = insn.opcode.bytes[0];
+
++ if (is_jcc32(&insn)) {
++ /*
++ * Map Jcc.d32 onto Jcc.d8 and use len to distinguish.
++ */
++ tp->opcode = insn.opcode.bytes[1] - 0x10;
++ }
++
+ switch (tp->opcode) {
+ case RET_INSN_OPCODE:
+ case JMP32_INSN_OPCODE:
+@@ -1630,7 +1912,6 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+ BUG_ON(len != insn.length);
+ };
+
+-
+ switch (tp->opcode) {
+ case INT3_INSN_OPCODE:
+ case RET_INSN_OPCODE:
+@@ -1639,6 +1920,7 @@ static void text_poke_loc_init(struct text_poke_loc *tp, void *addr,
+ case CALL_INSN_OPCODE:
+ case JMP32_INSN_OPCODE:
+ case JMP8_INSN_OPCODE:
++ case 0x70 ... 0x7f: /* Jcc */
+ tp->disp = insn.immediate.value;
+ break;
+
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 5f0bdb53b00673..e67d7603449b71 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -589,6 +589,62 @@ static void early_init_amd_mc(struct cpuinfo_x86 *c)
+ #endif
+ }
+
++static bool amd_check_tsa_microcode(void)
++{
++ struct cpuinfo_x86 *c = &boot_cpu_data;
++ union zen_patch_rev p;
++ u32 min_rev = 0;
++
++ p.ext_fam = c->x86 - 0xf;
++ p.model = c->x86_model;
++ p.ext_model = c->x86_model >> 4;
++ p.stepping = c->x86_stepping;
++
++ if (c->x86 == 0x19) {
++ switch (p.ucode_rev >> 8) {
++ case 0xa0011: min_rev = 0x0a0011d7; break;
++ case 0xa0012: min_rev = 0x0a00123b; break;
++ case 0xa0082: min_rev = 0x0a00820d; break;
++ case 0xa1011: min_rev = 0x0a10114c; break;
++ case 0xa1012: min_rev = 0x0a10124c; break;
++ case 0xa1081: min_rev = 0x0a108109; break;
++ case 0xa2010: min_rev = 0x0a20102e; break;
++ case 0xa2012: min_rev = 0x0a201211; break;
++ case 0xa4041: min_rev = 0x0a404108; break;
++ case 0xa5000: min_rev = 0x0a500012; break;
++ case 0xa6012: min_rev = 0x0a60120a; break;
++ case 0xa7041: min_rev = 0x0a704108; break;
++ case 0xa7052: min_rev = 0x0a705208; break;
++ case 0xa7080: min_rev = 0x0a708008; break;
++ case 0xa70c0: min_rev = 0x0a70c008; break;
++ case 0xaa002: min_rev = 0x0aa00216; break;
++ default:
++ pr_debug("%s: ucode_rev: 0x%x, current revision: 0x%x\n",
++ __func__, p.ucode_rev, c->microcode);
++ return false;
++ }
++ }
++
++ if (!min_rev)
++ return false;
++
++ return c->microcode >= min_rev;
++}
++
++static void tsa_init(struct cpuinfo_x86 *c)
++{
++ if (cpu_has(c, X86_FEATURE_HYPERVISOR))
++ return;
++
++ if (c->x86 == 0x19) {
++ if (amd_check_tsa_microcode())
++ setup_force_cpu_cap(X86_FEATURE_VERW_CLEAR);
++ } else {
++ setup_force_cpu_cap(X86_FEATURE_TSA_SQ_NO);
++ setup_force_cpu_cap(X86_FEATURE_TSA_L1_NO);
++ }
++}
++
+ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ {
+
+@@ -676,6 +732,8 @@ static void bsp_init_amd(struct cpuinfo_x86 *c)
+ }
+
+ resctrl_cpu_detect(c);
++
++ tsa_init(c);
+ }
+
+ static void early_detect_mem_encrypt(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 045ab6d0a98bbe..7c269dcb7cecee 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -47,6 +47,8 @@ static void __init mmio_select_mitigation(void);
+ static void __init srbds_select_mitigation(void);
+ static void __init gds_select_mitigation(void);
+ static void __init srso_select_mitigation(void);
++static void __init its_select_mitigation(void);
++static void __init tsa_select_mitigation(void);
+
+ /* The base value of the SPEC_CTRL MSR without task-specific bits set */
+ u64 x86_spec_ctrl_base;
+@@ -63,6 +65,14 @@ static DEFINE_MUTEX(spec_ctrl_mutex);
+
+ void (*x86_return_thunk)(void) __ro_after_init = &__x86_return_thunk;
+
++static void __init set_return_thunk(void *thunk)
++{
++ if (x86_return_thunk != __x86_return_thunk)
++ pr_warn("x86/bugs: return thunk changed\n");
++
++ x86_return_thunk = thunk;
++}
++
+ /* Update SPEC_CTRL MSR and its cached copy unconditionally */
+ static void update_spec_ctrl(u64 val)
+ {
+@@ -109,9 +119,9 @@ DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb);
+ /* Control unconditional IBPB in switch_mm() */
+ DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb);
+
+-/* Control MDS CPU buffer clear before idling (halt, mwait) */
+-DEFINE_STATIC_KEY_FALSE(mds_idle_clear);
+-EXPORT_SYMBOL_GPL(mds_idle_clear);
++/* Control CPU buffer clear before idling (halt, mwait) */
++DEFINE_STATIC_KEY_FALSE(cpu_buf_idle_clear);
++EXPORT_SYMBOL_GPL(cpu_buf_idle_clear);
+
+ /* Controls CPU Fill buffer clear before KVM guest MMIO accesses */
+ DEFINE_STATIC_KEY_FALSE(mmio_stale_data_clear);
+@@ -161,6 +171,8 @@ void __init cpu_select_mitigations(void)
+ */
+ srso_select_mitigation();
+ gds_select_mitigation();
++ its_select_mitigation();
++ tsa_select_mitigation();
+ }
+
+ /*
+@@ -435,7 +447,7 @@ static void __init mmio_select_mitigation(void)
+ * is required irrespective of SMT state.
+ */
+ if (!(ia32_cap & ARCH_CAP_FBSDP_NO))
+- static_branch_enable(&mds_idle_clear);
++ static_branch_enable(&cpu_buf_idle_clear);
+
+ /*
+ * Check if the system has the right microcode.
+@@ -1050,7 +1062,7 @@ static void __init retbleed_select_mitigation(void)
+ setup_force_cpu_cap(X86_FEATURE_UNRET);
+
+ if (IS_ENABLED(CONFIG_RETHUNK))
+- x86_return_thunk = retbleed_return_thunk;
++ set_return_thunk(retbleed_return_thunk);
+
+ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
+ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+@@ -1111,6 +1123,116 @@ static void __init retbleed_select_mitigation(void)
+ pr_info("%s\n", retbleed_strings[retbleed_mitigation]);
+ }
+
++#undef pr_fmt
++#define pr_fmt(fmt) "ITS: " fmt
++
++enum its_mitigation_cmd {
++ ITS_CMD_OFF,
++ ITS_CMD_ON,
++ ITS_CMD_VMEXIT,
++};
++
++enum its_mitigation {
++ ITS_MITIGATION_OFF,
++ ITS_MITIGATION_VMEXIT_ONLY,
++ ITS_MITIGATION_ALIGNED_THUNKS,
++};
++
++static const char * const its_strings[] = {
++ [ITS_MITIGATION_OFF] = "Vulnerable",
++ [ITS_MITIGATION_VMEXIT_ONLY] = "Mitigation: Vulnerable, KVM: Not affected",
++ [ITS_MITIGATION_ALIGNED_THUNKS] = "Mitigation: Aligned branch/return thunks",
++};
++
++static enum its_mitigation its_mitigation __ro_after_init = ITS_MITIGATION_ALIGNED_THUNKS;
++
++static enum its_mitigation_cmd its_cmd __ro_after_init =
++ IS_ENABLED(CONFIG_MITIGATION_ITS) ? ITS_CMD_ON : ITS_CMD_OFF;
++
++static int __init its_parse_cmdline(char *str)
++{
++ if (!str)
++ return -EINVAL;
++
++ if (!IS_ENABLED(CONFIG_MITIGATION_ITS)) {
++ pr_err("Mitigation disabled at compile time, ignoring option (%s)", str);
++ return 0;
++ }
++
++ if (!strcmp(str, "off")) {
++ its_cmd = ITS_CMD_OFF;
++ } else if (!strcmp(str, "on")) {
++ its_cmd = ITS_CMD_ON;
++ } else if (!strcmp(str, "force")) {
++ its_cmd = ITS_CMD_ON;
++ setup_force_cpu_bug(X86_BUG_ITS);
++ } else if (!strcmp(str, "vmexit")) {
++ its_cmd = ITS_CMD_VMEXIT;
++ } else {
++ pr_err("Ignoring unknown indirect_target_selection option (%s).", str);
++ }
++
++ return 0;
++}
++early_param("indirect_target_selection", its_parse_cmdline);
++
++static void __init its_select_mitigation(void)
++{
++ enum its_mitigation_cmd cmd = its_cmd;
++
++ if (!boot_cpu_has_bug(X86_BUG_ITS) || cpu_mitigations_off()) {
++ its_mitigation = ITS_MITIGATION_OFF;
++ return;
++ }
++
++ /* Exit early to avoid irrelevant warnings */
++ if (cmd == ITS_CMD_OFF) {
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (spectre_v2_enabled == SPECTRE_V2_NONE) {
++ pr_err("WARNING: Spectre-v2 mitigation is off, disabling ITS\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (!IS_ENABLED(CONFIG_RETPOLINE) || !IS_ENABLED(CONFIG_RETHUNK)) {
++ pr_err("WARNING: ITS mitigation depends on retpoline and rethunk support\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (IS_ENABLED(CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B)) {
++ pr_err("WARNING: ITS mitigation is not compatible with CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++ if (boot_cpu_has(X86_FEATURE_RETPOLINE_LFENCE)) {
++ pr_err("WARNING: ITS mitigation is not compatible with lfence mitigation\n");
++ its_mitigation = ITS_MITIGATION_OFF;
++ goto out;
++ }
++
++ switch (cmd) {
++ case ITS_CMD_OFF:
++ its_mitigation = ITS_MITIGATION_OFF;
++ break;
++ case ITS_CMD_VMEXIT:
++ if (boot_cpu_has_bug(X86_BUG_ITS_NATIVE_ONLY)) {
++ its_mitigation = ITS_MITIGATION_VMEXIT_ONLY;
++ goto out;
++ }
++ fallthrough;
++ case ITS_CMD_ON:
++ its_mitigation = ITS_MITIGATION_ALIGNED_THUNKS;
++ if (!boot_cpu_has(X86_FEATURE_RETPOLINE))
++ setup_force_cpu_cap(X86_FEATURE_INDIRECT_THUNK_ITS);
++ setup_force_cpu_cap(X86_FEATURE_RETHUNK);
++ set_return_thunk(its_return_thunk);
++ break;
++ }
++out:
++ pr_info("%s\n", its_strings[its_mitigation]);
++}
++
+ #undef pr_fmt
+ #define pr_fmt(fmt) "Spectre V2 : " fmt
+
+@@ -1802,10 +1924,10 @@ static void update_mds_branch_idle(void)
+ return;
+
+ if (sched_smt_active()) {
+- static_branch_enable(&mds_idle_clear);
++ static_branch_enable(&cpu_buf_idle_clear);
+ } else if (mmio_mitigation == MMIO_MITIGATION_OFF ||
+ (ia32_cap & ARCH_CAP_FBSDP_NO)) {
+- static_branch_disable(&mds_idle_clear);
++ static_branch_disable(&cpu_buf_idle_clear);
+ }
+ }
+
+@@ -1813,6 +1935,94 @@ static void update_mds_branch_idle(void)
+ #define TAA_MSG_SMT "TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.\n"
+ #define MMIO_MSG_SMT "MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.\n"
+
++#undef pr_fmt
++#define pr_fmt(fmt) "Transient Scheduler Attacks: " fmt
++
++enum tsa_mitigations {
++ TSA_MITIGATION_NONE,
++ TSA_MITIGATION_UCODE_NEEDED,
++ TSA_MITIGATION_USER_KERNEL,
++ TSA_MITIGATION_VM,
++ TSA_MITIGATION_FULL,
++};
++
++static const char * const tsa_strings[] = {
++ [TSA_MITIGATION_NONE] = "Vulnerable",
++ [TSA_MITIGATION_UCODE_NEEDED] = "Vulnerable: Clear CPU buffers attempted, no microcode",
++ [TSA_MITIGATION_USER_KERNEL] = "Mitigation: Clear CPU buffers: user/kernel boundary",
++ [TSA_MITIGATION_VM] = "Mitigation: Clear CPU buffers: VM",
++ [TSA_MITIGATION_FULL] = "Mitigation: Clear CPU buffers",
++};
++
++static enum tsa_mitigations tsa_mitigation __ro_after_init =
++ IS_ENABLED(CONFIG_MITIGATION_TSA) ? TSA_MITIGATION_FULL : TSA_MITIGATION_NONE;
++
++static int __init tsa_parse_cmdline(char *str)
++{
++ if (!str)
++ return -EINVAL;
++
++ if (!strcmp(str, "off"))
++ tsa_mitigation = TSA_MITIGATION_NONE;
++ else if (!strcmp(str, "on"))
++ tsa_mitigation = TSA_MITIGATION_FULL;
++ else if (!strcmp(str, "user"))
++ tsa_mitigation = TSA_MITIGATION_USER_KERNEL;
++ else if (!strcmp(str, "vm"))
++ tsa_mitigation = TSA_MITIGATION_VM;
++ else
++ pr_err("Ignoring unknown tsa=%s option.\n", str);
++
++ return 0;
++}
++early_param("tsa", tsa_parse_cmdline);
++
++static void __init tsa_select_mitigation(void)
++{
++ if (tsa_mitigation == TSA_MITIGATION_NONE)
++ return;
++
++ if (cpu_mitigations_off() || !boot_cpu_has_bug(X86_BUG_TSA)) {
++ tsa_mitigation = TSA_MITIGATION_NONE;
++ return;
++ }
++
++ if (!boot_cpu_has(X86_FEATURE_VERW_CLEAR))
++ tsa_mitigation = TSA_MITIGATION_UCODE_NEEDED;
++
++ switch (tsa_mitigation) {
++ case TSA_MITIGATION_USER_KERNEL:
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++ break;
++
++ case TSA_MITIGATION_VM:
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
++ break;
++
++ case TSA_MITIGATION_UCODE_NEEDED:
++ if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ goto out;
++
++ pr_notice("Forcing mitigation on in a VM\n");
++
++ /*
++ * On the off-chance that microcode has been updated
++ * on the host, enable the mitigation in the guest just
++ * in case.
++ */
++ fallthrough;
++ case TSA_MITIGATION_FULL:
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF);
++ setup_force_cpu_cap(X86_FEATURE_CLEAR_CPU_BUF_VM);
++ break;
++ default:
++ break;
++ }
++
++out:
++ pr_info("%s\n", tsa_strings[tsa_mitigation]);
++}
++
+ void cpu_bugs_smt_update(void)
+ {
+ mutex_lock(&spec_ctrl_mutex);
+@@ -1866,6 +2076,24 @@ void cpu_bugs_smt_update(void)
+ break;
+ }
+
++ switch (tsa_mitigation) {
++ case TSA_MITIGATION_USER_KERNEL:
++ case TSA_MITIGATION_VM:
++ case TSA_MITIGATION_FULL:
++ case TSA_MITIGATION_UCODE_NEEDED:
++ /*
++ * TSA-SQ can potentially lead to info leakage between
++ * SMT threads.
++ */
++ if (sched_smt_active())
++ static_branch_enable(&cpu_buf_idle_clear);
++ else
++ static_branch_disable(&cpu_buf_idle_clear);
++ break;
++ case TSA_MITIGATION_NONE:
++ break;
++ }
++
+ mutex_unlock(&spec_ctrl_mutex);
+ }
+
+@@ -2453,10 +2681,10 @@ static void __init srso_select_mitigation(void)
+
+ if (boot_cpu_data.x86 == 0x19) {
+ setup_force_cpu_cap(X86_FEATURE_SRSO_ALIAS);
+- x86_return_thunk = srso_alias_return_thunk;
++ set_return_thunk(srso_alias_return_thunk);
+ } else {
+ setup_force_cpu_cap(X86_FEATURE_SRSO);
+- x86_return_thunk = srso_return_thunk;
++ set_return_thunk(srso_return_thunk);
+ }
+ srso_mitigation = SRSO_MITIGATION_SAFE_RET;
+ } else {
+@@ -2636,6 +2864,11 @@ static ssize_t rfds_show_state(char *buf)
+ return sysfs_emit(buf, "%s\n", rfds_strings[rfds_mitigation]);
+ }
+
++static ssize_t its_show_state(char *buf)
++{
++ return sysfs_emit(buf, "%s\n", its_strings[its_mitigation]);
++}
++
+ static char *stibp_state(void)
+ {
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+@@ -2742,6 +2975,11 @@ static ssize_t srso_show_state(char *buf)
+ boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) ? "" : ", no microcode");
+ }
+
++static ssize_t tsa_show_state(char *buf)
++{
++ return sysfs_emit(buf, "%s\n", tsa_strings[tsa_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ char *buf, unsigned int bug)
+ {
+@@ -2800,6 +3038,12 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ case X86_BUG_RFDS:
+ return rfds_show_state(buf);
+
++ case X86_BUG_ITS:
++ return its_show_state(buf);
++
++ case X86_BUG_TSA:
++ return tsa_show_state(buf);
++
+ default:
+ break;
+ }
+@@ -2879,4 +3123,14 @@ ssize_t cpu_show_reg_file_data_sampling(struct device *dev, struct device_attrib
+ {
+ return cpu_show_common(dev, attr, buf, X86_BUG_RFDS);
+ }
++
++ssize_t cpu_show_indirect_target_selection(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_ITS);
++}
++
++ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return cpu_show_common(dev, attr, buf, X86_BUG_TSA);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index db225e325ccfd6..258e28933abe1d 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1135,6 +1135,12 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ #define GDS BIT(6)
+ /* CPU is affected by Register File Data Sampling */
+ #define RFDS BIT(7)
++/* CPU is affected by Indirect Target Selection */
++#define ITS BIT(8)
++/* CPU is affected by Indirect Target Selection, but guest-host isolation is not affected */
++#define ITS_NATIVE_ONLY BIT(9)
++/* CPU is affected by Transient Scheduler Attacks */
++#define TSA BIT(10)
+
+ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ VULNBL_INTEL_STEPPINGS(IVYBRIDGE, X86_STEPPING_ANY, SRBDS),
+@@ -1146,22 +1152,25 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ VULNBL_INTEL_STEPPINGS(BROADWELL_G, X86_STEPPING_ANY, SRBDS),
+ VULNBL_INTEL_STEPPINGS(BROADWELL_X, X86_STEPPING_ANY, MMIO),
+ VULNBL_INTEL_STEPPINGS(BROADWELL, X86_STEPPING_ANY, SRBDS),
+- VULNBL_INTEL_STEPPINGS(SKYLAKE_X, X86_STEPPING_ANY, MMIO | RETBLEED | GDS),
++ VULNBL_INTEL_STEPPINGS(SKYLAKE_X, X86_STEPPINGS(0x0, 0x5), MMIO | RETBLEED | GDS),
++ VULNBL_INTEL_STEPPINGS(SKYLAKE_X, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | ITS),
+ VULNBL_INTEL_STEPPINGS(SKYLAKE_L, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS),
+ VULNBL_INTEL_STEPPINGS(SKYLAKE, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS),
+- VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS),
+- VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPINGS(0x0, 0xb), MMIO | RETBLEED | GDS | SRBDS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE_L, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS | ITS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPINGS(0x0, 0xc), MMIO | RETBLEED | GDS | SRBDS),
++ VULNBL_INTEL_STEPPINGS(KABYLAKE, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | SRBDS | ITS),
+ VULNBL_INTEL_STEPPINGS(CANNONLAKE_L, X86_STEPPING_ANY, RETBLEED),
+- VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED | GDS),
+- VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPING_ANY, MMIO | GDS),
+- VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPING_ANY, MMIO | GDS),
+- VULNBL_INTEL_STEPPINGS(COMETLAKE, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED | GDS),
+- VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBLEED),
+- VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED | GDS),
+- VULNBL_INTEL_STEPPINGS(TIGERLAKE_L, X86_STEPPING_ANY, GDS),
+- VULNBL_INTEL_STEPPINGS(TIGERLAKE, X86_STEPPING_ANY, GDS),
++ VULNBL_INTEL_STEPPINGS(ICELAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPPINGS(ICELAKE_D, X86_STEPPING_ANY, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPPINGS(ICELAKE_X, X86_STEPPING_ANY, MMIO | GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPPINGS(COMETLAKE, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
++ VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPINGS(0x0, 0x0), MMIO | RETBLEED | ITS),
++ VULNBL_INTEL_STEPPINGS(COMETLAKE_L, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED | GDS | ITS),
++ VULNBL_INTEL_STEPPINGS(TIGERLAKE_L, X86_STEPPING_ANY, GDS | ITS | ITS_NATIVE_ONLY),
++ VULNBL_INTEL_STEPPINGS(TIGERLAKE, X86_STEPPING_ANY, GDS | ITS | ITS_NATIVE_ONLY),
+ VULNBL_INTEL_STEPPINGS(LAKEFIELD, X86_STEPPING_ANY, MMIO | MMIO_SBDS | RETBLEED),
+- VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPING_ANY, MMIO | RETBLEED | GDS),
++ VULNBL_INTEL_STEPPINGS(ROCKETLAKE, X86_STEPPING_ANY, MMIO | RETBLEED | GDS | ITS | ITS_NATIVE_ONLY),
+ VULNBL_INTEL_STEPPINGS(ALDERLAKE, X86_STEPPING_ANY, RFDS),
+ VULNBL_INTEL_STEPPINGS(ALDERLAKE_L, X86_STEPPING_ANY, RFDS),
+ VULNBL_INTEL_STEPPINGS(RAPTORLAKE, X86_STEPPING_ANY, RFDS),
+@@ -1179,7 +1188,7 @@ static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
+ VULNBL_AMD(0x16, RETBLEED),
+ VULNBL_AMD(0x17, RETBLEED | SRSO),
+ VULNBL_HYGON(0x18, RETBLEED | SRSO),
+- VULNBL_AMD(0x19, SRSO),
++ VULNBL_AMD(0x19, SRSO | TSA),
+ {}
+ };
+
+@@ -1225,6 +1234,32 @@ static bool __init vulnerable_to_rfds(u64 ia32_cap)
+ return cpu_matches(cpu_vuln_blacklist, RFDS);
+ }
+
++static bool __init vulnerable_to_its(u64 x86_arch_cap_msr)
++{
++ /* The "immunity" bit trumps everything else: */
++ if (x86_arch_cap_msr & ARCH_CAP_ITS_NO)
++ return false;
++ if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
++ return false;
++
++ /* None of the affected CPUs have BHI_CTRL */
++ if (boot_cpu_has(X86_FEATURE_BHI_CTRL))
++ return false;
++
++ /*
++ * If a VMM did not expose ITS_NO, assume that a guest could
++ * be running on a vulnerable hardware or may migrate to such
++ * hardware.
++ */
++ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++ return true;
++
++ if (cpu_matches(cpu_vuln_blacklist, ITS))
++ return true;
++
++ return false;
++}
++
+ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ {
+ u64 ia32_cap = x86_read_arch_cap_msr();
+@@ -1339,6 +1374,22 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ if (cpu_has(c, X86_FEATURE_AMD_IBPB) && !cpu_has(c, X86_FEATURE_AMD_IBPB_RET))
+ setup_force_cpu_bug(X86_BUG_IBPB_NO_RET);
+
++ if (vulnerable_to_its(ia32_cap)) {
++ setup_force_cpu_bug(X86_BUG_ITS);
++ if (cpu_matches(cpu_vuln_blacklist, ITS_NATIVE_ONLY))
++ setup_force_cpu_bug(X86_BUG_ITS_NATIVE_ONLY);
++ }
++
++ if (c->x86_vendor == X86_VENDOR_AMD) {
++ if (!cpu_has(c, X86_FEATURE_TSA_SQ_NO) ||
++ !cpu_has(c, X86_FEATURE_TSA_L1_NO)) {
++ if (cpu_matches(cpu_vuln_blacklist, TSA) ||
++ /* Enable bug on Zen guests to allow for live migration. */
++ (cpu_has(c, X86_FEATURE_HYPERVISOR) && cpu_has(c, X86_FEATURE_ZEN)))
++ setup_force_cpu_bug(X86_BUG_TSA);
++ }
++ }
++
+ if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ return;
+
+diff --git a/arch/x86/kernel/cpu/mce/amd.c b/arch/x86/kernel/cpu/mce/amd.c
+index cd8db6b9ca2f58..c011fe79f0249a 100644
+--- a/arch/x86/kernel/cpu/mce/amd.c
++++ b/arch/x86/kernel/cpu/mce/amd.c
+@@ -297,7 +297,6 @@ static void smca_configure(unsigned int bank, unsigned int cpu)
+
+ struct thresh_restart {
+ struct threshold_block *b;
+- int reset;
+ int set_lvt_off;
+ int lvt_off;
+ u16 old_limit;
+@@ -392,13 +391,13 @@ static void threshold_restart_bank(void *_tr)
+
+ rdmsr(tr->b->address, lo, hi);
+
+- if (tr->b->threshold_limit < (hi & THRESHOLD_MAX))
+- tr->reset = 1; /* limit cannot be lower than err count */
+-
+- if (tr->reset) { /* reset err count and overflow bit */
+- hi =
+- (hi & ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI)) |
+- (THRESHOLD_MAX - tr->b->threshold_limit);
++ /*
++ * Reset error count and overflow bit.
++ * This is done during init or after handling an interrupt.
++ */
++ if (hi & MASK_OVERFLOW_HI || tr->set_lvt_off) {
++ hi &= ~(MASK_ERR_COUNT_HI | MASK_OVERFLOW_HI);
++ hi |= THRESHOLD_MAX - tr->b->threshold_limit;
+ } else if (tr->old_limit) { /* change limit w/o reset */
+ int new_count = (hi & THRESHOLD_MAX) +
+ (tr->old_limit - tr->b->threshold_limit);
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 97ab2942909321..3199dfcb757c4c 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -2627,15 +2627,9 @@ static int mce_cpu_dead(unsigned int cpu)
+ static int mce_cpu_online(unsigned int cpu)
+ {
+ struct timer_list *t = this_cpu_ptr(&mce_timer);
+- int ret;
+
+ mce_device_create(cpu);
+-
+- ret = mce_threshold_create_device(cpu);
+- if (ret) {
+- mce_device_remove(cpu);
+- return ret;
+- }
++ mce_threshold_create_device(cpu);
+ mce_reenable_cpu();
+ mce_start_timer(t);
+ return 0;
+diff --git a/arch/x86/kernel/cpu/mce/intel.c b/arch/x86/kernel/cpu/mce/intel.c
+index 886d4648c9dd42..85356dc936b59e 100644
+--- a/arch/x86/kernel/cpu/mce/intel.c
++++ b/arch/x86/kernel/cpu/mce/intel.c
+@@ -522,6 +522,7 @@ void mce_intel_feature_init(struct cpuinfo_x86 *c)
+ void mce_intel_feature_clear(struct cpuinfo_x86 *c)
+ {
+ intel_clear_lmce();
++ cmci_clear();
+ }
+
+ bool intel_filter_mce(struct mce *m)
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index f1cd1b6fb99ef5..53a9a55dc0866b 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -27,6 +27,7 @@ static const struct cpuid_bit cpuid_bits[] = {
+ { X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 },
+ { X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 },
+ { X86_FEATURE_RRSBA_CTRL, CPUID_EDX, 2, 0x00000007, 2 },
++ { X86_FEATURE_BHI_CTRL, CPUID_EDX, 4, 0x00000007, 2 },
+ { X86_FEATURE_CQM_LLC, CPUID_EDX, 1, 0x0000000f, 0 },
+ { X86_FEATURE_CQM_OCCUP_LLC, CPUID_EDX, 0, 0x0000000f, 1 },
+ { X86_FEATURE_CQM_MBM_TOTAL, CPUID_EDX, 1, 0x0000000f, 1 },
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 46447877b59419..bb02d5d474c283 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -367,7 +367,7 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ goto fail;
+
+ ip = trampoline + size;
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ if (cpu_wants_rethunk_at(ip))
+ __text_gen_insn(ip, JMP32_INSN_OPCODE, ip, x86_return_thunk, JMP32_INSN_SIZE);
+ else
+ memcpy(ip, retq, sizeof(retq));
+@@ -422,8 +422,6 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ /* ALLOC_TRAMP flags lets us know we created it */
+ ops->flags |= FTRACE_OPS_FL_ALLOC_TRAMP;
+
+- set_vm_flush_reset_perms(trampoline);
+-
+ if (likely(system_state != SYSTEM_BOOTING))
+ set_memory_ro((unsigned long)trampoline, npages);
+ set_memory_x((unsigned long)trampoline, npages);
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index 6d59c8e7719b1f..5254317125d895 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -403,7 +403,6 @@ void *alloc_insn_page(void)
+ if (!page)
+ return NULL;
+
+- set_vm_flush_reset_perms(page);
+ /*
+ * First make the page read-only, and only then make it executable to
+ * prevent it from being W+X in between.
+@@ -462,50 +461,26 @@ static void kprobe_emulate_call(struct kprobe *p, struct pt_regs *regs)
+ }
+ NOKPROBE_SYMBOL(kprobe_emulate_call);
+
+-static nokprobe_inline
+-void __kprobe_emulate_jmp(struct kprobe *p, struct pt_regs *regs, bool cond)
++static void kprobe_emulate_jmp(struct kprobe *p, struct pt_regs *regs)
+ {
+ unsigned long ip = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
+
+- if (cond)
+- ip += p->ainsn.rel32;
++ ip += p->ainsn.rel32;
+ int3_emulate_jmp(regs, ip);
+ }
+-
+-static void kprobe_emulate_jmp(struct kprobe *p, struct pt_regs *regs)
+-{
+- __kprobe_emulate_jmp(p, regs, true);
+-}
+ NOKPROBE_SYMBOL(kprobe_emulate_jmp);
+
+-static const unsigned long jcc_mask[6] = {
+- [0] = X86_EFLAGS_OF,
+- [1] = X86_EFLAGS_CF,
+- [2] = X86_EFLAGS_ZF,
+- [3] = X86_EFLAGS_CF | X86_EFLAGS_ZF,
+- [4] = X86_EFLAGS_SF,
+- [5] = X86_EFLAGS_PF,
+-};
+-
+ static void kprobe_emulate_jcc(struct kprobe *p, struct pt_regs *regs)
+ {
+- bool invert = p->ainsn.jcc.type & 1;
+- bool match;
++ unsigned long ip = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
+
+- if (p->ainsn.jcc.type < 0xc) {
+- match = regs->flags & jcc_mask[p->ainsn.jcc.type >> 1];
+- } else {
+- match = ((regs->flags & X86_EFLAGS_SF) >> X86_EFLAGS_SF_BIT) ^
+- ((regs->flags & X86_EFLAGS_OF) >> X86_EFLAGS_OF_BIT);
+- if (p->ainsn.jcc.type >= 0xe)
+- match = match || (regs->flags & X86_EFLAGS_ZF);
+- }
+- __kprobe_emulate_jmp(p, regs, (match && !invert) || (!match && invert));
++ int3_emulate_jcc(regs, p->ainsn.jcc.type, ip, p->ainsn.rel32);
+ }
+ NOKPROBE_SYMBOL(kprobe_emulate_jcc);
+
+ static void kprobe_emulate_loop(struct kprobe *p, struct pt_regs *regs)
+ {
++ unsigned long ip = regs->ip - INT3_INSN_SIZE + p->ainsn.size;
+ bool match;
+
+ if (p->ainsn.loop.type != 3) { /* LOOP* */
+@@ -533,7 +508,9 @@ static void kprobe_emulate_loop(struct kprobe *p, struct pt_regs *regs)
+ else if (p->ainsn.loop.type == 1) /* LOOPE */
+ match = match && (regs->flags & X86_EFLAGS_ZF);
+
+- __kprobe_emulate_jmp(p, regs, match);
++ if (match)
++ ip += p->ainsn.rel32;
++ int3_emulate_jmp(regs, ip);
+ }
+ NOKPROBE_SYMBOL(kprobe_emulate_loop);
+
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index 455e195847f9e9..1ab8e583c795d1 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -73,10 +73,10 @@ void *module_alloc(unsigned long size)
+ return NULL;
+
+ p = __vmalloc_node_range(size, MODULE_ALIGN,
+- MODULES_VADDR + get_module_load_offset(),
+- MODULES_END, GFP_KERNEL,
+- PAGE_KERNEL, 0, NUMA_NO_NODE,
+- __builtin_return_address(0));
++ MODULES_VADDR + get_module_load_offset(),
++ MODULES_END, GFP_KERNEL, PAGE_KERNEL,
++ VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
++ __builtin_return_address(0));
+ if (p && (kasan_module_alloc(p, size) < 0)) {
+ vfree(p);
+ return NULL;
+@@ -274,10 +274,15 @@ int module_finalize(const Elf_Ehdr *hdr,
+ returns = s;
+ }
+
++ its_init_mod(me);
++
+ if (retpolines) {
+ void *rseg = (void *)retpolines->sh_addr;
+ apply_retpolines(rseg, rseg + retpolines->sh_size);
+ }
++
++ its_fini_mod(me);
++
+ if (returns) {
+ void *rseg = (void *)returns->sh_addr;
+ apply_returns(rseg, rseg + returns->sh_size);
+@@ -313,4 +318,5 @@ int module_finalize(const Elf_Ehdr *hdr,
+ void module_arch_cleanup(struct module *mod)
+ {
+ alternatives_smp_module_del(mod);
++ its_free_mod(mod);
+ }
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 34e7c49b8057db..fd91c301218d5f 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -825,6 +825,11 @@ static int prefer_mwait_c1_over_halt(const struct cpuinfo_x86 *c)
+ */
+ static __cpuidle void mwait_idle(void)
+ {
++ if (need_resched())
++ return;
++
++ x86_idle_clear_cpu_buffers();
++
+ if (!current_set_polling_and_test()) {
+ if (this_cpu_has(X86_BUG_CLFLUSH_MONITOR)) {
+ mb(); /* quirk */
+@@ -833,13 +838,17 @@ static __cpuidle void mwait_idle(void)
+ }
+
+ __monitor((void *)&current_thread_info()->flags, 0, 0);
+- if (!need_resched())
+- __sti_mwait(0, 0);
+- else
++ if (need_resched()) {
+ raw_local_irq_enable();
++ goto out;
++ }
++
++ __sti_mwait(0, 0);
+ } else {
+ raw_local_irq_enable();
+ }
++
++out:
+ __current_clr_polling();
+ }
+
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 4544f124bbd4d6..42564d29eb1bac 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -41,7 +41,7 @@ static void __ref __static_call_transform(void *insn, enum insn_type type,
+ break;
+
+ case RET:
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK))
++ if (cpu_wants_rethunk_at(insn))
+ code = text_gen_insn(JMP32_INSN_OPCODE, insn, x86_return_thunk);
+ else
+ code = &retinsn;
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 740f87d8aa4814..1f77896515c523 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -538,6 +538,14 @@ INIT_PER_CPU(irq_stack_backing_store);
+ "SRSO function pair won't alias");
+ #endif
+
++#ifdef CONFIG_MITIGATION_ITS
++. = ASSERT(__x86_indirect_its_thunk_rax & 0x20, "__x86_indirect_thunk_rax not in second half of cacheline");
++. = ASSERT(((__x86_indirect_its_thunk_rcx - __x86_indirect_its_thunk_rax) % 64) == 0, "Indirect thunks are not cacheline apart");
++. = ASSERT(__x86_indirect_its_thunk_array == __x86_indirect_its_thunk_rax, "Gap in ITS thunk array");
++
++. = ASSERT(its_return_thunk & 0x20, "its_return_thunk not in second half of cacheline");
++#endif
++
+ #endif /* CONFIG_X86_32 */
+
+ #ifdef CONFIG_KEXEC_CORE
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 8b07e48612d7d9..ab0ae4a30fd154 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -500,6 +500,15 @@ void kvm_set_cpu_caps(void)
+ */
+ kvm_cpu_cap_mask(CPUID_8000_000A_EDX, 0);
+
++ if (cpu_feature_enabled(X86_FEATURE_VERW_CLEAR))
++ kvm_cpu_cap_set(X86_FEATURE_VERW_CLEAR);
++
++ if (cpu_feature_enabled(X86_FEATURE_TSA_SQ_NO))
++ kvm_cpu_cap_set(X86_FEATURE_TSA_SQ_NO);
++
++ if (cpu_feature_enabled(X86_FEATURE_TSA_L1_NO))
++ kvm_cpu_cap_set(X86_FEATURE_TSA_L1_NO);
++
+ kvm_cpu_cap_mask(CPUID_C000_0001_EDX,
+ F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
+ F(ACE2) | F(ACE2_EN) | F(PHE) | F(PHE_EN) |
+@@ -810,7 +819,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ entry->edx = 0;
+ break;
+ case 0x80000000:
+- entry->eax = min(entry->eax, 0x8000001f);
++ entry->eax = min(entry->eax, 0x80000021);
+ break;
+ case 0x80000001:
+ entry->ebx &= ~GENMASK(27, 16);
+@@ -875,6 +884,26 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ if (!boot_cpu_has(X86_FEATURE_SEV))
+ entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+ break;
++ case 0x80000020:
++ entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
++ break;
++ case 0x80000021:
++ entry->ebx = entry->edx = 0;
++ /*
++ * Pass down these bits:
++ * EAX 0 NNDBP, Processor ignores nested data breakpoints
++ * EAX 2 LAS, LFENCE always serializing
++ * EAX 5 VERW_CLEAR, mitigate TSA
++ * EAX 6 NSCB, Null selector clear base
++ *
++ * Other defined bits are for MSRs that KVM does not expose:
++ * EAX 3 SPCL, SMM page configuration lock
++ * EAX 13 PCMSR, Prefetch control MSR
++ */
++ cpuid_entry_override(entry, CPUID_8000_0021_EAX);
++ entry->eax &= BIT(0) | BIT(2) | BIT(5) | BIT(6);
++ cpuid_entry_override(entry, CPUID_8000_0021_ECX);
++ break;
+ /*Add support for Centaur's CPUID instruction*/
+ case 0xC0000000:
+ /*Just support up to 0xC0000004 now*/
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index e25853c2eb0fc2..88315d43d38066 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -64,6 +64,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
+ [CPUID_7_EDX] = { 7, 0, CPUID_EDX},
+ [CPUID_7_1_EAX] = { 7, 1, CPUID_EAX},
+ [CPUID_8000_0021_EAX] = {0x80000021, 0, CPUID_EAX},
++ [CPUID_8000_0021_ECX] = {0x80000021, 0, CPUID_ECX},
+ };
+
+ /*
+diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
+index a8859c17325800..c3ec69f94b4548 100644
+--- a/arch/x86/kvm/svm/vmenter.S
++++ b/arch/x86/kvm/svm/vmenter.S
+@@ -77,6 +77,9 @@ SYM_FUNC_START(__svm_vcpu_run)
+ /* "POP" @vmcb to RAX. */
+ pop %_ASM_AX
+
++ /* Clobbers EFLAGS.ZF */
++ VM_CLEAR_CPU_BUFFERS
++
+ /* Enter guest mode */
+ sti
+ 1: vmload %_ASM_AX
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 1908f2aae9fa24..795bbaf89d94e1 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6810,7 +6810,7 @@ static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
+ vmx_l1d_flush(vcpu);
+ else if (static_branch_unlikely(&mmio_stale_data_clear) &&
+ kvm_arch_has_assigned_device(vcpu->kvm))
+- mds_clear_cpu_buffers();
++ x86_clear_cpu_buffers();
+
+ vmx_disable_fb_clear(vmx);
+
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index bc295439360e5a..b61f697479a37f 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1390,7 +1390,7 @@ static unsigned int num_msr_based_features;
+ ARCH_CAP_PSCHANGE_MC_NO | ARCH_CAP_TSX_CTRL_MSR | ARCH_CAP_TAA_NO | \
+ ARCH_CAP_SBDR_SSDP_NO | ARCH_CAP_FBSDP_NO | ARCH_CAP_PSDP_NO | \
+ ARCH_CAP_FB_CLEAR | ARCH_CAP_RRSBA | ARCH_CAP_PBRSB_NO | ARCH_CAP_GDS_NO | \
+- ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR)
++ ARCH_CAP_RFDS_NO | ARCH_CAP_RFDS_CLEAR | ARCH_CAP_ITS_NO)
+
+ static u64 kvm_get_arch_capabilities(void)
+ {
+@@ -1429,6 +1429,8 @@ static u64 kvm_get_arch_capabilities(void)
+ data |= ARCH_CAP_MDS_NO;
+ if (!boot_cpu_has_bug(X86_BUG_RFDS))
+ data |= ARCH_CAP_RFDS_NO;
++ if (!boot_cpu_has_bug(X86_BUG_ITS))
++ data |= ARCH_CAP_ITS_NO;
+
+ if (!boot_cpu_has(X86_FEATURE_RTM)) {
+ /*
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index d1902213a0d637..01fcf0cd679bd6 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -255,6 +255,45 @@ SYM_FUNC_START(entry_untrain_ret)
+ SYM_FUNC_END(entry_untrain_ret)
+ __EXPORT_THUNK(entry_untrain_ret)
+
++#ifdef CONFIG_MITIGATION_ITS
++
++.macro ITS_THUNK reg
++
++SYM_INNER_LABEL(__x86_indirect_its_thunk_\reg, SYM_L_GLOBAL)
++ UNWIND_HINT_EMPTY
++ ANNOTATE_NOENDBR
++ ANNOTATE_RETPOLINE_SAFE
++ jmp *%\reg
++ int3
++ .align 32, 0xcc /* fill to the end of the line */
++ .skip 32, 0xcc /* skip to the next upper half */
++.endm
++
++/* ITS mitigation requires thunks be aligned to upper half of cacheline */
++.align 64, 0xcc
++.skip 32, 0xcc
++SYM_CODE_START(__x86_indirect_its_thunk_array)
++
++#define GEN(reg) ITS_THUNK reg
++#include <asm/GEN-for-each-reg.h>
++#undef GEN
++
++ .align 64, 0xcc
++SYM_CODE_END(__x86_indirect_its_thunk_array)
++
++.align 64, 0xcc
++.skip 32, 0xcc
++SYM_CODE_START(its_return_thunk)
++ UNWIND_HINT_FUNC
++ ANNOTATE_NOENDBR
++ ANNOTATE_UNRET_SAFE
++ ret
++ int3
++SYM_CODE_END(its_return_thunk)
++EXPORT_SYMBOL(its_return_thunk)
++
++#endif /* CONFIG_MITIGATION_ITS */
++
+ SYM_CODE_START(__x86_return_thunk)
+ UNWIND_HINT_FUNC
+ ANNOTATE_NOENDBR
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index d7d592c0929835..d3450088569ff3 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -387,7 +387,11 @@ static void emit_indirect_jump(u8 **pprog, int reg, u8 *ip)
+ int cnt = 0;
+
+ #ifdef CONFIG_RETPOLINE
+- if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
++ if (IS_ENABLED(CONFIG_MITIGATION_ITS) &&
++ cpu_feature_enabled(X86_FEATURE_INDIRECT_THUNK_ITS)) {
++ OPTIMIZER_HIDE_VAR(reg);
++ emit_jump(&prog, its_static_thunk(reg), ip);
++ } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE_LFENCE)) {
+ EMIT_LFENCE();
+ EMIT2(0xFF, 0xE0 + reg);
+ } else if (cpu_feature_enabled(X86_FEATURE_RETPOLINE)) {
+@@ -404,7 +408,7 @@ static void emit_return(u8 **pprog, u8 *ip)
+ u8 *prog = *pprog;
+ int cnt = 0;
+
+- if (cpu_feature_enabled(X86_FEATURE_RETHUNK)) {
++ if (cpu_wants_rethunk()) {
+ emit_jump(&prog, x86_return_thunk, ip);
+ } else {
+ EMIT1(0xC3); /* ret */
+diff --git a/arch/x86/um/asm/checksum.h b/arch/x86/um/asm/checksum.h
+index b07824500363fa..ddc144657efad9 100644
+--- a/arch/x86/um/asm/checksum.h
++++ b/arch/x86/um/asm/checksum.h
+@@ -20,6 +20,9 @@
+ */
+ extern __wsum csum_partial(const void *buff, int len, __wsum sum);
+
++/* Do not call this directly. Declared for export type visibility. */
++extern __visible __wsum csum_partial_copy_generic(const void *src, void *dst, int len);
++
+ /**
+ * csum_fold - Fold and invert a 32bit checksum.
+ * sum: 32bit unfolded sum
+diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
+index b84ab722feb444..6d592d8655cd61 100644
+--- a/drivers/acpi/acpi_pad.c
++++ b/drivers/acpi/acpi_pad.c
+@@ -128,8 +128,11 @@ static void round_robin_cpu(unsigned int tsk_index)
+ static void exit_round_robin(unsigned int tsk_index)
+ {
+ struct cpumask *pad_busy_cpus = to_cpumask(pad_busy_cpus_bits);
+- cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus);
+- tsk_in_cpu[tsk_index] = -1;
++
++ if (tsk_in_cpu[tsk_index] != -1) {
++ cpumask_clear_cpu(tsk_in_cpu[tsk_index], pad_busy_cpus);
++ tsk_in_cpu[tsk_index] = -1;
++ }
+ }
+
+ static unsigned int idle_pct = 5; /* percentage */
+diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c
+index 97971c79c5f568..13c67f58e90521 100644
+--- a/drivers/acpi/acpica/dsmethod.c
++++ b/drivers/acpi/acpica/dsmethod.c
+@@ -483,6 +483,13 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread,
+ return_ACPI_STATUS(AE_NULL_OBJECT);
+ }
+
++ if (this_walk_state->num_operands < obj_desc->method.param_count) {
++ ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]",
++ acpi_ut_get_node_name(method_node)));
++
++ return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG);
++ }
++
+ /* Init for new method, possibly wait on method mutex */
+
+ status =
+diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
+index 4a188cc28b5ce1..f9fb092f33a263 100644
+--- a/drivers/acpi/battery.c
++++ b/drivers/acpi/battery.c
+@@ -255,23 +255,10 @@ static int acpi_battery_get_property(struct power_supply *psy,
+ break;
+ case POWER_SUPPLY_PROP_CURRENT_NOW:
+ case POWER_SUPPLY_PROP_POWER_NOW:
+- if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN) {
++ if (battery->rate_now == ACPI_BATTERY_VALUE_UNKNOWN)
+ ret = -ENODEV;
+- break;
+- }
+-
+- val->intval = battery->rate_now * 1000;
+- /*
+- * When discharging, the current should be reported as a
+- * negative number as per the power supply class interface
+- * definition.
+- */
+- if (psp == POWER_SUPPLY_PROP_CURRENT_NOW &&
+- (battery->state & ACPI_BATTERY_STATE_DISCHARGING) &&
+- acpi_battery_handle_discharging(battery)
+- == POWER_SUPPLY_STATUS_DISCHARGING)
+- val->intval = -val->intval;
+-
++ else
++ val->intval = battery->rate_now * 1000;
+ break;
+ case POWER_SUPPLY_PROP_CHARGE_FULL_DESIGN:
+ case POWER_SUPPLY_PROP_ENERGY_FULL_DESIGN:
+diff --git a/drivers/ata/pata_cs5536.c b/drivers/ata/pata_cs5536.c
+index 760ac6e65216f7..3737d1bf1539d5 100644
+--- a/drivers/ata/pata_cs5536.c
++++ b/drivers/ata/pata_cs5536.c
+@@ -27,7 +27,7 @@
+ #include <scsi/scsi_host.h>
+ #include <linux/dmi.h>
+
+-#ifdef CONFIG_X86_32
++#if defined(CONFIG_X86) && defined(CONFIG_X86_32)
+ #include <asm/msr.h>
+ static int use_msr;
+ module_param_named(msr, use_msr, int, 0644);
+diff --git a/drivers/atm/idt77252.c b/drivers/atm/idt77252.c
+index 25fd73fafb3711..89b0ed8e514306 100644
+--- a/drivers/atm/idt77252.c
++++ b/drivers/atm/idt77252.c
+@@ -852,6 +852,8 @@ queue_skb(struct idt77252_dev *card, struct vc_map *vc,
+
+ IDT77252_PRV_PADDR(skb) = dma_map_single(&card->pcidev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
++ if (dma_mapping_error(&card->pcidev->dev, IDT77252_PRV_PADDR(skb)))
++ return -ENOMEM;
+
+ error = -EINVAL;
+
+@@ -1863,6 +1865,8 @@ add_rx_skb(struct idt77252_dev *card, int queue,
+ paddr = dma_map_single(&card->pcidev->dev, skb->data,
+ skb_end_pointer(skb) - skb->data,
+ DMA_FROM_DEVICE);
++ if (dma_mapping_error(&card->pcidev->dev, paddr))
++ goto outpoolrm;
+ IDT77252_PRV_PADDR(skb) = paddr;
+
+ if (push_rx_skb(card, skb, queue)) {
+@@ -1877,6 +1881,7 @@ add_rx_skb(struct idt77252_dev *card, int queue,
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb_end_pointer(skb) - skb->data, DMA_FROM_DEVICE);
+
++outpoolrm:
+ handle = IDT77252_PRV_POOL(skb);
+ card->sbpool[POOL_QUEUE(handle)].skb[POOL_INDEX(handle)] = NULL;
+
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index e3aed8333f0976..377d8837e2b128 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -597,6 +597,17 @@ ssize_t __weak cpu_show_reg_file_data_sampling(struct device *dev,
+ return sysfs_emit(buf, "Not affected\n");
+ }
+
++ssize_t __weak cpu_show_indirect_target_selection(struct device *dev,
++ struct device_attribute *attr, char *buf)
++{
++ return sysfs_emit(buf, "Not affected\n");
++}
++
++ssize_t __weak cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf)
++{
++ return sysfs_emit(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -611,6 +622,8 @@ static DEVICE_ATTR(retbleed, 0444, cpu_show_retbleed, NULL);
+ static DEVICE_ATTR(gather_data_sampling, 0444, cpu_show_gds, NULL);
+ static DEVICE_ATTR(spec_rstack_overflow, 0444, cpu_show_spec_rstack_overflow, NULL);
+ static DEVICE_ATTR(reg_file_data_sampling, 0444, cpu_show_reg_file_data_sampling, NULL);
++static DEVICE_ATTR(indirect_target_selection, 0444, cpu_show_indirect_target_selection, NULL);
++static DEVICE_ATTR(tsa, 0444, cpu_show_tsa, NULL);
+
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_meltdown.attr,
+@@ -627,6 +640,8 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ &dev_attr_gather_data_sampling.attr,
+ &dev_attr_spec_rstack_overflow.attr,
+ &dev_attr_reg_file_data_sampling.attr,
++ &dev_attr_indirect_target_selection.attr,
++ &dev_attr_tsa.attr,
+ NULL
+ };
+
+diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
+index 1187e5e80eded5..539cb4e0433865 100644
+--- a/drivers/dma-buf/dma-resv.c
++++ b/drivers/dma-buf/dma-resv.c
+@@ -591,7 +591,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
+ goto retry;
+ }
+
+- ret = dma_fence_wait_timeout(fence, intr, ret);
++ ret = dma_fence_wait_timeout(fence, intr, timeout);
+ dma_fence_put(fence);
+ if (ret > 0 && wait_all && (i + 1 < shared_count))
+ goto retry;
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index 12ad4bb3c5f282..3ecf0109af2ba4 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -2844,6 +2844,8 @@ static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
+ return -EINVAL;
+ }
+
++ xdev->common.directions |= chan->direction;
++
+ /* Request the interrupt */
+ chan->irq = irq_of_parse_and_map(node, chan->tdest);
+ err = request_irq(chan->irq, xdev->dma_config->irq_handler,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+index e3ba0cd3b6fa71..a60bffe48501cc 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_packet_manager_v9.c
+@@ -156,7 +156,7 @@ static int pm_map_queues_v9(struct packet_manager *pm, uint32_t *buffer,
+
+ packet->bitfields2.engine_sel =
+ engine_sel__mes_map_queues__compute_vi;
+- packet->bitfields2.gws_control_queue = q->gws ? 1 : 0;
++ packet->bitfields2.gws_control_queue = q->properties.is_gws ? 1 : 0;
+ packet->bitfields2.extended_engine_sel =
+ extended_engine_sel__mes_map_queues__legacy_engine_sel;
+ packet->bitfields2.queue_type =
+diff --git a/drivers/gpu/drm/bridge/cdns-dsi.c b/drivers/gpu/drm/bridge/cdns-dsi.c
+index 0ced08d81d7a26..77f05378efbdab 100644
+--- a/drivers/gpu/drm/bridge/cdns-dsi.c
++++ b/drivers/gpu/drm/bridge/cdns-dsi.c
+@@ -608,15 +608,18 @@ static int cdns_dsi_check_conf(struct cdns_dsi *dsi,
+ struct phy_configure_opts_mipi_dphy *phy_cfg = &output->phy_opts.mipi_dphy;
+ unsigned long dsi_hss_hsa_hse_hbp;
+ unsigned int nlanes = output->dev->lanes;
++ int mode_clock = (mode_valid_check ? mode->clock : mode->crtc_clock);
+ int ret;
+
+ ret = cdns_dsi_mode2cfg(dsi, mode, dsi_cfg, mode_valid_check);
+ if (ret)
+ return ret;
+
+- phy_mipi_dphy_get_default_config(mode->crtc_clock * 1000,
+- mipi_dsi_pixel_format_to_bpp(output->dev->format),
+- nlanes, phy_cfg);
++ ret = phy_mipi_dphy_get_default_config(mode_clock * 1000,
++ mipi_dsi_pixel_format_to_bpp(output->dev->format),
++ nlanes, phy_cfg);
++ if (ret)
++ return ret;
+
+ ret = cdns_dsi_adjust_phy_config(dsi, dsi_cfg, phy_cfg, mode, mode_valid_check);
+ if (ret)
+@@ -786,8 +789,9 @@ static void cdns_dsi_bridge_enable(struct drm_bridge *bridge)
+ struct phy_configure_opts_mipi_dphy *phy_cfg = &output->phy_opts.mipi_dphy;
+ unsigned long tx_byte_period;
+ struct cdns_dsi_cfg dsi_cfg;
+- u32 tmp, reg_wakeup, div;
++ u32 tmp, reg_wakeup, div, status;
+ int nlanes;
++ int i;
+
+ if (WARN_ON(pm_runtime_get_sync(dsi->base.dev) < 0))
+ return;
+@@ -800,6 +804,19 @@ static void cdns_dsi_bridge_enable(struct drm_bridge *bridge)
+ cdns_dsi_hs_init(dsi);
+ cdns_dsi_init_link(dsi);
+
++ /*
++ * Now that the DSI Link and DSI Phy are initialized,
++ * wait for the CLK and Data Lanes to be ready.
++ */
++ tmp = CLK_LANE_RDY;
++ for (i = 0; i < nlanes; i++)
++ tmp |= DATA_LANE_RDY(i);
++
++ if (readl_poll_timeout(dsi->regs + MCTL_MAIN_STS, status,
++ (tmp == (status & tmp)), 100, 500000))
++ dev_err(dsi->base.dev,
++ "Timed Out: DSI-DPhy Clock and Data Lanes not ready.\n");
++
+ writel(HBP_LEN(dsi_cfg.hbp) | HSA_LEN(dsi_cfg.hsa),
+ dsi->regs + VID_HSIZE1);
+ writel(HFP_LEN(dsi_cfg.hfp) | HACT_LEN(dsi_cfg.hact),
+@@ -960,7 +977,7 @@ static int cdns_dsi_attach(struct mipi_dsi_host *host,
+ bridge = drm_panel_bridge_add_typed(panel,
+ DRM_MODE_CONNECTOR_DSI);
+ } else {
+- bridge = of_drm_find_bridge(dev->dev.of_node);
++ bridge = of_drm_find_bridge(np);
+ if (!bridge)
+ bridge = ERR_PTR(-EINVAL);
+ }
+diff --git a/drivers/gpu/drm/exynos/exynos7_drm_decon.c b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+index 1c04c232dce152..81494d5938303d 100644
+--- a/drivers/gpu/drm/exynos/exynos7_drm_decon.c
++++ b/drivers/gpu/drm/exynos/exynos7_drm_decon.c
+@@ -595,6 +595,10 @@ static irqreturn_t decon_irq_handler(int irq, void *dev_id)
+ if (!ctx->drm_dev)
+ goto out;
+
++ /* check if crtc and vblank have been initialized properly */
++ if (!drm_dev_has_vblank(ctx->drm_dev))
++ goto out;
++
+ if (!ctx->i80_if) {
+ drm_crtc_handle_vblank(&ctx->crtc->base);
+
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+index c045330f9c48fa..3b89a8774db5a3 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c
+@@ -182,6 +182,7 @@ struct fimd_context {
+ u32 i80ifcon;
+ bool i80_if;
+ bool suspended;
++ bool dp_clk_enabled;
+ wait_queue_head_t wait_vsync_queue;
+ atomic_t wait_vsync_event;
+ atomic_t win_updated;
+@@ -1003,7 +1004,18 @@ static void fimd_dp_clock_enable(struct exynos_drm_clk *clk, bool enable)
+ struct fimd_context *ctx = container_of(clk, struct fimd_context,
+ dp_clk);
+ u32 val = enable ? DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE;
++
++ if (enable == ctx->dp_clk_enabled)
++ return;
++
++ if (enable)
++ pm_runtime_resume_and_get(ctx->dev);
++
++ ctx->dp_clk_enabled = enable;
+ writel(val, ctx->regs + DP_MIE_CLKCON);
++
++ if (!enable)
++ pm_runtime_put(ctx->dev);
+ }
+
+ static const struct exynos_drm_crtc_ops fimd_crtc_ops = {
+diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+index 6aaca73eaee60b..af57192a7846f8 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c
++++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c
+@@ -576,7 +576,6 @@ static int ring_context_alloc(struct intel_context *ce)
+ /* One ringbuffer to rule them all */
+ GEM_BUG_ON(!engine->legacy.ring);
+ ce->ring = engine->legacy.ring;
+- ce->timeline = intel_timeline_get(engine->legacy.timeline);
+
+ GEM_BUG_ON(ce->state);
+ if (engine->context_size) {
+@@ -591,6 +590,8 @@ static int ring_context_alloc(struct intel_context *ce)
+ __set_bit(CONTEXT_VALID_BIT, &ce->flags);
+ }
+
++ ce->timeline = intel_timeline_get(engine->legacy.timeline);
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c
+index 7a72faf29f2721..1881a97659a7a3 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_request.c
++++ b/drivers/gpu/drm/i915/selftests/i915_request.c
+@@ -71,8 +71,8 @@ static int igt_add_request(void *arg)
+ /* Basic preliminary test to create a request and let it loose! */
+
+ request = mock_request(rcs0(i915)->kernel_context, HZ / 10);
+- if (!request)
+- return -ENOMEM;
++ if (IS_ERR(request))
++ return PTR_ERR(request);
+
+ i915_request_add(request);
+
+@@ -89,8 +89,8 @@ static int igt_wait_request(void *arg)
+ /* Submit a request, then wait upon it */
+
+ request = mock_request(rcs0(i915)->kernel_context, T);
+- if (!request)
+- return -ENOMEM;
++ if (IS_ERR(request))
++ return PTR_ERR(request);
+
+ i915_request_get(request);
+
+@@ -158,8 +158,8 @@ static int igt_fence_wait(void *arg)
+ /* Submit a request, treat it as a fence and wait upon it */
+
+ request = mock_request(rcs0(i915)->kernel_context, T);
+- if (!request)
+- return -ENOMEM;
++ if (IS_ERR(request))
++ return PTR_ERR(request);
+
+ if (dma_fence_wait_timeout(&request->fence, false, T) != -ETIME) {
+ pr_err("fence wait success before submit (expected timeout)!\n");
+@@ -213,8 +213,8 @@ static int igt_request_rewind(void *arg)
+ GEM_BUG_ON(IS_ERR(ce));
+ request = mock_request(ce, 2 * HZ);
+ intel_context_put(ce);
+- if (!request) {
+- err = -ENOMEM;
++ if (IS_ERR(request)) {
++ err = PTR_ERR(request);
+ goto err_context_0;
+ }
+
+@@ -227,8 +227,8 @@ static int igt_request_rewind(void *arg)
+ GEM_BUG_ON(IS_ERR(ce));
+ vip = mock_request(ce, 0);
+ intel_context_put(ce);
+- if (!vip) {
+- err = -ENOMEM;
++ if (IS_ERR(vip)) {
++ err = PTR_ERR(vip);
+ goto err_context_1;
+ }
+
+diff --git a/drivers/gpu/drm/i915/selftests/mock_request.c b/drivers/gpu/drm/i915/selftests/mock_request.c
+index 09f747228dff57..1b0cf073e9643f 100644
+--- a/drivers/gpu/drm/i915/selftests/mock_request.c
++++ b/drivers/gpu/drm/i915/selftests/mock_request.c
+@@ -35,7 +35,7 @@ mock_request(struct intel_context *ce, unsigned long delay)
+ /* NB the i915->requests slab cache is enlarged to fit mock_request */
+ request = intel_context_create_request(ce);
+ if (IS_ERR(request))
+- return NULL;
++ return request;
+
+ request->mock.delay = delay;
+ return request;
+diff --git a/drivers/gpu/drm/tegra/dc.c b/drivers/gpu/drm/tegra/dc.c
+index 958d12da902d33..85b4c2cc544f8e 100644
+--- a/drivers/gpu/drm/tegra/dc.c
++++ b/drivers/gpu/drm/tegra/dc.c
+@@ -1134,10 +1134,16 @@ static struct drm_plane *tegra_dc_add_shared_planes(struct drm_device *drm,
+ if (wgrp->dc == dc->pipe) {
+ for (j = 0; j < wgrp->num_windows; j++) {
+ unsigned int index = wgrp->windows[j];
++ enum drm_plane_type type;
++
++ if (primary)
++ type = DRM_PLANE_TYPE_OVERLAY;
++ else
++ type = DRM_PLANE_TYPE_PRIMARY;
+
+ plane = tegra_shared_plane_create(drm, dc,
+ wgrp->index,
+- index);
++ index, type);
+ if (IS_ERR(plane))
+ return plane;
+
+@@ -1145,10 +1151,8 @@ static struct drm_plane *tegra_dc_add_shared_planes(struct drm_device *drm,
+ * Choose the first shared plane owned by this
+ * head as the primary plane.
+ */
+- if (!primary) {
+- plane->type = DRM_PLANE_TYPE_PRIMARY;
++ if (!primary)
+ primary = plane;
+- }
+ }
+ }
+ }
+@@ -1202,7 +1206,10 @@ static void tegra_crtc_reset(struct drm_crtc *crtc)
+ if (crtc->state)
+ tegra_crtc_atomic_destroy_state(crtc, crtc->state);
+
+- __drm_atomic_helper_crtc_reset(crtc, &state->base);
++ if (state)
++ __drm_atomic_helper_crtc_reset(crtc, &state->base);
++ else
++ __drm_atomic_helper_crtc_reset(crtc, NULL);
+ }
+
+ static struct drm_crtc_state *
+diff --git a/drivers/gpu/drm/tegra/hub.c b/drivers/gpu/drm/tegra/hub.c
+index 5ce771cba1335f..bafc865afe4ba7 100644
+--- a/drivers/gpu/drm/tegra/hub.c
++++ b/drivers/gpu/drm/tegra/hub.c
+@@ -551,9 +551,9 @@ static const struct drm_plane_helper_funcs tegra_shared_plane_helper_funcs = {
+ struct drm_plane *tegra_shared_plane_create(struct drm_device *drm,
+ struct tegra_dc *dc,
+ unsigned int wgrp,
+- unsigned int index)
++ unsigned int index,
++ enum drm_plane_type type)
+ {
+- enum drm_plane_type type = DRM_PLANE_TYPE_OVERLAY;
+ struct tegra_drm *tegra = drm->dev_private;
+ struct tegra_display_hub *hub = tegra->hub;
+ /* planes can be assigned to arbitrary CRTCs */
+diff --git a/drivers/gpu/drm/tegra/hub.h b/drivers/gpu/drm/tegra/hub.h
+index 3efa1be07ff882..aa219450413a5f 100644
+--- a/drivers/gpu/drm/tegra/hub.h
++++ b/drivers/gpu/drm/tegra/hub.h
+@@ -81,7 +81,8 @@ void tegra_display_hub_cleanup(struct tegra_display_hub *hub);
+ struct drm_plane *tegra_shared_plane_create(struct drm_device *drm,
+ struct tegra_dc *dc,
+ unsigned int wgrp,
+- unsigned int index);
++ unsigned int index,
++ enum drm_plane_type type);
+
+ int tegra_display_hub_atomic_check(struct drm_device *drm,
+ struct drm_atomic_state *state);
+diff --git a/drivers/gpu/drm/udl/udl_drv.c b/drivers/gpu/drm/udl/udl_drv.c
+index bcf32d188c1b18..8ea6fcde40bd12 100644
+--- a/drivers/gpu/drm/udl/udl_drv.c
++++ b/drivers/gpu/drm/udl/udl_drv.c
+@@ -115,9 +115,9 @@ static void udl_usb_disconnect(struct usb_interface *interface)
+ {
+ struct drm_device *dev = usb_get_intfdata(interface);
+
++ drm_dev_unplug(dev);
+ drm_kms_helper_poll_fini(dev);
+ udl_drop_usb(dev);
+- drm_dev_unplug(dev);
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index 8a390738d65baf..a605b31a8224c1 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -37,6 +37,12 @@ struct v3d_queue_state {
+ u64 emit_seqno;
+ };
+
++enum v3d_irq {
++ V3D_CORE_IRQ,
++ V3D_HUB_IRQ,
++ V3D_MAX_IRQS,
++};
++
+ struct v3d_dev {
+ struct drm_device drm;
+
+@@ -46,6 +52,7 @@ struct v3d_dev {
+ int ver;
+ bool single_irq_line;
+
++ int irq[V3D_MAX_IRQS];
+ void __iomem *hub_regs;
+ void __iomem *core_regs[3];
+ void __iomem *bridge_regs;
+diff --git a/drivers/gpu/drm/v3d/v3d_gem.c b/drivers/gpu/drm/v3d/v3d_gem.c
+index 64fe63c1938f52..32cc461937cf35 100644
+--- a/drivers/gpu/drm/v3d/v3d_gem.c
++++ b/drivers/gpu/drm/v3d/v3d_gem.c
+@@ -120,6 +120,8 @@ v3d_reset(struct v3d_dev *v3d)
+ if (false)
+ v3d_idle_axi(v3d, 0);
+
++ v3d_irq_disable(v3d);
++
+ v3d_idle_gca(v3d);
+ v3d_reset_v3d(v3d);
+
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index c678c4ce4f1134..96766a788215fd 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -218,7 +218,7 @@ v3d_hub_irq(int irq, void *arg)
+ int
+ v3d_irq_init(struct v3d_dev *v3d)
+ {
+- int irq1, ret, core;
++ int irq, ret, core;
+
+ INIT_WORK(&v3d->overflow_mem_work, v3d_overflow_mem_work);
+
+@@ -229,17 +229,24 @@ v3d_irq_init(struct v3d_dev *v3d)
+ V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS);
+ V3D_WRITE(V3D_HUB_INT_CLR, V3D_HUB_IRQS);
+
+- irq1 = platform_get_irq(v3d_to_pdev(v3d), 1);
+- if (irq1 == -EPROBE_DEFER)
+- return irq1;
+- if (irq1 > 0) {
+- ret = devm_request_irq(v3d->drm.dev, irq1,
++ irq = platform_get_irq(v3d_to_pdev(v3d), 1);
++ if (irq == -EPROBE_DEFER)
++ return irq;
++ if (irq > 0) {
++ v3d->irq[V3D_CORE_IRQ] = irq;
++
++ ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
+ v3d_irq, IRQF_SHARED,
+ "v3d_core0", v3d);
+ if (ret)
+ goto fail;
+- ret = devm_request_irq(v3d->drm.dev,
+- platform_get_irq(v3d_to_pdev(v3d), 0),
++
++ irq = platform_get_irq(v3d_to_pdev(v3d), 0);
++ if (irq < 0)
++ return irq;
++ v3d->irq[V3D_HUB_IRQ] = irq;
++
++ ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_HUB_IRQ],
+ v3d_hub_irq, IRQF_SHARED,
+ "v3d_hub", v3d);
+ if (ret)
+@@ -247,8 +254,12 @@ v3d_irq_init(struct v3d_dev *v3d)
+ } else {
+ v3d->single_irq_line = true;
+
+- ret = devm_request_irq(v3d->drm.dev,
+- platform_get_irq(v3d_to_pdev(v3d), 0),
++ irq = platform_get_irq(v3d_to_pdev(v3d), 0);
++ if (irq < 0)
++ return irq;
++ v3d->irq[V3D_CORE_IRQ] = irq;
++
++ ret = devm_request_irq(v3d->drm.dev, v3d->irq[V3D_CORE_IRQ],
+ v3d_irq, IRQF_SHARED,
+ "v3d", v3d);
+ if (ret)
+@@ -283,12 +294,19 @@ void
+ v3d_irq_disable(struct v3d_dev *v3d)
+ {
+ int core;
++ int i;
+
+ /* Disable all interrupts. */
+ for (core = 0; core < v3d->cores; core++)
+ V3D_CORE_WRITE(core, V3D_CTL_INT_MSK_SET, ~0);
+ V3D_WRITE(V3D_HUB_INT_MSK_SET, ~0);
+
++ /* Finish any interrupt handler still in flight. */
++ for (i = 0; i < V3D_MAX_IRQS; i++) {
++ if (v3d->irq[i])
++ synchronize_irq(v3d->irq[i]);
++ }
++
+ /* Clear any pending interrupts we might have left. */
+ for (core = 0; core < v3d->cores; core++)
+ V3D_CORE_WRITE(core, V3D_CTL_INT_CLR, V3D_CORE_IRQS);
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 4b8f8e0ce8ca23..8bfa90e37ea17a 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -278,6 +278,8 @@
+ #define USB_DEVICE_ID_ASUS_AK1D 0x1125
+ #define USB_DEVICE_ID_CHICONY_TOSHIBA_WT10A 0x1408
+ #define USB_DEVICE_ID_CHICONY_ACER_SWITCH12 0x1421
++#define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA 0xb824
++#define USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2 0xb82c
+
+ #define USB_VENDOR_ID_CHUNGHWAT 0x2247
+ #define USB_DEVICE_ID_CHUNGHWAT_MULTITOUCH 0x0001
+@@ -1360,4 +1362,7 @@
+ #define USB_VENDOR_ID_SIGNOTEC 0x2133
+ #define USB_DEVICE_ID_SIGNOTEC_VIEWSONIC_PD1011 0x0018
+
++#define USB_VENDOR_ID_SMARTLINKTECHNOLOGY 0x4c4a
++#define USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155 0x4155
++
+ #endif
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index b3e7ede8f398e6..9c1c65612adb78 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -726,6 +726,8 @@ static const struct hid_device_id hid_ignore_list[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_AVERMEDIA, USB_DEVICE_ID_AVER_FM_MR800) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_AXENTIA, USB_DEVICE_ID_AXENTIA_FM_RADIO) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_BERKSHIRE, USB_DEVICE_ID_BERKSHIRE_PCWD) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_HP_5MP_CAMERA2) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_CIDC, 0x0103) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI470X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_CYGNAL, USB_DEVICE_ID_CYGNAL_RADIO_SI4713) },
+@@ -873,6 +875,7 @@ static const struct hid_device_id hid_ignore_list[] = {
+ #endif
+ { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_SMARTLINKTECHNOLOGY, USB_DEVICE_ID_SMARTLINKTECHNOLOGY_4155) },
+ { }
+ };
+
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 0f1c7a2f518599..abbfb53bb7dc9f 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2020,14 +2020,18 @@ static int wacom_initialize_remotes(struct wacom *wacom)
+
+ remote->remote_dir = kobject_create_and_add("wacom_remote",
+ &wacom->hdev->dev.kobj);
+- if (!remote->remote_dir)
++ if (!remote->remote_dir) {
++ kfifo_free(&remote->remote_fifo);
+ return -ENOMEM;
++ }
+
+ error = sysfs_create_files(remote->remote_dir, remote_unpair_attrs);
+
+ if (error) {
+ hid_err(wacom->hdev,
+ "cannot create sysfs group err: %d\n", error);
++ kfifo_free(&remote->remote_fifo);
++ kobject_put(remote->remote_dir);
+ return error;
+ }
+
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 0c6c54061088ea..8300ffb1ea9ae1 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -106,7 +106,9 @@ const struct vmbus_device vmbus_devs[] = {
+ },
+
+ /* File copy */
+- { .dev_type = HV_FCOPY,
++ /* fcopy always uses 16KB ring buffer size and is working well for last many years */
++ { .pref_ring_size = 0x4000,
++ .dev_type = HV_FCOPY,
+ HV_FCOPY_GUID,
+ .perf_device = false,
+ },
+@@ -123,11 +125,18 @@ const struct vmbus_device vmbus_devs[] = {
+ .perf_device = false,
+ },
+
+- /* Unknown GUID */
+- { .dev_type = HV_UNKNOWN,
++ /*
++ * Unknown GUID
++ * 64 KB ring buffer + 4 KB header should be sufficient size for any Hyper-V device apart
++ * from HV_NIC and HV_SCSI. This case avoid the fallback for unknown devices to allocate
++ * much bigger (2 MB) of ring size.
++ */
++ { .pref_ring_size = 0x11000,
++ .dev_type = HV_UNKNOWN,
+ .perf_device = false,
+ },
+ };
++EXPORT_SYMBOL_GPL(vmbus_devs);
+
+ static const struct {
+ guid_t guid;
+@@ -429,7 +438,7 @@ void hv_process_channel_removal(struct vmbus_channel *channel)
+ * init_vp_index() can (re-)use the CPU.
+ */
+ if (hv_is_perf_channel(channel))
+- hv_clear_alloced_cpu(channel->target_cpu);
++ hv_clear_allocated_cpu(channel->target_cpu);
+
+ /*
+ * Upon suspend, an in-use hv_sock channel is marked as "rescinded" and
+@@ -579,6 +588,17 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ */
+ mutex_lock(&vmbus_connection.channel_mutex);
+
++ list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
++ if (guid_equal(&channel->offermsg.offer.if_type,
++ &newchannel->offermsg.offer.if_type) &&
++ guid_equal(&channel->offermsg.offer.if_instance,
++ &newchannel->offermsg.offer.if_instance)) {
++ fnew = false;
++ newchannel->primary_channel = channel;
++ break;
++ }
++ }
++
+ init_vp_index(newchannel);
+
+ /* Remember the channels that should be cleaned up upon suspend. */
+@@ -591,16 +611,6 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ */
+ atomic_dec(&vmbus_connection.offer_in_progress);
+
+- list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
+- if (guid_equal(&channel->offermsg.offer.if_type,
+- &newchannel->offermsg.offer.if_type) &&
+- guid_equal(&channel->offermsg.offer.if_instance,
+- &newchannel->offermsg.offer.if_instance)) {
+- fnew = false;
+- break;
+- }
+- }
+-
+ if (fnew) {
+ list_add_tail(&newchannel->listentry,
+ &vmbus_connection.chn_list);
+@@ -622,7 +632,6 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ /*
+ * Process the sub-channel.
+ */
+- newchannel->primary_channel = channel;
+ list_add_tail(&newchannel->sc_list, &channel->sc_list);
+ }
+
+@@ -658,6 +667,30 @@ static void vmbus_process_offer(struct vmbus_channel *newchannel)
+ queue_work(wq, &newchannel->add_channel_work);
+ }
+
++/*
++ * Check if CPUs used by other channels of the same device.
++ * It should only be called by init_vp_index().
++ */
++static bool hv_cpuself_used(u32 cpu, struct vmbus_channel *chn)
++{
++ struct vmbus_channel *primary = chn->primary_channel;
++ struct vmbus_channel *sc;
++
++ lockdep_assert_held(&vmbus_connection.channel_mutex);
++
++ if (!primary)
++ return false;
++
++ if (primary->target_cpu == cpu)
++ return true;
++
++ list_for_each_entry(sc, &primary->sc_list, sc_list)
++ if (sc != chn && sc->target_cpu == cpu)
++ return true;
++
++ return false;
++}
++
+ /*
+ * We use this state to statically distribute the channel interrupt load.
+ */
+@@ -677,8 +710,9 @@ static int next_numa_node_id;
+ static void init_vp_index(struct vmbus_channel *channel)
+ {
+ bool perf_chn = hv_is_perf_channel(channel);
++ u32 i, ncpu = num_online_cpus();
+ cpumask_var_t available_mask;
+- struct cpumask *alloced_mask;
++ struct cpumask *allocated_mask;
+ u32 target_cpu;
+ int numa_node;
+
+@@ -695,35 +729,42 @@ static void init_vp_index(struct vmbus_channel *channel)
+ */
+ channel->target_cpu = VMBUS_CONNECT_CPU;
+ if (perf_chn)
+- hv_set_alloced_cpu(VMBUS_CONNECT_CPU);
++ hv_set_allocated_cpu(VMBUS_CONNECT_CPU);
+ return;
+ }
+
+- while (true) {
+- numa_node = next_numa_node_id++;
+- if (numa_node == nr_node_ids) {
+- next_numa_node_id = 0;
+- continue;
++ for (i = 1; i <= ncpu + 1; i++) {
++ while (true) {
++ numa_node = next_numa_node_id++;
++ if (numa_node == nr_node_ids) {
++ next_numa_node_id = 0;
++ continue;
++ }
++ if (cpumask_empty(cpumask_of_node(numa_node)))
++ continue;
++ break;
+ }
+- if (cpumask_empty(cpumask_of_node(numa_node)))
+- continue;
+- break;
+- }
+- alloced_mask = &hv_context.hv_numa_map[numa_node];
++ allocated_mask = &hv_context.hv_numa_map[numa_node];
+
+- if (cpumask_weight(alloced_mask) ==
+- cpumask_weight(cpumask_of_node(numa_node))) {
+- /*
+- * We have cycled through all the CPUs in the node;
+- * reset the alloced map.
+- */
+- cpumask_clear(alloced_mask);
+- }
++ if (cpumask_weight(allocated_mask) ==
++ cpumask_weight(cpumask_of_node(numa_node))) {
++ /*
++ * We have cycled through all the CPUs in the node;
++ * reset the allocated map.
++ */
++ cpumask_clear(allocated_mask);
++ }
+
+- cpumask_xor(available_mask, alloced_mask, cpumask_of_node(numa_node));
++ cpumask_xor(available_mask, allocated_mask,
++ cpumask_of_node(numa_node));
+
+- target_cpu = cpumask_first(available_mask);
+- cpumask_set_cpu(target_cpu, alloced_mask);
++ target_cpu = cpumask_first(available_mask);
++ cpumask_set_cpu(target_cpu, allocated_mask);
++
++ if (channel->offermsg.offer.sub_channel_index >= ncpu ||
++ i > ncpu || !hv_cpuself_used(target_cpu, channel))
++ break;
++ }
+
+ channel->target_cpu = target_cpu;
+
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index a785d790e0aaee..1137c25d9a7aee 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -406,7 +406,12 @@ static inline bool hv_is_perf_channel(struct vmbus_channel *channel)
+ return vmbus_devs[channel->device_id].perf_device;
+ }
+
+-static inline bool hv_is_alloced_cpu(unsigned int cpu)
++static inline size_t hv_dev_ring_size(struct vmbus_channel *channel)
++{
++ return vmbus_devs[channel->device_id].pref_ring_size;
++}
++
++static inline bool hv_is_allocated_cpu(unsigned int cpu)
+ {
+ struct vmbus_channel *channel, *sc;
+
+@@ -428,23 +433,23 @@ static inline bool hv_is_alloced_cpu(unsigned int cpu)
+ return false;
+ }
+
+-static inline void hv_set_alloced_cpu(unsigned int cpu)
++static inline void hv_set_allocated_cpu(unsigned int cpu)
+ {
+ cpumask_set_cpu(cpu, &hv_context.hv_numa_map[cpu_to_node(cpu)]);
+ }
+
+-static inline void hv_clear_alloced_cpu(unsigned int cpu)
++static inline void hv_clear_allocated_cpu(unsigned int cpu)
+ {
+- if (hv_is_alloced_cpu(cpu))
++ if (hv_is_allocated_cpu(cpu))
+ return;
+ cpumask_clear_cpu(cpu, &hv_context.hv_numa_map[cpu_to_node(cpu)]);
+ }
+
+-static inline void hv_update_alloced_cpus(unsigned int old_cpu,
++static inline void hv_update_allocated_cpus(unsigned int old_cpu,
+ unsigned int new_cpu)
+ {
+- hv_set_alloced_cpu(new_cpu);
+- hv_clear_alloced_cpu(old_cpu);
++ hv_set_allocated_cpu(new_cpu);
++ hv_clear_allocated_cpu(old_cpu);
+ }
+
+ #ifdef CONFIG_HYPERV_TESTING
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index e8bea7c7916913..2fed2b169a9106 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -1790,7 +1790,7 @@ static ssize_t target_cpu_store(struct vmbus_channel *channel,
+
+ /* See init_vp_index(). */
+ if (hv_is_perf_channel(channel))
+- hv_update_alloced_cpus(origin_cpu, target_cpu);
++ hv_update_allocated_cpus(origin_cpu, target_cpu);
+
+ /* Currently set only for storvsc channels. */
+ if (channel->change_target_cpu_callback) {
+diff --git a/drivers/hwmon/pmbus/max34440.c b/drivers/hwmon/pmbus/max34440.c
+index f4cb196aaaf314..f8108f6bd58cf6 100644
+--- a/drivers/hwmon/pmbus/max34440.c
++++ b/drivers/hwmon/pmbus/max34440.c
+@@ -34,16 +34,21 @@ enum chips { max34440, max34441, max34446, max34451, max34460, max34461 };
+ /*
+ * The whole max344* family have IOUT_OC_WARN_LIMIT and IOUT_OC_FAULT_LIMIT
+ * swapped from the standard pmbus spec addresses.
++ * For max34451, version MAX34451ETNA6+ and later has this issue fixed.
+ */
+ #define MAX34440_IOUT_OC_WARN_LIMIT 0x46
+ #define MAX34440_IOUT_OC_FAULT_LIMIT 0x4A
+
++#define MAX34451ETNA6_MFR_REV 0x0012
++
+ #define MAX34451_MFR_CHANNEL_CONFIG 0xe4
+ #define MAX34451_MFR_CHANNEL_CONFIG_SEL_MASK 0x3f
+
+ struct max34440_data {
+ int id;
+ struct pmbus_driver_info info;
++ u8 iout_oc_warn_limit;
++ u8 iout_oc_fault_limit;
+ };
+
+ #define to_max34440_data(x) container_of(x, struct max34440_data, info)
+@@ -60,11 +65,11 @@ static int max34440_read_word_data(struct i2c_client *client, int page,
+ switch (reg) {
+ case PMBUS_IOUT_OC_FAULT_LIMIT:
+ ret = pmbus_read_word_data(client, page, phase,
+- MAX34440_IOUT_OC_FAULT_LIMIT);
++ data->iout_oc_fault_limit);
+ break;
+ case PMBUS_IOUT_OC_WARN_LIMIT:
+ ret = pmbus_read_word_data(client, page, phase,
+- MAX34440_IOUT_OC_WARN_LIMIT);
++ data->iout_oc_warn_limit);
+ break;
+ case PMBUS_VIRT_READ_VOUT_MIN:
+ ret = pmbus_read_word_data(client, page, phase,
+@@ -133,11 +138,11 @@ static int max34440_write_word_data(struct i2c_client *client, int page,
+
+ switch (reg) {
+ case PMBUS_IOUT_OC_FAULT_LIMIT:
+- ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_FAULT_LIMIT,
++ ret = pmbus_write_word_data(client, page, data->iout_oc_fault_limit,
+ word);
+ break;
+ case PMBUS_IOUT_OC_WARN_LIMIT:
+- ret = pmbus_write_word_data(client, page, MAX34440_IOUT_OC_WARN_LIMIT,
++ ret = pmbus_write_word_data(client, page, data->iout_oc_warn_limit,
+ word);
+ break;
+ case PMBUS_VIRT_RESET_POUT_HISTORY:
+@@ -235,6 +240,25 @@ static int max34451_set_supported_funcs(struct i2c_client *client,
+ */
+
+ int page, rv;
++ bool max34451_na6 = false;
++
++ rv = i2c_smbus_read_word_data(client, PMBUS_MFR_REVISION);
++ if (rv < 0)
++ return rv;
++
++ if (rv >= MAX34451ETNA6_MFR_REV) {
++ max34451_na6 = true;
++ data->info.format[PSC_VOLTAGE_IN] = direct;
++ data->info.format[PSC_CURRENT_IN] = direct;
++ data->info.m[PSC_VOLTAGE_IN] = 1;
++ data->info.b[PSC_VOLTAGE_IN] = 0;
++ data->info.R[PSC_VOLTAGE_IN] = 3;
++ data->info.m[PSC_CURRENT_IN] = 1;
++ data->info.b[PSC_CURRENT_IN] = 0;
++ data->info.R[PSC_CURRENT_IN] = 2;
++ data->iout_oc_fault_limit = PMBUS_IOUT_OC_FAULT_LIMIT;
++ data->iout_oc_warn_limit = PMBUS_IOUT_OC_WARN_LIMIT;
++ }
+
+ for (page = 0; page < 16; page++) {
+ rv = i2c_smbus_write_byte_data(client, PMBUS_PAGE, page);
+@@ -251,16 +275,30 @@ static int max34451_set_supported_funcs(struct i2c_client *client,
+ case 0x20:
+ data->info.func[page] = PMBUS_HAVE_VOUT |
+ PMBUS_HAVE_STATUS_VOUT;
++
++ if (max34451_na6)
++ data->info.func[page] |= PMBUS_HAVE_VIN |
++ PMBUS_HAVE_STATUS_INPUT;
+ break;
+ case 0x21:
+ data->info.func[page] = PMBUS_HAVE_VOUT;
++
++ if (max34451_na6)
++ data->info.func[page] |= PMBUS_HAVE_VIN;
+ break;
+ case 0x22:
+ data->info.func[page] = PMBUS_HAVE_IOUT |
+ PMBUS_HAVE_STATUS_IOUT;
++
++ if (max34451_na6)
++ data->info.func[page] |= PMBUS_HAVE_IIN |
++ PMBUS_HAVE_STATUS_INPUT;
+ break;
+ case 0x23:
+ data->info.func[page] = PMBUS_HAVE_IOUT;
++
++ if (max34451_na6)
++ data->info.func[page] |= PMBUS_HAVE_IIN;
+ break;
+ default:
+ break;
+@@ -494,6 +532,8 @@ static int max34440_probe(struct i2c_client *client)
+ return -ENOMEM;
+ data->id = i2c_match_id(max34440_id, client)->driver_data;
+ data->info = max34440_info[data->id];
++ data->iout_oc_fault_limit = MAX34440_IOUT_OC_FAULT_LIMIT;
++ data->iout_oc_warn_limit = MAX34440_IOUT_OC_WARN_LIMIT;
+
+ if (data->id == max34451) {
+ rv = max34451_set_supported_funcs(client, data);
+diff --git a/drivers/i2c/busses/i2c-robotfuzz-osif.c b/drivers/i2c/busses/i2c-robotfuzz-osif.c
+index 66dfa211e736b1..8e4cf9028b2342 100644
+--- a/drivers/i2c/busses/i2c-robotfuzz-osif.c
++++ b/drivers/i2c/busses/i2c-robotfuzz-osif.c
+@@ -111,6 +111,11 @@ static u32 osif_func(struct i2c_adapter *adapter)
+ return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
+ }
+
++/* prevent invalid 0-length usb_control_msg */
++static const struct i2c_adapter_quirks osif_quirks = {
++ .flags = I2C_AQ_NO_ZERO_LEN_READ,
++};
++
+ static const struct i2c_algorithm osif_algorithm = {
+ .master_xfer = osif_xfer,
+ .functionality = osif_func,
+@@ -143,6 +148,7 @@ static int osif_probe(struct usb_interface *interface,
+
+ priv->adapter.owner = THIS_MODULE;
+ priv->adapter.class = I2C_CLASS_HWMON;
++ priv->adapter.quirks = &osif_quirks;
+ priv->adapter.algo = &osif_algorithm;
+ priv->adapter.algo_data = priv;
+ snprintf(priv->adapter.name, sizeof(priv->adapter.name),
+diff --git a/drivers/i2c/busses/i2c-tiny-usb.c b/drivers/i2c/busses/i2c-tiny-usb.c
+index d1fa9ff5aeab48..204cc0883da641 100644
+--- a/drivers/i2c/busses/i2c-tiny-usb.c
++++ b/drivers/i2c/busses/i2c-tiny-usb.c
+@@ -140,6 +140,11 @@ static u32 usb_func(struct i2c_adapter *adapter)
+ return ret;
+ }
+
++/* prevent invalid 0-length usb_control_msg */
++static const struct i2c_adapter_quirks usb_quirks = {
++ .flags = I2C_AQ_NO_ZERO_LEN_READ,
++};
++
+ /* This is the actual algorithm we define */
+ static const struct i2c_algorithm usb_algorithm = {
+ .master_xfer = usb_xfer,
+@@ -244,6 +249,7 @@ static int i2c_tiny_usb_probe(struct usb_interface *interface,
+ /* setup i2c adapter description */
+ dev->adapter.owner = THIS_MODULE;
+ dev->adapter.class = I2C_CLASS_HWMON;
++ dev->adapter.quirks = &usb_quirks;
+ dev->adapter.algo = &usb_algorithm;
+ dev->adapter.algo_data = dev;
+ snprintf(dev->adapter.name, sizeof(dev->adapter.name),
+diff --git a/drivers/iio/pressure/zpa2326.c b/drivers/iio/pressure/zpa2326.c
+index b8bc2c67462d7c..00791bc65b7006 100644
+--- a/drivers/iio/pressure/zpa2326.c
++++ b/drivers/iio/pressure/zpa2326.c
+@@ -582,7 +582,7 @@ static int zpa2326_fill_sample_buffer(struct iio_dev *indio_dev,
+ struct {
+ u32 pressure;
+ u16 temperature;
+- u64 timestamp;
++ aligned_s64 timestamp;
+ } sample;
+ int err;
+
+diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
+index 44362f693df9f7..ce41f235af253c 100644
+--- a/drivers/infiniband/core/iwcm.c
++++ b/drivers/infiniband/core/iwcm.c
+@@ -211,8 +211,7 @@ static void free_cm_id(struct iwcm_id_private *cm_id_priv)
+ */
+ static int iwcm_deref_id(struct iwcm_id_private *cm_id_priv)
+ {
+- BUG_ON(atomic_read(&cm_id_priv->refcount)==0);
+- if (atomic_dec_and_test(&cm_id_priv->refcount)) {
++ if (refcount_dec_and_test(&cm_id_priv->refcount)) {
+ BUG_ON(!list_empty(&cm_id_priv->work_list));
+ free_cm_id(cm_id_priv);
+ return 1;
+@@ -225,7 +224,7 @@ static void add_ref(struct iw_cm_id *cm_id)
+ {
+ struct iwcm_id_private *cm_id_priv;
+ cm_id_priv = container_of(cm_id, struct iwcm_id_private, id);
+- atomic_inc(&cm_id_priv->refcount);
++ refcount_inc(&cm_id_priv->refcount);
+ }
+
+ static void rem_ref(struct iw_cm_id *cm_id)
+@@ -257,7 +256,7 @@ struct iw_cm_id *iw_create_cm_id(struct ib_device *device,
+ cm_id_priv->id.add_ref = add_ref;
+ cm_id_priv->id.rem_ref = rem_ref;
+ spin_lock_init(&cm_id_priv->lock);
+- atomic_set(&cm_id_priv->refcount, 1);
++ refcount_set(&cm_id_priv->refcount, 1);
+ init_waitqueue_head(&cm_id_priv->connect_wait);
+ init_completion(&cm_id_priv->destroy_comp);
+ INIT_LIST_HEAD(&cm_id_priv->work_list);
+@@ -368,12 +367,9 @@ EXPORT_SYMBOL(iw_cm_disconnect);
+ /*
+ * CM_ID <-- DESTROYING
+ *
+- * Clean up all resources associated with the connection and release
+- * the initial reference taken by iw_create_cm_id.
+- *
+- * Returns true if and only if the last cm_id_priv reference has been dropped.
++ * Clean up all resources associated with the connection.
+ */
+-static bool destroy_cm_id(struct iw_cm_id *cm_id)
++static void destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+ struct iwcm_id_private *cm_id_priv;
+ struct ib_qp *qp;
+@@ -442,20 +438,22 @@ static bool destroy_cm_id(struct iw_cm_id *cm_id)
+ iwpm_remove_mapinfo(&cm_id->local_addr, &cm_id->m_local_addr);
+ iwpm_remove_mapping(&cm_id->local_addr, RDMA_NL_IWCM);
+ }
+-
+- return iwcm_deref_id(cm_id_priv);
+ }
+
+ /*
+- * This function is only called by the application thread and cannot
+- * be called by the event thread. The function will wait for all
+- * references to be released on the cm_id and then kfree the cm_id
+- * object.
++ * Destroy cm_id. If the cm_id still has other references, wait for all
++ * references to be released on the cm_id and then release the initial
++ * reference taken by iw_create_cm_id.
+ */
+ void iw_destroy_cm_id(struct iw_cm_id *cm_id)
+ {
+- if (!destroy_cm_id(cm_id))
++ struct iwcm_id_private *cm_id_priv;
++
++ cm_id_priv = container_of(cm_id, struct iwcm_id_private, id);
++ destroy_cm_id(cm_id);
++ if (refcount_read(&cm_id_priv->refcount) > 1)
+ flush_workqueue(iwcm_wq);
++ iwcm_deref_id(cm_id_priv);
+ }
+ EXPORT_SYMBOL(iw_destroy_cm_id);
+
+@@ -1038,8 +1036,10 @@ static void cm_work_handler(struct work_struct *_work)
+
+ if (!test_bit(IWCM_F_DROP_EVENTS, &cm_id_priv->flags)) {
+ ret = process_event(cm_id_priv, &levent);
+- if (ret)
+- WARN_ON_ONCE(destroy_cm_id(&cm_id_priv->id));
++ if (ret) {
++ destroy_cm_id(&cm_id_priv->id);
++ WARN_ON_ONCE(iwcm_deref_id(cm_id_priv));
++ }
+ } else
+ pr_debug("dropping event %d\n", levent.event);
+ if (iwcm_deref_id(cm_id_priv))
+@@ -1097,7 +1097,7 @@ static int cm_event_handler(struct iw_cm_id *cm_id,
+ }
+ }
+
+- atomic_inc(&cm_id_priv->refcount);
++ refcount_inc(&cm_id_priv->refcount);
+ if (list_empty(&cm_id_priv->work_list)) {
+ list_add_tail(&work->list, &cm_id_priv->work_list);
+ queue_work(iwcm_wq, &work->work);
+diff --git a/drivers/infiniband/core/iwcm.h b/drivers/infiniband/core/iwcm.h
+index 82c2cd1b0a8043..bf74639be1287c 100644
+--- a/drivers/infiniband/core/iwcm.h
++++ b/drivers/infiniband/core/iwcm.h
+@@ -52,7 +52,7 @@ struct iwcm_id_private {
+ wait_queue_head_t connect_wait;
+ struct list_head work_list;
+ spinlock_t lock;
+- atomic_t refcount;
++ refcount_t refcount;
+ struct list_head work_free_list;
+ };
+
+diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c
+index f6bae1f7545b5b..33636268d43d97 100644
+--- a/drivers/infiniband/hw/mlx5/counters.c
++++ b/drivers/infiniband/hw/mlx5/counters.c
+@@ -279,7 +279,7 @@ static int mlx5_ib_get_hw_stats(struct ib_device *ibdev,
+ */
+ goto done;
+ }
+- ret = mlx5_lag_query_cong_counters(dev->mdev,
++ ret = mlx5_lag_query_cong_counters(mdev,
+ stats->value +
+ cnts->num_q_counters,
+ cnts->num_cong_counters,
+diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c
+index f67ebd9f3cdd19..301c061bb31902 100644
+--- a/drivers/infiniband/hw/mlx5/devx.c
++++ b/drivers/infiniband/hw/mlx5/devx.c
+@@ -1809,6 +1809,7 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ /* Level1 is valid for future use, no need to free */
+ return -ENOMEM;
+
++ INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ err = xa_insert(&event->object_ids,
+ key_level2,
+ obj_event,
+@@ -1817,7 +1818,6 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table,
+ kfree(obj_event);
+ return err;
+ }
+- INIT_LIST_HEAD(&obj_event->obj_sub_list);
+ }
+
+ return 0;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 1800cea46b2d34..0e20b99cae8b6f 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1667,6 +1667,33 @@ static void deallocate_uars(struct mlx5_ib_dev *dev,
+ mlx5_cmd_free_uar(dev->mdev, bfregi->sys_pages[i]);
+ }
+
++static int mlx5_ib_enable_lb_mp(struct mlx5_core_dev *master,
++ struct mlx5_core_dev *slave)
++{
++ int err;
++
++ err = mlx5_nic_vport_update_local_lb(master, true);
++ if (err)
++ return err;
++
++ err = mlx5_nic_vport_update_local_lb(slave, true);
++ if (err)
++ goto out;
++
++ return 0;
++
++out:
++ mlx5_nic_vport_update_local_lb(master, false);
++ return err;
++}
++
++static void mlx5_ib_disable_lb_mp(struct mlx5_core_dev *master,
++ struct mlx5_core_dev *slave)
++{
++ mlx5_nic_vport_update_local_lb(slave, false);
++ mlx5_nic_vport_update_local_lb(master, false);
++}
++
+ int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev, bool td, bool qp)
+ {
+ int err = 0;
+@@ -3424,6 +3451,8 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
+
+ lockdep_assert_held(&mlx5_ib_multiport_mutex);
+
++ mlx5_ib_disable_lb_mp(ibdev->mdev, mpi->mdev);
++
+ mlx5_ib_cleanup_cong_debugfs(ibdev, port_num);
+
+ spin_lock(&port->mp.mpi_lock);
+@@ -3512,6 +3541,10 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev,
+
+ mlx5_ib_init_cong_debugfs(ibdev, port_num);
+
++ err = mlx5_ib_enable_lb_mp(ibdev->mdev, mpi->mdev);
++ if (err)
++ goto unbind;
++
+ return true;
+
+ unbind:
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 00b973e0f79ffe..a0362201b5d35b 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -147,6 +147,7 @@ static const struct xpad_device {
+ { 0x05fd, 0x107a, "InterAct 'PowerPad Pro' X-Box pad (Germany)", 0, XTYPE_XBOX },
+ { 0x05fe, 0x3030, "Chic Controller", 0, XTYPE_XBOX },
+ { 0x05fe, 0x3031, "Chic Controller", 0, XTYPE_XBOX },
++ { 0x0502, 0x1305, "Acer NGR200", 0, XTYPE_XBOX },
+ { 0x062a, 0x0020, "Logic3 Xbox GamePad", 0, XTYPE_XBOX },
+ { 0x062a, 0x0033, "Competition Pro Steering Wheel", 0, XTYPE_XBOX },
+ { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
+@@ -275,6 +276,7 @@ static const struct xpad_device {
+ { 0x1689, 0xfd00, "Razer Onza Tournament Edition", 0, XTYPE_XBOX360 },
+ { 0x1689, 0xfd01, "Razer Onza Classic Edition", 0, XTYPE_XBOX360 },
+ { 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 },
++ { 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x1bad, 0x0130, "Ion Drum Rocker", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+@@ -439,6 +441,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x045e), /* Microsoft X-Box 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft X-Box One controllers */
+ XPAD_XBOX360_VENDOR(0x046d), /* Logitech X-Box 360 style controllers */
++ XPAD_XBOX360_VENDOR(0x0502), /* Acer Inc. Xbox 360 style controllers */
+ XPAD_XBOX360_VENDOR(0x056e), /* Elecom JC-U3613M */
+ XPAD_XBOX360_VENDOR(0x06a3), /* Saitek P3600 */
+ XPAD_XBOX360_VENDOR(0x0738), /* Mad Catz X-Box 360 controllers */
+@@ -451,6 +454,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x0f0d), /* Hori Controllers */
+ XPAD_XBOXONE_VENDOR(0x0f0d), /* Hori Controllers */
+ XPAD_XBOX360_VENDOR(0x1038), /* SteelSeries Controllers */
++ XPAD_XBOXONE_VENDOR(0x10f5), /* Turtle Beach Controllers */
+ XPAD_XBOX360_VENDOR(0x11c9), /* Nacon GC100XF */
+ XPAD_XBOX360_VENDOR(0x11ff), /* PXN V900 */
+ XPAD_XBOX360_VENDOR(0x1209), /* Ardwiino Controllers */
+@@ -462,6 +466,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x15e4), /* Numark X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x162e), /* Joytech X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */
++ XPAD_XBOX360_VENDOR(0x1949), /* Amazon controllers */
+ XPAD_XBOX360_VENDOR(0x1bad), /* Harminix Rock Band Guitar and Drums */
+ XPAD_XBOX360_VENDOR(0x20d6), /* PowerA Controllers */
+ XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA Controllers */
+diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
+index d4c8275d49c372..26526924ffd2fc 100644
+--- a/drivers/input/keyboard/atkbd.c
++++ b/drivers/input/keyboard/atkbd.c
+@@ -817,7 +817,7 @@ static int atkbd_probe(struct atkbd *atkbd)
+
+ if (atkbd_skip_getid(atkbd)) {
+ atkbd->id = 0xab83;
+- return 0;
++ goto deactivate_kbd;
+ }
+
+ /*
+@@ -854,6 +854,7 @@ static int atkbd_probe(struct atkbd *atkbd)
+ return -1;
+ }
+
++deactivate_kbd:
+ /*
+ * Make sure nothing is coming from the keyboard and disturbs our
+ * internal state.
+diff --git a/drivers/leds/led-class-multicolor.c b/drivers/leds/led-class-multicolor.c
+index e317408583df9f..5b1479b5d32ca1 100644
+--- a/drivers/leds/led-class-multicolor.c
++++ b/drivers/leds/led-class-multicolor.c
+@@ -59,7 +59,8 @@ static ssize_t multi_intensity_store(struct device *dev,
+ for (i = 0; i < mcled_cdev->num_colors; i++)
+ mcled_cdev->subled_info[i].intensity = intensity_value[i];
+
+- led_set_brightness(led_cdev, led_cdev->brightness);
++ if (!test_bit(LED_BLINK_SW, &led_cdev->work_flags))
++ led_set_brightness(led_cdev, led_cdev->brightness);
+ ret = size;
+ err_out:
+ mutex_unlock(&led_cdev->led_access);
+diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c
+index 6f54501dc7762a..cb31ad917b352f 100644
+--- a/drivers/mailbox/mailbox.c
++++ b/drivers/mailbox/mailbox.c
+@@ -459,8 +459,8 @@ void mbox_free_channel(struct mbox_chan *chan)
+ if (chan->txdone_method == TXDONE_BY_ACK)
+ chan->txdone_method = TXDONE_BY_POLL;
+
+- module_put(chan->mbox->dev->driver->owner);
+ spin_unlock_irqrestore(&chan->lock, flags);
++ module_put(chan->mbox->dev->driver->owner);
+ }
+ EXPORT_SYMBOL_GPL(mbox_free_channel);
+
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 85569bd253b2c0..a80de1cfbbd071 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1765,7 +1765,12 @@ static void cache_set_flush(struct closure *cl)
+ mutex_unlock(&b->write_lock);
+ }
+
+- if (ca->alloc_thread)
++ /*
++ * If the register_cache_set() call to bch_cache_set_alloc() failed,
++ * ca has not been assigned a value and return error.
++ * So we need check ca is not NULL during bch_cache_set_unregister().
++ */
++ if (ca && ca->alloc_thread)
+ kthread_stop(ca->alloc_thread);
+
+ if (c->journal.cur) {
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 99995b1804b324..3c0960f294fb5e 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -2381,7 +2381,7 @@ static int super_init_validation(struct raid_set *rs, struct md_rdev *rdev)
+ */
+ sb_retrieve_failed_devices(sb, failed_devices);
+ rdev_for_each(r, mddev) {
+- if (test_bit(Journal, &rdev->flags) ||
++ if (test_bit(Journal, &r->flags) ||
+ !r->sb_page)
+ continue;
+ sb2 = page_address(r->sb_page);
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 91bc764a854c67..f2ba541ed89d4d 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -546,7 +546,7 @@ static int md_bitmap_new_disk_sb(struct bitmap *bitmap)
+ * is a good choice? We choose COUNTER_MAX / 2 arbitrarily.
+ */
+ write_behind = bitmap->mddev->bitmap_info.max_write_behind;
+- if (write_behind > COUNTER_MAX)
++ if (write_behind > COUNTER_MAX / 2)
+ write_behind = COUNTER_MAX / 2;
+ sb->write_behind = cpu_to_le32(write_behind);
+ bitmap->mddev->bitmap_info.max_write_behind = write_behind;
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index dada9b2258a612..51e05ea3f1373e 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -3290,6 +3290,7 @@ static int raid1_reshape(struct mddev *mddev)
+ /* ok, everything is stopped */
+ oldpool = conf->r1bio_pool;
+ conf->r1bio_pool = newpool;
++ init_waitqueue_head(&conf->r1bio_pool.wait);
+
+ for (d = d2 = 0; d < conf->raid_disks; d++) {
+ struct md_rdev *rdev = conf->mirrors[d].rdev;
+diff --git a/drivers/media/platform/omap3isp/ispccdc.c b/drivers/media/platform/omap3isp/ispccdc.c
+index 0fbb2aa6dd2c02..6f46e239895323 100644
+--- a/drivers/media/platform/omap3isp/ispccdc.c
++++ b/drivers/media/platform/omap3isp/ispccdc.c
+@@ -446,8 +446,8 @@ static int ccdc_lsc_config(struct isp_ccdc_device *ccdc,
+ if (ret < 0)
+ goto done;
+
+- dma_sync_sg_for_cpu(isp->dev, req->table.sgt.sgl,
+- req->table.sgt.nents, DMA_TO_DEVICE);
++ dma_sync_sgtable_for_cpu(isp->dev, &req->table.sgt,
++ DMA_TO_DEVICE);
+
+ if (copy_from_user(req->table.addr, config->lsc,
+ req->config.size)) {
+@@ -455,8 +455,8 @@ static int ccdc_lsc_config(struct isp_ccdc_device *ccdc,
+ goto done;
+ }
+
+- dma_sync_sg_for_device(isp->dev, req->table.sgt.sgl,
+- req->table.sgt.nents, DMA_TO_DEVICE);
++ dma_sync_sgtable_for_device(isp->dev, &req->table.sgt,
++ DMA_TO_DEVICE);
+ }
+
+ spin_lock_irqsave(&ccdc->lsc.req_lock, flags);
+diff --git a/drivers/media/platform/omap3isp/ispstat.c b/drivers/media/platform/omap3isp/ispstat.c
+index 5b9b57f4d9bf83..e8a1837b1b74f3 100644
+--- a/drivers/media/platform/omap3isp/ispstat.c
++++ b/drivers/media/platform/omap3isp/ispstat.c
+@@ -161,8 +161,7 @@ static void isp_stat_buf_sync_for_device(struct ispstat *stat,
+ if (ISP_STAT_USES_DMAENGINE(stat))
+ return;
+
+- dma_sync_sg_for_device(stat->isp->dev, buf->sgt.sgl,
+- buf->sgt.nents, DMA_FROM_DEVICE);
++ dma_sync_sgtable_for_device(stat->isp->dev, &buf->sgt, DMA_FROM_DEVICE);
+ }
+
+ static void isp_stat_buf_sync_for_cpu(struct ispstat *stat,
+@@ -171,8 +170,7 @@ static void isp_stat_buf_sync_for_cpu(struct ispstat *stat,
+ if (ISP_STAT_USES_DMAENGINE(stat))
+ return;
+
+- dma_sync_sg_for_cpu(stat->isp->dev, buf->sgt.sgl,
+- buf->sgt.nents, DMA_FROM_DEVICE);
++ dma_sync_sgtable_for_cpu(stat->isp->dev, &buf->sgt, DMA_FROM_DEVICE);
+ }
+
+ static void isp_stat_buf_clear(struct ispstat *stat)
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 2b4ddfb8a29128..4f67ba3a7c028e 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1429,7 +1429,9 @@ static bool uvc_ctrl_xctrls_has_control(const struct v4l2_ext_control *xctrls,
+ }
+
+ static void uvc_ctrl_send_events(struct uvc_fh *handle,
+- const struct v4l2_ext_control *xctrls, unsigned int xctrls_count)
++ struct uvc_entity *entity,
++ const struct v4l2_ext_control *xctrls,
++ unsigned int xctrls_count)
+ {
+ struct uvc_control_mapping *mapping;
+ struct uvc_control *ctrl;
+@@ -1440,6 +1442,9 @@ static void uvc_ctrl_send_events(struct uvc_fh *handle,
+ u32 changes = V4L2_EVENT_CTRL_CH_VALUE;
+
+ ctrl = uvc_find_control(handle->chain, xctrls[i].id, &mapping);
++ if (ctrl->entity != entity)
++ continue;
++
+ if (ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+ /* Notification will be sent from an Interrupt event. */
+ continue;
+@@ -1560,14 +1565,19 @@ int uvc_ctrl_begin(struct uvc_video_chain *chain)
+ return mutex_lock_interruptible(&chain->ctrl_mutex) ? -ERESTARTSYS : 0;
+ }
+
++/*
++ * Returns the number of uvc controls that have been correctly set, or a
++ * negative number if there has been an error.
++ */
+ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ struct uvc_fh *handle,
+ struct uvc_entity *entity,
+ int rollback)
+ {
++ unsigned int processed_ctrls = 0;
+ struct uvc_control *ctrl;
+ unsigned int i;
+- int ret;
++ int ret = 0;
+
+ if (entity == NULL)
+ return 0;
+@@ -1595,8 +1605,9 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ dev->intfnum, ctrl->info.selector,
+ uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+ ctrl->info.size);
+- else
+- ret = 0;
++
++ if (!ret)
++ processed_ctrls++;
+
+ if (rollback || ret < 0)
+ memcpy(uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT),
+@@ -1605,15 +1616,23 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+
+ ctrl->dirty = 0;
+
+- if (ret < 0)
+- return ret;
+-
+- if (!rollback && handle &&
++ if (!rollback && handle && !ret &&
+ ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+ uvc_ctrl_set_handle(handle, ctrl, handle);
++
++ if (ret < 0 && !rollback) {
++ /*
++ * If we fail to set a control, we need to rollback
++ * the next ones.
++ */
++ rollback = 1;
++ }
+ }
+
+- return 0;
++ if (ret)
++ return ret;
++
++ return processed_ctrls;
+ }
+
+ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
+@@ -1622,21 +1641,31 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
+ {
+ struct uvc_video_chain *chain = handle->chain;
+ struct uvc_entity *entity;
+- int ret = 0;
++ int ret_out = 0;
++ int ret;
+
+ /* Find the control. */
+ list_for_each_entry(entity, &chain->entities, chain) {
+ ret = uvc_ctrl_commit_entity(chain->dev, handle, entity,
+ rollback);
+- if (ret < 0)
+- goto done;
++ if (ret < 0) {
++ /*
++ * When we fail to commit an entity, we need to
++ * restore the UVC_CTRL_DATA_BACKUP for all the
++ * controls in the other entities, otherwise our cache
++ * and the hardware will be out of sync.
++ */
++ rollback = 1;
++
++ ret_out = ret;
++ } else if (ret > 0 && !rollback) {
++ uvc_ctrl_send_events(handle, entity, xctrls,
++ xctrls_count);
++ }
+ }
+
+- if (!rollback)
+- uvc_ctrl_send_events(handle, xctrls, xctrls_count);
+-done:
+ mutex_unlock(&chain->ctrl_mutex);
+- return ret;
++ return ret_out;
+ }
+
+ int uvc_ctrl_get(struct uvc_video_chain *chain,
+diff --git a/drivers/mfd/max14577.c b/drivers/mfd/max14577.c
+index be185e9d5f16b1..c9e56145b08bd5 100644
+--- a/drivers/mfd/max14577.c
++++ b/drivers/mfd/max14577.c
+@@ -467,6 +467,7 @@ static int max14577_i2c_remove(struct i2c_client *i2c)
+ {
+ struct max14577 *max14577 = i2c_get_clientdata(i2c);
+
++ device_init_wakeup(max14577->dev, false);
+ mfd_remove_devices(max14577->dev);
+ regmap_del_irq_chip(max14577->irq, max14577->irq_data);
+ if (max14577->dev_type == MAXIM_DEVICE_TYPE_MAX77836)
+diff --git a/drivers/misc/vmw_vmci/vmci_host.c b/drivers/misc/vmw_vmci/vmci_host.c
+index 4a903770b8e1d5..e7965ee6bdba9a 100644
+--- a/drivers/misc/vmw_vmci/vmci_host.c
++++ b/drivers/misc/vmw_vmci/vmci_host.c
+@@ -227,6 +227,7 @@ static int drv_cp_harray_to_user(void __user *user_buf_uva,
+ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ unsigned long uva)
+ {
++ struct page *page;
+ int retval;
+
+ if (context->notify_page) {
+@@ -243,11 +244,11 @@ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ /*
+ * Lock physical page backing a given user VA.
+ */
+- retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &context->notify_page);
+- if (retval != 1) {
+- context->notify_page = NULL;
++ retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &page);
++ if (retval != 1)
+ return VMCI_ERROR_GENERIC;
+- }
++
++ context->notify_page = page;
+
+ /*
+ * Map the locked page and set up notify pointer.
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 2c9ea5ed0b2fcb..8d0f888b219ac3 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -708,21 +708,23 @@ static inline void msdc_dma_setup(struct msdc_host *host, struct msdc_dma *dma,
+ writel(lower_32_bits(dma->gpd_addr), host->base + MSDC_DMA_SA);
+ }
+
+-static void msdc_prepare_data(struct msdc_host *host, struct mmc_request *mrq)
++static void msdc_prepare_data(struct msdc_host *host, struct mmc_data *data)
+ {
+- struct mmc_data *data = mrq->data;
+-
+ if (!(data->host_cookie & MSDC_PREPARE_FLAG)) {
+- data->host_cookie |= MSDC_PREPARE_FLAG;
+ data->sg_count = dma_map_sg(host->dev, data->sg, data->sg_len,
+ mmc_get_dma_dir(data));
++ if (data->sg_count)
++ data->host_cookie |= MSDC_PREPARE_FLAG;
+ }
+ }
+
+-static void msdc_unprepare_data(struct msdc_host *host, struct mmc_request *mrq)
++static bool msdc_data_prepared(struct mmc_data *data)
+ {
+- struct mmc_data *data = mrq->data;
++ return data->host_cookie & MSDC_PREPARE_FLAG;
++}
+
++static void msdc_unprepare_data(struct msdc_host *host, struct mmc_data *data)
++{
+ if (data->host_cookie & MSDC_ASYNC_FLAG)
+ return;
+
+@@ -1115,7 +1117,7 @@ static void msdc_request_done(struct msdc_host *host, struct mmc_request *mrq)
+
+ msdc_track_cmd_data(host, mrq->cmd, mrq->data);
+ if (mrq->data)
+- msdc_unprepare_data(host, mrq);
++ msdc_unprepare_data(host, mrq->data);
+ if (host->error)
+ msdc_reset_hw(host);
+ mmc_request_done(mmc_from_priv(host), mrq);
+@@ -1285,8 +1287,19 @@ static void msdc_ops_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ WARN_ON(host->mrq);
+ host->mrq = mrq;
+
+- if (mrq->data)
+- msdc_prepare_data(host, mrq);
++ if (mrq->data) {
++ msdc_prepare_data(host, mrq->data);
++ if (!msdc_data_prepared(mrq->data)) {
++ host->mrq = NULL;
++ /*
++ * Failed to prepare DMA area, fail fast before
++ * starting any commands.
++ */
++ mrq->cmd->error = -ENOSPC;
++ mmc_request_done(mmc_from_priv(host), mrq);
++ return;
++ }
++ }
+
+ /* if SBC is required, we have HW option and SW option.
+ * if HW option is enabled, and SBC does not have "special" flags,
+@@ -1307,7 +1320,7 @@ static void msdc_pre_req(struct mmc_host *mmc, struct mmc_request *mrq)
+ if (!data)
+ return;
+
+- msdc_prepare_data(host, mrq);
++ msdc_prepare_data(host, data);
+ data->host_cookie |= MSDC_ASYNC_FLAG;
+ }
+
+@@ -1315,14 +1328,14 @@ static void msdc_post_req(struct mmc_host *mmc, struct mmc_request *mrq,
+ int err)
+ {
+ struct msdc_host *host = mmc_priv(mmc);
+- struct mmc_data *data;
++ struct mmc_data *data = mrq->data;
+
+- data = mrq->data;
+ if (!data)
+ return;
++
+ if (data->host_cookie) {
+ data->host_cookie &= ~MSDC_ASYNC_FLAG;
+- msdc_unprepare_data(host, mrq);
++ msdc_unprepare_data(host, data);
+ }
+ }
+
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 3d601a3f31c1ec..9091930f585916 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -2010,15 +2010,10 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+
+ host->mmc->actual_clock = 0;
+
+- clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
+- if (clk & SDHCI_CLOCK_CARD_EN)
+- sdhci_writew(host, clk & ~SDHCI_CLOCK_CARD_EN,
+- SDHCI_CLOCK_CONTROL);
++ sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+
+- if (clock == 0) {
+- sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
++ if (clock == 0)
+ return;
+- }
+
+ clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+ sdhci_enable_clk(host, clk);
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 4db57c3a8cd4b2..a188bf241a117a 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -813,4 +813,20 @@ void sdhci_switch_external_dma(struct sdhci_host *host, bool en);
+ void sdhci_set_data_timeout_irq(struct sdhci_host *host, bool enable);
+ void __sdhci_set_timeout(struct sdhci_host *host, struct mmc_command *cmd);
+
++#if defined(CONFIG_DYNAMIC_DEBUG) || \
++ (defined(CONFIG_DYNAMIC_DEBUG_CORE) && defined(DYNAMIC_DEBUG_MODULE))
++#define SDHCI_DBG_ANYWAY 0
++#elif defined(DEBUG)
++#define SDHCI_DBG_ANYWAY 1
++#else
++#define SDHCI_DBG_ANYWAY 0
++#endif
++
++#define sdhci_dbg_dumpregs(host, fmt) \
++do { \
++ DEFINE_DYNAMIC_DEBUG_METADATA(descriptor, fmt); \
++ if (DYNAMIC_DEBUG_BRANCH(descriptor) || SDHCI_DBG_ANYWAY) \
++ sdhci_dumpregs(host); \
++} while (0)
++
+ #endif /* __SDHCI_HW_H */
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 6181ac277b62ff..1c8a7c65530fdb 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -522,7 +522,7 @@ static int m_can_handle_lost_msg(struct net_device *dev)
+ struct sk_buff *skb;
+ struct can_frame *frame;
+
+- netdev_err(dev, "msg lost in rxf0\n");
++ netdev_dbg(dev, "msg lost in rxf0\n");
+
+ stats->rx_errors++;
+ stats->rx_over_errors++;
+diff --git a/drivers/net/can/m_can/tcan4x5x.c b/drivers/net/can/m_can/tcan4x5x.c
+index f903f78af087aa..4bdea945c48623 100644
+--- a/drivers/net/can/m_can/tcan4x5x.c
++++ b/drivers/net/can/m_can/tcan4x5x.c
+@@ -417,10 +417,11 @@ static int tcan4x5x_can_probe(struct spi_device *spi)
+ }
+
+ priv->power = devm_regulator_get_optional(&spi->dev, "vsup");
+- if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
+- ret = -EPROBE_DEFER;
+- goto out_m_can_class_free_dev;
+- } else {
++ if (IS_ERR(priv->power)) {
++ if (PTR_ERR(priv->power) == -EPROBE_DEFER) {
++ ret = -EPROBE_DEFER;
++ goto out_m_can_class_free_dev;
++ }
+ priv->power = NULL;
+ }
+
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+index 533b8519ec3528..c5dc23906a78d0 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+@@ -1355,6 +1355,8 @@
+ #define MDIO_VEND2_CTRL1_SS13 BIT(13)
+ #endif
+
++#define XGBE_VEND2_MAC_AUTO_SW BIT(9)
++
+ /* MDIO mask values */
+ #define XGBE_AN_CL73_INT_CMPLT BIT(0)
+ #define XGBE_AN_CL73_INC_LINK BIT(1)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+index 60be836b294bbe..19fed56b6ee3fd 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c
+@@ -363,6 +363,10 @@ static void xgbe_an37_set(struct xgbe_prv_data *pdata, bool enable,
+ reg |= MDIO_VEND2_CTRL1_AN_RESTART;
+
+ XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_CTRL1, reg);
++
++ reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL);
++ reg |= XGBE_VEND2_MAC_AUTO_SW;
++ XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL, reg);
+ }
+
+ static void xgbe_an37_restart(struct xgbe_prv_data *pdata)
+@@ -991,6 +995,11 @@ static void xgbe_an37_init(struct xgbe_prv_data *pdata)
+
+ netif_dbg(pdata, link, pdata->netdev, "CL37 AN (%s) initialized\n",
+ (pdata->an_mode == XGBE_AN_MODE_CL37) ? "BaseX" : "SGMII");
++
++ reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1);
++ reg &= ~MDIO_AN_CTRL1_ENABLE;
++ XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_CTRL1, reg);
++
+ }
+
+ static void xgbe_an73_init(struct xgbe_prv_data *pdata)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h
+index 0493de8ee545ab..61f22462197aed 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe.h
+@@ -291,11 +291,11 @@
+ #define XGBE_LINK_TIMEOUT 5
+ #define XGBE_KR_TRAINING_WAIT_ITER 50
+
+-#define XGBE_SGMII_AN_LINK_STATUS BIT(1)
++#define XGBE_SGMII_AN_LINK_DUPLEX BIT(1)
+ #define XGBE_SGMII_AN_LINK_SPEED (BIT(2) | BIT(3))
+ #define XGBE_SGMII_AN_LINK_SPEED_100 0x04
+ #define XGBE_SGMII_AN_LINK_SPEED_1000 0x08
+-#define XGBE_SGMII_AN_LINK_DUPLEX BIT(4)
++#define XGBE_SGMII_AN_LINK_STATUS BIT(4)
+
+ /* ECC correctable error notification window (seconds) */
+ #define XGBE_ECC_LIMIT 60
+diff --git a/drivers/net/ethernet/atheros/atlx/atl1.c b/drivers/net/ethernet/atheros/atlx/atl1.c
+index eaf96d002fa50d..2e950313f427a6 100644
+--- a/drivers/net/ethernet/atheros/atlx/atl1.c
++++ b/drivers/net/ethernet/atheros/atlx/atl1.c
+@@ -1861,14 +1861,21 @@ static u16 atl1_alloc_rx_buffers(struct atl1_adapter *adapter)
+ break;
+ }
+
+- buffer_info->alloced = 1;
+- buffer_info->skb = skb;
+- buffer_info->length = (u16) adapter->rx_buffer_len;
+ page = virt_to_page(skb->data);
+ offset = offset_in_page(skb->data);
+ buffer_info->dma = dma_map_page(&pdev->dev, page, offset,
+ adapter->rx_buffer_len,
+ DMA_FROM_DEVICE);
++ if (dma_mapping_error(&pdev->dev, buffer_info->dma)) {
++ kfree_skb(skb);
++ adapter->soft_stats.rx_dropped++;
++ break;
++ }
++
++ buffer_info->alloced = 1;
++ buffer_info->skb = skb;
++ buffer_info->length = (u16)adapter->rx_buffer_len;
++
+ rfd_desc->buffer_addr = cpu_to_le64(buffer_info->dma);
+ rfd_desc->buf_len = cpu_to_le16(adapter->rx_buffer_len);
+ rfd_desc->coalese = 0;
+@@ -2180,8 +2187,8 @@ static int atl1_tx_csum(struct atl1_adapter *adapter, struct sk_buff *skb,
+ return 0;
+ }
+
+-static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+- struct tx_packet_desc *ptpd)
++static bool atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
++ struct tx_packet_desc *ptpd)
+ {
+ struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
+ struct atl1_buffer *buffer_info;
+@@ -2191,6 +2198,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ unsigned int nr_frags;
+ unsigned int f;
+ int retval;
++ u16 first_mapped;
+ u16 next_to_use;
+ u16 data_len;
+ u8 hdr_len;
+@@ -2198,6 +2206,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buf_len -= skb->data_len;
+ nr_frags = skb_shinfo(skb)->nr_frags;
+ next_to_use = atomic_read(&tpd_ring->next_to_use);
++ first_mapped = next_to_use;
+ buffer_info = &tpd_ring->buffer_info[next_to_use];
+ BUG_ON(buffer_info->skb);
+ /* put skb in last TPD */
+@@ -2213,6 +2222,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buffer_info->dma = dma_map_page(&adapter->pdev->dev, page,
+ offset, hdr_len,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma))
++ goto dma_err;
+
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+@@ -2239,6 +2250,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ page, offset,
+ buffer_info->length,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev,
++ buffer_info->dma))
++ goto dma_err;
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+ }
+@@ -2251,6 +2265,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buffer_info->dma = dma_map_page(&adapter->pdev->dev, page,
+ offset, buf_len,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma))
++ goto dma_err;
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+ }
+@@ -2274,6 +2290,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+ buffer_info->dma = skb_frag_dma_map(&adapter->pdev->dev,
+ frag, i * ATL1_MAX_TX_BUF_LEN,
+ buffer_info->length, DMA_TO_DEVICE);
++ if (dma_mapping_error(&adapter->pdev->dev,
++ buffer_info->dma))
++ goto dma_err;
+
+ if (++next_to_use == tpd_ring->count)
+ next_to_use = 0;
+@@ -2282,6 +2301,22 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
+
+ /* last tpd's buffer-info */
+ buffer_info->skb = skb;
++
++ return true;
++
++ dma_err:
++ while (first_mapped != next_to_use) {
++ buffer_info = &tpd_ring->buffer_info[first_mapped];
++ dma_unmap_page(&adapter->pdev->dev,
++ buffer_info->dma,
++ buffer_info->length,
++ DMA_TO_DEVICE);
++ buffer_info->dma = 0;
++
++ if (++first_mapped == tpd_ring->count)
++ first_mapped = 0;
++ }
++ return false;
+ }
+
+ static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count,
+@@ -2352,10 +2387,8 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+
+ len = skb_headlen(skb);
+
+- if (unlikely(skb->len <= 0)) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (unlikely(skb->len <= 0))
++ goto drop_packet;
+
+ nr_frags = skb_shinfo(skb)->nr_frags;
+ for (f = 0; f < nr_frags; f++) {
+@@ -2369,10 +2402,8 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+ if (skb->protocol == htons(ETH_P_IP)) {
+ proto_hdr_len = (skb_transport_offset(skb) +
+ tcp_hdrlen(skb));
+- if (unlikely(proto_hdr_len > len)) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (unlikely(proto_hdr_len > len))
++ goto drop_packet;
+ /* need additional TPD ? */
+ if (proto_hdr_len != len)
+ count += (len - proto_hdr_len +
+@@ -2404,23 +2435,26 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb,
+ }
+
+ tso = atl1_tso(adapter, skb, ptpd);
+- if (tso < 0) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (tso < 0)
++ goto drop_packet;
+
+ if (!tso) {
+ ret_val = atl1_tx_csum(adapter, skb, ptpd);
+- if (ret_val < 0) {
+- dev_kfree_skb_any(skb);
+- return NETDEV_TX_OK;
+- }
++ if (ret_val < 0)
++ goto drop_packet;
+ }
+
+- atl1_tx_map(adapter, skb, ptpd);
++ if (!atl1_tx_map(adapter, skb, ptpd))
++ goto drop_packet;
++
+ atl1_tx_queue(adapter, count, ptpd);
+ atl1_update_mailbox(adapter);
+ return NETDEV_TX_OK;
++
++drop_packet:
++ adapter->soft_stats.tx_errors++;
++ dev_kfree_skb_any(skb);
++ return NETDEV_TX_OK;
+ }
+
+ static int atl1_rings_clean(struct napi_struct *napi, int budget)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+index 8e90224c43a214..6464de38c82e24 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_dcb.c
+@@ -447,7 +447,9 @@ static int bnxt_ets_validate(struct bnxt *bp, struct ieee_ets *ets, u8 *tc)
+
+ if ((ets->tc_tx_bw[i] || ets->tc_tsa[i]) && i > bp->max_tc)
+ return -EINVAL;
++ }
+
++ for (i = 0; i < max_tc; i++) {
+ switch (ets->tc_tsa[i]) {
+ case IEEE_8021QAZ_TSA_STRICT:
+ break;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index fcc262064766a2..dc9afaa14da8f6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -65,7 +65,7 @@ static void __bnxt_xmit_xdp_redirect(struct bnxt *bp,
+ tx_buf->action = XDP_REDIRECT;
+ tx_buf->xdpf = xdpf;
+ dma_unmap_addr_set(tx_buf, mapping, mapping);
+- dma_unmap_len_set(tx_buf, len, 0);
++ dma_unmap_len_set(tx_buf, len, len);
+ }
+
+ void bnxt_tx_int_xdp(struct bnxt *bp, struct bnxt_napi *bnapi, int nr_pkts)
+diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
+index b695f3f233286c..f59d658d624f5e 100644
+--- a/drivers/net/ethernet/cisco/enic/enic_main.c
++++ b/drivers/net/ethernet/cisco/enic/enic_main.c
+@@ -2058,10 +2058,10 @@ static int enic_change_mtu(struct net_device *netdev, int new_mtu)
+ if (enic_is_dynamic(enic) || enic_is_sriov_vf(enic))
+ return -EOPNOTSUPP;
+
+- if (netdev->mtu > enic->port_mtu)
++ if (new_mtu > enic->port_mtu)
+ netdev_warn(netdev,
+ "interface MTU (%d) set higher than port MTU (%d)\n",
+- netdev->mtu, enic->port_mtu);
++ new_mtu, enic->port_mtu);
+
+ return _enic_change_mtu(netdev, new_mtu);
+ }
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index fa202fea537f8c..776f624e3b8eed 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -34,6 +34,75 @@ MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
+ struct ptp_qoriq *dpaa2_ptp;
+ EXPORT_SYMBOL(dpaa2_ptp);
+
++static void dpaa2_eth_detect_features(struct dpaa2_eth_priv *priv)
++{
++ priv->features = 0;
++
++ if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_PTP_ONESTEP_VER_MAJOR,
++ DPNI_PTP_ONESTEP_VER_MINOR) >= 0)
++ priv->features |= DPAA2_ETH_FEATURE_ONESTEP_CFG_DIRECT;
++}
++
++static void dpaa2_update_ptp_onestep_indirect(struct dpaa2_eth_priv *priv,
++ u32 offset, u8 udp)
++{
++ struct dpni_single_step_cfg cfg;
++
++ cfg.en = 1;
++ cfg.ch_update = udp;
++ cfg.offset = offset;
++ cfg.peer_delay = 0;
++
++ if (dpni_set_single_step_cfg(priv->mc_io, 0, priv->mc_token, &cfg))
++ WARN_ONCE(1, "Failed to set single step register");
++}
++
++static void dpaa2_update_ptp_onestep_direct(struct dpaa2_eth_priv *priv,
++ u32 offset, u8 udp)
++{
++ u32 val = 0;
++
++ val = DPAA2_PTP_SINGLE_STEP_ENABLE |
++ DPAA2_PTP_SINGLE_CORRECTION_OFF(offset);
++
++ if (udp)
++ val |= DPAA2_PTP_SINGLE_STEP_CH;
++
++ if (priv->onestep_reg_base)
++ writel(val, priv->onestep_reg_base);
++}
++
++static void dpaa2_ptp_onestep_reg_update_method(struct dpaa2_eth_priv *priv)
++{
++ struct device *dev = priv->net_dev->dev.parent;
++ struct dpni_single_step_cfg ptp_cfg;
++
++ priv->dpaa2_set_onestep_params_cb = dpaa2_update_ptp_onestep_indirect;
++
++ if (!(priv->features & DPAA2_ETH_FEATURE_ONESTEP_CFG_DIRECT))
++ return;
++
++ if (dpni_get_single_step_cfg(priv->mc_io, 0,
++ priv->mc_token, &ptp_cfg)) {
++ dev_err(dev, "dpni_get_single_step_cfg cannot retrieve onestep reg, falling back to indirect update\n");
++ return;
++ }
++
++ if (!ptp_cfg.ptp_onestep_reg_base) {
++ dev_err(dev, "1588 onestep reg not available, falling back to indirect update\n");
++ return;
++ }
++
++ priv->onestep_reg_base = ioremap(ptp_cfg.ptp_onestep_reg_base,
++ sizeof(u32));
++ if (!priv->onestep_reg_base) {
++ dev_err(dev, "1588 onestep reg cannot be mapped, falling back to indirect update\n");
++ return;
++ }
++
++ priv->dpaa2_set_onestep_params_cb = dpaa2_update_ptp_onestep_direct;
++}
++
+ static void *dpaa2_iova_to_virt(struct iommu_domain *domain,
+ dma_addr_t iova_addr)
+ {
+@@ -223,31 +292,31 @@ static void dpaa2_eth_free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array,
+ }
+ }
+
+-static void dpaa2_eth_xdp_release_buf(struct dpaa2_eth_priv *priv,
+- struct dpaa2_eth_channel *ch,
+- dma_addr_t addr)
++static void dpaa2_eth_recycle_buf(struct dpaa2_eth_priv *priv,
++ struct dpaa2_eth_channel *ch,
++ dma_addr_t addr)
+ {
+ int retries = 0;
+ int err;
+
+- ch->xdp.drop_bufs[ch->xdp.drop_cnt++] = addr;
+- if (ch->xdp.drop_cnt < DPAA2_ETH_BUFS_PER_CMD)
++ ch->recycled_bufs[ch->recycled_bufs_cnt++] = addr;
++ if (ch->recycled_bufs_cnt < DPAA2_ETH_BUFS_PER_CMD)
+ return;
+
+ while ((err = dpaa2_io_service_release(ch->dpio, priv->bpid,
+- ch->xdp.drop_bufs,
+- ch->xdp.drop_cnt)) == -EBUSY) {
++ ch->recycled_bufs,
++ ch->recycled_bufs_cnt)) == -EBUSY) {
+ if (retries++ >= DPAA2_ETH_SWP_BUSY_RETRIES)
+ break;
+ cpu_relax();
+ }
+
+ if (err) {
+- dpaa2_eth_free_bufs(priv, ch->xdp.drop_bufs, ch->xdp.drop_cnt);
+- ch->buf_count -= ch->xdp.drop_cnt;
++ dpaa2_eth_free_bufs(priv, ch->recycled_bufs, ch->recycled_bufs_cnt);
++ ch->buf_count -= ch->recycled_bufs_cnt;
+ }
+
+- ch->xdp.drop_cnt = 0;
++ ch->recycled_bufs_cnt = 0;
+ }
+
+ static int dpaa2_eth_xdp_flush(struct dpaa2_eth_priv *priv,
+@@ -300,7 +369,7 @@ static void dpaa2_eth_xdp_tx_flush(struct dpaa2_eth_priv *priv,
+ ch->stats.xdp_tx++;
+ }
+ for (i = enqueued; i < fq->xdp_tx_fds.num; i++) {
+- dpaa2_eth_xdp_release_buf(priv, ch, dpaa2_fd_get_addr(&fds[i]));
++ dpaa2_eth_recycle_buf(priv, ch, dpaa2_fd_get_addr(&fds[i]));
+ percpu_stats->tx_errors++;
+ ch->stats.xdp_tx_err++;
+ }
+@@ -386,7 +455,7 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
+ trace_xdp_exception(priv->net_dev, xdp_prog, xdp_act);
+ fallthrough;
+ case XDP_DROP:
+- dpaa2_eth_xdp_release_buf(priv, ch, addr);
++ dpaa2_eth_recycle_buf(priv, ch, addr);
+ ch->stats.xdp_drop++;
+ break;
+ case XDP_REDIRECT:
+@@ -407,7 +476,7 @@ static u32 dpaa2_eth_run_xdp(struct dpaa2_eth_priv *priv,
+ free_pages((unsigned long)vaddr, 0);
+ } else {
+ ch->buf_count++;
+- dpaa2_eth_xdp_release_buf(priv, ch, addr);
++ dpaa2_eth_recycle_buf(priv, ch, addr);
+ }
+ ch->stats.xdp_drop++;
+ } else {
+@@ -668,7 +737,6 @@ static void dpaa2_eth_enable_tx_tstamp(struct dpaa2_eth_priv *priv,
+ struct sk_buff *skb)
+ {
+ struct ptp_tstamp origin_timestamp;
+- struct dpni_single_step_cfg cfg;
+ u8 msgtype, twostep, udp;
+ struct dpaa2_faead *faead;
+ struct dpaa2_fas *fas;
+@@ -722,14 +790,12 @@ static void dpaa2_eth_enable_tx_tstamp(struct dpaa2_eth_priv *priv,
+ htonl(origin_timestamp.sec_lsb);
+ *(__be32 *)(data + offset2 + 6) = htonl(origin_timestamp.nsec);
+
+- cfg.en = 1;
+- cfg.ch_update = udp;
+- cfg.offset = offset1;
+- cfg.peer_delay = 0;
++ if (priv->ptp_correction_off == offset1)
++ return;
++
++ priv->dpaa2_set_onestep_params_cb(priv, offset1, udp);
++ priv->ptp_correction_off = offset1;
+
+- if (dpni_set_single_step_cfg(priv->mc_io, 0, priv->mc_token,
+- &cfg))
+- WARN_ONCE(1, "Failed to set single step register");
+ }
+ }
+
+@@ -2112,6 +2178,9 @@ static int dpaa2_eth_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+ config.rx_filter = HWTSTAMP_FILTER_ALL;
+ }
+
++ if (priv->tx_tstamp_type == HWTSTAMP_TX_ONESTEP_SYNC)
++ dpaa2_ptp_onestep_reg_update_method(priv);
++
+ return copy_to_user(rq->ifr_data, &config, sizeof(config)) ?
+ -EFAULT : 0;
+ }
+@@ -3356,6 +3425,7 @@ static int dpaa2_eth_setup_rx_flow(struct dpaa2_eth_priv *priv,
+ MEM_TYPE_PAGE_ORDER0, NULL);
+ if (err) {
+ dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n");
++ xdp_rxq_info_unreg(&fq->channel->xdp_rxq);
+ return err;
+ }
+
+@@ -3848,17 +3918,25 @@ static int dpaa2_eth_bind_dpni(struct dpaa2_eth_priv *priv)
+ return -EINVAL;
+ }
+ if (err)
+- return err;
++ goto out;
+ }
+
+ err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token,
+ DPNI_QUEUE_TX, &priv->tx_qdid);
+ if (err) {
+ dev_err(dev, "dpni_get_qdid() failed\n");
+- return err;
++ goto out;
+ }
+
+ return 0;
++
++out:
++ while (i--) {
++ if (priv->fq[i].type == DPAA2_RX_FQ &&
++ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))
++ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);
++ }
++ return err;
+ }
+
+ /* Allocate rings for storing incoming frame descriptors */
+@@ -4009,6 +4087,8 @@ static int dpaa2_eth_netdev_init(struct net_device *net_dev)
+ return err;
+ }
+
++ dpaa2_eth_detect_features(priv);
++
+ /* Capabilities listing */
+ supported |= IFF_LIVE_ADDR_CHANGE;
+
+@@ -4193,6 +4273,17 @@ static void dpaa2_eth_del_ch_napi(struct dpaa2_eth_priv *priv)
+ }
+ }
+
++static void dpaa2_eth_free_rx_xdp_rxq(struct dpaa2_eth_priv *priv)
++{
++ int i;
++
++ for (i = 0; i < priv->num_fqs; i++) {
++ if (priv->fq[i].type == DPAA2_RX_FQ &&
++ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq))
++ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq);
++ }
++}
++
+ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ {
+ struct device *dev;
+@@ -4379,6 +4470,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+ free_percpu(priv->percpu_stats);
+ err_alloc_percpu_stats:
+ dpaa2_eth_del_ch_napi(priv);
++ dpaa2_eth_free_rx_xdp_rxq(priv);
+ err_bind:
+ dpaa2_eth_free_dpbp(priv);
+ err_dpbp_setup:
+@@ -4430,9 +4522,12 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+ free_percpu(priv->percpu_extras);
+
+ dpaa2_eth_del_ch_napi(priv);
++ dpaa2_eth_free_rx_xdp_rxq(priv);
+ dpaa2_eth_free_dpbp(priv);
+ dpaa2_eth_free_dpio(priv);
+ dpaa2_eth_free_dpni(priv);
++ if (priv->onestep_reg_base)
++ iounmap(priv->onestep_reg_base);
+
+ fsl_mc_portal_free(priv->mc_io);
+
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+index 2825f53e7e9b16..5934b1b4ee9732 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+@@ -438,8 +438,6 @@ struct dpaa2_eth_fq {
+
+ struct dpaa2_eth_ch_xdp {
+ struct bpf_prog *prog;
+- u64 drop_bufs[DPAA2_ETH_BUFS_PER_CMD];
+- int drop_cnt;
+ unsigned int res;
+ };
+
+@@ -457,6 +455,10 @@ struct dpaa2_eth_channel {
+ struct dpaa2_eth_ch_xdp xdp;
+ struct xdp_rxq_info xdp_rxq;
+ struct list_head *rx_list;
++
++ /* Buffers to be recycled back in the buffer pool */
++ u64 recycled_bufs[DPAA2_ETH_BUFS_PER_CMD];
++ int recycled_bufs_cnt;
+ };
+
+ struct dpaa2_eth_dist_fields {
+@@ -502,12 +504,15 @@ struct dpaa2_eth_priv {
+ u8 num_channels;
+ struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS];
+ struct dpaa2_eth_sgt_cache __percpu *sgt_cache;
+-
++ unsigned long features;
+ struct dpni_attr dpni_attrs;
+ u16 dpni_ver_major;
+ u16 dpni_ver_minor;
+ u16 tx_data_offset;
+-
++ void __iomem *onestep_reg_base;
++ u8 ptp_correction_off;
++ void (*dpaa2_set_onestep_params_cb)(struct dpaa2_eth_priv *priv,
++ u32 offset, u8 udp);
+ struct fsl_mc_device *dpbp_dev;
+ u16 rx_buf_size;
+ u16 bpid;
+@@ -645,6 +650,13 @@ enum dpaa2_eth_rx_dist {
+ #define DPAA2_ETH_DIST_L4DST BIT(8)
+ #define DPAA2_ETH_DIST_ALL (~0ULL)
+
++#define DPNI_PTP_ONESTEP_VER_MAJOR 8
++#define DPNI_PTP_ONESTEP_VER_MINOR 2
++#define DPAA2_ETH_FEATURE_ONESTEP_CFG_DIRECT BIT(0)
++#define DPAA2_PTP_SINGLE_STEP_ENABLE BIT(31)
++#define DPAA2_PTP_SINGLE_STEP_CH BIT(7)
++#define DPAA2_PTP_SINGLE_CORRECTION_OFF(v) ((v) << 8)
++
+ #define DPNI_PAUSE_VER_MAJOR 7
+ #define DPNI_PAUSE_VER_MINOR 13
+ #define dpaa2_eth_has_pause_support(priv) \
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+index f981a523e13a43..d7de60049700f2 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+@@ -225,17 +225,8 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
+ struct ethtool_stats *stats,
+ u64 *data)
+ {
+- int i = 0;
+- int j, k, err;
+- int num_cnt;
+- union dpni_statistics dpni_stats;
+- u32 fcnt, bcnt;
+- u32 fcnt_rx_total = 0, fcnt_tx_total = 0;
+- u32 bcnt_rx_total = 0, bcnt_tx_total = 0;
+- u32 buf_cnt;
+ struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+- struct dpaa2_eth_drv_stats *extras;
+- struct dpaa2_eth_ch_stats *ch_stats;
++ union dpni_statistics dpni_stats;
+ int dpni_stats_page_size[DPNI_STATISTICS_CNT] = {
+ sizeof(dpni_stats.page_0),
+ sizeof(dpni_stats.page_1),
+@@ -245,6 +236,13 @@ static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
+ sizeof(dpni_stats.page_5),
+ sizeof(dpni_stats.page_6),
+ };
++ u32 fcnt_rx_total = 0, fcnt_tx_total = 0;
++ u32 bcnt_rx_total = 0, bcnt_tx_total = 0;
++ struct dpaa2_eth_ch_stats *ch_stats;
++ struct dpaa2_eth_drv_stats *extras;
++ int j, k, err, num_cnt, i = 0;
++ u32 fcnt, bcnt;
++ u32 buf_cnt;
+
+ memset(data, 0,
+ sizeof(u64) * (DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS));
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h
+index 90453dc7baefed..a0dfd25c6bd4aa 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h
++++ b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h
+@@ -94,7 +94,7 @@
+ #define DPNI_CMDID_GET_LINK_CFG DPNI_CMD(0x278)
+
+ #define DPNI_CMDID_SET_SINGLE_STEP_CFG DPNI_CMD(0x279)
+-#define DPNI_CMDID_GET_SINGLE_STEP_CFG DPNI_CMD(0x27a)
++#define DPNI_CMDID_GET_SINGLE_STEP_CFG DPNI_CMD_V2(0x27a)
+
+ /* Macros for accessing command fields smaller than 1byte */
+ #define DPNI_MASK(field) \
+@@ -654,12 +654,16 @@ struct dpni_cmd_single_step_cfg {
+ __le16 flags;
+ __le16 offset;
+ __le32 peer_delay;
++ __le32 ptp_onestep_reg_base;
++ __le32 pad0;
+ };
+
+ struct dpni_rsp_single_step_cfg {
+ __le16 flags;
+ __le16 offset;
+ __le32 peer_delay;
++ __le32 ptp_onestep_reg_base;
++ __le32 pad0;
+ };
+
+ #endif /* _FSL_DPNI_CMD_H */
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.c b/drivers/net/ethernet/freescale/dpaa2/dpni.c
+index 6ea7db66a6322b..d248a40fbc3f8f 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpni.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpni.c
+@@ -2037,6 +2037,8 @@ int dpni_get_single_step_cfg(struct fsl_mc_io *mc_io,
+ ptp_cfg->ch_update = dpni_get_field(le16_to_cpu(rsp_params->flags),
+ PTP_CH_UPDATE) ? 1 : 0;
+ ptp_cfg->peer_delay = le32_to_cpu(rsp_params->peer_delay);
++ ptp_cfg->ptp_onestep_reg_base =
++ le32_to_cpu(rsp_params->ptp_onestep_reg_base);
+
+ return err;
+ }
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.h b/drivers/net/ethernet/freescale/dpaa2/dpni.h
+index e7b9e195b534b3..f854450983983e 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpni.h
++++ b/drivers/net/ethernet/freescale/dpaa2/dpni.h
+@@ -1096,12 +1096,18 @@ int dpni_set_tx_shaping(struct fsl_mc_io *mc_io,
+ * @peer_delay: For peer-to-peer transparent clocks add this value to the
+ * correction field in addition to the transient time update.
+ * The value expresses nanoseconds.
++ * @ptp_onestep_reg_base: 1588 SINGLE_STEP register base address. This address
++ * is used to update directly the register contents.
++ * User has to create an address mapping for it.
++ *
++ *
+ */
+ struct dpni_single_step_cfg {
+ u8 en;
+ u8 ch_update;
+ u16 offset;
+ u32 peer_delay;
++ u32 ptp_onestep_reg_base;
+ };
+
+ int dpni_set_single_step_cfg(struct fsl_mc_io *mc_io,
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_hw.h b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+index 2b90a345507b87..e0a58471ff592d 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_hw.h
++++ b/drivers/net/ethernet/freescale/enetc/enetc_hw.h
+@@ -444,7 +444,7 @@ static inline u64 _enetc_rd_reg64(void __iomem *reg)
+ tmp = ioread32(reg + 4);
+ } while (high != tmp);
+
+- return le64_to_cpu((__le64)high << 32 | low);
++ return (u64)high << 32 | low;
+ }
+ #endif
+
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index 1a269fa8c1a073..6a626b1b02338d 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -3317,7 +3317,7 @@ static int niu_rbr_add_page(struct niu *np, struct rx_ring_info *rp,
+
+ addr = np->ops->map_page(np->device, page, 0,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+- if (!addr) {
++ if (np->ops->mapping_error(np->device, addr)) {
+ __free_page(page);
+ return -ENOMEM;
+ }
+@@ -6654,6 +6654,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ len = skb_headlen(skb);
+ mapping = np->ops->map_single(np->device, skb->data,
+ len, DMA_TO_DEVICE);
++ if (np->ops->mapping_error(np->device, mapping))
++ goto out_drop;
+
+ prod = rp->prod;
+
+@@ -6695,6 +6697,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ mapping = np->ops->map_page(np->device, skb_frag_page(frag),
+ skb_frag_off(frag), len,
+ DMA_TO_DEVICE);
++ if (np->ops->mapping_error(np->device, mapping))
++ goto out_unmap;
+
+ rp->tx_buffs[prod].skb = NULL;
+ rp->tx_buffs[prod].mapping = mapping;
+@@ -6719,6 +6723,19 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ out:
+ return NETDEV_TX_OK;
+
++out_unmap:
++ while (i--) {
++ const skb_frag_t *frag;
++
++ prod = PREVIOUS_TX(rp, prod);
++ frag = &skb_shinfo(skb)->frags[i];
++ np->ops->unmap_page(np->device, rp->tx_buffs[prod].mapping,
++ skb_frag_size(frag), DMA_TO_DEVICE);
++ }
++
++ np->ops->unmap_single(np->device, rp->tx_buffs[rp->prod].mapping,
++ skb_headlen(skb), DMA_TO_DEVICE);
++
+ out_drop:
+ rp->tx_errors++;
+ kfree_skb(skb);
+@@ -9612,6 +9629,11 @@ static void niu_pci_unmap_single(struct device *dev, u64 dma_address,
+ dma_unmap_single(dev, dma_address, size, direction);
+ }
+
++static int niu_pci_mapping_error(struct device *dev, u64 addr)
++{
++ return dma_mapping_error(dev, addr);
++}
++
+ static const struct niu_ops niu_pci_ops = {
+ .alloc_coherent = niu_pci_alloc_coherent,
+ .free_coherent = niu_pci_free_coherent,
+@@ -9619,6 +9641,7 @@ static const struct niu_ops niu_pci_ops = {
+ .unmap_page = niu_pci_unmap_page,
+ .map_single = niu_pci_map_single,
+ .unmap_single = niu_pci_unmap_single,
++ .mapping_error = niu_pci_mapping_error,
+ };
+
+ static void niu_driver_version(void)
+@@ -9993,6 +10016,11 @@ static void niu_phys_unmap_single(struct device *dev, u64 dma_address,
+ /* Nothing to do. */
+ }
+
++static int niu_phys_mapping_error(struct device *dev, u64 dma_address)
++{
++ return false;
++}
++
+ static const struct niu_ops niu_phys_ops = {
+ .alloc_coherent = niu_phys_alloc_coherent,
+ .free_coherent = niu_phys_free_coherent,
+@@ -10000,6 +10028,7 @@ static const struct niu_ops niu_phys_ops = {
+ .unmap_page = niu_phys_unmap_page,
+ .map_single = niu_phys_map_single,
+ .unmap_single = niu_phys_unmap_single,
++ .mapping_error = niu_phys_mapping_error,
+ };
+
+ static int niu_of_probe(struct platform_device *op)
+diff --git a/drivers/net/ethernet/sun/niu.h b/drivers/net/ethernet/sun/niu.h
+index 04c215f91fc08e..0b169c08b0f2d1 100644
+--- a/drivers/net/ethernet/sun/niu.h
++++ b/drivers/net/ethernet/sun/niu.h
+@@ -2879,6 +2879,9 @@ struct tx_ring_info {
+ #define NEXT_TX(tp, index) \
+ (((index) + 1) < (tp)->pending ? ((index) + 1) : 0)
+
++#define PREVIOUS_TX(tp, index) \
++ (((index) - 1) >= 0 ? ((index) - 1) : (((tp)->pending) - 1))
++
+ static inline u32 niu_tx_avail(struct tx_ring_info *tp)
+ {
+ return (tp->pending -
+@@ -3140,6 +3143,7 @@ struct niu_ops {
+ enum dma_data_direction direction);
+ void (*unmap_single)(struct device *dev, u64 dma_address,
+ size_t size, enum dma_data_direction direction);
++ int (*mapping_error)(struct device *dev, u64 dma_address);
+ };
+
+ struct niu_link_config {
+diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
+index e50b59efe188b3..5ace1a4905d7e8 100644
+--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
++++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
+@@ -1299,7 +1299,7 @@ static int ll_temac_ethtools_set_ringparam(struct net_device *ndev,
+ if (ering->rx_pending > RX_BD_NUM_MAX ||
+ ering->rx_mini_pending ||
+ ering->rx_jumbo_pending ||
+- ering->rx_pending > TX_BD_NUM_MAX)
++ ering->tx_pending > TX_BD_NUM_MAX)
+ return -EINVAL;
+
+ if (netif_running(ndev))
+diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
+index 375bbd60b38af6..e6ad7d29a05595 100644
+--- a/drivers/net/phy/microchip.c
++++ b/drivers/net/phy/microchip.c
+@@ -335,7 +335,7 @@ static void lan88xx_link_change_notify(struct phy_device *phydev)
+ * As workaround, set to 10 before setting to 100
+ * at forced 100 F/H mode.
+ */
+- if (!phydev->autoneg && phydev->speed == 100) {
++ if (phydev->state == PHY_NOLINK && !phydev->autoneg && phydev->speed == 100) {
+ /* disable phy interrupt */
+ temp = phy_read(phydev, LAN88XX_INT_MASK);
+ temp &= ~LAN88XX_INT_MASK_MDINTPIN_EN_;
+diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
+index b67de3f9ef186f..d860a2626b13be 100644
+--- a/drivers/net/phy/smsc.c
++++ b/drivers/net/phy/smsc.c
+@@ -120,10 +120,29 @@ static int lan911x_config_init(struct phy_device *phydev)
+
+ static int lan87xx_config_aneg(struct phy_device *phydev)
+ {
+- int rc;
++ u8 mdix_ctrl;
+ int val;
++ int rc;
++
++ /* When auto-negotiation is disabled (forced mode), the PHY's
++ * Auto-MDIX will continue toggling the TX/RX pairs.
++ *
++ * To establish a stable link, we must select a fixed MDI mode.
++ * If the user has not specified a fixed MDI mode (i.e., mdix_ctrl is
++ * 'auto'), we default to ETH_TP_MDI. This choice of a ETH_TP_MDI mode
++ * mirrors the behavior the hardware would exhibit if the AUTOMDIX_EN
++ * strap were configured for a fixed MDI connection.
++ */
++ if (phydev->autoneg == AUTONEG_DISABLE) {
++ if (phydev->mdix_ctrl == ETH_TP_MDI_AUTO)
++ mdix_ctrl = ETH_TP_MDI;
++ else
++ mdix_ctrl = phydev->mdix_ctrl;
++ } else {
++ mdix_ctrl = phydev->mdix_ctrl;
++ }
+
+- switch (phydev->mdix_ctrl) {
++ switch (mdix_ctrl) {
+ case ETH_TP_MDI:
+ val = SPECIAL_CTRL_STS_OVRRD_AMDIX_;
+ break;
+@@ -132,7 +151,8 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
+ SPECIAL_CTRL_STS_AMDIX_STATE_;
+ break;
+ case ETH_TP_MDI_AUTO:
+- val = SPECIAL_CTRL_STS_AMDIX_ENABLE_;
++ val = SPECIAL_CTRL_STS_OVRRD_AMDIX_ |
++ SPECIAL_CTRL_STS_AMDIX_ENABLE_;
+ break;
+ default:
+ return genphy_config_aneg(phydev);
+@@ -148,7 +168,7 @@ static int lan87xx_config_aneg(struct phy_device *phydev)
+ rc |= val;
+ phy_write(phydev, SPECIAL_CTRL_STS, rc);
+
+- phydev->mdix = phydev->mdix_ctrl;
++ phydev->mdix = mdix_ctrl;
+ return genphy_config_aneg(phydev);
+ }
+
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 3ab7b27b6bac3b..9f493d504d20f6 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1360,6 +1360,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x03f0, 0x9d1d, 1)}, /* HP lt4120 Snapdragon X5 LTE */
+ {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */
+ {QMI_QUIRK_SET_DTR(0x1e0e, 0x9001, 5)}, /* SIMCom 7100E, 7230E, 7600E ++ */
++ {QMI_QUIRK_SET_DTR(0x1e0e, 0x9071, 3)}, /* SIMCom 8230C ++ */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0191, 4)}, /* Quectel EG91 */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0195, 4)}, /* Quectel EG95 */
+diff --git a/drivers/net/wireless/ath/ath6kl/bmi.c b/drivers/net/wireless/ath/ath6kl/bmi.c
+index af98e871199d31..5a9e93fd1ef42a 100644
+--- a/drivers/net/wireless/ath/ath6kl/bmi.c
++++ b/drivers/net/wireless/ath/ath6kl/bmi.c
+@@ -87,7 +87,9 @@ int ath6kl_bmi_get_target_info(struct ath6kl *ar,
+ * We need to do some backwards compatibility to make this work.
+ */
+ if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) {
+- WARN_ON(1);
++ ath6kl_err("mismatched byte count %d vs. expected %zd\n",
++ le32_to_cpu(targ_info->byte_count),
++ sizeof(*targ_info));
+ return -EINVAL;
+ }
+
+diff --git a/drivers/net/wireless/zydas/zd1211rw/zd_mac.c b/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
+index 3ef8533205f913..0a7f368f0d99ca 100644
+--- a/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
++++ b/drivers/net/wireless/zydas/zd1211rw/zd_mac.c
+@@ -583,7 +583,11 @@ void zd_mac_tx_to_dev(struct sk_buff *skb, int error)
+
+ skb_queue_tail(q, skb);
+ while (skb_queue_len(q) > ZD_MAC_MAX_ACK_WAITERS) {
+- zd_mac_tx_status(hw, skb_dequeue(q),
++ skb = skb_dequeue(q);
++ if (!skb)
++ break;
++
++ zd_mac_tx_status(hw, skb,
+ mac->ack_pending ? mac->ack_signal : 0,
+ NULL);
+ mac->ack_pending = 0;
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index 403ff93bc85090..f6edbe77e640af 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -253,11 +253,12 @@ static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u16 interrupts,
+ struct cdns_pcie *pcie = &ep->pcie;
+ u32 cap = CDNS_PCIE_EP_FUNC_MSIX_CAP_OFFSET;
+ u32 val, reg;
++ u16 actual_interrupts = interrupts + 1;
+
+ reg = cap + PCI_MSIX_FLAGS;
+ val = cdns_pcie_ep_fn_readw(pcie, fn, reg);
+ val &= ~PCI_MSIX_FLAGS_QSIZE;
+- val |= interrupts;
++ val |= interrupts; /* 0's based value */
+ cdns_pcie_ep_fn_writew(pcie, fn, reg, val);
+
+ /* Set MSIX BAR and offset */
+@@ -267,7 +268,7 @@ static int cdns_pcie_ep_set_msix(struct pci_epc *epc, u8 fn, u16 interrupts,
+
+ /* Set PBA BAR and offset. BAR must match MSIX BAR */
+ reg = cap + PCI_MSIX_PBA;
+- val = (offset + (interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
++ val = (offset + (actual_interrupts * PCI_MSIX_ENTRY_SIZE)) | bir;
+ cdns_pcie_ep_fn_writel(pcie, fn, reg, val);
+
+ return 0;
+diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
+index d0b3ec2373850d..e41726ec407c62 100644
+--- a/drivers/pci/controller/pci-hyperv.c
++++ b/drivers/pci/controller/pci-hyperv.c
+@@ -1820,12 +1820,17 @@ static void prepopulate_bars(struct hv_pcibus_device *hbus)
+ }
+ }
+ if (high_size <= 1 && low_size <= 1) {
+- /* Set the memory enable bit. */
+- _hv_pcifront_read_config(hpdev, PCI_COMMAND, 2,
+- &command);
+- command |= PCI_COMMAND_MEMORY;
+- _hv_pcifront_write_config(hpdev, PCI_COMMAND, 2,
+- command);
++ /*
++ * No need to set the PCI_COMMAND_MEMORY bit as
++ * the core PCI driver doesn't require the bit
++ * to be pre-set. Actually here we intentionally
++ * keep the bit off so that the PCI BAR probing
++ * in the core PCI driver doesn't cause Hyper-V
++ * to unnecessarily unmap/map the virtual BARs
++ * from/to the physical BARs multiple times.
++ * This reduces the VM boot time significantly
++ * if the BAR sizes are huge.
++ */
+ break;
+ }
+ }
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index ad52846b6beb64..049a34f0e13f77 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -949,6 +949,25 @@ static bool msm_gpio_needs_dual_edge_parent_workaround(struct irq_data *d,
+ test_bit(d->hwirq, pctrl->skip_wake_irqs);
+ }
+
++static void msm_gpio_irq_init_valid_mask(struct gpio_chip *gc,
++ unsigned long *valid_mask,
++ unsigned int ngpios)
++{
++ struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
++ const struct msm_pingroup *g;
++ int i;
++
++ bitmap_fill(valid_mask, ngpios);
++
++ for (i = 0; i < ngpios; i++) {
++ g = &pctrl->soc->groups[i];
++
++ if (g->intr_detection_width != 1 &&
++ g->intr_detection_width != 2)
++ clear_bit(i, valid_mask);
++ }
++}
++
+ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+@@ -1307,6 +1326,7 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
+ girq->default_type = IRQ_TYPE_NONE;
+ girq->handler = handle_bad_irq;
+ girq->parents[0] = pctrl->irq;
++ girq->init_valid_mask = msm_gpio_irq_init_valid_mask;
+
+ ret = gpiochip_add_data(&pctrl->chip, pctrl);
+ if (ret) {
+diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
+index 767f4406e55f11..1eb7f4eb1156c3 100644
+--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
+@@ -253,7 +253,8 @@ static int mlxbf_tmfifo_alloc_vrings(struct mlxbf_tmfifo *fifo,
+ vring->align = SMP_CACHE_BYTES;
+ vring->index = i;
+ vring->vdev_id = tm_vdev->vdev.id.device;
+- vring->drop_desc.len = VRING_DROP_DESC_MAX_LEN;
++ vring->drop_desc.len = cpu_to_virtio32(&tm_vdev->vdev,
++ VRING_DROP_DESC_MAX_LEN);
+ dev = &tm_vdev->vdev.dev;
+
+ size = vring_size(vring->num, vring->align);
+diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c
+index f2cb5e0347e36d..239eb052f40be0 100644
+--- a/drivers/pwm/pwm-mediatek.c
++++ b/drivers/pwm/pwm-mediatek.c
+@@ -135,8 +135,10 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ return ret;
+
+ clk_rate = clk_get_rate(pc->clk_pwms[pwm->hwpwm]);
+- if (!clk_rate)
+- return -EINVAL;
++ if (!clk_rate) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ /* Make sure we use the bus clock and not the 26MHz clock */
+ if (pc->soc->has_ck_26m_sel)
+@@ -155,9 +157,9 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ }
+
+ if (clkdiv > PWM_CLK_DIV_MAX) {
+- pwm_mediatek_clk_disable(chip, pwm);
+- dev_err(chip->dev, "period %d not supported\n", period_ns);
+- return -EINVAL;
++ dev_err(chip->dev, "period of %d ns not supported\n", period_ns);
++ ret = -EINVAL;
++ goto out;
+ }
+
+ if (pc->soc->pwm45_fixup && pwm->hwpwm > 2) {
+@@ -174,9 +176,10 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ pwm_mediatek_writel(pc, pwm->hwpwm, reg_width, cnt_period);
+ pwm_mediatek_writel(pc, pwm->hwpwm, reg_thres, cnt_duty);
+
++out:
+ pwm_mediatek_clk_disable(chip, pwm);
+
+- return 0;
++ return ret;
+ }
+
+ static int pwm_mediatek_enable(struct pwm_chip *chip, struct pwm_device *pwm)
+diff --git a/drivers/regulator/gpio-regulator.c b/drivers/regulator/gpio-regulator.c
+index 5927d4f3eabd75..de07b16b34f8e4 100644
+--- a/drivers/regulator/gpio-regulator.c
++++ b/drivers/regulator/gpio-regulator.c
+@@ -257,8 +257,8 @@ static int gpio_regulator_probe(struct platform_device *pdev)
+ return -ENOMEM;
+ }
+
+- drvdata->gpiods = devm_kzalloc(dev, sizeof(struct gpio_desc *),
+- GFP_KERNEL);
++ drvdata->gpiods = devm_kcalloc(dev, config->ngpios,
++ sizeof(struct gpio_desc *), GFP_KERNEL);
+ if (!drvdata->gpiods)
+ return -ENOMEM;
+ for (i = 0; i < config->ngpios; i++) {
+diff --git a/drivers/rtc/lib_test.c b/drivers/rtc/lib_test.c
+index fa6fd2875b3d97..225c859d6da550 100644
+--- a/drivers/rtc/lib_test.c
++++ b/drivers/rtc/lib_test.c
+@@ -77,3 +77,5 @@ static struct kunit_suite rtc_lib_test_suite = {
+ };
+
+ kunit_test_suite(rtc_lib_test_suite);
++
++MODULE_LICENSE("GPL");
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index a55a1cff2ef033..97fa887f4f02c8 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -704,8 +704,12 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ {
+ u8 irqstat;
+ u8 rtc_control;
++ unsigned long flags;
+
+- spin_lock(&rtc_lock);
++ /* We cannot use spin_lock() here, as cmos_interrupt() is also called
++ * in a non-irq context.
++ */
++ spin_lock_irqsave(&rtc_lock, flags);
+
+ /* When the HPET interrupt handler calls us, the interrupt
+ * status is passed as arg1 instead of the irq number. But
+@@ -739,7 +743,7 @@ static irqreturn_t cmos_interrupt(int irq, void *p)
+ hpet_mask_rtc_irq_bit(RTC_AIE);
+ CMOS_READ(RTC_INTR_FLAGS);
+ }
+- spin_unlock(&rtc_lock);
++ spin_unlock_irqrestore(&rtc_lock, flags);
+
+ if (is_intr(irqstat)) {
+ rtc_update_irq(p, 1, irqstat);
+@@ -1289,9 +1293,7 @@ static void cmos_check_wkalrm(struct device *dev)
+ * ACK the rtc irq here
+ */
+ if (t_now >= cmos->alarm_expires && cmos_use_acpi_alarm()) {
+- local_irq_disable();
+ cmos_interrupt(0, (void *)cmos->rtc);
+- local_irq_enable();
+ return;
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 21ba7100ff6760..8b7c71e779a781 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -2097,7 +2097,7 @@ qla24xx_get_port_database(scsi_qla_host_t *vha, u16 nport_handle,
+
+ pdb_dma = dma_map_single(&vha->hw->pdev->dev, pdb,
+ sizeof(*pdb), DMA_FROM_DEVICE);
+- if (!pdb_dma) {
++ if (dma_mapping_error(&vha->hw->pdev->dev, pdb_dma)) {
+ ql_log(ql_log_warn, vha, 0x1116, "Failed to map dma buffer.\n");
+ return QLA_MEMORY_ALLOC_FAILED;
+ }
+diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c
+index 05ae9b11570966..f02d8bbea3e511 100644
+--- a/drivers/scsi/qla4xxx/ql4_os.c
++++ b/drivers/scsi/qla4xxx/ql4_os.c
+@@ -3425,6 +3425,8 @@ static int qla4xxx_alloc_pdu(struct iscsi_task *task, uint8_t opcode)
+ task_data->data_dma = dma_map_single(&ha->pdev->dev, task->data,
+ task->data_count,
+ DMA_TO_DEVICE);
++ if (dma_mapping_error(&ha->pdev->dev, task_data->data_dma))
++ return -ENOMEM;
+ }
+
+ DEBUG2(ql4_printk(KERN_INFO, ha, "%s: MaxRecvLen %u, iscsi hrd %d\n",
+diff --git a/drivers/scsi/ufs/ufs-sysfs.c b/drivers/scsi/ufs/ufs-sysfs.c
+index 34b424ad96a20e..32b6fe493ae986 100644
+--- a/drivers/scsi/ufs/ufs-sysfs.c
++++ b/drivers/scsi/ufs/ufs-sysfs.c
+@@ -806,7 +806,7 @@ UFS_UNIT_DESC_PARAM(logical_block_size, _LOGICAL_BLK_SIZE, 1);
+ UFS_UNIT_DESC_PARAM(logical_block_count, _LOGICAL_BLK_COUNT, 8);
+ UFS_UNIT_DESC_PARAM(erase_block_size, _ERASE_BLK_SIZE, 4);
+ UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1);
+-UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8);
++UFS_UNIT_DESC_PARAM(physical_memory_resource_count, _PHY_MEM_RSRC_CNT, 8);
+ UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2);
+ UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1);
+ UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4);
+@@ -823,7 +823,7 @@ static struct attribute *ufs_sysfs_unit_descriptor[] = {
+ &dev_attr_logical_block_count.attr,
+ &dev_attr_erase_block_size.attr,
+ &dev_attr_provisioning_type.attr,
+- &dev_attr_physical_memory_resourse_count.attr,
++ &dev_attr_physical_memory_resource_count.attr,
+ &dev_attr_context_capabilities.attr,
+ &dev_attr_large_unit_granularity.attr,
+ &dev_attr_wb_buf_alloc_units.attr,
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index eda7ed618369d6..580fdcbcd9b6c4 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -964,11 +964,20 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) {
+ status = dspi_dma_xfer(dspi);
+ } else {
++ /*
++ * Reinitialize the completion before transferring data
++ * to avoid the case where it might remain in the done
++ * state due to a spurious interrupt from a previous
++ * transfer. This could falsely signal that the current
++ * transfer has completed.
++ */
++ if (dspi->irq)
++ reinit_completion(&dspi->xfer_done);
++
+ dspi_fifo_write(dspi);
+
+ if (dspi->irq) {
+ wait_for_completion(&dspi->xfer_done);
+- reinit_completion(&dspi->xfer_done);
+ } else {
+ do {
+ status = dspi_poll(dspi);
+diff --git a/drivers/staging/rtl8723bs/core/rtw_security.c b/drivers/staging/rtl8723bs/core/rtw_security.c
+index 159d32ace2bc7c..cc709e849f39f4 100644
+--- a/drivers/staging/rtl8723bs/core/rtw_security.c
++++ b/drivers/staging/rtl8723bs/core/rtw_security.c
+@@ -1299,30 +1299,21 @@ static sint aes_cipher(u8 *key, uint hdrlen,
+ num_blocks, payload_index;
+
+ u8 pn_vector[6];
+- u8 mic_iv[16];
+- u8 mic_header1[16];
+- u8 mic_header2[16];
+- u8 ctr_preload[16];
++ u8 mic_iv[16] = {};
++ u8 mic_header1[16] = {};
++ u8 mic_header2[16] = {};
++ u8 ctr_preload[16] = {};
+
+ /* Intermediate Buffers */
+- u8 chain_buffer[16];
+- u8 aes_out[16];
+- u8 padded_buffer[16];
++ u8 chain_buffer[16] = {};
++ u8 aes_out[16] = {};
++ u8 padded_buffer[16] = {};
+ u8 mic[8];
+ uint frtype = GetFrameType(pframe);
+ uint frsubtype = GetFrameSubType(pframe);
+
+ frsubtype = frsubtype>>4;
+
+-
+- memset((void *)mic_iv, 0, 16);
+- memset((void *)mic_header1, 0, 16);
+- memset((void *)mic_header2, 0, 16);
+- memset((void *)ctr_preload, 0, 16);
+- memset((void *)chain_buffer, 0, 16);
+- memset((void *)aes_out, 0, 16);
+- memset((void *)padded_buffer, 0, 16);
+-
+ if ((hdrlen == WLAN_HDR_A3_LEN) || (hdrlen == WLAN_HDR_A3_QOS_LEN))
+ a4_exists = 0;
+ else
+@@ -1540,15 +1531,15 @@ static sint aes_decipher(u8 *key, uint hdrlen,
+ num_blocks, payload_index;
+ sint res = _SUCCESS;
+ u8 pn_vector[6];
+- u8 mic_iv[16];
+- u8 mic_header1[16];
+- u8 mic_header2[16];
+- u8 ctr_preload[16];
++ u8 mic_iv[16] = {};
++ u8 mic_header1[16] = {};
++ u8 mic_header2[16] = {};
++ u8 ctr_preload[16] = {};
+
+ /* Intermediate Buffers */
+- u8 chain_buffer[16];
+- u8 aes_out[16];
+- u8 padded_buffer[16];
++ u8 chain_buffer[16] = {};
++ u8 aes_out[16] = {};
++ u8 padded_buffer[16] = {};
+ u8 mic[8];
+
+
+@@ -1557,15 +1548,6 @@ static sint aes_decipher(u8 *key, uint hdrlen,
+
+ frsubtype = frsubtype>>4;
+
+-
+- memset((void *)mic_iv, 0, 16);
+- memset((void *)mic_header1, 0, 16);
+- memset((void *)mic_header2, 0, 16);
+- memset((void *)ctr_preload, 0, 16);
+- memset((void *)chain_buffer, 0, 16);
+- memset((void *)aes_out, 0, 16);
+- memset((void *)padded_buffer, 0, 16);
+-
+ /* start to decrypt the payload */
+
+ num_blocks = (plen-8) / 16; /* plen including LLC, payload_length and mic) */
+diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c
+index b42193c554fb28..2bc849799739ea 100644
+--- a/drivers/target/target_core_pr.c
++++ b/drivers/target/target_core_pr.c
+@@ -1858,7 +1858,9 @@ core_scsi3_decode_spec_i_port(
+ }
+
+ kmem_cache_free(t10_pr_reg_cache, dest_pr_reg);
+- core_scsi3_lunacl_undepend_item(dest_se_deve);
++
++ if (dest_se_deve)
++ core_scsi3_lunacl_undepend_item(dest_se_deve);
+
+ if (is_local)
+ continue;
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 5d9de3a53548b1..98ca54330d7713 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -4452,6 +4452,7 @@ void do_unblank_screen(int leaving_gfx)
+ set_palette(vc);
+ set_cursor(vc);
+ vt_event_post(VT_EVENT_UNBLANK, vc->vc_num, vc->vc_num);
++ notify_update(vc);
+ }
+ EXPORT_SYMBOL(do_unblank_screen);
+
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index 67cfe838a78743..6625d340f3ac5d 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -249,6 +249,7 @@ hv_uio_probe(struct hv_device *dev,
+ struct hv_uio_private_data *pdata;
+ void *ring_buffer;
+ int ret;
++ size_t ring_size = hv_dev_ring_size(channel);
+
+ /* Communicating with host has to be via shared memory not hypercall */
+ if (!channel->offermsg.monitor_allocated) {
+@@ -256,14 +257,19 @@ hv_uio_probe(struct hv_device *dev,
+ return -ENOTSUPP;
+ }
+
+- pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
++ if (!ring_size)
++ ring_size = HV_RING_SIZE * PAGE_SIZE;
++
++ /* Adjust ring size if necessary to have it page aligned */
++ ring_size = VMBUS_RING_SIZE(ring_size);
++
++ pdata = devm_kzalloc(&dev->device, sizeof(*pdata), GFP_KERNEL);
+ if (!pdata)
+ return -ENOMEM;
+
+- ret = vmbus_alloc_ring(channel, HV_RING_SIZE * PAGE_SIZE,
+- HV_RING_SIZE * PAGE_SIZE);
++ ret = vmbus_alloc_ring(channel, ring_size, ring_size);
+ if (ret)
+- goto fail;
++ return ret;
+
+ set_channel_read_mode(channel, HV_CALL_ISR);
+
+@@ -360,8 +366,6 @@ hv_uio_probe(struct hv_device *dev,
+
+ fail_close:
+ hv_uio_cleanup(dev, pdata);
+-fail:
+- kfree(pdata);
+
+ return ret;
+ }
+@@ -377,10 +381,8 @@ hv_uio_remove(struct hv_device *dev)
+ sysfs_remove_bin_file(&dev->channel->kobj, &ring_buffer_bin_attr);
+ uio_unregister_device(&pdata->info);
+ hv_uio_cleanup(dev, pdata);
+- hv_set_drvdata(dev, NULL);
+
+ vmbus_free_ring(dev->channel);
+- kfree(pdata);
+ return 0;
+ }
+
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index aa91d561a0ace2..26a59443d25f30 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -89,7 +89,6 @@ struct wdm_device {
+ u16 wMaxCommand;
+ u16 wMaxPacketSize;
+ __le16 inum;
+- int reslength;
+ int length;
+ int read;
+ int count;
+@@ -201,6 +200,11 @@ static void wdm_in_callback(struct urb *urb)
+ if (desc->rerr == 0 && status != -EPIPE)
+ desc->rerr = status;
+
++ if (length == 0) {
++ dev_dbg(&desc->intf->dev, "received ZLP\n");
++ goto skip_zlp;
++ }
++
+ if (length + desc->length > desc->wMaxCommand) {
+ /* The buffer would overflow */
+ set_bit(WDM_OVERFLOW, &desc->flags);
+@@ -209,18 +213,18 @@ static void wdm_in_callback(struct urb *urb)
+ if (!test_bit(WDM_OVERFLOW, &desc->flags)) {
+ memmove(desc->ubuf + desc->length, desc->inbuf, length);
+ desc->length += length;
+- desc->reslength = length;
+ }
+ }
+ skip_error:
+
+ if (desc->rerr) {
+ /*
+- * Since there was an error, userspace may decide to not read
+- * any data after poll'ing.
++ * If there was a ZLP or an error, userspace may decide to not
++ * read any data after poll'ing.
+ * We should respond to further attempts from the device to send
+ * data, so that we can get unstuck.
+ */
++skip_zlp:
+ schedule_work(&desc->service_outs_intr);
+ } else {
+ set_bit(WDM_READ, &desc->flags);
+@@ -571,15 +575,6 @@ static ssize_t wdm_read
+ goto retry;
+ }
+
+- if (!desc->reslength) { /* zero length read */
+- dev_dbg(&desc->intf->dev, "zero length - clearing WDM_READ\n");
+- clear_bit(WDM_READ, &desc->flags);
+- rv = service_outstanding_interrupt(desc);
+- spin_unlock_irq(&desc->iuspin);
+- if (rv < 0)
+- goto err;
+- goto retry;
+- }
+ cntr = desc->length;
+ spin_unlock_irq(&desc->iuspin);
+ }
+@@ -839,7 +834,7 @@ static void service_interrupt_work(struct work_struct *work)
+
+ spin_lock_irq(&desc->iuspin);
+ service_outstanding_interrupt(desc);
+- if (!desc->resp_count) {
++ if (!desc->resp_count && (desc->length || desc->rerr)) {
+ set_bit(WDM_READ, &desc->flags);
+ wake_up(&desc->wait);
+ }
+diff --git a/drivers/usb/common/usb-conn-gpio.c b/drivers/usb/common/usb-conn-gpio.c
+index 02446092520c89..f5a1981c9eb40e 100644
+--- a/drivers/usb/common/usb-conn-gpio.c
++++ b/drivers/usb/common/usb-conn-gpio.c
+@@ -20,6 +20,9 @@
+ #include <linux/power_supply.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/usb/role.h>
++#include <linux/idr.h>
++
++static DEFINE_IDA(usb_conn_ida);
+
+ #define USB_GPIO_DEB_MS 20 /* ms */
+ #define USB_GPIO_DEB_US ((USB_GPIO_DEB_MS) * 1000) /* us */
+@@ -29,6 +32,7 @@
+
+ struct usb_conn_info {
+ struct device *dev;
++ int conn_id; /* store the IDA-allocated ID */
+ struct usb_role_switch *role_sw;
+ enum usb_role last_role;
+ struct regulator *vbus;
+@@ -160,7 +164,17 @@ static int usb_conn_psy_register(struct usb_conn_info *info)
+ .of_node = dev->of_node,
+ };
+
+- desc->name = "usb-charger";
++ info->conn_id = ida_alloc(&usb_conn_ida, GFP_KERNEL);
++ if (info->conn_id < 0)
++ return info->conn_id;
++
++ desc->name = devm_kasprintf(dev, GFP_KERNEL, "usb-charger-%d",
++ info->conn_id);
++ if (!desc->name) {
++ ida_free(&usb_conn_ida, info->conn_id);
++ return -ENOMEM;
++ }
++
+ desc->properties = usb_charger_properties;
+ desc->num_properties = ARRAY_SIZE(usb_charger_properties);
+ desc->get_property = usb_charger_get_property;
+@@ -168,8 +182,10 @@ static int usb_conn_psy_register(struct usb_conn_info *info)
+ cfg.drv_data = info;
+
+ info->charger = devm_power_supply_register(dev, desc, &cfg);
+- if (IS_ERR(info->charger))
+- dev_err(dev, "Unable to register charger\n");
++ if (IS_ERR(info->charger)) {
++ dev_err(dev, "Unable to register charger %d\n", info->conn_id);
++ ida_free(&usb_conn_ida, info->conn_id);
++ }
+
+ return PTR_ERR_OR_ZERO(info->charger);
+ }
+@@ -296,6 +312,9 @@ static int usb_conn_remove(struct platform_device *pdev)
+
+ cancel_delayed_work_sync(&info->dw_det);
+
++ if (info->charger)
++ ida_free(&usb_conn_ida, info->conn_id);
++
+ if (info->last_role == USB_ROLE_HOST && info->vbus)
+ regulator_disable(info->vbus);
+
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 89ffadb1a4f0be..ff3b5131903ace 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -224,7 +224,8 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME },
+
+ /* Logitech HD Webcam C270 */
+- { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME },
++ { USB_DEVICE(0x046d, 0x0825), .driver_info = USB_QUIRK_RESET_RESUME |
++ USB_QUIRK_NO_LPM},
+
+ /* Logitech HD Pro Webcams C920, C920-C, C922, C925e and C930e */
+ { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT },
+diff --git a/drivers/usb/core/usb.c b/drivers/usb/core/usb.c
+index c4cd9d46f9e3c3..7be4e8f77a5ca6 100644
+--- a/drivers/usb/core/usb.c
++++ b/drivers/usb/core/usb.c
+@@ -704,15 +704,16 @@ struct usb_device *usb_alloc_dev(struct usb_device *parent,
+ dev_set_name(&dev->dev, "usb%d", bus->busnum);
+ root_hub = 1;
+ } else {
++ int n;
++
+ /* match any labeling on the hubs; it's one-based */
+ if (parent->devpath[0] == '0') {
+- snprintf(dev->devpath, sizeof dev->devpath,
+- "%d", port1);
++ n = snprintf(dev->devpath, sizeof(dev->devpath), "%d", port1);
+ /* Root ports are not counted in route string */
+ dev->route = 0;
+ } else {
+- snprintf(dev->devpath, sizeof dev->devpath,
+- "%s.%d", parent->devpath, port1);
++ n = snprintf(dev->devpath, sizeof(dev->devpath), "%s.%d",
++ parent->devpath, port1);
+ /* Route string assumes hubs have less than 16 ports */
+ if (port1 < 15)
+ dev->route = parent->route +
+@@ -721,6 +722,11 @@ struct usb_device *usb_alloc_dev(struct usb_device *parent,
+ dev->route = parent->route +
+ (15 << ((parent->level - 1)*4));
+ }
++ if (n >= sizeof(dev->devpath)) {
++ usb_put_hcd(bus_to_hcd(bus));
++ usb_put_dev(dev);
++ return NULL;
++ }
+
+ dev->dev.parent = &parent->dev;
+ dev_set_name(&dev->dev, "%d-%s", bus->busnum, dev->devpath);
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 7f825c961fb88f..30c3a44abb183a 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -1325,14 +1325,14 @@ static struct se_portal_group *usbg_make_tpg(struct se_wwn *wwn,
+ struct usbg_tport *tport = container_of(wwn, struct usbg_tport,
+ tport_wwn);
+ struct usbg_tpg *tpg;
+- unsigned long tpgt;
++ u16 tpgt;
+ int ret;
+ struct f_tcm_opts *opts;
+ unsigned i;
+
+ if (strstr(name, "tpgt_") != name)
+ return ERR_PTR(-EINVAL);
+- if (kstrtoul(name + 5, 0, &tpgt) || tpgt > UINT_MAX)
++ if (kstrtou16(name + 5, 0, &tpgt))
+ return ERR_PTR(-EINVAL);
+ ret = -ENODEV;
+ mutex_lock(&tpg_instances_lock);
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index a2ba5ab9617c16..02470345d8548d 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -292,8 +292,8 @@ __acquires(&port->port_lock)
+ break;
+ }
+
+- if (do_tty_wake && port->port.tty)
+- tty_wakeup(port->port.tty);
++ if (do_tty_wake)
++ tty_port_tty_wakeup(&port->port);
+ return status;
+ }
+
+@@ -570,7 +570,7 @@ static int gs_start_io(struct gs_port *port)
+ gs_start_tx(port);
+ /* Unblock any pending writes into our circular buffer, in case
+ * we didn't in gs_start_tx() */
+- tty_wakeup(port->port.tty);
++ tty_port_tty_wakeup(&port->port);
+ } else {
+ /* Free reqs only if we are still connected */
+ if (port->port_usb) {
+diff --git a/drivers/usb/host/xhci-dbgcap.c b/drivers/usb/host/xhci-dbgcap.c
+index 75108acf3741c5..03f047f5508bff 100644
+--- a/drivers/usb/host/xhci-dbgcap.c
++++ b/drivers/usb/host/xhci-dbgcap.c
+@@ -639,6 +639,10 @@ static void xhci_dbc_stop(struct xhci_dbc *dbc)
+ case DS_DISABLED:
+ return;
+ case DS_CONFIGURED:
++ spin_lock(&dbc->lock);
++ xhci_dbc_flush_requests(dbc);
++ spin_unlock(&dbc->lock);
++
+ if (dbc->driver->disconnect)
+ dbc->driver->disconnect(dbc);
+ break;
+diff --git a/drivers/usb/host/xhci-dbgtty.c b/drivers/usb/host/xhci-dbgtty.c
+index 2a53b283199998..0b91cc2ba8fb1b 100644
+--- a/drivers/usb/host/xhci-dbgtty.c
++++ b/drivers/usb/host/xhci-dbgtty.c
+@@ -529,6 +529,7 @@ static int dbc_tty_init(void)
+ dbc_tty_driver->type = TTY_DRIVER_TYPE_SERIAL;
+ dbc_tty_driver->subtype = SERIAL_TYPE_NORMAL;
+ dbc_tty_driver->init_termios = tty_std_termios;
++ dbc_tty_driver->init_termios.c_lflag &= ~ECHO;
+ dbc_tty_driver->init_termios.c_cflag =
+ B9600 | CS8 | CREAD | HUPCL | CLOCAL;
+ dbc_tty_driver->init_termios.c_ispeed = 9600;
+diff --git a/drivers/usb/typec/altmodes/displayport.c b/drivers/usb/typec/altmodes/displayport.c
+index e0456e5e10b688..70a21451c58b7d 100644
+--- a/drivers/usb/typec/altmodes/displayport.c
++++ b/drivers/usb/typec/altmodes/displayport.c
+@@ -304,6 +304,9 @@ static int dp_altmode_vdm(struct typec_altmode *alt,
+ break;
+ case CMDT_RSP_NAK:
+ switch (cmd) {
++ case DP_CMD_STATUS_UPDATE:
++ dp->state = DP_STATE_EXIT;
++ break;
+ case DP_CMD_CONFIGURE:
+ dp->data.conf = 0;
+ ret = dp_altmode_configured(dp);
+@@ -505,7 +508,7 @@ static ssize_t pin_assignment_show(struct device *dev,
+
+ assignments = get_current_pin_assignments(dp);
+
+- for (i = 0; assignments; assignments >>= 1, i++) {
++ for (i = 0; assignments && i < DP_PIN_ASSIGN_MAX; assignments >>= 1, i++) {
+ if (assignments & 1) {
+ if (i == cur)
+ len += sprintf(buf + len, "[%s] ",
+diff --git a/drivers/usb/typec/tcpm/tcpci_maxim.c b/drivers/usb/typec/tcpm/tcpci_maxim.c
+index 723d7dd38f75bb..d694094084f85e 100644
+--- a/drivers/usb/typec/tcpm/tcpci_maxim.c
++++ b/drivers/usb/typec/tcpm/tcpci_maxim.c
+@@ -151,7 +151,7 @@ static void process_rx(struct max_tcpci_chip *chip, u16 status)
+ */
+ ret = regmap_raw_read(chip->data.regmap, TCPC_RX_BYTE_CNT, rx_buf, 2);
+ if (ret < 0) {
+- dev_err(chip->dev, "TCPC_RX_BYTE_CNT read failed ret:%d", ret);
++ dev_err(chip->dev, "TCPC_RX_BYTE_CNT read failed ret:%d\n", ret);
+ return;
+ }
+
+@@ -160,13 +160,14 @@ static void process_rx(struct max_tcpci_chip *chip, u16 status)
+
+ if (count == 0 || frame_type != TCPC_RX_BUF_FRAME_TYPE_SOP) {
+ max_tcpci_write16(chip, TCPC_ALERT, TCPC_ALERT_RX_STATUS);
+- dev_err(chip->dev, "%s", count == 0 ? "error: count is 0" :
++ dev_err(chip->dev, "%s\n", count == 0 ? "error: count is 0" :
+ "error frame_type is not SOP");
+ return;
+ }
+
+- if (count > sizeof(struct pd_message) || count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
+- dev_err(chip->dev, "Invalid TCPC_RX_BYTE_CNT %d", count);
++ if (count > sizeof(struct pd_message) + 1 ||
++ count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
++ dev_err(chip->dev, "Invalid TCPC_RX_BYTE_CNT %d\n", count);
+ return;
+ }
+
+@@ -177,7 +178,7 @@ static void process_rx(struct max_tcpci_chip *chip, u16 status)
+ count += 1;
+ ret = regmap_raw_read(chip->data.regmap, TCPC_RX_BYTE_CNT, rx_buf, count);
+ if (ret < 0) {
+- dev_err(chip->dev, "Error: TCPC_RX_BYTE_CNT read failed: %d", ret);
++ dev_err(chip->dev, "Error: TCPC_RX_BYTE_CNT read failed: %d\n", ret);
+ return;
+ }
+
+@@ -311,7 +312,7 @@ static irqreturn_t _max_tcpci_irq(struct max_tcpci_chip *chip, u16 status)
+ return ret;
+
+ if (reg_status & TCPC_SINK_FAST_ROLE_SWAP) {
+- dev_info(chip->dev, "FRS Signal");
++ dev_info(chip->dev, "FRS Signal\n");
+ tcpm_sink_frs(chip->port);
+ }
+ }
+@@ -344,7 +345,7 @@ static irqreturn_t max_tcpci_irq(int irq, void *dev_id)
+ {
+ struct max_tcpci_chip *chip = dev_id;
+ u16 status;
+- irqreturn_t irq_return;
++ irqreturn_t irq_return = IRQ_HANDLED;
+ int ret;
+
+ if (!chip->port)
+@@ -444,9 +445,8 @@ static int max_tcpci_probe(struct i2c_client *client, const struct i2c_device_id
+
+ max_tcpci_init_regs(chip);
+ chip->tcpci = tcpci_register_port(chip->dev, &chip->data);
+- if (IS_ERR_OR_NULL(chip->tcpci)) {
+- dev_err(&client->dev, "TCPCI port registration failed");
+- ret = PTR_ERR(chip->tcpci);
++ if (IS_ERR(chip->tcpci)) {
++ dev_err(&client->dev, "TCPCI port registration failed\n");
+ return PTR_ERR(chip->tcpci);
+ }
+ chip->port = tcpci_get_tcpm_port(chip->tcpci);
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index a23a65e7d828e4..fcde3752b4f1b3 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -579,8 +579,10 @@ static void vhost_scsi_complete_cmd_work(struct vhost_work *work)
+ ret = copy_to_iter(&v_rsp, sizeof(v_rsp), &iov_iter);
+ if (likely(ret == sizeof(v_rsp))) {
+ struct vhost_scsi_virtqueue *q;
+- vhost_add_used(cmd->tvc_vq, cmd->tvc_vq_desc, 0);
+ q = container_of(cmd->tvc_vq, struct vhost_scsi_virtqueue, vq);
++ mutex_lock(&q->vq.mutex);
++ vhost_add_used(cmd->tvc_vq, cmd->tvc_vq_desc, 0);
++ mutex_unlock(&q->vq.mutex);
+ vq = q - vs->vqs;
+ __set_bit(vq, signal);
+ } else
+@@ -1193,8 +1195,11 @@ static void vhost_scsi_tmf_resp_work(struct vhost_work *work)
+ else
+ resp_code = VIRTIO_SCSI_S_FUNCTION_REJECTED;
+
++ mutex_lock(&tmf->svq->vq.mutex);
+ vhost_scsi_send_tmf_resp(tmf->vhost, &tmf->svq->vq, tmf->in_iovs,
+ tmf->vq_desc, &tmf->resp_iov, resp_code);
++ mutex_unlock(&tmf->svq->vq.mutex);
++
+ vhost_scsi_release_tmf_res(tmf);
+ }
+
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 8d7ca8a21525aa..82805ac91b06cc 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4150,7 +4150,6 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ int err = 0;
+ struct btrfs_root *root = BTRFS_I(dir)->root;
+ struct btrfs_trans_handle *trans;
+- u64 last_unlink_trans;
+
+ if (inode->i_size > BTRFS_EMPTY_DIR_SIZE)
+ return -ENOTEMPTY;
+@@ -4161,6 +4160,23 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
+
++ /*
++ * Propagate the last_unlink_trans value of the deleted dir to its
++ * parent directory. This is to prevent an unrecoverable log tree in the
++ * case we do something like this:
++ * 1) create dir foo
++ * 2) create snapshot under dir foo
++ * 3) delete the snapshot
++ * 4) rmdir foo
++ * 5) mkdir foo
++ * 6) fsync foo or some file inside foo
++ *
++ * This is because we can't unlink other roots when replaying the dir
++ * deletes for directory foo.
++ */
++ if (BTRFS_I(inode)->last_unlink_trans >= trans->transid)
++ btrfs_record_snapshot_destroy(trans, BTRFS_I(dir));
++
+ if (unlikely(btrfs_ino(BTRFS_I(inode)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) {
+ err = btrfs_unlink_subvol(trans, dir, dentry);
+ goto out;
+@@ -4170,28 +4186,12 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry)
+ if (err)
+ goto out;
+
+- last_unlink_trans = BTRFS_I(inode)->last_unlink_trans;
+-
+ /* now the directory is empty */
+ err = btrfs_unlink_inode(trans, root, BTRFS_I(dir),
+ BTRFS_I(d_inode(dentry)), dentry->d_name.name,
+ dentry->d_name.len);
+- if (!err) {
++ if (!err)
+ btrfs_i_size_write(BTRFS_I(inode), 0);
+- /*
+- * Propagate the last_unlink_trans value of the deleted dir to
+- * its parent directory. This is to prevent an unrecoverable
+- * log tree in the case we do something like this:
+- * 1) create dir foo
+- * 2) create snapshot under dir foo
+- * 3) delete the snapshot
+- * 4) rmdir foo
+- * 5) mkdir foo
+- * 6) fsync foo or some file inside foo
+- */
+- if (last_unlink_trans >= trans->transid)
+- BTRFS_I(dir)->last_unlink_trans = last_unlink_trans;
+- }
+ out:
+ btrfs_end_transaction(trans);
+ btrfs_btree_balance_dirty(root->fs_info);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 4ee68142932798..dd1c40019412cb 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1005,7 +1005,9 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans,
+ search_key.type = BTRFS_INODE_REF_KEY;
+ search_key.offset = parent_objectid;
+ ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0);
+- if (ret == 0) {
++ if (ret < 0) {
++ return ret;
++ } else if (ret == 0) {
+ struct btrfs_inode_ref *victim_ref;
+ unsigned long ptr;
+ unsigned long ptr_end;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 9524588346b8e3..9c1a7b3b84e42e 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -3101,6 +3101,12 @@ int btrfs_remove_chunk(struct btrfs_trans_handle *trans, u64 chunk_offset)
+ device->bytes_used - dev_extent_len);
+ atomic64_add(dev_extent_len, &fs_info->free_chunk_space);
+ btrfs_clear_space_info_full(fs_info);
++
++ if (list_empty(&device->post_commit_list)) {
++ list_add_tail(&device->post_commit_list,
++ &trans->transaction->dev_update_list);
++ }
++
+ mutex_unlock(&fs_info->chunk_mutex);
+ }
+
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index d4974c652e8e4a..c1eafff45b1943 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -2034,7 +2034,7 @@ static int ceph_zero_objects(struct inode *inode, loff_t offset, loff_t length)
+ s32 stripe_unit = ci->i_layout.stripe_unit;
+ s32 stripe_count = ci->i_layout.stripe_count;
+ s32 object_size = ci->i_layout.object_size;
+- u64 object_set_size = object_size * stripe_count;
++ u64 object_set_size = (u64) object_size * stripe_count;
+ u64 nearly, t;
+
+ /* round offset up to next period boundary */
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index 2d46018b02839d..54c443686dabad 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -310,6 +310,14 @@ check_smb_hdr(struct smb_hdr *smb)
+ if (smb->Command == SMB_COM_LOCKING_ANDX)
+ return 0;
+
++ /*
++	 * Windows NT server returns error response (e.g. STATUS_DELETE_PENDING
++ * or STATUS_OBJECT_NAME_NOT_FOUND or ERRDOS/ERRbadfile or any other)
++ * for some TRANS2 requests without the RESPONSE flag set in header.
++ */
++ if (smb->Command == SMB_COM_TRANSACTION2 && smb->Status.CifsError != 0)
++ return 0;
++
+ cifs_dbg(VFS, "Server sent request, not response. mid=%u\n",
+ get_mid(smb));
+ return 1;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index b7997df291a66b..d7fd28a4770112 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1433,26 +1433,32 @@ static int f2fs_statfs_project(struct super_block *sb,
+
+ limit = min_not_zero(dquot->dq_dqb.dqb_bsoftlimit,
+ dquot->dq_dqb.dqb_bhardlimit);
+- if (limit)
+- limit >>= sb->s_blocksize_bits;
++ limit >>= sb->s_blocksize_bits;
++
++ if (limit) {
++ uint64_t remaining = 0;
+
+- if (limit && buf->f_blocks > limit) {
+ curblock = (dquot->dq_dqb.dqb_curspace +
+ dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
+- buf->f_blocks = limit;
+- buf->f_bfree = buf->f_bavail =
+- (buf->f_blocks > curblock) ?
+- (buf->f_blocks - curblock) : 0;
++ if (limit > curblock)
++ remaining = limit - curblock;
++
++ buf->f_blocks = min(buf->f_blocks, limit);
++ buf->f_bfree = min(buf->f_bfree, remaining);
++ buf->f_bavail = min(buf->f_bavail, remaining);
+ }
+
+ limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit,
+ dquot->dq_dqb.dqb_ihardlimit);
+
+- if (limit && buf->f_files > limit) {
+- buf->f_files = limit;
+- buf->f_ffree =
+- (buf->f_files > dquot->dq_dqb.dqb_curinodes) ?
+- (buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0;
++ if (limit) {
++ uint64_t remaining = 0;
++
++ if (limit > dquot->dq_dqb.dqb_curinodes)
++ remaining = limit - dquot->dq_dqb.dqb_curinodes;
++
++ buf->f_files = min(buf->f_files, limit);
++ buf->f_ffree = min(buf->f_ffree, remaining);
+ }
+
+ spin_unlock(&dquot->dq_dqb_lock);
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 9dccebbee55ad0..37888187b97738 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -178,45 +178,30 @@ int dbMount(struct inode *ipbmap)
+ dbmp_le = (struct dbmap_disk *) mp->data;
+ bmp->db_mapsize = le64_to_cpu(dbmp_le->dn_mapsize);
+ bmp->db_nfree = le64_to_cpu(dbmp_le->dn_nfree);
+-
+ bmp->db_l2nbperpage = le32_to_cpu(dbmp_le->dn_l2nbperpage);
+- if (bmp->db_l2nbperpage > L2PSIZE - L2MINBLOCKSIZE ||
+- bmp->db_l2nbperpage < 0) {
+- err = -EINVAL;
+- goto err_release_metapage;
+- }
+-
+ bmp->db_numag = le32_to_cpu(dbmp_le->dn_numag);
+- if (!bmp->db_numag || bmp->db_numag > MAXAG) {
+- err = -EINVAL;
+- goto err_release_metapage;
+- }
+-
+ bmp->db_maxlevel = le32_to_cpu(dbmp_le->dn_maxlevel);
+ bmp->db_maxag = le32_to_cpu(dbmp_le->dn_maxag);
+ bmp->db_agpref = le32_to_cpu(dbmp_le->dn_agpref);
+- if (bmp->db_maxag >= MAXAG || bmp->db_maxag < 0 ||
+- bmp->db_agpref >= MAXAG || bmp->db_agpref < 0) {
+- err = -EINVAL;
+- goto err_release_metapage;
+- }
+-
+ bmp->db_aglevel = le32_to_cpu(dbmp_le->dn_aglevel);
+ bmp->db_agheight = le32_to_cpu(dbmp_le->dn_agheight);
+ bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
+- if (!bmp->db_agwidth) {
+- err = -EINVAL;
+- goto err_release_metapage;
+- }
+ bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
+ bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
+- if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG ||
+- bmp->db_agl2size < 0) {
+- err = -EINVAL;
+- goto err_release_metapage;
+- }
+
+- if (((bmp->db_mapsize - 1) >> bmp->db_agl2size) > MAXAG) {
++ if ((bmp->db_l2nbperpage > L2PSIZE - L2MINBLOCKSIZE) ||
++ (bmp->db_l2nbperpage < 0) ||
++ !bmp->db_numag || (bmp->db_numag > MAXAG) ||
++ (bmp->db_maxag >= MAXAG) || (bmp->db_maxag < 0) ||
++ (bmp->db_agpref >= MAXAG) || (bmp->db_agpref < 0) ||
++ (bmp->db_agheight < 0) || (bmp->db_agheight > (L2LPERCTL >> 1)) ||
++ (bmp->db_agwidth < 1) || (bmp->db_agwidth > (LPERCTL / MAXAG)) ||
++ (bmp->db_agwidth > (1 << (L2LPERCTL - (bmp->db_agheight << 1)))) ||
++ (bmp->db_agstart < 0) ||
++ (bmp->db_agstart > (CTLTREESIZE - 1 - bmp->db_agwidth * (MAXAG - 1))) ||
++ (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG) ||
++ (bmp->db_agl2size < 0) ||
++ ((bmp->db_mapsize - 1) >> bmp->db_agl2size) > MAXAG) {
+ err = -EINVAL;
+ goto err_release_metapage;
+ }
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 2d5af6653cd118..ee6d139f75292d 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2186,14 +2186,14 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+ hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) {
+ struct mount *q;
+ hlist_del_init(&child->mnt_hash);
+- q = __lookup_mnt(&child->mnt_parent->mnt,
+- child->mnt_mountpoint);
+- if (q)
+- mnt_change_mountpoint(child, smp, q);
+ /* Notice when we are propagating across user namespaces */
+ if (child->mnt_parent->mnt_ns->user_ns != user_ns)
+ lock_mnt_tree(child);
+ child->mnt.mnt_flags &= ~MNT_LOCKED;
++ q = __lookup_mnt(&child->mnt_parent->mnt,
++ child->mnt_mountpoint);
++ if (q)
++ mnt_change_mountpoint(child, smp, q);
+ commit_tree(child);
+ }
+ put_mountpoint(smp);
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index ce9c2d1f54ae0e..f8962eaec87bc4 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1103,6 +1103,7 @@ static void ff_layout_reset_read(struct nfs_pgio_header *hdr)
+ }
+
+ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
++ u32 op_status,
+ struct nfs4_state *state,
+ struct nfs_client *clp,
+ struct pnfs_layout_segment *lseg,
+@@ -1113,32 +1114,42 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
+ struct nfs4_slot_table *tbl = &clp->cl_session->fc_slot_table;
+
+- switch (task->tk_status) {
+- case -NFS4ERR_BADSESSION:
+- case -NFS4ERR_BADSLOT:
+- case -NFS4ERR_BAD_HIGH_SLOT:
+- case -NFS4ERR_DEADSESSION:
+- case -NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
+- case -NFS4ERR_SEQ_FALSE_RETRY:
+- case -NFS4ERR_SEQ_MISORDERED:
++ switch (op_status) {
++ case NFS4_OK:
++ case NFS4ERR_NXIO:
++ break;
++ case NFSERR_PERM:
++ if (!task->tk_xprt)
++ break;
++ xprt_force_disconnect(task->tk_xprt);
++ goto out_retry;
++ case NFS4ERR_BADSESSION:
++ case NFS4ERR_BADSLOT:
++ case NFS4ERR_BAD_HIGH_SLOT:
++ case NFS4ERR_DEADSESSION:
++ case NFS4ERR_CONN_NOT_BOUND_TO_SESSION:
++ case NFS4ERR_SEQ_FALSE_RETRY:
++ case NFS4ERR_SEQ_MISORDERED:
+ dprintk("%s ERROR %d, Reset session. Exchangeid "
+ "flags 0x%x\n", __func__, task->tk_status,
+ clp->cl_exchange_flags);
+ nfs4_schedule_session_recovery(clp->cl_session, task->tk_status);
+- break;
+- case -NFS4ERR_DELAY:
+- case -NFS4ERR_GRACE:
++ goto out_retry;
++ case NFS4ERR_DELAY:
++ nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);
++ fallthrough;
++ case NFS4ERR_GRACE:
+ rpc_delay(task, FF_LAYOUT_POLL_RETRY_MAX);
+- break;
+- case -NFS4ERR_RETRY_UNCACHED_REP:
+- break;
++ goto out_retry;
++ case NFS4ERR_RETRY_UNCACHED_REP:
++ goto out_retry;
+ /* Invalidate Layout errors */
+- case -NFS4ERR_PNFS_NO_LAYOUT:
+- case -ESTALE: /* mapped NFS4ERR_STALE */
+- case -EBADHANDLE: /* mapped NFS4ERR_BADHANDLE */
+- case -EISDIR: /* mapped NFS4ERR_ISDIR */
+- case -NFS4ERR_FHEXPIRED:
+- case -NFS4ERR_WRONG_TYPE:
++ case NFS4ERR_PNFS_NO_LAYOUT:
++ case NFS4ERR_STALE:
++ case NFS4ERR_BADHANDLE:
++ case NFS4ERR_ISDIR:
++ case NFS4ERR_FHEXPIRED:
++ case NFS4ERR_WRONG_TYPE:
+ dprintk("%s Invalid layout error %d\n", __func__,
+ task->tk_status);
+ /*
+@@ -1151,6 +1162,11 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ pnfs_destroy_layout(NFS_I(inode));
+ rpc_wake_up(&tbl->slot_tbl_waitq);
+ goto reset;
++ default:
++ break;
++ }
++
++ switch (task->tk_status) {
+ /* RPC connection errors */
+ case -ECONNREFUSED:
+ case -EHOSTDOWN:
+@@ -1164,26 +1180,56 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ nfs4_delete_deviceid(devid->ld, devid->nfs_client,
+ &devid->deviceid);
+ rpc_wake_up(&tbl->slot_tbl_waitq);
+- fallthrough;
++ break;
+ default:
+- if (ff_layout_avoid_mds_available_ds(lseg))
+- return -NFS4ERR_RESET_TO_PNFS;
+-reset:
+- dprintk("%s Retry through MDS. Error %d\n", __func__,
+- task->tk_status);
+- return -NFS4ERR_RESET_TO_MDS;
++ break;
+ }
++
++ if (ff_layout_avoid_mds_available_ds(lseg))
++ return -NFS4ERR_RESET_TO_PNFS;
++reset:
++ dprintk("%s Retry through MDS. Error %d\n", __func__,
++ task->tk_status);
++ return -NFS4ERR_RESET_TO_MDS;
++
++out_retry:
+ task->tk_status = 0;
+ return -EAGAIN;
+ }
+
+ /* Retry all errors through either pNFS or MDS except for -EJUKEBOX */
+ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
++ u32 op_status,
++ struct nfs_client *clp,
+ struct pnfs_layout_segment *lseg,
+ u32 idx)
+ {
+ struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
+
++ switch (op_status) {
++ case NFS_OK:
++ case NFSERR_NXIO:
++ break;
++ case NFSERR_PERM:
++ if (!task->tk_xprt)
++ break;
++ xprt_force_disconnect(task->tk_xprt);
++ goto out_retry;
++ case NFSERR_ACCES:
++ case NFSERR_BADHANDLE:
++ case NFSERR_FBIG:
++ case NFSERR_IO:
++ case NFSERR_NOSPC:
++ case NFSERR_ROFS:
++ case NFSERR_STALE:
++ goto out_reset_to_pnfs;
++ case NFSERR_JUKEBOX:
++ nfs_inc_stats(lseg->pls_layout->plh_inode, NFSIOS_DELAY);
++ goto out_retry;
++ default:
++ break;
++ }
++
+ switch (task->tk_status) {
+ /* File access problems. Don't mark the device as unavailable */
+ case -EACCES:
+@@ -1202,6 +1248,7 @@ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
+ nfs4_delete_deviceid(devid->ld, devid->nfs_client,
+ &devid->deviceid);
+ }
++out_reset_to_pnfs:
+ /* FIXME: Need to prevent infinite looping here. */
+ return -NFS4ERR_RESET_TO_PNFS;
+ out_retry:
+@@ -1212,6 +1259,7 @@ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
+ }
+
+ static int ff_layout_async_handle_error(struct rpc_task *task,
++ u32 op_status,
+ struct nfs4_state *state,
+ struct nfs_client *clp,
+ struct pnfs_layout_segment *lseg,
+@@ -1230,10 +1278,11 @@ static int ff_layout_async_handle_error(struct rpc_task *task,
+
+ switch (vers) {
+ case 3:
+- return ff_layout_async_handle_error_v3(task, lseg, idx);
+- case 4:
+- return ff_layout_async_handle_error_v4(task, state, clp,
++ return ff_layout_async_handle_error_v3(task, op_status, clp,
+ lseg, idx);
++ case 4:
++ return ff_layout_async_handle_error_v4(task, op_status, state,
++ clp, lseg, idx);
+ default:
+ /* should never happen */
+ WARN_ON_ONCE(1);
+@@ -1284,6 +1333,7 @@ static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
+ switch (status) {
+ case NFS4ERR_DELAY:
+ case NFS4ERR_GRACE:
++ case NFS4ERR_PERM:
+ break;
+ case NFS4ERR_NXIO:
+ ff_layout_mark_ds_unreachable(lseg, idx);
+@@ -1316,7 +1366,8 @@ static int ff_layout_read_done_cb(struct rpc_task *task,
+ trace_ff_layout_read_error(hdr);
+ }
+
+- err = ff_layout_async_handle_error(task, hdr->args.context->state,
++ err = ff_layout_async_handle_error(task, hdr->res.op_status,
++ hdr->args.context->state,
+ hdr->ds_clp, hdr->lseg,
+ hdr->pgio_mirror_idx);
+
+@@ -1481,7 +1532,8 @@ static int ff_layout_write_done_cb(struct rpc_task *task,
+ trace_ff_layout_write_error(hdr);
+ }
+
+- err = ff_layout_async_handle_error(task, hdr->args.context->state,
++ err = ff_layout_async_handle_error(task, hdr->res.op_status,
++ hdr->args.context->state,
+ hdr->ds_clp, hdr->lseg,
+ hdr->pgio_mirror_idx);
+
+@@ -1527,8 +1579,9 @@ static int ff_layout_commit_done_cb(struct rpc_task *task,
+ trace_ff_layout_commit_error(data);
+ }
+
+- err = ff_layout_async_handle_error(task, NULL, data->ds_clp,
+- data->lseg, data->ds_commit_index);
++ err = ff_layout_async_handle_error(task, data->res.op_status,
++ NULL, data->ds_clp, data->lseg,
++ data->ds_commit_index);
+
+ trace_nfs4_pnfs_commit_ds(data, err);
+ switch (err) {
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index d82eb1b2164f3d..3e3114a9d19375 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -2227,15 +2227,26 @@ EXPORT_SYMBOL_GPL(nfs_net_id);
+ static int nfs_net_init(struct net *net)
+ {
+ struct nfs_net *nn = net_generic(net, nfs_net_id);
++ int err;
+
+ nfs_clients_init(net);
+
+ if (!rpc_proc_register(net, &nn->rpcstats)) {
+- nfs_clients_exit(net);
+- return -ENOMEM;
++ err = -ENOMEM;
++ goto err_proc_rpc;
+ }
+
+- return nfs_fs_proc_net_init(net);
++ err = nfs_fs_proc_net_init(net);
++ if (err)
++ goto err_proc_nfs;
++
++ return 0;
++
++err_proc_nfs:
++ rpc_proc_unregister(net, "nfs");
++err_proc_rpc:
++ nfs_clients_exit(net);
++ return err;
+ }
+
+ static void nfs_net_exit(struct net *net)
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 1005ecf7c250b3..77cc1c4219e15b 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -10378,7 +10378,7 @@ const struct nfs4_minor_version_ops *nfs_v4_minor_ops[] = {
+
+ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ {
+- ssize_t error, error2, error3;
++ ssize_t error, error2, error3, error4;
+ size_t left = size;
+
+ error = generic_listxattr(dentry, list, left);
+@@ -10401,8 +10401,16 @@ static ssize_t nfs4_listxattr(struct dentry *dentry, char *list, size_t size)
+ error3 = nfs4_listxattr_nfs4_user(d_inode(dentry), list, left);
+ if (error3 < 0)
+ return error3;
++ if (list) {
++ list += error3;
++ left -= error3;
++ }
++
++ error4 = security_inode_listsecurity(d_inode(dentry), list, left);
++ if (error4 < 0)
++ return error4;
+
+- error += error2 + error3;
++ error += error2 + error3 + error4;
+ if (size && error > size)
+ return -ERANGE;
+ return error;
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 1800836306a5d1..758689877d85d7 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1934,8 +1934,10 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo)
+ static void nfs_layoutget_end(struct pnfs_layout_hdr *lo)
+ {
+ if (atomic_dec_and_test(&lo->plh_outstanding) &&
+- test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags))
++ test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) {
++ smp_mb__after_atomic();
+ wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN);
++ }
+ }
+
+ static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo)
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index 62a258c2b59cdb..26f29a3e5ada03 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -211,7 +211,9 @@ enum ovl_path_type ovl_path_real(struct dentry *dentry, struct path *path)
+
+ struct dentry *ovl_dentry_upper(struct dentry *dentry)
+ {
+- return ovl_upperdentry_dereference(OVL_I(d_inode(dentry)));
++ struct inode *inode = d_inode(dentry);
++
++ return inode ? ovl_upperdentry_dereference(OVL_I(inode)) : NULL;
+ }
+
+ struct dentry *ovl_dentry_lower(struct dentry *dentry)
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index 8fba6d39e776fd..77b94c04e4aff7 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -512,18 +512,18 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ cgtime = sig->cgtime;
+
+ if (whole) {
+- struct task_struct *t = task;
++ struct task_struct *t;
+
+ min_flt = sig->min_flt;
+ maj_flt = sig->maj_flt;
+ gtime = sig->gtime;
+
+ rcu_read_lock();
+- do {
++ __for_each_thread(sig, t) {
+ min_flt += t->min_flt;
+ maj_flt += t->maj_flt;
+ gtime += task_gtime(t);
+- } while_each_thread(task, t);
++ }
+ rcu_read_unlock();
+
+ thread_group_cputime_adjusted(task, &utime, &stime);
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index ba35ffc426eac9..269a14a50d8b0a 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -54,7 +54,7 @@ static void proc_evict_inode(struct inode *inode)
+
+ head = ei->sysctl;
+ if (head) {
+- RCU_INIT_POINTER(ei->sysctl, NULL);
++ WRITE_ONCE(ei->sysctl, NULL);
+ proc_sys_evict_inode(inode, head);
+ }
+ }
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index f5c9677353354c..78bd6063142816 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -909,17 +909,21 @@ static int proc_sys_compare(const struct dentry *dentry,
+ struct ctl_table_header *head;
+ struct inode *inode;
+
+- /* Although proc doesn't have negative dentries, rcu-walk means
+- * that inode here can be NULL */
+- /* AV: can it, indeed? */
+- inode = d_inode_rcu(dentry);
+- if (!inode)
+- return 1;
+ if (name->len != len)
+ return 1;
+ if (memcmp(name->name, str, len))
+ return 1;
+- head = rcu_dereference(PROC_I(inode)->sysctl);
++
++ // false positive is fine here - we'll recheck anyway
++ if (d_in_lookup(dentry))
++ return 0;
++
++ inode = d_inode_rcu(dentry);
++ // we just might have run into dentry in the middle of __dentry_kill()
++ if (!inode)
++ return 1;
++
++ head = READ_ONCE(PROC_I(inode)->sysctl);
+ return !head || !sysctl_is_seen(head);
+ }
+
+diff --git a/include/drm/spsc_queue.h b/include/drm/spsc_queue.h
+index 125f096c88cb96..ee9df8cc67b730 100644
+--- a/include/drm/spsc_queue.h
++++ b/include/drm/spsc_queue.h
+@@ -70,9 +70,11 @@ static inline bool spsc_queue_push(struct spsc_queue *queue, struct spsc_node *n
+
+ preempt_disable();
+
++ atomic_inc(&queue->job_count);
++ smp_mb__after_atomic();
++
+ tail = (struct spsc_node **)atomic_long_xchg(&queue->tail, (long)&node->next);
+ WRITE_ONCE(*tail, node);
+- atomic_inc(&queue->job_count);
+
+ /*
+ * In case of first element verify new node will be visible to the consumer
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 2099226d862381..f00bbb174a2e73 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -76,6 +76,9 @@ extern ssize_t cpu_show_gds(struct device *dev,
+ struct device_attribute *attr, char *buf);
+ extern ssize_t cpu_show_reg_file_data_sampling(struct device *dev,
+ struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_indirect_target_selection(struct device *dev,
++ struct device_attribute *attr, char *buf);
++extern ssize_t cpu_show_tsa(struct device *dev, struct device_attribute *attr, char *buf);
+
+ extern __printf(4, 5)
+ struct device *cpu_device_create(struct device *parent, void *drvdata,
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 5e019d26b5b729..987cc04f131820 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -802,6 +802,8 @@ struct vmbus_requestor {
+ #define VMBUS_RQST_ID_NO_RESPONSE (U64_MAX - 2)
+
+ struct vmbus_device {
++ /* preferred ring buffer size in KB, 0 means no preferred size for this device */
++ size_t pref_ring_size;
+ u16 dev_type;
+ guid_t guid;
+ bool perf_device;
+diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
+index 89b3527e036751..d758c131ed5e11 100644
+--- a/include/linux/ipv6.h
++++ b/include/linux/ipv6.h
+@@ -189,7 +189,6 @@ struct inet6_cork {
+ struct ipv6_txoptions *opt;
+ u8 hop_limit;
+ u8 tclass;
+- u8 dontfrag:1;
+ };
+
+ /**
+diff --git a/include/linux/module.h b/include/linux/module.h
+index 63fe94e6ae6f19..f5a150c42918c6 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -524,6 +524,11 @@ struct module {
+ atomic_t refcnt;
+ #endif
+
++#ifdef CONFIG_MITIGATION_ITS
++ int its_num_pages;
++ void **its_page_array;
++#endif
++
+ #ifdef CONFIG_CONSTRUCTORS
+ /* Constructor functions. */
+ ctor_fn_t *ctors;
+diff --git a/include/linux/usb/typec_dp.h b/include/linux/usb/typec_dp.h
+index 296909ea04f26c..afb73b3e0b8090 100644
+--- a/include/linux/usb/typec_dp.h
++++ b/include/linux/usb/typec_dp.h
+@@ -56,6 +56,7 @@ enum {
+ DP_PIN_ASSIGN_D,
+ DP_PIN_ASSIGN_E,
+ DP_PIN_ASSIGN_F, /* Not supported after v1.0b */
++ DP_PIN_ASSIGN_MAX,
+ };
+
+ /* DisplayPort alt mode specific commands */
+diff --git a/include/uapi/linux/vm_sockets.h b/include/uapi/linux/vm_sockets.h
+index fd0ed7221645d9..4263c85593fa01 100644
+--- a/include/uapi/linux/vm_sockets.h
++++ b/include/uapi/linux/vm_sockets.h
+@@ -17,7 +17,12 @@
+ #ifndef _UAPI_VM_SOCKETS_H
+ #define _UAPI_VM_SOCKETS_H
+
++#ifndef __KERNEL__
++#include <sys/socket.h> /* for struct sockaddr and sa_family_t */
++#endif
++
+ #include <linux/socket.h>
++#include <linux/types.h>
+
+ /* Option name for STREAM socket buffer size. Use as the option name in
+ * setsockopt(3) or getsockopt(3) to set or get an unsigned long long that
+@@ -114,6 +119,26 @@
+
+ #define VMADDR_CID_HOST 2
+
++/* The current default use case for the vsock channel is the following:
++ * local vsock communication between guest and host and nested VMs setup.
++ * In addition to this, implicitly, the vsock packets are forwarded to the host
++ * if no host->guest vsock transport is set.
++ *
++ * Set this flag value in the sockaddr_vm corresponding field if the vsock
++ * packets need to be always forwarded to the host. Using this behavior,
++ * vsock communication between sibling VMs can be setup.
++ *
++ * This way can explicitly distinguish between vsock channels created for
++ * different use cases, such as nested VMs (or local communication between
++ * guest and host) and sibling VMs.
++ *
++ * The flag can be set in the connect logic in the user space application flow.
++ * In the listen logic (from kernel space) the flag is set on the remote peer
++ * address. This happens for an incoming connection when it is routed from the
++ * host and comes from the guest (local CID and remote CID > VMADDR_CID_HOST).
++ */
++#define VMADDR_FLAG_TO_HOST 0x01
++
+ /* Invalid vSockets version. */
+
+ #define VM_SOCKETS_INVALID_VERSION -1U
+@@ -148,10 +173,13 @@ struct sockaddr_vm {
+ unsigned short svm_reserved1;
+ unsigned int svm_port;
+ unsigned int svm_cid;
++ __u8 svm_flags;
+ unsigned char svm_zero[sizeof(struct sockaddr) -
+ sizeof(sa_family_t) -
+ sizeof(unsigned short) -
+- sizeof(unsigned int) - sizeof(unsigned int)];
++ sizeof(unsigned int) -
++ sizeof(unsigned int) -
++ sizeof(__u8)];
+ };
+
+ #define IOCTL_VM_SOCKETS_GET_LOCAL_CID _IO(7, 0xb9)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index b133abe23a4b1f..bf9f9eab6f67f8 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -9823,7 +9823,7 @@ static int perf_uprobe_event_init(struct perf_event *event)
+ if (event->attr.type != perf_uprobe.type)
+ return -ENOENT;
+
+- if (!perfmon_capable())
++ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ /*
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 06bfe61d3cd388..c4eb06d37ae91b 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -2959,6 +2959,10 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
+ /* Misaligned rcu_head! */
+ WARN_ON_ONCE((unsigned long)head & (sizeof(void *) - 1));
+
++ /* Avoid NULL dereference if callback is NULL. */
++ if (WARN_ON_ONCE(!func))
++ return;
++
+ if (debug_rcu_head_queue(head)) {
+ /*
+ * Probable double call_rcu(), so leak the callback.
+diff --git a/kernel/rseq.c b/kernel/rseq.c
+index 6ca29dddceabc7..2f8a1c6ac0a35b 100644
+--- a/kernel/rseq.c
++++ b/kernel/rseq.c
+@@ -112,6 +112,29 @@ static int rseq_reset_rseq_cpu_id(struct task_struct *t)
+ return 0;
+ }
+
++/*
++ * Get the user-space pointer value stored in the 'rseq_cs' field.
++ */
++static int rseq_get_rseq_cs_ptr_val(struct rseq __user *rseq, u64 *rseq_cs)
++{
++ if (!rseq_cs)
++ return -EFAULT;
++
++#ifdef CONFIG_64BIT
++ if (get_user(*rseq_cs, &rseq->rseq_cs))
++ return -EFAULT;
++#else
++ if (copy_from_user(rseq_cs, &rseq->rseq_cs, sizeof(*rseq_cs)))
++ return -EFAULT;
++#endif
++
++ return 0;
++}
++
++/*
++ * If the rseq_cs field of 'struct rseq' contains a valid pointer to
++ * user-space, copy 'struct rseq_cs' from user-space and validate its fields.
++ */
+ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
+ {
+ struct rseq_cs __user *urseq_cs;
+@@ -120,17 +143,16 @@ static int rseq_get_rseq_cs(struct task_struct *t, struct rseq_cs *rseq_cs)
+ u32 sig;
+ int ret;
+
+-#ifdef CONFIG_64BIT
+- if (get_user(ptr, &t->rseq->rseq_cs))
+- return -EFAULT;
+-#else
+- if (copy_from_user(&ptr, &t->rseq->rseq_cs, sizeof(ptr)))
+- return -EFAULT;
+-#endif
++ ret = rseq_get_rseq_cs_ptr_val(t->rseq, &ptr);
++ if (ret)
++ return ret;
++
++ /* If the rseq_cs pointer is NULL, return a cleared struct rseq_cs. */
+ if (!ptr) {
+ memset(rseq_cs, 0, sizeof(*rseq_cs));
+ return 0;
+ }
++ /* Check that the pointer value fits in the user-space process space. */
+ if (ptr >= TASK_SIZE)
+ return -EINVAL;
+ urseq_cs = (struct rseq_cs __user *)(unsigned long)ptr;
+@@ -199,7 +221,7 @@ static int rseq_need_restart(struct task_struct *t, u32 cs_flags)
+ return !!(event_mask & ~flags);
+ }
+
+-static int clear_rseq_cs(struct task_struct *t)
++static int clear_rseq_cs(struct rseq __user *rseq)
+ {
+ /*
+ * The rseq_cs field is set to NULL on preemption or signal
+@@ -210,9 +232,9 @@ static int clear_rseq_cs(struct task_struct *t)
+ * Set rseq_cs to NULL.
+ */
+ #ifdef CONFIG_64BIT
+- return put_user(0UL, &t->rseq->rseq_cs);
++ return put_user(0UL, &rseq->rseq_cs);
+ #else
+- if (clear_user(&t->rseq->rseq_cs, sizeof(t->rseq->rseq_cs)))
++ if (clear_user(&rseq->rseq_cs, sizeof(rseq->rseq_cs)))
+ return -EFAULT;
+ return 0;
+ #endif
+@@ -244,11 +266,11 @@ static int rseq_ip_fixup(struct pt_regs *regs)
+ * Clear the rseq_cs pointer and return.
+ */
+ if (!in_rseq_cs(ip, &rseq_cs))
+- return clear_rseq_cs(t);
++ return clear_rseq_cs(t->rseq);
+ ret = rseq_need_restart(t, rseq_cs.flags);
+ if (ret <= 0)
+ return ret;
+- ret = clear_rseq_cs(t);
++ ret = clear_rseq_cs(t->rseq);
+ if (ret)
+ return ret;
+ trace_rseq_ip_fixup(ip, rseq_cs.start_ip, rseq_cs.post_commit_offset,
+@@ -324,6 +346,7 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
+ int, flags, u32, sig)
+ {
+ int ret;
++ u64 rseq_cs;
+
+ if (flags & RSEQ_FLAG_UNREGISTER) {
+ if (flags & ~RSEQ_FLAG_UNREGISTER)
+@@ -369,6 +392,19 @@ SYSCALL_DEFINE4(rseq, struct rseq __user *, rseq, u32, rseq_len,
+ return -EINVAL;
+ if (!access_ok(rseq, rseq_len))
+ return -EFAULT;
++
++ /*
++ * If the rseq_cs pointer is non-NULL on registration, clear it to
++ * avoid a potential segfault on return to user-space. The proper thing
++ * to do would have been to fail the registration but this would break
++ * older libcs that reuse the rseq area for new threads without
++ * clearing the fields.
++ */
++ if (rseq_get_rseq_cs_ptr_val(rseq, &rseq_cs))
++ return -EFAULT;
++ if (rseq_cs && clear_rseq_cs(rseq))
++ return -EFAULT;
++
+ current->rseq = rseq;
+ current->rseq_sig = sig;
+ /*
+diff --git a/lib/test_objagg.c b/lib/test_objagg.c
+index da137939a41007..78d25ab19a9603 100644
+--- a/lib/test_objagg.c
++++ b/lib/test_objagg.c
+@@ -899,8 +899,10 @@ static int check_expect_hints_stats(struct objagg_hints *objagg_hints,
+ int err;
+
+ stats = objagg_hints_stats_get(objagg_hints);
+- if (IS_ERR(stats))
++ if (IS_ERR(stats)) {
++ *errmsg = "objagg_hints_stats_get() failed.";
+ return PTR_ERR(stats);
++ }
+ err = __check_expect_stats(stats, expect_stats, errmsg);
+ objagg_stats_put(stats);
+ return err;
+diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
+index 46adb8cefccf2b..c9edfca153c99e 100644
+--- a/net/appletalk/ddp.c
++++ b/net/appletalk/ddp.c
+@@ -563,6 +563,7 @@ static int atrtr_create(struct rtentry *r, struct net_device *devhint)
+
+ /* Fill in the routing entry */
+ rt->target = ta->sat_addr;
++ dev_put(rt->dev); /* Release old device */
+ dev_hold(devhint);
+ rt->dev = devhint;
+ rt->flags = r->rt_flags;
+diff --git a/net/atm/clip.c b/net/atm/clip.c
+index 294cb9efe3d382..53d62361ae4606 100644
+--- a/net/atm/clip.c
++++ b/net/atm/clip.c
+@@ -45,7 +45,8 @@
+ #include <net/atmclip.h>
+
+ static struct net_device *clip_devs;
+-static struct atm_vcc *atmarpd;
++static struct atm_vcc __rcu *atmarpd;
++static DEFINE_MUTEX(atmarpd_lock);
+ static struct timer_list idle_timer;
+ static const struct neigh_ops clip_neigh_ops;
+
+@@ -53,24 +54,35 @@ static int to_atmarpd(enum atmarp_ctrl_type type, int itf, __be32 ip)
+ {
+ struct sock *sk;
+ struct atmarp_ctrl *ctrl;
++ struct atm_vcc *vcc;
+ struct sk_buff *skb;
++ int err = 0;
+
+ pr_debug("(%d)\n", type);
+- if (!atmarpd)
+- return -EUNATCH;
++
++ rcu_read_lock();
++ vcc = rcu_dereference(atmarpd);
++ if (!vcc) {
++ err = -EUNATCH;
++ goto unlock;
++ }
+ skb = alloc_skb(sizeof(struct atmarp_ctrl), GFP_ATOMIC);
+- if (!skb)
+- return -ENOMEM;
++ if (!skb) {
++ err = -ENOMEM;
++ goto unlock;
++ }
+ ctrl = skb_put(skb, sizeof(struct atmarp_ctrl));
+ ctrl->type = type;
+ ctrl->itf_num = itf;
+ ctrl->ip = ip;
+- atm_force_charge(atmarpd, skb->truesize);
++ atm_force_charge(vcc, skb->truesize);
+
+- sk = sk_atm(atmarpd);
++ sk = sk_atm(vcc);
+ skb_queue_tail(&sk->sk_receive_queue, skb);
+ sk->sk_data_ready(sk);
+- return 0;
++unlock:
++ rcu_read_unlock();
++ return err;
+ }
+
+ static void link_vcc(struct clip_vcc *clip_vcc, struct atmarp_entry *entry)
+@@ -193,12 +205,6 @@ static void clip_push(struct atm_vcc *vcc, struct sk_buff *skb)
+
+ pr_debug("\n");
+
+- if (!clip_devs) {
+- atm_return(vcc, skb->truesize);
+- kfree_skb(skb);
+- return;
+- }
+-
+ if (!skb) {
+ pr_debug("removing VCC %p\n", clip_vcc);
+ if (clip_vcc->entry)
+@@ -208,6 +214,11 @@ static void clip_push(struct atm_vcc *vcc, struct sk_buff *skb)
+ return;
+ }
+ atm_return(vcc, skb->truesize);
++ if (!clip_devs) {
++ kfree_skb(skb);
++ return;
++ }
++
+ skb->dev = clip_vcc->entry ? clip_vcc->entry->neigh->dev : clip_devs;
+ /* clip_vcc->entry == NULL if we don't have an IP address yet */
+ if (!skb->dev) {
+@@ -418,6 +429,8 @@ static int clip_mkip(struct atm_vcc *vcc, int timeout)
+
+ if (!vcc->push)
+ return -EBADFD;
++ if (vcc->user_back)
++ return -EINVAL;
+ clip_vcc = kmalloc(sizeof(struct clip_vcc), GFP_KERNEL);
+ if (!clip_vcc)
+ return -ENOMEM;
+@@ -608,17 +621,27 @@ static void atmarpd_close(struct atm_vcc *vcc)
+ {
+ pr_debug("\n");
+
+- rtnl_lock();
+- atmarpd = NULL;
++ mutex_lock(&atmarpd_lock);
++ RCU_INIT_POINTER(atmarpd, NULL);
++ mutex_unlock(&atmarpd_lock);
++
++ synchronize_rcu();
+ skb_queue_purge(&sk_atm(vcc)->sk_receive_queue);
+- rtnl_unlock();
+
+ pr_debug("(done)\n");
+ module_put(THIS_MODULE);
+ }
+
++static int atmarpd_send(struct atm_vcc *vcc, struct sk_buff *skb)
++{
++ atm_return_tx(vcc, skb);
++ dev_kfree_skb_any(skb);
++ return 0;
++}
++
+ static const struct atmdev_ops atmarpd_dev_ops = {
+- .close = atmarpd_close
++ .close = atmarpd_close,
++ .send = atmarpd_send
+ };
+
+
+@@ -632,15 +655,18 @@ static struct atm_dev atmarpd_dev = {
+
+ static int atm_init_atmarp(struct atm_vcc *vcc)
+ {
+- rtnl_lock();
++ if (vcc->push == clip_push)
++ return -EINVAL;
++
++ mutex_lock(&atmarpd_lock);
+ if (atmarpd) {
+- rtnl_unlock();
++ mutex_unlock(&atmarpd_lock);
+ return -EADDRINUSE;
+ }
+
+ mod_timer(&idle_timer, jiffies + CLIP_CHECK_INTERVAL * HZ);
+
+- atmarpd = vcc;
++ rcu_assign_pointer(atmarpd, vcc);
+ set_bit(ATM_VF_META, &vcc->flags);
+ set_bit(ATM_VF_READY, &vcc->flags);
+ /* allow replies and avoid getting closed if signaling dies */
+@@ -649,13 +675,14 @@ static int atm_init_atmarp(struct atm_vcc *vcc)
+ vcc->push = NULL;
+ vcc->pop = NULL; /* crash */
+ vcc->push_oam = NULL; /* crash */
+- rtnl_unlock();
++ mutex_unlock(&atmarpd_lock);
+ return 0;
+ }
+
+ static int clip_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ {
+ struct atm_vcc *vcc = ATM_SD(sock);
++ struct sock *sk = sock->sk;
+ int err = 0;
+
+ switch (cmd) {
+@@ -676,14 +703,18 @@ static int clip_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ err = clip_create(arg);
+ break;
+ case ATMARPD_CTRL:
++ lock_sock(sk);
+ err = atm_init_atmarp(vcc);
+ if (!err) {
+ sock->state = SS_CONNECTED;
+ __module_get(THIS_MODULE);
+ }
++ release_sock(sk);
+ break;
+ case ATMARP_MKIP:
++ lock_sock(sk);
+ err = clip_mkip(vcc, arg);
++ release_sock(sk);
+ break;
+ case ATMARP_SETENTRY:
+ err = clip_setentry(vcc, (__force __be32)arg);
+diff --git a/net/atm/resources.c b/net/atm/resources.c
+index 3ad39ae971323f..fb8cf4cd6c1d75 100644
+--- a/net/atm/resources.c
++++ b/net/atm/resources.c
+@@ -148,11 +148,10 @@ void atm_dev_deregister(struct atm_dev *dev)
+ */
+ mutex_lock(&atm_dev_mutex);
+ list_del(&dev->dev_list);
+- mutex_unlock(&atm_dev_mutex);
+-
+ atm_dev_release_vccs(dev);
+ atm_unregister_sysfs(dev);
+ atm_proc_dev_deregister(dev);
++ mutex_unlock(&atm_dev_mutex);
+
+ atm_dev_put(dev);
+ }
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 08d91a3d3460dd..8c8631e609f6bf 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3571,7 +3571,7 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ struct l2cap_conf_rfc rfc = { .mode = L2CAP_MODE_BASIC };
+ struct l2cap_conf_efs efs;
+ u8 remote_efs = 0;
+- u16 mtu = L2CAP_DEFAULT_MTU;
++ u16 mtu = 0;
+ u16 result = L2CAP_CONF_SUCCESS;
+ u16 size;
+
+@@ -3682,6 +3682,13 @@ static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data
+ /* Configure output options and let the other side know
+ * which ones we don't like. */
+
++ /* If MTU is not provided in configure request, use the most recently
++ * explicitly or implicitly accepted value for the other direction,
++ * or the default value.
++ */
++ if (mtu == 0)
++ mtu = chan->imtu ? chan->imtu : L2CAP_DEFAULT_MTU;
++
+ if (mtu < L2CAP_DEFAULT_MIN_MTU)
+ result = L2CAP_CONF_UNACCEPT;
+ else {
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index f024d89cb0f812..426330b8dfa474 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1416,7 +1416,6 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
+ cork->fl.u.ip6 = *fl6;
+ v6_cork->hop_limit = ipc6->hlimit;
+ v6_cork->tclass = ipc6->tclass;
+- v6_cork->dontfrag = ipc6->dontfrag;
+ if (rt->dst.flags & DST_XFRM_TUNNEL)
+ mtu = np->pmtudisc >= IPV6_PMTUDISC_PROBE ?
+ READ_ONCE(rt->dst.dev->mtu) : dst_mtu(&rt->dst);
+@@ -1451,7 +1450,7 @@ static int __ip6_append_data(struct sock *sk,
+ int getfrag(void *from, char *to, int offset,
+ int len, int odd, struct sk_buff *skb),
+ void *from, size_t length, int transhdrlen,
+- unsigned int flags)
++ unsigned int flags, struct ipcm6_cookie *ipc6)
+ {
+ struct sk_buff *skb, *skb_prev = NULL;
+ unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu, pmtu;
+@@ -1508,7 +1507,7 @@ static int __ip6_append_data(struct sock *sk,
+ if (headersize + transhdrlen > mtu)
+ goto emsgsize;
+
+- if (cork->length + length > mtu - headersize && v6_cork->dontfrag &&
++ if (cork->length + length > mtu - headersize && ipc6->dontfrag &&
+ (sk->sk_protocol == IPPROTO_UDP ||
+ sk->sk_protocol == IPPROTO_RAW)) {
+ ipv6_local_rxpmtu(sk, fl6, mtu - headersize +
+@@ -1826,7 +1825,7 @@ int ip6_append_data(struct sock *sk,
+
+ return __ip6_append_data(sk, fl6, &sk->sk_write_queue, &inet->cork.base,
+ &np->cork, sk_page_frag(sk), getfrag,
+- from, length, transhdrlen, flags);
++ from, length, transhdrlen, flags, ipc6);
+ }
+ EXPORT_SYMBOL_GPL(ip6_append_data);
+
+@@ -2021,7 +2020,7 @@ struct sk_buff *ip6_make_skb(struct sock *sk,
+ err = __ip6_append_data(sk, fl6, &queue, &cork->base, &v6_cork,
+ &current->task_frag, getfrag, from,
+ length + exthdrlen, transhdrlen + exthdrlen,
+- flags);
++ flags, ipc6);
+ if (err) {
+ __ip6_flush_pending_frames(sk, &queue, cork, &v6_cork);
+ return ERR_PTR(err);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 65fea564c9c005..b46c4c770608c4 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4097,6 +4097,10 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
+ if (!multicast &&
+ !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1))
+ return false;
++ /* reject invalid/our STA address */
++ if (!is_valid_ether_addr(hdr->addr2) ||
++ ether_addr_equal(sdata->dev->dev_addr, hdr->addr2))
++ return false;
+ if (!rx->sta) {
+ int rate_idx;
+ if (status->encoding != RX_ENC_LEGACY)
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 0da845d9d48635..7cb32340108e39 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -4242,7 +4242,7 @@ void ieee80211_recalc_dtim(struct ieee80211_local *local,
+ {
+ u64 tsf = drv_get_tsf(local, sdata);
+ u64 dtim_count = 0;
+- u16 beacon_int = sdata->vif.bss_conf.beacon_int * 1024;
++ u32 beacon_int = sdata->vif.bss_conf.beacon_int * 1024;
+ u8 dtim_period = sdata->vif.bss_conf.dtim_period;
+ struct ps_data *ps;
+ u8 bcns_from_dtim;
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 4da043d9f2c7a2..77631cb74a192a 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -379,7 +379,6 @@ static void netlink_skb_set_owner_r(struct sk_buff *skb, struct sock *sk)
+ WARN_ON(skb->sk != NULL);
+ skb->sk = sk;
+ skb->destructor = netlink_skb_destructor;
+- atomic_add(skb->truesize, &sk->sk_rmem_alloc);
+ sk_mem_charge(sk, skb->truesize);
+ }
+
+@@ -1207,41 +1206,48 @@ static struct sk_buff *netlink_alloc_large_skb(unsigned int size,
+ int netlink_attachskb(struct sock *sk, struct sk_buff *skb,
+ long *timeo, struct sock *ssk)
+ {
++ DECLARE_WAITQUEUE(wait, current);
+ struct netlink_sock *nlk;
++ unsigned int rmem;
+
+ nlk = nlk_sk(sk);
++ rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
+
+- if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
+- test_bit(NETLINK_S_CONGESTED, &nlk->state))) {
+- DECLARE_WAITQUEUE(wait, current);
+- if (!*timeo) {
+- if (!ssk || netlink_is_kernel(ssk))
+- netlink_overrun(sk);
+- sock_put(sk);
+- kfree_skb(skb);
+- return -EAGAIN;
+- }
+-
+- __set_current_state(TASK_INTERRUPTIBLE);
+- add_wait_queue(&nlk->wait, &wait);
++ if ((rmem == skb->truesize || rmem < READ_ONCE(sk->sk_rcvbuf)) &&
++ !test_bit(NETLINK_S_CONGESTED, &nlk->state)) {
++ netlink_skb_set_owner_r(skb, sk);
++ return 0;
++ }
+
+- if ((atomic_read(&sk->sk_rmem_alloc) > sk->sk_rcvbuf ||
+- test_bit(NETLINK_S_CONGESTED, &nlk->state)) &&
+- !sock_flag(sk, SOCK_DEAD))
+- *timeo = schedule_timeout(*timeo);
++ atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
+
+- __set_current_state(TASK_RUNNING);
+- remove_wait_queue(&nlk->wait, &wait);
++ if (!*timeo) {
++ if (!ssk || netlink_is_kernel(ssk))
++ netlink_overrun(sk);
+ sock_put(sk);
++ kfree_skb(skb);
++ return -EAGAIN;
++ }
+
+- if (signal_pending(current)) {
+- kfree_skb(skb);
+- return sock_intr_errno(*timeo);
+- }
+- return 1;
++ __set_current_state(TASK_INTERRUPTIBLE);
++ add_wait_queue(&nlk->wait, &wait);
++ rmem = atomic_read(&sk->sk_rmem_alloc);
++
++ if (((rmem && rmem + skb->truesize > READ_ONCE(sk->sk_rcvbuf)) ||
++ test_bit(NETLINK_S_CONGESTED, &nlk->state)) &&
++ !sock_flag(sk, SOCK_DEAD))
++ *timeo = schedule_timeout(*timeo);
++
++ __set_current_state(TASK_RUNNING);
++ remove_wait_queue(&nlk->wait, &wait);
++ sock_put(sk);
++
++ if (signal_pending(current)) {
++ kfree_skb(skb);
++ return sock_intr_errno(*timeo);
+ }
+- netlink_skb_set_owner_r(skb, sk);
+- return 0;
++
++ return 1;
+ }
+
+ static int __netlink_sendskb(struct sock *sk, struct sk_buff *skb)
+@@ -1301,6 +1307,7 @@ static int netlink_unicast_kernel(struct sock *sk, struct sk_buff *skb,
+ ret = -ECONNREFUSED;
+ if (nlk->netlink_rcv != NULL) {
+ ret = skb->len;
++ atomic_add(skb->truesize, &sk->sk_rmem_alloc);
+ netlink_skb_set_owner_r(skb, sk);
+ NETLINK_CB(skb).sk = ssk;
+ netlink_deliver_tap_kernel(sk, ssk, skb);
+@@ -1379,13 +1386,19 @@ EXPORT_SYMBOL_GPL(netlink_strict_get_check);
+ static int netlink_broadcast_deliver(struct sock *sk, struct sk_buff *skb)
+ {
+ struct netlink_sock *nlk = nlk_sk(sk);
++ unsigned int rmem, rcvbuf;
+
+- if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf &&
++ rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
++ rcvbuf = READ_ONCE(sk->sk_rcvbuf);
++
++ if ((rmem == skb->truesize || rmem <= rcvbuf) &&
+ !test_bit(NETLINK_S_CONGESTED, &nlk->state)) {
+ netlink_skb_set_owner_r(skb, sk);
+ __netlink_sendskb(sk, skb);
+- return atomic_read(&sk->sk_rmem_alloc) > (sk->sk_rcvbuf >> 1);
++ return rmem > (rcvbuf >> 1);
+ }
++
++ atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
+ return -1;
+ }
+
+@@ -2198,6 +2211,7 @@ static int netlink_dump(struct sock *sk, bool lock_taken)
+ struct netlink_ext_ack extack = {};
+ struct netlink_callback *cb;
+ struct sk_buff *skb = NULL;
++ unsigned int rmem, rcvbuf;
+ size_t max_recvmsg_len;
+ struct module *module;
+ int err = -ENOBUFS;
+@@ -2211,9 +2225,6 @@ static int netlink_dump(struct sock *sk, bool lock_taken)
+ goto errout_skb;
+ }
+
+- if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
+- goto errout_skb;
+-
+ /* NLMSG_GOODSIZE is small to avoid high order allocations being
+ * required, but it makes sense to _attempt_ a 16K bytes allocation
+ * to reduce number of system calls on dump operations, if user
+@@ -2236,6 +2247,13 @@ static int netlink_dump(struct sock *sk, bool lock_taken)
+ if (!skb)
+ goto errout_skb;
+
++ rcvbuf = READ_ONCE(sk->sk_rcvbuf);
++ rmem = atomic_add_return(skb->truesize, &sk->sk_rmem_alloc);
++ if (rmem != skb->truesize && rmem >= rcvbuf) {
++ atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
++ goto errout_skb;
++ }
++
+ /* Trim skb to allocated size. User is expected to provide buffer as
+ * large as max(min_dump_alloc, 16KiB (mac_recvmsg_len capped at
+ * netlink_recvmsg())). dump will pack as many smaller messages as
+diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c
+index 981bdefd478b0e..d0112f1863850a 100644
+--- a/net/rose/rose_route.c
++++ b/net/rose/rose_route.c
+@@ -347,6 +347,7 @@ static int rose_del_node(struct rose_route_struct *rose_route,
+ case 1:
+ rose_node->neighbour[1] =
+ rose_node->neighbour[2];
++ break;
+ case 2:
+ break;
+ }
+@@ -496,21 +497,15 @@ void rose_rt_device_down(struct net_device *dev)
+ t = rose_node;
+ rose_node = rose_node->next;
+
+- for (i = 0; i < t->count; i++) {
++ for (i = t->count - 1; i >= 0; i--) {
+ if (t->neighbour[i] != s)
+ continue;
+
+ t->count--;
+
+- switch (i) {
+- case 0:
+- t->neighbour[0] = t->neighbour[1];
+- fallthrough;
+- case 1:
+- t->neighbour[1] = t->neighbour[2];
+- case 2:
+- break;
+- }
++ memmove(&t->neighbour[i], &t->neighbour[i + 1],
++ sizeof(t->neighbour[0]) *
++ (t->count - i));
+ }
+
+ if (t->count <= 0)
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 2a14d69b171f3b..b96af42a1b0415 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -271,6 +271,9 @@ static struct rxrpc_call *rxrpc_alloc_incoming_call(struct rxrpc_sock *rx,
+ unsigned short call_tail, conn_tail, peer_tail;
+ unsigned short call_count, conn_count;
+
++ if (!b)
++ return NULL;
++
+ /* #calls >= #conns >= #peers must hold true. */
+ call_head = smp_load_acquire(&b->call_backlog_head);
+ call_tail = b->call_backlog_tail;
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index b8fb94bfa96066..a325036f3ae025 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -331,17 +331,22 @@ struct Qdisc *qdisc_lookup_rcu(struct net_device *dev, u32 handle)
+ return q;
+ }
+
+-static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid)
++static struct Qdisc *qdisc_leaf(struct Qdisc *p, u32 classid,
++ struct netlink_ext_ack *extack)
+ {
+ unsigned long cl;
+ const struct Qdisc_class_ops *cops = p->ops->cl_ops;
+
+- if (cops == NULL)
+- return NULL;
++ if (cops == NULL) {
++ NL_SET_ERR_MSG(extack, "Parent qdisc is not classful");
++ return ERR_PTR(-EOPNOTSUPP);
++ }
+ cl = cops->find(p, classid);
+
+- if (cl == 0)
+- return NULL;
++ if (cl == 0) {
++ NL_SET_ERR_MSG(extack, "Specified class not found");
++ return ERR_PTR(-ENOENT);
++ }
+ return cops->leaf(p, cl);
+ }
+
+@@ -768,15 +773,12 @@ static u32 qdisc_alloc_handle(struct net_device *dev)
+
+ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ {
+- bool qdisc_is_offloaded = sch->flags & TCQ_F_OFFLOADED;
+ const struct Qdisc_class_ops *cops;
+ unsigned long cl;
+ u32 parentid;
+ bool notify;
+ int drops;
+
+- if (n == 0 && len == 0)
+- return;
+ drops = max_t(int, n, 0);
+ rcu_read_lock();
+ while ((parentid = sch->parent)) {
+@@ -785,17 +787,8 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+
+ if (sch->flags & TCQ_F_NOPARENT)
+ break;
+- /* Notify parent qdisc only if child qdisc becomes empty.
+- *
+- * If child was empty even before update then backlog
+- * counter is screwed and we skip notification because
+- * parent class is already passive.
+- *
+- * If the original child was offloaded then it is allowed
+- * to be seem as empty, so the parent is notified anyway.
+- */
+- notify = !sch->q.qlen && !WARN_ON_ONCE(!n &&
+- !qdisc_is_offloaded);
++ /* Notify parent qdisc only if child qdisc becomes empty. */
++ notify = !sch->q.qlen;
+ /* TODO: perform the search on a per txq basis */
+ sch = qdisc_lookup(qdisc_dev(sch), TC_H_MAJ(parentid));
+ if (sch == NULL) {
+@@ -804,6 +797,9 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len)
+ }
+ cops = sch->ops->cl_ops;
+ if (notify && cops->qlen_notify) {
++ /* Note that qlen_notify must be idempotent as it may get called
++ * multiple times.
++ */
+ cl = cops->find(sch, parentid);
+ cops->qlen_notify(sch, cl);
+ }
+@@ -1471,7 +1467,7 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ NL_SET_ERR_MSG(extack, "Failed to find qdisc with specified classid");
+ return -ENOENT;
+ }
+- q = qdisc_leaf(p, clid);
++ q = qdisc_leaf(p, clid, extack);
+ } else if (dev_ingress_queue(dev)) {
+ q = dev_ingress_queue(dev)->qdisc_sleeping;
+ }
+@@ -1482,6 +1478,8 @@ static int tc_get_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ NL_SET_ERR_MSG(extack, "Cannot find specified qdisc on specified device");
+ return -ENOENT;
+ }
++ if (IS_ERR(q))
++ return PTR_ERR(q);
+
+ if (tcm->tcm_handle && q->handle != tcm->tcm_handle) {
+ NL_SET_ERR_MSG(extack, "Invalid handle");
+@@ -1578,7 +1576,9 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ NL_SET_ERR_MSG(extack, "Failed to find specified qdisc");
+ return -ENOENT;
+ }
+- q = qdisc_leaf(p, clid);
++ q = qdisc_leaf(p, clid, extack);
++ if (IS_ERR(q))
++ return PTR_ERR(q);
+ } else if (dev_ingress_queue_create(dev)) {
+ q = dev_ingress_queue(dev)->qdisc_sleeping;
+ }
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index e87560e244861f..4a10f794be588d 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -653,6 +653,14 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
+ return -EINVAL;
+ }
++
++ if (ctl->perturb_period < 0 ||
++ ctl->perturb_period > INT_MAX / HZ) {
++ NL_SET_ERR_MSG_MOD(extack, "invalid perturb period");
++ return -EINVAL;
++ }
++ perturb_period = ctl->perturb_period * HZ;
++
+ if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
+ ctl_v1->Wlog, ctl_v1->Scell_log, NULL))
+ return -EINVAL;
+@@ -669,14 +677,12 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ headdrop = q->headdrop;
+ maxdepth = q->maxdepth;
+ maxflows = q->maxflows;
+- perturb_period = q->perturb_period;
+ quantum = q->quantum;
+ flags = q->flags;
+
+ /* update and validate configuration */
+ if (ctl->quantum)
+ quantum = ctl->quantum;
+- perturb_period = ctl->perturb_period * HZ;
+ if (ctl->flows)
+ maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
+ if (ctl->divisor) {
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index 89d8a2bd30cd0e..d914c5eb251788 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -699,8 +699,10 @@ static void tipc_topsrv_stop(struct net *net)
+ for (id = 0; srv->idr_in_use; id++) {
+ con = idr_find(&srv->conn_idr, id);
+ if (con) {
++ conn_get(con);
+ spin_unlock_bh(&srv->idr_lock);
+ tipc_conn_close(con);
++ conn_put(con);
+ spin_lock_bh(&srv->idr_lock);
+ }
+ }
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index fc0306ba2d43ef..56bbc2970ffef0 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -400,6 +400,8 @@ EXPORT_SYMBOL_GPL(vsock_enqueue_accept);
+
+ static bool vsock_use_local_transport(unsigned int remote_cid)
+ {
++ lockdep_assert_held(&vsock_register_mutex);
++
+ if (!transport_local)
+ return false;
+
+@@ -431,7 +433,8 @@ static void vsock_deassign_transport(struct vsock_sock *vsk)
+ * The vsk->remote_addr is used to decide which transport to use:
+ * - remote CID == VMADDR_CID_LOCAL or g2h->local_cid or VMADDR_CID_HOST if
+ * g2h is not loaded, will use local transport;
+- * - remote CID <= VMADDR_CID_HOST will use guest->host transport;
++ * - remote CID <= VMADDR_CID_HOST or h2g is not loaded or remote flags field
++ * includes VMADDR_FLAG_TO_HOST flag value, will use guest->host transport;
+ * - remote CID > VMADDR_CID_HOST will use host->guest transport;
+ */
+ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+@@ -439,8 +442,25 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ const struct vsock_transport *new_transport;
+ struct sock *sk = sk_vsock(vsk);
+ unsigned int remote_cid = vsk->remote_addr.svm_cid;
++ __u8 remote_flags;
+ int ret;
+
++ /* If the packet is coming with the source and destination CIDs higher
++ * than VMADDR_CID_HOST, then a vsock channel where all the packets are
++ * forwarded to the host should be established. Then the host will
++ * need to forward the packets to the guest.
++ *
++ * The flag is set on the (listen) receive path (psk is not NULL). On
++ * the connect path the flag can be set by the user space application.
++ */
++ if (psk && vsk->local_addr.svm_cid > VMADDR_CID_HOST &&
++ vsk->remote_addr.svm_cid > VMADDR_CID_HOST)
++ vsk->remote_addr.svm_flags |= VMADDR_FLAG_TO_HOST;
++
++ remote_flags = vsk->remote_addr.svm_flags;
++
++ mutex_lock(&vsock_register_mutex);
++
+ switch (sk->sk_type) {
+ case SOCK_DGRAM:
+ new_transport = transport_dgram;
+@@ -448,18 +468,22 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ case SOCK_STREAM:
+ if (vsock_use_local_transport(remote_cid))
+ new_transport = transport_local;
+- else if (remote_cid <= VMADDR_CID_HOST || !transport_h2g)
++ else if (remote_cid <= VMADDR_CID_HOST || !transport_h2g ||
++ (remote_flags & VMADDR_FLAG_TO_HOST))
+ new_transport = transport_g2h;
+ else
+ new_transport = transport_h2g;
+ break;
+ default:
+- return -ESOCKTNOSUPPORT;
++ ret = -ESOCKTNOSUPPORT;
++ goto err;
+ }
+
+ if (vsk->transport) {
+- if (vsk->transport == new_transport)
+- return 0;
++ if (vsk->transport == new_transport) {
++ ret = 0;
++ goto err;
++ }
+
+ /* transport->release() must be called with sock lock acquired.
+ * This path can only be taken during vsock_stream_connect(),
+@@ -483,8 +507,16 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ /* We increase the module refcnt to prevent the transport unloading
+ * while there are open sockets assigned to it.
+ */
+- if (!new_transport || !try_module_get(new_transport->module))
+- return -ENODEV;
++ if (!new_transport || !try_module_get(new_transport->module)) {
++ ret = -ENODEV;
++ goto err;
++ }
++
++ /* It's safe to release the mutex after a successful try_module_get().
++ * Whichever transport `new_transport` points at, it won't go away until
++ * the last module_put() below or in vsock_deassign_transport().
++ */
++ mutex_unlock(&vsock_register_mutex);
+
+ ret = new_transport->init(vsk, psk);
+ if (ret) {
+@@ -495,12 +527,31 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ vsk->transport = new_transport;
+
+ return 0;
++err:
++ mutex_unlock(&vsock_register_mutex);
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(vsock_assign_transport);
+
++/*
++ * Provide safe access to static transport_{h2g,g2h,dgram,local} callbacks.
++ * Otherwise we may race with module removal. Do not use on `vsk->transport`.
++ */
++static u32 vsock_registered_transport_cid(const struct vsock_transport **transport)
++{
++ u32 cid = VMADDR_CID_ANY;
++
++ mutex_lock(&vsock_register_mutex);
++ if (*transport)
++ cid = (*transport)->get_local_cid();
++ mutex_unlock(&vsock_register_mutex);
++
++ return cid;
++}
++
+ bool vsock_find_cid(unsigned int cid)
+ {
+- if (transport_g2h && cid == transport_g2h->get_local_cid())
++ if (cid == vsock_registered_transport_cid(&transport_g2h))
+ return true;
+
+ if (transport_h2g && cid == VMADDR_CID_HOST)
+@@ -2124,18 +2175,19 @@ static long vsock_dev_do_ioctl(struct file *filp,
+ unsigned int cmd, void __user *ptr)
+ {
+ u32 __user *p = ptr;
+- u32 cid = VMADDR_CID_ANY;
+ int retval = 0;
++ u32 cid;
+
+ switch (cmd) {
+ case IOCTL_VM_SOCKETS_GET_LOCAL_CID:
+ /* To be compatible with the VMCI behavior, we prioritize the
+ * guest CID instead of well-know host CID (VMADDR_CID_HOST).
+ */
+- if (transport_g2h)
+- cid = transport_g2h->get_local_cid();
+- else if (transport_h2g)
+- cid = transport_h2g->get_local_cid();
++ cid = vsock_registered_transport_cid(&transport_g2h);
++ if (cid == VMADDR_CID_ANY)
++ cid = vsock_registered_transport_cid(&transport_h2g);
++ if (cid == VMADDR_CID_ANY)
++ cid = vsock_registered_transport_cid(&transport_local);
+
+ if (put_user(cid, p) != 0)
+ retval = -EFAULT;
+diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
+index 8c2856cbfeccff..912bafcf825b26 100644
+--- a/net/vmw_vsock/vmci_transport.c
++++ b/net/vmw_vsock/vmci_transport.c
+@@ -119,6 +119,8 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
+ u16 proto,
+ struct vmci_handle handle)
+ {
++ memset(pkt, 0, sizeof(*pkt));
++
+ /* We register the stream control handler as an any cid handle so we
+ * must always send from a source address of VMADDR_CID_ANY
+ */
+@@ -131,8 +133,6 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,
+ pkt->type = type;
+ pkt->src_port = src->svm_port;
+ pkt->dst_port = dst->svm_port;
+- memset(&pkt->proto, 0, sizeof(pkt->proto));
+- memset(&pkt->_reserved2, 0, sizeof(pkt->_reserved2));
+
+ switch (pkt->type) {
+ case VMCI_TRANSPORT_PACKET_TYPE_INVALID:
+diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c
+index aa48705310231c..19804d3fd98c44 100644
+--- a/sound/isa/sb/sb16_main.c
++++ b/sound/isa/sb/sb16_main.c
+@@ -710,6 +710,10 @@ static int snd_sb16_dma_control_put(struct snd_kcontrol *kcontrol, struct snd_ct
+ change = nval != oval;
+ snd_sb16_set_dma_mode(chip, nval);
+ spin_unlock_irqrestore(&chip->reg_lock, flags);
++ if (change) {
++ snd_dma_disable(chip->dma8);
++ snd_dma_disable(chip->dma16);
++ }
+ return change;
+ }
+
+diff --git a/sound/pci/hda/hda_bind.c b/sound/pci/hda/hda_bind.c
+index 0a83afa5f373c9..6625643f333e8d 100644
+--- a/sound/pci/hda/hda_bind.c
++++ b/sound/pci/hda/hda_bind.c
+@@ -44,7 +44,7 @@ static void hda_codec_unsol_event(struct hdac_device *dev, unsigned int ev)
+ struct hda_codec *codec = container_of(dev, struct hda_codec, core);
+
+ /* ignore unsol events during shutdown */
+- if (codec->bus->shutdown)
++ if (codec->card->shutdown || codec->bus->shutdown)
+ return;
+
+ /* ignore unsol events during system suspend/resume */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index dd7c7cb0de140a..cb3dccdf3911c1 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2793,6 +2793,9 @@ static const struct pci_device_id azx_ids[] = {
+ { PCI_DEVICE(0x1002, 0xab38),
+ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ AZX_DCAPS_PM_RUNTIME },
++ { PCI_VDEVICE(ATI, 0xab40),
++ .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
++ AZX_DCAPS_PM_RUNTIME },
+ /* GLENFLY */
+ { PCI_DEVICE(PCI_VENDOR_ID_GLENFLY, PCI_ANY_ID),
+ .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8,
+diff --git a/sound/soc/fsl/fsl_asrc.c b/sound/soc/fsl/fsl_asrc.c
+index 5e3c71f025f453..cf6d3c549707b3 100644
+--- a/sound/soc/fsl/fsl_asrc.c
++++ b/sound/soc/fsl/fsl_asrc.c
+@@ -513,7 +513,8 @@ static int fsl_asrc_config_pair(struct fsl_asrc_pair *pair, bool use_ideal_rate)
+ regmap_update_bits(asrc->regmap, REG_ASRCTR,
+ ASRCTR_ATSi_MASK(index), ASRCTR_ATS(index));
+ regmap_update_bits(asrc->regmap, REG_ASRCTR,
+- ASRCTR_USRi_MASK(index), 0);
++ ASRCTR_IDRi_MASK(index) | ASRCTR_USRi_MASK(index),
++ ASRCTR_USR(index));
+
+ /* Set the input and output clock sources */
+ regmap_update_bits(asrc->regmap, REG_ASRCSR,
+diff --git a/sound/usb/stream.c b/sound/usb/stream.c
+index 0c77f244e5d668..d6d3ce9e963739 100644
+--- a/sound/usb/stream.c
++++ b/sound/usb/stream.c
+@@ -983,6 +983,8 @@ snd_usb_get_audioformat_uac3(struct snd_usb_audio *chip,
+ * and request Cluster Descriptor
+ */
+ wLength = le16_to_cpu(hc_header.wLength);
++ if (wLength < sizeof(cluster))
++ return NULL;
+ cluster = kzalloc(wLength, GFP_KERNEL);
+ if (!cluster)
+ return ERR_PTR(-ENOMEM);
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 2342aec3c5a3e6..d6818e22503c06 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -193,6 +193,9 @@ static void btf_dump_free_names(struct hashmap *map)
+ size_t bkt;
+ struct hashmap_entry *cur;
+
++ if (!map)
++ return;
++
+ hashmap__for_each_entry(map, cur, bkt)
+ free((void *)cur->key);
+