public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] repo/gentoo:master commit in: sci-ml/caffe2/, sci-ml/caffe2/files/
@ 2025-04-26 18:25 Alfredo Tupone
  0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2025-04-26 18:25 UTC (permalink / raw)
  To: gentoo-commits

commit:     cb4e7ac0e1fb5d9ca6ce1527b6d083e0827cfc6e
Author:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 26 18:09:37 2025 +0000
Commit:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Sat Apr 26 18:24:55 2025 +0000
URL:        https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=cb4e7ac0

sci-ml/caffe2: add 2.7.0

Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>

 sci-ml/caffe2/Manifest                        |   1 +
 sci-ml/caffe2/caffe2-2.7.0.ebuild             | 342 ++++++++++++++++++++++++++
 sci-ml/caffe2/files/caffe2-2.7.0-gentoo.patch | 157 ++++++++++++
 3 files changed, 500 insertions(+)

diff --git a/sci-ml/caffe2/Manifest b/sci-ml/caffe2/Manifest
index 9edc8e9aadc7..32f540381bbc 100644
--- a/sci-ml/caffe2/Manifest
+++ b/sci-ml/caffe2/Manifest
@@ -3,3 +3,4 @@ DIST composable_kernel-50ee4267.tar.gz 4194795 BLAKE2B b3c97d98a0c9e4620fdae3d30
 DIST pytorch-2.4.1.tar.gz 115029469 BLAKE2B c2909ff27d527bc57cba56b780d3b8cd07a043ab045caa6c6b27857a16f9ad10aaab2116b26226b1e46ee08ffb44007965d914464418e4ae14ca48c3f3f383bb SHA512 7e9b4485e242eaf0d648765c6621d73d95e7107b766646a098175436d1ab2e2b864badd0757a3bab6b7c318233f2120bad9ac07b39bb9e357897919580c87631
 DIST pytorch-2.5.1.tar.gz 116091366 BLAKE2B 7838b17562b94ffc7d798031348689db607dd5eae2a3c35be365972e2b52a2c1b12067068d5aca5ab00cf0977d9c2c3c9ae5337d69534c864c732e6256cbeef6 SHA512 a913a466324a65fa3d79c5e9ad4d605fc7976f0134fda2f81aaa3cea29d56926604999b8a238759646d211e63b47bbb446cdffa86ca8defd8159f11e30301289
 DIST pytorch-2.6.0.tar.gz 119594438 BLAKE2B 3152eb341cf42295e147e59625beb9c06608aa4b78f9618c1c0024b10c1c767715d07fe8c4be52d029ac47f808cd0d5e65c9530ec90d951a64b993083b4067ad SHA512 a70da80ff09d226085e18228132cf6bb236ad8cc47eed52375d0d2a615f09dd33849da947270b5670c184eab60cb8e2adf11d801babfbda7aa621400501d07b0
+DIST pytorch-2.7.0.tar.gz 50197290 BLAKE2B 2a317d1e9b0d8876f1593382246cd9f786eff3c1b8602353c5e0010dc8414720c5de61886361843a0c33268830c784963a89b410b361e1b67636e652f6a6a2eb SHA512 63eb0363ea68d23567f5524ee8b51756d9302bbe1cbefa367335ab5ebe652523dba75fa417ea3e7eedfc67aa4bef1434c8b7e3dfde2152061b91b6e489763a55
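
Each DIST entry records the distfile's byte size plus BLAKE2B and SHA-512 digests, and Portage rejects a fetched tarball unless all three match. A minimal by-hand check with coreutils (sketch; the cache path shown is the usual default, not guaranteed):

    cd "${DISTDIR:-/var/cache/distfiles}"
    stat -c %s pytorch-2.7.0.tar.gz   # must print 50197290
    b2sum pytorch-2.7.0.tar.gz        # coreutils BLAKE2b-512, matches the BLAKE2B field
    sha512sum pytorch-2.7.0.tar.gz    # matches the SHA512 field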

diff --git a/sci-ml/caffe2/caffe2-2.7.0.ebuild b/sci-ml/caffe2/caffe2-2.7.0.ebuild
new file mode 100644
index 000000000000..608304b638ca
--- /dev/null
+++ b/sci-ml/caffe2/caffe2-2.7.0.ebuild
@@ -0,0 +1,342 @@
+# Copyright 2022-2025 Gentoo Authors
+# Distributed under the terms of the GNU General Public License v2
+
+EAPI=8
+
+PYTHON_COMPAT=( python3_{10..13} )
+ROCM_VERSION=6.1
+inherit python-single-r1 cmake cuda flag-o-matic prefix rocm toolchain-funcs
+
+MYPN=pytorch
+MYP=${MYPN}-${PV}
+
+# caffe2-2.6.0 depends on a future version of composable kernel
+# TODO: replace it with RDEPEND in the future
+CK_COMMIT=50ee4267e27b875d149e642f4cebd47be1dc3b57
+CK_P=composable_kernel-${CK_COMMIT:0:8}
+
+DESCRIPTION="A deep learning framework"
+HOMEPAGE="https://pytorch.org/"
+SRC_URI="
+	https://github.com/pytorch/${MYPN}/archive/refs/tags/v${PV}.tar.gz -> ${MYP}.tar.gz
+	rocm? ( https://github.com/ROCm/composable_kernel/archive/${CK_COMMIT}.tar.gz -> ${CK_P}.tar.gz )
+"
+
+S="${WORKDIR}"/${MYP}
+
+LICENSE="BSD"
+SLOT="0"
+KEYWORDS="~amd64"
+IUSE="cuda distributed fbgemm flash gloo memefficient mkl mpi nnpack +numpy
+	onednn openblas opencl openmp qnnpack rocm xnnpack"
+RESTRICT="test"
+REQUIRED_USE="
+	${PYTHON_REQUIRED_USE}
+	mpi? ( distributed )
+	gloo? ( distributed )
+	?? ( cuda rocm )
+	rocm? (
+		|| ( ${ROCM_REQUIRED_USE} )
+		!flash
+	)
+"
+
+RDEPEND="
+	${PYTHON_DEPS}
+	dev-cpp/abseil-cpp:=
+	dev-cpp/gflags:=
+	>=dev-cpp/glog-0.5.0
+	dev-cpp/nlohmann_json
+	dev-cpp/opentelemetry-cpp
+	dev-libs/cpuinfo
+	dev-libs/libfmt:=
+	dev-libs/protobuf:=
+	dev-libs/pthreadpool
+	dev-libs/sleef
+	virtual/lapack
+	sci-ml/foxi
+	sci-ml/onnx
+	cuda? (
+		dev-libs/cudnn
+		>=sci-ml/cudnn-frontend-1.0.3:0/8
+		dev-util/nvidia-cuda-toolkit:=[profiler]
+	)
+	fbgemm? ( sci-ml/FBGEMM )
+	gloo? ( sci-ml/gloo[cuda?] )
+	mpi? ( virtual/mpi )
+	nnpack? ( sci-ml/NNPACK )
+	numpy? ( $(python_gen_cond_dep '
+		dev-python/numpy[${PYTHON_USEDEP}]
+		') )
+	onednn? ( =sci-ml/oneDNN-3.5* )
+	opencl? ( virtual/opencl )
+	qnnpack? (
+		!sci-libs/QNNPACK
+		sci-ml/gemmlowp
+	)
+	rocm? (
+		>=dev-libs/rccl-6.1      <dev-libs/rccl-6.4
+		>=dev-util/hip-6.1       <dev-util/hip-6.4
+		>=dev-util/roctracer-6.1 <dev-util/roctracer-6.4
+		>=sci-libs/hipBLAS-6.1   <sci-libs/hipBLAS-6.4
+		>=sci-libs/hipBLASLt-6.1 <sci-libs/hipBLASLt-6.4
+		>=sci-libs/hipCUB-6.1    <sci-libs/hipCUB-6.4
+		>=sci-libs/hipFFT-6.1    <sci-libs/hipFFT-6.4
+		>=sci-libs/hipRAND-6.1   <sci-libs/hipRAND-6.4
+		>=sci-libs/hipSOLVER-6.1 <sci-libs/hipSOLVER-6.4
+		>=sci-libs/hipSPARSE-6.1 <sci-libs/hipSPARSE-6.4
+		>=sci-libs/miopen-6.1    <sci-libs/miopen-6.4
+		>=sci-libs/rocPRIM-6.1   <sci-libs/rocPRIM-6.4
+		>=sci-libs/rocThrust-6.1 <sci-libs/rocThrust-6.4
+	)
+	distributed? (
+		sci-ml/tensorpipe[cuda?]
+		dev-cpp/cpp-httplib
+	)
+	xnnpack? ( >=sci-ml/XNNPACK-2024.11 )
+	mkl? ( sci-libs/mkl )
+	openblas? ( sci-libs/openblas )
+"
+
+DEPEND="
+	${RDEPEND}
+	dev-libs/flatbuffers
+	dev-libs/FXdiv
+	dev-libs/pocketfft
+	dev-libs/psimd
+	sci-ml/FP16
+	sci-ml/kineto
+	$(python_gen_cond_dep '
+		dev-python/pybind11[${PYTHON_USEDEP}]
+		dev-python/pyyaml[${PYTHON_USEDEP}]
+		dev-python/typing-extensions[${PYTHON_USEDEP}]
+	')
+	cuda? ( >=dev-libs/cutlass-3.8.0 )
+	onednn? ( sci-ml/ideep )
+	qnnpack? ( dev-libs/clog )
+"
+
+PATCHES=(
+	"${FILESDIR}"/${PN}-2.5.1-unbundle_fmt.patch
+	"${FILESDIR}"/${PN}-2.5.1-unbundle_kineto.patch
+	"${FILESDIR}"/${PN}-2.5.1-cudnn_include_fix.patch
+	"${FILESDIR}"/${P}-gentoo.patch
+	"${FILESDIR}"/${PN}-2.4.0-cpp-httplib.patch
+	"${FILESDIR}"/${PN}-2.5.1-glog-0.6.0.patch
+	"${FILESDIR}"/${PN}-2.5.1-newfix-functorch-install.patch
+	"${FILESDIR}"/${PN}-2.6.0-rocm-fix-std-cpp17.patch
+)
+
+src_prepare() {
+	filter-lto #bug 862672
+
+	# Unbundle fmt
+	sed -i \
+		-e 's|::fmt-header-only||' \
+		c10/CMakeLists.txt \
+		cmake/Dependencies.cmake \
+		torch/CMakeLists.txt \
+		|| die
+
+	# Drop third_party from CMake tree
+	sed -i \
+		-e '/add_subdirectory.*third_party/d' \
+		CMakeLists.txt \
+		cmake/Dependencies.cmake \
+		cmake/ProtoBuf.cmake \
+		aten/src/ATen/CMakeLists.txt \
+		|| die
+	# Change libc10* path
+	sed -i \
+		-e "/EXPORT/s|DESTINATION lib)|DESTINATION $(get_libdir))|" \
+		c10/cuda/CMakeLists.txt \
+		c10/CMakeLists.txt \
+		c10/hip/CMakeLists.txt \
+		|| die
+	sed -i \
+		-e '/Using pocketfft in directory:/d' \
+		cmake/Dependencies.cmake \
+		|| die
+
+	cmake_src_prepare
+	pushd torch/csrc/jit/serialization || die
+	flatc --cpp --gen-mutable --scoped-enums mobile_bytecode.fbs || die
+	popd
+
+	# prefixify the hardcoded paths, after all patches are applied
+	hprefixify \
+		aten/CMakeLists.txt \
+		caffe2/CMakeLists.txt \
+		cmake/Metal.cmake \
+		cmake/Modules/*.cmake \
+		cmake/Modules_CUDA_fix/FindCUDNN.cmake \
+		cmake/Modules_CUDA_fix/upstream/FindCUDA/make2cmake.cmake \
+		cmake/Modules_CUDA_fix/upstream/FindPackageHandleStandardArgs.cmake \
+		cmake/public/LoadHIP.cmake \
+		cmake/public/cuda.cmake \
+		cmake/Dependencies.cmake \
+		torch/CMakeLists.txt \
+		CMakeLists.txt
+
+	if use rocm; then
+		sed -e "s:/opt/rocm:/usr:" \
+			-e "s:lib/cmake:$(get_libdir)/cmake:g" \
+			-i cmake/public/LoadHIP.cmake || die
+
+		# TODO: delete, when caffe2 depends on systemwide composable_kernel
+		sed -e "s:third_party/composable_kernel:../composable_kernel-${CK_COMMIT}:g" \
+			-i aten/src/ATen/CMakeLists.txt || die
+
+		if tc-is-clang; then
+			# Systemwide gcc (for absl and at::TensorBase) + hipcc (llvm>=18) need abi-compat=17.
+			# But systemwide clang>=18 + hipcc (>=llvm-18) need the opposite!
+			# See also: https://github.com/llvm/llvm-project/issues/102443#issuecomment-2329726287
+			sed '/-fclang-abi-compat=17/d' -i cmake/Dependencies.cmake || die
+		fi
+
+		# Workaround for libc++ issue https://github.com/llvm/llvm-project/issues/100802
+		sed 's/std::memcpy/memcpy/g' -i c10/util/Half.h || die
+
+		ebegin "HIPifying cuda sources"
+		${EPYTHON} tools/amd_build/build_amd.py || die
+		eend $?
+	fi
+}
+
+src_configure() {
+	if use cuda && [[ -z ${TORCH_CUDA_ARCH_LIST} ]]; then
+		ewarn "WARNING: caffe2 is being built with its default CUDA compute capabilities: 3.5 and 7.0."
+		ewarn "These may not be optimal for your GPU."
+		ewarn ""
+		ewarn "To configure caffe2 with the CUDA compute capability that is optimal for your GPU,"
+		ewarn "set TORCH_CUDA_ARCH_LIST in your make.conf, and re-emerge caffe2."
+		ewarn "For example, to use CUDA capability 7.5 & 3.5, add: TORCH_CUDA_ARCH_LIST=7.5 3.5"
+		ewarn "For a Maxwell model GPU, an example value would be: TORCH_CUDA_ARCH_LIST=Maxwell"
+		ewarn ""
+		ewarn "You can look up your GPU's CUDA compute capability at https://developer.nvidia.com/cuda-gpus"
+		ewarn "or by running /opt/cuda/extras/demo_suite/deviceQuery | grep 'CUDA Capability'"
+	fi
+
+	local mycmakeargs=(
+		-DBUILD_CUSTOM_PROTOBUF=OFF
+		-DLIBSHM_INSTALL_LIB_SUBDIR="${EPREFIX}"/usr/$(get_libdir)
+		-DPython_EXECUTABLE="${PYTHON}"
+		-DTORCH_INSTALL_LIB_DIR="${EPREFIX}"/usr/$(get_libdir)
+		-DUSE_CCACHE=OFF
+		-DUSE_CUDA=$(usex cuda)
+		-DUSE_DISTRIBUTED=$(usex distributed)
+		-DUSE_FAKELOWP=OFF
+		-DUSE_FBGEMM=$(usex fbgemm)
+		-DUSE_FLASH_ATTENTION=$(usex flash)
+		-DUSE_GFLAGS=ON
+		-DUSE_GLOG=ON
+		-DUSE_GLOO=$(usex gloo)
+		-DUSE_ITT=OFF
+		-DUSE_KINETO=OFF # TODO
+		-DUSE_MAGMA=OFF # TODO: In GURU as sci-libs/magma
+		-DUSE_MEM_EFF_ATTENTION=$(usex memefficient)
+		-DUSE_MKLDNN=$(usex onednn)
+		-DUSE_MPI=$(usex mpi)
+		-DUSE_NCCL=OFF
+		-DUSE_NNPACK=$(usex nnpack)
+		-DUSE_NUMA=OFF
+		-DUSE_NUMPY=$(usex numpy)
+		-DUSE_OPENCL=$(usex opencl)
+		-DUSE_OPENMP=$(usex openmp)
+		-DUSE_PYTORCH_QNNPACK=$(usex qnnpack)
+		-DUSE_PYTORCH_METAL=OFF
+		-DUSE_ROCM=$(usex rocm)
+		-DUSE_SYSTEM_CPUINFO=ON
+		-DUSE_SYSTEM_EIGEN_INSTALL=ON
+		-DUSE_SYSTEM_FP16=ON
+		-DUSE_SYSTEM_FXDIV=ON
+		-DUSE_SYSTEM_GLOO=ON
+		-DUSE_SYSTEM_ONNX=ON
+		-DUSE_SYSTEM_PSIMD=ON
+		-DUSE_SYSTEM_PTHREADPOOL=ON
+		-DUSE_SYSTEM_PYBIND11=ON
+		-DUSE_SYSTEM_SLEEF=ON
+		-DUSE_SYSTEM_XNNPACK=$(usex xnnpack)
+		-DUSE_TENSORPIPE=$(usex distributed)
+		-DUSE_UCC=OFF
+		-DUSE_VALGRIND=OFF
+		-DUSE_XNNPACK=$(usex xnnpack)
+		-DUSE_XPU=OFF
+		-Wno-dev
+	)
+
+	if use mkl; then
+		mycmakeargs+=(-DBLAS=MKL)
+	elif use openblas; then
+		mycmakeargs+=(-DBLAS=OpenBLAS)
+	else
+		mycmakeargs+=(-DBLAS=Generic -DBLAS_LIBRARIES=)
+	fi
+
+	if use cuda; then
+		addpredict "/dev/nvidiactl" # bug 867706
+		addpredict "/dev/char"
+		addpredict "/proc/self/task" # bug 926116
+
+		mycmakeargs+=(
+			-DUSE_CUDNN=ON
+			-DTORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST:-3.5 7.0}"
+			-DUSE_NCCL=OFF # TODO: NVIDIA Collective Communication Library
+			-DCMAKE_CUDA_FLAGS="$(cuda_gccdir -f | tr -d \")"
+		)
+	elif use rocm; then
+		export PYTORCH_ROCM_ARCH="$(get_amdgpu_flags)"
+
+		mycmakeargs+=(
+			-DUSE_NCCL=ON
+			-DUSE_SYSTEM_NCCL=ON
+			-DCMAKE_REQUIRE_FIND_PACKAGE_HIP=ON
+		)
+
+		# ROCm libraries produce too many warnings
+		append-cxxflags -Wno-deprecated-declarations -Wno-unused-result
+	fi
+
+	if use onednn; then
+		mycmakeargs+=(
+			-DMKLDNN_FOUND=ON
+			-DMKLDNN_LIBRARIES=dnnl
+			-DMKLDNN_INCLUDE_DIR="${ESYSROOT}/usr/include/oneapi/dnnl"
+		)
+	fi
+
+	cmake_src_configure
+}
+
+src_compile() {
+	PYTORCH_BUILD_VERSION=${PV} \
+	PYTORCH_BUILD_NUMBER=0 \
+	cmake_src_compile
+}
+
+python_install() {
+	python_domodule python/torch
+	mkdir "${D}"$(python_get_sitedir)/torch/bin || die
+	mkdir "${D}"$(python_get_sitedir)/torch/lib || die
+	mkdir "${D}"$(python_get_sitedir)/torch/include || die
+	ln -s ../../../../../include/torch \
+		"${D}$(python_get_sitedir)"/torch/include/torch || die # bug 923269
+	ln -s ../../../../../bin/torch_shm_manager \
+		"${D}"/$(python_get_sitedir)/torch/bin/torch_shm_manager || die
+	ln -s ../../../../../$(get_libdir)/libtorch_global_deps.so \
+		"${D}"/$(python_get_sitedir)/torch/lib/libtorch_global_deps.so || die
+}
+
+src_install() {
+	cmake_src_install
+
+	# Used by pytorch ebuild
+	insinto "/var/lib/${PN}"
+	doins "${BUILD_DIR}"/CMakeCache.txt
+	dostrip -x /var/lib/${PN}/functorch.so
+
+	rm -rf python
+	mkdir -p python/torch || die
+	cp torch/version.py python/torch/ || die
+	python_install
+}
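
A note on the USE-to-CMake bridge in src_configure() above: a bare $(usex flag) expands to "yes" or "no", and CMake accepts those strings as boolean values just like ON/OFF. A rough hand-expanded equivalent of one entry (sketch):

    # What -DUSE_CUDA=$(usex cuda) amounts to:
    if use cuda; then
        mycmakeargs+=( -DUSE_CUDA=yes )
    else
        mycmakeargs+=( -DUSE_CUDA=no )
    fi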

diff --git a/sci-ml/caffe2/files/caffe2-2.7.0-gentoo.patch b/sci-ml/caffe2/files/caffe2-2.7.0-gentoo.patch
new file mode 100644
index 000000000000..78011bc46cdf
--- /dev/null
+++ b/sci-ml/caffe2/files/caffe2-2.7.0-gentoo.patch
@@ -0,0 +1,157 @@
+--- a/CMakeLists.txt
++++ b/CMakeLists.txt
+@@ -989,12 +989,11 @@ endif()
+ # third_party/FBGEMM
+ include(cmake/public/utils.cmake)
+ if(NOT MSVC)
+-  string(APPEND CMAKE_CXX_FLAGS " -O2 -fPIC")
++  string(APPEND CMAKE_CXX_FLAGS " -O2")
+   # Eigen fails to build with some versions, so convert this to a warning
+   # Details at http://eigen.tuxfamily.org/bz/show_bug.cgi?id=1459
+   string(APPEND CMAKE_CXX_FLAGS " -Wall")
+   string(APPEND CMAKE_CXX_FLAGS " -Wextra")
+-  append_cxx_flag_if_supported("-Werror=return-type" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-Werror=non-virtual-dtor" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-Werror=braced-scalar-init" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-Werror=range-loop-construct" CMAKE_CXX_FLAGS)
+@@ -1092,7 +1091,6 @@
+   endif()
+   append_cxx_flag_if_supported("-fno-math-errno" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-fno-trapping-math" CMAKE_CXX_FLAGS)
+-  append_cxx_flag_if_supported("-Werror=format" CMAKE_CXX_FLAGS)
+   if(CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 13)
+     append_cxx_flag_if_supported("-Wno-dangling-reference" CMAKE_CXX_FLAGS)
+     append_cxx_flag_if_supported("-Wno-error=dangling-reference" CMAKE_CXX_FLAGS)
+     append_cxx_flag_if_supported("-Wno-error=redundant-move" CMAKE_CXX_FLAGS)
+--- a/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt
++++ b/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt
+@@ -323,16 +323,8 @@ set_target_properties(pytorch_qnnpack PROPERTIES PUBLIC_HEADER include/pytorch_q
+ set_target_properties(pytorch_qnnpack PROPERTIES PUBLIC_HEADER include/qnnpack_func.h)
+ 
+ # ---[ Configure clog
+-if(NOT TARGET clog)
+-  set(CLOG_BUILD_TESTS OFF CACHE BOOL "")
+-  set(CLOG_RUNTIME_TYPE "${CPUINFO_RUNTIME_TYPE}" CACHE STRING "")
+-  add_subdirectory(
+-    "${CLOG_SOURCE_DIR}"
+-    "${CONFU_DEPENDENCIES_BINARY_DIR}/clog")
+-  # We build static version of clog but a dynamic library may indirectly depend on it
+-  set_property(TARGET clog PROPERTY POSITION_INDEPENDENT_CODE ON)
+-endif()
+-target_link_libraries(pytorch_qnnpack PUBLIC clog)
++find_library(CLOG_LIBRARY NAMES clog REQUIRED)
++target_link_libraries(pytorch_qnnpack PUBLIC ${CLOG_LIBRARY})
+ 
+ # ---[ Configure cpuinfo
+ if(NOT TARGET cpuinfo AND USE_SYSTEM_CPUINFO)
+--- a/caffe2/CMakeLists.txt
++++ b/caffe2/CMakeLists.txt
+@@ -87,7 +87,7 @@ endif()
+ # Note: the folders that are being commented out have not been properly
+ # addressed yet.
+ 
+-if(NOT MSVC AND USE_XNNPACK)
++if(FALSE)
+   if(NOT TARGET fxdiv)
+     set(FXDIV_BUILD_TESTS OFF CACHE BOOL "")
+     set(FXDIV_BUILD_BENCHMARKS OFF CACHE BOOL "")
+@@ -1135,7 +1135,6 @@ if(USE_XPU)
+ endif()
+ 
+ if(NOT MSVC AND USE_XNNPACK)
+-  TARGET_LINK_LIBRARIES(torch_cpu PRIVATE fxdiv)
+ endif()
+ 
+ # ==========================================================
+--- a/cmake/Codegen.cmake
++++ b/cmake/Codegen.cmake
+@@ -64,7 +64,7 @@ if(INTERN_BUILD_ATEN_OPS)
+   if(MSVC)
+     set(OPT_FLAG "/fp:strict ")
+   else(MSVC)
+-    set(OPT_FLAG "-O3 ")
++    set(OPT_FLAG " ")
+     if("${CMAKE_BUILD_TYPE}" MATCHES "Debug")
+       set(OPT_FLAG " ")
+     endif()
+--- a/cmake/Dependencies.cmake
++++ b/cmake/Dependencies.cmake
+@@ -467,7 +467,9 @@
+       set_property(TARGET pytorch_qnnpack PROPERTY POSITION_INDEPENDENT_CODE ON)
+       set_property(TARGET cpuinfo PROPERTY POSITION_INDEPENDENT_CODE ON)
+       # QNNPACK depends on gemmlowp headers
+-      target_include_directories(pytorch_qnnpack PRIVATE "${CAFFE2_THIRD_PARTY_ROOT}/gemmlowp")
++      find_package(gemmlowp REQUIRED)
++      get_target_property(GEMMLOWP_INCLUDE_DIRS gemmlowp::gemmlowp INTERFACE_INCLUDE_DIRECTORIES)
++      target_include_directories(pytorch_qnnpack PRIVATE ${GEMMLOWP_INCLUDE_DIRS})
+     endif()
+ 
+     list(APPEND Caffe2_DEPENDENCY_LIBS pytorch_qnnpack)
+@@ -562,7 +564,7 @@
+   find_library(microkernels-prod_LIBRARY microkernels-prod)
+   set_property(TARGET XNNPACK PROPERTY IMPORTED_LOCATION "${XNNPACK_LIBRARY}")
+   set_property(TARGET microkernels-prod PROPERTY IMPORTED_LOCATION "${microkernels-prod_LIBRARY}")
+-  if(NOT XNNPACK_LIBRARY or NOT microkernels-prod_LIBRARY)
++  if(FALSE)
+     message(FATAL_ERROR "Cannot find XNNPACK")
+   endif()
+   message("-- Found XNNPACK: ${XNNPACK_LIBRARY}")
+@@ -699,7 +701,7 @@ if(BUILD_TEST OR BUILD_MOBILE_BENCHMARK OR BUILD_MOBILE_TEST)
+ endif()
+ 
+ # ---[ FBGEMM
+-if(USE_FBGEMM)
++if(FALSE)
+   set(CAFFE2_THIRD_PARTY_ROOT "${PROJECT_SOURCE_DIR}/third_party")
+   if(NOT DEFINED FBGEMM_SOURCE_DIR)
+     set(FBGEMM_SOURCE_DIR "${CAFFE2_THIRD_PARTY_ROOT}/fbgemm" CACHE STRING "FBGEMM source directory")
+@@ -751,6 +753,7 @@ if(USE_FBGEMM)
+ endif()
+ 
+ if(USE_FBGEMM)
++  list(APPEND Caffe2_DEPENDENCY_LIBS fbgemm)
+   caffe2_update_option(USE_FBGEMM ON)
+ else()
+   caffe2_update_option(USE_FBGEMM OFF)
+--- a/cmake/External/nnpack.cmake
++++ b/cmake/External/nnpack.cmake
+@@ -56,7 +56,7 @@
+   set(PTHREADPOOL_SOURCE_DIR "${CAFFE2_THIRD_PARTY_ROOT}/pthreadpool" CACHE STRING "pthreadpool source directory")
+   set(GOOGLETEST_SOURCE_DIR "${CAFFE2_THIRD_PARTY_ROOT}/googletest" CACHE STRING "Google Test source directory")
+ 
+-  if(NOT TARGET nnpack)
++  if(FALSE)
+     set(NNPACK_BUILD_TESTS OFF CACHE BOOL "")
+     set(NNPACK_BUILD_BENCHMARKS OFF CACHE BOOL "")
+     set(NNPACK_LIBRARY_TYPE "static" CACHE STRING "")
+--- a/cmake/public/utils.cmake
++++ b/cmake/public/utils.cmake
+@@ -439,8 +439,6 @@ function(torch_compile_options libname)
+   endif()
+ 
+   # Use -O2 for release builds (-O3 doesn't improve perf, and -Os results in perf regression)
+-  target_compile_options(${libname} PRIVATE
+-      $<$<AND:$<COMPILE_LANGUAGE:CXX>,$<OR:$<CONFIG:Release>,$<CONFIG:RelWithDebInfo>>>:-O2>)
+ 
+ endfunction()
+ 
+--- a/aten/src/ATen/CMakeLists.txt	2025-02-27 14:23:02.402742165 +0100
++++ b/aten/src/ATen/CMakeLists.txt	2025-02-27 14:23:40.445850718 +0100
+@@ -301,8 +301,6 @@
+ if(USE_CUDA)
+   list(APPEND ATen_CUDA_INCLUDE ${CMAKE_CURRENT_SOURCE_DIR}/cuda)
+   # Next two lines are needed because TunableOp uses third-party/fmt
+-  list(APPEND ATen_CUDA_INCLUDE $<TARGET_PROPERTY:fmt::fmt-header-only,INTERFACE_INCLUDE_DIRECTORIES>)
+-  list(APPEND ATen_CUDA_DEPENDENCY_LIBS fmt::fmt-header-only)
+   list(APPEND ATen_CUDA_CU_SRCS
+     ${cuda_cu}
+     ${native_cuda_cu}
+@@ -315,8 +313,6 @@
+   list(APPEND ATen_HIP_INCLUDE ${CMAKE_CURRENT_SOURCE_DIR}/../../../third_party/composable_kernel/include)
+   list(APPEND ATen_HIP_INCLUDE ${CMAKE_CURRENT_SOURCE_DIR}/../../../third_party/composable_kernel/library/include)
+   # Next two lines are needed because TunableOp uses third-party/fmt
+-  list(APPEND ATen_HIP_INCLUDE $<TARGET_PROPERTY:fmt::fmt-header-only,INTERFACE_INCLUDE_DIRECTORIES>)
+-  list(APPEND ATen_HIP_DEPENDENCY_LIBS fmt::fmt-header-only)
+ if(USE_FLASH_ATTENTION)
+   list(APPEND ATen_HIP_INCLUDE ${CMAKE_CURRENT_SOURCE_DIR}/native/transformers/hip/flash_attn/ck)
+ endif()
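
The recurring pattern in the patch above: blocks that would add_subdirectory() a vendored third_party copy are disabled with if(FALSE), and linking is repointed at system libraries via find_library(), as done for clog. The build then only succeeds when the corresponding Gentoo packages are installed; a quick sanity check for the libraries the patch looks up (sketch; paths assume the usual libdir layout):

    for lib in clog XNNPACK microkernels-prod; do
        find /usr/lib* -name "lib${lib}*" 2>/dev/null
    done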



* [gentoo-commits] repo/gentoo:master commit in: sci-ml/caffe2/, sci-ml/caffe2/files/
@ 2025-05-04 19:23 Alfredo Tupone
  0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2025-05-04 19:23 UTC (permalink / raw)
  To: gentoo-commits

commit:     563fdaf1e3ae2e7362a3ef6f4cfafa84c4706403
Author:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Sun May  4 19:05:35 2025 +0000
Commit:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Sun May  4 19:23:40 2025 +0000
URL:        https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=563fdaf1

sci-ml/caffe2: enable kineto

Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>

 sci-ml/caffe2/caffe2-2.7.0.ebuild                      |  4 ++--
 sci-ml/caffe2/files/caffe2-2.5.1-unbundle_kineto.patch | 11 +++++++++++
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/sci-ml/caffe2/caffe2-2.7.0.ebuild b/sci-ml/caffe2/caffe2-2.7.0.ebuild
index 1a954381cc6d..1ee026bcf176 100644
--- a/sci-ml/caffe2/caffe2-2.7.0.ebuild
+++ b/sci-ml/caffe2/caffe2-2.7.0.ebuild
@@ -105,7 +105,7 @@ DEPEND="
 	dev-libs/pocketfft
 	dev-libs/psimd
 	sci-ml/FP16
-	sci-ml/kineto
+	~sci-ml/kineto-0.4.0_p20250214
 	$(python_gen_cond_dep '
 		dev-python/pybind11[${PYTHON_USEDEP}]
 		dev-python/pyyaml[${PYTHON_USEDEP}]
@@ -232,7 +232,7 @@ src_configure() {
 		-DUSE_GLOG=ON
 		-DUSE_GLOO=$(usex gloo)
 		-DUSE_ITT=OFF
-		-DUSE_KINETO=OFF # TODO
+		-DUSE_KINETO=ON # TODO
 		-DUSE_KLEIDIAI=OFF # TODO
 		-DUSE_MAGMA=OFF # TODO: In GURU as sci-libs/magma
 		-DUSE_MEM_EFF_ATTENTION=$(usex memefficient)

diff --git a/sci-ml/caffe2/files/caffe2-2.5.1-unbundle_kineto.patch b/sci-ml/caffe2/files/caffe2-2.5.1-unbundle_kineto.patch
index ebe931bc49b6..0ef6cd6d01ce 100644
--- a/sci-ml/caffe2/files/caffe2-2.5.1-unbundle_kineto.patch
+++ b/sci-ml/caffe2/files/caffe2-2.5.1-unbundle_kineto.patch
@@ -20,3 +20,14 @@
  
  if(USE_KINETO)
    target_include_directories(torch_cpu PRIVATE
+--- a/cmake/Dependencies.cmake	2025-05-04 15:30:00.268862558 +0200
++++ b/cmake/Dependencies.cmake	2025-05-04 15:30:13.275934233 +0200
+@@ -1711,7 +1711,7 @@
+     endif()
+   endif()
+ 
+-  if(NOT TARGET kineto)
++  if(FALSE)
+     add_subdirectory("${KINETO_SOURCE_DIR}")
+     set_property(TARGET kineto PROPERTY POSITION_INDEPENDENT_CODE ON)
+   endif()
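
Two coupled changes land here: -DUSE_KINETO flips to ON with the dependency pinned via ~, and the new hunk masks the bundled libkineto behind if(FALSE) so the system copy is used. In Portage dependency syntax, ~cat/pkg-ver matches that exact upstream version in any ebuild revision (-r1, -r2, ...), unlike = which names a single ebuild. For example (sketch):

    # ~ matches 0.4.0_p20250214 and any -rN revision of it
    emerge --pretend '~sci-ml/kineto-0.4.0_p20250214'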



* [gentoo-commits] repo/gentoo:master commit in: sci-ml/caffe2/, sci-ml/caffe2/files/
@ 2025-06-24 10:14 Alfredo Tupone
  0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2025-06-24 10:14 UTC (permalink / raw)
  To: gentoo-commits

commit:     2d035a8df52816f6b3080938b25e3207298e7c59
Author:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Tue Jun 24 10:12:38 2025 +0000
Commit:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Tue Jun 24 10:13:25 2025 +0000
URL:        https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=2d035a8d

sci-ml/caffe2: min req version of cmake

Closes: https://bugs.gentoo.org/957617
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>

 sci-ml/caffe2/caffe2-2.7.0-r2.ebuild         |  1 +
 sci-ml/caffe2/files/caffe2-2.7.0-cmake.patch | 40 ++++++++++++++++++++++++++++
 2 files changed, 41 insertions(+)

diff --git a/sci-ml/caffe2/caffe2-2.7.0-r2.ebuild b/sci-ml/caffe2/caffe2-2.7.0-r2.ebuild
index 297273b8e670..9c49aacddb11 100644
--- a/sci-ml/caffe2/caffe2-2.7.0-r2.ebuild
+++ b/sci-ml/caffe2/caffe2-2.7.0-r2.ebuild
@@ -126,6 +126,7 @@ PATCHES=(
 	"${FILESDIR}"/${PN}-2.5.1-glog-0.6.0.patch
 	"${FILESDIR}"/${PN}-2.5.1-newfix-functorch-install.patch
 	"${FILESDIR}"/${PN}-2.6.0-rocm-fix-std-cpp17.patch
+	"${FILESDIR}"/${P}-cmake.patch
 )
 
 src_prepare() {

diff --git a/sci-ml/caffe2/files/caffe2-2.7.0-cmake.patch b/sci-ml/caffe2/files/caffe2-2.7.0-cmake.patch
new file mode 100644
index 000000000000..008dfe560105
--- /dev/null
+++ b/sci-ml/caffe2/files/caffe2-2.7.0-cmake.patch
@@ -0,0 +1,40 @@
+--- a/.ci/pytorch/test_example_code/CMakeLists.txt	2025-06-24 11:57:17.268200696 +0200
++++ b/.ci/pytorch/test_example_code/CMakeLists.txt	2025-06-24 11:57:27.656239353 +0200
+@@ -1,4 +1,4 @@
+-cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
++cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
+ project(simple-torch-test)
+ 
+ find_package(Torch REQUIRED)
+--- a/aten/src/ATen/test/test_install/CMakeLists.txt	2025-06-24 11:54:39.366613030 +0200
++++ b/aten/src/ATen/test/test_install/CMakeLists.txt	2025-06-24 11:54:49.938652376 +0200
+@@ -1,4 +1,4 @@
+-cmake_minimum_required(VERSION 3.0)
++cmake_minimum_required(VERSION 3.5)
+ find_package(ATen REQUIRED)
+ include_directories(${ATEN_INCLUDE_DIR})
+ 
+--- a/android/test_app/app/CMakeLists.txt	2025-06-24 11:49:00.371351384 +0200
++++ b/android/test_app/app/CMakeLists.txt	2025-06-24 11:49:12.083394978 +0200
+@@ -1,4 +1,4 @@
+-cmake_minimum_required(VERSION 3.4.1)
++cmake_minimum_required(VERSION 3.5)
+ set(PROJECT_NAME pytorch_testapp_jni)
+ project(${PROJECT_NAME} CXX)
+ set(CMAKE_CXX_STANDARD 17 CACHE STRING "The C++ standard whose features are requested to build this target.")
+--- a/android/pytorch_android/CMakeLists.txt	2025-06-24 11:58:48.551540427 +0200
++++ b/android/pytorch_android/CMakeLists.txt	2025-06-24 11:58:59.802582301 +0200
+@@ -1,4 +1,4 @@
+-cmake_minimum_required(VERSION 3.4.1)
++cmake_minimum_required(VERSION 3.5)
+ option(BUILD_LITE_INTERPRETER "Master flag to build pytorch_jni_lite" ON)
+ message(
+   STATUS
+--- a/android/pytorch_android_torchvision/CMakeLists.txt	2025-06-24 12:04:49.205884981 +0200
++++ b/android/pytorch_android_torchvision/CMakeLists.txt	2025-06-24 12:04:58.357919901 +0200
+@@ -1,4 +1,4 @@
+-cmake_minimum_required(VERSION 3.4.1)
++cmake_minimum_required(VERSION 3.5)
+ project(pytorch_vision_jni CXX)
+ set(CMAKE_CXX_STANDARD 17 CACHE STRING "The C++ standard whose features are requested to build this target.")
+ set(CMAKE_VERBOSE_MAKEFILE ON)
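
Context for the version bumps above: CMake 4 dropped compatibility with projects declaring cmake_minimum_required(VERSION) older than 3.5, so these stray 3.0/3.4.1 declarations abort the configure phase on a current CMake, which is presumably what bug 957617 hit. Remaining offenders in a tree can be found with a one-liner (sketch):

    grep -rn --include=CMakeLists.txt \
        -E 'cmake_minimum_required\(VERSION 3\.[0-4]' .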



* [gentoo-commits] repo/gentoo:master commit in: sci-ml/caffe2/, sci-ml/caffe2/files/
@ 2025-08-16 12:12 Alfredo Tupone
  0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2025-08-16 12:12 UTC (permalink / raw)
  To: gentoo-commits

commit:     c2fbfba3e70d6c00e6a07594a31e5f7d08bca7bb
Author:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Sat Aug 16 12:11:07 2025 +0000
Commit:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Sat Aug 16 12:12:12 2025 +0000
URL:        https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=c2fbfba3

sci-ml/caffe2: add 2.8.0

Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>

 sci-ml/caffe2/Manifest                             |   1 +
 sci-ml/caffe2/caffe2-2.8.0.ebuild                  | 384 +++++++++++++++++++++
 sci-ml/caffe2/files/caffe2-2.8.0-cmake.patch       |   8 +
 sci-ml/caffe2/files/caffe2-2.8.0-gentoo.patch      | 231 +++++++++++++
 .../files/caffe2-2.8.0-unbundle_pocketfft.patch    |  18 +
 5 files changed, 642 insertions(+)

diff --git a/sci-ml/caffe2/Manifest b/sci-ml/caffe2/Manifest
index fda2a4a29e01..cb3c35b99981 100644
--- a/sci-ml/caffe2/Manifest
+++ b/sci-ml/caffe2/Manifest
@@ -5,3 +5,4 @@ DIST pytorch-2.5.1.tar.gz 116091366 BLAKE2B 7838b17562b94ffc7d798031348689db607d
 DIST pytorch-2.6.0.tar.gz 119594438 BLAKE2B 3152eb341cf42295e147e59625beb9c06608aa4b78f9618c1c0024b10c1c767715d07fe8c4be52d029ac47f808cd0d5e65c9530ec90d951a64b993083b4067ad SHA512 a70da80ff09d226085e18228132cf6bb236ad8cc47eed52375d0d2a615f09dd33849da947270b5670c184eab60cb8e2adf11d801babfbda7aa621400501d07b0
 DIST pytorch-2.7.0.tar.gz 50197290 BLAKE2B 2a317d1e9b0d8876f1593382246cd9f786eff3c1b8602353c5e0010dc8414720c5de61886361843a0c33268830c784963a89b410b361e1b67636e652f6a6a2eb SHA512 63eb0363ea68d23567f5524ee8b51756d9302bbe1cbefa367335ab5ebe652523dba75fa417ea3e7eedfc67aa4bef1434c8b7e3dfde2152061b91b6e489763a55
 DIST pytorch-2.7.1.tar.gz 50203605 BLAKE2B 3f4b2643d86fe9ff30b2f335353dfe6a8e222bcc12143bc5d09268fb37bfd42f9451620e6e0db225c3c3e7930c999115fdd2ed62b7eae93b0d5e233270c7c760 SHA512 a9fc2252af9031c2cd46dde558c491aea8bc322fb80157a7760f300a44b759d4bfe866f030fbb974b80493057cfff4dd512498f99a100ed6d05bf620258ed37e
+DIST pytorch-2.8.0.tar.gz 56565754 BLAKE2B a8f07513b92f9293f8322508f9fc73a462f89fe51cb1f280af371cee19cbe7e2bf900ba2b3c43fd08ea415566db441a6d6310d77f18477e957641be311a361a5 SHA512 448e9dad4aa10f1793d35e6ffe9f0f69b7719d41e6eccceb687a8d0c148e22d03e4f76170a05308ef9323a7aea41aa74605077ae1d68c6d949f13b3340ebf310

diff --git a/sci-ml/caffe2/caffe2-2.8.0.ebuild b/sci-ml/caffe2/caffe2-2.8.0.ebuild
new file mode 100644
index 000000000000..aa94e0e5bc24
--- /dev/null
+++ b/sci-ml/caffe2/caffe2-2.8.0.ebuild
@@ -0,0 +1,384 @@
+# Copyright 2022-2025 Gentoo Authors
+# Distributed under the terms of the GNU General Public License v2
+
+EAPI=8
+
+PYTHON_COMPAT=( python3_{11..14} )
+ROCM_VERSION=6.1
+inherit python-single-r1 cmake cuda flag-o-matic prefix rocm toolchain-funcs
+
+MYPN=pytorch
+MYP=${MYPN}-${PV}
+
+# caffe2-2.6.0 depends on a future version of composable kernel
+# TODO: replace it with RDEPEND in the future
+CK_COMMIT=8086bbe3a78d931eb96fe12fdc014082e18d18d3
+CK_P=composable_kernel-${CK_COMMIT:0:8}
+
+FLASH_PV=2.7.4
+FLASH_PN=flash-attention
+FLASH_P=${FLASH_PN}-${FLASH_PV}
+
+AOTRITON_PV=0.9.2b
+AOTRITON_PN=aotriton
+AOTRITON_P=${AOTRITON_PN}-${AOTRITON_PV}
+AOTRITON_tar=${AOTRITON_P}-manylinux_2_28_x86_64-rocm6.3-shared.tar.gz
+
+DESCRIPTION="A deep learning framework"
+HOMEPAGE="https://pytorch.org/"
+SRC_URI="
+	https://github.com/pytorch/${MYPN}/archive/refs/tags/v${PV}.tar.gz -> ${MYP}.tar.gz
+	rocm? (
+		https://github.com/ROCm/composable_kernel/archive/${CK_COMMIT}.tar.gz
+		-> ${CK_P}.tar.gz
+	)
+	flash? (
+		https://github.com/Dao-AILab/${FLASH_PN}/archive/refs/tags/v${FLASH_PV}.tar.gz
+		-> ${FLASH_P}.gh.tar.gz
+	)
+"
+
+S="${WORKDIR}"/${MYP}
+
+LICENSE="BSD"
+SLOT="0"
+KEYWORDS="~amd64 ~arm64"
+IUSE="cuda cusparselt distributed fbgemm flash gloo memefficient mkl mpi nnpack +numpy
+	onednn openblas opencl openmp qnnpack rocm xnnpack"
+RESTRICT="test"
+REQUIRED_USE="
+	${PYTHON_REQUIRED_USE}
+	mpi? ( distributed )
+	gloo? ( distributed )
+	?? ( cuda rocm )
+	rocm? (
+		|| ( ${ROCM_REQUIRED_USE} )
+		!flash
+	)
+"
+
+RDEPEND="
+	${PYTHON_DEPS}
+	dev-cpp/abseil-cpp:=
+	dev-cpp/gflags:=
+	>=dev-cpp/glog-0.5.0
+	dev-cpp/nlohmann_json
+	dev-cpp/opentelemetry-cpp
+	dev-libs/cpuinfo
+	dev-libs/libfmt:=
+	dev-libs/protobuf:=
+	dev-libs/pthreadpool
+	dev-libs/sleef
+	sci-ml/foxi
+	~sci-ml/kineto-0.4.0_p20250617
+	<sci-ml/onnx-1.18.0
+	virtual/lapack
+	cuda? (
+		dev-libs/cudnn
+		>=sci-ml/cudnn-frontend-1.0.3:0/8
+		>=dev-util/nvidia-cuda-toolkit-12.9:=[profiler]
+		cusparselt? ( dev-libs/cusparselt )
+	)
+	fbgemm? ( sci-ml/FBGEMM )
+	gloo? ( <=sci-ml/gloo-2025.06.04[cuda?] )
+	mpi? ( virtual/mpi )
+	nnpack? ( sci-ml/NNPACK )
+	numpy? ( $(python_gen_cond_dep '
+		dev-python/numpy[${PYTHON_USEDEP}]
+		') )
+	onednn? ( =sci-ml/oneDNN-3.5* )
+	opencl? ( virtual/opencl )
+	qnnpack? (
+		!sci-libs/QNNPACK
+		sci-ml/gemmlowp
+	)
+	rocm? (
+		>=dev-libs/rccl-6.1      <dev-libs/rccl-6.5
+		>=dev-util/hip-6.1       <dev-util/hip-6.5
+		>=dev-util/roctracer-6.1 <dev-util/roctracer-6.5
+		>=sci-libs/hipBLAS-6.1   <sci-libs/hipBLAS-6.5
+		>=sci-libs/hipBLASLt-6.1 <sci-libs/hipBLASLt-6.5
+		>=sci-libs/hipCUB-6.1    <sci-libs/hipCUB-6.5
+		>=sci-libs/hipFFT-6.1    <sci-libs/hipFFT-6.5
+		>=sci-libs/hipRAND-6.1   <sci-libs/hipRAND-6.5
+		>=sci-libs/hipSOLVER-6.1 <sci-libs/hipSOLVER-6.5
+		>=sci-libs/hipSPARSE-6.1 <sci-libs/hipSPARSE-6.5
+		>=sci-libs/miopen-6.1    <sci-libs/miopen-6.5
+		>=sci-libs/rocPRIM-6.1   <sci-libs/rocPRIM-6.5
+		>=sci-libs/rocThrust-6.1 <sci-libs/rocThrust-6.5
+		memefficient? ( sci-libs/aotriton-bin:0/0.9 )
+	)
+	distributed? (
+		sci-ml/tensorpipe[cuda?]
+		dev-cpp/cpp-httplib
+	)
+	xnnpack? ( >=sci-ml/XNNPACK-2024.11 )
+	mkl? ( sci-libs/mkl )
+	openblas? ( sci-libs/openblas )
+"
+
+DEPEND="
+	${RDEPEND}
+	dev-libs/flatbuffers
+	dev-libs/FXdiv
+	dev-libs/pocketfft
+	dev-libs/psimd
+	sci-ml/FP16
+	$(python_gen_cond_dep '
+		dev-python/pybind11[${PYTHON_USEDEP}]
+		dev-python/pyyaml[${PYTHON_USEDEP}]
+		dev-python/typing-extensions[${PYTHON_USEDEP}]
+	')
+	cuda? ( ~dev-libs/cutlass-3.8.0 )
+	onednn? ( sci-ml/ideep )
+	qnnpack? ( dev-libs/clog )
+"
+
+PATCHES=(
+	"${FILESDIR}"/${PN}-2.5.1-unbundle_fmt.patch
+	"${FILESDIR}"/${PN}-2.5.1-unbundle_kineto.patch
+	"${FILESDIR}"/${P}-unbundle_pocketfft.patch
+	"${FILESDIR}"/${PN}-2.5.1-cudnn_include_fix.patch
+	"${FILESDIR}"/${P}-gentoo.patch
+	"${FILESDIR}"/${PN}-2.4.0-cpp-httplib.patch
+	"${FILESDIR}"/${PN}-2.5.1-glog-0.6.0.patch
+	"${FILESDIR}"/${PN}-2.5.1-newfix-functorch-install.patch
+	"${FILESDIR}"/${PN}-2.6.0-rocm-fix-std-cpp17.patch
+	"${FILESDIR}"/${P}-cmake.patch
+	"${FILESDIR}"/${PN}-2.7.0-glog-0.7.1.patch
+	"${FILESDIR}"/${PN}-2.7.1-aotriton-fixes.patch
+)
+
+src_prepare() {
+	if use flash; then
+		mv "${WORKDIR}"/${FLASH_P}/* third_party/${FLASH_PN}/ || die
+	fi
+	filter-lto #bug 862672
+
+	# Unbundle fmt
+	sed -i \
+		-e 's|::fmt-header-only||' \
+		c10/CMakeLists.txt \
+		cmake/Dependencies.cmake \
+		torch/CMakeLists.txt \
+		|| die
+
+	# Drop third_party from CMake tree
+	sed -i \
+		-e '/add_subdirectory.*third_party/d' \
+		CMakeLists.txt \
+		cmake/Dependencies.cmake \
+		cmake/ProtoBuf.cmake \
+		aten/src/ATen/CMakeLists.txt \
+		|| die
+	# Change libc10* path
+	sed -i \
+		-e "/EXPORT/s|DESTINATION lib)|DESTINATION $(get_libdir))|" \
+		c10/cuda/CMakeLists.txt \
+		c10/CMakeLists.txt \
+		c10/hip/CMakeLists.txt \
+		|| die
+
+	# Change libaotriton path
+	sed -i \
+		-e "s|}/lib|}/$(get_libdir)|g" \
+		cmake/External/aotriton.cmake \
+		|| die
+
+	# Noisy warnings from Logging.h
+	sed -i 's/-Wextra-semi//' cmake/public/utils.cmake || die
+
+	cmake_src_prepare
+	pushd torch/csrc/jit/serialization || die
+	flatc --cpp --gen-mutable --scoped-enums mobile_bytecode.fbs || die
+	popd
+
+	# prefixify the hardcoded paths, after all patches are applied
+	hprefixify \
+		aten/CMakeLists.txt \
+		caffe2/CMakeLists.txt \
+		cmake/Metal.cmake \
+		cmake/Modules/*.cmake \
+		cmake/Modules_CUDA_fix/FindCUDNN.cmake \
+		cmake/Modules_CUDA_fix/upstream/FindCUDA/make2cmake.cmake \
+		cmake/Modules_CUDA_fix/upstream/FindPackageHandleStandardArgs.cmake \
+		cmake/public/LoadHIP.cmake \
+		cmake/public/cuda.cmake \
+		cmake/Dependencies.cmake \
+		torch/CMakeLists.txt \
+		CMakeLists.txt
+
+	if use rocm; then
+		sed -e "s:/opt/rocm:/usr:" \
+			-e "s:lib/cmake:$(get_libdir)/cmake:g" \
+			-i cmake/public/LoadHIP.cmake || die
+
+		# TODO: delete, when caffe2 depends on systemwide composable_kernel
+		sed -e "s:third_party/composable_kernel:../composable_kernel-${CK_COMMIT}:g" \
+			-i aten/src/ATen/CMakeLists.txt || die
+
+		# Bug 959808: fix for gfx101x targets
+		pushd "${WORKDIR}/composable_kernel-${CK_COMMIT}" > /dev/null || die
+		eapply "${FILESDIR}"/composable-kernel-6.4.1-expand-isa.patch
+		popd > /dev/null || die
+
+		if tc-is-clang; then
+			# Systemwide gcc (for absl and at::TensorBase) + hipcc (llvm>=18) need abi-compat=17.
+			# But systemwide clang>=18 + hipcc (>=llvm-18) need the opposite!
+			# See also: https://github.com/llvm/llvm-project/issues/102443#issuecomment-2329726287
+			sed '/-fclang-abi-compat=17/d' -i cmake/Dependencies.cmake || die
+		fi
+
+		# Workaround for libc++ issue https://github.com/llvm/llvm-project/issues/100802
+		sed 's/std::memcpy/memcpy/g' -i c10/util/Half.h || die
+
+		ebegin "HIPifying cuda sources"
+		${EPYTHON} tools/amd_build/build_amd.py || die
+		eend $?
+	fi
+}
+
+src_configure() {
+	if use cuda && [[ -z ${TORCH_CUDA_ARCH_LIST} ]]; then
+		ewarn "WARNING: caffe2 is being built with its default CUDA compute capabilities: 3.5 and 7.0."
+		ewarn "These may not be optimal for your GPU."
+		ewarn ""
+		ewarn "To configure caffe2 with the CUDA compute capability that is optimal for your GPU,"
+		ewarn "set TORCH_CUDA_ARCH_LIST in your make.conf, and re-emerge caffe2."
+		ewarn "For example, to use CUDA capability 7.5 & 3.5, add: TORCH_CUDA_ARCH_LIST=7.5 3.5"
+		ewarn "For a Maxwell model GPU, an example value would be: TORCH_CUDA_ARCH_LIST=Maxwell"
+		ewarn ""
+		ewarn "You can look up your GPU's CUDA compute capability at https://developer.nvidia.com/cuda-gpus"
+		ewarn "or by running /opt/cuda/extras/demo_suite/deviceQuery | grep 'CUDA Capability'"
+	fi
+
+	local mycmakeargs=(
+		-DBUILD_CUSTOM_PROTOBUF=OFF
+		-DLIBSHM_INSTALL_LIB_SUBDIR="${EPREFIX}"/usr/$(get_libdir)
+		-DPython_EXECUTABLE="${PYTHON}"
+		-DTORCH_INSTALL_LIB_DIR="${EPREFIX}"/usr/$(get_libdir)
+		-DUSE_CCACHE=OFF
+		-DUSE_CUDA=$(usex cuda)
+		-DUSE_DISTRIBUTED=$(usex distributed)
+		-DUSE_FAKELOWP=OFF
+		-DUSE_FBGEMM=$(usex fbgemm)
+		-DUSE_FLASH_ATTENTION=$(usex flash)
+		-DUSE_GFLAGS=ON
+		-DUSE_GLOG=ON
+		-DUSE_GLOO=$(usex gloo)
+		-DUSE_ITT=OFF
+		-DUSE_KINETO=ON
+		-DUSE_KLEIDIAI=OFF # TODO
+		-DUSE_MAGMA=OFF # TODO: In GURU as sci-libs/magma
+		-DUSE_MEM_EFF_ATTENTION=$(usex memefficient)
+		-DUSE_MKLDNN=$(usex onednn)
+		-DUSE_MPI=$(usex mpi)
+		-DUSE_NCCL=OFF
+		-DUSE_NNPACK=$(usex nnpack)
+		-DUSE_NUMA=OFF
+		-DUSE_NUMPY=$(usex numpy)
+		-DUSE_OPENCL=$(usex opencl)
+		-DUSE_OPENMP=$(usex openmp)
+		-DUSE_PYTORCH_QNNPACK=$(usex qnnpack)
+		-DUSE_PYTORCH_METAL=OFF
+		-DUSE_ROCM=$(usex rocm)
+		-DUSE_SYSTEM_CPUINFO=ON
+		-DUSE_SYSTEM_EIGEN_INSTALL=ON
+		-DUSE_SYSTEM_FP16=ON
+		-DUSE_SYSTEM_FXDIV=ON
+		-DUSE_SYSTEM_GLOO=ON
+		-DUSE_SYSTEM_NVTX=ON
+		-DUSE_SYSTEM_ONNX=ON
+		-DUSE_SYSTEM_PSIMD=ON
+		-DUSE_SYSTEM_PTHREADPOOL=ON
+		-DUSE_SYSTEM_PYBIND11=ON
+		-DUSE_SYSTEM_SLEEF=ON
+		-DUSE_SYSTEM_XNNPACK=$(usex xnnpack)
+		-DUSE_TENSORPIPE=$(usex distributed)
+		-DUSE_UCC=OFF
+		-DUSE_VALGRIND=OFF
+		-DUSE_XNNPACK=$(usex xnnpack)
+		-DUSE_XPU=OFF
+		-Wno-dev
+	)
+
+	if use mkl; then
+		mycmakeargs+=(-DBLAS=MKL)
+	elif use openblas; then
+		mycmakeargs+=(-DBLAS=OpenBLAS)
+	else
+		mycmakeargs+=(-DBLAS=Generic -DBLAS_LIBRARIES=)
+	fi
+
+	if use cuda; then
+		addpredict "/dev/nvidiactl" # bug 867706
+		addpredict "/dev/char"
+		addpredict "/proc/self/task" # bug 926116
+
+		mycmakeargs+=(
+			-DUSE_CUDNN=ON
+			-DTORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST:-3.5 7.0}"
+			-DUSE_NCCL=OFF # TODO: NVIDIA Collective Communication Library
+			-DCMAKE_CUDA_FLAGS="$(cuda_gccdir -f | tr -d \")"
+			-DUSE_CUSPARSELT=$(usex cusparselt)
+		)
+	elif use rocm; then
+		export PYTORCH_ROCM_ARCH="$(get_amdgpu_flags)"
+
+		if use memefficient; then
+			export AOTRITON_INSTALLED_PREFIX="${ESYSROOT}/usr"
+		fi
+
+		mycmakeargs+=(
+			-DUSE_NCCL=ON
+			-DUSE_SYSTEM_NCCL=ON
+			-DCMAKE_REQUIRE_FIND_PACKAGE_HIP=ON
+		)
+
+		# ROCm libraries produce too many warnings
+		append-cxxflags -Wno-deprecated-declarations -Wno-unused-result -Wno-unused-value
+	fi
+
+	if use onednn; then
+		mycmakeargs+=(
+			-DMKLDNN_FOUND=ON
+			-DMKLDNN_LIBRARIES=dnnl
+			-DMKLDNN_INCLUDE_DIR="${ESYSROOT}/usr/include/oneapi/dnnl"
+		)
+	fi
+
+	cmake_src_configure
+}
+
+src_compile() {
+	PYTORCH_BUILD_VERSION=${PV} \
+	PYTORCH_BUILD_NUMBER=0 \
+	cmake_src_compile
+}
+
+python_install() {
+	python_domodule python/torch
+	mkdir "${D}"$(python_get_sitedir)/torch/bin || die
+	mkdir "${D}"$(python_get_sitedir)/torch/lib || die
+	mkdir "${D}"$(python_get_sitedir)/torch/include || die
+	ln -s ../../../../../include/torch \
+		"${D}$(python_get_sitedir)"/torch/include/torch || die # bug 923269
+	ln -s ../../../../../bin/torch_shm_manager \
+		"${D}"/$(python_get_sitedir)/torch/bin/torch_shm_manager || die
+	ln -s ../../../../../$(get_libdir)/libtorch_global_deps.so \
+		"${D}"/$(python_get_sitedir)/torch/lib/libtorch_global_deps.so || die
+}
+
+src_install() {
+	cmake_src_install
+
+	# Used by pytorch ebuild
+	insinto "/var/lib/${PN}"
+	doins "${BUILD_DIR}"/CMakeCache.txt
+	dostrip -x /var/lib/${PN}/functorch.so
+
+	rm -rf python
+	mkdir -p python/torch || die
+	cp torch/version.py python/torch/ || die
+	python_install
+}
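
For the rocm branch above, get_amdgpu_flags (from rocm.eclass) turns the enabled AMDGPU_TARGETS USE_EXPAND flags into the semicolon-separated list that PYTORCH_ROCM_ARCH expects. Illustrative make.conf values (the gfx names are examples only; pick the targets matching your GPU):

    # /etc/portage/make.conf (illustrative)
    AMDGPU_TARGETS="gfx90a gfx1100"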

diff --git a/sci-ml/caffe2/files/caffe2-2.8.0-cmake.patch b/sci-ml/caffe2/files/caffe2-2.8.0-cmake.patch
new file mode 100644
index 000000000000..7f08ef2d39d1
--- /dev/null
+++ b/sci-ml/caffe2/files/caffe2-2.8.0-cmake.patch
@@ -0,0 +1,8 @@
+--- a/.ci/pytorch/test_example_code/CMakeLists.txt	2025-06-24 11:57:17.268200696 +0200
++++ b/.ci/pytorch/test_example_code/CMakeLists.txt	2025-06-24 11:57:27.656239353 +0200
+@@ -1,4 +1,4 @@
+-cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
++cmake_minimum_required(VERSION 3.5 FATAL_ERROR)
+ project(simple-torch-test)
+ 
+ find_package(Torch REQUIRED)

diff --git a/sci-ml/caffe2/files/caffe2-2.8.0-gentoo.patch b/sci-ml/caffe2/files/caffe2-2.8.0-gentoo.patch
new file mode 100644
index 000000000000..9ffa905796b6
--- /dev/null
+++ b/sci-ml/caffe2/files/caffe2-2.8.0-gentoo.patch
@@ -0,0 +1,231 @@
+--- a/CMakeLists.txt
++++ b/CMakeLists.txt
+@@ -987,7 +987,7 @@
+   set(CMAKE_COLOR_DIAGNOSTICS ON)
+ endif()
+ if(NOT MSVC)
+-  string(APPEND CMAKE_CXX_FLAGS " -O2 -fPIC")
++  string(APPEND CMAKE_CXX_FLAGS " -O2")
+ 
+   # This prevents use of `c10::optional`, `c10::nullopt` etc within the codebase
+   string(APPEND CMAKE_CXX_FLAGS " -DC10_NODEPRECATED")
+@@ -998,7 +998,6 @@
+   # Details at http://eigen.tuxfamily.org/bz/show_bug.cgi?id=1459
+   string(APPEND CMAKE_CXX_FLAGS " -Wall")
+   string(APPEND CMAKE_CXX_FLAGS " -Wextra")
+-  append_cxx_flag_if_supported("-Werror=return-type" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-Werror=non-virtual-dtor" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-Werror=braced-scalar-init" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-Werror=range-loop-construct" CMAKE_CXX_FLAGS)
+
+@@ -1083,7 +1082,6 @@
+   endif()
+   append_cxx_flag_if_supported("-fno-math-errno" CMAKE_CXX_FLAGS)
+   append_cxx_flag_if_supported("-fno-trapping-math" CMAKE_CXX_FLAGS)
+-  append_cxx_flag_if_supported("-Werror=format" CMAKE_CXX_FLAGS)
+   if(CMAKE_COMPILER_IS_GNUCXX AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 13)
+     append_cxx_flag_if_supported("-Wno-dangling-reference" CMAKE_CXX_FLAGS)
+     append_cxx_flag_if_supported("-Wno-error=dangling-reference" CMAKE_CXX_FLAGS)
+     append_cxx_flag_if_supported("-Wno-error=redundant-move" CMAKE_CXX_FLAGS)
+--- a/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt
++++ b/aten/src/ATen/native/quantized/cpu/qnnpack/CMakeLists.txt
+@@ -323,7 +323,7 @@
+ set_target_properties(pytorch_qnnpack PROPERTIES PUBLIC_HEADER include/qnnpack_func.h)
+ 
+ # ---[ Configure clog
+-if(NOT TARGET clog)
++if(FALSE)
+   set(CLOG_BUILD_TESTS OFF CACHE BOOL "")
+   set(CLOG_RUNTIME_TYPE "${CPUINFO_RUNTIME_TYPE}" CACHE STRING "")
+   add_subdirectory(
+@@ -335,7 +335,8 @@
+     target_compile_options(clog PRIVATE "-Wno-unused-result")
+   endif()
+ endif()
+-target_link_libraries(pytorch_qnnpack PUBLIC clog)
++find_library(CLOG_LIBRARY NAMES clog REQUIRED)
++target_link_libraries(pytorch_qnnpack PUBLIC ${CLOG_LIBRARY})
+ 
+ # ---[ Configure cpuinfo
+ if(NOT TARGET cpuinfo AND USE_SYSTEM_CPUINFO)
+--- a/caffe2/CMakeLists.txt
++++ b/caffe2/CMakeLists.txt
+@@ -87,7 +87,7 @@ endif()
+ # Note: the folders that are being commented out have not been properly
+ # addressed yet.
+ 
+-if(NOT MSVC AND USE_XNNPACK)
++if(FALSE)
+   if(NOT TARGET fxdiv)
+     set(FXDIV_BUILD_TESTS OFF CACHE BOOL "")
+     set(FXDIV_BUILD_BENCHMARKS OFF CACHE BOOL "")
+@@ -1195,7 +1195,6 @@ if(USE_XPU)
+ endif()
+ 
+ if(NOT MSVC AND USE_XNNPACK)
+-  TARGET_LINK_LIBRARIES(torch_cpu PRIVATE fxdiv)
+ endif()
+ 
+ # ==========================================================
+@@ -1307,17 +1306,6 @@
+ target_include_directories(torch_cpu PRIVATE
+   "/usr/include/kineto")
+ 
+-if(USE_KINETO)
+-  target_include_directories(torch_cpu PRIVATE
+-    ${TORCH_ROOT}/third_party/kineto/libkineto/src)
+-endif()
+-
+-target_include_directories(torch_cpu PRIVATE
+-  ${TORCH_ROOT}/third_party/cpp-httplib)
+-
+-target_include_directories(torch_cpu PRIVATE
+-  ${TORCH_ROOT}/third_party/nlohmann/include)
+-
+ install(DIRECTORY
+   "${TORCH_SRC_DIR}/csrc"
+   "${TORCH_SRC_DIR}/headeronly"
+--- a/cmake/Codegen.cmake
++++ b/cmake/Codegen.cmake
+@@ -64,7 +64,7 @@ if(INTERN_BUILD_ATEN_OPS)
+   if(MSVC)
+     set(OPT_FLAG "/fp:strict ")
+   else(MSVC)
+-    set(OPT_FLAG "-O3 ")
++    set(OPT_FLAG " ")
+     if("${CMAKE_BUILD_TYPE}" MATCHES "Debug")
+       set(OPT_FLAG " ")
+     endif()
+--- a/cmake/Dependencies.cmake
++++ b/cmake/Dependencies.cmake
+@@ -461,7 +461,9 @@
+       set_property(TARGET pytorch_qnnpack PROPERTY POSITION_INDEPENDENT_CODE ON)
+       set_property(TARGET cpuinfo PROPERTY POSITION_INDEPENDENT_CODE ON)
+       # QNNPACK depends on gemmlowp headers
+-      target_include_directories(pytorch_qnnpack PRIVATE "${CAFFE2_THIRD_PARTY_ROOT}/gemmlowp")
++      find_package(gemmlowp REQUIRED)
++      get_target_property(GEMMLOWP_INCLUDE_DIRS gemmlowp::gemmlowp INTERFACE_INCLUDE_DIRECTORIES)
++      target_include_directories(pytorch_qnnpack PRIVATE ${GEMMLOWP_INCLUDE_DIRS})
+     endif()
+ 
+     list(APPEND Caffe2_DEPENDENCY_LIBS pytorch_qnnpack)
+@@ -556,7 +558,7 @@
+   find_library(microkernels-prod_LIBRARY microkernels-prod)
+   set_property(TARGET XNNPACK PROPERTY IMPORTED_LOCATION "${XNNPACK_LIBRARY}")
+   set_property(TARGET microkernels-prod PROPERTY IMPORTED_LOCATION "${microkernels-prod_LIBRARY}")
+-  if(NOT XNNPACK_LIBRARY or NOT microkernels-prod_LIBRARY)
++  if(FALSE)
+     message(FATAL_ERROR "Cannot find XNNPACK")
+   endif()
+   message("-- Found XNNPACK: ${XNNPACK_LIBRARY}")
+@@ -637,7 +639,7 @@ if(BUILD_TEST OR BUILD_MOBILE_BENCHMARK OR BUILD_MOBILE_TEST)
+ endif()
+ 
+ # ---[ FBGEMM
+-if(USE_FBGEMM)
++if(FALSE)
+   set(CAFFE2_THIRD_PARTY_ROOT "${PROJECT_SOURCE_DIR}/third_party")
+   if(NOT DEFINED FBGEMM_SOURCE_DIR)
+     set(FBGEMM_SOURCE_DIR "${CAFFE2_THIRD_PARTY_ROOT}/fbgemm" CACHE STRING "FBGEMM source directory")
+@@ -696,6 +698,7 @@ if(USE_FBGEMM)
+ endif()
+ 
+ if(USE_FBGEMM)
++  list(APPEND Caffe2_DEPENDENCY_LIBS fbgemm)
+   caffe2_update_option(USE_FBGEMM ON)
+ else()
+   caffe2_update_option(USE_FBGEMM OFF)
+@@ -1140,7 +1140,6 @@
+     endif()
+     set(TP_BUILD_LIBUV ON CACHE BOOL "" FORCE)
+     add_compile_options(-DTORCH_USE_LIBUV)
+-    include_directories(BEFORE SYSTEM ${CMAKE_CURRENT_LIST_DIR}/../third_party/tensorpipe/third_party/libuv/include)
+     set(TP_STATIC_OR_SHARED STATIC CACHE STRING "" FORCE)
+ 
+     # Tensorpipe uses cuda_add_library
+@@ -1712,11 +1712,9 @@
+ 
+ # Include cpp-httplib
+ add_library(httplib INTERFACE IMPORTED)
+-target_include_directories(httplib SYSTEM INTERFACE ${PROJECT_SOURCE_DIR}/third_party/cpp-httplib)
+ 
+ # Include nlohmann-json
+ add_library(nlohmann INTERFACE IMPORTED)
+-include_directories(nlohmann SYSTEM INTERFACE ${PROJECT_SOURCE_DIR}/third_party/nlohmann/include)
+ 
+ # Include moodycamel
+ add_library(moodycamel INTERFACE IMPORTED)
+--- a/cmake/External/nnpack.cmake
++++ b/cmake/External/nnpack.cmake
+@@ -56,7 +56,7 @@
+   set(PTHREADPOOL_SOURCE_DIR "${CAFFE2_THIRD_PARTY_ROOT}/pthreadpool" CACHE STRING "pthreadpool source directory")
+   set(GOOGLETEST_SOURCE_DIR "${CAFFE2_THIRD_PARTY_ROOT}/googletest" CACHE STRING "Google Test source directory")
+ 
+-  if(NOT TARGET nnpack)
++  if(FALSE)
+     set(NNPACK_BUILD_TESTS OFF CACHE BOOL "")
+     set(NNPACK_BUILD_BENCHMARKS OFF CACHE BOOL "")
+     set(NNPACK_LIBRARY_TYPE "static" CACHE STRING "")
+--- a/cmake/public/utils.cmake
++++ b/cmake/public/utils.cmake
+@@ -460,8 +460,6 @@ function(torch_compile_options libname)
+   endif()
+ 
+   # Use -O2 for release builds (-O3 doesn't improve perf, and -Os results in perf regression)
+-  target_compile_options(${libname} PRIVATE
+-      $<$<AND:$<COMPILE_LANGUAGE:CXX>,$<OR:$<CONFIG:Release>,$<CONFIG:RelWithDebInfo>>>:-O2>)
+ 
+ endfunction()
+ 
+--- a/aten/src/ATen/CMakeLists.txt	2025-02-27 14:23:02.402742165 +0100
++++ b/aten/src/ATen/CMakeLists.txt	2025-02-27 14:23:40.445850718 +0100
+@@ -326,8 +326,6 @@
+ if(USE_CUDA)
+   list(APPEND ATen_CUDA_INCLUDE ${CMAKE_CURRENT_SOURCE_DIR}/cuda)
+   # Next two lines are needed because TunableOp uses third-party/fmt
+-  list(APPEND ATen_CUDA_INCLUDE $<TARGET_PROPERTY:fmt::fmt-header-only,INTERFACE_INCLUDE_DIRECTORIES>)
+-  list(APPEND ATen_CUDA_DEPENDENCY_LIBS fmt::fmt-header-only)
+   list(APPEND ATen_CUDA_CU_SRCS
+     ${cuda_cu}
+     ${native_cuda_cu}
+@@ -395,8 +393,6 @@
+   _pytorch_rocm_generate_ck_conf()
+
+   # Next two lines are needed because TunableOp uses third-party/fmt
+-  list(APPEND ATen_HIP_INCLUDE $<TARGET_PROPERTY:fmt::fmt-header-only,INTERFACE_INCLUDE_DIRECTORIES>)
+-  list(APPEND ATen_HIP_DEPENDENCY_LIBS fmt::fmt-header-only)
+ if(USE_FLASH_ATTENTION)
+   list(APPEND ATen_HIP_INCLUDE ${CMAKE_CURRENT_SOURCE_DIR}/native/transformers/hip/flash_attn/ck)
+ endif()
+--- a/torch/CMakeLists.txt
++++ b/torch/CMakeLists.txt
+@@ -60,16 +60,10 @@
+     ${CMAKE_BINARY_DIR}/aten/src
+     ${CMAKE_BINARY_DIR}/caffe2/aten/src
+     ${CMAKE_BINARY_DIR}/third_party
+-    ${CMAKE_BINARY_DIR}/third_party/onnx
+ 
+     ${TORCH_ROOT}/third_party/valgrind-headers
+ 
+-    ${TORCH_ROOT}/third_party/gloo
+-    ${TORCH_ROOT}/third_party/onnx
+-    ${TORCH_ROOT}/third_party/flatbuffers/include
+     "/usr/include/kineto"
+-    ${TORCH_ROOT}/third_party/cpp-httplib
+-    ${TORCH_ROOT}/third_party/nlohmann/include
+ 
+     ${TORCH_SRC_DIR}/csrc
+     ${TORCH_SRC_DIR}/csrc/api/include
+--- a/cmake/FlatBuffers.cmake
++++ b/cmake/FlatBuffers.cmake
+@@ -1,10 +1 @@
+-set(FlatBuffers_Include ${PROJECT_SOURCE_DIR}/third_party/flatbuffers/include)
+-file(GLOB FlatBuffers_Library_SRCS
+-  ${FlatBuffers_Include}/flatbuffers/*.h
+-)
+ add_library(flatbuffers INTERFACE)
+-target_sources(
+-  flatbuffers
+-  INTERFACE ${FlatBuffers_Library_SRCS}
+-)
+-target_include_directories(flatbuffers INTERFACE ${FlatBuffers_Include})

diff --git a/sci-ml/caffe2/files/caffe2-2.8.0-unbundle_pocketfft.patch b/sci-ml/caffe2/files/caffe2-2.8.0-unbundle_pocketfft.patch
new file mode 100644
index 000000000000..3ffe9c775b28
--- /dev/null
+++ b/sci-ml/caffe2/files/caffe2-2.8.0-unbundle_pocketfft.patch
@@ -0,0 +1,18 @@
+--- a/cmake/Dependencies.cmake
++++ b/cmake/Dependencies.cmake
+@@ -276,15 +276,8 @@
+ # --- [ PocketFFT
+ set(AT_POCKETFFT_ENABLED 0)
+ if(NOT AT_MKL_ENABLED)
+-  set(POCKETFFT_INCLUDE_DIR "${Torch_SOURCE_DIR}/third_party/pocketfft/")
+-  if(NOT EXISTS "${POCKETFFT_INCLUDE_DIR}")
+-    message(FATAL_ERROR "pocketfft directory not found, expected ${POCKETFFT_INCLUDE_DIR}")
+-  elseif(NOT EXISTS "${POCKETFFT_INCLUDE_DIR}/pocketfft_hdronly.h")
+-    message(FATAL_ERROR "pocketfft headers not found in ${POCKETFFT_INCLUDE_DIR}")
+-  endif()
+ 
+   set(AT_POCKETFFT_ENABLED 1)
+-  message(STATUS "Using pocketfft in directory: ${POCKETFFT_INCLUDE_DIR}")
+ endif()
+ 
+ # ---[ Dependencies
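
pocketfft is header-only, which is why the hunk above can simply drop the bundled-path checks: with dev-libs/pocketfft installed there is nothing to link, only an include to resolve. A quick compile check (header name taken from the deleted hunk; that it lands under /usr/include is an assumption about the Gentoo package):

    printf '#include <pocketfft_hdronly.h>\nint main(){return 0;}\n' \
        | g++ -std=c++17 -fsyntax-only -x c++ -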



* [gentoo-commits] repo/gentoo:master commit in: sci-ml/caffe2/, sci-ml/caffe2/files/
@ 2025-08-17  5:55 Alfredo Tupone
  0 siblings, 0 replies; 5+ messages in thread
From: Alfredo Tupone @ 2025-08-17  5:55 UTC (permalink / raw)
  To: gentoo-commits

commit:     906973431e540e5350ba61eeeedc2aebce5b0a2c
Author:     Sv. Lockal <lockalsash <AT> gmail <DOT> com>
AuthorDate: Sat Aug 16 20:49:32 2025 +0000
Commit:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Sun Aug 17 05:54:24 2025 +0000
URL:        https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=90697343

sci-ml/caffe2: Fix use of undeclared identifier 'CHECK_NOSPARSE_CONTIGUOUS_CUDA' with USE='-flash'

Bug: https://github.com/pytorch/pytorch/issues/160826

Signed-off-by: Sv. Lockal <lockalsash <AT> gmail.com>
Part-of: https://github.com/gentoo/gentoo/pull/43468
Closes: https://github.com/gentoo/gentoo/pull/43468
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>

 sci-ml/caffe2/caffe2-2.8.0.ebuild                  |  1 +
 .../files/caffe2-2.8.0-rocm-minus-flash.patch      | 86 ++++++++++++++++++++++
 2 files changed, 87 insertions(+)

diff --git a/sci-ml/caffe2/caffe2-2.8.0.ebuild b/sci-ml/caffe2/caffe2-2.8.0.ebuild
index aa94e0e5bc24..3d1b231f40b4 100644
--- a/sci-ml/caffe2/caffe2-2.8.0.ebuild
+++ b/sci-ml/caffe2/caffe2-2.8.0.ebuild
@@ -147,6 +147,7 @@ PATCHES=(
 	"${FILESDIR}"/${P}-cmake.patch
 	"${FILESDIR}"/${PN}-2.7.0-glog-0.7.1.patch
 	"${FILESDIR}"/${PN}-2.7.1-aotriton-fixes.patch
+	"${FILESDIR}"/${PN}-2.8.0-rocm-minus-flash.patch
 )
 
 src_prepare() {
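
Adding the file to PATCHES is all that is needed for it to apply: cmake_src_prepare runs the default prepare phase, which eapply's each entry in order before user patches. The manual equivalent for this commit's file (sketch):

    eapply "${FILESDIR}/${PN}-2.8.0-rocm-minus-flash.patch"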

diff --git a/sci-ml/caffe2/files/caffe2-2.8.0-rocm-minus-flash.patch b/sci-ml/caffe2/files/caffe2-2.8.0-rocm-minus-flash.patch
new file mode 100644
index 000000000000..c48f3ec6a2d3
--- /dev/null
+++ b/sci-ml/caffe2/files/caffe2-2.8.0-rocm-minus-flash.patch
@@ -0,0 +1,86 @@
+Fix use of undeclared identifier 'CHECK_NOSPARSE_CONTIGUOUS_CUDA' with USE='-flash'
+
+Bug: https://github.com/pytorch/pytorch/issues/160826
+--- a/aten/src/ATen/native/transformers/cuda/attention.cu
++++ b/aten/src/ATen/native/transformers/cuda/attention.cu
+@@ -71,6 +71,7 @@
+ #include <ATen/native/transformers/cuda/sdp_utils.h>
+ #include <ATen/native/transformers/sdp_utils_cpp.h>
+ 
++#include <ATen/native/transformers/flash_api_common.h>
+ #ifdef USE_FLASH_ATTENTION
+ // FlashAttention Specific Imports
+ #include <ATen/native/transformers/cuda/flash_attn/flash_api.h>
+--- a/aten/src/ATen/native/transformers/cuda/attention_backward.cu
++++ b/aten/src/ATen/native/transformers/cuda/attention_backward.cu
+@@ -33,6 +33,7 @@
+ #include <ATen/ops/_scaled_dot_product_flash_attention_backward_native.h>
+ #endif
+ 
++#include <ATen/native/transformers/flash_api_common.h>
+ #ifdef USE_FLASH_ATTENTION
+ // FlashAttention Specific Imports
+ #include <ATen/native/transformers/cuda/flash_attn/flash_api.h>
+--- /dev/null
++++ b/aten/src/ATen/native/transformers/flash_api_common.h
+@@ -0,0 +1,28 @@
++#pragma once
++#include <cstdint>
++#include <limits>
++
++#include <ATen/core/Tensor.h>
++#include <c10/util/Exception.h>
++
++#define CHECK_NOSPARSE_CONTIGUOUS_CUDA(TENSOR)                            \
++  TORCH_CHECK(TENSOR.is_cuda(), #TENSOR " must be a CUDA tensor");     \
++  TORCH_CHECK(!TENSOR.is_sparse(), #TENSOR " must be a dense tensor"); \
++  TORCH_CHECK(TENSOR.is_contiguous());
++
++#define CHECK_NOSPARSE_LASTCONTIGUOUS_CUDA(TENSOR)                        \
++  TORCH_CHECK(TENSOR.is_cuda(), #TENSOR " must be a CUDA tensor");     \
++  TORCH_CHECK(!TENSOR.is_sparse(), #TENSOR " must be a dense tensor"); \
++  TORCH_CHECK(                                                         \
++      TENSOR.stride(-1) == 1, #TENSOR ": last dimension must be contiguous");
++
++#define CHECK_ALIGNED_PTR(PTR, ALIGNMENT) \
++  TORCH_CHECK(                         \
++      uint64_t(PTR) % ALIGNMENT == 0, #PTR " is not correctly aligned")
++
++#define ASSIGN_CHECK_OVERFLOW(A, B)                                    \
++  {                                                                    \
++    A = B;                                                             \
++    TORCH_CHECK(                                                    \
++        B < std::numeric_limits<decltype(A)>::max(), #B " overflows"); \
++  }
+--- a/aten/src/ATen/native/transformers/hip/flash_attn/flash_api.h
++++ b/aten/src/ATen/native/transformers/hip/flash_attn/flash_api.h
+@@ -4,28 +4,7 @@
+ #include <ATen/Context.h>
+ #include <ATen/core/Tensor.h>
+ #include <c10/util/Exception.h>
+-
+-#define CHECK_NOSPARSE_CONTIGUOUS_CUDA(TENSOR)                            \
+-  TORCH_CHECK(TENSOR.is_cuda(), #TENSOR " must be a CUDA tensor");     \
+-  TORCH_CHECK(!TENSOR.is_sparse(), #TENSOR " must be a dense tensor"); \
+-  TORCH_CHECK(TENSOR.is_contiguous());
+-
+-#define CHECK_NOSPARSE_LASTCONTIGUOUS_CUDA(TENSOR)                        \
+-  TORCH_CHECK(TENSOR.is_cuda(), #TENSOR " must be a CUDA tensor");     \
+-  TORCH_CHECK(!TENSOR.is_sparse(), #TENSOR " must be a dense tensor"); \
+-  TORCH_CHECK(                                                         \
+-      TENSOR.stride(-1) == 1, #TENSOR ": last dimension must be contiguous");
+-
+-#define CHECK_ALIGNED_PTR(PTR, ALIGNMENT) \
+-  TORCH_CHECK(                         \
+-      uint64_t(PTR) % ALIGNMENT == 0, #PTR " is not correctly aligned")
+-
+-#define ASSIGN_CHECK_OVERFLOW(A, B)                                    \
+-  {                                                                    \
+-    A = B;                                                             \
+-    TORCH_CHECK(                                                    \
+-        B < std::numeric_limits<decltype(A)>::max(), #B " overflows"); \
+-  }
++#include <ATen/native/transformers/flash_api_common.h>
+ 
+ namespace pytorch_flash {
+ 




Thread overview: 5+ messages
2025-06-24 10:14 [gentoo-commits] repo/gentoo:master commit in: sci-ml/caffe2/, sci-ml/caffe2/files/ Alfredo Tupone
  -- strict thread matches above, loose matches on Subject: below --
2025-08-17  5:55 Alfredo Tupone
2025-08-16 12:12 Alfredo Tupone
2025-05-04 19:23 Alfredo Tupone
2025-04-26 18:25 Alfredo Tupone

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox