public inbox for gentoo-commits@lists.gentoo.org
From: "Alfredo Tupone" <tupone@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] repo/gentoo:master commit in: sci-libs/caffe2/
Date: Sun, 27 Oct 2024 14:13:07 +0000 (UTC)
Message-ID: <1730037628.7e11aa2639192352d26804f1e45136343ea95844.tupone@gentoo>

commit:     7e11aa2639192352d26804f1e45136343ea95844
Author:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
AuthorDate: Sun Oct 27 14:00:28 2024 +0000
Commit:     Alfredo Tupone <tupone <AT> gentoo <DOT> org>
CommitDate: Sun Oct 27 14:00:28 2024 +0000
URL:        https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=7e11aa26

sci-libs/caffe2: drop 2.3.0-r3, 2.3.1

Closes: https://bugs.gentoo.org/942335
Closes: https://bugs.gentoo.org/928580
Signed-off-by: Alfredo Tupone <tupone <AT> gentoo.org>

 sci-libs/caffe2/Manifest               |   2 -
 sci-libs/caffe2/caffe2-2.3.0-r3.ebuild | 294 ---------------------------------
 sci-libs/caffe2/caffe2-2.3.1.ebuild    | 294 ---------------------------------
 sci-libs/caffe2/metadata.xml           |   2 -
 4 files changed, 592 deletions(-)

diff --git a/sci-libs/caffe2/Manifest b/sci-libs/caffe2/Manifest
index 8233a46783dc..1bdb2764edd1 100644
--- a/sci-libs/caffe2/Manifest
+++ b/sci-libs/caffe2/Manifest
@@ -1,5 +1,3 @@
 DIST caffe2-patches-20240809.tar.gz 15242 BLAKE2B 77503c61487e7d85cca5afcab9a6e638f9833a70861845638cf1b62bc492d7b6650e6db81d53ebb2f39c6313509250d339f725f04d03ec6dd23dd0cf70843d8c SHA512 74b3b0b6671b655ecac93f7436c4ed7cb0157a83aafbf6afcc0811e11cef341cd8f638db1a111bcbb01e1a6dd4daf3a36b96d7a8ce90f04c2fa091bd6e3a142b
-DIST pytorch-2.3.0.tar.gz 117029829 BLAKE2B 8f9c0d71ee0a9219b495eddccdcc65107f7ad537c43c68100b229f3d27b0e6c01ccb1659c7fffc356a48d80f2adc0a10361305dc8f1df20446de837d380f89f6 SHA512 67f7e9a096c3ffb952206ebf9105bedebb68c24ad82456083adf1d1d210437fcaa9dd52b68484cfc97d408c9eebc9541c76868c34a7c9982494dc3f424cfb07c
-DIST pytorch-2.3.1.tar.gz 117035696 BLAKE2B d419d7fa1342f1fb317ffce09ec9dc1447414627cc83d36578fe60f68c283c620b2b4d49f414cd206d537b90b16432a06cd1941662720db05d5e2b6c493325f5 SHA512 e1bcae44f9939fc7ccb1360a9b1970d92426f25e5de73e36964df3dd15ad5d8d9f5bd2f9a7dda6b8f64e2bba3674005bd869f542489cc442ad0125a02676f587
 DIST pytorch-2.4.0.tar.gz 115031093 BLAKE2B d206477963977011627df284efa01482fbf57e9fcb5f58f51d679c742b8e5dde6aa6affd8745ab817fcd09477d129a81e74e07be576b5d3585eaca1c735b8e01 SHA512 804d25944035f33de6591fd942fbda44d3de037717a4397d38a97474b01775d30eaf93d16dd708a832c0119050d24d73b90990fd3e3773be79d26ada25244d22
 DIST pytorch-2.4.1.tar.gz 115029469 BLAKE2B c2909ff27d527bc57cba56b780d3b8cd07a043ab045caa6c6b27857a16f9ad10aaab2116b26226b1e46ee08ffb44007965d914464418e4ae14ca48c3f3f383bb SHA512 7e9b4485e242eaf0d648765c6621d73d95e7107b766646a098175436d1ab2e2b864badd0757a3bab6b7c318233f2120bad9ac07b39bb9e357897919580c87631

diff --git a/sci-libs/caffe2/caffe2-2.3.0-r3.ebuild b/sci-libs/caffe2/caffe2-2.3.0-r3.ebuild
deleted file mode 100644
index 7fe4818311cb..000000000000
--- a/sci-libs/caffe2/caffe2-2.3.0-r3.ebuild
+++ /dev/null
@@ -1,294 +0,0 @@
-# Copyright 2022-2024 Gentoo Authors
-# Distributed under the terms of the GNU General Public License v2
-
-EAPI=8
-
-PYTHON_COMPAT=( python3_{10..12} )
-ROCM_VERSION=6.1
-inherit python-single-r1 cmake cuda flag-o-matic prefix rocm
-
-MYPN=pytorch
-MYP=${MYPN}-${PV}
-
-DESCRIPTION="A deep learning framework"
-HOMEPAGE="https://pytorch.org/"
-SRC_URI="https://github.com/pytorch/${MYPN}/archive/refs/tags/v${PV}.tar.gz
-	-> ${MYP}.tar.gz
-	https://dev.gentoo.org/~tupone/distfiles/${PN}-patches-20240809.tar.gz"
-
-S="${WORKDIR}"/${MYP}
-
-LICENSE="BSD"
-SLOT="0"
-KEYWORDS="~amd64"
-IUSE="cuda distributed fbgemm ffmpeg flash gloo mkl mpi nnpack +numpy onednn openblas opencl opencv openmp qnnpack rocm xnnpack"
-RESTRICT="test"
-REQUIRED_USE="
-	${PYTHON_REQUIRED_USE}
-	ffmpeg? ( opencv )
-	mpi? ( distributed )
-	gloo? ( distributed )
-	?? ( cuda rocm )
-	rocm? (
-		|| ( ${ROCM_REQUIRED_USE} )
-		!flash
-	)
-"
-
-# CUDA 12 not supported yet: https://github.com/pytorch/pytorch/issues/91122
-RDEPEND="
-	${PYTHON_DEPS}
-	dev-cpp/gflags:=
-	>=dev-cpp/glog-0.5.0
-	dev-libs/cpuinfo
-	dev-libs/libfmt
-	dev-libs/protobuf:=
-	dev-libs/pthreadpool
-	dev-libs/sleef
-	virtual/lapack
-	sci-libs/onnx
-	sci-libs/foxi
-	cuda? (
-		dev-libs/cudnn
-		>=dev-libs/cudnn-frontend-1.0.3:0/8
-		<dev-util/nvidia-cuda-toolkit-12.4.0:=[profiler]
-	)
-	fbgemm? ( >=dev-libs/FBGEMM-2023.12.01 )
-	ffmpeg? ( media-video/ffmpeg:= )
-	gloo? ( sci-libs/gloo[cuda?] )
-	mpi? ( virtual/mpi )
-	nnpack? ( sci-libs/NNPACK )
-	numpy? ( $(python_gen_cond_dep '
-		dev-python/numpy[${PYTHON_USEDEP}]
-		') )
-	onednn? ( dev-libs/oneDNN )
-	opencl? ( virtual/opencl )
-	opencv? ( media-libs/opencv:= )
-	qnnpack? ( sci-libs/QNNPACK )
-	rocm? (
-		=dev-util/hip-6.1*
-		=dev-libs/rccl-6.1*[${ROCM_USEDEP}]
-		=sci-libs/rocThrust-6.1*[${ROCM_USEDEP}]
-		=sci-libs/rocPRIM-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipBLAS-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipFFT-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipSPARSE-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipRAND-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipCUB-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipSOLVER-6.1*[${ROCM_USEDEP}]
-		=sci-libs/miopen-6.1*[${ROCM_USEDEP}]
-		=dev-util/roctracer-6.1*[${ROCM_USEDEP}]
-
-		amdgpu_targets_gfx90a? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx90a] )
-		amdgpu_targets_gfx940? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx940] )
-		amdgpu_targets_gfx941? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx941] )
-		amdgpu_targets_gfx942? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx942] )
-	)
-	distributed? ( sci-libs/tensorpipe[cuda?] )
-	xnnpack? ( >=sci-libs/XNNPACK-2022.12.22 )
-	mkl? ( sci-libs/mkl )
-	openblas? ( sci-libs/openblas )
-"
-DEPEND="
-	${RDEPEND}
-	cuda? ( >=dev-libs/cutlass-3.4.1 )
-	onednn? ( sci-libs/ideep )
-	dev-libs/psimd
-	dev-libs/FP16
-	dev-libs/FXdiv
-	dev-libs/pocketfft
-	dev-libs/flatbuffers
-	>=sci-libs/kineto-0.4.0_p20231031
-	$(python_gen_cond_dep '
-		dev-python/pyyaml[${PYTHON_USEDEP}]
-		dev-python/pybind11[${PYTHON_USEDEP}]
-		dev-python/typing-extensions[${PYTHON_USEDEP}]
-	')
-"
-
-PATCHES=(
-	../patches/${PN}-2.2.1-gentoo.patch
-	../patches/${PN}-1.13.0-install-dirs.patch
-	../patches/${PN}-1.12.0-glog-0.6.0.patch
-	../patches/${PN}-1.13.1-tensorpipe.patch
-	../patches/${P}-cudnn_include_fix.patch
-	../patches/${PN}-2.1.2-fix-rpath.patch
-	../patches/${PN}-2.1.2-fix-openmp-link.patch
-	../patches/${P}-rocm-fix-std-cpp17.patch
-	../patches/${PN}-2.2.2-musl.patch
-	../patches/${P}-CMakeFix.patch
-	../patches/${PN}-2.3.0-exclude-aotriton.patch
-	../patches/${PN}-2.3.0-fix-rocm-gcc14-clamp.patch
-	../patches/${PN}-2.3.0-optional-hipblaslt.patch
-	../patches/${PN}-2.3.0-fix-libcpp.patch
-	../patches/${PN}-2.3.0-fix-gcc-clang-abi-compat.patch
-)
-
-src_prepare() {
-	filter-lto #bug 862672
-	sed -i \
-		-e "/third_party\/gloo/d" \
-		cmake/Dependencies.cmake \
-		|| die
-	cmake_src_prepare
-	pushd torch/csrc/jit/serialization || die
-	flatc --cpp --gen-mutable --scoped-enums mobile_bytecode.fbs || die
-	popd
-	# prefixify the hardcoded paths, after all patches are applied
-	hprefixify \
-		aten/CMakeLists.txt \
-		caffe2/CMakeLists.txt \
-		cmake/Metal.cmake \
-		cmake/Modules/*.cmake \
-		cmake/Modules_CUDA_fix/FindCUDNN.cmake \
-		cmake/Modules_CUDA_fix/upstream/FindCUDA/make2cmake.cmake \
-		cmake/Modules_CUDA_fix/upstream/FindPackageHandleStandardArgs.cmake \
-		cmake/public/LoadHIP.cmake \
-		cmake/public/cuda.cmake \
-		cmake/Dependencies.cmake \
-		torch/CMakeLists.txt \
-		CMakeLists.txt
-
-	if use rocm; then
-		sed -e "s:/opt/rocm:/usr:" \
-			-e "s:lib/cmake:$(get_libdir)/cmake:g" \
-			-e "s/HIP 1.0/HIP 1.0 REQUIRED/" \
-			-i cmake/public/LoadHIP.cmake || die
-
-		ebegin "HIPifying cuda sources"
-		${EPYTHON} tools/amd_build/build_amd.py || die
-		eend $?
-	fi
-}
-
-src_configure() {
-	if use cuda && [[ -z ${TORCH_CUDA_ARCH_LIST} ]]; then
-		ewarn "WARNING: caffe2 is being built with its default CUDA compute capabilities: 3.5 and 7.0."
-		ewarn "These may not be optimal for your GPU."
-		ewarn ""
-		ewarn "To configure caffe2 with the CUDA compute capability that is optimal for your GPU,"
-		ewarn "set TORCH_CUDA_ARCH_LIST in your make.conf, and re-emerge caffe2."
-		ewarn "For example, to use CUDA capability 7.5 & 3.5, add: TORCH_CUDA_ARCH_LIST=7.5 3.5"
-		ewarn "For a Maxwell model GPU, an example value would be: TORCH_CUDA_ARCH_LIST=Maxwell"
-		ewarn ""
-		ewarn "You can look up your GPU's CUDA compute capability at https://developer.nvidia.com/cuda-gpus"
-		ewarn "or by running /opt/cuda/extras/demo_suite/deviceQuery | grep 'CUDA Capability'"
-	fi
-
-	local mycmakeargs=(
-		-DBUILD_CUSTOM_PROTOBUF=OFF
-		-DBUILD_SHARED_LIBS=ON
-
-		-DUSE_CCACHE=OFF
-		-DUSE_CUDA=$(usex cuda)
-		-DUSE_DISTRIBUTED=$(usex distributed)
-		-DUSE_MPI=$(usex mpi)
-		-DUSE_FAKELOWP=OFF
-		-DUSE_FBGEMM=$(usex fbgemm)
-		-DUSE_FFMPEG=$(usex ffmpeg)
-		-DUSE_FLASH_ATTENTION=$(usex flash)
-		-DUSE_GFLAGS=ON
-		-DUSE_GLOG=ON
-		-DUSE_GLOO=$(usex gloo)
-		-DUSE_KINETO=OFF # TODO
-		-DUSE_LEVELDB=OFF
-		-DUSE_MAGMA=OFF # TODO: In GURU as sci-libs/magma
-		-DUSE_MKLDNN=$(usex onednn)
-		-DUSE_NNPACK=$(usex nnpack)
-		-DUSE_QNNPACK=$(usex qnnpack)
-		-DUSE_XNNPACK=$(usex xnnpack)
-		-DUSE_SYSTEM_XNNPACK=$(usex xnnpack)
-		-DUSE_TENSORPIPE=$(usex distributed)
-		-DUSE_PYTORCH_QNNPACK=OFF
-		-DUSE_NUMPY=$(usex numpy)
-		-DUSE_OPENCL=$(usex opencl)
-		-DUSE_OPENCV=$(usex opencv)
-		-DUSE_OPENMP=$(usex openmp)
-		-DUSE_ROCM=$(usex rocm)
-		-DUSE_SYSTEM_CPUINFO=ON
-		-DUSE_SYSTEM_PYBIND11=ON
-		-DUSE_UCC=OFF
-		-DUSE_VALGRIND=OFF
-		-DPYBIND11_PYTHON_VERSION="${EPYTHON#python}"
-		-DPYTHON_EXECUTABLE="${PYTHON}"
-		-DUSE_ITT=OFF
-		-DUSE_SYSTEM_PTHREADPOOL=ON
-		-DUSE_SYSTEM_FXDIV=ON
-		-DUSE_SYSTEM_FP16=ON
-		-DUSE_SYSTEM_GLOO=ON
-		-DUSE_SYSTEM_ONNX=ON
-		-DUSE_SYSTEM_SLEEF=ON
-		-DUSE_METAL=OFF
-
-		-Wno-dev
-		-DTORCH_INSTALL_LIB_DIR="${EPREFIX}"/usr/$(get_libdir)
-		-DLIBSHM_INSTALL_LIB_SUBDIR="${EPREFIX}"/usr/$(get_libdir)
-	)
-
-	if use mkl; then
-		mycmakeargs+=(-DBLAS=MKL)
-	elif use openblas; then
-		mycmakeargs+=(-DBLAS=OpenBLAS)
-	else
-		mycmakeargs+=(-DBLAS=Generic -DBLAS_LIBRARIES=)
-	fi
-
-	if use cuda; then
-		addpredict "/dev/nvidiactl" # bug 867706
-		addpredict "/dev/char"
-		addpredict "/proc/self/task" # bug 926116
-
-		mycmakeargs+=(
-			-DUSE_CUDNN=ON
-			-DTORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST:-3.5 7.0}"
-			-DUSE_NCCL=OFF # TODO: NVIDIA Collective Communication Library
-			-DCMAKE_CUDA_FLAGS="$(cuda_gccdir -f | tr -d \")"
-		)
-	elif use rocm; then
-		export PYTORCH_ROCM_ARCH="$(get_amdgpu_flags)"
-		local use_hipblaslt="OFF"
-		if use amdgpu_targets_gfx90a || use amdgpu_targets_gfx940 || use amdgpu_targets_gfx941 \
-			|| use amdgpu_targets_gfx942; then
-			use_hipblaslt="ON"
-		fi
-
-		mycmakeargs+=(
-			-DUSE_NCCL=ON
-			-DUSE_SYSTEM_NCCL=ON
-			-DUSE_HIPBLASLT=${use_hipblaslt}
-		)
-
-		# ROCm libraries produce too much warnings
-		append-cxxflags -Wno-deprecated-declarations -Wno-unused-result
-	fi
-
-	if use onednn; then
-		mycmakeargs+=(
-			-DUSE_MKLDNN=ON
-			-DMKLDNN_FOUND=ON
-			-DMKLDNN_LIBRARIES=dnnl
-			-DMKLDNN_INCLUDE_DIR="${ESYSROOT}/usr/include/oneapi/dnnl"
-		)
-	fi
-
-	cmake_src_configure
-
-	# do not rerun cmake and the build process in src_install
-	sed '/RERUN/,+1d' -i "${BUILD_DIR}"/build.ninja || die
-}
-
-src_install() {
-	cmake_src_install
-
-	insinto "/var/lib/${PN}"
-	doins "${BUILD_DIR}"/CMakeCache.txt
-
-	rm -rf python
-	mkdir -p python/torch/include || die
-	mv "${ED}"/usr/lib/python*/site-packages/caffe2 python/ || die
-	cp torch/version.py python/torch/ || die
-	python_domodule python/caffe2
-	python_domodule python/torch
-	ln -s ../../../../../include/torch \
-		"${D}$(python_get_sitedir)"/torch/include/torch || die # bug 923269
-}

diff --git a/sci-libs/caffe2/caffe2-2.3.1.ebuild b/sci-libs/caffe2/caffe2-2.3.1.ebuild
deleted file mode 100644
index ff2a9caebd59..000000000000
--- a/sci-libs/caffe2/caffe2-2.3.1.ebuild
+++ /dev/null
@@ -1,294 +0,0 @@
-# Copyright 2022-2024 Gentoo Authors
-# Distributed under the terms of the GNU General Public License v2
-
-EAPI=8
-
-PYTHON_COMPAT=( python3_{10..12} )
-ROCM_VERSION=6.1
-inherit python-single-r1 cmake cuda flag-o-matic prefix rocm
-
-MYPN=pytorch
-MYP=${MYPN}-${PV}
-
-DESCRIPTION="A deep learning framework"
-HOMEPAGE="https://pytorch.org/"
-SRC_URI="https://github.com/pytorch/${MYPN}/archive/refs/tags/v${PV}.tar.gz
-	-> ${MYP}.tar.gz
-	https://dev.gentoo.org/~tupone/distfiles/${PN}-patches-20240809.tar.gz"
-
-S="${WORKDIR}"/${MYP}
-
-LICENSE="BSD"
-SLOT="0"
-KEYWORDS="~amd64"
-IUSE="cuda distributed fbgemm ffmpeg flash gloo mkl mpi nnpack +numpy onednn openblas opencl opencv openmp qnnpack rocm xnnpack"
-RESTRICT="test"
-REQUIRED_USE="
-	${PYTHON_REQUIRED_USE}
-	ffmpeg? ( opencv )
-	mpi? ( distributed )
-	gloo? ( distributed )
-	?? ( cuda rocm )
-	rocm? (
-		|| ( ${ROCM_REQUIRED_USE} )
-		!flash
-	)
-"
-
-# CUDA 12 not supported yet: https://github.com/pytorch/pytorch/issues/91122
-RDEPEND="
-	${PYTHON_DEPS}
-	dev-cpp/gflags:=
-	>=dev-cpp/glog-0.5.0
-	dev-libs/cpuinfo
-	dev-libs/libfmt
-	dev-libs/protobuf:=
-	dev-libs/pthreadpool
-	dev-libs/sleef
-	virtual/lapack
-	sci-libs/onnx
-	sci-libs/foxi
-	cuda? (
-		dev-libs/cudnn
-		>=dev-libs/cudnn-frontend-1.0.3:0/8
-		<dev-util/nvidia-cuda-toolkit-12.4.0:=[profiler]
-	)
-	fbgemm? ( >=dev-libs/FBGEMM-2023.12.01 )
-	ffmpeg? ( media-video/ffmpeg:= )
-	gloo? ( sci-libs/gloo[cuda?] )
-	mpi? ( virtual/mpi )
-	nnpack? ( sci-libs/NNPACK )
-	numpy? ( $(python_gen_cond_dep '
-		dev-python/numpy[${PYTHON_USEDEP}]
-		') )
-	onednn? ( dev-libs/oneDNN )
-	opencl? ( virtual/opencl )
-	opencv? ( media-libs/opencv:= )
-	qnnpack? ( sci-libs/QNNPACK )
-	rocm? (
-		=dev-util/hip-6.1*
-		=dev-libs/rccl-6.1*[${ROCM_USEDEP}]
-		=sci-libs/rocThrust-6.1*[${ROCM_USEDEP}]
-		=sci-libs/rocPRIM-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipBLAS-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipFFT-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipSPARSE-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipRAND-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipCUB-6.1*[${ROCM_USEDEP}]
-		=sci-libs/hipSOLVER-6.1*[${ROCM_USEDEP}]
-		=sci-libs/miopen-6.1*[${ROCM_USEDEP}]
-		=dev-util/roctracer-6.1*[${ROCM_USEDEP}]
-
-		amdgpu_targets_gfx90a? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx90a] )
-		amdgpu_targets_gfx940? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx940] )
-		amdgpu_targets_gfx941? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx941] )
-		amdgpu_targets_gfx942? ( =sci-libs/hipBLASLt-6.1*[amdgpu_targets_gfx942] )
-	)
-	distributed? ( sci-libs/tensorpipe[cuda?] )
-	xnnpack? ( >=sci-libs/XNNPACK-2022.12.22 )
-	mkl? ( sci-libs/mkl )
-	openblas? ( sci-libs/openblas )
-"
-DEPEND="
-	${RDEPEND}
-	cuda? ( >=dev-libs/cutlass-3.4.1 )
-	onednn? ( sci-libs/ideep )
-	dev-libs/psimd
-	dev-libs/FP16
-	dev-libs/FXdiv
-	dev-libs/pocketfft
-	dev-libs/flatbuffers
-	>=sci-libs/kineto-0.4.0_p20231031
-	$(python_gen_cond_dep '
-		dev-python/pyyaml[${PYTHON_USEDEP}]
-		dev-python/pybind11[${PYTHON_USEDEP}]
-		dev-python/typing-extensions[${PYTHON_USEDEP}]
-	')
-"
-
-PATCHES=(
-	../patches/${PN}-2.2.1-gentoo.patch
-	../patches/${PN}-1.13.0-install-dirs.patch
-	../patches/${PN}-1.12.0-glog-0.6.0.patch
-	../patches/${PN}-1.13.1-tensorpipe.patch
-	../patches/${PN}-2.3.0-cudnn_include_fix.patch
-	../patches/${PN}-2.1.2-fix-rpath.patch
-	../patches/${PN}-2.1.2-fix-openmp-link.patch
-	../patches/${PN}-2.3.0-rocm-fix-std-cpp17.patch
-	../patches/${PN}-2.2.2-musl.patch
-	../patches/${PN}-2.3.0-CMakeFix.patch
-	../patches/${PN}-2.3.0-exclude-aotriton.patch
-	../patches/${PN}-2.3.0-fix-rocm-gcc14-clamp.patch
-	../patches/${PN}-2.3.0-optional-hipblaslt.patch
-	../patches/${PN}-2.3.0-fix-libcpp.patch
-	../patches/${PN}-2.3.0-fix-gcc-clang-abi-compat.patch
-)
-
-src_prepare() {
-	filter-lto #bug 862672
-	sed -i \
-		-e "/third_party\/gloo/d" \
-		cmake/Dependencies.cmake \
-		|| die
-	cmake_src_prepare
-	pushd torch/csrc/jit/serialization || die
-	flatc --cpp --gen-mutable --scoped-enums mobile_bytecode.fbs || die
-	popd
-	# prefixify the hardcoded paths, after all patches are applied
-	hprefixify \
-		aten/CMakeLists.txt \
-		caffe2/CMakeLists.txt \
-		cmake/Metal.cmake \
-		cmake/Modules/*.cmake \
-		cmake/Modules_CUDA_fix/FindCUDNN.cmake \
-		cmake/Modules_CUDA_fix/upstream/FindCUDA/make2cmake.cmake \
-		cmake/Modules_CUDA_fix/upstream/FindPackageHandleStandardArgs.cmake \
-		cmake/public/LoadHIP.cmake \
-		cmake/public/cuda.cmake \
-		cmake/Dependencies.cmake \
-		torch/CMakeLists.txt \
-		CMakeLists.txt
-
-	if use rocm; then
-		sed -e "s:/opt/rocm:/usr:" \
-			-e "s:lib/cmake:$(get_libdir)/cmake:g" \
-			-e "s/HIP 1.0/HIP 1.0 REQUIRED/" \
-			-i cmake/public/LoadHIP.cmake || die
-
-		ebegin "HIPifying cuda sources"
-		${EPYTHON} tools/amd_build/build_amd.py || die
-		eend $?
-	fi
-}
-
-src_configure() {
-	if use cuda && [[ -z ${TORCH_CUDA_ARCH_LIST} ]]; then
-		ewarn "WARNING: caffe2 is being built with its default CUDA compute capabilities: 3.5 and 7.0."
-		ewarn "These may not be optimal for your GPU."
-		ewarn ""
-		ewarn "To configure caffe2 with the CUDA compute capability that is optimal for your GPU,"
-		ewarn "set TORCH_CUDA_ARCH_LIST in your make.conf, and re-emerge caffe2."
-		ewarn "For example, to use CUDA capability 7.5 & 3.5, add: TORCH_CUDA_ARCH_LIST=7.5 3.5"
-		ewarn "For a Maxwell model GPU, an example value would be: TORCH_CUDA_ARCH_LIST=Maxwell"
-		ewarn ""
-		ewarn "You can look up your GPU's CUDA compute capability at https://developer.nvidia.com/cuda-gpus"
-		ewarn "or by running /opt/cuda/extras/demo_suite/deviceQuery | grep 'CUDA Capability'"
-	fi
-
-	local mycmakeargs=(
-		-DBUILD_CUSTOM_PROTOBUF=OFF
-		-DBUILD_SHARED_LIBS=ON
-
-		-DUSE_CCACHE=OFF
-		-DUSE_CUDA=$(usex cuda)
-		-DUSE_DISTRIBUTED=$(usex distributed)
-		-DUSE_MPI=$(usex mpi)
-		-DUSE_FAKELOWP=OFF
-		-DUSE_FBGEMM=$(usex fbgemm)
-		-DUSE_FFMPEG=$(usex ffmpeg)
-		-DUSE_FLASH_ATTENTION=$(usex flash)
-		-DUSE_GFLAGS=ON
-		-DUSE_GLOG=ON
-		-DUSE_GLOO=$(usex gloo)
-		-DUSE_KINETO=OFF # TODO
-		-DUSE_LEVELDB=OFF
-		-DUSE_MAGMA=OFF # TODO: In GURU as sci-libs/magma
-		-DUSE_MKLDNN=$(usex onednn)
-		-DUSE_NNPACK=$(usex nnpack)
-		-DUSE_QNNPACK=$(usex qnnpack)
-		-DUSE_XNNPACK=$(usex xnnpack)
-		-DUSE_SYSTEM_XNNPACK=$(usex xnnpack)
-		-DUSE_TENSORPIPE=$(usex distributed)
-		-DUSE_PYTORCH_QNNPACK=OFF
-		-DUSE_NUMPY=$(usex numpy)
-		-DUSE_OPENCL=$(usex opencl)
-		-DUSE_OPENCV=$(usex opencv)
-		-DUSE_OPENMP=$(usex openmp)
-		-DUSE_ROCM=$(usex rocm)
-		-DUSE_SYSTEM_CPUINFO=ON
-		-DUSE_SYSTEM_PYBIND11=ON
-		-DUSE_UCC=OFF
-		-DUSE_VALGRIND=OFF
-		-DPYBIND11_PYTHON_VERSION="${EPYTHON#python}"
-		-DPYTHON_EXECUTABLE="${PYTHON}"
-		-DUSE_ITT=OFF
-		-DUSE_SYSTEM_PTHREADPOOL=ON
-		-DUSE_SYSTEM_FXDIV=ON
-		-DUSE_SYSTEM_FP16=ON
-		-DUSE_SYSTEM_GLOO=ON
-		-DUSE_SYSTEM_ONNX=ON
-		-DUSE_SYSTEM_SLEEF=ON
-		-DUSE_METAL=OFF
-
-		-Wno-dev
-		-DTORCH_INSTALL_LIB_DIR="${EPREFIX}"/usr/$(get_libdir)
-		-DLIBSHM_INSTALL_LIB_SUBDIR="${EPREFIX}"/usr/$(get_libdir)
-	)
-
-	if use mkl; then
-		mycmakeargs+=(-DBLAS=MKL)
-	elif use openblas; then
-		mycmakeargs+=(-DBLAS=OpenBLAS)
-	else
-		mycmakeargs+=(-DBLAS=Generic -DBLAS_LIBRARIES=)
-	fi
-
-	if use cuda; then
-		addpredict "/dev/nvidiactl" # bug 867706
-		addpredict "/dev/char"
-		addpredict "/proc/self/task" # bug 926116
-
-		mycmakeargs+=(
-			-DUSE_CUDNN=ON
-			-DTORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST:-3.5 7.0}"
-			-DUSE_NCCL=OFF # TODO: NVIDIA Collective Communication Library
-			-DCMAKE_CUDA_FLAGS="$(cuda_gccdir -f | tr -d \")"
-		)
-	elif use rocm; then
-		export PYTORCH_ROCM_ARCH="$(get_amdgpu_flags)"
-		local use_hipblaslt="OFF"
-		if use amdgpu_targets_gfx90a || use amdgpu_targets_gfx940 || use amdgpu_targets_gfx941 \
-			|| use amdgpu_targets_gfx942; then
-			use_hipblaslt="ON"
-		fi
-
-		mycmakeargs+=(
-			-DUSE_NCCL=ON
-			-DUSE_SYSTEM_NCCL=ON
-			-DUSE_HIPBLASLT=${use_hipblaslt}
-		)
-
-		# ROCm libraries produce too much warnings
-		append-cxxflags -Wno-deprecated-declarations -Wno-unused-result
-	fi
-
-	if use onednn; then
-		mycmakeargs+=(
-			-DUSE_MKLDNN=ON
-			-DMKLDNN_FOUND=ON
-			-DMKLDNN_LIBRARIES=dnnl
-			-DMKLDNN_INCLUDE_DIR="${ESYSROOT}/usr/include/oneapi/dnnl"
-		)
-	fi
-
-	cmake_src_configure
-
-	# do not rerun cmake and the build process in src_install
-	sed '/RERUN/,+1d' -i "${BUILD_DIR}"/build.ninja || die
-}
-
-src_install() {
-	cmake_src_install
-
-	insinto "/var/lib/${PN}"
-	doins "${BUILD_DIR}"/CMakeCache.txt
-
-	rm -rf python
-	mkdir -p python/torch/include || die
-	mv "${ED}"/usr/lib/python*/site-packages/caffe2 python/ || die
-	cp torch/version.py python/torch/ || die
-	python_domodule python/caffe2
-	python_domodule python/torch
-	ln -s ../../../../../include/torch \
-		"${D}$(python_get_sitedir)"/torch/include/torch || die # bug 923269
-}

diff --git a/sci-libs/caffe2/metadata.xml b/sci-libs/caffe2/metadata.xml
index e99253402e7a..cef968bc82ed 100644
--- a/sci-libs/caffe2/metadata.xml
+++ b/sci-libs/caffe2/metadata.xml
@@ -8,7 +8,6 @@
 	<use>
 		<flag name="distributed">Support distributed applications</flag>
 		<flag name="fbgemm">Use FBGEMM</flag>
-		<flag name="ffmpeg">Add support for video processing operators</flag>
 		<flag name="flash">Enable flash attention</flag>
 		<flag name="gloo">Use sci-libs/gloo</flag>
 		<flag name="mkl">Use <pkg>sci-libs/mkl</pkg> for blas, lapack and sparse blas routines</flag>
@@ -16,7 +15,6 @@
 		<flag name="numpy">Add support for math operations through numpy</flag>
 		<flag name="onednn">Use oneDNN</flag>
 		<flag name="openblas">Use <pkg>sci-libs/openblas</pkg> for blas routines</flag>
-		<flag name="opencv">Add support for image processing operators</flag>
 		<flag name="openmp">Use OpenMP for parallel code</flag>
 		<flag name="qnnpack">Use QNNPACK</flag>
 		<flag name="rocm">Enable ROCm gpu computing support</flag>

