From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from smtp.gentoo.org (woodpecker.gentoo.org [140.211.166.183])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange X25519 server-signature RSA-PSS (4096 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id 3E2EF15808A
	for ; Mon, 28 Jul 2025 11:51:08 +0000 (UTC)
From: "Lucio Sauer"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Lucio Sauer"
Message-ID: <1753657147.bf6fdb43eb7155c0ca1e4b40f2d0e0a4a0732adf.watermanpaint@gentoo>
Subject: [gentoo-commits] repo/proj/guru:master commit in:
 sci-misc/llama-cpp/
X-VCS-Repository: repo/proj/guru
X-VCS-Files: sci-misc/llama-cpp/Manifest
	sci-misc/llama-cpp/llama-cpp-0_pre5821.ebuild
	sci-misc/llama-cpp/llama-cpp-0_pre5857.ebuild
	sci-misc/llama-cpp/llama-cpp-0_pre6002.ebuild
	sci-misc/llama-cpp/llama-cpp-9999.ebuild
X-VCS-Directories: sci-misc/llama-cpp/
X-VCS-Committer: watermanpaint
X-VCS-Committer-Name: Lucio Sauer
X-VCS-Revision: bf6fdb43eb7155c0ca1e4b40f2d0e0a4a0732adf
X-VCS-Branch: master
Date: Mon, 28 Jul 2025 11:50:57 +0000 (UTC)
Precedence: bulk
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-Archives-Salt: 8d412514-456e-49be-8813-85b2ca719497
X-Archives-Hash: ef0e16f5700da24cb64e94e68212186e

commit:     bf6fdb43eb7155c0ca1e4b40f2d0e0a4a0732adf
Author:     Sergey Alirzaev riseup net>
AuthorDate: Sun Jul 27 22:59:07 2025 +0000
Commit:     Lucio Sauer posteo net>
CommitDate: Sun Jul 27 22:59:07 2025 +0000
URL:        https://gitweb.gentoo.org/repo/proj/guru.git/commit/?id=bf6fdb43

sci-misc/llama-cpp: + opencl backend, bump

Signed-off-by: Sergey Alirzaev riseup.net>

 sci-misc/llama-cpp/Manifest                        |  2 +-
 sci-misc/llama-cpp/llama-cpp-0_pre5857.ebuild      |  5 ++-
 ...0_pre5821.ebuild => llama-cpp-0_pre6002.ebuild} | 51 ++++++++++++----------
 sci-misc/llama-cpp/llama-cpp-9999.ebuild           |  5 ++-
 4 files changed, 36 insertions(+), 27 deletions(-)

diff --git a/sci-misc/llama-cpp/Manifest b/sci-misc/llama-cpp/Manifest
index 6f878fb6e4..dc5a4e2df1 100644
--- a/sci-misc/llama-cpp/Manifest
+++ b/sci-misc/llama-cpp/Manifest
@@ -2,5 +2,5 @@ DIST llama-cpp-0_pre4576.tar.gz 20506059 BLAKE2B 8f011811e4df1f8d0c26b19f96a7099
 DIST llama-cpp-0_pre5097.tar.gz 21018571 BLAKE2B 001241580964aa6874a3aa4dbfa0a8cda58a144578992f6a6df7c5c7887cda847503f47c7f3be7b19bb3758ab6ce8de60435e29129cac71672160b29b1cab340 SHA512 86543cd001014fa4fee01a37d46e1794c2ffac7c25c7ed328aa4afd3d615b7f42b617ca5d8a0a78b5a41e31cb81184fc6f55f58ffd9433acb3f36cb947a620a5
 DIST llama-cpp-0_pre5332.tar.gz 21140774 BLAKE2B a390d4c1c6902d90d1e779291e1fcbe69ab57eb35a5df0be6fb3d9edc88b086a18bcf48983b3c0b2e88d0cfaaddbfdeee74fb126b8a758547836f5b83dd4bc33 SHA512 c19c3a6b47684f9466e2872aa67d8516add69028c4fdc7d1abb7a0ff7d87b92adfdaf773cda87461be8e891285c6de34a4edca70244936e8efaf10cc02126a8d
 DIST llama-cpp-0_pre5633.tar.gz 24986657 BLAKE2B 6215dbfea54cb23a57419cc5a530be5622ec834c6d005337bcf92c50e152979375592088e215845e8f07c6b3f7eec15132cd15ebf9b0725adabe499951ae4735 SHA512 11a1917eb86c7065ea901cb62bdc7a25d8d7b962358570c2c7ae0c2d7abce6d19ebc6af74512593ebafbb4ee23546128cf8bfee5ba769c4f3cd2e254cdc1a1a4
-DIST llama-cpp-0_pre5821.tar.gz 25019017 BLAKE2B 5bf7e168a690ac02aee17dd72469481db3b7c61db990407596a99f814eef1737e9c83aae18ef27d3cd3cca01159104e702ed114cd28c1291aea03422a0b5c0f2 SHA512 7aed0a1a29bb4096d67f781299bf48718021f5a0916451a9bdaada2ac1181cc84cbaeab43811e12c13a10beb0d23f0897cfb5f2f26929a166dfd50d90d026d37
 DIST llama-cpp-0_pre5857.tar.gz 25037397 BLAKE2B c5b9105ace7b66341b9dff32d3246f38e056097f2024df1919be2f7ac516ba37caa534aa521e5eb7717963b2df8a5fbe72663d829e0e67a0883edcbdb1b124d7 SHA512 1f91c4b11091a3ede785d5df1a0ab22360bafb36a0b7ee19ce70331bc36bae862ea52f2f0a5c8a4494022c37c8f363e850eb98d74ba910276267a7b5b4f927ed
+DIST llama-cpp-0_pre6002.tar.gz 25253850 BLAKE2B 0767feea94598bbe6ea4a27b20dfda9c3423a9a6c6e4fc66f464c58fda2b13becdd703789604f7a2b424e335ddbe7eef3ee9ee33415ce135ed6eda2456578cca SHA512 57777dc0bfc7386daac6d0ea677d9c56e0db2feac7fa83f2468a057fca5d5ba829bb332ee64ba6049c969cdd7c2eca7501fe401f1bd5f4e8728ce453b160511f

diff --git a/sci-misc/llama-cpp/llama-cpp-0_pre5857.ebuild b/sci-misc/llama-cpp/llama-cpp-0_pre5857.ebuild
index 3f39aebf74..99dc17ab50 100644
--- a/sci-misc/llama-cpp/llama-cpp-0_pre5857.ebuild
+++ b/sci-misc/llama-cpp/llama-cpp-0_pre5857.ebuild
@@ -23,7 +23,7 @@ HOMEPAGE="https://github.com/ggml-org/llama.cpp"
 LICENSE="MIT"
 SLOT="0"
 CPU_FLAGS_X86=( avx avx2 f16c )
-IUSE="curl openblas blis hip cuda vulkan"
+IUSE="curl openblas blis hip cuda opencl vulkan"
 REQUIRED_USE="?? ( openblas blis )"
 
 # curl is needed for pulling models from huggingface
@@ -38,10 +38,12 @@ CDEPEND="
 	cuda? ( dev-util/nvidia-cuda-toolkit:= )
 "
 DEPEND="${CDEPEND}
+	opencl? ( dev-util/opencl-headers )
 	vulkan? ( dev-util/vulkan-headers )
 "
 RDEPEND="${CDEPEND}
 	dev-python/numpy
+	opencl? ( dev-libs/opencl-icd-loader )
 	vulkan? ( media-libs/vulkan-loader )
 "
 
@@ -74,6 +76,7 @@ src_configure() {
 		-DBUILD_NUMBER="1"
 		-DGENTOO_REMOVE_CMAKE_BLAS_HACK=ON
 		-DGGML_CUDA=$(usex cuda ON OFF)
+		-DGGML_OPENCL=$(usex opencl ON OFF)
 		-DGGML_VULKAN=$(usex vulkan ON OFF)
 
 		# avoid clashing with whisper.cpp

diff --git a/sci-misc/llama-cpp/llama-cpp-0_pre5821.ebuild b/sci-misc/llama-cpp/llama-cpp-0_pre6002.ebuild
similarity index 75%
rename from sci-misc/llama-cpp/llama-cpp-0_pre5821.ebuild
rename to sci-misc/llama-cpp/llama-cpp-0_pre6002.ebuild
index 297952fc97..99dc17ab50 100644
--- a/sci-misc/llama-cpp/llama-cpp-0_pre5821.ebuild
+++ b/sci-misc/llama-cpp/llama-cpp-0_pre6002.ebuild
@@ -5,7 +5,7 @@ EAPI=8
 
 ROCM_VERSION="6.3"
 
-inherit cmake cuda rocm
+inherit cmake cuda rocm linux-info
 
 if [[ "${PV}" != "9999" ]]; then
 	KEYWORDS="~amd64"
@@ -23,47 +23,42 @@ HOMEPAGE="https://github.com/ggml-org/llama.cpp"
 LICENSE="MIT"
 SLOT="0"
 CPU_FLAGS_X86=( avx avx2 f16c )
-IUSE="curl openblas blis hip cuda vulkan"
+IUSE="curl openblas blis hip cuda opencl vulkan"
 REQUIRED_USE="?? ( openblas blis )"
 
-AMDGPU_TARGETS_COMPAT=(
-	gfx900
-	gfx90c
-	gfx902
-	gfx1010
-	gfx1011
-	gfx1012
-	gfx1030
-	gfx1031
-	gfx1032
-	gfx1034
-	gfx1035
-	gfx1036
-	gfx1100
-	gfx1101
-	gfx1102
-	gfx1103
-	gfx1150
-	gfx1151
-)
-
 # curl is needed for pulling models from huggingface
 # numpy is used by convert_hf_to_gguf.py
 CDEPEND="
 	curl? ( net-misc/curl:= )
 	openblas? ( sci-libs/openblas:= )
 	blis? ( sci-libs/blis:= )
-	hip? ( >=dev-util/hip-6.3:= )
+	hip? ( >=dev-util/hip-6.3:=
+		>=sci-libs/hipBLAS-6.3:=
+	)
 	cuda? ( dev-util/nvidia-cuda-toolkit:= )
 "
 DEPEND="${CDEPEND}
+	opencl? ( dev-util/opencl-headers )
 	vulkan? ( dev-util/vulkan-headers )
 "
 RDEPEND="${CDEPEND}
 	dev-python/numpy
+	opencl? ( dev-libs/opencl-icd-loader )
 	vulkan? ( media-libs/vulkan-loader )
 "
 
+pkg_setup() {
+	if use hip; then
+		linux-info_pkg_setup
+		if linux-info_get_any_version && linux_config_exists; then
+			if ! linux_chkconfig_present HSA_AMD_SVM; then
+				ewarn "To use ROCm/HIP, you need to have HSA_AMD_SVM option enabled in your kernel."
+			fi
+		fi
+
+	fi
+}
+
 src_prepare() {
 	use cuda && cuda_src_prepare
@@ -81,6 +76,7 @@ src_configure() {
 		-DBUILD_NUMBER="1"
 		-DGENTOO_REMOVE_CMAKE_BLAS_HACK=ON
 		-DGGML_CUDA=$(usex cuda ON OFF)
+		-DGGML_OPENCL=$(usex opencl ON OFF)
 		-DGGML_VULKAN=$(usex vulkan ON OFF)
 
 		# avoid clashing with whisper.cpp
@@ -100,6 +96,13 @@ src_configure() {
 		)
 	fi
 
+	if use cuda; then
+		local -x CUDAHOSTCXX="$(cuda_gccdir)"
+		# tries to recreate dev symlinks
+		cuda_add_sandbox
+		addpredict "/dev/char/"
+	fi
+
 	if use hip; then
 		rocm_use_hipcc
 		mycmakeargs+=(

diff --git a/sci-misc/llama-cpp/llama-cpp-9999.ebuild b/sci-misc/llama-cpp/llama-cpp-9999.ebuild
index 3f39aebf74..99dc17ab50 100644
--- a/sci-misc/llama-cpp/llama-cpp-9999.ebuild
+++ b/sci-misc/llama-cpp/llama-cpp-9999.ebuild
@@ -23,7 +23,7 @@ HOMEPAGE="https://github.com/ggml-org/llama.cpp"
 LICENSE="MIT"
 SLOT="0"
 CPU_FLAGS_X86=( avx avx2 f16c )
-IUSE="curl openblas blis hip cuda vulkan"
+IUSE="curl openblas blis hip cuda opencl vulkan"
 REQUIRED_USE="?? ( openblas blis )"
 
 # curl is needed for pulling models from huggingface
@@ -38,10 +38,12 @@ CDEPEND="
 	cuda? ( dev-util/nvidia-cuda-toolkit:= )
 "
 DEPEND="${CDEPEND}
+	opencl? ( dev-util/opencl-headers )
 	vulkan? ( dev-util/vulkan-headers )
 "
 RDEPEND="${CDEPEND}
 	dev-python/numpy
+	opencl? ( dev-libs/opencl-icd-loader )
 	vulkan? ( media-libs/vulkan-loader )
 "
 
@@ -74,6 +76,7 @@ src_configure() {
 		-DBUILD_NUMBER="1"
 		-DGENTOO_REMOVE_CMAKE_BLAS_HACK=ON
 		-DGGML_CUDA=$(usex cuda ON OFF)
+		-DGGML_OPENCL=$(usex opencl ON OFF)
 		-DGGML_VULKAN=$(usex vulkan ON OFF)
 
 		# avoid clashing with whisper.cpp
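Each backend USE flag in the hunks above is wired to a GGML CMake switch via Portage's usex helper, e.g. `-DGGML_OPENCL=$(usex opencl ON OFF)`. A minimal sketch of how those configure arguments come out; `mock_usex` is a hypothetical stand-in for the real usex (which reads the ebuild's USE state, not a shell variable), and the USE set here is just an example:

```shell
#!/bin/sh
# Example USE state: opencl enabled, vulkan disabled (assumption for the demo).
USE="curl opencl"

# mock_usex FLAG YES NO: hypothetical stand-in for Portage's usex helper.
# Prints YES if FLAG appears in $USE, otherwise NO.
mock_usex() {
	case " ${USE} " in
		*" $1 "*) printf '%s' "$2" ;;
		*)        printf '%s' "$3" ;;
	esac
}

# Mirrors the mycmakeargs entries touched by this commit:
printf -- '-DGGML_OPENCL=%s\n' "$(mock_usex opencl ON OFF)"
printf -- '-DGGML_VULKAN=%s\n' "$(mock_usex vulkan ON OFF)"
```

With the example USE set this yields `-DGGML_OPENCL=ON` and `-DGGML_VULKAN=OFF`, which is why a plain `USE=opencl` rebuild is enough to switch the backend on.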