BDEPEND=app-alternatives/ninja >=dev-build/cmake-3.28.5
DEFINED_PHASES=compile configure install prepare test
DEPEND=curl? ( net-misc/curl:= ) sci-ml/ggml
DESCRIPTION=Port of Facebook's LLaMA model in C/C++
EAPI=8
HOMEPAGE=https://github.com/ggml-org/llama.cpp
IUSE=+curl +server +amdgpu_targets_gfx908 +amdgpu_targets_gfx90a +amdgpu_targets_gfx942 +amdgpu_targets_gfx1030 +amdgpu_targets_gfx1100 +amdgpu_targets_gfx1101 +amdgpu_targets_gfx1200 +amdgpu_targets_gfx1201 amdgpu_targets_gfx803 amdgpu_targets_gfx900 amdgpu_targets_gfx906 amdgpu_targets_gfx940 amdgpu_targets_gfx941 amdgpu_targets_gfx1010 amdgpu_targets_gfx1011 amdgpu_targets_gfx1012 amdgpu_targets_gfx1031 amdgpu_targets_gfx1102 amdgpu_targets_gfx1103 amdgpu_targets_gfx1150 amdgpu_targets_gfx1151
KEYWORDS=~amd64
LICENSE=MIT
RDEPEND=curl? ( net-misc/curl:= ) dev-python/numpy
SLOT=0
SRC_URI=https://github.com/ggml-org/llama.cpp/archive/refs/tags/b6644.tar.gz -> llama-cpp-0.6644.tar.gz
_eclasses_=cmake 62d01e4ddde33c9129a86c3c7a3a0074 flag-o-matic a7afe42e95fb46ce9691605acfb24672 multiprocessing 1e32df7deee68372153dca65f4a7c21f ninja-utils 2df4e452cea39a9ec8fb543ce059f8d6 rocm 922af7ff86e77b32bf6eba22cf6537e6 toolchain-funcs 98d9f464d912ae6b7316fb8a3721f5db xdg-utils 42869b3c8d86a70ef3cf75165a395e09
_md5_=24e3a1c3f52f27a7f7d5595914d7f9fe