BDEPEND=cuda? ( dev-util/nvidia-cuda-toolkit ) rocm? ( sci-libs/hipBLAS:0/6.1 dev-libs/rocm-opencl-runtime ) >=dev-lang/go-1.20:= app-arch/unzip
DEFINED_PHASES=compile install postinst unpack
DESCRIPTION=Infer on language models like Llama, Phi, Mistral, Gemma.
EAPI=8
HOMEPAGE=https://ollama.com
IUSE=cuda rocm video_cards_amdgpu cpu_flags_x86_avx cpu_flags_x86_avx2 cpu_flags_x86_avx512 +amdgpu_targets_gfx906 +amdgpu_targets_gfx908 +amdgpu_targets_gfx90a +amdgpu_targets_gfx942 +amdgpu_targets_gfx1030 +amdgpu_targets_gfx1100 amdgpu_targets_gfx803 amdgpu_targets_gfx900 amdgpu_targets_gfx940 amdgpu_targets_gfx941 amdgpu_targets_gfx1010 amdgpu_targets_gfx1011 amdgpu_targets_gfx1012 amdgpu_targets_gfx1031 amdgpu_targets_gfx1101 amdgpu_targets_gfx1102
KEYWORDS=~amd64 ~x86
LICENSE=MIT
RESTRICT=network-sandbox strip
SLOT=0
SRC_URI=https://github.com/ollama/ollama/archive/v0.5.4.tar.gz -> ollama-0.5.4.tar.gz
_eclasses_=flag-o-matic f14aba975c94ccaa9f357a27e3b17ffe go-env 90efbc8636d2f02d9654183330e84cf7 go-module df32d29550d40a92da723d3b8e17b467 multilib b2a329026f2e404e9e371097dda47f96 multiprocessing 1e32df7deee68372153dca65f4a7c21f rocm 826765f795a41b937d1bfe8e709346cd toolchain-funcs 14648d8795f7779e11e1bc7cf08b7536
_md5_=a1d8a21dabdc12134c04599a81e70249