How to install ollama (aarch64)

Get up and running with large language models locally

Install

sudo yum -y install https://extras.getpagespeed.com/release-latest.rpm
sudo yum -y install ollama
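After installation, you would typically start the server and pull a model before first use. A minimal sketch, assuming the RPM ships a systemd unit named `ollama` (verify with `systemctl list-unit-files | grep ollama` if unsure); the model name `llama3` is only an example:

```shell
# Enable and start the Ollama server (assumes the package installs an `ollama` systemd unit)
sudo systemctl enable --now ollama

# Download an example model and open an interactive chat session
ollama pull llama3
ollama run llama3
```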

Description

Ollama allows you to run large language models locally. Get up and running with Llama 3, Mistral, Gemma, and other open source models.
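Besides the `ollama` CLI, a running server also answers HTTP requests on localhost, which is handy for scripting. A minimal sketch using only the standard library, assuming the server is listening on Ollama's default address `127.0.0.1:11434` and the chosen model (here `llama3`, an example name) has already been pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://127.0.0.1:11434/api/generate"

def build_payload(model, prompt):
    """Build the JSON body for a non-streaming /api/generate request."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model, prompt):
    """Send a prompt to the locally running Ollama server; return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires the ollama service to be running and the model pulled beforehand
    print(generate("llama3", "Why is the sky blue?"))
```

With `stream` set to `False` the server returns the whole completion in one JSON object; omit it to receive a stream of newline-delimited JSON chunks instead.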

RPMs

Danila Vershinin (2026-04-24)
- Hardened OpenClaw onboarding flow for improved security.
- Enhanced user experience during the onboarding process.

Danila Vershinin (2026-04-22)
- Updated the top recommended model from kimi-k2.5 to k2.6.
- Improved model performance and compatibility with the latest features.

Danila Vershinin (2026-04-18)
- Added Hermes agent for automated skill creation.
- Supported Gemma 4 on MLX for Apple Silicon.
- Integrated GitHub Copilot CLI with ollama launch.
- OpenCode now uses inline configuration.
- ollama launch no longer rewrites unchanged config.
- Fixed non-interactive setup for the openclaw command.
- Resolved a Gemma 4 compiler error affecting Metal builds.

Danila Vershinin (2026-04-14)
- Improved quality for gemma:e2b and gemma:e4b with thinking disabled.
- Updated ROCm to version 7.2.1 for Linux.

Danila Vershinin (2026-04-13)
- Improved tool calling with Gemma 4.
- Fixed various application bugs.
- Enhanced parallel tool calling performance.
- Updated documentation for the Hermes Agent.

Danila Vershinin (2026-04-11)
- Added OpenClaw channel setup for messaging apps.
- Enabled flash attention for Gemma 4 on compatible GPUs.
- Improved detection of curl-based OpenCode installs.
- Fixed the /save command for safetensors-based models.

Danila Vershinin (2026-04-09)
- Improved M5 performance with NAX.
- Enabled flash attention for gemma4.

Danila Vershinin (2026-04-08)
- Improved Gemma 4 tool calling.
- Added the latest models to the Ollama App.
- Fixed OpenClaw issues with TUI launching.

Danila Vershinin (2026-04-05)
- Default app home view set to new chat.
- Improved user experience on application launch.

Danila Vershinin (2026-04-03)
- Added support for Effective 2B and 4B models.
- Introduced 26B and 31B model options.
- Updated documentation for PI features.
- Tokenizer now respects the add_bos_token setting.
- Added SentencePiece-style BPE support.