ollama (aarch64) - Get up and running with large language models locally
- Description:
Ollama allows you to run large language models locally. Get up and running
with Llama 3, Mistral, Gemma, and other open source models.
- Architecture:
Built for the aarch64 (ARM64) architecture. It can be used on AWS Graviton instances as well as on Raspberry Pi 4 and newer models.
How to Install ollama (aarch64)
sudo yum -y install https://extras.getpagespeed.com/release-latest.rpm
sudo yum -y install ollama
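Once installed, models are typically driven from the `ollama` CLI (for example, `ollama run llama3`) or over the local REST API that the service exposes. As a minimal sketch, the Python snippet below only constructs a request payload for the `/api/generate` endpoint; the model name `llama3` and the default port `11434` are assumptions for illustration, not part of this package's metadata.

```python
import json

# Default local endpoint the ollama service listens on (assumption:
# the standard port 11434; adjust if your service is configured differently).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build a JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

payload = build_generate_payload("llama3", "Why is the sky blue?")
body = json.dumps(payload)
# To actually send it (requires the ollama service to be running):
#   curl http://localhost:11434/api/generate -d "$body"
print(body)
```

With `stream` set to `False`, the service returns a single JSON response instead of a stream of partial chunks.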
Packages
ollama-0.17.4-1.amzn2023.aarch64 (9.2 MiB)
Changelog
by Danila Vershinin (2026-02-27):
- Added stable tool call indexing for glm47 and qwen3 parsers.
- Improved model parsing performance and reliability.
- Enhanced compatibility with existing model formats.
- Fixed minor bugs in tool call handling.
- Updated documentation for new features and usage.
ollama-0.17.0-1.amzn2023.aarch64 (9.1 MiB)
Changelog
by Danila Vershinin (2026-02-26):
- Initial package (CPU-only)
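The 0.17.4 changelog refers to tool call parsing for the glm47 and qwen3 parsers. For context, Ollama's `/api/chat` endpoint accepts an OpenAI-style `tools` array. The sketch below only builds such a payload; the tool name and parameter schema are made up for the example and are not taken from this package.

```python
import json

def build_chat_payload(model: str, messages: list, tools=None, stream: bool = False) -> dict:
    """Build a JSON payload for Ollama's /api/chat endpoint.

    Entries in `tools` follow the OpenAI-style function schema."""
    payload = {"model": model, "messages": messages, "stream": stream}
    if tools:
        payload["tools"] = tools
    return payload

# Illustrative tool definition (name and parameters are hypothetical).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = build_chat_payload(
    "qwen3",
    [{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[weather_tool],
)
print(json.dumps(payload, indent=2))
```

A model that supports tool calling may answer with a `tool_calls` entry naming the function to invoke rather than plain text; the caller then runs the tool and feeds the result back as a follow-up message.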