
One post tagged with "Ollama"

Articles about running local LLMs with Ollama.


Releasing Ollama Multi-Model Benchmarker: Compare Local LLMs for Free on Google Colab

· 3 min read
hiroaki
Individual Developer

Today we are releasing Ollama Multi-Model Benchmarker, an open-source tool for evaluating local LLMs before committing to a setup. It runs multiple Ollama models sequentially on Google Colab's free T4 GPU and produces a side-by-side comparison of generation speed, responsiveness, model size, and more.
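The core measurement is straightforward: Ollama's `/api/generate` endpoint reports `eval_count` (generated tokens) and `eval_duration` (nanoseconds) in its final response, which yields tokens per second. A minimal sketch of that calculation, assuming a locally running Ollama server at its default port (the `benchmark` helper and model names here are illustrative, not the tool's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def tokens_per_second(stats: dict) -> float:
    """Generation speed from Ollama's response metrics.

    eval_count is the number of generated tokens;
    eval_duration is the generation time in nanoseconds.
    """
    return stats["eval_count"] / (stats["eval_duration"] / 1e9)


def benchmark(model: str, prompt: str) -> dict:
    """Run one non-streaming generation and return timing stats.

    Requires a running Ollama server with the model already pulled.
    """
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        stats = json.load(resp)
    return {
        "model": model,
        "tokens_per_second": tokens_per_second(stats),
        "total_seconds": stats["total_duration"] / 1e9,
    }
```

Running several models through `benchmark` in a loop and tabulating the results is essentially what the tool automates on Colab's T4.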