Lightbridge Lab

OllamaMon

Active

Monitor and benchmark your local Ollama instance — model performance at a glance.

AI · Developer Tools · CLI

OllamaMon gives you visibility into your local Ollama setup — which models are loaded, how much memory they’re using, and how fast they’re responding. Useful when you’re running multiple models and want to know what’s happening under the hood.
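OllamaMon's own internals aren't shown here, but the raw data it surfaces comes from Ollama itself: the `GET /api/ps` endpoint lists currently running models, and each entry includes a `size_vram` field (bytes resident in GPU memory). A minimal sketch of turning that payload into a readable summary (the helper name and sample values are illustrative, not OllamaMon's actual code):

```python
# Sketch: summarizing Ollama's GET /api/ps response.
# Each entry in "models" carries size_vram (bytes in GPU memory).

def vram_summary(ps_resp: dict) -> list[str]:
    """One line per running model: name and VRAM usage in GiB."""
    return [
        f'{m["name"]}: {m["size_vram"] / 2**30:.1f} GiB VRAM'
        for m in ps_resp.get("models", [])
    ]

# Payload shaped like a real /api/ps response (values illustrative):
sample = {"models": [{"name": "llama3:8b", "size_vram": 5_368_709_120}]}
print(vram_summary(sample))  # ['llama3:8b: 5.0 GiB VRAM']
```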

What it does

  • Live resource monitoring — VRAM/RAM usage per model
  • Performance benchmarks — tokens/second for each loaded model
  • Model management — see what’s loaded, unload what you don’t need
  • Lightweight — runs as a simple CLI or background process
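The tokens/second benchmark is derived from numbers Ollama already reports: a non-streaming `POST /api/generate` response includes `eval_count` (tokens generated) and `eval_duration` (in nanoseconds). A minimal sketch of that calculation, with an illustrative sample payload rather than OllamaMon's actual implementation:

```python
# Sketch: computing tokens/second from an Ollama /api/generate response.
# eval_duration is reported in nanoseconds, so scale by 1e9 to get seconds.

def tokens_per_second(resp: dict) -> float:
    """Generation throughput from a non-streaming Ollama response."""
    return resp["eval_count"] / resp["eval_duration"] * 1e9

# Payload shaped like a real /api/generate response (values illustrative):
sample = {"model": "llama3", "eval_count": 290, "eval_duration": 2_500_000_000}
print(f"{tokens_per_second(sample):.1f} tok/s")  # 116.0 tok/s
```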