Ollama

Run open-source LLMs locally on your machine

Rating: 4.6
Votes: 0
Downloads: 3.5K total
Price: Free (no login needed)

Works With

Claude Code, Cursor, Windsurf, VS Code

About

Ollama makes it easy to run large language models locally. Download and run Llama, Mistral, Gemma, Phi, and dozens of other open-source models with a single command.
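As a minimal sketch of what that looks like once a model has been pulled, the snippet below queries a local Ollama server over its REST API from Python. The model name (llama3.2) and the default port (11434) are assumptions about your local setup:

```python
# Minimal sketch: query a locally running Ollama model over its REST API.
# Assumes the Ollama service is running on its default port (11434) and that
# a model has already been pulled, e.g. with: ollama pull llama3.2
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # example model name; use any model you've pulled
        "prompt": "Summarize what Ollama does in one sentence.",
        "stream": False,      # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```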

No GPU required (though it helps). Models run on your hardware with no data leaving your machine. Perfect for privacy-sensitive work and offline use.

Ollama also exposes an OpenAI-compatible API, so most tools that can talk to an OpenAI-style endpoint work with it without changes.
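For a hedged sketch of that compatibility in practice, the snippet below points the official openai Python client at a local Ollama server; the base URL reflects Ollama's default local endpoint, and the model name is again an assumption about what you have pulled:

```python
# Sketch: use the OpenAI Python client against Ollama's OpenAI-compatible endpoint.
# Assumes Ollama is listening on localhost:11434 and llama3.2 has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

completion = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Why run models locally?"}],
)
print(completion.choices[0].message.content)
```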

Don't lose this

Three weeks from now, you'll want Ollama again. Will you remember where to find it?

Save it to your library and the next time you need Ollama, it's one tap away from any AI app you use. Group it into a bench with the other tools you rely on for that kind of task, and you can pull the whole stack at once.

⚡ Pro tip for geeks: add a-gnt 🤵🏻‍♂️ as a custom connector in Claude or a custom GPT in ChatGPT — one click and your library is right there in the chat. Or, if you’re in an editor, install the a-gnt MCP server and say “use my [bench name]” in Claude Code, Cursor, VS Code, or Windsurf.

a-gnt's Take

Our honest review

Run open-source LLMs locally on your machine. Best for anyone who wants to give their AI assistant access to local, open-source models. It's backed by an active open-source community and verified by the creator. This one just landed in the catalog, so it's worth trying while it's fresh.

Tips for getting started

1. Tap "Get" above, pick your AI app, and follow the steps. Most installs take under 30 seconds.

What's New

Version 1.0.0 (6 days ago)

Initial release

Ratings & Reviews

4.6 out of 5 (3 ratings)

No reviews yet. Be the first to share your experience.

People Who Use This

joey-io