
Your Tools, Your Skills, Anywhere You Use AI — Across Providers

a-gnt Community · 5 min read

Your .claude folder is locked to one project on one machine. Your a-gnt library isn't. Here's how to call your skills, souls, and benches from any AI client that speaks MCP — and from OpenAI too.

Here's a thing that's quietly true and almost nobody talks about: the souls in your .claude/agents/ folder and the skills in your .claude/skills/ folder are locked to the one project and the one provider you installed them in. Open a different repo on the same machine — gone. Open ChatGPT in a browser tab — gone. Open Cursor, Zed, a Gemini session, a local OpenAI-compatible runtime — all gone. You wrote them once, and they live in exactly one place, and every time you switch tools you start from zero again.

That is a ridiculous state of affairs, and it's the problem a-gnt was built to fix.

The short version

If you save your tools, skills, souls, prompts, and benches to your a-gnt library, you can call them from any AI client that speaks MCP — which, in 2026, is most of them — and increasingly from OpenAI-flavored tool-calling too. The same saved bench that fires in Claude Code on your laptop fires in ChatGPT in the browser, in Cursor on a different repo, in a local Ollama setup, and in whatever custom agent you're building against the OpenAI SDK. One library. Many front-ends. You stop rewriting the same skill file into three different folders.

That's the pitch. The rest of this article is how, and why it's actually useful, because the pitch alone sounds like the kind of thing vendors say in slide decks.

What the a-gnt MCP server actually does

a-gnt runs a public MCP server — the endpoint is https://a-gnt.com/api/mcp — and any MCP-speaking client can connect to it. When you connect, the server exposes a handful of tools scoped to your account: browsing your library, activating a tool, activating a bench, searching the catalog, getting tool details, submitting new items. When your AI calls one of those tools, the server returns the actual content — the skill body, the soul file, the bench definition — and the client's model reads it like any other context.

That means: when you tell Claude Code "use my commit skill" and Claude Code is connected to the a-gnt MCP server, it calls activate_tool, the server hands back the skill body, and Claude reads it and uses it. Same conversation, different machine, different project, same skill. The skill doesn't live in .claude/skills/ on every laptop you own — it lives on a-gnt once and gets pulled down on demand.
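If you're curious what that exchange looks like on the wire, here's a minimal sketch of the JSON-RPC request an MCP client sends when it invokes a server-side tool. The tools/call method and envelope come from the MCP spec; the argument key ("name": "commit") is a guess at the a-gnt server's schema, not something documented here, so check the server's tools/list output for the real shape.

```python
import json

MCP_ENDPOINT = "https://a-gnt.com/api/mcp"  # the public endpoint named above

def mcp_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, the message shape an
    MCP client sends when it invokes a tool on the server."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(payload)

# "activate_tool" is the tool named in this article; the "name" argument
# key is an assumption about the server's input schema.
request_body = mcp_call("activate_tool", {"name": "commit"})
print(request_body)
```

The client POSTs something like that body to the endpoint and the server answers with the skill content wrapped in a standard MCP result.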

MCP is the standard. OpenAI-flavored tool-calling is the bridge.

MCP (Model Context Protocol) is the Anthropic-originated standard for how a model client talks to an external tool server. It's open, it's documented, and at this point it's supported across Claude, Cursor, Zed, Windsurf, and a growing pile of local runtimes. If your client speaks MCP, connecting to a-gnt takes one config line. That's the easy case.

The harder case is the OpenAI side of the house. ChatGPT and the OpenAI API don't natively speak MCP (yet); they speak OpenAI's own function-calling format. That's fine — the two are translatable, and there are small proxies and adapters that expose an MCP server as an OpenAI-compatible tool list. You point your OpenAI agent at the adapter, the adapter points at a-gnt.com/api/mcp, and from the model's perspective it's just calling activate_tool like any other function. Nothing about your library has to change. The model on the other end doesn't have to be Claude. It just has to be something that knows how to call tools.
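To make "the two are translatable" concrete, here's a sketch of the per-tool mapping such an adapter performs: it reshapes an MCP tool descriptor into OpenAI's function-calling format. The MCP field names ("name", "description", "inputSchema") follow the MCP spec; the sample descriptor is hypothetical, not a real a-gnt schema.

```python
def mcp_tool_to_openai(mcp_tool: dict) -> dict:
    """Translate one MCP tool descriptor into an OpenAI function-calling
    tool entry. Both sides describe parameters with JSON Schema, so the
    schema passes through unchanged; only the envelope differs."""
    return {
        "type": "function",
        "function": {
            "name": mcp_tool["name"],
            "description": mcp_tool.get("description", ""),
            "parameters": mcp_tool.get(
                "inputSchema", {"type": "object", "properties": {}}
            ),
        },
    }

# A hypothetical descriptor, shaped like what tools/list might return:
activate_tool = {
    "name": "activate_tool",
    "description": "Return the body of a saved a-gnt library item",
    "inputSchema": {
        "type": "object",
        "properties": {"name": {"type": "string"}},
    },
}
print(mcp_tool_to_openai(activate_tool)["function"]["name"])
```

Because both formats describe parameters with JSON Schema, the adapter is mostly envelope-shuffling, which is why these bridges stay small.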

In practice, for most users, that means: Claude works natively, Cursor works natively, local MCP clients work natively, and OpenAI/ChatGPT works through a thin adapter that's a one-time setup. Once the plumbing is in place, you stop thinking about which client you're using and start thinking about which skill you need.

What "across providers" actually buys you

I want to put a concrete picture on this because it's the part that doesn't land until you've lived it.

Yesterday. You're in Claude Code on your main project. You use your commit skill six times. You use your review soul twice. You use your changelog prompt once. You close the laptop.

Today. You're away from your desk. You have a quick code question on your phone. You open ChatGPT. ChatGPT has no idea your commit skill exists. No idea your review soul exists. You explain what you want, from scratch, and ChatGPT gives you generic advice because that's all it has. You close the app, mildly annoyed.

Now imagine you had a-gnt wired up on both sides. Yesterday's session pulled your tools from the library. Today's ChatGPT session, connected through the adapter, pulls the same tools from the same library. You ask your question and the model says "do you want me to use your 'commit' skill? You've used it on this kind of change before." The skill body loads, the answer is shaped by your conventions, and you get an answer that sounds like your project instead of the internet.

That's what "across providers" actually means in practice. Not "your stuff follows you around" as a marketing line — literally, the same file gets read by a different model, and the output matches the house style you already established.

The specific things you save once and use everywhere

  • Skills — playbooks with instructions and optional reference files. Commit conventions, release processes, review checklists, changelog formats, whatever repeated thing you do.
  • Souls — characters with voice, worldview, and tool scope. Debugging ducks, cranky reviewers, patient teachers, specific-to-your-stack experts.
  • Prompts — short, surgical instructions that work well as /slash-commands. Turn-a-diff-into-a-changelog, write-the-release-notes, explain-this-error.
  • Tools — full tool definitions (MCP servers you'd want to connect to, configured the way you like).
  • Benches — bundles of the above, wired together for one kind of work. A "weekend prototyping" bench. A "production debugging" bench. A "content writing" bench.

Every one of these has a permalink on a-gnt and a corresponding activate_ call on the MCP server. The activation returns the file your client needs. That's the whole protocol.
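For illustration: an MCP tools/call result wraps the returned file in a content envelope, so the client-side unwrapping can be this small. The {"content": [{"type": "text", ...}]} shape is the standard MCP result format; the skill body shown is made up.

```python
def extract_text(result: dict) -> str:
    """Pull the text payload out of an MCP tools/call result. The content
    envelope is standard MCP; what's inside is whatever file you saved."""
    parts = [
        c["text"] for c in result.get("content", []) if c.get("type") == "text"
    ]
    return "\n".join(parts)

# A hypothetical activate response carrying a saved skill body:
response = {
    "content": [
        {"type": "text", "text": "# Commit skill\nUse conventional commits."}
    ]
}
print(extract_text(response))
```

The client hands that text to the model as context, and from there it reads like any other instruction file.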

The part where I sell you the obvious thing

If you don't have an a-gnt account yet, go make one and save one skill to your library — whichever one you've been copy-pasting between projects the most. Then wire up the MCP connection in Claude Code (claude mcp add a-gnt https://a-gnt.com/api/mcp, or the equivalent entry in your .claude/settings.json). Open a fresh project, any project, one that has no .claude/skills/ folder at all. Ask Claude to use your skill by name. Watch it work.
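If you'd rather script the settings edit than type it, a sketch like this works, with the caveat that the exact key names ("mcpServers", "type": "http") are assumptions based on common Claude Code configs; verify them against your client's documentation before relying on them.

```python
import json
from pathlib import Path

# Assumed config shape -- check your client's docs for the real key names.
entry = {
    "mcpServers": {
        "a-gnt": {
            "type": "http",
            "url": "https://a-gnt.com/api/mcp",
        }
    }
}

# Stand-in path for illustration; point this at your real settings file.
config_path = Path("mcp-config-example.json")
config_path.write_text(json.dumps(entry, indent=2))
print(config_path.read_text())
```

In practice you'd merge this entry into the existing file rather than overwrite it, which is exactly what the claude mcp add command does for you.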

That's the moment the thing clicks. Your workflow stops being locked to one folder on one machine. It becomes a library you carry with you, a personal toolbox that follows you across every client you touch.

The .claude/ folder is still great. Keep using it for the project-specific pieces that really do belong inside the repo. But for everything that's about you instead of about this project — your voice, your conventions, your debugging habits, your favorite prompts — put it in your a-gnt library and stop maintaining it in five places.

One library. Many front-ends. Same you. That's the whole pitch.

Go save the thing.
