
Guardrails AI

Add safety guardrails to LLM applications

Rating: 4.7 (0 votes)

Downloads: 1.2K total

Price: Free (no login needed)

Works With: Claude Code, Cursor, Windsurf, VS Code (developer tool)

About

Guardrails AI is a Python framework for adding input/output guardrails to LLM applications. Detect and mitigate risks like PII leakage, toxic language, hallucination, and prompt injection.

It validates LLM inputs and outputs against configurable rules, which is essential for production AI applications.

Install via pip. Open-source with a marketplace of pre-built validators.
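To make the validation idea concrete, here is a minimal stdlib-only sketch of what an output guardrail does: run a generated response through a configurable check (a regex-based email/PII detector in this example) before it reaches the user. This is an illustration of the concept only, not Guardrails AI's actual API; the framework's real `Guard` and validator interfaces are documented in the project's own docs, and the function and rule names below are made up for this sketch.

```python
import re

# Hypothetical PII rule: flag email addresses in an LLM's output.
# Real Guardrails AI validators are pre-built and configurable; this
# regex check only illustrates the pass/fail validation pattern.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def check_pii(text: str) -> dict:
    """Return a pass/fail verdict for email-address PII in `text`."""
    matches = EMAIL_RE.findall(text)
    if matches:
        return {"passed": False, "reason": f"PII detected: {matches}"}
    return {"passed": True, "reason": None}

# An application would gate the LLM response on the verdict:
verdict = check_pii("Contact me at alice@example.com")
print(verdict["passed"])  # this output fails the guardrail
```

In the real framework, checks like this are shipped as reusable validators you attach to a guard, so input screening (e.g. prompt-injection detection) and output screening (PII, toxicity, hallucination) share one configuration.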

Don't lose this

Three weeks from now, you'll want Guardrails AI again. Will you remember where to find it?

Save it to your library and the next time you need Guardrails AI, it’s one tap away — from any AI app you use. Group it into a bench with the rest of the team for that kind of task and you can pull the whole stack at once.

⚡ Pro tip for geeks: add a-gnt 🤵🏻‍♂️ as a custom connector in Claude or a custom GPT in ChatGPT — one click and your library is right there in the chat. Or, if you’re in an editor, install the a-gnt MCP server and say “use my [bench name]” in Claude Code, Cursor, VS Code, or Windsurf.

🤵🏻‍♂️

a-gnt's Take

Our honest review

Add safety guardrails to LLM applications. Best for anyone looking to strengthen their AI assistant's security. It's backed by an active open-source community and verified by the creator. This one just landed in the catalog — worth trying while it's fresh.

Tips for getting started

1. Tap "Get" above, pick your AI app, and follow the steps. Most installs take under 30 seconds.

What's New

Version 1.0.0 · 6 days ago

Initial release

Ratings & Reviews

4.7 out of 5 (27 ratings)

No reviews yet. Be the first to share your experience.