Langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground
Price
Free
No login needed
About
Langfuse Is Doubling Down On Open Source
Langfuse Cloud · Self Host · Demo
Docs · Report Bug · Feature Request · Changelog · Roadmap
Langfuse uses GitHub Discussions for Support and Feature Requests.
We're hiring. Join us in product engineering and technical go-to-market roles.
Proudly made with ClickHouse open source database
Langfuse is an open source LLM engineering platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested.
[Watch demo](https://langfuse.com/watch-demo)
✨ Core Features
- LLM Application Observability: Instrument your app and start ingesting traces into Langfuse to track LLM calls and other relevant logic such as retrieval, embedding, or agent actions. Inspect and debug complex logs and user sessions. Try the interactive demo to see this in action.
- Prompt Management: Centrally manage, version-control, and collaboratively iterate on your prompts. Thanks to strong server- and client-side caching, you can iterate on prompts without adding latency to your application.
- Evaluations are key to the LLM application development workflow, and Langfuse adapts to your needs. It supports LLM-as-a-judge, user feedback collection, manual labeling, and custom evaluation pipelines via APIs/SDKs.
- Datasets enable test sets and benchmarks for evaluating your LLM application. They support continuous improvement, pre-deployment testing, structured experiments, flexible evaluation, and seamless integration with frameworks like LangChain and LlamaIndex.
- LLM Playground is a tool for testing and iterating on your prompts and model configurations, shortening the feedback loop and accelerating development. When you see a bad result in tracing, you can directly jump to the playground to iterate on it.
- Comprehensive API: Langfuse's API exposes its building blocks and is frequently used to power bespoke LLMOps workflows. An OpenAPI spec, a Postman collection, and typed SDKs for Python and JS/TS are available.
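As a rough illustration of the observability model described above, a trace groups the steps of one request (LLM calls, retrieval, agent actions) into inspectable observations. This is a hypothetical toy sketch, not the actual Langfuse SDK; the class and field names here are invented for illustration, and the official docs cover the real client:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Observation:
    """One step inside a trace, e.g. an LLM generation or a retrieval call."""
    name: str
    kind: str                      # e.g. "generation", "retrieval", "span"
    input: Optional[str] = None
    output: Optional[str] = None

@dataclass
class Trace:
    """Groups all observations belonging to a single user request."""
    name: str
    observations: list = field(default_factory=list)

    def log(self, obs: Observation) -> None:
        self.observations.append(obs)

# One user request becomes one trace with two recorded steps.
trace = Trace(name="answer-user-question")
trace.log(Observation(name="fetch-docs", kind="retrieval", input="query"))
trace.log(Observation(name="llm-call", kind="generation", input="prompt", output="answer"))
```

In the real platform, ingested traces like this are what you inspect and debug in the UI; the SDKs handle batching and sending them for you.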
📦 Deploy Langfuse
Langfuse Cloud
Managed deployment by the Langfuse team; generous free tier, no credit card required.
Self-Host Langfuse
Run Langfuse on your own infrastructure:
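The self-hosting docs list several deployment options; a common quick start (per the Langfuse repository, assuming Docker is installed) is the bundled Docker Compose setup:

```shell
# Clone the Langfuse repository and start all services locally.
git clone https://github.com/langfuse/langfuse.git
cd langfuse

# Starts the app plus its dependencies (e.g. database) as defined
# in the repo's docker-compose.yml; add -d to run in the background.
docker compose up
```

For production deployments, consult the self-hosting documentation for configuration, scaling, and upgrade guidance rather than relying on this local quick start.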
Don't lose this
Three weeks from now, you'll want Langfuse again. Will you remember where to find it?
Save it to your library, and the next time you need Langfuse it's one tap away from any AI app you use. Group it into a bench with the rest of your toolkit for that kind of task, and you can pull the whole stack at once.
⚡ Pro tip for geeks: add a-gnt as a custom connector in Claude or a custom GPT in ChatGPT; one click and your library is right there in the chat. Or, if you're in an editor, install the a-gnt MCP server and say "use my [bench name]" in Claude Code, Cursor, VS Code, or Windsurf.
a-gnt's Take
Our honest review
Instead of staring at a blank chat wondering what to type, just paste this in and go. 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground. You can tweak the parts in brackets to make it yours. It's completely free and works across most major AI apps. This one just landed in the catalog, so it's worth trying while it's fresh.
Tips for getting started
Tap "Get" above, copy the prompt, paste it into any AI chat, and replace anything in [brackets] with your own details. Hit send β that's it.
You can keep the conversation going after the first response β ask follow-up questions, ask it to change the tone, or go deeper on any part.
What's New
Imported from GitHub
Ratings & Reviews
No reviews yet. Be the first to share your experience.