Claw Compactor

14-stage Fusion Pipeline for LLM token compression — reversible compression, AST-aware code analysis

Rating: 0.0 · 0 votes

Downloads: 0 total

Price: Free · Access token required

Works With

Claude Code · Cursor · Windsurf · VS Code · Developer tool

About

{ "@context": "https://schema.org", "@type": "SoftwareApplication", "name": "Claw Compactor", "description": "14-stage Fusion Pipeline for LLM token compression with reversible compression, AST-aware code analysis, and intelligent content routing", "applicationCategory": "DeveloperApplication", "operatingSystem": "Cross-platform", "softwareVersion": "7.0.0", "license": "https://opensource.org/licenses/MIT", "url": "https://github.com/open-compress/claw-compactor", "downloadUrl": "https://github.com/open-compress/claw-compactor", "author": { "@type": "Organization", "name": "OpenClaw", "url": "https://openclaw.ai" }, "offers": { "@type": "Offer", "price": "0", "priceCurrency": "USD" }, "keywords": "token compression, LLM, AI agent, fusion pipeline, reversible compression, AST code analysis, context window optimization" }

-->

Claw Compactor

14-Stage Fusion Pipeline for LLM Token Compression

Build (https://github.com/open-compress/claw-compactor/actions) · Python (https://python.org) · PyPI (https://pypi.org/project/claw-compactor/) · GitHub (https://github.com/open-compress/claw-compactor)

15–82% compression depending on content · Zero LLM inference cost · Reversible · 1600+ tests

Documentation · Architecture · Benchmarks · Quick Start · API

What is Claw Compactor?

Claw Compactor is an open-source LLM token compression engine built around a 14-stage Fusion Pipeline. Each stage is a specialized compressor — from AST-aware code analysis to JSON statistical sampling to simhash-based deduplication — chained through an immutable data flow architecture where each stage's output feeds the next.
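To make that data flow concrete, here is a minimal sketch of the chaining pattern in Python. Everything in it is invented for illustration: the Payload type and the two toy stages are hypothetical, not Claw Compactor's actual API.

from dataclasses import dataclass, replace
from typing import Callable
import re

# Illustrative only: Payload and the toy stages below are invented for
# this sketch and are not Claw Compactor's real API.

@dataclass(frozen=True)  # frozen: every stage returns a new, immutable payload
class Payload:
    text: str
    chars_saved: int = 0  # characters saved, a crude proxy for tokens

Stage = Callable[[Payload], Payload]

def rle_stage(p: Payload) -> Payload:
    # Toy run-length-style stage: collapse runs of blank lines.
    out = re.sub(r"\n{3,}", "\n\n", p.text)
    return replace(p, text=out, chars_saved=p.chars_saved + len(p.text) - len(out))

def trim_stage(p: Payload) -> Payload:
    # Toy token-optimizer stage: drop trailing whitespace on each line.
    out = "\n".join(line.rstrip() for line in p.text.splitlines())
    return replace(p, text=out, chars_saved=p.chars_saved + len(p.text) - len(out))

def run_pipeline(stages: list[Stage], payload: Payload) -> Payload:
    # Each stage's output feeds the next; no stage mutates shared state.
    for stage in stages:
        payload = stage(payload)
    return payload

result = run_pipeline([rle_stage, trim_stage], Payload("a  \n\n\n\n\nb  \n"))
print(result.text, result.chars_saved)

Because each stage only sees the previous stage's output and returns a fresh payload, stages can be reordered, skipped, or benchmarked in isolation, which is the property the benchmark table below exploits.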

Demo

$ claw-compactor benchmark ./my-workspace

  Claw Compactor v7.0 — Fusion Pipeline Benchmark
  ─────────────────────────────────────────────────

  Scanning workspace... 47 files, 234,891 tokens

  Stage Results:
  ┌──────────────────┬──────────┬───────────┬──────────┐
  │ Stage            │ Applied  │ Reduction │ Time     │
  ├──────────────────┼──────────┼───────────┼──────────┤
  │ Cortex           │ 47/47    │ —         │ 12ms     │
  │ Photon           │ 3/47     │ 2.1%      │ 4ms      │
  │ RLE              │ 41/47    │ 8.3%      │ 6ms      │
  │ SemanticDedup    │ 47/47    │ 12.7%     │ 18ms     │
  │ Ionizer          │ 8/47     │ 71.2%     │ 9ms      │
  │ Neurosyntax      │ 23/47    │ 18.4%     │ 31ms     │
  │ TokenOpt         │ 47/47    │ 4.1%      │ 3ms      │
  │ Abbrev           │ 12/47    │ 6.8%      │ 5ms      │
  └──────────────────┴──────────┴───────────┴──────────┘
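The SemanticDedup row above is the simhash-based deduplication stage mentioned earlier. As a rough from-scratch illustration of that technique (not the stage's actual implementation), a simhash fingerprint lets near-duplicate text be caught by Hamming distance instead of exact matching:

import hashlib
from itertools import combinations

# Sketch of simhash fingerprinting; not SemanticDedup's real code.

def simhash(text: str, bits: int = 64) -> int:
    # Each word votes on every bit; the sign of the tally sets the bit.
    votes = [0] * bits
    for word in text.lower().split():
        h = int.from_bytes(hashlib.blake2b(word.encode(), digest_size=8).digest(), "big")
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

docs = {
    "a": "the quick brown fox jumps over the lazy dog",
    "b": "the quick brown fox leaps over the lazy dog",  # near-duplicate of "a"
    "c": "compression pipelines route content by type",
}
fps = {name: simhash(text) for name, text in docs.items()}
for x, y in combinations(fps, 2):
    # Near-duplicates (a, b) land at a much smaller distance than (a, c).
    print(x, y, hamming(fps[x], fps[y]))

In a real dedup pass, chunks whose fingerprints fall within a small Hamming radius would be collapsed into a single representative, which is how a stage like this removes repeated content without byte-for-byte matches.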

Don't lose this

Three weeks from now, you'll want Claw Compactor again. Will you remember where to find it?

Save it to your library, and the next time you need Claw Compactor it's one tap away from any AI app you use. Group it into a bench with the rest of the tools you use for that kind of task, and you can pull the whole stack in at once.

⚡ Pro tip for geeks: add a-gnt 🤵🏻‍♂️ as a custom connector in Claude or a custom GPT in ChatGPT — one click and your library is right there in the chat. Or, if you’re in an editor, install the a-gnt MCP server and say “use my [bench name]” in Claude Code, Cursor, VS Code, or Windsurf.

🤵🏻‍♂️

a-gnt's Take

Our honest review

A 14-stage Fusion Pipeline for LLM token compression, with reversible compression and AST-aware code analysis. Best for anyone who wants to fit more content into their AI assistant's context window. It's completely free and works across most major AI apps. It just landed in the catalog, so it's worth trying while it's fresh.

Tips for getting started

1. Tap "Get" above, pick your AI app, and follow the steps. Most installs take under 30 seconds.

What's New

Version 1.0.0 · 6 days ago

Imported from GitHub

Ratings & Reviews

0.0 out of 5 · 0 ratings

No reviews yet. Be the first to share your experience.