scpr
scpr is a simple and straightforward web-scraping CLI tool that scrapes pages as Markdown content. It is designed to be used both by humans and by coding agents (either as an MCP server or as a skill).
scpr is written in Go and is based on colly for web scraping and on `html-to-markdown` for converting HTML pages to Markdown.
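To make the HTML-to-Markdown idea concrete, here is a minimal Python sketch of the same kind of conversion that `html-to-markdown` performs. This is an illustration only, not scpr's actual Go implementation, and it handles just a tiny subset of HTML (headings, paragraphs, links):

```python
from html.parser import HTMLParser

class TinyMarkdown(HTMLParser):
    """Convert a small subset of HTML (h1, p, a) to Markdown."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.out.append("# ")
        elif tag == "a":
            # Remember the link target so the closing tag can emit it.
            self.href = dict(attrs).get("href", "")
            self.out.append("[")

    def handle_endtag(self, tag):
        if tag in ("h1", "p"):
            self.out.append("\n\n")
        elif tag == "a":
            self.out.append(f"]({self.href})")

    def handle_data(self, data):
        self.out.append(data)

def to_markdown(html: str) -> str:
    parser = TinyMarkdown()
    parser.feed(html)
    return "".join(parser.out).strip()

page = '<h1>Example</h1><p>See <a href="https://example.com">this site</a>.</p>'
print(to_markdown(page))
# → # Example
#
#   See [this site](https://example.com).
```

A real converter such as `html-to-markdown` handles far more of HTML (lists, tables, emphasis, nesting), but the shape is the same: walk the parse tree and emit the equivalent Markdown syntax.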
Installation
Install with Go (v1.24+ required):

```shell
go install github.com/AstraBert/scpr
```

Install with NPM:

```shell
npm install @cle-does-things/scpr
```

Extra instructions for Windows installation
If you are on Windows, scpr might not be available right after a global installation with npm. In that case, you might need to take a few extra steps:

1. Find where the `node` executable is stored on your machine:

   ```shell
   Get-Command node
   ```

   This will print the directory where `node.exe` is stored: scpr will be installed at `.\bin\scpr.exe` in that folder.

   > [!NOTE]
   > _If you are using nvm for Windows, `node.exe` will be at `C:\Users\nvm4w\nodejs`_

2. Add `{NODE_FOLDER}\bin` (in the case of nvm: `C:\Users\nvm4w\nodejs\bin`) to the PATH environment variable. Follow this guide for instructions on how to set PATH environment variables.
3. Restart your computer.
4. Test `scpr --help` from your terminal. The execution might be challenged by your antivirus, but since the executable does not contain any harmful code, the antivirus will eventually allow it.
Usage
As a CLI tool
Basic usage (scrape a single page):

```shell
scpr --url https://example.com --output ./scraped
```

This will scrape the page and save it as a Markdown file in the ./scraped folder.
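The README does not document how scpr names the saved Markdown files. Purely as an illustration, here is one hypothetical way a scraper might derive a filename from a URL; scpr's actual naming scheme may differ:

```python
from urllib.parse import urlparse

def markdown_filename(url: str) -> str:
    """Hypothetical URL-to-filename mapping (not scpr's documented behavior)."""
    parts = urlparse(url)
    # Join host and path into a flat slug, falling back to "index" for bare roots.
    slug = (parts.netloc + parts.path).strip("/").replace("/", "-") or "index"
    return slug + ".md"

print(markdown_filename("https://example.com/docs/intro"))
# → example.com-docs-intro.md
```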
Recursive scraping

To scrape a page and all linked pages within the same domain:

```shell
scpr --url https://example.com --output ./scraped --recursive --allowed example.com --max 3
```

Parallel scraping

Speed up recursive scraping with multiple threads:

```shell
scpr --url https://example.com --output ./scraped --recursive --allowed example.com --max 2 --parallel 5
```

Additional options

- `--log` - Set logging level (info, debug, warn, error)
- `--max` - Maximum depth of pages to follow (default: 1)
- `--parallel` - Number of concurrent threads (default: 1)
- `--allowed` - Allowed domains for recursive scraping (can be specified multiple times)
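The interplay of `--recursive`, `--allowed`, `--max`, and `--parallel` can be pictured as a depth-limited, domain-filtered, concurrent crawl. The Python sketch below models that behavior on a synthetic link graph (a dict standing in for real network fetches); it is my reading of the flags' semantics, not scpr's implementation, which is Go code built on colly:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.parse import urlparse

# Synthetic link graph standing in for real pages (no network needed).
LINKS = {
    "https://example.com/": ["https://example.com/a", "https://other.org/x"],
    "https://example.com/a": ["https://example.com/b"],
    "https://example.com/b": [],
}

def crawl(start: str, allowed: set, max_depth: int, parallel: int) -> list:
    """Breadth-first crawl: follow links up to max_depth levels,
    staying inside the allowed domains, fetching each level concurrently."""
    seen = {start}
    frontier = [start]
    visited = []
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        for _ in range(max_depth):
            if not frontier:
                break
            # "Fetch" the whole frontier concurrently (here: a dict lookup).
            results = pool.map(lambda u: (u, LINKS.get(u, [])), frontier)
            next_frontier = []
            for url, links in results:
                visited.append(url)
                for link in links:
                    # Skip already-seen pages and disallowed domains.
                    if link not in seen and urlparse(link).netloc in allowed:
                        seen.add(link)
                        next_frontier.append(link)
            frontier = next_frontier
    return visited

print(crawl("https://example.com/", {"example.com"}, max_depth=2, parallel=5))
# → ['https://example.com/', 'https://example.com/a']
```

Note how `https://other.org/x` is never visited (its domain is not allowed) and `https://example.com/b` is cut off by the depth limit of 2.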
For more details, run:

```shell
scpr --help
```

As a stdio MCP server
Start the MCP server with:

```shell
scpr mcp
```

And configure it in agents using:
```json
{
  "mcpServers": {
    "web-scraping": {
      "type": "stdio",
      "command": "scpr",
      "args": [
        "mcp"
      ],
      "env": {}
    }
  }
}
```

_The above JSON snippet is reported as used by Claude Code; adapt it to your agent before using it._
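If you manage agent configuration files programmatically, the same snippet can be generated rather than hand-written. A small sketch (where exactly the file lives depends on your agent, so that part is left out):

```python
import json

# Build the MCP server entry for scpr as a plain dict, then serialize it.
config = {
    "mcpServers": {
        "web-scraping": {
            "type": "stdio",
            "command": "scpr",
            "args": ["mcp"],
            "env": {},
        }
    }
}

print(json.dumps(config, indent=2))
```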
Contributing
a-gnt's Take
Our honest review
This plugs directly into your AI and gives it new abilities it didn't have before: a web-scraper CLI and MCP server built for humans and coding agents. Once connected, just ask your AI to use it. It's completely free and works across most major AI apps. This one just landed in the catalog, so it's worth trying while it's fresh.
Tips for getting started
Tap "Get" above, pick your AI app, and follow the steps. Most installs take under 30 seconds.
What's New
Imported from GitHub