# Scrapemate

A Golang crawling and scraping framework.
Scrapemate is a web crawling and scraping framework written in Golang. It is designed to be simple and easy to use, yet powerful enough to handle complex scraping tasks.
## Features
- Low-level API and an easy high-level API
- Customizable retry and error handling
- JavaScript rendering with the ability to control the browser
- Screenshot support (when JS rendering is enabled)
- Write your own result exporters
- Write results to multiple sinks
- Default CSV writer
- Caching (file/LevelDB/custom)
- Custom job providers (a memory provider is included)
- Headless and headful support when using JS rendering
- Automatic cookie and session handling
- Rotating HTTP/HTTPS/SOCKS5 proxy support
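Rotating proxy support means requests are spread across the configured proxies. The idea behind it, independent of scrapemate's internals, is a simple round-robin over the proxy list, which can be sketched with the standard library alone (the proxy URLs below are placeholders):

```go
package main

import "fmt"

// proxyRotator hands out proxies round-robin. This is only a sketch of
// the rotation idea, not scrapemate's actual implementation.
type proxyRotator struct {
	proxies []string
	next    int
}

// Next returns the next proxy, wrapping around at the end of the list.
func (r *proxyRotator) Next() string {
	p := r.proxies[r.next%len(r.proxies)]
	r.next++
	return p
}

func main() {
	r := &proxyRotator{proxies: []string{
		"http://proxy-a:8080",
		"socks5://proxy-b:1080",
	}}
	for i := 0; i < 3; i++ {
		fmt.Println(r.Next()) // cycles: proxy-a, proxy-b, proxy-a
	}
}
```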
## Browser Engines

Scrapemate supports two browser engines for JavaScript rendering: Playwright (the default) and Rod. The browser engine is selected at compile time using Go build tags.

### Playwright (default)

Playwright is used when no build tags are specified. It requires the Playwright browsers to be installed:

```bash
# Install the Playwright browsers
go run github.com/playwright-community/playwright-go/cmd/playwright install --with-deps chromium
```

Build and run without any special tags:

```bash
go build ./...
```
### Rod

Rod is a pure Go solution that uses the Chrome DevTools Protocol directly. To use Rod instead of Playwright, compile with the `rod` build tag:

```bash
# Build with Rod support
go build -tags rod ./...

# Run with Rod support
go run -tags rod ./...
```

Rod automatically downloads and manages Chrome/Chromium if it is not already available on your system.
### Choosing between Playwright and Rod

| Feature | Playwright | Rod |
|---------|------------|-----|
| Dependencies | Requires a browser installation step | Auto-downloads the browser |
| Browser support | Chromium, Firefox, WebKit | Chromium only |
| Performance | Slightly higher overhead | Lower overhead, pure Go |
| Docker | Larger image size | Smaller image size |
| API stability | Very stable | Stable |

**Recommendation:** Use Playwright if you need multi-browser support or are already familiar with Playwright. Use Rod if you prefer a pure Go solution with automatic browser management and smaller Docker images.
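The build-tag selection itself is ordinary Go tooling, nothing scrapemate-specific. As a minimal illustration of the mechanism, a file guarded by `//go:build !rod` compiles only in the default build; a sibling file guarded by `//go:build rod` (not shown) would replace it under `-tags rod`. File and function names here are illustrative, not scrapemate's:

```go
//go:build !rod

// This file is compiled only when the "rod" tag is absent, mirroring
// how an engine implementation can be swapped at compile time.
package main

import "fmt"

// engineName reports which engine this build selected.
func engineName() string { return "playwright" }

func main() {
	fmt.Println("engine:", engineName())
}
```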
## Example Usage

The books-to-scrape-simple example demonstrates how to use both browser engines:

```bash
# Run with Playwright (default)
# First install browsers: go run github.com/playwright-community/playwright-go/cmd/playwright install --with-deps chromium
go run . -js

# Run with Rod
go run -tags rod . -js

# Run with Rod in stealth mode
go run -tags rod . -js -stealth
```
## Installation

```bash
go get github.com/gosom/scrapemate
```
## Quickstart

```go
package main

import (
	"context"
	"encoding/csv"
	"fmt"
	"net/http"
	"os"
	"strings"
	"time"

	"github.com/PuerkitoBio/goquery"

	"github.com/gosom/scrapemate"
	"github.com/gosom/scrapemate/adapters/writers/csvwriter"
	"github.com/gosom/scrapemate/scrapemateapp"
)

func main() {
	csvWriter := csvwriter.NewCsvWriter(csv.NewWriter(os.Stdout))

	cfg, err := scrapemateapp.NewConfig(
		[]scrapemate.ResultWriter{csvWriter},
	)
	if err != nil {
		panic(err)
	}

	app, err := scrapemateapp.NewScrapeMateApp(cfg)
	if err != nil {
		panic(err)
	}

	seedJobs := []scrapemate.IJob{
		&SimpleCountryJob{
			Job: scrapemate.Job{
				ID:     "identity",
				Method: http.MethodGet,
				URL:    "https://www.scrapethissite.com/pages/simple/",
				Headers: map[string]string{
					"User-Agent": scrapemate.DefaultUserAgent,
				},
				Timeout:    10 * time.Second,
				MaxRetries: 3,
			},
		},
	}

	err = app.Start(context.Background(), seedJobs...)
	if err != nil && err != scrapemate.ErrorExitSignal {
		panic(err)
	}
}

type SimpleCountryJob struct {
	scrapemate.Job
}

func (j *SimpleCountryJob) Process(ctx context.Context, resp *scrapemate.Response) (any, []scrapemate.IJob, error) {
	doc, ok := resp.Document.(*goquery.Document)
	if !ok {
		return nil, nil, fmt.Errorf("failed to cast response document to goquery document")
	}

	var countries []Country

	doc.Find("div.col-md-4.country").Each(func(i int, s *goquery.Selection) {
		var country Country

		country.Name = strings.TrimSpace(s.Find("h3.country-name").Text())
		country.Capital = strings.TrimSpace(s.Find("div.country-info span.country-capital").Text())
		country.Population = strings.TrimSpace(s.Find("div.country-info span.country-population").Text())
		country.Area = strings.TrimSpace(s.Find("div.country-info span.country-area").Text())

		countries = append(countries, country)
	})

	return countries, nil, nil
}

type Country struct {
	Name       string
	Capital    string
	Population string
	Area       string
}

func (c Country) CsvHeaders() []string {
	return []string{"Name", "Capital", "Population", "Area"}
}

func (c Country) CsvRow() []string {
	return []string{c.Name, c.Capital, c.Population, c.Area}
}
```

Run it:

```bash
go mod tidy
go run main.go 1>countries.csv
```

Hit CTRL-C to exit.
## Migrating from v0.9.x to v1.0.0

Version 1.0.0 introduces a `BrowserPage` interface abstraction to support multiple browser engines. This is a breaking change for users who use JavaScript rendering with `BrowserActions`.
### Update the BrowserActions signature

```go
// Before (v0.9.x)
func (j *MyJob) BrowserActions(ctx context.Context, page playwright.Page) scrapemate.Response {
	page.Goto("https://example.com", playwright.PageGotoOptions{
		WaitUntil: playwright.WaitUntilStateNetworkidle,
	})

	html, _ := page.Content()

	return scrapemate.Response{Body: []byte(html)}
}

// After (v1.0.0)
func (j *MyJob) BrowserActions(ctx context.Context, page scrapemate.BrowserPage) scrapemate.Response {
	resp, err := page.Goto("https://example.com", scrapemate.WaitUntilNetworkIdle)
	if err != nil {
		return scrapemate.Response{Error: err}
	}

	return scrapemate.Response{
		Body:       resp.Body,
		StatusCode: resp.StatusCode,
	}
}
```
### Accessing the underlying browser page

If you need browser-specific features, use `Unwrap()`:

```go
// For Playwright
pwPage := page.Unwrap().(playwright.Page)

// For Rod (when compiled with -tags rod)
rodPage := page.Unwrap().(*rod.Page)
```
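Because `Unwrap()` returns `any`, a single-value type assertion panics if the binary was compiled with the other engine. The two-value assertion form is the safe pattern; it is plain Go, sketched here with a stand-in type (`fakePage` is hypothetical, standing in for `playwright.Page` or `*rod.Page`):

```go
package main

import "fmt"

// fakePage stands in for an engine-specific page type.
type fakePage struct{ name string }

// describe uses the two-value assertion form: ok reports a mismatch
// instead of panicking, so the wrong-engine case can be handled.
func describe(unwrapped any) string {
	if p, ok := unwrapped.(fakePage); ok {
		return "got page: " + p.name
	}
	return "unexpected page type"
}

func main() {
	fmt.Println(describe(fakePage{name: "chromium"})) // got page: chromium
	fmt.Println(describe(42))                         // unexpected page type
}
```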
See CHANGELOG.md for the full list of changes.
## Documentation

You can find more documentation here.

For the high-level API, see this example and read how to use the high-level API.

For the low-level API, see the books.toscrape.com example and the accompanying blog post.

For a real-world example of how you can use scrapemate to scrape Google Maps, see https://github.com/gosom/google-maps-scraper
## Contributing

Contributions are welcome.
## Licence

Scrapemate is licensed under the MIT License. See the LICENCE file.
