# filesql
日本語 | Русский | 中文 | 한국어 | Español | Français

filesql is a Go SQL driver that enables you to query CSV, TSV, LTSV, JSON/JSONL, Parquet, and Excel (XLSX) files using SQLite3 SQL syntax. Query your data files directly, without any imports or transformations!
Want to try filesql's capabilities? Check out sqly, a command-line tool that uses filesql to execute SQL queries against CSV, TSV, LTSV, and Excel files directly from your shell. It's the perfect way to experience the power of filesql in action!
## Why filesql?
This library was born from the experience of maintaining two separate CLI tools, sqly and sqluv. Both tools shared a common feature: executing SQL queries against CSV, TSV, and other file formats.
Rather than maintaining duplicate code across both projects, we extracted the core functionality into this reusable SQL driver. Now any Go developer can leverage this capability in their own applications!
## Features
- SQLite3 SQL Interface - Use SQLite3's powerful SQL dialect to query your files
- Multiple File Formats - Support for CSV, TSV, LTSV, Parquet, and Excel (XLSX) files
- Compression Support - Automatically handles .gz, .bz2, .xz, .zst, .z, .snappy, .s2, and .lz4 compressed files
- Stream Processing - Efficiently handles large files through streaming with configurable chunk sizes
- Flexible Input Sources - Support for file paths, directories, io.Reader, and embed.FS
- Zero Setup - No database server required; everything runs in-memory
- Auto-Save - Automatically persist changes back to files
- Cross-Platform - Works seamlessly on Linux, macOS, and Windows
- SQLite3 Powered - Built on the robust SQLite3 engine for reliable SQL processing
## Supported File Formats
| Extension | Format | Description |
|-----------|--------|-------------|
| .csv | CSV | Comma-separated values |
| .tsv | TSV | Tab-separated values |
| .ltsv | LTSV | Labeled Tab-separated Values |
| .parquet | Parquet | Apache Parquet columnar format |
| .xlsx | Excel XLSX | Microsoft Excel workbook format |
| .json | JSON | JSON format (use json_extract() for field access) |
| .jsonl | JSONL | JSON Lines format (one JSON object per line) |
| .csv.gz, .tsv.gz, .ltsv.gz, .parquet.gz, .xlsx.gz, .json.gz, .jsonl.gz | Gzip compressed | Gzip compressed files |
| .csv.bz2, .tsv.bz2, .ltsv.bz2, .parquet.bz2, .xlsx.bz2, .json.bz2, .jsonl.bz2 | Bzip2 compressed | Bzip2 compressed files |
| .csv.xz, .tsv.xz, .ltsv.xz, .parquet.xz, .xlsx.xz, .json.xz, .jsonl.xz | XZ compressed | XZ compressed files |
| .csv.zst, .tsv.zst, .ltsv.zst, .parquet.zst, .xlsx.zst, .json.zst, .jsonl.zst | Zstandard compressed | Zstandard compressed files |
| .csv.z, .tsv.z, .ltsv.z, .parquet.z, .xlsx.z, .json.z, .jsonl.z | Zlib compressed | Zlib compressed files |
| .csv.snappy, .tsv.snappy, .ltsv.snappy, .parquet.snappy, .xlsx.snappy, .json.snappy, .jsonl.snappy | Snappy compressed | Snappy compressed files |
| .csv.s2, .tsv.s2, .ltsv.s2, .parquet.s2, .xlsx.s2, .json.s2, .jsonl.s2 | S2 compressed | S2 compressed files (Snappy compatible) |
| .csv.lz4, .tsv.lz4, .ltsv.lz4, .parquet.lz4, .xlsx.lz4, .json.lz4, .jsonl.lz4 | LZ4 compressed | LZ4 compressed files |
| .ach | ACH (NACHA) | Automated Clearing House files (Experimental) |
| .fed | Fedwire | Legacy Fedwire message files (Experimental) |
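
Compressed files need no special handling: pass them to filesql as-is, and both the compression and format extensions are stripped from the table name (the `logs.ltsv.gz` file in Quick Start below becomes the `logs` table). A minimal sketch, assuming a `sales.csv.zst` file exists and a `ctx` as in the Quick Start snippets:

```go
// Zstandard-compressed CSV is decompressed automatically;
// the resulting table is named "sales".
db, err := filesql.OpenContext(ctx, "sales.csv.zst")
if err != nil {
	log.Fatal(err)
}
defer db.Close()

rows, err := db.QueryContext(ctx, "SELECT * FROM sales LIMIT 5")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
```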
## Installation

```bash
go get github.com/nao1215/filesql
```
### Requirements

- Go Version: 1.25 or later
- Operating Systems:
  - Linux
  - macOS
  - Windows
## Quick Start

### Simple Usage

The recommended way to get started is with `OpenContext` for proper timeout handling:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/nao1215/filesql"
)

func main() {
	// Create a context with a timeout for large file operations
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Open a CSV file as a database
	db, err := filesql.OpenContext(ctx, "data.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Query the data (table name = filename without extension).
	// Selecting explicit columns keeps Scan robust if the CSV gains columns.
	rows, err := db.QueryContext(ctx, "SELECT name, age FROM data WHERE age > 25")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	// Process results
	for rows.Next() {
		var name string
		var age int
		if err := rows.Scan(&name, &age); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("Name: %s, Age: %d\n", name, age)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```
### Multiple Files and Formats
```go
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

// Open multiple files at once (including Parquet)
db, err := filesql.OpenContext(ctx, "users.csv", "orders.tsv", "logs.ltsv.gz", "analytics.parquet")
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// Join data across different file formats
rows, err := db.QueryContext(ctx, `
	SELECT u.name, o.order_date, l.event, a.metrics
	FROM users u
	JOIN orders o ON u.id = o.user_id
	JOIN logs l ON u.id = l.user_id
	JOIN analytics a ON u.id = a.user_id
	WHERE o.order_date > '2024-01-01'
`)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
```
### Working with Directories
```go
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

// Load all supported files from a directory (recursive)
db, err := filesql.OpenContext(ctx, "/path/to/data/directory")
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// See which tables are available
rows, err := db.QueryContext(ctx, "SELECT name FROM sqlite_master WHERE type='table'")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
```
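
To see the table names, iterate the result set as with any `database/sql` query; a minimal sketch continuing from the rows above:

```go
// Collect the table names produced by the sqlite_master query.
var tables []string
for rows.Next() {
	var name string
	if err := rows.Scan(&name); err != nil {
		log.Fatal(err)
	}
	tables = append(tables, name)
}
if err := rows.Err(); err != nil {
	log.Fatal(err)
}
fmt.Println("tables:", tables)
```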
### JSON / JSONL Support

JSON and JSONL files are stored as raw JSON in a single `data` TEXT column. Use SQLite's `json_extract()` function to query individual fields:
```go
// Open a JSON file
db, err := filesql.OpenContext(ctx, "users.json")
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// Query using json_extract()
rows, err := db.QueryContext(ctx, `
	SELECT json_extract(data, '$.name') AS name,
	       json_extract(data, '$.age') AS age
	FROM users
	WHERE json_extract(data, '$.age') > 25
`)
if err != nil {
	log.Fatal(err)
}
rows.Close()

// Nested fields work too
rows, err = db.QueryContext(ctx, `
	SELECT json_extract(data, '$.address.city') AS city
	FROM users
	WHERE json_extract(data, '$.address.country') = 'Japan'
`)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
```
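
Since the `data` column holds plain JSON text, SQLite's other built-in JSON functions and aggregates apply as well. A small sketch continuing the example above (it assumes each record carries the same `address.country` field) that counts users per country:

```go
// Aggregate over an extracted JSON field; GROUP BY accepts the column alias.
rows, err = db.QueryContext(ctx, `
	SELECT json_extract(data, '$.address.country') AS country,
	       COUNT(*) AS user_count
	FROM users
	GROUP BY country
	ORDER BY user_count DESC
`)
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
```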
## Advanced Usage

### Builder Pattern

For advanced scenarios, use the builder pattern:
```go
package main

import (
	"context"
	"embed"
	"log"

	"github.com/nao1215/filesql"
)

//go:embed data/*.csv
var embeddedFiles embed.FS

func main() {
	ctx := context.Background()

	// Configure data sources with the builder
	validatedBuilder, err := filesql.NewBuilder().
		AddPath("local_file.csv").  // Local file
		AddFS(embeddedFiles).       // Embedded files
		SetDefaultChunkSize(5000).  // 5000 rows per chunk
		Build(ctx)
	if err != nil {
		log.Fatal(err)
	}

	db, err := validatedBuilder.Open(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Query across all data sources
	rows, err := db.Query("SELECT name FROM sqlite_master WHERE type='table'")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
}
```
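
Note the two-step flow: `Build(ctx)` validates the configured inputs and returns an error early, and only `Open(ctx)` creates the in-memory database.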
### Auto-Save Features

#### Auto-Save on Database Close
```go
// Auto-save changes when the database is closed
validatedBuilder, err := filesql.NewBuilder().
	AddPath("data.csv").
	EnableAutoSave("./backup"). // Save to the backup directory
	Build(ctx)
if err != nil {
	log.Fatal(err)
}

db, err := validatedBuilder.Open(ctx)
if err != nil {
	log.Fatal(err)
}
defer db.Close() // Changes are automatically saved here

// Make changes
if _, err := db.Exec("UPDATE data SET status = 'processed' WHERE id = 1"); err != nil {
	log.Fatal(err)
}
if _, err := db.Exec("INSERT INTO data (name, age) VALUES ('John', 30)"); err != nil {
	log.Fatal(err)
}
```
#### Auto-Save on Transaction Commit
```go
// Auto-save after each transaction commit
validatedBuilder, err := filesql.NewBuilder().
	AddPath("data.csv").
	EnableAutoSaveOnCommit(""). // Empty = overwrite the original files
	Build(ctx)
if err != nil {
	log.Fatal(err)
}

db, err := validatedBuilder.Open(ctx)
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// Changes are saved after each commit
tx, err := db.Begin()
if err != nil {
	log.Fatal(err)
}
if _, err := tx.Exec("UPDATE data SET status = 'processed' WHERE id = 1"); err != nil {
	log.Fatal(err)
}
if err := tx.Commit(); err != nil { // Auto-save happens here
	log.Fatal(err)
}
```
### Working with io.Reader and Network Data
```go
import (
	"net/http"

	"github.com/nao1215/filesql"
)

// Load data from an HTTP response
resp, err := http.Get("https://example.com/data.csv")
if err != nil {
	log.Fatal(err)
}
defer resp.Body.Close()

validatedBuilder, err := filesql.NewBuilder().
	AddReader(resp.Body, "remote_data", filesql.FileTypeCSV).
	Build(ctx)
if err != nil {
	log.Fatal(err)
}

db, err := validatedBuilder.Open(ctx)
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// Query the remote data
rows, err := db.QueryContext(ctx, "SELECT * FROM remote_data LIMIT 10")
if err != nil {
	log.Fatal(err)
}
defer rows.Close()
```
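
Because filesql exposes a standard `database/sql` interface, placeholders and `QueryRow` work as usual. A quick sketch continuing from the database above (the `age` column is just for illustration):

```go
// Parameterized query against the in-memory table; SQLite binds the argument.
row := db.QueryRowContext(ctx, "SELECT COUNT(*) FROM remote_data WHERE age > ?", 25)

var count int
if err := row.Scan(&count); err != nil {
	log.Fatal(err)
}
fmt.Println("matching rows:", count)
```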
### Manual Data Export

If you prefer manual control over data export:
