# Dockertest
Write better integration tests! Dockertest helps you boot up ephemeral Docker images for your Go tests with minimal work.
Use Docker to run your Go integration tests against third party services on Windows, macOS, and Linux!
Dockertest supports running any Docker image from Docker Hub or from a Dockerfile.
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- Why should I use Dockertest?
- Installation
- Quick Start
- Migration from v3
- API overview
- Examples
- Troubleshoot & FAQ
- Running in CI
## Why should I use Dockertest?
When developing applications, it is often necessary to use services that talk to a database system. Unit testing these services can be cumbersome because mocking the database or DBAL layer is tedious: slight schema changes mean rewriting at least some, if not all, of the mocks, and the same goes for API changes in the DBAL.

To avoid this, it is smarter to test these services against a real database that is destroyed after testing. Docker is the perfect system for running integration tests, as you can spin up containers in a few seconds and kill them when the tests complete.

The Dockertest library provides easy-to-use commands for spinning up Docker containers and using them for your tests.
> [!WARNING]
> Dockertest v4 is not yet finalized and may still receive breaking changes before the stable release.
## Installation

```sh
go get github.com/ory/dockertest/v4
```
## Quick Start

```go
package myapp_test

import (
	"testing"
	"time"

	dockertest "github.com/ory/dockertest/v4"
)

func TestPostgres(t *testing.T) {
	pool := dockertest.NewPoolT(t, "")

	// Container is automatically reused across test runs based on "postgres:14".
	postgres := pool.RunT(t, "postgres",
		dockertest.WithTag("14"),
		dockertest.WithEnv([]string{
			"POSTGRES_PASSWORD=secret",
			"POSTGRES_DB=testdb",
		}),
	)

	hostPort := postgres.GetHostPort("5432/tcp")
	// Connect to postgres://postgres:secret@hostPort/testdb

	// Wait for PostgreSQL to be ready.
	err := pool.Retry(t.Context(), 30*time.Second, func() error {
		// try connecting...
		return nil
	})
	if err != nil {
		t.Fatalf("Could not connect: %v", err)
	}
}
```
## Migration from v3

Version 4 introduces automatic container reuse, making tests significantly faster by reusing containers across test runs. It also switches to a lightweight Docker client, which greatly reduces third-party dependencies.
See UPGRADE.md for the complete migration guide.
## API overview
View the Go API documentation.
### Pool creation

```go
// For tests - auto-cleanup with t.Cleanup()
pool := dockertest.NewPoolT(t, "")

// With options
pool := dockertest.NewPoolT(t, "",
	dockertest.WithMaxWait(2*time.Minute),
)

// With a custom Docker client
pool := dockertest.NewPoolT(t, "",
	dockertest.WithMobyClient(myClient),
)

// For non-test code - requires manual Close()
ctx := context.Background()
pool, err := dockertest.NewPool(ctx, "")
if err != nil {
	panic(err)
}
defer pool.Close(ctx)
```
### Running containers

```go
// Test helper - fails test on error
resource := pool.RunT(t, "postgres",
	dockertest.WithTag("14"),
	dockertest.WithEnv([]string{"POSTGRES_PASSWORD=secret"}),
	dockertest.WithCmd([]string{"postgres", "-c", "log_statement=all"}),
)

// With error handling
resource, err := pool.Run(ctx, "postgres",
	dockertest.WithTag("14"),
	dockertest.WithEnv([]string{"POSTGRES_PASSWORD=secret"}),
)
if err != nil {
	panic(err)
}
```
See Cleanup for container lifecycle management.
### Container configuration
Customize container settings with configuration options:
```go
resource := pool.RunT(t, "postgres",
	dockertest.WithTag("14"),
	dockertest.WithUser("postgres"),
	dockertest.WithWorkingDir("/var/lib/postgresql/data"),
	dockertest.WithLabels(map[string]string{
		"test":    "integration",
		"service": "database",
	}),
	dockertest.WithHostname("test-db"),
	dockertest.WithEnv([]string{"POSTGRES_PASSWORD=secret"}),
)
```
Available configuration options:

- `WithTag(tag string)` - Set the image tag (default: `"latest"`)
- `WithEnv(env []string)` - Set environment variables
- `WithCmd(cmd []string)` - Override the default command
- `WithEntrypoint(entrypoint []string)` - Override the default entrypoint
- `WithUser(user string)` - Set the user to run commands as (supports `"user"` or `"user:group"`)
- `WithWorkingDir(dir string)` - Set the working directory
- `WithLabels(labels map[string]string)` - Add labels to the container
- `WithHostname(hostname string)` - Set the container hostname
- `WithName(name string)` - Set the container name
- `WithMounts(binds []string)` - Set bind mounts (`"host:container"` or `"host:container:mode"`)
- `WithPortBindings(bindings network.PortMap)` - Set explicit port bindings
- `WithReuseID(id string)` - Set a custom reuse key (default: `"repository:tag"`)
- `WithoutReuse()` - Disable container reuse for this run
- `WithContainerConfig(modifier func(*container.Config))` - Modify the container config directly
- `WithHostConfig(modifier func(*container.HostConfig))` - Modify the host config (port bindings, volumes, restart policy, memory/CPU limits)
For advanced container configuration, use `WithContainerConfig`:
```go
stopTimeout := 30
resource := pool.RunT(t, "app",
	dockertest.WithContainerConfig(func(cfg *container.Config) {
		cfg.StopTimeout = &stopTimeout
		cfg.StopSignal = "SIGTERM"
		cfg.Healthcheck = &container.HealthConfig{
			Test:     []string{"CMD", "curl", "-f", "http://localhost/health"},
			Interval: 10 * time.Second,
			Timeout:  5 * time.Second,
			Retries:  3,
		}
	}),
)
```
For host-level configuration, use `WithHostConfig`:
```go
resource := pool.RunT(t, "postgres",
	dockertest.WithTag("14"),
	dockertest.WithHostConfig(func(hc *container.HostConfig) {
		hc.RestartPolicy = container.RestartPolicy{
			Name:              container.RestartPolicyOnFailure,
			MaximumRetryCount: 3,
		}
	}),
)
```
### Container reuse

Containers are automatically reused based on `repository:tag`. Reuse is reference-counted: each `Run`/`RunT` call increments the ref count, and each `Close`/cleanup decrements it. The container is only removed from Docker when the last reference is released.
```go
// First test creates container
r1 := pool.RunT(t, "postgres", dockertest.WithTag("14"))

// Second test reuses the same container
r2 := pool.RunT(t, "postgres", dockertest.WithTag("14"))

// r1 and r2 point to the same container
```
Disable reuse if needed:
```go
resource := pool.RunT(t, "postgres",
	dockertest.WithTag("14"),
	dockertest.WithoutReuse(), // Always create new container
)
```
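The reference-counting behavior can be modeled in a few lines. The following is an illustrative sketch, not dockertest's actual implementation: `refCountedPool`, `Acquire`, and `Release` are hypothetical names, and the reuse key mirrors the `repository:tag` default.

```go
package main

import (
	"fmt"
	"sync"
)

// refCountedPool models dockertest's reuse semantics (hypothetical code):
// acquiring the same key returns the same underlying container and bumps
// its count; releasing removes the container only when the count hits zero.
type refCountedPool struct {
	mu     sync.Mutex
	counts map[string]int
}

// Acquire registers one more user of the container identified by key.
func (p *refCountedPool) Acquire(key string) int {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.counts[key]++
	return p.counts[key]
}

// Release drops one reference and reports whether the container
// should actually be removed from Docker.
func (p *refCountedPool) Release(key string) bool {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.counts[key]--
	return p.counts[key] == 0
}

func main() {
	pool := &refCountedPool{counts: map[string]int{}}
	pool.Acquire("postgres:14")              // first RunT creates the container
	pool.Acquire("postgres:14")              // second RunT reuses it
	fmt.Println(pool.Release("postgres:14")) // false: still referenced
	fmt.Println(pool.Release("postgres:14")) // true: last reference, remove it
}
```

This is why a container created by one test may outlive that test's cleanup: another test still holds a reference to it.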
### Getting connection info

```go
resource := pool.RunT(t, "postgres", dockertest.WithTag("14"))

// Get host:port (e.g., "127.0.0.1:54320")
hostPort := resource.GetHostPort("5432/tcp")

// Get just the port (e.g., "54320")
port := resource.GetPort("5432/tcp")

// Get just the IP (e.g., "127.0.0.1")
ip := resource.GetBoundIP("5432/tcp")

// Get container ID
id := resource.ID()
```
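The `host:port` value is typically stitched into a connection URL. A minimal sketch using only the standard library, assuming the credentials and database name from the Quick Start example (`buildPostgresURL` is a hypothetical helper, not part of dockertest):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildPostgresURL turns the value returned by
// resource.GetHostPort("5432/tcp") into a PostgreSQL connection URL.
// Credentials match the Quick Start example (hypothetical helper).
func buildPostgresURL(hostPort string) string {
	u := url.URL{
		Scheme:   "postgres",
		User:     url.UserPassword("postgres", "secret"),
		Host:     hostPort,
		Path:     "testdb",
		RawQuery: "sslmode=disable",
	}
	return u.String()
}

func main() {
	fmt.Println(buildPostgresURL("127.0.0.1:54320"))
	// postgres://postgres:secret@127.0.0.1:54320/testdb?sslmode=disable
}
```

Using `net/url` rather than string concatenation keeps passwords with special characters correctly escaped.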
### Waiting for readiness

Use `pool.Retry` to wait for a container to become ready:
```go
err := pool.Retry(t.Context(), 30*time.Second, func() error {
	return db.Ping()
})
if err != nil {
	t.Fatalf("Container not ready: %v", err)
}
```
If timeout is 0, `pool.MaxWait` (default 60s) is used. The retry interval is fixed at 1 second.
For more control, use the package-level functions:
```go
// Fixed interval retry
err := dockertest.Retry(ctx, 30*time.Second, 500*time.Millisecond, func() error {
	return db.Ping()
})

// Exponential backoff retry
err := dockertest.RetryWithBackoff(ctx,
	30*time.Second,       // timeout
	100*time.Millisecond, // initial interval
	5*time.Second,        // max interval
	func() error {
		return db.Ping()
	},
)
```
### Executing commands
Run commands inside a running container:
```go
result, err := resource.Exec(ctx, []string{"pg_isready", "-U", "postgres"})
if err != nil {
	t.Fatal(err)
}
if result.ExitCode != 0 {
	t.Fatalf("command failed: %s", result.StdErr)
}
t.Log(result.StdOut)
```
### Container logs

```go
// Get all logs with stdout and stderr separated
stdout, stderr, err := resource.Logs(ctx)
if err != nil {
	t.Fatal(err)
}
t.Log(stdout)
t.Log(stderr)

// Stream logs until container exits or ctx is cancelled
var buf bytes.Buffer
err = resource.FollowLogs(ctx, &buf, io.Discard)
```
### Building from Dockerfile

Build a Docker image from a Dockerfile and run it:

```go
version := "1.0.0"
resou
```