# Mailstrom

Thundermail Server IaC

## Install / Use
Thunderbird Pro Mail Service Infrastructure Documentation
This project deploys Stalwart Mail Server in a scalable way using Pulumi and tb_pulumi.
Some terminology for clarity:

- **Thundermail**: The marketing name of an email service provided by Thunderbird.
- **Mailstrom**: The name of the infrastructure-as-code project which builds/manages Thundermail's infrastructure.
- **Stalwart**: An open source email platform deployed by Mailstrom.
- **Pulumi**: An infrastructure-as-code library and platform.
- **tb_pulumi**: An extension of Pulumi defining some common infrastructure patterns at Thunderbird.
## Virtual Environments

Pulumi manages its own Python virtual environment based on the `pulumi/requirements.txt` file. For local development,
you should manage your own virtual environment.

```sh
virtualenv .venv
source .venv/bin/activate
pip install ".[dev]"
```
When you run Pulumi commands, it is best to deactivate your development environment first.
## Ruff

We use the Ruff tool to ensure consistent code style throughout our projects. The tool has plugins for most of your favorite IDEs, and is easy to use from the command line as well.

Set up the local dev environment as shown above, then run:

```sh
# Rewrite files in our configured format (ruff.toml).
ruff format

# Fix any issues that can be automatically fixed.
ruff check --fix

# Identify remaining issues to fix by hand.
ruff check
```
## How Stalwart Node Bootstrapping Works

In the broadest strokes:

- This repo's `bootstrap` directory contains a script and related files that will eventually run on a Stalwart node at launch time to configure a running Stalwart instance there.
- The `stalwart.StalwartCluster` class (in `stalwart.py`) tarballs these bootstrapping files, bzip2-compresses them, base64-encodes the result, and injects that string into a Bash script template (`stalwart_instance_user_data.sh.j2`).
- That script gets set as the instance's user data script, such that when the instance is first launched, the script runs.
- Additional configuration is stored either as tags on the instance or as secrets (credentials, etc.).
- The first bootstrap stage unpacks the stage-two tar.bz2 file and runs the Python script contained therein.
- The second-stage script templates a config file for Stalwart and a systemd service file that runs it as a Docker container when the instance comes online.

In this way, a `pulumi up` with a proper node configuration can bootstrap a functioning Stalwart cluster.
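The pack-and-inject step described above can be sketched roughly as follows. This is a minimal illustration of the scheme, not the actual `StalwartCluster` code; the function names and the inline stand-in for the Jinja2 template are assumptions.

```python
import base64
import bz2
import io
import tarfile


def pack_bootstrap_files(files: dict[str, bytes]) -> str:
    """Tarball the given files, bzip2-compress the archive, and
    base64-encode the result, mirroring the scheme described above."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in files.items():
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    compressed = bz2.compress(buf.getvalue())
    return base64.b64encode(compressed).decode("ascii")


def render_user_data(payload_b64: str) -> str:
    # Stand-in for the real template (stalwart_instance_user_data.sh.j2),
    # which also handles invoking the stage-two script after unpacking.
    return (
        "#!/bin/bash\n"
        f"echo '{payload_b64}' | base64 -d | tar -xj -C /opt/bootstrap\n"
    )
```

Because the payload travels as a base64 string inside the user data script, it survives being embedded in plain text and can be decoded with standard tools on the instance.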
## Configuration

### YAML Config

This project follows the conventions outlined in the tb_pulumi documentation. All code
related to infrastructure lives in the `pulumi/` directory. The Stalwart cluster is declared in the `__main__.py` file,
but the configuration for that resource is mostly contained in the `config.$env.yaml` file. Begin with this config
shell:
```yaml
resources:
  tb:mailstrom:StalwartCluster:
    thundermail:
      https_features: # List of supported features of the https service, to be enabled or not cluster-wide.
        - caldav # Features must match entries in the StalwartCluster.HTTPS_FEATURES dict and must be explicitly
        - carddav # listed to be enabled. Enabling the "management" service on a node allows traffic to all https
        - jmap # features because it removes any access restrictions. You should typically deploy a separate
        - webdav # node to serve the management page privately (see "Operation" below).
      stalwart_image: stalwartlabs/mail-server:v0.11 # Set what version you wish to deploy
      nodes:
        "0": # Entries in this list must be stringified integers; this will become the Stalwart cluster node-id.
          disable_api_termination: False # Set to True in production environments; prevent accidental deletion of nodes
          ignore_ami_changes: True # Prevent the node from being rebuilt when AWS releases a new OS
          ignore_user_data_changes: True # Prevent the node from restarting when user data changes
          instance_type: t3.micro
          key_name: your-ec2-keypair # Keypair to use for SSH access
          node_roles: # Stalwart cluster node roles to enable (not really implemented yet)
            - all
          services: # List of services to enable on the node, or "all"
            - all # Enable all services; incompatible with other services
            # - https # Enables the HTTP server with the various enabled "https_features"
            # - imap
            # - imaps
            # - lmtp
            # - management # Admin panel is HTTPS, but served over a different port to prevent accidental exposure
            # - managesieve
            # - pop3
            # - pop3s
            # - smtp
            # - smtps
            # - submission
          storage_capacity: 20 # Ephemeral storage volume size in GB
      load_balancer:
        services: # Configuration of service exposure through the load balancer
          https: # All "https_features" are exposed through this listener
            source_cidrs: ['0.0.0.0/0']
          imap: # Mail services should be public
            source_cidrs: ['0.0.0.0/0']
          imaps:
            source_cidrs: ['0.0.0.0/0']
          lmtp:
            source_cidrs: ['0.0.0.0/0']
          # "management" is the web admin interface, which should never be exposed to the world; you should usually
          # disable this entirely, but *at least* restrict access as much as possible.
          management:
            source_cidrs: ['10.0.0.0/16']
          managesieve:
            source_cidrs: ['0.0.0.0/0']
          smtp:
            source_cidrs: ['0.0.0.0/0']
          smtps:
            source_cidrs: ['0.0.0.0/0']
          submission:
            source_cidrs: ['0.0.0.0/0']
          # The "all" service exposes all services to the same set of sources. If you do this, you should only ever
          # expose them to private network space for testing purposes. Exposing "all" to the world exposes the web
          # admin interface to the world, which you should never do.
          # all:
          #   source_cidrs:
          #     - 10.1.0.0/16
          #   source_security_group_ids:
          #     - your-ssh-bastions-id-maybe
```
Adjust these values to your liking, adding additional nodes as needed.
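Since node keys must be stringified integers (they become the Stalwart cluster node-ids, and quoting them prevents the YAML parser from reading them as plain integers), a small sanity check along these lines can catch malformed configs early. This is a hypothetical helper for illustration, not part of the project:

```python
def validate_node_keys(nodes: dict) -> list[int]:
    """Ensure every node key is a stringified integer, as the
    StalwartCluster config requires, and return the parsed node IDs."""
    node_ids = []
    for key in nodes:
        if not (isinstance(key, str) and key.isdigit()):
            raise ValueError(f"Node key {key!r} must be a stringified integer")
        node_ids.append(int(key))
    return sorted(node_ids)
```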
### Additional Secrets

This project uses Neon databases as a Postgres backend. There is currently no Neon provider in the Pulumi registry, so this resource is managed manually. The details of the database connection must nevertheless be delivered to the EC2 instances, so you must store this data in AWS Secrets Manager. If you would prefer to use some other Postgres-compatible storage backend, such as RDS, you may do so and specify the connection details in this secret.
Craft the data:

```json
{
    "host": "database-hostname",
    "port": 5432,
    "database": "db_name",
    "user": "db_user",
    "password": "db_password",
    "tls": {
        "enable": true,
        "allow-invalid-certs": false
    }
}
```

Paste that into a config command:

```sh
pulumi config set --secret stalwart.postboot.postgresql_backend '$all_that_json'
```
You'll need another secret set up containing the connection details for the tb-accounts database backend. Follow the
same procedure, defining those connection details in the `stalwart.postboot.tb-accounts_backend` secret.

Finally, set the web admin panel's password by setting `stalwart.postboot.fallback_admin_password` to a secure string.
Ensure these secrets are pushed to AWS by the PulumiSecretsManager:

```yaml
resources:
  # ...
  tb:secrets:PulumiSecretsManager:
    secrets:
      secret_names:
        - stalwart.postboot.fallback_admin_password
        - stalwart.postboot.postgresql_backend
        - stalwart.postboot.tb-accounts_backend
```
This ensures the secrets are populated with your connection details at the time the instances bootstrap and retrieve them.
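At bootstrap time, the instance must turn the stored secret string back into connection parameters. However the secret is fetched (AWS CLI, SDK, etc.), the parsing side might look like this; an illustrative sketch, not the actual bootstrap code:

```python
import json

# Fields the Stalwart Postgres config documented above must supply.
REQUIRED_KEYS = {"host", "port", "database", "user", "password"}


def parse_backend_secret(secret_string: str) -> dict:
    """Parse the JSON stored in a backend secret (e.g.
    stalwart.postboot.postgresql_backend) and verify that the
    required connection fields are present."""
    conn = json.loads(secret_string)
    missing = REQUIRED_KEYS - conn.keys()
    if missing:
        raise ValueError(f"Secret is missing keys: {sorted(missing)}")
    return conn
```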
Note: The Redis and S3 storage backends are created by this module. Their connection details are stored in secrets
automatically by the `StalwartCluster` module; you need to take no additional action regarding those stores.
## Operation

### SSH Setup

You may need to gain SSH access to the Stalwart nodes to debug problems. However, the nodes are all built in private network space with no external access. To get around this, you will need to build an SSH bastion (a server that exposes private SSH connections through a single public interface) by adding a configuration with your authentication details to the YAML file:
```yaml
tb:ec2:SshableInstance:
  yourname-bastion:
    ssh_keypair_name: name-of-your-ec2-keypair
    source_cidrs:
      - your.public.ip.address/32 # You can obtain this with `curl -4 https://curlmyip.net`
```

SSH into that machine.

```sh
ssh -i ~/.ssh/your_id_rsa ec2-user@1.2.3.4
```
On your local machine, edit your `~/.ssh/config` file to include these sections:

```
Host mailstrom-my-bastion
    Hostname $bastion_public_ip
    User ec2-user

# Adjust this IP range to match the actual network
Host 10.1.*
    User ec2-user
    IdentityFile ~/.ssh/stalwart_node_id_rsa
    ProxyCommand ssh -W %h:%p mailstrom-my-bastion
```
You should now be able to SSH directly into the node, punching through to the private network via the bastion.

```
$ ssh $node_ip
The authenticity of host '$node_ip ($node_ip)' can't be established.
ED25519 key fingerprint is SHA256:somethingugly.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
```
### Accessing the Web Admin Panel
If you have the above SSH configuration working, you should also be able to open an SSH tunnel into a
