eShopOnSteroids
Cloud-native online shop powered by Java 17, Python 3, containers, and Kubernetes 🐋⚓
eShopOnSteroids is a well-architected, distributed, event-driven, cloud-native e-commerce platform built on the following microservice building blocks:
- API Gateway (Spring Cloud Gateway)
- Service Discovery (built into Docker and Kubernetes)
- Distributed Tracing (Sleuth, Zipkin)
- Circuit Breaker (Resilience4j)
- Event Bus (RabbitMQ)
- Database per Microservice (PostgreSQL, MongoDB, Redis)
- Centralized Monitoring (Prometheus, Grafana)
- Centralized Logging (Elasticsearch, Fluentd, Kibana)
- Control Loop (Kubernetes, Terraform)
This code follows best practices such as:
- Unit Testing (JUnit 5, Mockito, Pytest)
- Integration Testing (Testcontainers)
- Design Patterns (Publish/Subscribe, Backend for Frontend, ...)
Note: If you find this project interesting, there is no better way to show it than by ★ starring the repository!
Architecture
The architecture proposes a microservices-oriented implementation where each microservice is responsible for a single business capability. The microservices are deployed in a containerized environment (Docker) and orchestrated by a control loop (Kubernetes) which continuously compares the state of each microservice to the desired state, and takes necessary actions to arrive at the desired state.
Each microservice stores its data in its own database tailored to its requirements, such as an In-Memory Database for a shopping cart whose persistence is short-lived, a Document Database for a product catalog for its flexibility, or a Relational Database for an order management system for its ACID properties.
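The short-lived cart storage can be pictured with a tiny TTL sketch in Python (illustrative only: the real Cart Microservice uses Redis, and `TTLCart` is a hypothetical name mimicking Redis-style key expiry):

```python
import time

class TTLCart:
    """Toy in-memory cart with Redis-style expiry (illustrative only;
    the actual Cart Microservice stores its data in Redis)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._items: dict[str, tuple[int, float]] = {}  # sku -> (qty, expires_at)

    def add(self, sku: str, qty: int) -> None:
        # Each write refreshes the entry's expiry, like SET with EX in Redis.
        self._items[sku] = (qty, time.monotonic() + self.ttl)

    def get(self, sku: str) -> int:
        qty, expires_at = self._items.get(sku, (0, 0.0))
        if time.monotonic() >= expires_at:
            self._items.pop(sku, None)  # lazy eviction on read
            return 0
        return qty

cart = TTLCart(ttl_seconds=0.05)
cart.add("sku-42", 2)
print(cart.get("sku-42"))  # 2
time.sleep(0.06)
print(cart.get("sku-42"))  # 0 (entry expired)
```

The point of the sketch: cart data is deliberately ephemeral, which is why an in-memory store with expiry fits it better than a relational database.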
Microservices communicate externally via REST through a secured API Gateway, and internally via
- gRPC for synchronous communication, chosen for its performance
- an event bus for asynchronous communication, in which the receiving microservice is free to handle the event whenever it has the capacity
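The asynchronous style can be contrasted with a minimal in-process sketch of publish/subscribe (an illustrative stand-in for RabbitMQ, not its client API):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process stand-in for a broker's publish/subscribe model."""

    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fire-and-forget: the publisher does not wait for any reply.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
stock_updates, payments = [], []
# Two independent consumers react to the same event, as in the real system
# where the Product and Payment microservices both consume order events.
bus.subscribe("order.placed", lambda e: stock_updates.append(e["sku"]))
bus.subscribe("order.placed", lambda e: payments.append(e["order_id"]))
bus.publish("order.placed", {"order_id": "o-1", "sku": "sku-42"})
print(stock_updates, payments)  # ['sku-42'] ['o-1']
```

With a real broker the handlers would run in separate processes at their own pace; the decoupling shown here is the essential property.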
The main components interact as follows:
- All microservices are inside a private network and not accessible except through the API Gateway.
- The API Gateway routes requests to the corresponding microservice, routes requests to the appropriate endpoint based on the client (Backend for Frontend), and validates the authorization of requests.
- The Identity Microservice acts as an Identity Provider and is responsible for storing users and their roles, and for issuing authorization credentials.
- The Cart Microservice manages the shopping cart of each user. It uses a cache (Redis) as the storage.
- The Product Microservice stores the product catalog and stock. It's subscribed to the Event Bus to receive notifications of new orders and update the stock accordingly.
- The Order Microservice manages order processing and fulfillment. It performs a gRPC call to the Product Microservice to check the availability and pricing of the products in the order pre-checkout and publishes events to the Event Bus to initiate a payment and to update the stock post-checkout.
- The gRPC communication between the microservices is fault-tolerant thanks to a circuit breaker.
- The Payment Microservice handles payment processing. It's subscribed to the Event Bus to receive notifications of new orders and initiate a payment. It does not lie behind the API Gateway as it is not directly accessible by the user. It is also stateless and does not store any data.
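The circuit breaker mentioned above can be sketched as a toy state machine (a simplified model of what Resilience4j provides; the class and parameter names here are illustrative, not the library's API):

```python
import time

class CircuitBreaker:
    """Toy circuit breaker. CLOSED: calls pass through. OPEN: calls fail
    fast without touching the remote service. HALF_OPEN: after a cooldown,
    one probe call decides whether to close the circuit again."""

    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 0.05):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "CLOSED"
        self.opened_at = 0.0

    def call(self, fn, *args):
        if self.state == "OPEN":
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.state = "HALF_OPEN"  # cooldown elapsed; allow one probe
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold or self.state == "HALF_OPEN":
                self.state, self.opened_at = "OPEN", time.monotonic()
            raise
        self.failures, self.state = 0, "CLOSED"
        return result

breaker = CircuitBreaker()

def flaky():
    raise ConnectionError("product service down")

for _ in range(3):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass

print(breaker.state)  # OPEN
```

Once the circuit opens, the Order Microservice gets an immediate failure instead of waiting on a dead Product Microservice, which keeps one outage from cascading.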
Observability services include:
- Sleuth and Zipkin for assigning traces to requests to track their path across microservices
- Prometheus and Grafana for collecting metrics from microservices and setting up alerts for when a metric exceeds a threshold
- Elasticsearch, Fluentd, and Kibana for aggregating logs from microservices
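Trace propagation of the Sleuth/Zipkin kind can be illustrated with Python's stdlib `contextvars` (a hypothetical two-service call chain, not the project's actual code):

```python
import contextvars
import uuid

# One trace ID per incoming request, implicitly visible to every log line
# and downstream call made while handling that request.
trace_id: contextvars.ContextVar[str] = contextvars.ContextVar("trace_id", default="-")

def handle_request(logs: list[str]) -> None:
    trace_id.set(uuid.uuid4().hex[:8])  # the gateway would assign this once
    order_service(logs)

def order_service(logs: list[str]) -> None:
    logs.append(f"[{trace_id.get()}] order received")
    product_service(logs)  # the trace ID rides along implicitly

def product_service(logs: list[str]) -> None:
    logs.append(f"[{trace_id.get()}] stock checked")

logs: list[str] = []
handle_request(logs)
# Both log lines share one trace ID, which is what lets Zipkin-style
# tooling stitch a request's hops into a single trace.
print(logs[0].split("]")[0] == logs[1].split("]")[0])  # True
```

In the real system the ID crosses process boundaries in HTTP/gRPC headers; the idea is the same.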

Future work:
- Outsource authentication to a third-party identity provider such as Keycloak
Setup
Prerequisites

- Docker

Yes, that's it!
Development
- Create the env file and fill in the missing values

  ```shell
  cp .env.example .env
  vi .env
  ...
  ```

- Start the containers

  ```shell
  docker compose -f docker-compose.dev.yml up
  ```

The first time you run this command, it will take a few minutes to build the images, after which you should be able to access the application at port 8080 locally. Changes to the source code are automatically reflected in the containers without any extra steps.

To stop the containers, run:

```shell
docker compose -f docker-compose.dev.yml down
```

To remove saved data along with the containers, run:

```shell
docker compose -f docker-compose.dev.yml down -v
```
Production
Deploy containers with docker compose
- Create the env file and fill in the missing values

  ```shell
  cp .env.example .env
  vi .env
  ...
  ```

- (Optional) Build the images locally:

  ```shell
  docker compose build
  ```

  This will take a few minutes. Alternatively, skip this step and the images will be pulled from Docker Hub.

- Start the containers

  ```shell
  docker compose up
  ```

You can now access the application at port 8080 locally.
Deploy to local Kubernetes cluster
- Ensure you have enabled Kubernetes in Docker Desktop (or alternatively, install Minikube and start it with `minikube start`)

Then:

- Enter the directory containing the Kubernetes manifests

  ```shell
  cd k8s
  ```

- Create the env file and fill in the missing values

  ```shell
  cp ./config/.env.example ./config/.env
  vi ./config/.env
  ...
  ```

- Create the namespace

  ```shell
  kubectl apply -f ./namespace
  ```

- Change the context to the namespace

  ```shell
  kubectl config set-context --current --namespace=eshoponsteroids
  ```

- Create Kubernetes secrets from the env file

  ```shell
  kubectl create secret generic env-secrets --from-env-file=./config/.env
  ```

- Apply the configmaps

  ```shell
  kubectl apply -f ./config
  ```

- Apply the persistent volumes

  ```shell
  kubectl apply -f ./volumes
  ```

- Install the Kubernetes Metrics Server (needed to scale microservices based on metrics)

  ```shell
  kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
  ```

- Deploy the containers

  ```shell
  kubectl apply -f ./deployments
  ```

- Expose the API gateway

  ```shell
  kubectl apply -f ./networking/node-port.yml
  ```

You can now access the application at port 30080 locally.

To tear everything down, run:

```shell
kubectl delete namespace eshoponsteroids
```
Future work:
- Simplify the deployment process by templating similar manifests with Helm
- Overlay/patch manifests to tailor them to different environments (dev, staging, prod, ...) using Kustomize
Deploy to AWS EKS cluster
Typically, operations teams provision cloud resources while dev teams focus on shipping code to those resources. However, to design cloud-native applications such as this one, a developer needs to understand the infrastructure their code runs on (hence the rise of DevOps as a software development methodology). That is why we will provision our own Kubernetes cluster for this application on AWS EKS (Elastic Kubernetes Service).
For this section, in addition to Docker you will need:
- Basic knowledge of the AWS platform
- An AWS account
- The AWS CLI configured with the credentials of either your account or an IAM user with administrator access (run `aws configure`)
Here is a breakdown of the resources to set up and configure:
- VPC (Virtual Private Cloud): a virtual network that represents a logically isolated section of the AWS cloud where our cluster will reside
- Subnets: 2 public and 2 private subnets in different availability zones (required by EKS to ensure high availability of the cluster). Think of subnets as segments of our VPC that allow us to group resources based on their security and connectivity requirements
- Internet Gateway: a VPC component that allows our public subnets to access the internet
- NAT Gateway: a VPC component that allows our private subnets to access the internet but prevents the internet from initiating connections to our private subnets
- Route Tables: sets of rules called routes that direct traffic; in this case they connect the internet gateway to the public subnets and the NAT gateway to the private subnets
- Security Groups: they act as virtual firewalls that control the inbound and outbound traffic allowed to reach our resources
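As a rough sketch of the subnet plan (2 public + 2 private across two availability zones), Python's stdlib `ipaddress` module can carve a VPC CIDR into four equal blocks (the CIDR and role names here are illustrative assumptions, not the project's actual values):

```python
import ipaddress

# Carve a hypothetical 10.0.0.0/16 VPC into four /24 subnets:
# two public and two private, one of each per availability zone.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))[:4]
roles = ["public-a", "public-b", "private-a", "private-b"]
plan = dict(zip(roles, (str(s) for s in subnets)))
print(plan)
# {'public-a': '10.0.0.0/24', 'public-b': '10.0.1.0/24',
#  'private-a': '10.0.2.0/24', 'private-b': '10.0.3.0/24'}
```

Non-overlapping CIDRs like these are what the route tables above bind to the internet gateway (public) and the NAT gateway (private).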