<img src="./diagrams/banner.png" alt="eShop logo" title="eShopOnSteroids" align="right" height="60" />

eShopOnSteroids

Cloud-native online shop powered by Java 17, Python 3, containers, and Kubernetes 🐋⚓

eShopOnSteroids is a well-architected, distributed, event-driven, cloud-native e-commerce platform built on the following microservice building blocks:

  1. API Gateway (Spring Cloud Gateway)
  2. Service Discovery (built into Docker and Kubernetes)
  3. Distributed Tracing (Sleuth, Zipkin)
  4. Circuit Breaker (Resilience4j)
  5. Event Bus (RabbitMQ)
  6. Database per Microservice (PostgreSQL, MongoDB, Redis)
  7. Centralized Monitoring (Prometheus, Grafana)
  8. Centralized Logging (Elasticsearch, Fluentd, Kibana)
  9. Control Loop (Kubernetes, Terraform)

This code follows best practices such as:

  • Unit Testing (JUnit 5, Mockito, Pytest)
  • Integration Testing (Testcontainers)
  • Design Patterns (Publish/Subscribe, Backend for Frontend, ...)

Keywords: microservices, event-driven, distributed systems, e-commerce, domain-driven-design, java, python, spring cloud, spring boot, spring cloud gateway, spring cloud sleuth, zipkin, resilience4j, postgresql, mongodb, redis, cache, rabbitmq, kubernetes, k8s, terraform, observability, prometheus, grafana, elasticsearch, fluentd, kibana

Note: If you find this project interesting, there is no better way to show it than by ★ starring the repository!

Architecture

The architecture proposes a microservices-oriented implementation where each microservice is responsible for a single business capability. The microservices are deployed in a containerized environment (Docker) and orchestrated by a control loop (Kubernetes) which continuously compares the state of each microservice to the desired state, and takes necessary actions to arrive at the desired state.
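
The reconcile idea behind that control loop can be sketched in a few lines of Python (names and the dict-based state are purely illustrative; a real Kubernetes controller watches the API server instead):

```python
# Minimal reconcile loop: compare desired vs. actual replica counts
# and emit the corrective actions, as a Kubernetes controller would.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to move `actual` toward `desired`."""
    actions = []
    for service, want in desired.items():
        have = actual.get(service, 0)
        if have < want:
            actions.append(("scale-up", service, want - have))
        elif have > want:
            actions.append(("scale-down", service, have - want))
    return actions

desired = {"order": 3, "cart": 2}
actual = {"order": 1, "cart": 2}
print(reconcile(desired, actual))  # [('scale-up', 'order', 2)]
```

Running this repeatedly against fresh observations is exactly the "continuously compare and correct" behavior described above.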

Each microservice stores its data in its own database tailored to its requirements, such as an In-Memory Database for a shopping cart whose persistence is short-lived, a Document Database for a product catalog for its flexibility, or a Relational Database for an order management system for its ACID properties.
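
Why an in-memory store with expiry suits the cart: entries simply vanish after a time-to-live, much like Redis's `EXPIRE`. A toy sketch of that behavior (hypothetical class, not the project's code):

```python
import time

class TTLCart:
    """Toy in-memory cart whose entries expire, mimicking Redis EXPIRE."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._items = {}  # user_id -> (cart_contents, expiry_timestamp)

    def put(self, user_id: str, items: list) -> None:
        self._items[user_id] = (items, time.monotonic() + self.ttl)

    def get(self, user_id: str):
        entry = self._items.get(user_id)
        if entry is None:
            return None
        items, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._items[user_id]  # lazily evict expired carts
            return None
        return items

cart = TTLCart(ttl_seconds=0.05)
cart.put("alice", ["sku-1", "sku-2"])
print(cart.get("alice"))   # ['sku-1', 'sku-2']
time.sleep(0.06)
print(cart.get("alice"))   # None — the cart expired
```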

Microservices communicate externally via REST through a secured API Gateway, and internally via

  • gRPC for synchronous communication which excels for its performance
  • an event bus for asynchronous communication in which the receiving microservice is free to handle the event whenever it has the capacity
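
The asynchronous path can be pictured with a plain queue: publishing returns immediately, and the subscriber drains messages whenever it has capacity (a single-threaded sketch using Python's stdlib `queue` as a stand-in for RabbitMQ):

```python
import queue

# A queue decouples publisher from subscriber: publishing never waits
# for the handler, just as with a RabbitMQ exchange/queue pair.
event_bus: "queue.Queue[dict]" = queue.Queue()

def publish(event: dict) -> None:
    event_bus.put(event)  # fire-and-forget

def drain(handler) -> int:
    """Let the subscriber process whatever it currently has capacity for."""
    handled = 0
    while not event_bus.empty():
        handler(event_bus.get())
        handled += 1
    return handled

stock_updates = []
publish({"type": "order-placed", "sku": "sku-1", "qty": 2})
publish({"type": "order-placed", "sku": "sku-9", "qty": 1})
print(drain(stock_updates.append))  # 2 — handled later, not at publish time
```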

Below is a visual representation:

*(Architecture diagram)*

  • All microservices are inside a private network and not accessible except through the API Gateway.
  • The API Gateway routes requests to the corresponding microservice, routes requests to the appropriate endpoint based on the client (Backend for Frontend), and validates the authorization of requests.
  • The Identity Microservice acts as an Identity Provider and is responsible for storing users and their roles, and for issuing authorization credentials.
  • The Cart Microservice manages the shopping cart of each user. It uses a cache (Redis) as the storage.
  • The Product Microservice stores the product catalog and stock. It's subscribed to the Event Bus to receive notifications of new orders and update the stock accordingly.
  • The Order Microservice manages order processing and fulfillment. It performs a gRPC call to the Product Microservice to check the availability and pricing of the products in the order pre-checkout and publishes events to the Event Bus to initiate a payment and to update the stock post-checkout.
  • The gRPC communication between the microservices is fault-tolerant thanks to a circuit breaker.
  • The Payment Microservice handles payment processing. It's subscribed to the Event Bus to receive notifications of new orders and initiate payments. It does not sit behind the API Gateway, as it is never accessed directly by users. It is also stateless and stores no data.
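
The circuit breaker guarding those gRPC calls works roughly like this state machine: consecutive failures trip the breaker open, so further calls fail fast instead of piling onto a struggling service. A simplified sketch (Resilience4j additionally offers a half-open probing state, timeouts, and metrics):

```python
class CircuitBreaker:
    """Minimal closed/open breaker; real ones (e.g. Resilience4j) also half-open."""

    def __init__(self, failure_threshold: int):
        self.failure_threshold = failure_threshold
        self.failures = 0
        self.state = "CLOSED"

    def call(self, fn):
        if self.state == "OPEN":
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.state = "OPEN"  # stop hammering the failing service
            raise
        self.failures = 0            # a success resets the count
        return result

breaker = CircuitBreaker(failure_threshold=2)

def flaky():
    raise ConnectionError("product service unreachable")

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass
print(breaker.state)  # OPEN
```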

Observability services include:

  • Sleuth and Zipkin, which assign trace IDs to requests so their path across microservices can be followed
  • Prometheus and Grafana, which collect metrics from microservices and raise alerts when a metric exceeds a threshold
  • Elasticsearch, Fluentd, and Kibana, which aggregate logs from all microservices

Future work:

  • Outsource authentication to a third-party identity provider such as Keycloak

Setup

Prerequisites

  • Docker

Yes, that's it!

Development

  1. Create the env file and fill in the missing values

    cp .env.example .env
    vi .env
    ...
    
  2. Start the containers

    docker compose -f docker-compose.dev.yml up
    

    The first time you run this command, it will take a few minutes to build the images. After that, the application is available locally on port 8080. Changes to the source code are reflected in the containers automatically, with no extra steps.

    To stop the containers, run:

    docker compose -f docker-compose.dev.yml down
    

    To remove saved data along with the containers, run the following command:

    docker compose -f docker-compose.dev.yml down -v
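
The live-reload behavior described above typically comes from bind-mounting the source tree into the container. A hypothetical excerpt of what a docker-compose.dev.yml service entry can look like (the service name, image, and paths are illustrative, not taken from this repository):

```yaml
services:
  order-service:
    build: ./order-service               # build the image from local sources
    volumes:
      - ./order-service/src:/app/src     # bind mount: host edits appear in the container
    ports:
      - "8080:8080"
```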
    

Production

Deploy containers with docker compose

  1. Create the env file and fill in the missing values

    cp .env.example .env
    vi .env
    ...
    


  2. (Optional) Run the following command to build the images locally:

    docker compose build
    

    It will take a few minutes. Alternatively, you can skip this step and the images will be pulled from Docker Hub.

  3. Start the containers

    docker compose up
    

You can now access the application locally at port 8080.

Deploy to local Kubernetes cluster

  1. Ensure Kubernetes is enabled in Docker Desktop (or alternatively, install Minikube and start it with minikube start)

  2. Enter the directory containing the Kubernetes manifests

    cd k8s
    
  3. Create the env file and fill in the missing values

    cp ./config/.env.example ./config/.env
    vi ./config/.env
    ...
    
  4. Create the namespace

    kubectl apply -f ./namespace
    
  5. Change the context to the namespace

    kubectl config set-context --current --namespace=eshoponsteroids
    
  6. Create Kubernetes secrets from the env file

    kubectl create secret generic env-secrets --from-env-file=./config/.env
    
  7. Apply the configmaps

    kubectl apply -f ./config
    
  8. Apply the persistent volumes

    kubectl apply -f ./volumes
    
  9. Install the Kubernetes Metrics Server (needed to autoscale microservices based on resource metrics)

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
    
  10. Deploy the containers

    kubectl apply -f ./deployments
    
  11. Expose the API gateway

    kubectl apply -f ./networking/node-port.yml
    

You can now access the application locally at port 30080.
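
The Metrics Server installed in step 9 is what makes metric-based autoscaling possible. A hypothetical HorizontalPodAutoscaler for one of the deployments could look like this (the deployment name and thresholds are illustrative, not taken from the repository's manifests):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: order-service
  namespace: eshoponsteroids
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: order-service
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```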

To tear everything down, run the following command:

kubectl delete namespace eshoponsteroids

Future work:

  • Simplify the deployment process by templating similar manifests with Helm
  • Overlay/patch manifests to tailor them to different environments (dev, staging, prod, ...) using Kustomize

Deploy to AWS EKS cluster

Typically, operations teams provision cloud resources, and dev teams focus more on shipping their code to these resources. However, as a developer, to be able to design cloud-native applications such as this one, it is important to understand the infrastructure on which your code runs (hence the rise of DevOps as a software development methodology). That is why we will provision our own Kubernetes cluster on AWS EKS (Elastic Kubernetes Service) for our application.

For this section, in addition to Docker you will need:

  • Basic knowledge of the AWS platform
  • An AWS account
  • AWS CLI configured with the credentials of either your account or an IAM user with administrator access (run aws configure)

Here is a breakdown of the resources to set up and configure:

  • VPC (Virtual Private Cloud): a virtual network that represents a logically isolated section of the AWS cloud where our cluster will reside
  • Subnets: 2 public and 2 private subnets in different availability zones (required by EKS to ensure high availability of the cluster). Think of subnets as segments of our VPC that allow us to group resources based on their security and connectivity requirements
  • Internet Gateway: a VPC component that allows our public subnets to access the internet
  • NAT Gateway: a VPC component that allows our private subnets to access the internet but prevents the internet from initiating connections to our private subnets
  • Route Tables: sets of rules called routes that direct traffic; here they connect the internet gateway to the public subnets and the NAT gateway to the private subnets
  • Security Groups: they act as virtual firewalls controlling inbound and outbound traffic to resources in the VPC
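
In Terraform, the VPC and one public subnet from the list above can be sketched as follows (resource names, CIDRs, and the availability zone are placeholders, not the project's actual configuration):

```hcl
resource "aws_vpc" "eshop" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.eshop.id
}

resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.eshop.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "eu-west-1a"
  map_public_ip_on_launch = true   # public subnet: instances receive public IPs
}
```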
