# Amazon Prime Clone Deployment Project

## Project Overview
This project demonstrates deploying an Amazon Prime clone using a set of DevOps tools and practices. The primary tools include:
- Terraform: Infrastructure as Code (IaC) tool to create AWS infrastructure such as EC2 instances and EKS clusters.
- GitHub: Source code management.
- Jenkins: CI/CD automation tool.
- SonarQube: Code quality analysis and quality gate tool.
- npm: Package manager and build tool for Node.js.
- Aqua Trivy: Security vulnerability scanner.
- Docker: Containerization tool to create images.
- AWS ECR: Repository to store Docker images.
- AWS EKS: Managed Kubernetes service for running containers.
- ArgoCD: Continuous deployment tool.
- Prometheus & Grafana: Monitoring and alerting tools.
## Pre-requisites
- AWS Account: Ensure you have an AWS account. (Create an AWS Account)
- AWS CLI: Install the AWS CLI on your local machine. (AWS CLI Installation Guide)
- VS Code (Optional): Download and install VS Code as a code editor. (VS Code Download)
- Terraform: Download and install Terraform on Windows. (Terraform in Windows)
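After installing, you can confirm each tool is available on your `PATH` before continuing (version numbers will vary by machine):

```shell
# Quick sanity check of the pre-requisite tools
aws --version        # AWS CLI
terraform -version   # Terraform
code --version       # VS Code (optional)
```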
## Configuration

### AWS Setup
- IAM User: Create an IAM user and generate the access and secret keys to configure your machine with AWS.
- Key Pair: Create a key pair named `key` for accessing your EC2 instances.
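If you prefer the CLI over the AWS console, the same key pair can be created like this (a sketch; the key name `key` matches the step above):

```shell
# Create the EC2 key pair named "key" and save the private key locally
aws ec2 create-key-pair \
  --key-name key \
  --query 'KeyMaterial' \
  --output text > key.pem
chmod 400 key.pem   # restrict permissions so SSH will accept the key file
```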
## Infrastructure Setup Using Terraform

1. Clone the Repository (open a command prompt and run the commands below):

   ```bash
   git clone https://github.com/pandacloud1/DevopsProject2.git
   cd DevopsProject2
   code .   # opens VS Code in the project directory
   ```

2. (Optional) Run the commands below to shorten the path displayed in the VS Code terminal:

   ```powershell
   code $PROFILE
   function prompt { "$PWD > " }
   function prompt { $(Get-Location -Leaf) + " > " }
   ```

3. Initialize and Apply Terraform. Open `terraform_code/ec2_server/main.tf` in VS Code, then run:

   ```bash
   aws configure
   terraform init
   terraform apply --auto-approve
   ```

This will create the EC2 instance and security groups, and install necessary tools such as Jenkins, Docker, and SonarQube.
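Once `terraform apply` finishes, a quick way to confirm the server is up is to hit Jenkins on its default port. This is a sketch: it assumes the Terraform code defines a `public_ip` output, so check `terraform_code/ec2_server` for the actual output names:

```shell
# Inspect what Terraform created
terraform output                          # list all defined outputs
# "public_ip" is a hypothetical output name; adjust to the real one
PUBLIC_IP=$(terraform output -raw public_ip)
curl -I "http://$PUBLIC_IP:8080"          # Jenkins listens on 8080 by default
```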
## SonarQube Configuration
- Login Credentials: Use `admin` for both username and password.
- Generate SonarQube Token:
  - Create a token under `Administration → Security → Users → Tokens`.
  - Save the token for integration with Jenkins.
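Tokens can also be generated through SonarQube's web API instead of the UI. A sketch, assuming the default `admin/admin` credentials (which you should change) and a placeholder server address:

```shell
# Generate a user token via the SonarQube web API
SONAR_URL="http://<ec2-public-ip>:9000"   # placeholder: your SonarQube server
curl -u admin:admin -X POST \
  "$SONAR_URL/api/user_tokens/generate" \
  -d "name=jenkins-token"
# The JSON response contains the token value; store it in Jenkins credentials
```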
## Jenkins Configuration
- Add Jenkins Credentials:
  - Add the SonarQube token, AWS access key, and secret key under `Manage Jenkins → Credentials → System → Global credentials`.
- Install Required Plugins:
  - Install plugins such as SonarQube Scanner, NodeJS, Docker, and Prometheus metrics under `Manage Jenkins → Plugins`.
- Global Tool Configuration:
  - Set up tools such as JDK 17, SonarQube Scanner, NodeJS, and Docker under `Manage Jenkins → Global Tool Configuration`.
## Pipeline Overview

### Pipeline Stages
- Git Checkout: Clones the source code from GitHub.
- SonarQube Analysis: Performs static code analysis.
- Quality Gate: Ensures code quality standards.
- Install NPM Dependencies: Installs NodeJS packages.
- Trivy Security Scan: Scans the project for vulnerabilities.
- Docker Build: Builds a Docker image for the project.
- Push to AWS ECR: Tags and pushes the Docker image to ECR.
- Image Cleanup: Deletes images from the Jenkins server to save space.
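The image reference used in the push and cleanup stages is simply the ECR registry URL, repository name, and tag joined together. A small sketch of how it is composed (the values mirror the pipeline's default parameters; `BUILD_NUMBER` is supplied by Jenkins at run time):

```shell
# Compose the ECR image URI the pipeline pushes and later removes
AWS_ACCOUNT_ID="123456789012"   # pipeline default parameter
REGION="us-east-1"
ECR_REPO_NAME="amazon-prime"    # pipeline default parameter
BUILD_NUMBER="42"               # set by Jenkins in a real run

ECR_REGISTRY="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"
IMAGE_URI="${ECR_REGISTRY}/${ECR_REPO_NAME}:${BUILD_NUMBER}"
echo "$IMAGE_URI"
```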
## Running the Jenkins Pipeline

Create and run the build pipeline in Jenkins using the script below. The pipeline will build, analyze, and push the project's Docker image to ECR.
### Build Pipeline

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'ECR_REPO_NAME', defaultValue: 'amazon-prime', description: 'Enter repository name')
        string(name: 'AWS_ACCOUNT_ID', defaultValue: '123456789012', description: 'Enter AWS Account ID')
    }
    tools {
        jdk 'JDK'
        nodejs 'NodeJS'
    }
    environment {
        SCANNER_HOME = tool 'SonarQube Scanner'
    }
    stages {
        stage('1. Git Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/pandacloud1/DevopsProject2.git'
            }
        }
        stage('2. SonarQube Analysis') {
            steps {
                withSonarQubeEnv('sonar-server') {
                    sh """
                    $SCANNER_HOME/bin/sonar-scanner \
                    -Dsonar.projectName=amazon-prime \
                    -Dsonar.projectKey=amazon-prime
                    """
                }
            }
        }
        stage('3. Quality Gate') {
            steps {
                waitForQualityGate abortPipeline: false,
                credentialsId: 'sonar-token'
            }
        }
        stage('4. Install npm Dependencies') {
            steps {
                sh "npm install"
            }
        }
        stage('5. Trivy Scan') {
            steps {
                sh "trivy fs . > trivy.txt"
            }
        }
        stage('6. Build Docker Image') {
            steps {
                sh "docker build -t ${params.ECR_REPO_NAME} ."
            }
        }
        stage('7. Create ECR Repo') {
            steps {
                withCredentials([string(credentialsId: 'access-key', variable: 'AWS_ACCESS_KEY'),
                                 string(credentialsId: 'secret-key', variable: 'AWS_SECRET_KEY')]) {
                    sh """
                    aws configure set aws_access_key_id $AWS_ACCESS_KEY
                    aws configure set aws_secret_access_key $AWS_SECRET_KEY
                    aws ecr describe-repositories --repository-names ${params.ECR_REPO_NAME} --region us-east-1 || \
                    aws ecr create-repository --repository-name ${params.ECR_REPO_NAME} --region us-east-1
                    """
                }
            }
        }
        stage('8. Login to ECR & Tag Image') {
            steps {
                withCredentials([string(credentialsId: 'access-key', variable: 'AWS_ACCESS_KEY'),
                                 string(credentialsId: 'secret-key', variable: 'AWS_SECRET_KEY')]) {
                    sh """
                    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${params.AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com
                    docker tag ${params.ECR_REPO_NAME} ${params.AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${params.ECR_REPO_NAME}:${BUILD_NUMBER}
                    docker tag ${params.ECR_REPO_NAME} ${params.AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${params.ECR_REPO_NAME}:latest
                    """
                }
            }
        }
        stage('9. Push Image to ECR') {
            steps {
                withCredentials([string(credentialsId: 'access-key', variable: 'AWS_ACCESS_KEY'),
                                 string(credentialsId: 'secret-key', variable: 'AWS_SECRET_KEY')]) {
                    sh """
                    docker push ${params.AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${params.ECR_REPO_NAME}:${BUILD_NUMBER}
                    docker push ${params.AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${params.ECR_REPO_NAME}:latest
                    """
                }
            }
        }
        stage('10. Cleanup Images') {
            steps {
                sh """
                docker rmi ${params.AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${params.ECR_REPO_NAME}:${BUILD_NUMBER}
                docker rmi ${params.AWS_ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com/${params.ECR_REPO_NAME}:latest
                docker images
                """
            }
        }
    }
}
```
## Continuous Deployment with ArgoCD
- Create EKS Cluster: Use Terraform to create an EKS cluster and related resources.
- Deploy Amazon Prime Clone: Use ArgoCD to deploy the application using Kubernetes YAML files.
- Monitoring Setup: Install Prometheus and Grafana using Helm charts for monitoring the Kubernetes cluster.
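ArgoCD itself can be installed with the upstream install manifest before deploying the application. These are the standard commands from the ArgoCD getting-started guide; the LoadBalancer patch is optional and assumes your EKS cluster can provision external load balancers:

```shell
# Install ArgoCD into its own namespace
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Optionally expose the ArgoCD API server externally
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

# Retrieve the initial admin password for the UI login
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d
```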
### Deployment Pipeline

```groovy
pipeline {
    agent any
    environment {
        KUBECTL = '/usr/local/bin/kubectl'
    }
    parameters {
        string(name: 'CLUSTER_NAME', defaultValue: 'amazon-prime-cluster', description: 'Enter your EKS cluster name')
    }
    stages {
        stage("Login to EKS") {
            steps {
                script {
                    withCredentials([string(credentialsId: 'access-key', variable: 'AWS_ACCESS_KEY'),
                                     string(credentialsId: 'secret-key', variable: 'AWS_SECRET_KEY')]) {
                        sh "aws eks --region us-east-1 update-kubeconfig --name ${params.CLUSTER_NAME}"
                    }
                }
            }
        }
        stage("Configure Prometheus & Grafana") {
            steps {
                script {
                    sh """
                    helm repo add stable https://charts.helm.sh/stable || true
                    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts || true
                    # Check if namespace 'prometheus' exists
                    if kubectl get namespace prometheus > /dev/null 2>&1; then
                        # If namespace exists, upgrade the Helm release
                        helm upgrade stable prometheus-community/kube-prometheus-stack -n prometheus
                    else
                        # If namespace does not exist, install the release and create the namespace
                        helm install stable prometheus-community/kube-prometheus-stack -n prometheus --create-namespace
                    fi
                    """
                }
            }
        }
    }
}
```
