Web Deployment with Cluster Monitoring on Amazon EKS and Fargate

This is a task given by Mr. Vimal Daga in EKS Training.

Overview

Main Objective : Deploy a WordPress website on an EKS cluster via an AWS Fargate profile, and use the Helm package manager to integrate monitoring with Prometheus and graphical analysis with Grafana.

We'll try to follow best practices throughout the process. Our aim is to maximize security and deliver a highly available and scalable web application. First we will create the cluster using a Fargate profile, then launch the WordPress and MySQL pods. We'll use Secrets to supply the passwords to the MySQL and WordPress pods, and we'll prohibit any outside access to our database deployment by exposing it only through a ClusterIP Service. The WordPress pods will be connected to a LoadBalancer Service, which by default uses AWS ELB (Elastic Load Balancer). We will then use Helm, the package manager for Kubernetes, to install and launch a Prometheus server and integrate it with Grafana, which will give us a beautiful dashboard for monitoring the cluster.

Let's begin,

Getting Started:

First, we have to interact with AWS, and there are three ways to do it:

  • WebUI
  • CLI (Command Line Interface)
  • SDK (Software Development Kit)

We can use any of these ways, but the CLI helps us cover many more use cases. To use it, we first have to create an IAM user in AWS. While creating the user we download a credentials file containing our access key and secret key, which we use to connect to AWS from the CLI. After downloading it, we install the AWS CLI on our device and run the command aws configure, supplying the access and secret keys from the credentials file to log in to AWS.

We'll use the AWS CLI Tool for working with AWS.

AWS EKS Service: Amazon Elastic Kubernetes Service is a fully managed Kubernetes service. EKS provides a scalable and highly available control plane that runs across multiple Availability Zones to eliminate a single point of failure, and it automatically applies the latest security patches to your cluster's control plane. You can also easily migrate any standard Kubernetes application to EKS without needing to refactor your code.


More info : EKS Documentation

Now, for connecting to the AWS EKS service through the CLI, we have two options:

  • aws eks : It's part of the AWS CLI tools. It can be used for launching EKS clusters, but this command doesn't offer a high level of customization.
  • eksctl : It's a simple CLI tool for creating clusters on EKS, Amazon's managed Kubernetes service for EC2. It is written in Go and uses CloudFormation. This is the method we will use.

Other tools :

Eksctl : The command line tool for managing EKS clusters. To connect to the EKS service using the eksctl command through the CLI we have to download eksctl, and we also have to download kubectl, because kubectl is the tool through which the client connects to the Kubernetes cluster (master).

Kubectl : This command line tool lets you control Kubernetes clusters. For configuration, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.

Helm : The package manager for Kubernetes. It helps you manage Kubernetes applications, and Helm Charts help you define, install, and upgrade even the most complex Kubernetes applications. We'll use this to install Prometheus and Grafana.

Download the tools required:



Install and set the PATH Variable

Install the AWS CLI, and extract the other binaries to a common folder that must be added to the PATH environment variable.

I had Minikube installed with its installation path added to the PATH variable, so I just copied the binaries to the same folder to keep things simple :).



Editing the Environment Variables :


Make sure that both the paths are added.

Configure the AWS CLI

Enter your details as asked by the cli. It's recommended to create a separate IAM user with only the minimum required access policies.



cluster.yml

Now we have to create the cluster using the eksctl command and a YAML file in which we provide the nodegroups, the number of nodes, and the instance types.

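The original cluster.yml screenshot isn't reproduced here, but a minimal sketch of such an eksctl ClusterConfig could look like the following. The cluster name cluster1 and the total of four nodes match the output shown later; the region, nodegroup names, and instance types are assumptions to adjust for your own setup, and the Spot Instance nodegroup is left commented out, as discussed below.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: cluster1
  region: ap-south-1            # assumed region; use your own

nodeGroups:
  - name: ng1
    desiredCapacity: 2
    instanceType: t2.micro      # small instances may starve pods of resources
  - name: ng2
    desiredCapacity: 2
    instanceType: t2.small
  # Optional Spot Instance nodegroup, kept commented out as in the original:
  # - name: ng-spot
  #   desiredCapacity: 2
  #   instancesDistribution:
  #     instanceTypes: ["t3.small", "t3.medium"]
  #     onDemandPercentageAboveBaseCapacity: 0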


What is a Spot Instance ? A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Since it enables us to request unused EC2 instances at steep discounts, we can lower our Amazon EC2 costs significantly. I am not using any Spot Instances for now, so I have commented out that part. Remember that smaller instance types have limitations, and many pods can fail to launch because of insufficient resources.

Creating the cluster using the cluster.yml file :

eksctl create cluster -f cluster.yml


We can check that the cluster and its nodegroups have been created using the following commands:

eksctl get cluster

eksctl get nodegroup --cluster cluster1

If we want to connect kubectl to this cluster, we have to update the kubeconfig file. We can do so using the command:

aws eks update-kubeconfig --name cluster1


We can also check from the WebUI that our cluster has been created.


This cluster has a total of 4 nodes, which we can verify using both the CLI and the WebUI.

EFS Configuration

The WHAT : Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

The WHY : We need centralized storage that is persistent and mountable by numerous pods simultaneously. EBS fails on these parameters, so we'll go for EFS instead of the standard EBS volumes we already get.

The HOW : We have to create a file system using the EFS (Elastic File System) service of AWS to provide persistent storage to our pods. While creating the EFS storage we have to make sure that we create it in the same VPC in which our nodes have been created, so that they can connect to the EFS storage. We also have to attach the same security group that is used by the nodes.

First, we also have to install amazon-efs-utils on all the slave/worker nodes, because this utility must be present on every node if we want to connect our pods to the EFS storage. We can install it on each node using the following command:

yum install amazon-efs-utils -y

Then connect to the instance via Browser based SSH Connection or any other way you like.


Execute the command to install the packages required,


Now creating an EFS Volume,


Important : Remember to select the Same VPC and Security Groups as the EKS Cluster as shown. Also create mount points in all the AZs available.


Keep a record of the DNS Name and FILE_SYSTEM_ID provided for the EFS Volume, we'll use them later.

Next I will create a new namespace for this project, ‘efsns’, and set it as the default namespace using the following command:

kubectl config set-context --current --namespace=efsns



create-efs-provisioner.yaml

Next, I have created an EFS provisioner with the help of YAML code in the file ‘create-efs-provisioner.yaml’. I have provided the FILE_SYSTEM_ID and DNS name from my already created EFS file system, so that the cluster can dynamically provision storage on the EFS volume running in the cloud.

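The YAML in the screenshot isn't reproduced here, but a minimal sketch of such a provisioner Deployment (based on the commonly used community efs-provisioner image) looks like this. The file system ID, region, DNS name, and provisioner name are placeholders that must be replaced with your own values.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:v0.1.0
          env:
            - name: FILE_SYSTEM_ID
              value: fs-xxxxxxxx                 # your EFS FILE_SYSTEM_ID
            - name: AWS_REGION
              value: ap-south-1                  # region of the EFS volume
            - name: PROVISIONER_NAME
              value: eks-prov/aws-efs            # referenced by the StorageClass below
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: fs-xxxxxxxx.efs.ap-south-1.amazonaws.com   # your EFS DNS name
            path: /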


create-rbac.yaml

Let's create a file ‘create-rbac.yaml’ to grant some permissions using Role Based Access Control (RBAC). The provisioner has to talk to the Kubernetes API to create volumes backed by EFS and get them mounted into pods, and it needs certain powers to do so; those powers are granted through a role and a role binding.

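As a sketch, a simple (and rather permissive) RBAC file for this setup binds the cluster-admin ClusterRole to the default service account of the efsns namespace, which is what the provisioner pod runs as; in production you would scope this down.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-role-binding
subjects:
  - kind: ServiceAccount
    name: default                   # service account used by the efs-provisioner pod
    namespace: efsns
roleRef:
  kind: ClusterRole
  name: cluster-admin               # broad for simplicity; narrow this in production
  apiGroup: rbac.authorization.k8s.io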

create-storage.yaml

And a ‘create-storage.yaml’ file for creating a new SC (StorageClass) along with the PVCs (Persistent Volume Claims) that will be attached to the pods. Whenever a PVC refers to this SC, a dynamic PV (Persistent Volume) is created and bound to it.

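A sketch of ‘create-storage.yaml’ could look like this; the StorageClass provisioner field must match the PROVISIONER_NAME set above, and the PVC names (efs-mysql, efs-wordpress) and sizes are assumptions that the deployment sketches further down reuse.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: eks-prov/aws-efs         # must match PROVISIONER_NAME in the provisioner
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-mysql
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany                   # EFS lets many pods mount the volume simultaneously
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-wordpress
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi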


Next I have created a Secret so that I do not reveal any crucial info while creating the MySQL and WordPress pods. We will use this Secret to provide environment variables like the MySQL password.

kubectl create secret generic mysqlsecret --from-literal=password=redhat

We can run all these files in the following order:

kubectl create -f create-efs-provisioner.yaml

kubectl create -f create-rbac.yaml

kubectl create -f create-storage.yaml


deploy-mysql.yml

Now we need to launch a MySQL database which will be connected to WordPress. For doing so, I have launched a pod using the MySQL version 5.7 image. I have picked the environment variables from the pre-created Secret, and I have also created a Service of type: ClusterIP which is connected to this pod.

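The manifest in the screenshot isn't reproduced; a minimal sketch of such a ‘deploy-mysql.yml’ would be the following, where the Secret name mysqlsecret and key password come from the command above, and the PVC name efs-mysql is the assumption from the storage sketch.

apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  # no "type" field, so this defaults to ClusterIP and is reachable only inside the cluster
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqlsecret          # Secret created earlier
                  key: password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: efs-mysql             # assumed PVC name from the storage sketch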

deploy-wordpress.yml

Next I have launched a pod using the WordPress version 4.8-apache image. I have provided the environment variables and created a Service of type: LoadBalancer, which helps us expose this pod to the outside world so that we can reach WordPress. For this Service type, Kubernetes uses the ELB (Elastic Load Balancer) service of AWS to create a load balancer and connect it to the pod.

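Again as a sketch (not the exact file from the screenshot), ‘deploy-wordpress.yml’ would look roughly like this, reusing the same assumed Secret and the assumed efs-wordpress PVC.

apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: LoadBalancer                         # AWS provisions an ELB for this Service
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
        - name: wordpress
          image: wordpress:4.8-apache
          env:
            - name: WORDPRESS_DB_HOST
              value: wordpress-mysql         # the ClusterIP Service of the MySQL deployment
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysqlsecret
                  key: password
          ports:
            - containerPort: 80
          volumeMounts:
            - name: wordpress-persistent-storage
              mountPath: /var/www/html
      volumes:
        - name: wordpress-persistent-storage
          persistentVolumeClaim:
            claimName: efs-wordpress         # assumed PVC name from the storage sketch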

Now run these files as well :

kubectl create -f deploy-mysql.yml

kubectl create -f deploy-wordpress.yml



After running these files we can check using WebUI that a Load balancer has been created as well. Browse to the DNS Name of the Load Balancer.


We'll land at the Wordpress Welcome Page:


Now continue through the basic installation wizard :


Here, We'll create one blog post.


Create and then browse to the newly generated page.


It will look like this :


Even if you delete any pods now with kubectl, they will automatically be recreated by the Kubernetes ReplicaSets that are part of the Kubernetes Deployments we created. Also, no data will be lost, because we are using Persistent Volumes backed by the centralized AWS EFS volume.

That's it for the complete Wordpress deployment using AWS EKS.

Finally, delete the cluster to avoid any unwanted costs:

eksctl delete cluster -f cluster.yml


Now, let's have a look at another AWS Service,


AWS Fargate


AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission critical applications on Fargate.

We can use Fargate for creating a Kubernetes cluster just like EKS, but with Fargate we get a serverless cluster: when a client sends a request to launch a pod, Fargate provisions compute with exactly the resources required to run that pod. There are no pre-created worker/slave nodes when we create a Fargate cluster; capacity is created only when a client demands a pod, and that is why it is known as a serverless cluster.

fcluster.yml

I have created a file ‘fcluster.yml’ for creating a Fargate cluster. We will create this cluster in the ap-southeast-1 (Singapore) region.

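The file from the screenshot isn't shown; a minimal eksctl config for a Fargate-only cluster in ap-southeast-1 could look like this, where the cluster name and the namespaces selected by the Fargate profile are assumptions.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: fcluster                   # assumed name
  region: ap-southeast-1

fargateProfiles:
  - name: fp-default
    selectors:
      # pods in these namespaces are scheduled onto Fargate
      - namespace: default
      - namespace: kube-system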

We can create this cluster using the following command:

eksctl create cluster -f fcluster.yml


We can check through CloudFormation and also through the EKS console that our Fargate cluster is being created. After creating the cluster we can launch pods, and we will see that a new worker node is created whenever a pod is requested.



Also update the kubeconfig so that helm and kubectl can recognise our cluster configuration and install applications on it.

aws eks update-kubeconfig --name cluster1


Now, let's get into monitoring our cluster.


PROMETHEUS :

Prometheus is an open-source monitoring system with a dimensional data model, a flexible query language, an efficient time-series database, and a modern alerting approach. Prometheus collects metric data from exporter programs and saves it in the time-series database. An exporter is a program which we install on the nodes and which exposes the metrics.



GRAFANA :

Grafana is a multi-platform, open-source analytics and interactive visualization web application. It provides charts, graphs, and alerts for the web when connected to supported data sources. It is expandable through a plug-in system, and end users can create complex monitoring dashboards using interactive query builders.

Let's launch Prometheus and Grafana by using Helm.

Helm

We need to initialise Helm before its first use (these commands use Helm 2, which relies on Tiller). We also need to create a service account with the required permissions for Tiller.

kubectl -n kube-system create serviceaccount tiller

kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller

helm init --service-account tiller --upgrade

Installing Prometheus in its namespace :

kubectl create namespace prometheus

helm install stable/prometheus --namespace prometheus --set alertmanager.persistentVolume.storageClass="gp2" --set server.persistentVolume.storageClass="gp2"

Use Port Forwarding to view the dashboard from our local machine :

kubectl get svc -n prometheus
kubectl -n prometheus port-forward svc/dull-bumblebee-prometheus-server 8888:80

Finally browse to : 127.0.0.1:8888



Installing Grafana in its namespace :

kubectl create namespace grafana


helm install stable/grafana --namespace grafana --set persistence.storageClassName="gp2" --set adminPassword=redhat --set service.type=LoadBalancer

Use Port Forwarding in Grafana :

kubectl get svc -n grafana
kubectl -n grafana port-forward svc/exasperated-seal-grafana 1234:80

The Grafana Dashboard :



After you are done, remember to delete the cluster to avoid any unwanted costs:

eksctl delete cluster -f fcluster.yml

That's it, Thanks for reading !
