When most people talk about Git, there is usually a mental link between Git and GitHub. What many do not realise is that Git is a local version control system.
GitHub extends its functionality by giving us a remote, cloud-based place to store our version-controlled files. This creates a resilient setup: if your computer malfunctions, your files are safe because they are saved in the cloud.
But did you know you can have a full local Git workflow that does not involve GitHub? This was the main intention of Git before cloud providers came into the picture. Let me bring you up to speed on the setup.
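To make this concrete, here is a minimal sketch of a GitHub-free workflow: a bare repository on the local machine (or any path you can reach, e.g. over SSH) plays the role of the remote. All paths and names below are illustrative.

```shell
# Clean up any previous run of this demo
rm -rf /tmp/demo-remote.git /tmp/demo-work /tmp/demo-clone

# 1. Create a bare repository to act as the local "remote"
git init --bare /tmp/demo-remote.git

# 2. Create a working repository and make a commit
git init /tmp/demo-work
cd /tmp/demo-work
echo "hello" > notes.txt
git add notes.txt
git -c user.name="demo" -c user.email="demo@example.com" commit -m "first commit"

# 3. Point the working repo at the bare repo and push the current branch
git remote add origin /tmp/demo-remote.git
git push origin HEAD

# 4. Anyone with access to that path can clone it, just like GitHub
git clone /tmp/demo-remote.git /tmp/demo-clone
```

The bare repository has no working tree; it exists purely to be pushed to and cloned from, which is exactly the job GitHub normally does.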
Building a LAMP Stack
LAMP
LAMP stands for Linux, Apache, MySQL, PHP.
Scenario
>X-FusionCorp Industries is planning to host a WordPress website on their infrastructure in Stratos Datacenter.
They have already done the infrastructure configuration; for example,
on the storage server they already have a shared directory /var/www/html that is mounted on each app host under the /var/www/html directory.
Please perform the following steps to accomplish the task:
a. Install httpd, php, and their dependencies on all app hosts.
b. Apache should serve on port 6200 within the apps.
c. Install/Configure MariaDB server on DB Server.
d. Create a database named kodekloud_db5 and a database user named kodekloud_top identified by the password YchZHRcLkL.
Further, make sure this newly created user is able to perform all operations on the database you created.
e. Finally, you should be able to access the website on the LBR link by clicking on the App button on the top bar.
You should see a message like App is able to connect to the database using user kodekloud_top
Solution
Considering that I am working on CentOS, all commands will apply to all RHEL flavours.
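As a sketch, the steps above map to roughly the following commands (exact package names and the MariaDB root-login method vary by distribution and lab setup, so treat these as assumptions):

```shell
# a. On each app host: install Apache (httpd), PHP and its dependencies
sudo dnf install -y httpd php php-mysqlnd

# b. Make Apache serve on port 6200
sudo sed -i 's/^Listen 80$/Listen 6200/' /etc/httpd/conf/httpd.conf
sudo systemctl enable --now httpd

# c. On the DB server: install and start MariaDB
sudo dnf install -y mariadb-server
sudo systemctl enable --now mariadb

# d. Create the database and user, and grant all privileges
mysql -u root <<'SQL'
CREATE DATABASE kodekloud_db5;
CREATE USER 'kodekloud_top'@'%' IDENTIFIED BY 'YchZHRcLkL';
GRANT ALL PRIVILEGES ON kodekloud_db5.* TO 'kodekloud_top'@'%';
FLUSH PRIVILEGES;
SQL
```

With that in place, the App button on the LBR link should report a successful connection as user kodekloud_top.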
Install and configure a web application(Apache)
The problem
XCorp Industries requires me, as the DevOps Engineer, to configure a web server in their Stratos Data Center in preparation for the website that is still in progress.
Requirements
- Install Apache on the specified server
- The website should be served on port 8086
- Use the provided templates, i.e. `cluster` and `ecommerce`, to serve the files
- The following endpoints should serve the respective files: `curl localhost:8086/cluster` and `curl localhost:8086/ecommerce/`
The solution
- Installing Apache
- Considering that we are using a RHEL-based system, I will use the default package manager to install Apache (httpd)
# using dnf
sudo dnf install httpd -y
By default, the Apache service will be disabled, and this is the best time to make changes to the config file.
The Apache config file is found in /etc/httpd/conf/httpd.conf. This path is consistent across RHEL-based flavours; Debian-based systems use /etc/apache2 instead.
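For the port-8086 task above, the changes look roughly like this (the template source paths are assumptions; adjust them to wherever the lab provides the templates):

```shell
# Change the port Apache listens on
sudo sed -i 's/^Listen 80$/Listen 8086/' /etc/httpd/conf/httpd.conf

# Copy the provided templates into the document root
sudo cp -r /path/to/cluster /path/to/ecommerce /var/www/html/

# Start Apache and verify both endpoints
sudo systemctl enable --now httpd
curl localhost:8086/cluster/
curl localhost:8086/ecommerce/
```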
How to use Nginx as a Load Balancer
What is a Load Balancer?
In situations where we have a single server and multiple clients, the server can be overwhelmed, making it unable to serve web pages as expected. It is therefore recommended to run multiple servers and distribute the request load between them. Having multiple servers is good, but it is of little use without a tool that handles the logic of how the load is balanced across them.

A load balancer accepts requests from users on behalf of the servers and then routes traffic to them depending on the algorithm used. The algorithm could require the load balancer to forward traffic only to servers that are alive, or simply to any server. The main goal is to reduce the load on each server; since no single server has to handle all the traffic, the setup is also more resilient to traffic spikes and denial-of-service attacks. As discussed earlier, Nginx can also be used as a load balancer.
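As a sketch, a minimal round-robin load balancer in Nginx looks like this (the backend addresses and file path are placeholders):

```nginx
# /etc/nginx/conf.d/loadbalancer.conf
upstream app_servers {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;

    location / {
        # Forward each request to one of the servers above (round-robin by default)
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Other balancing methods such as `least_conn` or `ip_hash` can be enabled by adding the corresponding directive inside the `upstream` block.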
Using Nginx as a Web server
As mentioned earlier, Nginx can be used in a number of ways. One of them is as a web server.
A web server is software that sends stored web pages to users on request. By default, the web uses the HTTP protocol on port 80, which sends data unencrypted. Nginx supports both HTTP and HTTPS, meaning we can also send data securely.
Nginx configuration files are found in the /etc/nginx directory. On Debian-based systems we mainly make changes in the sites-available directory, which can house multiple sites, each described by its own configuration file.
After creating a site configuration file, we publish the site to sites-enabled. This is done by creating a symbolic link with the command
sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default
The command above links sites-available/default into sites-enabled, and for the changes to take effect we reload Nginx using the command sudo nginx -s reload.
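Putting it together, a minimal site configuration file in sites-available might look like this (the domain and document root are placeholders):

```nginx
# /etc/nginx/sites-available/default
server {
    listen 80;
    server_name example.com;

    root /var/www/html;
    index index.html;

    location / {
        # Serve the requested file or directory, otherwise return 404
        try_files $uri $uri/ =404;
    }
}
```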
What is Nginx?
This is software used to 'serve' web content to our browser. It is an intermediary between a client and a server. In this instance, it acts as a reverse proxy, receiving a client's requests on behalf of a server. Nginx can be used as:
- HTTP server
- Reverse proxy
- Mail proxy
- Generic TCP/UDP proxy server

Features in Nginx are activated using modules, which allows admins to enable only the features they need. This means you could install Nginx and use it purely as an HTTP server.
How to create a 3-node kubernetes cluster using kubeadm
- In Kubernetes, a cluster is basically a collection of nodes. We mainly have a `control plane` that does all the administrative operations and the `nodes`, which are responsible for housing the `pods`.
Control plane
- This is the brain of the whole system: it processes instructions and sends them to the nodes via the `kubelet`.
- The control plane contains the following:
- kube-apiserver
- etcd
- kube-scheduler
- kube-controller-manager
- Note that the `kubelet` runs on every node rather than in the control plane, and `kubectl` is the client used to talk to the cluster.
Apiserver
- This is the component that receives requests from users and redirects them to different components within the control plane.
- It acts as a gateway to the outside world and receives requests from the user via the `kubectl` client.
- If a request is sent to create a pod, e.g. `kubectl run nginx --image nginx`, the request is passed to the `kube-scheduler`, which decides which node is going to process the workload.
- The `kubelet` is found on every node and is the gateway between the node and the control plane.
- It keeps regular watch for any instructions coming from the apiserver, which are sent as a `PodSpec`.
- The kubelet checks the PodSpec to see if the pods are in the described state; if not, it communicates with the `container runtime` to create pods so as to meet the desired state.
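For illustration, the PodSpec the kubelet reconciles for `kubectl run nginx --image nginx` is roughly equivalent to this manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
```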
The setup
This is a basic setup running on a Proxmox server. I have three VMs: control, node0 and node1.
The main intention is to create a Kubernetes cluster.
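The bootstrap itself goes roughly as follows (the pod CIDR and the choice of Flannel as the CNI plugin are assumptions, not requirements):

```shell
# On the control VM: initialise the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin (Flannel matches the CIDR above)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On node0 and node1: run the join command printed by `kubeadm init`, e.g.
# sudo kubeadm join <control-ip>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```

Once both workers have joined, `kubectl get nodes` on the control VM should list all three nodes.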
How to upgrade a kubernetes node in a cluster
Context
We have three nodes running:
- control
- node0
- node1
Upon running the command kubectl get nodes, we realise that the two worker nodes are on Kubernetes v1.30 while the control plane is on Kubernetes v1.34.
Our main task is to upgrade the two worker nodes.
NOTE: kubeadm only supports upgrading one minor version at a time, so a jump like v1.30 to v1.34 has to be done in steps.
When upgrading nodes in a cluster, you need to work on one node at a time to preserve high availability. This means the system does not have to go down because of an upgrade.
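Taken one node at a time, the procedure looks roughly like this (the target version and apt as the package manager are assumptions; adapt them to your distribution):

```shell
# From a machine with kubectl access: drain the node so workloads move elsewhere
kubectl drain node0 --ignore-daemonsets

# On node0 itself: upgrade kubeadm first, then the node, then kubelet/kubectl
sudo apt-get update
sudo apt-get install -y kubeadm='1.31.0-*'
sudo kubeadm upgrade node
sudo apt-get install -y kubelet='1.31.0-*' kubectl='1.31.0-*'
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Back on the control machine: make the node schedulable again, then verify
kubectl uncordon node0
kubectl get nodes
```

Repeat the same steps for node1 only after node0 reports Ready at the new version.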
DevOps Zero to Hero #1: Hardware Review
So, the hardware landed two weeks after the order. The specs were:
- Lenovo ThinkSystem SR650 V3 (2U Rack Server)
- Processor: Intel Xeon Silver 4410Y (12C, 2.0GHz)
- Memory: 1 x 64GB DDR5 RDIMM
- Disk Bay: 8 SAS/SATA SFF
- Storage: 2 x 480GB Read Intensive SATA SSD, 4 x 1.2TB SAS HDD
- NIC: 4-port 1GbE RJ45, 2-port 10GbE SFP28
- Power Supply: 2 x 1100W Titanium
- Fans: 5 x Standard Fans
Processor
Coming from the world of regular CPUs, you might not be familiar with Intel Xeon CPUs. Server CPUs are designed to balance performance and power consumption, striking a delicate balance. They are not as fast as regular CPUs in terms of clock speed, but they are optimized for sustained performance under heavy workloads.
AWS: Identity and Access Management(IAM)
What is IAM?
IAM stands for Identity and Access Management. It is a tool provided by cloud providers that you can use for user, role, and privilege management. This tool is very important as it can determine how vulnerable or strong your account can be. IAM is one of the main security tools provided by AWS as part of its shared responsibility model in the cloud.