In my last devlog, I talked about configuring the db01 server with a shell script that sets up MongoDB. I've since made a few changes to that script and added two more shell scripts for installing and starting MongoDB.
I had to use MongoDB 2.6 because the chat application that's going to run on the app servers expects that version. To check the updated shell scripts, click here.
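For reference, here's a minimal sketch of what the install script does, assuming the legacy yum repository MongoDB published for the 2.6 series (the repo URL comes from the old 2.6 docs and may no longer be live):

# Add the MongoDB 2.6 yum repo (URL from the old 2.6 docs; may be defunct today).
cat > /etc/yum.repos.d/mongodb.repo <<'EOF'
[mongodb]
name=MongoDB Repository
baseurl=http://downloads-distro.mongodb.org/repo/redhat/os/x86_64/
gpgcheck=0
enabled=1
EOF

# Install and start MongoDB (2.6-era packages ship a SysV init script).
yum install -y mongodb-org
service mongod start
chkconfig mongod on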
Configuring app servers
I planned to deploy this project on the app servers, and I didn't face any tough challenges while working on them.
Using Ansible, I added the firewall rules and then installed the git, nodejs, and npm packages. The app servers were running CentOS 7, so I had to install epel-release first. The playbook then clones the GitHub repository.
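Expressed as plain shell, the playbook's tasks boil down to roughly the following; the firewall port (5000, matching the Nginx upstream config below) and the clone URL/path are my assumptions:

# Open the port the app listens on (assuming 5000, as in the Nginx config).
firewall-cmd --permanent --add-port=5000/tcp
firewall-cmd --reload

# nodejs and npm live in EPEL on CentOS 7, so enable it first.
yum install -y epel-release
yum install -y git nodejs npm

# Clone the chat application (hypothetical URL and path).
git clone https://github.com/example/chatty.git /opt/chatty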
Configuring the application
All the chat application needed were three variables: DB_USER, DB_PASS, and DB_HOST. The DB_USER variable is hard-coded.
The DB_PASS variable's value comes from an Ansible vault that lives on my laptop: I stored the database password in a file and encrypted it with Ansible Vault. Before running ansible-playbook, I have to decrypt the vault file. The playbook responsible for configuring the app servers then copies the file containing the database password to the app servers, and a shell script uses the copied file to set the DB_PASS variable's value in the app's config.
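In outline, the vault workflow looks like this (the file names are hypothetical):

# On my laptop: keep the password encrypted at rest, decrypt it only to deploy.
ansible-vault encrypt db_password.txt
ansible-vault decrypt db_password.txt
ansible-playbook app-servers.yml    # copies the decrypted file to the app servers

# On the app server: the shell script reads the copied file to set DB_PASS.
DB_PASS=$(cat /opt/chatty/db_password.txt)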
The DB_HOST variable is supposed to contain the db01 machine's public IP address (I tested with the public IP, but a private IP should work too if the app and DB servers are in the same region/datacenter). I added another local_file resource so that, after Terraform builds the db01 server, its public IP address gets written to a file. The Ansible playbook copies this file to the app servers and uses it to set DB_HOST in the app's config file.
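On the app server side, setting DB_HOST amounts to something like this (the paths and config format are my assumptions):

# Read the IP address Terraform wrote out, then substitute it into the config.
DB_HOST=$(cat /opt/chatty/db01_ip.txt)
sed -i "s/^DB_HOST=.*/DB_HOST=${DB_HOST}/" /opt/chatty/.env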
Then I used PM2 to run the chat application. PM2 is a reliable way to keep a Node.js app running in a production environment.
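The PM2 setup is standard (the entry point file name here is a guess):

pm2 start server.js --name chatty   # run the app under PM2
pm2 startup                         # print the command that installs a boot-time init script
pm2 save                            # persist the current process list across reboots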
Configuring the load balancer
I created a separate server that acts as a load balancer using Nginx. I wanted the /etc/nginx/nginx.conf to look like this:
events {
    worker_connections 1024;
}

http {
    upstream backend {
        server app01:5000;
        server app02:5000;
        server app03:5000;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
        }
    }
}
This configuration makes Nginx work as a reverse proxy. Whenever a request comes in on port 80, Nginx passes it to http://backend, the group of app servers named "backend" defined by the upstream block.
By default, Nginx uses the round-robin algorithm to handle requests.
To make this configuration work, the private IP addresses of the app servers need to be added to the /etc/hosts file; a public IP address won't work. The servers also need to be in the same region or datacenter.
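For example, the playbook can append entries like these to /etc/hosts (the private IPs here are made up):

cat >> /etc/hosts <<'EOF'
10.0.0.11 app01
10.0.0.12 app02
10.0.0.13 app03
EOF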
CentOS 7 ships with SELinux enabled, so I had to run the command below to allow Nginx to make outbound network connections to the app servers.
setsebool -P httpd_can_network_connect 1
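The -P flag makes the change persist across reboots. You can verify it took effect with getsebool:

getsebool httpd_can_network_connect   # should print: httpd_can_network_connect --> on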
Conclusion
With the configuration described in this devlog, I managed to set everything up and get the architecture working as I expected. You can find all the code at the link below:
https://github.com/Abu-Zakaria/devops-practice-chatty-infra/tree/v0.1.0
My next goal is to containerize the chat application, then use Kubernetes to build a cluster with the app servers as nodes managed by the control plane.
Thanks for reading my devlog. Subscribe to my substack publication to stay tuned for future updates.