From Localhost To Production — Best Practices for Software Development and Deployment
I remember the joy when I first deployed my web application using cPanel FTP. I right-clicked on my folder, compressed it to a zip, uploaded it to cPanel, uncompressed everything, and set up the credentials: no .env file, everything hardcoded in the config file.
It took me multiple years of learning and experience to be able to deploy software correctly. I will write a summary of how to do it: one, for my personal reference, and two, so other people can learn from it as well and don’t have to fall into the pit of having their software hacked.
Assumptions
This article assumes that we will be deploying these software components:
- Server (VM)
- Database
- Backend API
- Frontend
- Nginx
- SSL
We won’t go deep into horizontal scaling or multi-service architecture, as most of the time you won’t need them.
We assume that you have experience interacting with a server, writing code, and using Linux.
Setting Up The Server
We will go with the traditional route of using a virtual machine / virtual private server, instead of a ready-to-use system like AWS App Runner or Google App Engine.
You will have to pick an OS. Most providers support Windows, but unless you are deploying Microsoft-based software like .NET, it’s always a good idea to use Linux.
Go ahead and set up your server using DigitalOcean, Google Compute Engine, or any other provider.
Securing The Server
The first thing you need to do once your server is up and running is to secure it.
The main gate of your server is most likely an SSH server, so we will secure it first.
- Create an SSH key if you don’t have one already
- Create a new user with a strong password, and use your SSH key for this user
- Add the user to sudoers
- Configure SSH to only allow this user to log in
- Configure SSH to prevent password login
- Configure SSH to disallow root login
The > prefix means you write the line as content of the file; it is not an actual character that you type.
# Run on your local machine
ssh-keygen -t rsa -b 4096
# Connect to your server as root, then create the new user
adduser new_username
mkdir /home/new_username/.ssh
chmod 700 /home/new_username/.ssh
# Paste the contents of your public key (id_rsa.pub) into this file
nano /home/new_username/.ssh/authorized_keys
chmod 600 /home/new_username/.ssh/authorized_keys
chown -R new_username:new_username /home/new_username/.ssh
usermod -aG sudo new_username
nano /etc/ssh/sshd_config
> AllowUsers new_username
> PasswordAuthentication no
> PermitRootLogin no
# Restart SSH to apply the changes; keep your current session open in case you lock yourself out
systemctl restart ssh
Then we need to configure the firewall to shut everything down.
- Enable the firewall
- Deny all incoming ports except SSH
# firewall: allow SSH before enabling so you don't lock yourself out
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw enable
We will whitelist our other ports later.
Next, use the OS package manager (apt, yum, etc.) to update all existing software.
apt update
apt upgrade
Setting Up Database
For our database server, we’ll be using PostgreSQL, a robust and feature-rich open-source database management system. MySQL is a good alternative, but I kept forgetting how to set up Postgres properly, so here I am writing about it.
Installing PostgreSQL on our Ubuntu server is a straightforward process. First, we’ll update the package index and install the PostgreSQL package:
sudo apt update
sudo apt install postgresql
After the installation, PostgreSQL automatically creates a default database cluster. However, we’ll create a new cluster with our preferred settings to ensure optimal performance and configuration.
Switch to the PostgreSQL user and initialize a new cluster with the desired locale and encoding settings:
sudo su - postgres
# On Ubuntu, initdb lives in /usr/lib/postgresql/<version>/bin if it's not already on your PATH
initdb -D /path/to/data/directory --locale=en_US.UTF-8 --encoding=UTF8
Next, we’ll configure PostgreSQL by editing the postgresql.conf file located in the data directory we specified during cluster initialization. Here, we can adjust settings such as listen addresses, port numbers, maximum connections, shared buffers, and memory allocation for various operations.
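As an illustration, a first pass might only touch a handful of parameters; the values below are placeholders to be tuned for your workload and hardware, not recommendations:
nano /path/to/data/directory/postgresql.conf
> listen_addresses = 'localhost'
> port = 5432
> max_connections = 100
> shared_buffers = 256MB
> work_mem = 8MB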
For authentication, we’ll edit the pg_hba.conf file. During development, we can use peer or ident authentication for local connections and password authentication for remote ones. In a production environment, it's recommended to use scram-sha-256 (rather than the older MD5) or certificate-based authentication for enhanced security.
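For example, a minimal pg_hba.conf for development might look like this (the file lives in the data directory from earlier):
nano /path/to/data/directory/pg_hba.conf
> # local connections over the Unix socket use the OS user identity
> local   all   all                     peer
> # local TCP connections require a password
> host    all   all   127.0.0.1/32      scram-sha-256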
To create a dedicated PostgreSQL user and database for our application, we’ll execute the following commands as the postgres user:
createuser --pwprompt app_user
createdb --owner=app_user app_database
This creates a new user (app_user) and a dedicated database (app_database) owned by that user.
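To double-check that the new role works, you can connect to the new database as that user; depending on your pg_hba.conf settings you may be prompted for the password you just set:
psql -U app_user -d app_database -h localhost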
By default, PostgreSQL doesn’t allow remote connections. To enable remote access, we’ll update the listen_addresses parameter in postgresql.conf and add a line in pg_hba.conf to allow remote connections with appropriate authentication methods.
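For example, to accept connections to our application database only from a single application server (the address 203.0.113.10 below is a placeholder; use your app server's IP):
nano /path/to/data/directory/postgresql.conf
> listen_addresses = '*'
nano /path/to/data/directory/pg_hba.conf
> host   app_database   app_user   203.0.113.10/32   scram-sha-256
Restart PostgreSQL after changing these files so the new settings take effect.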
Securing the PostgreSQL server is crucial. We’ll disable password login for the postgres superuser, use strong passwords for database users, consider SSL/TLS encryption for connections, and regularly update PostgreSQL to the latest version for security patches.
Regular backups are essential for data integrity and disaster recovery. We’ll set up backup procedures using tools like pg_dump or pg_basebackup, storing backups in a secure off-site or cloud location. Testing backup and restore processes regularly is also a best practice.
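A minimal sketch of a nightly backup is a cron entry that dumps the application database to a dated, compressed file; the schedule, path, and retention are up to you, and it assumes the cron user can authenticate (for example via a ~/.pgpass entry):
# crontab -e, run every day at 02:00
0 2 * * * pg_dump -U app_user app_database | gzip > /var/backups/app_database_$(date +\%F).sql.gz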
Performance monitoring and tuning are ongoing tasks. We’ll use tools like pgBadger or pg_stat_statements to monitor PostgreSQL's performance, tune database settings based on workload and hardware resources, implement indexing strategies, and consider partitioning large tables for better management and performance.
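As an example, enabling pg_stat_statements takes one line in postgresql.conf, a PostgreSQL restart, and an extension in the application database:
nano /path/to/data/directory/postgresql.conf
> shared_preload_libraries = 'pg_stat_statements'
# after restarting PostgreSQL
sudo -u postgres psql -d app_database -c "CREATE EXTENSION pg_stat_statements;"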
With our PostgreSQL database server set up, configured, and secured according to best practices, we’re ready to connect our application and begin development and deployment processes.
Setting up your server timezone so that your server, database, and application all use the same timezone settings will save you headaches in the future. Use UTC if you are serving international customers, or set it to your local time if you are sure it’s only going to be used internally or within your region.
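On a systemd-based distribution like Ubuntu, timedatectl handles this:
# Check the current settings, then switch the server to UTC
timedatectl
sudo timedatectl set-timezone UTC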
Preparing Your Backend Service
The first thing you need to do is ensure that you don’t have any secret credentials in any of your version-controlled files. Utilize environment variables and make sure your backend service reads values from the environment instead of hardcoding them in your code.
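A quick, imperfect sanity check from the repository root; the patterns below are just examples and won't catch everything:
# Keep the .env file out of version control
echo ".env" >> .gitignore
# Scan tracked files for credentials that may already be committed
git grep -inE "password|secret|api[_-]?key"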
We won’t talk about how you version your code or how you get it onto the server. A quick tip on this: you can create an SSH key on your server and use it as a deploy key on GitHub.
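A rough sketch of that setup; the key file name is arbitrary, and the public key is pasted into your repository's Settings > Deploy keys page:
# On the server: generate a dedicated key pair
ssh-keygen -t ed25519 -f ~/.ssh/github_deploy_key
# Copy this output into GitHub as a deploy key
cat ~/.ssh/github_deploy_key.pub
# Verify that GitHub accepts the key
ssh -T -i ~/.ssh/github_deploy_key git@github.com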
Once you have securely moved all of your secret information into environment variables, it’s time to get the service up and running.
The Daemon
While you can just run npm start, it will run in the foreground, and once your SSH session ends, your backend service will die with it.
Systemd is the init system on most modern Linux distributions, and we can use it to ensure our backend service keeps running even after we close our SSH session, and to restart it when it fails.
First, we need to have a dedicated user to run our service. So create a new user and define a daemon configuration that uses this user.
sudo adduser backend-app-user
sudo nano /etc/systemd/system/backend-app.service
> [Unit]
> Description=Backend Application
> After=network.target
> [Service]
> User=backend-app-user
> Group=backend-app-user
> WorkingDirectory=/path/to/backend/app
> Environment=NODE_ENV=production
> Environment=PORT=3000
> ExecStart=/usr/bin/node /path/to/backend/app/app.js
> Restart=always
> RestartSec=10
> [Install]
> WantedBy=multi-user.target
# Reload systemd so it picks up the new unit, then start it and enable it on boot
sudo systemctl daemon-reload
sudo systemctl start backend-app
sudo systemctl status backend-app
sudo systemctl enable backend-app
Now that your backend service is up and running, it’s time to set up the frontend.
Deploying The Frontend
Most frontends are just static files; even if you use a framework like React, Vue, or Angular, in the end it gets compiled into static files.
We won’t talk about server-side frontends like Next; you’d be better off writing a fullstack application using Rails or Laravel. Trust me.
Because the frontend is just static files, we only need to make sure we can bring our build artifacts onto the server, for example by building locally and copying the output over as shown below.
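A minimal sketch, assuming your build tool outputs to a dist/ directory and that the Nginx document root we configure later is /path/to/frontend/dist:
# Run on your local machine from the frontend project root
npm run build
rsync -avz dist/ new_username@your_server_ip:/path/to/frontend/dist/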
Nginx
Nginx is fast, small, and easy to configure. So let’s use it.
First, let’s install Nginx on our server:
sudo apt update
sudo apt install nginx
After the installation, Nginx will start automatically, and you can verify its status with the following command:
sudo systemctl status nginx
Next, we’ll configure Nginx to serve our frontend application’s static files. Create a new configuration file (e.g., frontend.conf) in the /etc/nginx/conf.d/ directory:
sudo nano /etc/nginx/conf.d/frontend.conf
Paste the following configuration into the file, replacing /path/to/frontend/dist with the actual path to your frontend application's built or compiled static files:
server {
    listen 80;
    server_name your_domain.com www.your_domain.com;
    root /path/to/frontend/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
This configuration tells Nginx to listen on port 80 (the default HTTP port) for requests to your_domain.com and www.your_domain.com. It sets the document root to /path/to/frontend/dist, which is where your frontend application's static files are located.
The try_files directive ensures that Nginx will first try to serve the requested file or directory. If neither exists, it will serve the index.html file, enabling client-side routing for single-page applications.
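After saving the file, test the configuration and reload Nginx so it starts serving your files:
sudo nginx -t
sudo systemctl reload nginx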
But How Do I Call Backend From My Frontend?
We managed to run our backend service, but it is only listening on a local port. We want a reverse proxy that acts as a gate, so the flow becomes FE -> Nginx Proxy -> Backend.
Modify our Nginx site configuration:
server {
    listen 80;
    server_name your_domain.com www.your_domain.com;
    root /path/to/frontend/dist;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location /api/ {
        proxy_pass http://localhost:3000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
Now your_domain.com/api becomes the base endpoint for your API. Because proxy_pass ends with a trailing slash, the /api/ prefix is stripped before the request reaches your backend, so /api/users arrives as /users. After reloading Nginx, you can verify it with a quick request, as shown below.
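A simple check from your local machine; replace /api/some-route with a route your backend actually exposes:
curl -i http://your_domain.com/api/some-route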
Securing Nginx with SSL/TLS
It’s 2024; if you try to open http://your_domain.com, the browser will shame you publicly by saying your website is dangerous/insecure/badly written in React.
Before we can obtain our SSL certificate, we need a domain name that points to our server. Open your domain provider's dashboard and set up an A record that points to your server's IP.
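You can confirm the record has propagated before moving on; the command should print your server's public IP:
dig +short your_domain.com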
Being in 2024 also means we have Let’s Encrypt to help us obtain an SSL certificate. Before running certbot, open ports 80 and 443 on your firewall, since Let’s Encrypt validates your domain over plain HTTP. Then install and run certbot.
# Open HTTP and HTTPS; the validation request has to reach Nginx
sudo ufw allow 'Nginx Full'
sudo apt update
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx
Certbot will automatically configure everything for you. To make sure everything is good, test your Nginx configuration and reload it.
sudo nginx -t
sudo systemctl reload nginx
Let’s Encrypt certificates are only valid for 90 days. The certbot package normally installs a systemd timer or cron job that renews them automatically, but it doesn’t hurt to run sudo certbot renew yourself and confirm the automation is in place.
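Two quick ways to check that renewal will actually happen when the time comes:
# Confirm the renewal timer is scheduled
systemctl list-timers | grep certbot
# Simulate a renewal without touching the real certificate
sudo certbot renew --dry-run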
We already opened ports 80 and 443 when we obtained the certificate, so you’re all set. You probably want to do more future-proofing like setting up backups, monitoring, compression, and caching. But for now, enjoy your first deployment 🎉