As a software engineer, you have to give serious consideration to your development and deployment workflow in order to ensure coordinated development, seamless integration and effortless deployment. To automate these processes efficiently, the following concepts were introduced:

  • Continuous Integration
  • Continuous Deployment/ Delivery

If this all seems foreign to you at this point, don't be intimidated by the terminology. In this article, we focus on explaining these concepts by following a practical approach.

We will be setting up an auto-deploy pipeline for our website that receives code via push and does the following:

  • it will accept the push, and
  • test whether the resulting site works before accepting it, i.e. it will run quick integrity tests and only allow the push to go through when these pass.


First, let us take a look at the meaning of these concepts and shed some light on their relevance.

What is Continuous Integration?

Continuous Integration (CI) is the process of automating the build and testing of code every time a team member commits changes to version control. — Visual Studio Team

This describes a development workflow that allows a team of developers to build software collaboratively, while automatically testing each change pushed to the codebase, which is managed by a Version Control System (VCS) such as Git.

This is very handy when working with large teams, and it facilitates working remotely. It also enables developers to push new features and updates to the production build.

What is Continuous Delivery & Deployment?

Continuous delivery is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring business applications and services function as expected through rigorous automated testing.

Continuous deployment is the next step of continuous delivery: Every change that passes the automated tests is deployed to production automatically. –The Puppet Blog

Simply put, code is rigorously and automatically tested, and is only deployed when it passes the tests. Hence, the production build (the live deployment) is hardly ever compromised.

On the other hand, running a server without continuous integration and continuous deployment is all fun and games until your users are left without a live production site as your engineers rush to pull a backup image. The same goes for taking your live production site down into maintenance mode just to add a plugin, modify the theme or change a single file.

The better approach:

The Loop of Sanity (DevOps)


In order to follow along through this guide, you should have the following:

  • Basic knowledge of Linux and Linux commands
  • A server, plus the desire to automate the process of pushing changes from GitHub to it, with Travis CI sitting somewhere in between.


For this project we shall be making use of the following:

About Travis CI:

Travis CI, the main tool we are making use of, really does it all. It integrates, tests and deploys by itself, does so for free (i.e. for open-source and public repositories only), and integrates perfectly with GitHub, our versioning platform.

Travis CI is a hosted, distributed continuous integration service used to build and test software projects hosted at GitHub.


We need to get our VPS server up and running, so we log in to it using SSH.
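For example, assuming a fresh VPS (the IP address below is a placeholder for your own):

```
# Log in to the VPS as root (replace with your server's IP address)
ssh root@your_server_ip
```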

Then, once logged in, we create a normal user as root:
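A typical way to do this on Ubuntu (the username "deploy" is a placeholder):

```
# As root: create a regular user and give it sudo privileges
adduser deploy
usermod -aG sudo deploy
```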

Then we log in to our new account directly:
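Using the placeholder username from above:

```
# Switch to the new account without leaving the session
su - deploy
```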

or you can log out and log back in using the following command:
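Again with placeholder credentials:

```
# Leave the root session, then SSH back in as the new user
exit
ssh deploy@your_server_ip
```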

Once that's done, we set up our web server (Nginx), our database (MySQL) and phpMyAdmin.
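On Ubuntu, the installation might look like this (using the stock Ubuntu package names):

```
# Install the web server, database server and phpMyAdmin
sudo apt-get update
sudo apt-get install nginx mysql-server phpmyadmin
```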

Setting up Nginx

After installing PHP and its dependencies on the server, SSL needs to be set up for Nginx to ensure web security. However, before setting up SSL, we need to make sure HNGFun is accessible via its domain address, hence the need to create an Nginx configuration for HNGFun.

The Nginx configuration for HNGFun can be found at /etc/nginx/sites-available, with the contents below.
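A minimal sketch of such a server block (the domain name, web root and PHP socket path are assumptions; the real file will differ):

```nginx
server {
    listen 80;
    server_name hng.fun www.hng.fun;   # placeholder domain

    root /var/www/hngfun;              # placeholder web root
    index index.php index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.0-fpm.sock;  # adjust to your PHP version
    }
}
```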

Save the file and quit your editor. Then, verify the syntax of your configuration edits.
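Nginx has a built-in syntax check for this:

```
# Check the configuration for syntax errors
sudo nginx -t
```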

If you get any errors, reopen the file and check for typos, then test it again.

Once your configuration’s syntax is correct, reload Nginx to load the new configuration.
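On a systemd-based server:

```
# Reload Nginx so the new configuration takes effect
sudo systemctl reload nginx
```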

After a successful restart, it's time to secure Nginx. Certbot comes in handy here; the following is required:

A fully registered domain name, with a DNS record pointing at your server.

Step 1 — Installing Certbot

The first step to using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on the server.

Certbot is in very active development, so the Certbot packages provided by Ubuntu tend to be outdated. However, the Certbot developers maintain an Ubuntu software repository with up-to-date versions, so we'll use that repository instead.

First, add the repository.
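The repository in question is the Certbot PPA:

```
# Add the Certbot maintainers' repository
sudo add-apt-repository ppa:certbot/certbot
```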

You’ll need to press ENTER to accept. Then, update the package list to pick up the new repository’s package information.
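```
# Refresh the package list so apt sees the new repository
sudo apt-get update
```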

And finally, install Certbot’s Nginx package with apt-get.
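At the time of writing, that package is python-certbot-nginx:

```
# Install Certbot's Nginx plugin package
sudo apt-get install python-certbot-nginx
```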

Certbot is now ready to use, but in order for it to configure SSL for Nginx, we need to verify some of Nginx’s configuration.

Step 2 — Allowing HTTPS Through the Firewall

If you have the ufw firewall enabled, as recommended by the prerequisite guides, you’ll need to adjust the settings to allow for HTTPS traffic. Luckily, Nginx registers a few profiles with ufw upon installation.

You can see the current setting by typing:
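```
# Show the current firewall profile settings
sudo ufw status
```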

It will probably look like this, meaning that only HTTP traffic is allowed to the web server:

To additionally let in HTTPS traffic, we can allow the Nginx Full profile and then delete the redundant Nginx HTTP profile allowance:
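```
# Allow both HTTP and HTTPS, then drop the now-redundant HTTP-only profile
sudo ufw allow 'Nginx Full'
sudo ufw delete allow 'Nginx HTTP'
```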

Your status should look like this now:

We’re now ready to run Certbot and fetch our certificates.

Step 3 — Obtaining an SSL Certificate

Certbot provides a variety of ways to obtain SSL certificates, through various plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary:
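The certificate request then looks like this (the domain names are placeholders for your own):

```
# Request a certificate and let the plugin rewrite the Nginx config as needed
sudo certbot --nginx -d example.com -d www.example.com
```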

This runs certbot with the --nginx plugin, using -d to specify the names we'd like the certificate to be valid for.

If this is your first time running certbot, you will be prompted to enter an email address and agree to the terms of service. After doing so, certbot will communicate with the Let’s Encrypt server, then run a challenge to verify that you control the domain you’re requesting a certificate for.

If that’s successful, certbot will ask how you’d like to configure your HTTPS settings.

Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored:

Your certificates are downloaded, installed, and loaded. Try reloading your website using https:// and notice your browser’s security indicator. It should indicate that the site is properly secured, usually with a green lock icon. If you test your server using the SSL Labs Server Test, it will get an A grade.

Let’s finish by testing the renewal process.

Step 4 — Verifying Certbot Auto-Renewal

Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this for us by running ‘certbot renew’ twice a day via a systemd timer. On non-systemd distributions, this functionality is provided by a script placed in /etc/cron.d. This task runs twice a day and will renew any certificate that’s within thirty days of expiration.

To test the renewal process, you can do a dry run with certbot:
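```
# Simulate renewal without touching the real certificates
sudo certbot renew --dry-run
```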

If you see no errors, you’re all set. When necessary, Certbot will renew your certificates and reload Nginx to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.

Backing up your Database and Repos

We will be keeping two days' worth of backups of both our database and our repos.

As a superuser, run:
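This most likely means opening the root crontab; the script path and schedule below are assumptions:

```
# Open the root crontab to schedule the backup script
sudo crontab -e
# then add an entry such as (path and time are placeholders):
# 0 2 * * * /usr/local/bin/backup.sh
```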

Save the file, then create the backup script below:
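A minimal sketch of such a backup script; the database name, web root and backup location are all assumptions, and the two-day retention comes from alternating between two slots:

```shell
#!/bin/bash
# Rotate between two backup slots so we always keep the last two days.
set -e
BACKUP_ROOT="${BACKUP_ROOT:-./backups}"      # e.g. /var/backups/hngfun on the server
SLOT=$(( $(date +%s) / 86400 % 2 ))          # alternates 0/1 from one day to the next
mkdir -p "$BACKUP_ROOT/db" "$BACKUP_ROOT/repo"
# Dump the database into today's slot (credentials are placeholders):
# mysqldump -u backup_user -p'secret' hngfun > "$BACKUP_ROOT/db/hngfun-$SLOT.sql"
# Archive the repository working copy into the same slot:
# tar -czf "$BACKUP_ROOT/repo/hngfun-$SLOT.tar.gz" /var/www/hngfun
echo "backup slot $SLOT ready in $BACKUP_ROOT"
```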

As the superuser, run the commands below to create the directories that will store the generated backups:
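A sketch of those commands (the backup location is an assumption):

```shell
# Create the directories that will hold the generated backups
BACKUP_ROOT="${BACKUP_ROOT:-./backups}"   # e.g. /var/backups/hngfun on the server
mkdir -p "$BACKUP_ROOT/db" "$BACKUP_ROOT/repo"
```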


Travis CI comes in two halves: travis-ci.org is for free public and open-source repositories on GitHub, and travis-ci.com is for private repositories on GitHub.

For what we are doing, we use a public repository, owing to the number of people that will commit to it. So we head over to travis-ci.org and link it to our GitHub account.

You will then be directed to sign in with your GitHub account. Once you have signed in and allowed the application access, you will be redirected to the Travis CI dashboard, which should look like this once you have added a few repositories to it:

Travis CI will be responsible for testing our PHP application, and it will build and test every branch and commit unless you tell it otherwise. If our tests pass, we should get an email notifying us of the successful build.

You can view the history of our builds here.

Here is a sample of our .travis.yml file
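A minimal sketch of what that file might look like for this PHP project (the script names and PHP version are assumptions):

```yaml
language: php
php:
  - '7.0'
before_script:
  - bash build/create_config.sh     # recreate config.php on the CI server
script:
  - phpunit --configuration phpunit.xml
after_success:
  - chmod +x ./deploy.sh            # grant the deploy script execution rights
  - ./deploy.sh                     # run it to push the build to the server
notifications:
  email: true
```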

So, when a build is successful, it runs a deploy script that is referenced in the .travis.yml config file. The first command grants execution privileges to the script and the second runs it.

Here is our push script and what it entails.
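A hypothetical sketch of what such a push script could contain; the identity, remote URL and branch are all assumptions, not the article's actual script:

```
#!/bin/bash
# Sketch: push the tested commit from the Travis CI build to the server.
set -e
git config user.name "Travis CI"
git config user.email "builds@travis-ci.org"
# Pushing to the server's repository triggers the webhook-driven update
# of the live codebase (remote URL and branch are placeholders).
git push "ssh://deploy@your_server_ip/var/www/hngfun.git" master
```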


The practices of DevOps include continuously testing code at each commit, to ensure that recent changes do not break the build. The testing framework used for this application is PHPUnit.

The first concern that had to be sorted out was figuring out what to test to ensure that the application does not break.

After careful observation, the most common thing that broke the application was the constant modification of the config.php file, which held the database connection parameters, as well as the db.php file, which used the parameters contained in config.php to create a database connection object.

The approach taken to write the test was this:

At each commit, we check whether the db.php file still exists. This is important because, without this file, major parts of the application would not be able to connect to the database.

If the db.php file is found, we then proceed to check whether a config.php file located in the directory immediately outside the root directory is referenced, or "require"d, within it. This is because our observation revealed that contributors try modifying the location of the config.php file when they encounter difficulties connecting to their databases on their local development machines. To prevent the test from being affected by any wrong parameters a contributor may have introduced before pushing his or her code to the remote repository, a Bash script was written. The function of this script is to create a config.php file with the correct database configuration parameters, in the correct directory, on the Travis CI server. Here is what it contains:
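A minimal sketch of that script; the parameter values are assumptions for the CI environment, and while on Travis the file would be written one directory above the project root, it is written to the working directory here:

```shell
#!/bin/bash
# Recreate config.php with known-good database parameters for the CI server.
CONFIG_PATH="${CONFIG_PATH:-./config.php}"   # on Travis: ../config.php
cat > "$CONFIG_PATH" <<'EOF'
<?php
// Database connection parameters consumed by db.php (CI-only placeholders)
$servername = "localhost";
$username   = "root";
$password   = "";
$dbname     = "hngfun";
EOF
echo "wrote $CONFIG_PATH"
```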

The PHP test was written in the DatabaseTest.php file in the tests folder and contains:

Also part of the testing process is the phpunit.xml file which contains some configurations, an important part being:
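The relevant fragment looks roughly like this (the suite name is an assumption):

```xml
<testsuites>
    <testsuite name="Application Test Suite">
        <directory>./tests/</directory>
    </testsuite>
</testsuites>
```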

which specifies the directory in which PHPUnit should look for the test files.

When Travis CI runs the tests and they pass, it proceeds to begin the auto-deployment process. If the test fails, the auto-deployment process is halted.


We have to get our Git and our keys ready for GitHub.

First things first, we have to check for Git and install it:
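On Ubuntu:

```
# Install Git from the Ubuntu repositories
sudo apt-get update
sudo apt-get install git
```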

Then, to check that Git has been installed properly:
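```shell
# Confirm the installation by printing the installed version
git --version
```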

Then we have to configure git for the automated user:
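For example (the name and email are placeholders for your automated user):

```shell
# Identify the automated user to Git
git config --global user.name "HNG Deploy Bot"
git config --global user.email "deploy@example.com"
```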

Then we create an SSH key for the current user:
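A typical key generation looks like this (the comment string is a placeholder; accept the default file path when prompted):

```
# Generate an SSH key pair, then print the public half so it can be
# added to GitHub as a deploy key
ssh-keygen -t rsa -b 4096 -C "deploy@your_server"
cat ~/.ssh/id_rsa.pub
```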

Then go to GitHub and get the repository ready:

We add our keys to the Deploy keys section of our repository settings:

  • Add the SSH key to the repo
  • Create a new key and name it appropriately
  • Paste the deploy key you generated on the server and save

Adding these keys allows our server to talk to GitHub without a password.

Then back to the server and ssh in, and cd into our folder:
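With placeholder credentials and path:

```
# SSH back into the server, then change into the site's folder
ssh deploy@your_server_ip
cd /var/www/hngfun
```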

Then we set up a webhook. A webhook is a URL that GitHub will hit whenever the repository is updated.

So we will create a deployment script that GitHub will hit, causing the script to run. When it runs, this script pulls from Git and updates the codebase on the server.

Creating the deployment script:

Then paste the code below
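A minimal sketch of such a script, assuming the file is called deploy.php and lives in the web root (both assumptions; it is written to the working directory here). It simply pulls the latest code when GitHub hits its URL:

```shell
# Create deploy.php (on the server this would live under e.g. /var/www/hngfun)
cat > ./deploy.php <<'EOF'
<?php
// Hit by the GitHub webhook: pull the latest code and echo the result
echo shell_exec('git pull origin master 2>&1');
EOF
```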

Setting up the webhook on GitHub:

Now, in your repo settings, we will set up the webhook, which will automatically call the deployment URL, thus triggering a git pull on the server from the origin.

  • Go to your repository's settings and open the Webhooks section
  • Click Add webhook to add a service hook
  • Enter the URL of your deployment script as the Payload URL
  • Leave the rest of the options at their defaults, and make sure 'Active' is checked
  • Click Add webhook


As you can see, implementing a CI/CD structure and starting with a DevOps mindset is truly not a herculean task. 😃

It does not take forever to set up, changes are reversible and reflect immediately on the live server.

Have you ever imagined a CI/CD setup for working with databases?

We look forward to seeing that happen.

Lastly, huge thanks to Wisdom Anthony, Gabriel Umoh, Justine Philip, Akinduko Olugbenga and Chigozie Ekwonu.

For the assists and saves. 😆
