It seems that many people talk about setting up a continuous deployment system but few actually take the plunge and make it a reality. I’ve recently set up continuous deployment for an API project at work and thought I would blog about how I got it all to work.
This project is relatively new and we’d set up the following process for it:
- We’re using Git and GitHub for the project. Every developer working on it uses a fork of the main repository.
- When some work is ready to be merged we send a pull request. Another developer reviews the code before it is merged.
- Once code is merged into the main repository GitHub notifies our Jenkins server, which then runs a build on the project. This performs many tasks including lint checking changed files, running our unit and functional test suite, checking that our database is in sync with our entities and performing various metrics against our code base.
What we needed was for code to be automatically deployed at the end of a build, but only if the build succeeds: all tests pass, there are no syntax errors and the entities are in sync with the database. Deployments should also be quick to roll back should anything go wrong with code in production.
To automate the deployment we chose Capifony, a deployment solution for Symfony 1 and 2 applications consisting of a series of deployment recipes built on top of Capistrano. This does entail installing Ruby and RubyGems on the system from which you want to run the deployment, but the advantages it offers outweigh any hassle this may entail. Some of the advantages of using Capifony are:
- Once your deployment script is defined, running a deployment is as simple as changing into the root directory of your project and typing ‘cap deploy’.
- Capifony stores multiple releases of your project on your server (you choose how many; the default is 3). A symlink called current points to the most recent release. This makes rolling back in the event of problems a cinch: to go back to the previous deployment, simply type ‘cap deploy:rollback’ from the root directory of your project.
- Capifony performs deployments by logging into the production server over SSH. It uses an SCM such as Git to pull code down to the server.
- Since Capifony connects to the server using SSH it can run any bash commands or symfony console commands that you wish during the deployment.
- The deployment is run as a ‘transaction’: if any part of it fails, the entire deployment is rolled back and aborted.
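The release-and-symlink layout behind those cheap rollbacks can be illustrated with plain shell commands. The paths below are illustrative, not Capifony’s exact defaults; on a real server everything lives under whatever you set as the deploy path:

```shell
# Illustration of the layout Capifony maintains on the server:
# timestamped release directories, with 'current' pointing at the live one.
base=$(mktemp -d)
mkdir -p "$base/releases/20130101120000" "$base/releases/20130102120000"

# Deploy: repoint the symlink at the new release in one step
ln -sfn "$base/releases/20130102120000" "$base/current"
readlink "$base/current"   # -> .../releases/20130102120000

# Rollback: repoint it at the previous release
ln -sfn "$base/releases/20130101120000" "$base/current"
readlink "$base/current"   # -> .../releases/20130101120000
```

Because only the symlink changes, both deploy and rollback are near-instant; Apache simply serves whatever ‘current’ points at.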
Capifony adds a number of default tasks that it performs with every deployment. These can easily be extended or overridden in your own deploy.rb script. Capifony can also run any symfony console command on the live server as part of the deployment by using the keyword ‘symfony’. For example, to update the Symfony2 translation file you would simply need to add ‘run "symfony:translation:update"’ to your deploy.rb file.
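As a rough illustration of what such an extension looks like, here is a hypothetical extra task wired into deploy.rb. The task and hook names are my own; `latest_release` and `php_bin` are variables Capistrano and Capifony provide:

```ruby
# Hypothetical excerpt from deploy.rb: hook an extra Symfony console
# command into every deployment. Names here are illustrative.
after "deploy:finalize_update", "app:update_translations"

namespace :app do
  desc "Update the Symfony2 translation files on the server"
  task :update_translations do
    # Run the console command inside the freshly checked-out release
    run "cd #{latest_release} && #{php_bin} app/console translation:update"
  end
end
```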
Once you have installed Capifony you need to cd to the root directory of your project and run ‘capifony .’ in the terminal. This will automatically detect whether you’re working with a Symfony 1 or 2 application and will create a Capfile in the root directory, as well as a deploy.rb file in the config directory. These need to be added to source control. Once you’ve set up a basic deployment script, run ‘cap deploy:setup’ to set up the deployment on the remote server.
A sample deployment script
Below is a sample deploy.rb script that I’m using to deploy a Symfony2 app:
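The script itself isn’t reproduced here, so the following is a sketch reconstructed from the notes below. The application name, server address, repository URL and key file name are all placeholders, and the real script may differ in detail:

```ruby
# Sketch of a Capifony deploy.rb (placeholders in CAPS)
set :application, "APP NAME"
set :domain,      "SERVER ADDRESS"
set :deploy_to,   "/var/www/#{application}"
set :user,        "deploy"

set :repository,  "git@github.com:ORG/REPO.git"
set :scm,         :git
# Keep a cached clone on the server so only new commits are fetched
set :deploy_via,  :remote_cache
# Our Jenkins build tags each successful build; we deploy the latest tag
# (the exact tag-lookup code is omitted from this sketch)
set :branch,      "TAG OR BRANCH"

role :web, domain
role :app, domain, :primary => true

# Files and directories shared between releases
set :shared_files,    ["app/config/parameters.yml"]
set :shared_children, ["app/logs", "vendor", "web/uploads"]

# SSH key the deploy user authenticates with
ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "KEY FILE NAME")]

set :keep_releases, 3

# Run any new Doctrine migrations on every deployment, without prompting
set :interactive_mode, false
before "symfony:cache:warmup", "symfony:doctrine:migrations:migrate"

# Capifony leaves restart empty; we restart Apache to clear the APC cache.
# The deploy user has sudo rights for this one command only.
namespace :deploy do
  task :restart do
    run "#{try_sudo} /etc/init.d/apache2 restart"
  end
end
```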
A few notes on this:
- Since Capifony performs actions on the live server it needs a user to log in as. I created a user on the server called deploy with the primary group www-pub and the secondary group www-data. I then added www-pub as a secondary group to the www-data user that Apache runs as. This solved any permission issues that I had.
- Setting the option ‘set :deploy_via, :remote_cache’ tells Capifony to keep a local, cached copy of the repository on the server. This speeds up deployments since only changes to the code base need to be fetched. We needed to install Git on the server for this to work, and to add a deployment key to our GitHub repo for the deploy user to use when fetching code.
- Capifony allows you to share files and directories between deployments. This deployment script shares the parameters.yml file and the log, vendors and uploads directories.
- The line ‘ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "KEY FILE NAME")]’ tells Capifony to use an SSH key file with the supplied name. This allows us to register a key on the server for the deploy user.
- Since the deploy user needs to access the GitHub repository we needed to add a deployment key for the deploy user to use on GitHub.
- The line ‘before "symfony:cache:warmup", "symfony:doctrine:migrations:migrate"’ automatically runs any new Doctrine migrations with every deployment. The line ‘set :interactive_mode, false’ ensures that Capifony won’t ask for confirmation when running this command.
- The final part of the script overrides the restart task. This is left empty in Capifony (although I think it’s a standard task in Capistrano deployments). I needed to restart Apache with every deployment to make sure that the APC cache was cleared, since a lot of cached data is stored in APC. The deploy user was given sudo permission only to restart the Apache process.
Getting it working with Jenkins
We elected to have the Capifony deployment set up as a separate job on our Jenkins server rather than having it run as a post build task in our main Jenkins job for the project. The deployment job simply pulls in the latest version of the code from our GitHub repository and performs ‘cap deploy’ as a shell command for its single build action. This job is triggered as a downstream job once the main build for our project successfully completes. We chose this configuration for a couple of reasons:
- This setup gives us a little more freedom to run the deployment job on its own if we ever need to.
- If a build or a deployment fails it’s a little easier for us to see instantly where things have gone wrong.
We needed to add an SSH key on the Jenkins server for Jenkins to use when connecting to the live server, and to add this to authorized_keys for the deploy user on the server.

This setup did give us one other problem, though. By default Capifony will deploy the latest commit in a Git repository. If a new commit is made while Jenkins is running a build then Capifony will deploy that, meaning that we could end up deploying code that has not been through our build process. Our solution was to have our main Jenkins build tag the repository and push this tag back to GitHub using the Git Publisher post-build action. The tag is given the same number as the Jenkins build number for easy identification. As you can see in the deployment script above, our Capifony deployment looks for the latest tag in GitHub and deploys that to the live environment.

To make sure there are no problems with this process we configured Jenkins to build this project sequentially, never in parallel. This ensures that the latest tag in Git is the most recent tested version of the code. Naturally, for the deployment to work Capifony needs to be installed on the Jenkins server.
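One way to express “deploy the latest tag” in deploy.rb is to ask the remote repository for its tags at deploy time. The sketch below is an assumption about how this could look, not the script we actually used; it relies on our tags being named after the numeric Jenkins build number:

```ruby
# Hypothetical: resolve the most recently created tag on the remote
# repository and deploy that instead of a branch head.
set(:branch) do
  tag = `git ls-remote --tags #{repository}`.
          lines.
          map { |l| l[%r{refs/tags/([^\s^]+)}, 1] }.
          compact.
          sort_by { |t| t[/\d+/].to_i }.   # tags carry the Jenkins build number
          last
  abort "No tags found in #{repository}" unless tag
  tag
end
```

Wrapping the lookup in a block means Capistrano only evaluates it when the deployment actually runs, so each ‘cap deploy’ picks up the newest tag at that moment.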
After setting all of this up I found that it wasn’t working. I could run ‘cap deploy’ from my development environment and it worked perfectly, deploying the latest tag from GitHub. When the Jenkins server ran it, it failed with a cryptic error about not being able to find the specified tag. I could see the tags being created in GitHub, though, and spent a couple of frustrating hours trying to work this out. Eventually I found the problem: the Jenkins Git plugin creates an internal tag every time a build is run. This tag exists only on the Jenkins server and is never pushed to GitHub. When the Capifony deployment ran on the Jenkins server it connected to the live server and tried to check out a tag that existed only on the Jenkins server, and the deployment failed. The solution was to go into the advanced Git configuration and make sure the skip internal tag option is checked. The image below shows the option to check for this.
Setting all of this up did take me quite a lot of time, the majority of which was simply down to my needing to learn how to configure and use Capifony. As a result we have a great continuous deployment setup that works seamlessly. I’d definitely use Capifony again and am already looking at how we can use it to deploy a couple of legacy Symfony 1 apps we have at work. If you have a Symfony app I’d strongly recommend using Capifony for deployments. If you have a Jenkins server too, take it one step further and set up continuous deployment.