Migration Journey From DIY to Services Infrastructure


I’ve been building and deploying Rails applications on home-grown “cloud” infrastructure for several years now – Xen servers with DRBD for VM failover, backup storage over iSCSI, automation with Puppet, our own hosted SCM servers (Subversion) and a host of related tools and technology. The DIY approach was fun and interesting from a learning perspective, but, as for most companies, it has become relatively error prone, time consuming and expensive to maintain. This blog post is about the choices I am making to move wholesale away from our own data center to managed infrastructure on AWS, while keeping as much automation and control as is reasonably practical.

Initial Goal:

Migrate an existing Rails 3.2 application from entirely self-hosted – including an in-house SCM repository, gem server and deployment to our own VMs – to hosted SCM and deployment on AWS, with maximum scalability at a minimal investment of time and money. Of note: the application currently runs on a single server under Apache/Passenger and has no caching, so along the way I will modify it to take advantage of the best caching strategies, with an eye to Rails 4 technologies and integration with AWS ElastiCache. Changes to the code will be made following TDD best practices.


AWS has a plethora of services, and I began by evaluating which would be most useful for deploying a globally available application, with an eye to what exists today and what might be coming. I am an automation junkie, so the first look was at CloudFormation, Elastic Beanstalk and OpsWorks. While all of these are seductive offerings, at this early stage I decided they were a bit too much magic for me, particularly Elastic Beanstalk – what about migrations, database backup scripts, etc.? In initial testing EB just seems to “handle it”, but I feel I have no control. CloudFormation seemed like overkill for a first app to be deployed to AWS’s Singapore DC. OpsWorks was also a little too basic at this stage (June 2013) for what I want to accomplish.


– Chef for Provisioning: With an eye to the future I decided to migrate the Puppet configuration to Chef, as Chef is the tool OpsWorks is built on. I figure that as AWS expands the capabilities of OpsWorks, I will likely be able to migrate these recipes there with minimal effort. We’ll see, but all else being equal this seems a reasonable choice.
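To make the Puppet-to-Chef migration concrete, here is a minimal sketch of what one of the app server recipes might look like. The cookbook name, package list, deploy user and vhost path are all assumptions for illustration, not the actual configuration:

```ruby
# cookbooks/app_server/recipes/default.rb -- hypothetical cookbook layout.
# Install the build dependencies a Passenger-based Rails stack needs.
%w[build-essential libxml2-dev libxslt1-dev].each do |pkg|
  package pkg
end

# Application deploy user (name is illustrative).
user 'deploy' do
  home     '/home/deploy'
  shell    '/bin/bash'
  supports manage_home: true
end

# Render the Apache vhost from a template shipped in the cookbook.
template '/etc/apache2/sites-available/myapp' do
  source   'vhost.erb'
  owner    'root'
  mode     '0644'
  notifies :reload, 'service[apache2]'
end

service 'apache2' do
  action [:enable, :start]
end
```

The same resource model (packages, users, templates, services) covers most of what the existing Puppet manifests do, which is what makes a mostly mechanical translation plausible.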

– GitHub for SCM: By now Git is pretty much the only sane choice for SCM in the Ruby/Rails world. It means some learning for me to get off SVN, but the community has clearly chosen Git and GitHub.

– Development: I have been using VirtualBox VMs on my laptop for development for a while now, but without a standard image. I use vim with several plugins to ease Rails development. What I want is a nearly identical environment for development, staging and production, so I plan to use the same Chef recipes to provision development VMs on the laptop as I use to provision EC2 instances for app servers.
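One common way to wire laptop VMs to the same cookbooks is Vagrant with its chef-solo provisioner. A sketch, where the box name, cookbook path and recipe name are assumptions:

```ruby
# Vagrantfile -- dev VM provisioned with the same Chef cookbooks as
# the EC2 app servers (box and cookbook names are illustrative).
Vagrant.configure("2") do |config|
  config.vm.box = "debian-wheezy-64"              # assumed local base box
  config.vm.network :forwarded_port, guest: 3000, host: 3000

  config.vm.provision :chef_solo do |chef|
    chef.cookbooks_path = "chef/cookbooks"        # assumed repo layout
    chef.add_recipe "app_server"                  # hypothetical cookbook
    chef.json = { "rails_env" => "development" }
  end
end
```

Because the Vagrantfile and the EC2 bootstrap both just run the cookbooks, drift between dev, staging and production is limited to the node attributes passed in.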

– AWS Services: This is a bit less straightforward. Ideally, I want this app – and any app I may develop in the future – to auto scale easily to reach a global audience in the happy event that it should attract such a following. That touches many AWS services and, as always, there is more than one way to do it. I chose to break the work into three distinct units based on the requirements and the learning curve:

Milestone 1: Integrate the Basics

– Storage: S3, using the Fog gem (https://github.com/fog/fog) in the application

– Database: MySQL on RDS

– App Servers: Custom AMI based on Debian Wheezy and pre-configured with a Chef recipe

– Load Balancer: AWS Elastic Load Balancer
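For the S3 piece, a sketch of an initializer that sets up a shared Fog connection for the app. The bucket name, constant names and environment variable names are placeholders, and real AWS credentials are required for this to connect:

```ruby
# config/initializers/fog.rb -- sketch of a shared Fog S3 connection
# (bucket and env-var names are assumptions, not the real config).
require 'fog'

FOG_STORAGE = Fog::Storage.new(
  provider:              'AWS',
  aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  region:                'ap-southeast-1'            # Singapore
)

# Bucket that will hold uploaded assets; created out-of-band or here.
ASSET_BUCKET = FOG_STORAGE.directories.get('myapp-assets')

# Elsewhere in the app, an upload then looks roughly like:
#   file = ASSET_BUCKET.files.create(
#     key:    'uploads/avatar.png',
#     body:   File.open(path),
#     public: true
#   )
#   file.public_url
```

Keeping the connection in one initializer means the model code only deals with `files.create` and URLs, which simplifies swapping regions or buckets later.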

Combining these units of work into one milestone seems to me to require the fewest changes to the application while offering healthy exposure to the AWS core services. It will require code changes to store assets on S3 and changes to session state storage to support load balancing, but otherwise it allows the majority of the time to be focused on the AWS components of the migration.
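On the session state change: any server-side session storage has to move out of per-instance memory or local files so that whichever instance the ELB picks can serve the request. One option is memcached-backed sessions via the dalli gem; a sketch, with the application constant and server address as placeholders:

```ruby
# config/initializers/session_store.rb -- sketch: shared session storage
# so any app server behind the ELB can handle any request.
# Uses the dalli gem's Rails session store; the address is a placeholder.
require 'action_dispatch/middleware/session/dalli_store'

MyApp::Application.config.session_store :dalli_store,
  memcache_server: ['cache.example.internal:11211'],
  expire_after:    20.minutes
```

The simpler alternative is Rails' default cookie store, which is already load-balancer-safe as long as every instance shares the same `secret_token` – the right choice depends on how much data lives in the session.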

Milestone 2: Optimization

This mainly concerns application performance and integration with AWS ElastiCache. At first glance this doesn’t seem like too big a deal, but with all the caching mechanisms available, the code changes required, and the performance testing needed to ensure the changes are effective, it seemed like a unit of work on its own.
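The wiring side of this is small compared to the code changes: ElastiCache (memcached engine) exposes an endpoint that Rails can use as its cache store through the dalli gem. A sketch, where the endpoint and namespace are placeholders:

```ruby
# config/environments/production.rb -- sketch of pointing the Rails
# cache at an ElastiCache memcached endpoint via the dalli gem
# (the endpoint below is a placeholder, not a real cluster).
MyApp::Application.configure do
  config.cache_store = :dalli_store,
    'myapp-cache.example.apse1.cache.amazonaws.com:11211',
    { namespace: 'myapp', compress: true, expires_in: 1.hour }
end
```

The real work in this milestone is deciding what to cache – page, action or fragment caching – and measuring whether each change actually helps, which is why it stands as its own unit.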

Milestone 3: Automated Auto Scaling and CDN

At this point, the application is running on core AWS services and is optimized for performance. This is where I plan to look at performance globally and how much automation I can bring to management of the application. The main components seem to be:

– Auto Scaling: I plan to implement CloudWatch alarms to have AWS manage the scaling of EC2 instances.

– CDN: To move our assets closer to the end user, we will implement AWS CloudFront.

– DNS: Last – and this is the piece I have read the least about at this point – is Route 53, AWS’s DNS service.
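On the Rails side, the CDN step is mostly a one-line configuration change once a CloudFront distribution exists with the app as its origin. A sketch, with a placeholder distribution domain:

```ruby
# config/environments/production.rb -- sketch: serve compiled assets
# through CloudFront (the domain below is a placeholder).
MyApp::Application.configure do
  # With the app as the distribution's origin, asset helpers only need
  # to emit CloudFront URLs; CloudFront pulls and caches on first request.
  config.action_controller.asset_host = 'https://d1234abcd.cloudfront.net'
end
```

After this, `image_tag` and the other asset helpers generate CloudFront URLs automatically, so no view code needs to change.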

Another point to note here is application monitoring. I plan to investigate a few gems for rolling my own exception notification, but also to consider a service like New Relic.
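As one example of the roll-your-own route, the exception_notification gem installs as Rack middleware and emails a stack trace on unhandled errors. A sketch, with placeholder addresses and application constant:

```ruby
# config/environments/production.rb -- sketch using the
# exception_notification gem's middleware (addresses are placeholders).
MyApp::Application.config.middleware.use ExceptionNotifier,
  email_prefix:         '[myapp] ',
  sender_address:       %("Notifier" <notifier@example.com>),
  exception_recipients: %w[ops@example.com]
```

This covers crashes but not performance trends, which is the gap a service like New Relic would fill.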

And Finally…

This is a first stab based on reading and talking briefly with some AWS people here in Singapore. I am open to, and hope to receive, questions, comments and suggestions as I move forward on this project. I have many years’ experience with networking, Linux, and Ruby and Rails, so I’d like to think I’m making the best choices, but there is always something to learn and I’m always aware that “I don’t know what I don’t know”, so please feel free to comment in whatever style suits you. I’ll be following up with posts as I cross the major milestones.


Reddot Ruby Conference 2013

Just attended the Reddot Ruby Conference here in Singapore last Friday and Saturday. A fantastic event with over 300 Ruby/Rails developers from around the region and the world. It is always great to connect with like-minded people and share ideas and experiences. Special thanks to https://twitter.com/winstonyw and the rest of the organizing team for all the hard work and effort that went into putting the conference together. Great job, guys!