There can be little doubt that Docker is the hippest thing in the software development ecosystem right now (correct as at 10.54am, Fri 19th Feb). Despite my initial misgivings – I'd assumed it was just some Vagrant replacement – I've actually come around to thinking that Docker deserves (at least some of) the hype! Why? Let me explain.
We recently took over a legacy* Rails app – a pretty straightforward affair with a MySQL backend, Redis and Sidekiq. We decided to use Docker instead of either:
- installing and configuring everything locally on the developer machine (in our case a MacBook Pro), or
- setting up a Vagrant image.
In case you are in discussions with your boss, CTO or lead developer about why you should be using Docker, here is what we found:
- It's easy. You can generally configure an application to use Docker in a day – compare that to Vagrant with Chef, which generally took me several days to set up.
- It's fast. Once you have your `docker-compose.yml` file and the images pushed to DockerHub (you may not even need custom images), blasting away the containers and rebuilding them usually takes only a few minutes (compared to Vagrant, which typically took upwards of 15-20 minutes).
- It's repeatable. Because it's fast, developers generally don't mind blowing away containers and starting over, which means the `docker-compose.yml` file is tested repeatedly. The more frequently you test, the greater your confidence.
- It encourages sharing and collaboration. The `docker-compose.yml` files are easy to follow and maintain, which means the whole team is encouraged to contribute. If you want to add ImageMagick, you just add it to the Dockerfile locally, build and test – and if it works you push it to DockerHub for the rest of the team to share.
- It's convenient and simple. If your team works on more than one application, then every Docker-enabled app will have the same steps for setting up the local development environment:
```shell
git clone ...
cd ...
docker-compose up
```
Go make a coffee and when you come back your app will be whirring away. It really is that simple!
Sales pitch over, here is what we did to get the app on Docker.
It's quite likely that you will need to build a custom image for your Rails app. For the other components (MySQL, Redis, etc.) you will probably be fine using the official images. Here is our Dockerfile:
```dockerfile
FROM ruby:2.0.0-p648-slim
MAINTAINER CreatekIO

RUN apt-get update && \
    apt-get install -y \
      git \
      build-essential \
      imagemagick \
      libmagickwand-dev \
      libmysqlclient-dev \
      libqt5webkit5-dev \
      npm \
      qt5-default \
      xvfb \
      && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
    ln -sf /usr/lib/x86_64-linux-gnu/ImageMagick-6.8.9/bin-Q16/* \
      /usr/local/bin/ && \
    mkdir -p /app

WORKDIR /app

COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 20 --retry 5

COPY . ./

CMD ["/bin/bash"]
```
Some things worth noting about this:
- We've chosen to base our image on the slim version of the Ruby image (`ruby:2.0.0-p648-slim`) and just `apt-get` what we need, to keep the image small. There are `alpine` images which are even slimmer but a little trickier to work with (maybe one day… 😉).
- The app's gems are baked into the image, which improves performance for first-time pulls and CI builds.
- The `COPY . ./` command copies all of our application code into the image, which we then push back up to DockerHub. Only do this if you have private (i.e. paid-for) repos or your project is open source.
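Because `COPY . ./` sends the whole build context into the image, it's worth pairing it with a `.dockerignore` file so secrets and cruft stay out. A minimal sketch – the entries below are just common candidates, adjust for your project:

```
.git
log/
tmp/
.env
```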
Once you have a Dockerfile (and assuming you've installed Docker Toolbox), you can build the image with:
```shell
docker build .
```
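In practice you'll probably want to tag the image at build time so it can be pushed to DockerHub for the rest of the team – `createkio/my_app` here is just a placeholder, substitute your own DockerHub namespace and repo:

```shell
# Build and tag the image, then share it via DockerHub
docker build -t createkio/my_app .
docker push createkio/my_app
```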
Which is lovely, but without a database container it isn't going to be much fun. Enter the compose file:
The `docker-compose.yml` file is the glue that brings all the parts of your app (known as containers) together. You typically run every component as a separate container. In our app we have containers for the Rails app server, MySQL, Redis and Sidekiq.
```yaml
my_app.app:
  build: .
  command: bash -c 'rm -f /app/tmp/pids/server.pid && bundle && rails server -b 0.0.0.0'
  volumes:
    - .:/app
  links:
    - my_app.db
    - my_app.redis
  ports:
    - '3000:3000'
  environment:
    REDIS_URL: redis://my_app.redis:6379

my_app.db:
  image: mysql:5.5
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
  ports:
    - '3306:3306'

my_app.redis:
  image: redis:3.0

my_app.sidekiq:
  build: .
  command: bash -c 'bundle && bundle exec sidekiq'
  volumes:
    - .:/app
  links:
    - my_app.db
    - my_app.redis
  environment:
    REDIS_URL: redis://my_app.redis:6379
```
- Every container has its own section in the YAML file (`my_app.db` and so on).
- If containers need to talk to one another, you need to mention them in the `links:` section. For example, the Rails app needs to talk to MySQL and Redis. Docker creates host entries for the linked-to containers, so in `database.yml` you just need to specify the container name, like this:
```yaml
development:
  adapter: mysql2
  encoding: utf8
  database: my_app_development
  pool: 5
  username: root
  password:
  host: my_app.db
```
- To expose a port from a container to the outside world, add it to the `ports:` section. At the very least you'll need to expose 3000 for the Rails server, but you may want to expose the database port too.
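With the MySQL port exposed, you can poke at the database directly from your Mac. Under Docker Toolbox the containers run inside a VM, so you connect to the docker-machine IP rather than localhost – this sketch assumes the default machine name and a local `mysql` client:

```shell
# Connect to the containerised MySQL from the host machine
mysql -h $(docker-machine ip default) -P 3306 -u root my_app_development
```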
Once you're happy, go ahead and fire up the app with `docker-compose up`.
If you've done it right, you should see the STDOUT from each of the running containers interleaved in your terminal.
That's it! We'll shortly be adding a post on how we manage images on DockerHub, with and without private repos.
* legacy – any application with few or no automated tests.