Containerization is all the rage right now, so I decided to jump on that bandwagon. I'd grown tired of my old method of deploying my apps (a custom set of bash bootstrap scripts) and decided to try to create a more modern deployment method for them.
The first step in containerizing any app is to create a Dockerfile for it. My Dockerfile for my Rails apps is below. I'll go through it line by line to explain what everything does and why I wrote it the way I did.
```dockerfile
FROM ruby:2.3

RUN apt-get update && \
    apt-get install -qq -y --no-install-recommends cron && \
    rm -rf /var/lib/apt/lists/*

ENV APP_HOME /usr/src/app
ENV RAILS_LOG_TO_STDOUT true
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true

RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME

COPY Gemfile $APP_HOME/Gemfile
COPY Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install

COPY . $APP_HOME

RUN bundle exec rake assets:precompile

EXPOSE 7000

CMD bundle exec rails s -p 7000 -b '0.0.0.0'
```
```dockerfile
FROM ruby:2.3
```

This sets the official ruby base image as the starting point for my Docker image. I chose ruby version 2.3 because that's what my apps were using when I started Dockerizing them. Now that they're containerized, it should be much easier to update their ruby versions independently.
```dockerfile
RUN apt-get update && \
    apt-get install -qq -y --no-install-recommends cron && \
    rm -rf /var/lib/apt/lists/*
```
This line installs cron. I use the whenever gem to schedule cron tasks for my apps, and cron is not installed by default on the ruby base image. The `--no-install-recommends` flag reduces the number of extra packages that get installed, so the final image will be smaller. The `rm -rf /var/lib/apt/lists/*` part removes the package lists that apt downloads during `apt-get update`; we don't need them in the final image, so deleting them also reduces the final image size.
```dockerfile
ENV APP_HOME /usr/src/app
ENV RAILS_LOG_TO_STDOUT true
ENV RAILS_ENV production
ENV RAILS_SERVE_STATIC_FILES true
```
These lines set four environment variables. APP_HOME sets where the app will be installed in the image. RAILS_LOG_TO_STDOUT makes Rails logs play nicely with docker instead of getting logged to a file inside the running container. RAILS_ENV sets the Rails app to run in production mode. (When I'm developing, this environment variable gets overridden.) Finally, RAILS_SERVE_STATIC_FILES tells recent versions of Rails (5.1+, I think) to serve static assets. Typically, you don't want your app server serving static assets, but my apps are behind the Cloudflare CDN, so these static assets don't get served by my app servers very often.
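Because values set with ENV are just defaults, they can be overridden when a container is started. As a sketch of how I might run the same image in development mode (the `myapp` image tag here is a placeholder of mine, not something from the Dockerfile above):

```shell
# Override the baked-in RAILS_ENV for a development container.
# "myapp" is a hypothetical image tag used for illustration.
docker run -e RAILS_ENV=development -p 7000:7000 myapp
```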
```dockerfile
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
```
These lines just create the directory that the app will be installed into and set that directory as the working directory for all of the Dockerfile instructions that follow.
```dockerfile
COPY Gemfile $APP_HOME/Gemfile
COPY Gemfile.lock $APP_HOME/Gemfile.lock
RUN bundle install
```
These lines copy the Gemfile and Gemfile.lock into the app folder on the image. Then we run bundler to install all the gems the app needs. These lines of the Dockerfile are placed here because we expect the Gemfiles to change less often than the application code that will be copied later, but more often than the list of packages installed by apt earlier in the Dockerfile. (A step is re-run only when it, or a step before it, has changed since the last build, so it's best to order steps from least-changing to most-changing. This makes the best use of Docker's build cache.)
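You can see the cache ordering pay off by building the image twice. A sketch, assuming a hypothetical `myapp` image tag and an edit to an arbitrary application file:

```shell
# First build runs every step.
docker build -t myapp .

# Change only application code (not the Gemfiles) and rebuild:
# everything up through "RUN bundle install" is served from the
# build cache, and only the later COPY/RUN steps are re-run.
echo "# tweak" >> app/models/user.rb
docker build -t myapp .
```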
COPY . $APP_HOME
This copies the app into the image. This step and all of the steps after it will be re-run whenever the image is rebuilt (assuming the app's code changed between rebuilds).
```dockerfile
RUN bundle exec rake assets:precompile
```

This precompiles the app's assets at build time so Rails can serve them in production without compiling them on the fly.

```dockerfile
EXPOSE 7000
```

I expose each of my Rails apps on unique ports to make it easy to differentiate them. This particular app happens to run on port 7000. This command lets other containers talk to this container via its port 7000.
```dockerfile
CMD bundle exec rails s -p 7000 -b '0.0.0.0'
```
Finally, this command runs the rails server, puma, binding it to 0.0.0.0 so that it accepts connections on port 7000 from any IP address. (Binding to the default of localhost would make the server unreachable from outside the container.)
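Note that EXPOSE by itself doesn't publish the port to the host; that happens when the container is started. A sketch of how the web container might be run (the image and container names are placeholders of mine):

```shell
# Publish container port 7000 on host port 7000.
# "myapp" and "myapp-web" are hypothetical names for illustration.
docker run -d --name myapp-web -p 7000:7000 myapp
```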
How I use this Dockerfile
As I mentioned earlier, my Rails app has cron jobs that need to run periodically. It also has a job queue that needs to be processed as jobs are created. These two additional tasks are able to use the above Dockerfile by overriding the CMD line at the end. In this way, I'm able to run three different containers that perform different tasks, all from the same image.
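The idea can be sketched like this. The image tag, container names, and the worker command are my assumptions for illustration, not details from this post; substitute whatever your app's queue processor actually uses:

```shell
# Web server: uses the image's default CMD.
docker run -d --name myapp-web -p 7000:7000 myapp

# Job queue worker: override CMD with the queue-processing command.
# (The rake task name is a hypothetical stand-in for your worker.)
docker run -d --name myapp-worker myapp \
    bundle exec rake jobs:work

# Cron: install the whenever schedule into the crontab, then run
# cron in the foreground so it stays the container's main process.
docker run -d --name myapp-cron myapp \
    sh -c "bundle exec whenever --update-crontab && cron -f"
```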
I'll go into more detail about how I run my containers in a later post.
Photo by frank mckenna