Tutorial: Deploying Phoenix 1.4 to a Digital Ocean Droplet With Docker



I am writing this article to explain how I was able to deploy Phoenix 1.4 to Digital Ocean. It is a combination of two reference articles and some of my own experience.

Reference articles

I highly suggest reading the theory in those two blog posts before continuing with this article.

Building the Docker image

In this step I chose to use the builder pattern described in the first blog post, where we use two Docker images: one to build the release and another, slimmer one to run it, so that we don't get errors when we deploy the release on a different OS.

To achieve this, we will use mix_docker, a dependency that makes this process fairly painless.
Since mix_docker has not been maintained for over two years, it is not compatible with Phoenix 1.4. To solve this I forked it, updated distillery to version 2.0, and changed cowboy to plug_cowboy version 2.0.

Now let's add my updated fork to the deps in mix.exs.
defp deps do
  [
    {:mix_docker, github: "johninvictus/mix_docker"}
  ]
end
(NOTE) You can fork my mix_docker project so that you have full control over it.

Now let's continue. Initiate the process of building the image by running this command in your terminal inside your project:
 mix docker.init
This command will create a rel/ directory containing a config.exs file that defines Distillery's behaviour. We will also use the rel/ folder to set hooks for migrations and seeding.
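For orientation, here is a trimmed sketch of what the generated rel/config.exs typically looks like. The cookie, versions, and release name below are placeholders, and your generated file will contain more comments and settings.
 # rel/config.exs (trimmed sketch; the values below are placeholders)
# Distillery 2.0 generates `use Mix.Releases.Config`; Distillery 2.1+
# uses `use Distillery.Releases.Config` instead.
use Mix.Releases.Config,
  default_release: :default,
  default_environment: Mix.env()

environment :dev do
  set(dev_mode: true)
  set(include_erts: false)
end

environment :prod do
  set(include_erts: true)
  set(include_src: false)
  set(cookie: :"replace_with_the_generated_cookie")
end

release :sample do
  set(version: current_version(:sample))
end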

Configure mix_docker


We plan to push the Docker image we build to Docker Hub later, so to prepare for that, create an account at Docker Hub if you don't have one yet. Then create a repository with the name of your project, e.g. sample.


After you create the repository, add this config to your Phoenix config file:
 # docker mix generated image
  config :mix_docker, image: "johninvictus/sample"
(NOTE) The image name contains my Docker Hub username and the project name. After that, we need to customize the default generated Docker files, since the stock files do not support building a Phoenix 1.4 Docker image.
To do this, run this in your terminal:
 mix docker.customize
When you run that command, you will see Dockerfile.build and Dockerfile.release files appear. Let's first change the content of Dockerfile.build, the file responsible for compiling the project and building the release. Replace the content of Dockerfile.build with this:

FROM bitwalker/alpine-elixir-phoenix

ENV HOME=/opt/app/ TERM=xterm

# RUN apk update && apk add bash
ENV MIX_ENV prod

# Add the files to the image
COPY . .

# Cache Elixir deps
RUN mix deps.get --only prod
RUN mix deps.compile

WORKDIR assets
# Cache Node deps
RUN npm i

# Compile JavaScript
RUN npm run deploy

WORKDIR ..
# Compile app
RUN mix compile
RUN mix phx.digest

RUN mix release --env=prod --verbose
This file copies your project code into the image, fetches and compiles the Elixir dependencies, installs and builds the JavaScript assets, digests the static files, and finally builds a Distillery release.

To check that the above file is okay, run this command to build the image:
 mix docker.build
If you experience no errors, copy this content into the Dockerfile.release file:
FROM bitwalker/alpine-elixir-phoenix


RUN apk update && apk add bash

EXPOSE 4000
ENV PORT=4000 MIX_ENV=prod REPLACE_OS_VARS=true SHELL=/bin/sh

ADD sample.tar.gz ./
RUN chown -R default ./releases

USER default

ENTRYPOINT ["/opt/app/bin/sample"]
(NOTE) In the code above, note that sample.tar.gz carries the name of my project, i.e. sample. Also note that the entry point ENTRYPOINT ["/opt/app/bin/sample"] contains my project name. Please replace sample with your own project name.

The Dockerfile.release file is used to build the release image. To test that everything is okay with it, run this command:
 mix docker.release
Now let's publish the image to Docker Hub so that we can use it in docker-compose later. To do this, make your project a git repository and commit all the changes. We do this because the command we are about to use derives a Docker tag from the git commit, and that tag identifies the different image versions in your Docker Hub repository.

 mix docker.shipit
This command will build an image, create a release, and push the release image to Docker Hub. If all is well, you will get an image name with a unique tag, i.e.
 johninvictus/sample:0.1.0.20-aa6219bf13
If you miss the tag in the output, you can find it in your Docker Hub repository.

If all goes well, let's set up docker-compose so that we can wire up the database and migrations.

Setting up docker-compose

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

To start using docker-compose, create a docker-compose.yml file at the root of your project. Then add the following content:
 version: "3"
services:
  db:
    image: postgres:10.2-alpine
    environment:
      DB_HOST: 127.0.0.1
      POSTGRES_DB: "sample_prod"
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
      DB_PORT: 5432

  web:
    image: "johninvictus/sample:0.1.0.20-aa6219bf13"
    command: foreground
    depends_on:
      - db
    ports:
        - "4000:4000"
    environment:
      DATABASE_URL: "ecto://postgres:postgres@db/sample_prod"
      PORT: 4000
      POOL_SIZE: 10
(NOTE) The image in the web section uses your repository name and the tag. You will need to change the tag every time you commit and ship your Docker image.

In the above content, db defines the Postgres container and its desired initial configuration. The web section uses the image you pushed to Docker Hub and depends on the database service. The DATABASE_URL is of the format "ecto://username:password@db/database_name".

After that, let the Phoenix project know about the created database by extending the configuration. Open your prod.exs or prod.secret.exs and make sure the Repo is pointing to the created database.
 # Configure your database
config :sample, Sample.Repo,
  adapter: Ecto.Adapters.Postgres,
  url: "ecto://postgres:postgres@db/sample_prod",
  username: "postgres",
  password: "postgres",
  database: "sample_prod",
  pool_size: 15
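(Optional) Since Dockerfile.release sets REPLACE_OS_VARS=true, you could instead let Distillery substitute the DATABASE_URL that docker-compose passes to the web container at boot time, rather than hard-coding the credentials. A minimal sketch, assuming the :sample app and the Sample.Repo name used in this article:
 # config/prod.secret.exs (sketch) -- with REPLACE_OS_VARS=true, Distillery
# replaces "${DATABASE_URL}" in the release config at boot with the value
# set in docker-compose.yml.
config :sample, Sample.Repo,
  adapter: Ecto.Adapters.Postgres,
  url: "${DATABASE_URL}",
  pool_size: 15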
After that, go to your config.exs and add server: true to your Endpoint section, so that the release actually serves HTTP requests and you can access your project via localhost or the IP address of your droplet, i.e.
 # Configures the endpoint
config :sample, SampleWeb.Endpoint,
  url: [host: "localhost"],
  server: true,
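While you are at it, it is also worth checking that your config/prod.exs Endpoint block binds to the PORT variable that docker-compose sets. A sketch of what that section might look like (the host and paths are illustrative, and the exact keys depend on your generated prod.exs):
 # config/prod.exs (sketch) -- read the port from the PORT environment
# variable at runtime and serve the digested static assets.
config :sample, SampleWeb.Endpoint,
  http: [port: {:system, "PORT"}],
  url: [host: "localhost", port: 4000],
  server: true,
  cache_static_manifest: "priv/static/cache_manifest.json"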
After doing all that, commit all the changes again, run mix docker.shipit, and update the tag of the web image in docker-compose.yml. Now you can run the project using this command.
docker-compose up
If all is well, you can visit localhost:4000 and see the Phoenix welcome page.
Now let's add the logic to run migrations and seed data.

Running migrations

Since we are using a minimal Docker image, you will not have tools like Mix available to run migrations. Instead, we will use Distillery release tasks and hooks to migrate and seed the database.

Create a file called release_tasks.ex under lib/<your_project_name>/ and add the following content.
defmodule Sample.ReleaseTasks do
  @start_apps [
    :crypto,
    :ssl,
    :postgrex,
    :ecto_sql
  ]

  @repos Application.get_env(:sample, :ecto_repos, [])

  def migrate(_argv) do
    start_services()

    run_migrations()

    stop_services()
  end

  def seed(_argv) do
    start_services()

    run_migrations()

    run_seeds()

    stop_services()
  end

  defp start_services do
    IO.puts("Starting dependencies..")
    # Start apps necessary for executing migrations
    Enum.each(@start_apps, &Application.ensure_all_started/1)

    # Start the Repo(s) for app
    IO.puts("Starting repos..")
    Enum.each(@repos, & &1.start_link(pool_size: 10))
  end

  defp stop_services do
    IO.puts("Success!")
    :init.stop()
  end

  defp run_migrations do
    Enum.each(@repos, &run_migrations_for/1)
  end

  defp run_migrations_for(repo) do
    app = Keyword.get(repo.config, :otp_app)
    IO.puts("Running migrations for #{app}")
    migrations_path = priv_path_for(repo, "migrations")
    Ecto.Migrator.run(repo, migrations_path, :up, all: true)
  end

  defp run_seeds do
    Enum.each(@repos, &run_seeds_for/1)
  end

  defp run_seeds_for(repo) do
    # Run the seed script if it exists
    seed_script = priv_path_for(repo, "seeds.exs")

    if File.exists?(seed_script) do
      IO.puts("Running seed script..")
      Code.eval_file(seed_script)
    end
  end

  defp priv_path_for(repo, filename) do
    app = Keyword.get(repo.config, :otp_app)

    repo_underscore =
      repo
      |> Module.split()
      |> List.last()
      |> Macro.underscore()

    priv_dir = "#{:code.priv_dir(app)}"

    Path.join([priv_dir, repo_underscore, filename])
  end
end
(NOTE) In @repos Application.get_env(:sample, :ecto_repos, []), the first argument should be your project's OTP app atom; my project is called sample, hence :sample. Likewise, the Sample module name above should match your project.
You can find more about this script in the Distillery documentation: distillery/2.0.0-rc.1/running_migrations.

Now let's create scripts to run the task. Go to the rel/ folder and create a hooks folder, where we will put scripts that run after (or before) the application boots.
Then, inside hooks, create another folder called post_start. In this folder we will add all the script files that need to run after the Phoenix project boots.
Inside the post_start folder, create a file called migrationseed.sh and add the following content.
#!/bin/sh

echo "Running migrations and seed data if any"
# bin/docker_magic  migrate
release_ctl eval --mfa "Sample.ReleaseTasks.seed/1" --argv -- "$@"
echo "Migrations and Seed data run successfully"
(NOTE) The above command uses my module name (Sample.ReleaseTasks); replace it with yours.
Now let's make sure the script actually runs after Phoenix boots by adding the following setting to the environment :prod block in rel/config.exs.
environment :prod do
  set(post_start_hooks: "rel/hooks/post_start")
  # ... keep the other settings that were already in this block
end
After you are done, run mix docker.shipit, update the tag in docker-compose.yml, and run docker-compose up again. If you have any migrations or seed data, they should now run smoothly.

Deploying with Docker Machine

This is the last section of this long article, and I believe it is the simplest. Here we will use Docker Machine with the Digital Ocean driver to deploy our Docker image. If you wish to deploy to another service, you can choose a different driver.

To start the deployment process, first generate an API token from the API section of your Digital Ocean control panel.

After you create the token, export it in your terminal as an environment variable:
 export DIGITAL_OCEAN_TOKEN=your token here
To create the Digital Ocean droplet, run this command (feel free to adjust the size):
 docker-machine create --driver=digitalocean --digitalocean-access-token=$DIGITAL_OCEAN_TOKEN --digitalocean-size=512mb sample 
Now, to access your droplet on Digital Ocean, you can simply SSH in with this command.
docker-machine ssh sample
Or
You can point your local Docker client at the remote host using this command.
eval $(docker-machine env sample)
This command lets you use your local terminal to control the Docker daemon on the remote Digital Ocean droplet.
After doing the above, we need to start docker-compose against the Digital Ocean host by running this command.
docker-compose up -d
The -d flag tells docker-compose to run the containers in detached mode. When you are done with the remote Digital Ocean host, you can point your terminal back to your local machine with this command.
eval $(docker-machine env -u)


Final thoughts

I know that was a very long article, but I hope it will be of help to somebody struggling to host their project in the cloud. Here is a project I have dockerized: johninvictus/blog. I hope it helps.

One final thing: add this code to prod.exs, since adding it solved a bug I was struggling with.
# Set a higher stacktrace depth; be careful with this in production,
# as building large stacktraces may be expensive.
config :phoenix, :stacktrace_depth, 20
For Nginx, refer to the two articles I shared at the top.