You want to build simple services that do things, and be able to modify, rebuild, and roll them out from a code base at will? Then you are probably interested in Docker or similar container tooling. In this post we will explore some of my findings, and it will also serve as a notes/detail reference for my future self. Thanks, me.

Docker Build and Provisioning – Introducing Docker

Now, there are a couple of files that must be crafted that govern the provisioning, and possibly the creation, of your Docker container. This guide assumes you are on a common distro and have installed a recent Docker environment:

RedHat Enterprise Linux – docker

Ubuntu – docker

As I am coming to understand it, the key commands (and the files behind them) are:

docker build -t containername .

This command uses the “Dockerfile” in the current directory, which holds the actual instructions that build your Docker image. It can be as simple as pulling a base image, or as complicated as many layers and everything in between.
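
For example, a minimal Dockerfile might look like this (just a sketch; the base image, the package, and the command are placeholders, not part of this guide's stack):

FROM ubuntu:focal

# install whatever the (hypothetical) service needs
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*

# the command the container runs by default
CMD ["curl", "--version"]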

docker-compose up -d

When you run this, it reads the “docker-compose.yml” file in the current directory and provisions the Docker container(s) it describes on your host.
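
At its simplest, a docker-compose.yml can be little more than a named service pointing at an image (a sketch with placeholder names):

services:
  web:
    image: nginx:latest        # placeholder image
    container_name: web01
    ports:
      - "8080:80"              # publish host port 8080 to container port 80
    restart: unless-stopped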

In most default configurations, when you provision a stock Docker container image it will attach to a bridge network. You then publish port forwards from your host into the docker0 bridge (in most cases). Some environments alter this default rollout, and Docker supports quite a wide array of network arrangements; you just need to know how to describe the arrangement you want.

Docker Deployment via Docker Compose

In most cases, if you follow a standard deployment suggestion from Docker Hub or someone's GitHub project, you were probably told to run ‘docker run -d groupname/containername:version‘, which uses Docker's bridge network and port forwarding. This is the “standard” deployment model.
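
As an illustration of that model (the image and ports here are stand-ins, not part of this guide's stack): run detached on the default bridge, publish a host port into the container, then look at what Docker wired up:

docker run -d --name web01 -p 8080:80 nginx:latest
docker ps
docker network inspect bridge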

It is suggested to create a folder per environment (inner/core/etc.) and then, inside those, a folder per container/stack you are building. Each stack/container folder will usually have exactly one ‘docker-compose.yml‘ file, but may have more than one Dockerfile plus other supporting scripts/files; generally you only have one docker-compose.yml per stack. See the example layout below.
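
For the stacks in this post, the layout would end up looking something like this (the top-level folder name and per-stack folder names are arbitrary, just what I chose):

docker/
  inner/
    pihole01/
      build_dockerinner01.sh
      docker-compose.yml
      deploy_container.sh
  core/
    ns02/
      build_dockercore01.sh
      Dockerfile
      entrypoint.sh
      docker-compose.yml
      deploy_container.sh
      redeploy_container.sh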

In deploying this, we will be using a MACVLAN model so our IP traffic looks “normal”, if you will: specific services on specific IPs, not all bound up in the Docker host's IP. We will be doing this as if we had two VMs, one in “Inner” and one in “Core”. On each VM we have set up Docker, Docker Compose, and all dependencies, and a “hello world” Docker deploy works.

Further, we have configured Unbound on our OPNsense router, which we do not cover in this version of the guide. Our hosts query the Pi-hole, and the Pi-hole forwards to Unbound, or to our BIND, depending on the domain being requested (‘homelab.home’ in this example) – this forwarding is configurable in the Pi-hole once you deploy it and log in.

We make our dedicated network first. Let's create this file:

build_dockerinner01.sh
#!/bin/bash
docker network create dockerinner01 \
    --driver=macvlan \
    --subnet=192.168.1.0/27 \
    --gateway=192.168.1.1 \
    --ip-range=192.168.1.16/28 \
    --attachable \
    -o parent=eth1

Now you will want to chmod +x *.sh and then run ./build_dockerinner01.sh
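
To confirm the network exists and got the subnet and IP range you intended, you can inspect it:

docker network ls
docker network inspect dockerinner01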

Using a docker-compose.yml file deployed with Docker Compose, we will build a Pi-hole whose IP comes from the MACVLAN network we just created on the extra NIC.

Let's make the main Pi-hole Docker Compose file:

docker-compose.yml
version: "3.9"

services:
  pihole:
    container_name: pihole01
    image: pihole/pihole:latest
    hostname: pihole01
    domainname: homelab.home
    extra_hosts:
      - "pihole01:127.0.0.1"
      - "ns02.homelab.home ns02:192.168.1.53"
      - "pihole01.homelab.home pihole01:192.168.1.17"
    networks:
      dockerinner01:
        ipv4_address: 192.168.1.17
        aliases:
         - pihole01.homelab.home
    dns:
      - 192.168.1.1
    expose:
      - '53'
      - '53/udp'
      - '67/udp'
      - '80'
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
    environment:
      TZ: 'America/Chicago'
      FTLCONF_REPLY_ADDR4: 192.168.1.17 # must match ipv4_address above
      VIRTUAL_HOST: 'pihole01.homelab.home'  # Must be hostname + domainname from above
      WEBPASSWORD: 'USEYOUROWNPASSWORD'
    volumes:
      - pihole:/etc/pihole
      - dnsmasq:/etc/dnsmasq.d
    cap_add:
      - NET_ADMIN
    restart: always

networks:
  dockerinner01:
    external: true

volumes:
  pihole:
  dnsmasq:

We can also create a script that redeploys only when a new image arrives:

deploy_container.sh
#!/bin/bash

set -ex

# docker pull only prints "Downloaded newer image" when a newer image was actually fetched
out=$(docker pull pihole/pihole:latest)
if [[ $out == *"Downloaded newer image"* ]]; then
    docker-compose up -d --force-recreate --always-recreate-deps --build --remove-orphans
    docker image prune -f
fi

Now we can chmod +x *.sh and run ./deploy_container.sh to build and run our Pi-hole.
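
A few quick checks that the container came up and is answering DNS (dig comes from dnsutils/bind-utils; note that with MACVLAN you generally want to test from another host on the LAN, since the Docker host itself cannot reach the container over the parent NIC by default):

docker ps
docker logs pihole01
dig @192.168.1.17 www.example.com +short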

If you want to administer the Pi-hole, you would browse to the “192.168.1.17” address (the admin interface lives under /admin) and log in with the WEBPASSWORD you set in the compose file. When a host resolves a DNS name, it connects to the “192.168.1.17” IP, but the host running that Pi-hole Docker container (whatever its own IP might be) is what actually replies back to the requester. It can look odd on the network if you were expecting something else to happen.

If you are after a more “local network connection” or standard “service style” arrangement, then you would be interested in Docker’s other features.

You will need to make a new network for this via “docker” commands; you can also do this with a “docker-compose” method. I will start with the first and sketch the second below. I haven't actually used the second method yet, so…
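
For reference, here is roughly what the compose-defined variant would look like: instead of marking the network external, you declare the macvlan driver and IPAM settings directly in the docker-compose.yml (a sketch only, reusing the values from the script above; I have not run this myself, and depending on your Compose version some of these IPAM keys may only be accepted by the newer Compose spec):

networks:
  dockerinner01:
    driver: macvlan
    driver_opts:
      parent: eth1             # same parent NIC as in the docker network create example
    ipam:
      config:
        - subnet: 192.168.1.0/27
          gateway: 192.168.1.1
          ip_range: 192.168.1.16/28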

Custom Deployment – Bind configurable via Webmin

In this deployment we will be using “docker-compose”, which must be installed; the classic docker-compose is a Python tool, so it pulls in a series of dependencies. Please return once you have it installed; on a standard flavor of Linux you should not have to do much configuration, as far as I know.
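
A quick way to confirm it is installed and on your PATH:

docker-compose --version
# or, if you are using the newer Compose plugin:
docker compose version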

Create a file to make our MACVLAN Network for Docker

Again, we are using separate VMs for each of our zones, so this is being set up on its own VM (otherwise we would be putting these on the same Docker network, with IPs quite close to each other).

build_dockercore01.sh
#!/bin/bash
docker network create dockercore01 \
    --driver=macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    --ip-range=192.168.1.52/30 \
    --attachable \
    -o parent=eth1

Now we use a Dockerfile and a docker-compose.yml, plus an entrypoint.sh, configured for this environment. At the end we cover how to execute the whole group of files. The BIND example is a good, strong example in my mind because of how the original creator built out the entrypoint.sh; it helped me learn some of the rest of the instrumentation more easily. Thank you!

A bind configuration (modified from sameersbn/docker-bind):

Dockerfile (copied and modified from source)

FROM ubuntu:focal AS add-apt-repositories

RUN DEBIAN_FRONTEND=noninteractive apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y apt-utils \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y ca-certificates wget \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y gnupg \
 && wget -q -O- http://www.webmin.com/jcameron-key.asc | apt-key add - \
 && echo "deb http://download.webmin.com/download/repository sarge contrib" >> /etc/apt/sources.list

FROM ubuntu:focal

ENV BIND_USER=bind \
    BIND_VERSION=9.16.1-0ubuntu2.16 \
    WEBMIN_VERSION=2.102 \
    DATA_DIR=/data \
    HOSTNAME=ns02

COPY --from=add-apt-repositories /etc/apt/trusted.gpg /etc/apt/trusted.gpg

COPY --from=add-apt-repositories /etc/apt/sources.list /etc/apt/sources.list

RUN rm -rf /etc/apt/apt.conf.d/docker-gzip-indexes \
 && DEBIAN_FRONTEND=noninteractive apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y \
      bind9=1:${BIND_VERSION}* \
      bind9-host=1:${BIND_VERSION}* \
      dnsutils \
      webmin=${WEBMIN_VERSION}* \
 && rm -rf /var/lib/apt/lists/*

COPY entrypoint.sh /sbin/entrypoint.sh

RUN chmod 755 /sbin/entrypoint.sh

EXPOSE 53/udp 53/tcp 10000/tcp

ENTRYPOINT ["/sbin/entrypoint.sh"]

HEALTHCHECK --interval=12s --timeout=12s --start-period=30s \
  CMD dig +short +norecurse +retry=0 @192.168.1.53 ns02.homelab.home || exit 1

CMD ["/usr/sbin/named"]
docker-compose.yml (created by me, there is a reference but it was quite minimal)

services:
  bind:
    container_name: ns02
    image: homelab/bind-primary:latest
    build:
      context: .
      dockerfile: Dockerfile
    hostname: ns02
    domainname: homelab.home
    extra_hosts:
      - "ns02:127.0.0.1"
      - "ns02.homelab.home ns02:192.168.1.53"
    networks:
      dockercore01:
        ipv4_address: 192.168.1.53
        aliases:
         - ns02.homelab.home
    dns:
      - 192.168.1.17
      - 192.168.1.21
    expose:
      - '53'
      - '53/udp'
      - '10000'
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "10000:10000/tcp"
    volumes:
      - bind:/data/bind
      - webmin:/data/webmin
    restart: always

networks:
  dockercore01:
    external: true

volumes:
  bind:
  webmin:
entrypoint.sh (modified from source, only slightly)
#!/bin/bash
set -e

# usage: file_env VAR [DEFAULT]
#    ie: file_env 'XYZ_DB_PASSWORD' 'example'
# (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
#  "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
file_env() {
  local var="$1"
  local fileVar="${var}_FILE"
  local def="${2:-}"
  if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
    echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
    exit 1
  fi
  local val="$def"
  if [ "${!var:-}" ]; then
    val="${!var}"
  elif [ "${!fileVar:-}" ]; then
    val="$(< "${!fileVar}")"
  fi
  export "$var"="$val"
  unset "$fileVar"
}

file_env 'ROOT_PASSWORD'

ROOT_PASSWORD=${ROOT_PASSWORD:-password}
WEBMIN_ENABLED=${WEBMIN_ENABLED:-true}
WEBMIN_INIT_SSL_ENABLED=${WEBMIN_INIT_SSL_ENABLED:-true}
WEBMIN_INIT_REDIRECT_PORT=${WEBMIN_INIT_REDIRECT_PORT:-10000}
WEBMIN_INIT_REFERERS=${WEBMIN_INIT_REFERERS:-NONE}

BIND_DATA_DIR=${DATA_DIR}/bind
WEBMIN_DATA_DIR=${DATA_DIR}/webmin

create_bind_data_dir() {
  mkdir -p ${BIND_DATA_DIR}

  # populate default bind configuration if it does not exist
  if [ ! -d ${BIND_DATA_DIR}/etc ]; then
    mv /etc/bind ${BIND_DATA_DIR}/etc
  fi
  rm -rf /etc/bind
  ln -sf ${BIND_DATA_DIR}/etc /etc/bind
  chmod -R 0775 ${BIND_DATA_DIR}
  chown -R ${BIND_USER}:${BIND_USER} ${BIND_DATA_DIR}

  if [ ! -d ${BIND_DATA_DIR}/lib ]; then
    mkdir -p ${BIND_DATA_DIR}/lib
    chown ${BIND_USER}:${BIND_USER} ${BIND_DATA_DIR}/lib
  fi
  rm -rf /var/lib/bind
  ln -sf ${BIND_DATA_DIR}/lib /var/lib/bind
}

create_webmin_data_dir() {
  mkdir -p ${WEBMIN_DATA_DIR}
  chmod -R 0755 ${WEBMIN_DATA_DIR}
  chown -R root:root ${WEBMIN_DATA_DIR}

  # populate the default webmin configuration if it does not exist
  if [ ! -d ${WEBMIN_DATA_DIR}/etc ]; then
    mv /etc/webmin ${WEBMIN_DATA_DIR}/etc
  fi
  rm -rf /etc/webmin
  ln -sf ${WEBMIN_DATA_DIR}/etc /etc/webmin
}

disable_webmin_ssl() {
  sed -i 's/ssl=1/ssl=0/g' /etc/webmin/miniserv.conf
}

set_webmin_redirect_port() {
  echo "redirect_port=$WEBMIN_INIT_REDIRECT_PORT" >> /etc/webmin/miniserv.conf
}

set_webmin_referers() {
  echo "referers=$WEBMIN_INIT_REFERERS" >> /etc/webmin/config
}

set_root_passwd() {
  echo "root:$ROOT_PASSWORD" | chpasswd
}

create_pid_dir() {
  mkdir -p /var/run/named
  chmod 0775 /var/run/named
  chown root:${BIND_USER} /var/run/named
}

create_bind_cache_dir() {
  mkdir -p /var/cache/bind
  chmod 0775 /var/cache/bind
  chown root:${BIND_USER} /var/cache/bind
}

first_init() {
  if [ ! -f /data/webmin/.initialized ]; then
    set_webmin_redirect_port
    if [ "${WEBMIN_INIT_SSL_ENABLED}" == "false" ]; then
      disable_webmin_ssl
    fi
    if [ "${WEBMIN_INIT_REFERERS}" != "NONE" ]; then
      set_webmin_referers
    fi
    touch /data/webmin/.initialized
  fi
}

create_pid_dir
create_bind_data_dir
create_bind_cache_dir

# allow arguments to be passed to named
if [[ ${1:0:1} = '-' ]]; then
  EXTRA_ARGS="$*"
  set --
elif [[ ${1} == named || ${1} == "$(command -v named)" ]]; then
  EXTRA_ARGS="${*:2}"
  set --
fi

# default behaviour is to launch named
if [[ -z ${1} ]]; then
  if [ "${WEBMIN_ENABLED}" == "true" ]; then
    create_webmin_data_dir
    first_init
    set_root_passwd
    echo "Starting webmin..."
    /etc/init.d/webmin start
  fi

  echo "Starting named..."
  exec "$(command -v named)" -u ${BIND_USER} -g ${EXTRA_ARGS}
else
  exec "$@"
fi
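
The entrypoint reads a handful of environment variables (ROOT_PASSWORD, WEBMIN_ENABLED, WEBMIN_INIT_SSL_ENABLED, WEBMIN_INIT_REDIRECT_PORT, WEBMIN_INIT_REFERERS, plus ROOT_PASSWORD_FILE for Docker secrets). If you want to override the defaults, you can pass them in via the service's environment block in the docker-compose.yml above; a sketch with example values:

    environment:
      ROOT_PASSWORD: 'USEYOUROWNPASSWORD'    # Webmin login is root with this password
      WEBMIN_ENABLED: 'true'
      WEBMIN_INIT_SSL_ENABLED: 'true'
      WEBMIN_INIT_REDIRECT_PORT: '10000'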

Extra, a script to call the deploy process

deploy_container.sh (pulls the base image; if it is new, rebuilds and brings the stack up)
#!/bin/bash

set -ex

# docker pull only prints "Downloaded newer image" when a newer image was actually fetched
out=$(docker pull ubuntu:focal)

if [[ $out == *"Downloaded newer image"* ]]; then
    docker-compose up -d --force-recreate --always-recreate-deps --build --remove-orphans
    docker image prune -f
fi

After you have created all the files together in one directory, issue these commands:

chmod +x *.sh
./deploy_container.sh

Extra, a script to force full no cache build and deploy

redeploy_container.sh (this will pull, stop, and rebuild with no cache, then bring the stack up)
#!/bin/bash
set -ex
IMAGE="ubuntu:jammy"
docker pull $IMAGE
docker container stop ns02
docker-compose build --no-cache
docker-compose up -d --force-recreate --always-recreate-deps --build --remove-orphans
docker image prune -f
