Improving Local Development with Docker Compose

In my previous post about how I built my drink tracking website, I talked about how I set up my local Docker environment. There were some things I could have done better, and now I have.

I am now using the official MySQL Docker image rather than MySQL hosted on my local machine, and I am using Docker Compose so that I can build and run my API and database images more easily. Docker Compose serves much the same purpose as a collection of bash scripts wrapping docker build and docker run commands, but it does so in a nicer way: all the configuration for multiple images lives in a single compose.yaml file. There are other benefits too, such as being able to create volumes and networks that the running containers all share.

compose.yaml

Here is my compose.yaml file:

services:
  api:
    build: .
    command: flask --app flaskr run -h 0.0.0.0 -p 8080 --debug
    container_name: mindful-drinking-api
    environment:
      - FLASK_SECRET_KEY=dev
      - FLASK_DATABASE_HOST=db
      - FLASK_DATABASE_PORT=3306
      - FLASK_DATABASE_DATABASE=mindful_drinking
      - FLASK_DATABASE_USER=dev
      - FLASK_DATABASE_PASSWORD=secret
    depends_on:
      - db
    networks:
      - mindful-net
    ports:
      - 8080:8080
    volumes:
      - type: bind
        source: ./flaskr
        target: /usr/src/app/flaskr
      - type: bind
        source: ./requirements.txt
        target: /usr/src/app/requirements.txt
  db:
    image: mysql:8.0
    container_name: mindful-drinking-db
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=mindful_drinking
      - MYSQL_USER=dev
      - MYSQL_PASSWORD=secret
    volumes:
      - type: bind
        source: ./docker-entrypoint-initdb.d
        target: /docker-entrypoint-initdb.d
    networks:
      - mindful-net

networks:
  mindful-net:

Some things I found really helpful when researching how to set this up:

  • The MySQL image will execute any .sql file placed in /docker-entrypoint-initdb.d when the container starts for the first time (with an empty data directory), meaning I could put my schema creation script in here. It will also use the environment variables passed in to create the database and user.
  • The MySQL image creates an anonymous volume on my host so that data is persisted between container starts and stops.
  • I can use a Docker Compose service name as a hostname if the services are on the same network. For example, using the service-level networks key I have defined a network called mindful-net that both containers will be on. Now I can set my Flask app's database host to db.
  • I can set the command for the api service to command: flask --app flaskr run -h 0.0.0.0 -p 8080 --debug, which overrides the CMD in my Dockerfile, CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8080", "flaskr:create_app()"], meaning I get the Flask dev server and debugging in local development.
  • I created a bind mount volume on my api container so that my source code is mounted into the container, and I can make changes and have the container see them without a rebuild. (This is also how I made the contents of docker-entrypoint-initdb.d on my host available to the MySQL container.)
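As a quick sanity check, once the stack is up you can confirm the init script ran by opening a MySQL shell through the db container. The container name and credentials below match the compose.yaml above; this is just a sketch of how I'd verify things, not part of the setup itself:

```shell
# Open a MySQL shell in the db container using the credentials
# from compose.yaml (MYSQL_USER=dev, MYSQL_PASSWORD=secret).
docker exec -it mindful-drinking-db \
  mysql -u dev -psecret mindful_drinking

# Inside the MySQL shell, SHOW TABLES; should list whatever the
# schema script in /docker-entrypoint-initdb.d created on first start.
```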

Local Dev Workflow

To get my API and database running locally I run docker compose up, which will build the images (if needed) and run the containers. docker compose down will stop and remove the containers. docker compose down -v will do the same but also remove any volumes, in this case the volume being used by MySQL, which will destroy any database data.
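The day-to-day commands collected in one place (the -d and logs variants are optional extras I find handy, not requirements):

```shell
# Build (if needed) and start the api and db containers in the foreground.
docker compose up

# Or run detached and follow the API logs separately.
docker compose up -d
docker compose logs -f api

# Stop and remove the containers; volumes survive.
docker compose down

# Stop and remove the containers AND their volumes --
# this wipes the MySQL data.
docker compose down -v
```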

I haven’t quite figured out the best approach for adding new Python packages yet. One solution could be to open a bash shell in the running container with docker exec -it <container-name-or-id> bash, pip install whatever package I need, and pip freeze > requirements.txt, which (because requirements.txt is bind mounted) updates the file on my host. This way I get the correct package versions for my production Python version, and next time I rebuild the image locally the Dockerfile would pip install as normal.
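That workflow might look like this as a sketch; the container name matches the compose.yaml above, and the package name is hypothetical:

```shell
# Open a shell in the running API container.
docker exec -it mindful-drinking-api bash

# Inside the container: install the new package, then regenerate
# requirements.txt. Because requirements.txt is bind mounted,
# the file on the host is updated too.
pip install some-package        # hypothetical package name
pip freeze > requirements.txt
exit
```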

For the UI (a Vue app) I am opting to use the Vite dev server on my host rather than a Docker container. A tool like nvm makes it easy to pin a specific Node.js version, and local and production have very different requirements: production needs the files built and served by nginx, while local development just needs the Vite dev server running so that it can reload and rebuild changed files. For now it made sense to keep it that way.
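A typical host-side workflow for the UI might look like the following; the script name assumes a standard Vite project setup and an .nvmrc file in the repo, so adjust to your package.json:

```shell
# Pin the Node.js version for this project (reads .nvmrc).
nvm use

# Install dependencies and start the Vite dev server with hot reload.
npm install
npm run dev
```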