# How to properly ship and deploy your machine learning model

A practical guide with FastAPI, Docker, and GitHub Actions.

## Project setup

1. Create the virtual environment:

   ```shell
   virtualenv /path/to/venv --python=/path/to/python3
   ```

   You can find the path to your python3 interpreter with the command `which python3`.

2. Activate the environment and install the dependencies:

   ```shell
   source /path/to/venv/bin/activate
   pip install -r requirements.txt
   ```

3. Launch the service:

   ```shell
   uvicorn api.main:app
   ```
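The dependency file referenced above will look roughly like the sketch below. FastAPI and uvicorn follow from the commands in this guide; any ML libraries (here scikit-learn, as an illustrative assumption) depend on the actual model:

```
# Sketch of requirements.txt — the repo's real file pins exact versions.
fastapi
uvicorn
scikit-learn  # assumption: replace with whatever your model needs
```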

## Posting requests locally

When the service is running, open the interactive API docs at

```
http://127.0.0.1:8000/docs
```

or send a request from the command line with `curl`.
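The exact endpoint and request schema depend on the model being served. As a hypothetical sketch, assuming the service exposes a `POST /predict` endpoint that accepts a JSON body with a `data` field holding a batch of feature rows:

```shell
# Hypothetical request — adjust the path and payload to your API's schema.
curl -X POST "http://127.0.0.1:8000/predict" \
     -H "Content-Type: application/json" \
     -d '{"data": [[0.1, 0.2, 0.3, 0.4]]}'
```

The interactive docs at `/docs` show the real schema and let you try requests from the browser.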

## Deployment with Docker

1. Build the Docker image:

   ```shell
   docker build --file Dockerfile --tag fastapi-ml-quickstart .
   ```

2. Run the Docker image:

   ```shell
   docker run -p 8000:8000 fastapi-ml-quickstart
   ```

3. Start a shell inside the image:

   ```shell
   docker run -it --entrypoint /bin/bash fastapi-ml-quickstart
   ```
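The `docker build` command above expects a `Dockerfile` in the repo. A minimal sketch of what it might contain, assuming the uvicorn entrypoint from the setup section (the repo's actual Dockerfile may differ):

```dockerfile
# Hypothetical Dockerfile matching the build/run commands above.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Binding to `0.0.0.0` (rather than the default `127.0.0.1`) is what makes the service reachable through the `-p 8000:8000` port mapping.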

## docker-compose

1. Launch the service:

   ```shell
   docker-compose up
   ```

   This command looks for a `docker-compose.yaml` configuration file in the current directory. To use another configuration file, specify it with the `-f` switch, as the testing step below does.

2. Run the tests:

   ```shell
   docker-compose -f docker-compose.test.yaml up --abort-on-container-exit --exit-code-from fastapi-ml-quickstart
   ```
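For reference, a minimal `docker-compose.yaml` consistent with the `docker run` command above might look like this (an illustrative sketch; the service name is assumed to match the image tag used earlier):

```yaml
# Hypothetical docker-compose.yaml mirroring `docker run -p 8000:8000`.
version: "3.8"
services:
  fastapi-ml-quickstart:
    build: .
    ports:
      - "8000:8000"
```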

Reference: https://towardsdatascience.com/how-to-properly-ship-and-deploy-your-machine-learning-model-8a8664b763c4