Order of things:

1. training
   - Train the model
   - Export it (sketch below)
2. inference_test
   - Sanity check the exported model (sketch below)
   - Plot the detections onto test images
3. tensorflow_serving_test
   - Use the exported model in a TensorFlow Serving container
   - Send inference requests to the container (sketch below)
     - Prepare input
     - Process output
4. kserve_test
   - Use the exported model in KServe
   - Send inference requests to the KServe InferenceService (sketch below)
     - Prepare input
     - Process output
5. prediction_backend
   - Use the exported model in a Flask app
   - Send inference requests to the Flask app from an HTML page (sketch below)
6. ovm_test
   - Use the exported model in OpenVINO Model Server (sketch below)
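
A minimal sketch of the export step in training, assuming the trained model is a `tf.keras` model; the checkpoint path and export directory are placeholders, and the numbered subdirectory follows the layout TensorFlow Serving expects:

```python
import tensorflow as tf

# Hypothetical checkpoint path; the real one comes out of the training run.
model = tf.keras.models.load_model("checkpoints/latest")

# TensorFlow Serving and KServe expect a numbered version subdirectory
# under the model root, hence the trailing "/1".
tf.saved_model.save(model, "exported_model/1")
```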
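
A sketch of the inference_test sanity check, assuming the exported model exposes the usual TF object detection outputs (`detection_boxes` in normalised coordinates, `detection_scores`); the paths and the 0.5 score threshold are placeholders:

```python
import matplotlib.pyplot as plt
from matplotlib import patches
import numpy as np
from PIL import Image
import tensorflow as tf

model = tf.saved_model.load("exported_model/1")          # hypothetical export path
infer = model.signatures["serving_default"]

image = np.array(Image.open("test_images/example.jpg"))  # hypothetical test image
outputs = infer(tf.convert_to_tensor(image)[tf.newaxis, ...])

boxes = outputs["detection_boxes"][0].numpy()            # [ymin, xmin, ymax, xmax], normalised
scores = outputs["detection_scores"][0].numpy()

# Plot the detections onto the test image.
fig, ax = plt.subplots()
ax.imshow(image)
height, width = image.shape[:2]
for box, score in zip(boxes, scores):
    if score < 0.5:                                      # arbitrary confidence threshold
        continue
    ymin, xmin, ymax, xmax = box
    ax.add_patch(patches.Rectangle(
        (xmin * width, ymin * height),
        (xmax - xmin) * width,
        (ymax - ymin) * height,
        fill=False,
        edgecolor="red",
    ))
plt.show()
```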
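
A sketch of a request to the TensorFlow Serving container over its REST API, assuming the container publishes port 8501 and serves the model under the name detector (both are assumptions; adjust them to the actual tensorflow_serving_test setup):

```python
import numpy as np
from PIL import Image
import requests

# Prepare input: the REST API takes JSON, so the image becomes nested lists.
image = np.array(Image.open("test_images/example.jpg"))  # hypothetical test image
payload = {"instances": [image.tolist()]}

response = requests.post(
    "http://localhost:8501/v1/models/detector:predict",
    json=payload,
)
response.raise_for_status()

# Process output: one entry per instance; the keys depend on the exported
# signature (detection_boxes, detection_scores, ... for a detection model).
prediction = response.json()["predictions"][0]
print(sorted(prediction.keys()))
```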
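
A sketch of the same request against the KServe InferenceService. KServe's v1 protocol mirrors the TensorFlow Serving REST API, but the request usually goes through the cluster ingress with the InferenceService hostname in the Host header; the ingress address, hostname, and model name below are placeholders:

```python
import numpy as np
from PIL import Image
import requests

image = np.array(Image.open("test_images/example.jpg"))   # hypothetical test image
payload = {"instances": [image.tolist()]}

response = requests.post(
    "http://INGRESS_HOST:INGRESS_PORT/v1/models/detector:predict",  # placeholder ingress address
    headers={"Host": "detector.my-namespace.example.com"},          # placeholder InferenceService hostname
    json=payload,
)
response.raise_for_status()
print(sorted(response.json()["predictions"][0].keys()))
```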
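
A minimal sketch of the prediction_backend idea: a Flask endpoint that loads the exported SavedModel and answers requests posted from an HTML page; the endpoint name, form field, and model path are assumptions:

```python
import io

from flask import Flask, jsonify, request
import numpy as np
from PIL import Image
import tensorflow as tf

app = Flask(__name__)
model = tf.saved_model.load("exported_model/1")            # hypothetical export path
infer = model.signatures["serving_default"]


@app.route("/predict", methods=["POST"])
def predict():
    # Assumes the HTML page posts the image as multipart/form-data under "image".
    image = Image.open(io.BytesIO(request.files["image"].read()))
    tensor = tf.convert_to_tensor(np.array(image))[tf.newaxis, ...]
    outputs = infer(tensor)
    # Convert tensors to plain lists so the response is JSON-serialisable.
    return jsonify({name: value.numpy().tolist() for name, value in outputs.items()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```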
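
OpenVINO Model Server exposes a TensorFlow Serving-compatible REST API, so the ovm_test request looks like the TensorFlow Serving one; only the port changes to whatever --rest_port the container was started with (8000 below, like the model name, is an assumption):

```python
import numpy as np
from PIL import Image
import requests

image = np.array(Image.open("test_images/example.jpg"))   # hypothetical test image
payload = {"instances": [image.tolist()]}

response = requests.post(
    "http://localhost:8000/v1/models/detector:predict",   # port set via --rest_port
    json=payload,
)
response.raise_for_status()
print(sorted(response.json()["predictions"][0].keys()))
```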

TODO:

- Inference with KServe takes too long and needs too much CPU
  - Inference using a 100x83 image takes 1.5s with 5 CPUs and 12Gi memory
  - Inference using a 960x540 image takes ~4.5s with 5 CPUs and 12Gi memory
  - Inference using a 6000x8000 image (the size of an image posted from a phone) takes ~100s with 5 CPUs and 12Gi memory
  - With the CPU limit set to 1, durations are ~2.5x longer
  - When TensorFlow Serving is used in a Docker container with no memory/CPU limit, durations are much shorter