
Increasing Video FPS running on CPU Using Threading #1411

Open
2 tasks done
dsaha21 opened this issue Jul 27, 2024 · 13 comments
Assignees
Labels
enhancement New feature or request

Comments

@dsaha21
Contributor

dsaha21 commented Jul 27, 2024

Search before asking

  • I have searched the Supervision issues and found no similar feature requests.

Description

I want to increase the FPS of a video running on my CPU-only system. I tested with a few annotated and object-tracking videos. Even when I run the frames without passing them through the model, the FPS is still low, so it comes out even lower when they pass through YOLO or any other model.

The code snippet I am using is

[Screenshot: VideoSpeed1 — the code snippet, not reproduced here]

So, with this method, running the plain frames without a model, I am getting something like the following:

[Screenshot: VideoSpeed2 — the resulting FPS output, not reproduced here]

With supervision's normal frame generator, the FPS is around 1-10 at most.
With threading, it increases to a much higher value.
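
For reference, here is a minimal sketch of the baseline measurement using supervision's plain frame generator on a local video; the file name below is just a placeholder, and no model is involved:

import time

import supervision as sv

SOURCE_VIDEO = "people-walking.mp4"  # placeholder path

frame_count = 0
start = time.time()
for frame in sv.get_video_frames_generator(source_path=SOURCE_VIDEO):
    frame_count += 1  # no model here, just raw frame reading
elapsed = time.time() - start
print(f"{frame_count} frames in {elapsed:.1f} s -> {frame_count / elapsed:.1f} FPS")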

Use case

As the numbers show, there is a significant improvement with threading. I was wondering if we could add a MainThread class to the supervision utils alongside sv.VideoInfo, or add an entirely new class, so that frames processed on CPU can reach such FPS. Let me know if we can handle such a case. I can share the Python file on Drive if necessary.

Thanks

Additional

No response

Are you willing to submit a PR?

  • Yes I'd like to help by submitting a PR!
@dsaha21 dsaha21 added the enhancement New feature or request label Jul 27, 2024
@yeldarby
Contributor

Have you tried InferencePipeline from our other open source repo? It handles multithreading for video and can even handle processing multiple streams concurrently.

@dsaha21
Contributor Author

dsaha21 commented Jul 28, 2024

Hi @yeldarby, let me give InferencePipeline a try. If it's successful, I will close the issue.

Thanks for the help 👍

@SkalskiP
Collaborator

Hi @dsaha21, get_video_frames_generator was meant to be a very simple utility. I agree with @yeldarby: if you want higher FPS throughput, InferencePipeline is for you. Also, are you sure you got 0.17 FPS? That seems super low.

@dsaha21
Contributor Author

dsaha21 commented Jul 29, 2024

Hi @SkalskiP, yes, it is actually very slow. I am trying to resize the frames and use InferencePipeline as mentioned above. I will let you know if it runs at a good FPS.

Thank you :)

@Sapienscoding

Sapienscoding commented Oct 15, 2024

Hi @dsaha21, were you able to fix it? If not, I can work on this and submit a pull request.

@dsaha21
Contributor Author

dsaha21 commented Oct 15, 2024

Hi @Sapienscoding, you can continue with this; I really did not get the time to keep working on it.

I opened the issue after reading an article posted by PyImageSearch about speeding up FPS using threading. However, before continuing with threading, you can go through this: https://inference.roboflow.com/using_inference/inference_pipeline/

I hope it will solve the problem.

@LinasKo
Collaborator

LinasKo commented Oct 15, 2024

Hi @Sapienscoding 👋

Great to see you're eager to help us out! I'm assigning this to you.

@Sapienscoding

Hi @dsaha21, what did you change, and where, to see the improvement in frame rate?

@dsaha21
Contributor Author

dsaha21 commented Oct 18, 2024

Hi @Sapienscoding, you can follow these steps:

1. pip install inference
2. pip install inference-gpu (if you have an NVIDIA GPU, you can use this to accelerate your inference)

3. # import the InferencePipeline interface
    from inference import InferencePipeline
    # import a built-in sink called render_boxes (sinks are the logic that happens after inference)
    from inference.core.interfaces.stream.sinks import render_boxes
    
    api_key = "YOUR_ROBOFLOW_API_KEY"
    
    # create an inference pipeline object
    pipeline = InferencePipeline.init(
        model_id="yolov8x-1280", # set the model id to a yolov8x model with in put size 1280
        video_reference="https://storage.googleapis.com/com-roboflow-marketing/inference/people-walking.mp4", # set the video reference (source of video), it can be a link/path to a video file, an RTSP stream url, or an integer representing a device id (usually 0 for built in webcams)
        on_prediction=render_boxes, # tell the pipeline object what to do with each set of inference results by passing a function
        api_key=api_key, # provide your roboflow api key for loading models from the roboflow api
    )
    # start the pipeline
    pipeline.start()
    # wait for the pipeline to finish
    pipeline.join()

Please try this on Google Colab first. If you have a Roboflow API key, that is the best option; otherwise, download a model manually, e.g. yolov8n.pt for basic object detection. Then start the inference pipeline and test the FPS.

@LinasKo I just wanted to check: if we don't have a Roboflow API key, is manually downloading the model, as described above, the correct thing to do? Please let me know. Then @Sapienscoding can follow the above steps.

@LinasKo
Collaborator

LinasKo commented Oct 18, 2024

There is a set of models that do not need an API key:
https://inference.roboflow.com/quickstart/aliases/

All others will need a key.

@dsaha21, you gave a very good example of using InferencePipeline, but it doesn't provide a way to speed up our frame processing on CPU, especially if we choose not to use inference. You mentioned experimenting with threading - do you still have an example that produced an improvement?

@dsaha21
Contributor Author

dsaha21 commented Oct 18, 2024

@LinasKo Actually, I did not test the algorithm using threading, very sorry. I will try testing it. Until then, I think we should give @Sapienscoding a chance to try, since he said he will take on this issue.

Once the testing is done, I will post the FPS improvement ASAP.

@dsaha21
Contributor Author

dsaha21 commented Oct 18, 2024

Hi @dsaha21, what did you change, and where, to see the improvement in frame rate?

@Sapienscoding
My plan was to first check the normal FPS by running with the supervision library.

Then I would use a Queue data structure together with threading. There would be a class named VideoStream containing start(), stop(), update(), and read() functions. After that, run the class on a video stored on the system and check the FPS on CPU.

This was my plan. Have you tried something like this?
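
For reference, here is a rough sketch of that plan: a hypothetical implementation loosely following the PyImageSearch pattern, where the class and method names come from the description above, not from an existing supervision API.

from queue import Queue
from threading import Thread

import cv2

class VideoStream:
    def __init__(self, path, queue_size=128):
        self.capture = cv2.VideoCapture(path)
        self.frames = Queue(maxsize=queue_size)
        self.stopped = False

    def start(self):
        # read frames in a background daemon thread so the main thread never blocks on I/O
        Thread(target=self.update, daemon=True).start()
        return self

    def update(self):
        # keep the queue filled until the video ends or stop() is called
        while not self.stopped:
            grabbed, frame = self.capture.read()
            if not grabbed:
                break
            self.frames.put(frame)  # blocks when the queue is full, applying back-pressure
        self.frames.put(None)       # sentinel: no more frames
        self.capture.release()

    def read(self):
        # returns the next frame, or None when the stream has ended
        return self.frames.get()

    def stop(self):
        self.stopped = True

# usage: decoding happens in the background while the main thread does the per-frame work
stream = VideoStream("people-walking.mp4").start()  # placeholder path
while True:
    frame = stream.read()
    if frame is None:
        break
    # ... run the model / annotators on `frame` here ...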

@Sapienscoding

@dsaha21 I'm thinking of using asyncio to process frames during inference. However, as @LinasKo mentioned, can you test whether you're getting any improvement in FPS?
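
As a hypothetical sketch of that idea (not an agreed design): the blocking decode and inference calls can be pushed to worker threads with asyncio.to_thread, so the next frame is decoded while the previous one is still being processed. run_model and the video path are placeholders.

import asyncio

import cv2

def run_model(frame):
    # placeholder for the actual (blocking) inference call
    return frame

async def process(path):
    capture = cv2.VideoCapture(path)
    pending = None
    while True:
        # decoding this frame overlaps with the previous frame's inference task
        grabbed, frame = await asyncio.to_thread(capture.read)
        if not grabbed:
            break
        if pending is not None:
            await pending  # wait for the previous frame's inference to finish
        pending = asyncio.create_task(asyncio.to_thread(run_model, frame))
    if pending is not None:
        await pending
    capture.release()

asyncio.run(process("people-walking.mp4"))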
