Frigate on Docker, in Windows Subsystem for Linux (WSL) #4375
Replies: 20 comments 51 replies
-
Thanks for the post! Have you been able to get your GPU passed in for hardware acceleration? That's another important part 👍
-
This is awesome! Has anyone done this with an Intel GPU and a Google Coral yet?
-
Hello, I have an issue in my docker-compose file but can't find where the problem is. Can you help me? version: "3.9"
-
There is also the possibility of creating a Hyper-V Debian VM and using DDA to pass through an entire PCIe Coral device. Result of the DDA survey (powershell survey-dda.ps1):
My devices in docker compose:
This is the table in the frigate/system tab:
The VM is set up to use 10 processors. ffmpeg here results in only 2% CPU usage on the hypervisor. vainfo in the frigate/system tab returns:
-
What's unfortunate is that GPU-PV doesn't play nice with DDA in Hyper-V VMs. I'm hoping that WSL getting VAAPI might allow me to use the Intel iGPU in a Hyper-V VM so I can use hardware-accelerated encode/decode. I just want one computer that can do all of the things! Someday.
-
Hello, when running "docker-compose up -d" in WSL Ubuntu, I got the following:
How do I modify the volume paths in my docker-compose.yml to avoid this error? Any help would be appreciated. Host OS is Win11 22H2, and the Docker Desktop version is v4.24.0. My docker-compose.yml: version: "3.9"
-
Hi there, I'm getting the error "not a valid Windows path". PS D:\docker\frigate> docker-compose up -d
Seems like I can't use
-
I have a problem where the UI keeps loading on port 5000. 2023-11-16 09:20:33 Traceback (most recent call last): And I am pretty sure about the path of my config file.
-
Thank you for these comprehensive instructions. Saved me many hours! I am running Windows 11 / WSL2 and have had success passing the Coral USB to WSL2 using the command:
The problem I'm running into is that this command needs to be given every time the machine starts up, or the USB device is not passed through to WSL (and so far I've failed to get this to work as a Task Scheduler task). Is this to be expected? Any ideas on how to handle this?
-
I have followed the directions for the Coral TPU and can see the device in both Windows and WSL (the VID is correct; it shows as Global Unichip Corp in WSL), but upon container start Frigate tells me it can't find the TPU. Any ideas for troubleshooting? 2024-02-15 13:16:53.175712047 [2024-02-15 13:16:49] frigate.detectors.plugins.edgetpu_tfl INFO : Attempting to load TPU as usb
-
Thanks a lot for this guide. I have followed it to get Nvidia hwaccel for decode/encode, which works like a charm. However, I struggle to get detector hardware support working with TensorRT, though everything seems to indicate that the GPU is available in WSL. The Frigate log has no errors and talks about the GPU: However, the system tab in Frigate says no GPU detected, yet says TensorRT is used, and the inference speed is better than with the default CPU but not on par with a 1070 GPU. Does anyone have any pointers to what is going on? Any more logs I can look at?
-
Trying to set this up on an Intel system, but unfortunately intel_gpu_top in WSL returns "not found".
-
Intel iGPU won't work. Even though the latest WSL2 version already supports iGPU VAAPI on the host, the Frigate Docker container under WSL2 still does not.
-
Tried the steps and it's working great; I haven't yet tried the GPU passthrough for hardware acceleration. I have a current Frigate instance in an LXC container that seems to use significantly less CPU when decoding. I'm able to assign 3 cores to the LXC container and it barely hits 50% usage, whereas I have 4 cores assigned to the Windows VM and it's capping at 99% usage with the exact same settings. Is there something I'm doing differently?
-
For anyone who might have issues with hardware acceleration: I was getting ffmpeg errors the whole day. Turns out there is a problem with Nvidia driver version 555, which prevents GPU acceleration from working with Docker Desktop below 4.31.
-
I am able to connect to the web UI for a few seconds, then the container stops. Any help is greatly appreciated.
-
Dude, I just wanted to say thank you.
-
Hi, thanks for this. I am just struggling with getting the Nvidia GPU to work as a detector in WSL2 and Docker Desktop. This command shows error 404, and I also cannot reach the website: wget https://github.com/blakeblackshear/frigate/raw/master/docker/tensorrt_models.sh I do not see the file tensorrt_models.sh
-
No, it gets stuck and CPU usage goes through the roof.
I managed to fix it; not quite sure what I did.
…On Thu, 17 Oct 2024, 19:56 Nicolas Mowen, ***@***.***> wrote:
are you sure it is not just building the model?
-
Hi, any ideas?
-
I wanted to write something up to share with the HA community, as it's mentioned only briefly in the Installation docs for Frigate. Specifically:
I couldn't find much in the way of this, but I'll openly admit I could have just been searching in the wrong places. In any case, if you're like me with a computer running Windows, you may think to yourself "I can run Docker on Windows. Docker can interface with WSL. I should be able to pull off running Frigate." and I'm here to tell you that yes you can, and it's easier than you might think.
I've also been able to get Frigate running with Nvidia GPU TensorRT detectors, as well as the Google Coral USB. Granted, as of this current edit the Coral produces some errors in Frigate logs. It nevertheless works!
Anyway, let's get Frigate running in Windows.
Deploy and Configure WSL
There are two ways to get WSL going in Windows. If you want the fast path to WSL, open up Windows PowerShell (not x86, not ISE, just regular old blue icon Windows PowerShell) as an Admin and execute
wsl --install -d Ubuntu
This command will make WSL available and deploy Ubuntu. Want to see more about this approach? Check out the Microsoft Docs article on it. Want to go the slightly longer way? Read steps 1-3 to get started with WSL.
This might require a reboot once installed. I forget.
At this point, WSL/Linux are running on Windows. Neat! Onto Docker.
Deploy and Configure Docker on Windows (v4.13.1 at the time of this writing)
At this point, we have WSL running and we have Docker integrated with it. Finally, it's time to pull down Frigate and configure it.
Deploy and Configure Frigate
Confirmed working with versions:
BEFORE
AFTER
Again, this is telling the Frigate container in Docker how to access resources that exist in the WSL instance. In fact, you might recognize part of this as a UNC network path, because it is! Curious? Open up File Explorer in Windows and type in "\\wsl$". Doing so allows you to navigate into the WSL distro; just use "\\wsl$" with a single $ instead of the $$ listed above in the Docker Compose. As far as the rest of the Compose is concerned, configure everything else (password, ports, etc.) as you see fit. The USERNAME will be the account you configured with a password when you set up WSL a few steps back. You can also verify the path by using the UNC path to navigate into the directory and verify the name.
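The compose screenshots don't survive in this copy, so here's a hedged sketch of the shape described above. The image tag, ports, and paths are illustrative placeholders, not the author's exact file; note the doubled $$ so Compose doesn't treat $ as variable interpolation:

```yaml
version: "3.9"
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    image: blakeblackshear/frigate:stable   # placeholder tag; use the version you confirmed
    volumes:
      # UNC paths into the WSL distro; $$ escapes $ for Compose
      - \\wsl$$\Ubuntu\home\USERNAME\frigate\config.yml:/config/config.yml
      - \\wsl$$\Ubuntu\home\USERNAME\frigate\storage:/media/frigate
    ports:
      - "5000:5000"
```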
We need to now define a Frigate configuration file (config.yml). Let's head into the WSL path as defined above by opening Windows File Explorer and navigating into \\wsl$\Ubuntu\home\USERNAME\frigate. If you don't have a Frigate folder, now is a great time to create it. Then, head into that directory and create a new file called config.yml. At this point, you should be able to define a config.yml file that contains your MQTT server, cameras, etc. and everything that you see in the setup instructions, YouTube videos, and elsewhere.
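As a minimal illustrative sketch (the broker address, camera name, and RTSP URL below are placeholders of my own, not from the original post), a starting config.yml might look like:

```yaml
mqtt:
  host: 192.168.1.10          # placeholder: your MQTT broker address
cameras:
  front_door:                 # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.20:554/stream   # placeholder RTSP URL
          roles:
            - detect
            - record
```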
Now let's spin up the container using our Docker Compose file. Open up a PowerShell window (connected to your Windows host, not your WSL instance) if you don't already have one open (see the paragraph above step 1 on choosing the "right" PowerShell to open if you have questions). Navigate to the directory where your compose file is located. You can do this by typing "cd c:\folder\path" where c:\folder\path is the directory the file is stored in. Now type/paste in "docker-compose up -d". You should see a result returned such as "Running". Interested in what this command and other relevant Docker arguments mean? Check out https://docs.docker.com/engine/reference/commandline/compose_up/
Head back into Docker within Windows, and go to Containers. You should see the name of your folder the Docker Compose was located in and a sub item of "frigate" now running. At this point, Frigate is now running in Docker and all troubleshooting/subsequent configuration should follow any other threads you find online.
Hardware Acceleration
The other thing that might cross your mind with a Windows computer is taking advantage of a dedicated GPU within WSL. Fortunately, there are only a couple of things required:
wsl -l -v
and verifying you see "2" next to the distro you configured above.
Once you've met these requirements, open up your WSL command prompt by going to Start, typing "WSL" and opening up the command prompt. There, type "nvidia-smi". You should return a result that looks something like this:
In the above screenshot, I've highlighted and circled the ID for the GPU in the event you have multiple GPUs and want to pick and choose what gets passed into WSL.
Then in your Docker Compose file, you'll want to add the following section. You'll notice I commented out #device_ids, and instead opted for count: all. Count takes either an integer or the string of "all".
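The screenshot of that section is missing here; based on the description (device_ids commented out in favor of count: all), the standard Compose GPU reservation looks like the following sketch:

```yaml
services:
  frigate:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              # device_ids: ["0"]   # pin a specific GPU by the ID shown in nvidia-smi
              count: all            # takes an integer or the string "all"
              capabilities: [gpu]
```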
As an example, here's the Docker Compose file with that section added.
Finally, in your config.yml we'll need to add arguments for hardware acceleration. Per Frigate Documentation on Nvidia GPUs I'm just going to use h264. To see a list of supported codecs, in Docker, navigate into the Terminal for the Frigate container. Then use
ffmpeg -decoders | grep cuvid
to see a list of supported codecs. In my case, I have an Nvidia GPU so per documentation I can simply use "preset-nvidia-h264"
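In config.yml, that preset goes under the ffmpeg section, either globally or per camera, e.g.:

```yaml
ffmpeg:
  hwaccel_args: preset-nvidia-h264
```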
Again, we'll use "docker-compose up -d" at the Windows PowerShell command line in the directory the compose file is located to rebuild the container using the new compose that makes use of the GPU. Then in WSL, we'll type "nvidia-smi" once again, only this time we should see the ffmpeg process attached for each camera we have HW Accel arguments defined in our config.yml.
Limiting WSL Host Resources
If you've done the above, something you may notice after a couple days of use is that your Windows host has nearly maxed out its memory usage! This is because WSL will continue to consume resources as it needs them, so you will more than likely want to set a ceiling on what WSL can consume. Solving this is incredibly simple.
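The settings live in a file called .wslconfig in your Windows user profile (C:\Users\YOURUSER\.wslconfig); the limits below are example values of mine, not the author's exact numbers:

```ini
[wsl2]
memory=8GB    # cap WSL2 at 8 GB of RAM
processors=4  # cap WSL2 at 4 virtual processors
```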
In the above config, only the CPU/Memory for WSL is capped and everything after the # is a note per line. Want to know what else you can configure? Check out the following Microsoft Doc on Advanced Settings Configuration in WSL
At this point, either restart your Windows host or just restart WSL with PowerShell (as an Admin) for changes to take effect:
Restart-Service LxssManager
Using a Google Coral USB Detector
It should be noted that while the following works, as of this writing I am seeing sporadic errors in Frigate that result in either the detection service restarting or the container restarting. However, these events seemingly have not adversely affected recording, performance, or anything that jeopardizes Frigate's ability to function. From what I've seen, no more than 5 seconds of downtime occurs as a result. Secondly, it appears the Coral will not auto-attach as described in step 7 below; after a reboot of the Windows host the command must be executed again to re-establish connectivity. I suspect this can be automated and am working on doing so. I'll update this section if I can get that working.
With WSL2, it's possible to attach USB devices to your Windows host and pass them through to the WSL instance using the open source project known as usbipd-win that Microsoft is advertising as the recommended path forward. With a single USB Coral, my inferencing times average about 15-17ms vs. the 75-80ms I had with two CPUs.
But when it comes to setting this up, things seem to be ever so slightly different depending on whether you are running Windows 10 or Windows 11, per the Microsoft documentation.
The good news is that the Windows 10 setup is in fact pretty minimal:
usbipd wsl list
At this point, the Windows side of things is done. Now we need to configure the WSL instance so it can receive the USB Coral when we mount it. But it's worth calling out some conflicting documentation here and which one to choose.
sudo apt install linux-tools-generic hwdata
sudo update-alternatives --install /usr/local/bin/usbip usbip /usr/lib/linux-tools/*-generic/usbip 20
sudo apt install linux-tools-virtual hwdata
sudo update-alternatives --install /usr/local/bin/usbip usbip `ls /usr/lib/linux-tools/*/usbip | tail -n1` 20
Specifically, the very first line in those respective commands is the only problem I ran into. Note the very small difference of "generic" vs. "virtual". When I followed the Microsoft documentation, the command starts returning output before eventually throwing an error that the listed dependencies/packages can't be found. But following the usbipd-win WSL support documentation, everything works as expected. So I'm going to advise that you use the usbipd-win documentation's commands. To do that:
sudo apt install linux-tools-virtual hwdata
sudo update-alternatives --install /usr/local/bin/usbip usbip `ls /usr/lib/linux-tools/*/usbip | tail -n1` 20
Back in your Windows terminal (step 2, above), find the bus ID of the attached Coral device. If you aren't sure which is the Coral device: unplug it, run the command, plug the Coral in, run it again, and spot the difference.
(OPTIONAL) If you received a warning somewhere along the way in a Windows terminal about needing to use the bind and force commands, make sure you run those first, and from an administrative terminal in Windows no less. Run the following, where IDENTIFIERHERE is the bus ID of the Coral (for example, usbipd wsl bind --force --busid 4-3):
usbipd wsl bind --force --busid IDENTIFIERHERE
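For reference, a typical end-to-end sequence looks like the following sketch (the bus ID 4-3 is just an example, and the usbipd wsl subcommands reflect the older usbipd-win syntax this guide uses, so check your installed version):

```
# Windows (admin PowerShell): bind once, then attach after each host reboot
usbipd wsl bind --force --busid 4-3
usbipd wsl attach --busid 4-3

# Inside WSL: verify the Coral shows up (as Global Unichip Corp before first use)
lsusb
```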
From Windows
usbipd wsl list
From WSL
At this point, modify your Frigate configuration to use the Google Coral detector(s) per Frigate documentation of Edge-TPU Detector. Restart Frigate with the new config, head over to Logs/System to verify Frigate has picked up your Coral and you're done!
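Per that Frigate documentation, a single USB Coral detector entry in config.yml looks like:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
```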
Using Nvidia GPUs and TensorRT Detectors
With the advent of Frigate 0.12, not only can you use your Nvidia GPU for hardware acceleration as outlined above, but also as your detector. When I began using Frigate, I allocated two CPUs to it; they each averaged about 75-80ms inferencing time. With an Nvidia GTX 1080, inferencing time now averages 5-7ms. Before we get started: if Frigate is currently running in Docker/WSL, you'll want to stop the container before starting model training, much less building the needed TensorRT-specific container for Frigate. Secondly, if this is your only GPU and you enjoy gaming, you might experience decreased frame rates and less-than-optimal gaming performance while the Frigate container is utilizing GPU inferencing.
At this point, if you use File Explorer you can go into WSL and verify the folder and tensorrt_models.sh are both in your \\wsl$\Ubuntu\home\YOURUSERNAMEHERE directory.
Let's head back to your WSL command line. Then we'll run the final command to begin training
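The exact command is in the Frigate 0.12 documentation; from my reading of those docs it is roughly of this shape (the model name, mount path, and TensorRT image tag below are my assumptions, so double-check them against the docs for your release):

```
docker run --gpus=all --rm -it \
  -v ~/trt-models:/tensorrt_models \
  -e YOLO_MODELS=yolov7-tiny-416 \
  nvcr.io/nvidia/tensorrt:22.07-py3 \
  /tensorrt_models.sh
```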
If your card doesn't support FP16 operations, you might see some warnings generated from running the above, in which case you can just delete the files and start over. You can do that with:
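The delete step itself isn't shown in this copy; assuming the models were generated into a trt-models folder in your WSL home directory (as in the build sketch above), it amounts to:

```shell
# Remove the generated model files so the build can start clean
rm -rf ~/trt-models
```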
Let's try it one more time, with a single additional argument per the documentation of "-e USE_FP16=False"
Once you return to a blinking command line, your models are trained and your GPU is ready to go. We just need to do some slight alteration of the Docker Compose from above.
First, we need to use a TensorRT-specific image of Frigate
BEFORE
AFTER
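The BEFORE/AFTER screenshots are missing here; assuming the stock image was the stable tag, the change is to point at the TensorRT build of the same release (verify the exact tag against the Frigate docs for your version):

```yaml
# BEFORE
image: blakeblackshear/frigate:stable
# AFTER
image: blakeblackshear/frigate:stable-tensorrt
```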
Second, we need to introduce a new line item to our Volumes section:
BEFORE
AFTER
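Again the screenshots are gone; assuming the trt-models folder sits in your WSL home directory, the new volume line (using the same \\wsl$$ convention as the rest of this compose) would look like:

```yaml
volumes:
  - \\wsl$$\Ubuntu\home\USERNAME\frigate\config.yml:/config/config.yml
  # NEW: expose the generated TensorRT models to the container
  - \\wsl$$\Ubuntu\home\USERNAME\trt-models:/trt-models
```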
Then we'll use "docker-compose up -d" at the Windows PowerShell command line in the directory the compose file is located to rebuild the container using the new compose that makes use of the GPU and TensorRT instance of Frigate. Once Frigate starts, you'll want to update your config.yml/detectors section per the documentation. Once you've done this, save your config, and then restart Frigate to finalize your flip towards TensorRT based inferencing.
You can verify things are running smoothly by heading into Logs and ensuring you don't see errors. Finally, check out System to see
Just wanted to throw this out there and hope someone finds it helpful.