image processing scripts #50
Conversation
Update: I forgot that I had already added a key to write forward a minute in parse_videos.py, but I also added a key to write until the end, in case the video is much longer and you want to have a coffee and come back later to find the synchronized videos written out.
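Roughly, this amounts to an OpenCV key loop like the sketch below (a minimal illustration only; the key bindings, filenames, and structure here are assumptions for the example, not the actual parse_videos.py code):

```python
import cv2

cap = cv2.VideoCapture("cam1.mp4")  # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("out.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

def write_frames(n):
    """Write up to n frames from the current position."""
    for _ in range(n):
        ok, frame = cap.read()
        if not ok:
            break  # reached the end of the video
        writer.write(frame)

ok, frame = cap.read()
while ok:
    cv2.imshow("parse", frame)
    key = cv2.waitKey(0) & 0xFF
    if key == ord("m"):        # write the next minute of frames
        write_frames(int(fps * 60))
    elif key == ord("e"):      # write through to the end of the video
        write_frames(10 ** 9)
    elif key == ord("q"):      # quit
        break
    ok, frame = cap.read()

cap.release()
writer.release()
cv2.destroyAllWindows()
```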
Thank you for your contribution, I really appreciate it!
Sorry it's been taking a bit longer to get started on this. I've allocated time to it on Wednesday and will look through the code then. @alexrockhill If you have some time, do you have some sample videos that you could share for this code? Thank you again!
No problem, these videos should work with this command.
*Due to space requirements, the files are quite small, but the action of interest continues in the base video and could be aligned further in time than right after the clap.
Hi @alexrockhill, I have two main high-level comments/questions:
Additionally, I didn't look through the code in detail, but I noticed you used [...]. If you're interested, I could see an automatic synchronization script based on audio being part of anipose, with a user option for using the GUI that you designed on a per-video basis. It would require quite a bit of change to your code, though, so I'm not sure if you'd like to do that... Either way, you should showcase these scripts somewhere!
I have a script to sync with the audio, but I haven't done a correlation-based approach, and using the max volume for a clap led to a lot of false positives that were a pain to sort out. I can share the audio-matching scripts if that would help; I just didn't put in enough effort to make it fully automatic, especially since the audio in my case was just a clap, which varied depending on my execution, rather than a computer-generated sound. For computer-generated sounds, I think audio matching should work perfectly, especially when compared with timestamps for when the computer generated the sound. Feel free to change the scripts however you want. I would be happy if anipose had this functionality integrated, so however that happens is more than okay with me.
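For illustration, the two approaches look roughly like this (a minimal sketch with made-up filenames; it assumes the audio tracks have already been extracted to mono WAV files at the same sample rate, e.g. with `ffmpeg -i cam1.mp4 -ac 1 -ar 44100 cam1.wav`):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

def clap_time_by_peak(wav_path):
    """Naive approach: treat the loudest sample as the clap.
    This is the max-volume strategy that produced false positives
    for hand claps of varying loudness."""
    rate, audio = wavfile.read(wav_path)  # assumes mono audio
    return np.argmax(np.abs(audio)) / rate  # seconds into the recording

def lag_by_correlation(wav_a, wav_b):
    """Correlation-based approach: cross-correlate the two tracks and
    take the lag at the peak. Returns the number of seconds by which
    the sound occurs later in wav_a than in wav_b. More robust for a
    repeatable, computer-generated sound."""
    rate_a, a = wavfile.read(wav_a)
    rate_b, b = wavfile.read(wav_b)
    assert rate_a == rate_b, "resample first if the rates differ"
    corr = correlate(a.astype(float), b.astype(float), mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / rate_a

print(lag_by_correlation("cam1.wav", "cam2.wav"))
```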
Hey @lambdaloop, I wrote a script for a package I maintain that may do what you want for multiple videos synchronized by a computer: https://alexrockhill.github.io/pd-parser/dev/auto_examples/plot_find_audio_events.html#sphx-glr-auto-examples-plot-find-audio-events-py. I'm not sure what you think the most common use case is, but I had video that was synchronized by hand as well as video synchronized by the computer. The GUI in parse_videos.py worked well for me for the hand-synchronized video, and the automated version in the example worked best for the one with the computer-generated sound. Whatever you want to do for anipose is fine by me; I just thought that since I was writing some of this anyway, it might help other people using the same process.
Addresses #47.
These aren't integrated into the anipose CLI yet, but I thought I would ask before doing that so that it's done the right way.
This PR was motivated by the lack, as far as I could find, of good open-source image-processing tools to easily take several large videos (three, in my case) and synchronize them, especially while storing metadata about where they came from. I looked at OpenShot and a few other projects, but they didn't quite do this; after a few tries, I found a pretty good process with a few Python scripts using ffmpeg and OpenCV.
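As a rough idea of the trimming step in that process (illustrative only; the offsets and filenames below are made up, and the actual scripts in this PR are more involved):

```python
import json
import subprocess

# hypothetical per-camera start offsets in seconds, e.g. from the clap
offsets = {"cam1.mp4": 3.20, "cam2.mp4": 1.85, "cam3.mp4": 0.00}

for src, start in offsets.items():
    out = "synced_" + src
    # -ss before -i seeks to the offset; -c copy avoids re-encoding
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-i", src, "-c", "copy", out],
        check=True,
    )

# keep a record of where each synchronized clip came from
with open("sync_metadata.json", "w") as f:
    json.dump({"source_offsets_sec": offsets}, f, indent=2)
```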
There are three scripts: