Why do the test results vary greatly (fluctuations of up to 10 points) when using the pre-trained model you provided, without any changes to the code? #4

Open
one23sunnyQQ opened this issue Mar 8, 2021 · 3 comments

Comments

@one23sunnyQQ

Hi, thank you for sharing your code. Why is it that, using the pre-trained model you provided and without any changes to the code, the test results vary greatly, with fluctuations of up to 10 points? May I ask how the test results reported in your paper were determined as the final result when the performance fluctuates so much? Looking forward to your reply.

@dvornikita
Owner

Hi, I am sorry for the inconvenience. I just realized that, while debugging the code before the release, I set the number of test episodes to 5 (instead of 600) and forgot to set it back to 600 before pushing the code. This caused the large fluctuation in the final performance.

This is now fixed and you can update the code and run it again.
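For readers hitting the same thing, here is why so few episodes causes such large swings: the reported number is a mean over test episodes, and its 95% confidence interval shrinks roughly as 1/sqrt(N). Below is a minimal, self-contained sketch (not the repository's test script; the mean accuracy of 0.75 and per-episode standard deviation of 0.10 are made-up illustrative numbers) showing how much wider the interval is with 5 episodes than with 600.

import numpy as np

# Illustrative simulation of episodic evaluation: each "episode" yields one
# accuracy value, and the final score is the mean over all episodes.
rng = np.random.default_rng(0)

def mean_and_ci95(n_episodes):
    """Mean accuracy (%) and 95% confidence interval over n_episodes episodes."""
    accs = rng.normal(loc=0.75, scale=0.10, size=n_episodes).clip(0.0, 1.0)
    mean = accs.mean() * 100
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(n_episodes) * 100
    return mean, ci95

for n in (5, 600):
    m, ci = mean_and_ci95(n)
    print(f"{n:4d} episodes: {m:5.2f} +- {ci:.2f}")

# The interval scales as 1/sqrt(N), so 5 episodes gives an interval roughly
# sqrt(600/5) ~= 11x wider than 600 episodes, i.e. run-to-run swings of
# several points are expected with only 5 episodes.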

@tangminfang

Hi, I have the same problem as you. I ran test.py with the author's latest code, and the gap from the officially reported numbers was still much larger than expected. Is there any update on your question? Did you solve it? Thanks in advance! ^^

@sudarshan1994

Yeah I seem to have gotten similar results

100%|█████████████████████████████████████████████████████████████████████████████████████████| 600/600 [10:13<00:00, 1.02s/it]
model \ data      SUR
ilsvrc_2012       47.50 +- 0.00
omniglot          88.79 +- 0.00
aircraft          88.75 +- 0.00
cu_birds          85.56 +- 0.00
dtd               76.67 +- 0.00
quickdraw         87.27 +- 0.00
fungi             87.50 +- 0.00
vgg_flower        88.00 +- 0.00
traffic_sign      50.87 +- 0.00
mscoco            59.23 +- 0.00

Any help debugging this would be greatly appreciated, thanks!
