
The running results of Anakin on GPU are not stable #474

Open
Weyne168 opened this issue Nov 6, 2018 · 3 comments

Comments

@Weyne168

Weyne168 commented Nov 6, 2018

We use Anakin to run a model converted from Caffe, and we found that Anakin's output is not stable compared with Caffe's.

With the same input, pycaffe produces a perfectly stable output on an NVIDIA GPU, but when we run the same model with Anakin on the same GPU, the output often changes.

Sometimes Anakin's output is the same as Caffe's, but more often it varies and is incorrect.

What could be causing this?
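For illustration, here is a minimal sketch of the kind of stability check we mean on the Caffe side (the prototxt/caffemodel paths and the blob names are placeholders, not from our actual model): the same random input is pushed through pycaffe several times, the outputs are compared, and the first output is saved so it can later be compared against Anakin.

```python
import numpy as np
import caffe

caffe.set_mode_gpu()

# Placeholder paths and blob names; our real model differs.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

np.random.seed(0)
x = np.random.rand(*net.blobs['data'].data.shape).astype(np.float32)

outputs = []
for _ in range(10):
    net.blobs['data'].data[...] = x
    out = net.forward()['prob'].copy()  # copy: forward() reuses the output buffer
    outputs.append(out)

# With pycaffe every run is bit-identical on our GPU.
for o in outputs[1:]:
    print('max abs diff vs first run:', np.abs(o - outputs[0]).max())

# Save the reference so it can be compared against Anakin's output.
np.save('caffe_reference.npy', outputs[0])
```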

@MyPandaShaoxiang

Can you provide the model name?

@Weyne168
Author

Weyne168 commented Dec 3, 2018

Our model is similar to ResNet-18: it has only conv and fc layers, and its activation is PReLU rather than ReLU.
I found that graph.Optimize may have a bug.
Our model does not contain any BN or deconv layers, but the fusion log prints DeconvRelu and ConvBatchnormScaleRelu.
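For reference, a small sketch of how the layer types in the deploy prototxt can be confirmed (the path below is a placeholder): it parses the file with the Caffe proto and counts the distinct layer types, which for our model should show only Convolution, InnerProduct and PReLU, with no BatchNorm, Scale or Deconvolution layers.

```python
from collections import Counter

from google.protobuf import text_format
from caffe.proto import caffe_pb2

# Placeholder path to the deploy prototxt that is fed into the Anakin converter.
net = caffe_pb2.NetParameter()
with open('deploy.prototxt') as f:
    text_format.Merge(f.read(), net)

# Count layer types; with no BatchNorm/Scale/Deconvolution layers present,
# the ConvBatchnormScaleRelu and DeconvRelu fusion patterns should not apply.
print(Counter(layer.type for layer in net.layer))
```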

@MyPandaShaoxiang

Thanks for your issue. graph.Optimize checks all of the fusion patterns registered in the code. When the model does not contain the layers that a pattern requires, nothing is done for that pattern, but our log still prints the process info. The problem may be caused by PReLU; you can check it in saber/funcs/cuda/saber_conv, and we will check it too.
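One simple way to narrow this down (assuming each Anakin run's output is dumped to a .npy file with the same shape as the Caffe reference saved above; the file names are hypothetical) is to compare repeated Anakin runs against the Caffe output element-wise; if the error changes from run to run, the non-determinism is inside Anakin's kernels rather than in the converted weights.

```python
import numpy as np

# Hypothetical dump files: one Caffe reference and several Anakin runs on the same input.
ref = np.load('caffe_reference.npy')
for i in range(10):
    out = np.load('anakin_run_%d.npy' % i)
    diff = np.abs(out - ref)
    print('run %d: max abs diff %.6g, mean abs diff %.6g' % (i, diff.max(), diff.mean()))
```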
