Hello,
I'm trying to use this method on a vision transformer model (model = torchvision.models.vit_b_16(); the first several layers are shown in the image below). I read the documentation, and I believe I need to write and use new rules: the model contains layer types that have no existing rule class, and its submodules are fairly complex. I also read the documentation on writing a custom rule, but I can't figure out which rule to use on which layer of this ViT model (I want to get attribution images like the original EpsilonPlusFlat produces). Running the code below gives the error shown below. Do you have any recommendations on how to run the LRP method on this model?
Thank you!
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

composite = EpsilonPlusFlat()
with Gradient(model=model, composite=composite) as attributor:
    output, attribution = attributor(data, target)
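For reference, a quick sketch (using the same torchvision model as above) to list the distinct leaf-module types, i.e. the layer classes that would each need an applicable rule:

import torchvision

model = torchvision.models.vit_b_16()
# collect the distinct leaf-module classes (modules without children);
# each of these needs a rule for LRP to propagate relevance through it
leaf_types = {type(module).__name__ for module in model.modules() if not list(module.children())}
print(sorted(leaf_types))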
We have planned support for transformers/attention, and the new built-in torch.nn.MultiheadAttention is great for that, since we do not need to explicitly support external implementations that way.
Have a look at this previous comment, where I mention this work by @tschnake et al., which is the implementation approach we will be taking.
Unfortunately, my schedule is super full, so I will probably not get to work on this until maybe late summer.
If you feel up to it, you can take a shot at implementing the missing rules yourself, and if you also have the time to write tests and documentation, we would be happy for you to contribute them to Zennit.
But do not feel pressured: I will eventually find someone to do it, or do it myself once my schedule allows.
In the meantime, feel free to ask any questions here with respect to the implementation, and I would be happy to assist!
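To give a rough idea of what implementing a rule involves: rules in Zennit subclass zennit.core.BasicHook, which takes modifiers for the inputs, parameters and outputs, plus a gradient mapper and a reducer. Below is a minimal sketch of an epsilon-style rule together with a LayerMapComposite mapping rules onto layer types; note that MyEpsilon is just an illustrative name, and assigning Pass to LayerNorm is only an assumption for illustration, not a settled design choice:

import torch
from zennit.core import BasicHook, stabilize
from zennit.composites import LayerMapComposite
from zennit.rules import Pass

class MyEpsilon(BasicHook):
    # epsilon-style rule: relevance is redistributed proportionally to
    # each input's contribution to the (stabilized) output
    def __init__(self, epsilon=1e-6):
        super().__init__(
            input_modifiers=[lambda input: input],
            param_modifiers=[lambda param, name: param],
            output_modifiers=[lambda output: output],
            # divide the incoming relevance by the stabilized output ...
            gradient_mapper=(lambda out_grad, outputs: out_grad / stabilize(outputs[0], epsilon)),
            # ... and multiply the resulting modified gradient by the input
            reducer=(lambda inputs, gradients: inputs[0] * gradients[0]),
        )

# map rules onto layer types; this composite can then be passed to an attributor
composite = LayerMapComposite(layer_map=[
    (torch.nn.LayerNorm, Pass()),  # assumption: treat LayerNorm as identity for relevance
    (torch.nn.Linear, MyEpsilon()),
])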