Negative content embedding with style input in full blocks does work.
I wonder how the authors learned that up_blocks.0.attentions.1 and down_blocks.2.attentions.1 capture style: from experiments, or from previous papers?
If it was from previous papers, I'm sorry that I might have missed the reference in your paper; may I ask for the title of the reference paper?
If it was from experiments, I wonder whether the images annotated with block names in Fig. 7 were generated by injecting only into the corresponding block throughout the whole process.
Looking forward to your response!
Style blocks can also be found in B-LoRA. We conducted experiments on IP-Adapter to verify this property; you can find more illustrations in our paper. Each result in Figure 7 is generated by injecting only into the corresponding block.
Thanks for your reply!
May I ask about the scale setting in IP-Adapter?
I set the IP-Adapter scale factor to 1.0 with "up_blocks.0.attentions.1" as the target block, but found that the stylized results are quite similar across different style image prompts.
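For reference, recent diffusers releases let you set the IP-Adapter scale per attention block, which is the usual way to inject style only into `up_blocks.0.attentions.1`. Below is a minimal sketch of building such a scale config; the nested dict layout follows the diffusers InstantStyle convention (`set_ip_adapter_scale`), but exact keys and list lengths may differ across diffusers versions, so treat it as an assumption to verify against your installed version:

```python
# Hedged sketch: build a per-block IP-Adapter scale dict that enables the
# adapter only on the style block (up_blocks.0.attentions.1) at the given
# scale, and disables it everywhere else. Block naming ("up"/"block_0" etc.)
# follows the diffusers InstantStyle examples and may vary by version.

def style_only_scales(style_scale: float = 1.0) -> dict:
    """Scale config targeting only up_blocks.0.attentions.1."""
    return {
        # down_blocks.2.attentions.{0,1} (content blocks) switched off
        "down": {"block_2": [0.0, 0.0]},
        # up_blocks.0.attentions.1 (style block) switched on
        "up": {"block_0": [0.0, style_scale, 0.0]},
    }

scales = style_only_scales(1.0)
# With a loaded SDXL pipeline and IP-Adapter weights this would be applied as:
# pipe.set_ip_adapter_scale(scales)
```

Blocks not mentioned in the dict default to scale 0.0 in diffusers, so only the listed style layer receives the image-prompt features.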