How can I initialize paddle.nn.Embedding() so that its weights are the same as those of torch.nn.Embedding()?
PyTorch code:
```python
import torch

print("PyTorch version:", torch.__version__)

torch.manual_seed(1)
vocab_size = 6
output_dim = 3
embedding_layer = torch.nn.Embedding(vocab_size, output_dim)
print(embedding_layer.weight)
print(embedding_layer(torch.tensor([3])))
```
Paddle code:
```python
import paddle

print("PaddlePaddle version:", paddle.__version__)

paddle.seed(1)
vocab_size = 6
output_dim = 3
embedding_layer = paddle.nn.Embedding(vocab_size, output_dim)
print(embedding_layer.weight)
print(embedding_layer(paddle.to_tensor([3])))
```
The outputs of the two are completely different.
See #44565: the default weight-initialization methods of the two frameworks differ.
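If the goal is numerically identical weights, the most reliable route is to copy the PyTorch weights into the Paddle layer rather than trying to reproduce the same random draws. A minimal sketch (not from the linked issue), assuming `Tensor.set_value` accepting a NumPy array, which is the usual way to overwrite a parameter in Paddle 2.x:

```python
import paddle
import torch

torch.manual_seed(1)
vocab_size, output_dim = 6, 3

# PyTorch: weights are drawn from a standard normal N(0, 1) by default
torch_emb = torch.nn.Embedding(vocab_size, output_dim)

# Paddle: build the layer, then overwrite its weight with the PyTorch values
paddle_emb = paddle.nn.Embedding(vocab_size, output_dim)
paddle_emb.weight.set_value(torch_emb.weight.detach().numpy())

# Both lookups should now return the same row
print(torch_emb(torch.tensor([3])))
print(paddle_emb(paddle.to_tensor([3])))
```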
Thanks! So paddle.nn.Embedding's default parameter initialization is XavierUniform: a uniform distribution over [-x, x], where x = \sqrt{\frac{6.0}{fan\_in + fan\_out}}.
XavierUniform is a very good choice for parameter initialization: it helps improve training speed and convergence, and strengthens the model's generalization ability.
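If you only want Paddle to use the same *distribution* as PyTorch's default (rather than copy weights), the default initializer can be overridden through `weight_attr`. A hedged sketch: `paddle.nn.initializer.Normal(mean=0.0, std=1.0)` matches torch.nn.Embedding's default N(0, 1), but the sampled values will still differ because the two frameworks use different random number generators, so this alone does not make the outputs identical.

```python
import paddle

paddle.seed(1)
vocab_size = 6
output_dim = 3

# Override the default XavierUniform with N(0, 1), the same distribution
# torch.nn.Embedding uses; the actual samples still differ across frameworks.
weight_attr = paddle.ParamAttr(
    initializer=paddle.nn.initializer.Normal(mean=0.0, std=1.0)
)
embedding_layer = paddle.nn.Embedding(vocab_size, output_dim, weight_attr=weight_attr)
print(embedding_layer.weight)
```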