1. PyTorch's CPU and GPU random seeds are independent, so set, save, and load them separately.
https://discuss.pytorch.org/t/are-gpu-and-cpu-random-seeds-independent/142
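A minimal sketch of seeding and saving/restoring the two generator states separately (recent PyTorch versions also seed CUDA from torch.manual_seed, but the states are still stored and restored independently):

import torch

seed = 42
torch.manual_seed(seed)                  # CPU generator
if torch.cuda.is_available():
    torch.cuda.manual_seed(seed)         # current GPU's generator, seeded separately

cpu_state = torch.get_rng_state()        # save the CPU state
gpu_state = torch.cuda.get_rng_state() if torch.cuda.is_available() else None

# ... code that consumes randomness ...

torch.set_rng_state(cpu_state)           # restore the CPU state
if gpu_state is not None:
    torch.cuda.set_rng_state(gpu_state)  # restore the GPU state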
2. If you use randomness on several GPUs, you need to set torch.cuda.manual_seed_all(seed).
If you use cuDNN, you also need to set torch.backends.cudnn.deterministic = True.
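A typical reproducibility preamble covering both points; setting torch.backends.cudnn.benchmark = False alongside deterministic = True is an extra step that is commonly recommended, since kernel autotuning can pick different algorithms across runs:

import torch

seed = 1234
torch.cuda.manual_seed_all(seed)             # seed every visible GPU
torch.backends.cudnn.deterministic = True    # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False       # disable nondeterministic kernel autotuning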
3. Seeds are often set on many different lines inside different functions, which makes the effective randomness hard to trace. E.g.,
linear_layer_init(seed=1)
sample_next_word(seed=10)
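A sketch of why this is a trap, with linear_layer_init as a hypothetical helper: every internal manual_seed call resets the global generator, so the randomness observed afterwards depends on the helper's seed rather than the one set at the top of the script.

import torch

def linear_layer_init(seed=1):
    torch.manual_seed(seed)          # hypothetical helper that re-seeds internally
    return torch.nn.Linear(4, 4)

torch.manual_seed(1234)              # the seed you think controls the run
layer = linear_layer_init(seed=1)
print(torch.rand(1))                 # determined by seed=1 (plus the layer init draws), not by 1234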
4. Different libraries keep their own independent random generators, and each needs its own seed:
import random as rng                          # Python's built-in generator
import numpy.random as rng                    # NumPy's generator (same alias, different state)
import torch; torch.cuda.manual_seed(seed)    # PyTorch's CUDA generator
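A common pattern is to seed all of them in one place; this is a sketch, and the helper name set_all_seeds is an assumption:

import random

import numpy as np
import torch

def set_all_seeds(seed: int) -> None:
    # Seed Python, NumPy, and PyTorch (CPU plus all visible GPUs).
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)

set_all_seeds(1234)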
5. Forgetting to save the random generator state before evaluating the model on the test/dev sets (evaluation may also sample, shuffle, or simply set seeds itself, which disturbs the training stream).
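One way to guard against this: snapshot the generator states before evaluation and restore them afterwards, so dev/test-time sampling or shuffling does not shift the training stream. This is a sketch; model and evaluate are placeholders.

import torch

def evaluate_without_disturbing_rng(model, evaluate):
    cpu_state = torch.get_rng_state()
    gpu_states = torch.cuda.get_rng_state_all() if torch.cuda.is_available() else None

    score = evaluate(model)          # may sample, shuffle, or even set seeds internally

    torch.set_rng_state(cpu_state)   # resume training from the same random stream
    if gpu_states is not None:
        torch.cuda.set_rng_state_all(gpu_states)
    return score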
6. Forgetting to test immediately after loading a model, before resuming training (testing, which involves random sampling and reports performance, normally only happens at the end of each epoch, so a mismatch can go unnoticed).
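A sketch of that sanity check when resuming; the checkpoint keys and the evaluate callable are assumptions. Evaluate right after loading and compare against the score recorded at the end of the saved epoch, before any further training.

import torch

def resume_and_verify(path, model, evaluate, tol=1e-6):
    ckpt = torch.load(path)                    # assumed to hold 'model' and 'dev_score'
    model.load_state_dict(ckpt["model"])
    score = evaluate(model)                    # same evaluation as at the end of the saved epoch
    if abs(score - ckpt["dev_score"]) > tol:
        raise RuntimeError(
            f"Got {score:.6f} after loading, expected {ckpt['dev_score']:.6f}; "
            "check how the random state was saved/restored before resuming."
        )
    return model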

Summary: this post covers the independence of PyTorch's CPU and GPU random seeds and how to set them, including torch.cuda.manual_seed_all for multiple GPUs and the cuDNN settings needed for deterministic results. It also lists examples of seeds being set in different functions, and the easily overlooked issues of saving and loading random seeds/states around evaluation.