PyTorch - Autograd mechanics

This article takes a close look at how PyTorch's autograd module works: how Variables record the history of operations to form a computation graph, how that information is used to compute gradients, and the pitfalls and implementation complexity of in-place operations on Variables.


How autograd encodes the history

Each Variable has a .creator attribute that points to the function of which it is an output. This is an entry point to a directed acyclic graph (DAG) whose nodes are Function objects and whose edges are the references between them. Every time an operation is performed, a new Function representing it is instantiated, its forward() method is called, and the .creator of its output Variables is set to that Function. Then, by following the path from any Variable back to the leaves, it is possible to reconstruct the sequence of operations that created the data and automatically compute the gradients.
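
As a minimal sketch of this, assuming a current PyTorch release (where the link described here as .creator is exposed as .grad_fn), the graph behind a small expression can be inspected directly:

    import torch

    # y = (x * 2).sum() records a small graph: Mul -> Sum
    x = torch.ones(3, requires_grad=True)
    y = (x * 2).sum()

    # The output keeps a reference to the Function that produced it.
    # The text above uses the legacy name .creator; current releases
    # expose the same link as .grad_fn.
    print(y.grad_fn)                  # e.g. <SumBackward0 object at ...>
    print(y.grad_fn.next_functions)   # edges to the Functions that produced its inputs

    y.backward()                      # walks the graph from y back to the leaf x
    print(x.grad)                     # tensor([2., 2., 2.])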

An important thing to note is that the graph is recreated from scratch at every iteration, and this is exactly what allows for using arbitrary Python control flow statements that can change the overall shape and size of the graph at every iteration. You don’t have to encode all possible paths before you launch the training - what you run is what you differentiate.
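
For example, in the sketch below (scale_until_large is a hypothetical helper, written only for illustration) a data-dependent loop records a different number of multiplications on every run, and backward() differentiates exactly the operations that actually executed:

    import torch

    def scale_until_large(x):
        # Data-dependent Python control flow: the number of multiplications
        # recorded in the graph differs from run to run.
        steps = 0
        while x.norm() < 10:
            x = x * 2
            steps += 1
        return x.sum(), steps

    x = torch.randn(4, requires_grad=True)
    out, steps = scale_until_large(x)
    out.backward()
    print(steps, x.grad)    # every element's gradient is 2 ** steps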

In-place operations on Variables

Supporting in-place operations in autograd is a hard matter, and we discourage their use in most cases. Autograd’s aggressive buffer freeing and reuse makes it very efficient, and there are very few occasions when in-place operations actually lower memory usage by any significant amount. Unless you’re operating under heavy memory pressure, you might never need to use them.

There are two main reasons that limit the applicability of in-place operations:

  1. Overwriting values required to compute gradients. This is why Variables don’t support log_: its gradient formula requires the original input, and while it is possible to recreate it by computing the inverse operation, doing so is numerically unstable and requires additional work that often defeats the purpose of using these functions (see the sketch after this list).
  2. Every in-place operation actually requires the implementation to rewrite the computational graph. Out-of-place versions simply allocate new objects and keep references to the old graph, while in-place operations require changing the creator of all inputs to the Function representing this operation. This can be tricky, especially if there are many Variables that reference the same storage (e.g. created by indexing or transposing), and in-place functions will actually raise an error if the storage of the modified inputs is referenced by any other Variable.
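
To make the first point concrete, here is a minimal sketch of how a current PyTorch release reacts when a value required for the backward pass is overwritten in place; exp() is used because its gradient formula reuses its own result:

    import torch

    x = torch.rand(3, requires_grad=True)
    y = x.exp()          # d/dx exp(x) = exp(x), so exp() saves its result for backward
    y.add_(1)            # the in-place add overwrites that saved result

    try:
        y.sum().backward()
    except RuntimeError as err:
        print(err)       # a variable needed for gradient computation
                         # has been modified by an inplace operation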

In-place correctness checks

Every Variable keeps a version counter that is incremented every time it is marked dirty in any operation. When a Function saves any tensors for backward, the version counter of their containing Variable is saved as well. Once you access self.saved_tensors, the counter is checked again, and if its current value is greater than the saved one, an error is raised.
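
A small sketch of this mechanism, assuming a current PyTorch release; _version is an internal attribute and is printed only to make the counter visible:

    import torch

    w = torch.rand(3, requires_grad=True)
    x = w.clone()            # non-leaf copy, so in-place ops on it are allowed
    y = x * x                # mul saves x, together with x's current version counter

    print(x._version)        # 0  (internal attribute, shown only for illustration)
    x.add_(1)                # marks x dirty and bumps its version counter
    print(x._version)        # 1

    try:
        y.sum().backward()   # the saved counter no longer matches the current one
    except RuntimeError as err:
        print(err)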
