tfa.seq2seq.LuongAttention Explained

Source code

# Supporting imports, as in tensorflow_addons/seq2seq/attention_wrapper.py:
from typing import Optional

import tensorflow as tf
from typeguard import typechecked

from tensorflow_addons.seq2seq import AttentionMechanism
from tensorflow_addons.utils.types import AcceptableDTypes, TensorLike


class LuongAttention(AttentionMechanism):
    """Implements Luong-style (multiplicative) attention scoring.

    This attention has two forms. The first is standard Luong attention,
    as described in:
    Minh-Thang Luong, Hieu Pham, Christopher D. Manning.
    [Effective Approaches to Attention-based Neural Machine Translation.
    EMNLP 2015.](https://arxiv.org/abs/1508.04025)
    The second is the scaled form inspired partly by the normalized form of
    Bahdanau attention.
    To enable the second form, construct the object with parameter
    `scale=True`.
    """

    @typechecked
    def __init__(
        self,
        units: TensorLike,
        memory: Optional[TensorLike] = None,
        memory_sequence_length: Optional[TensorLike] = None,
        scale: bool = False,
        probability_fn: str = "softmax",
        dtype: AcceptableDTypes = None,
        name: str = "LuongAttention",
        **kwargs,
    ):
        """Construct the AttentionMechanism mechanism.

        Args:
          units: The depth of the attention mechanism.
          memory: The memory to query; usually the output of an RNN encoder.
            This tensor should be shaped `[batch_size, max_time, ...]`.
          memory_sequence_length: (optional) Sequence lengths for the batch
            entries in memory; anything beyond the corresponding sequence
            length is masked out with zeros.
          scale: Python boolean. Whether to scale the energy term.
          probability_fn: (optional) string, the name of the function used to
            convert the attention score to probabilities. The default is
            `"softmax"` (`tf.nn.softmax`); the other option is `"hardmax"`.
            Any other value raises a validation error.
          dtype: The data type for the memory layer of the attention mechanism.
          name: Name to use when creating ops.
          **kwargs: Dictionary that contains other common arguments for layer
            creation.
        """
        # (Constructor body omitted here; see the full class in
        # tensorflow_addons/seq2seq/attention_wrapper.py.)
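The two forms named in the docstring come down to a single batched matmul over the memory, optionally multiplied by a learned scalar. The sketch below shows the idea; it is a simplified version of the scoring helper in TFA (the function and variable names here are illustrative, not the library's exact code):

import tensorflow as tf

def luong_score(query, keys, scale=None):
    """Multiplicative (Luong) score.

    query: [batch_size, depth]            -- current decoder state h_t
    keys:  [batch_size, max_time, depth]  -- processed memory (encoder outputs
                                             after the memory layer)
    scale: optional scalar variable; when present this is the scaled form.
    """
    # [batch_size, depth] -> [batch_size, 1, depth] so we can batch-matmul.
    query = tf.expand_dims(query, 1)
    # [batch, 1, depth] x [batch, depth, max_time] -> [batch, 1, max_time]
    score = tf.matmul(query, keys, transpose_b=True)
    score = tf.squeeze(score, [1])  # [batch_size, max_time]
    if scale is not None:
        score = scale * score  # second form, enabled by `scale=True`
    return score

Because the score is `query @ keys^T`, the query depth must equal the key depth; this is why `units` has to match the decoder cell size for Luong attention.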
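For context, here is how the class is typically constructed and wired into a decoder cell. This is a minimal usage sketch; the tensor shapes and sizes are illustrative assumptions:

import tensorflow as tf
import tensorflow_addons as tfa

batch_size, max_time, depth = 4, 7, 16  # illustrative sizes

# Encoder outputs serve as the memory; one valid length per batch entry.
memory = tf.random.normal([batch_size, max_time, depth])
memory_sequence_length = tf.constant([7, 5, 6, 3])

attention = tfa.seq2seq.LuongAttention(
    units=depth,  # for Luong attention this must match the query depth
    memory=memory,
    memory_sequence_length=memory_sequence_length,
    scale=True,   # use the second (scaled) form
)

# Wrap a decoder cell so each step attends over the memory.
attn_cell = tfa.seq2seq.AttentionWrapper(
    tf.keras.layers.LSTMCell(depth),
    attention,
    attention_layer_size=depth,
)

Positions beyond each entry's `memory_sequence_length` are masked out of the attention distribution, so padded encoder steps receive zero weight.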