D: Sequence Swapping

This post presents a dynamic-programming solution to a parenthesis-sequence problem: the goal is to maximize the score obtained by swapping parentheses. The approach reads the input sequence, computes the maximum possible gain at each step, and finally outputs the maximum achievable score.

BaoBao has just found a strange sequence {<p_1, v_1>, <p_2, v_2>, ..., <p_n, v_n>} of length n in his pocket. As you can see, each element <p_i, v_i> in the sequence is an ordered pair, where the first element p_i in the pair is the left parenthesis '(' or the right parenthesis ')', and the second element v_i in the pair is an integer.

As BaoBao is bored, he decides to play with the sequence. At the beginning, BaoBao's score is set to 0. Each time BaoBao can select an integer k, swap the k-th element and the (k+1)-th element in the sequence, and increase his score by v_k × v_{k+1}, if and only if 1 ≤ k < n, p_k = '(' and p_{k+1} = ')'.

BaoBao is allowed to perform the swapping any number of times (including zero times). What's the maximum possible score BaoBao can get?

Input

There are multiple test cases. The first line of the input contains an integer T, indicating the number of test cases. For each test case:

The first line contains an integer n (1 ≤ n ≤ 10^3), indicating the length of the sequence.

The second line contains a string s (|s| = n) consisting of '(' and ')'. The i-th character in the string indicates p_i, of which the meaning is described above.

The third line contains n integers v_1, v_2, ..., v_n (-10^3 ≤ v_i ≤ 10^3). Their meanings are described above.

It's guaranteed that the sum of n of all test cases will not exceed 10^4.

Output

For each test case output one line containing one integer, indicating the maximum possible score BaoBao can get.

Sample Input
4
6
)())()
1 3 5 -1 3 2
6
)())()
1 3 5 -100 3 2
3
())
1 -1 -1
3
())
-1 -1 -1
Sample Output
24
21
0
2
Hint

For the first sample test case, one optimal strategy is to select k = 2, 3, 5, 4 in order.

For the second sample test case, one optimal strategy is to select k = 2, 5 in order.
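
To make the swap rule concrete, here is a minimal simulation sketch of the first sample using the selection order k = 2, 3, 5, 4, one order that attains the optimum of 24; each legal swap adds v_k × v_{k+1} to the score. (The hard-coded data and the particular order are just for illustration.)

#include<cstdio>
#include<algorithm>
#include<utility>
#include<vector>
using namespace std;

int main(){
    // First sample: ")())()" with values 1 3 5 -1 3 2.
    vector<pair<char,long long> > a;
    const char *s=")())()";
    long long v[6]={1,3,5,-1,3,2};
    for(int i=0;i<6;++i) a.push_back(make_pair(s[i],v[i]));
    int ks[4]={2,3,5,4};                 // 1-based positions selected in order
    long long score=0;
    for(int t=0;t<4;++t){
        int k=ks[t];
        // A selection is legal only when the k-th element is '(' and the (k+1)-th is ')'.
        if(a[k-1].first=='('&&a[k].first==')'){
            score+=a[k-1].second*a[k].second;
            swap(a[k-1],a[k]);
        }
    }
    printf("%lld\n",score);              // prints 24
    return 0;
}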

This is a DP: dp[i][j] is the maximum score obtainable when the i-th left parenthesis (counting from the right, which is the order the code collects them in) has been pushed rightwards so that it ends up just past the right parenthesis at position j.

The answer follows by updating the layers one left parenthesis at a time; the code rolls the table into the two layers mx[cur][...] and mx[cur^1][...].

The subtle point is that ')' positions the current parenthesis has not reached must not be treated the same as positions it has already passed; the separate backward sweeps in the code handle exactly that. (I felt like an idiot over that one.)
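
For reference, the gain term in the transition can be written out explicitly. Pushing the left parenthesis originally at position p (value v_p) rightwards until it sits just after the right parenthesis at position j contributes

\[
\text{gain}(p,j) = v_p \cdot \bigl(\text{sum}[j] - \text{sum}[p]\bigr),
\qquad
\text{sum}[x] = \sum_{t \le x,\; s_t = ')'} v_t ,
\]

which is exactly the v[p[i]]*(sum[j]-sum[p[i]]) term in the code below.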

 

#include<cstdio>
#include<cstring>
#include<algorithm>
using namespace std;
const int N=1e3+88;
char s[N];
int n,p[N],nxt[N];
long long sum[N],v[N];
long long mx[2][N];                     // two rolling DP layers, indexed by ')' position
int main(){
    int T;
    while(scanf("%d",&T)!=EOF){
        while(T--){
            scanf("%d",&n);
            long long ans=0;
            int tot=0,f=0;
            scanf("%s",s+1);
            for(int i=1;i<=n;++i) scanf("%lld",v+i);
            // p[1..tot]: positions of '(' collected from right to left.
            for(int i=n;i>=1;--i) if(s[i]=='(') p[++tot]=i;
            // sum[i]: prefix sum of v over ')' positions.
            for(int i=1;i<=n;++i) sum[i]=sum[i-1]+(s[i]==')'?v[i]:0);
            // nxt[j]: position of the next ')' to the right of the ')' at j (0 if none).
            memset(nxt,0,sizeof(nxt));
            for(int i=1;i<=n;++i) if(s[i]==')'){
                if(f) nxt[f]=i;
                f=i;
            }
            for(int i=1;i<=n;++i) mx[1][i]=mx[0][i]=0;
            mx[1][0]=mx[0][0]=-(1LL<<60);   // sentinel: nxt[] of the last ')' is 0
            int cur=0;
            for(int i=1;i<=tot;++i){
                // Push the i-th '(' (from the right) just past the ')' at j:
                // gain v[p[i]]*(sum[j]-sum[p[i]]) on top of the previous layer.
                for(int j=p[i]+1;j<=n;++j) if(s[j]==')') mx[cur][j]=v[p[i]]*(sum[j]-sum[p[i]])+mx[cur^1][j];
                // Suffix max over the ')' positions to the right of p[i].
                for(int j=n;j>=p[i];--j) if(s[j]==')') mx[cur][j]=max(mx[cur][nxt[j]],mx[cur][j]);
                // Positions this '(' never reaches keep the previous layer's value
                // (or inherit the suffix max from the right).
                for(int j=p[i]-1;j>=1;--j) if(s[j]==')') mx[cur][j]=max(mx[cur^1][j],mx[cur][nxt[j]]);
                for(int j=1;j<=n;++j) if(s[j]==')') ans=max(ans,mx[cur][j]);
                cur^=1;
            }
            printf("%lld\n",ans);
        }
    }
}
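
For tiny inputs such as the samples, the DP answer can be cross-checked by brute force. The following sketch memoises over entire arrangements and exhaustively tries every legal swap; it is exponential, so it is only meant for validation on very small n, and for the hard-coded first sample it prints the expected 24. Swapping the hard-coded data for stdin parsing and comparing against the DP on random small cases is the natural next step.

#include<cstdio>
#include<algorithm>
#include<map>
#include<utility>
#include<vector>
using namespace std;

typedef vector<pair<char,long long> > State;
map<State,long long> memo;

// Maximum additional score obtainable from arrangement a (exhaustive, memoised).
long long dfs(State &a){
    map<State,long long>::iterator it=memo.find(a);
    if(it!=memo.end()) return it->second;
    long long best=0;
    for(size_t k=0;k+1<a.size();++k)
        if(a[k].first=='('&&a[k+1].first==')'){
            long long gain=a[k].second*a[k+1].second;
            swap(a[k],a[k+1]);
            best=max(best,gain+dfs(a));
            swap(a[k],a[k+1]);          // undo the swap
        }
    memo[a]=best;
    return best;
}

int main(){
    // First sample: ")())()" with values 1 3 5 -1 3 2, expected answer 24.
    State a;
    const char *s=")())()";
    long long v[6]={1,3,5,-1,3,2};
    for(int i=0;i<6;++i) a.push_back(make_pair(s[i],v[i]));
    printf("%lld\n",dfs(a));
    return 0;
}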

 

Reprinted from: https://www.cnblogs.com/mfys/p/8973477.html
