UNIMODAL PALINDROMIC DECOMPOSITIONS OpenJ_Bailian - 1221

This post introduces a particular kind of palindromic sequence, the unimodal palindromic sequence, and shows how to count all possible unimodal palindromic decompositions of an integer. A dynamic-programming approach gives an efficient solution.

A sequence of positive integers is Palindromic if it reads the same forward and backward. For example:
23 11 15 1 37 37 1 15 11 23
1 1 2 3 4 7 7 10 7 7 4 3 2 1 1
A Palindromic sequence is Unimodal Palindromic if the values do not decrease up to the middle value and then (since the sequence is palindromic) do not increase from the middle to the end. For example, the first example sequence above is NOT Unimodal Palindromic while the second example is.
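To make this concrete, here is a small checker (a sketch of my own; the name isUnimodalPalindromic is not part of the problem) that tests both conditions on the two sequences above:

#include<cstdio>
#include<vector>
using namespace std;

// Sketch: true if a is a palindrome and does not decrease up to its middle value.
bool isUnimodalPalindromic(const vector<int>& a)
{
    int n = a.size();
    for (int i = 0; i < n / 2; i++)
    {
        if (a[i] != a[n - 1 - i]) return false;   // must read the same both ways
        if (a[i] > a[i + 1])      return false;   // must not decrease toward the middle
    }
    return true;
}

int main()
{
    vector<int> s1 = {23, 11, 15, 1, 37, 37, 1, 15, 11, 23};
    vector<int> s2 = {1, 1, 2, 3, 4, 7, 7, 10, 7, 7, 4, 3, 2, 1, 1};
    printf("%d %d\n", (int)isUnimodalPalindromic(s1), (int)isUnimodalPalindromic(s2)); // prints 0 1
    return 0;
}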
A Unimodal Palindromic sequence is a Unimodal Palindromic Decomposition of an integer N, if the sum of the integers in the sequence is N. For example, all of the Unimodal Palindromic Decompositions of the first few integers are given below:
1: (1)
2: (2), (1 1)
3: (3), (1 1 1)
4: (4), (1 2 1), (2 2), (1 1 1 1)
5: (5), (1 3 1), (1 1 1 1 1)
6: (6), (1 4 1), (2 2 2), (1 1 2 1 1), (3 3),
(1 2 2 1), (1 1 1 1 1 1)
7: (7), (1 5 1), (2 3 2), (1 1 3 1 1), (1 1 1 1 1 1 1)
8: (8), (1 6 1), (2 4 2), (1 1 4 1 1), (1 2 2 2 1),
(1 1 1 2 1 1 1), (4 4), (1 3 3 1), (2 2 2 2),
(1 1 2 2 1 1), (1 1 1 1 1 1 1 1)

Write a program which computes the number of Unimodal Palindromic Decompositions of an integer.

Input

Input consists of a sequence of positive integers, one per line ending with a 0 (zero) indicating the end.

Output

For each input value except the last, the output is a line containing the input value followed by a space, then the number of Unimodal Palindromic Decompositions of the input value. See the example below.

Sample Input

2
3
4
5
6
7
8
10
23
24
131
213
92
0

Sample Output

2 2
3 2
4 4
5 3
6 7
7 5
8 11
10 17
23 104
24 199
131 5010688
213 1055852590
92 331143

Hint

N < 250
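
The solution below uses dynamic programming over the outer value of the sequence: a decomposition of i whose two ends are both j is obtained by wrapping a pair of j's around a decomposition of i - 2*j whose own ends are at least j. As a worked check for N = 8 (my own restatement of the code's comments, not part of the problem): outer value 1 contributes 7 (each of the 7 decompositions of 6 wrapped in a pair of 1's), outer value 2 contributes 2 ((2 4 2) and (2 2 2 2)), outer value 3 contributes 0 (the leftover 2 cannot be at least 3), outer value 4 contributes 1 ((4 4)), and the single term (8) contributes 1, giving 7 + 2 + 0 + 1 + 1 = 11, which matches the sample output.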


#include<iostream>
#include<algorithm>
#include<string>
#include<map>
#include<set>
#include<vector>
#include<cmath>
#include<stack>
#include<string.h>
#include<stdlib.h>
#include<cstdio>
#define mod 1e9+7
#define ll long long
#define maxn 305
#define ms memset
using namespace std;
/*
Problem summary: count the sequences that sum to n, read the same in both
directions, and do not decrease from either end towards the middle.

State: table[i][j] = number of such sequences that sum to i and have the
value j at both ends. Enumerate the outer value j; for the remaining sum
i-2*j, accumulate table[i-2*j][k] over every inner end value k with
j <= k <= i-2*j. The single-element sequence (i) is the special case
table[i][i] = 1.
*/
ll n,table[maxn][maxn];
void init()
{
    memset(table,0,sizeof(table));
    for(int i=0;i<maxn;i++) table[0][i]=1;///redundant: table[0][*] is never read
    for(int i=1;i<maxn;i++)
    {
         table[i][1]=0;///redundant: already 0 from the memset above
         for(int j=1;j<=i/2;j++)///j is the value at both ends
         {
             if(i-2*j==0)
             {
                 table[i][j]=1;///exactly the pair (j, j)
                 break;
             }
             for(int k=j;k<=i-2*j;k++)///inner part sums to i-2*j with ends k>=j
               table[i][j]+=table[i-2*j][k];
         }
        table[i][i]=1;///the single-element sequence (i)
    }
}
 
int main()
{
    init();
    while( scanf("%lld",&n) &&n )
    {
        ll ans=0;
        for(int i=0;i<=n;i++) ans+=table[n][i];///sum over every possible end value
        printf("%lld %lld\n",n,ans);
    }
    return 0;
}
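
As a cross-check on the table (a sketch of my own, not part of the original solution; bruteCount is a made-up helper name), the same quantity can be computed by direct recursion on the outer value. It reproduces the sample answers for small inputs but is exponential, so it is only useful for verification, not for N close to 250.

#include<cstdio>

///Sketch: number of unimodal palindromic decompositions of n whose values are all >= lo.
long long bruteCount(int n, int lo)
{
    if (n == 0) return 1;                 // empty middle part
    long long cnt = 0;
    if (n >= lo) cnt++;                   // the single middle element (n)
    for (int j = lo; 2 * j <= n; j++)     // put j at both ends, recurse on what is left
        cnt += bruteCount(n - 2 * j, j);
    return cnt;
}

int main()
{
    int tests[] = {2, 8, 10, 23, 24};     // expected: 2, 11, 17, 104, 199 (sample output)
    for (int t : tests)
        printf("%d %lld\n", t, bruteCount(t, 1));
    return 0;
}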
