G. Running in Pairs

This post looks at how to order the runners on the two tracks so that the competition lasts as long as possible without exceeding the given limit. Using a constructive strategy, we determine the maximum and minimum achievable durations: if the limit exceeds the maximum, the maximum is printed; if it is below the minimum, -1 is printed.

链接:https://codeforces.com/contest/1244/problem/G

Demonstrative competitions will be held in the run-up to the 20NN Berlatov Olympic Games. Today is the day for the running competition!

Berlatov team consists of 2n runners which are placed on two running tracks; n runners are placed on each track. The runners are numbered from 1 to n on each track. The runner with number i runs through the entire track in i seconds.

The competition is held as follows: first runners on both tracks start running at the same time; when the slower of them arrives at the end of the track, second runners on both tracks start running, and everyone waits until the slower of them finishes running, and so on, until all n pairs run through the track.

The organizers want the run to be as long as possible, but if it lasts for more than k seconds, the crowd will get bored. As the coach of the team, you may choose any order in which the runners are arranged on each track (but you can't change the number of runners on each track or swap runners between different tracks).

You have to choose the order of runners on each track so that the duration of the competition is as long as possible, but does not exceed k seconds.

Formally, you want to find two permutations p and q (both consisting of n elements) such that sum = ∑_{i=1}^{n} max(p_i, q_i) is maximum possible, but does not exceed k. If there is no such pair, report about it.

Input

The first line contains two integers n and k (1 ≤ n ≤ 10^6, 1 ≤ k ≤ n^2) — the number of runners on each track and the maximum possible duration of the competition, respectively.

Output

If it is impossible to reorder the runners so that the duration of the competition does not exceed k seconds, print −1.

Otherwise, print three lines. The first line should contain one integer sum — the maximum possible duration of the competition not exceeding k. The second line should contain a permutation of n integers p_1, p_2, …, p_n (1 ≤ p_i ≤ n, all p_i should be pairwise distinct) — the numbers of runners on the first track in the order they participate in the competition. The third line should contain a permutation of n integers q_1, q_2, …, q_n (1 ≤ q_i ≤ n, all q_i should be pairwise distinct) — the numbers of runners on the second track in the order they participate in the competition. The value of sum = ∑_{i=1}^{n} max(p_i, q_i) should be maximum possible, but should not exceed k. If there are multiple answers, print any of them.

Examples

input


5 20

output


20
1 2 3 4 5 
5 2 4 3 1 

input


3 9

output


8
1 2 3 
3 2 1 

input


10 54

output


-1

Note

In the first example the order of runners on the first track should be [5,3,2,1,4], and the order of runners on the second track should be [1,4,2,5,3]. Then the duration of the competition is max(5,1)+max(3,4)+max(2,2)+max(1,5)+max(4,3) = 5+4+2+5+4 = 20, so it is equal to the maximum allowed duration.

In the second example the order of runners on the first track should be [2,3,1], and the order of runners on the second track should be [2,1,3]. Then the duration of the competition is 8, and it is the maximum possible duration for n = 3.

Solution

This is a constructive problem. The minimum achievable sum is n(n+1)/2 (pair runner i with runner i on both tracks) and the maximum is ∑_{i=1}^{n} max(i, n-i+1) (pair runner i with runner n-i+1). If k is below the minimum, print -1; if k is at or above the maximum, print the maximum; every value in between is achievable, so otherwise we construct a pair of permutations whose sum is exactly k.
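For instance, take the first sample, n = 5 and k = 20. Start from p = q = [1,2,3,4,5] with sum 15, so the sum must grow by x = 20 - 15 = 5. Swapping p_1 and p_5 raises the sum by 5 - 1 = 4 (x becomes 1); swapping p_3 and p_4 then adds 4 - 3 = 1 (x becomes 0). The result is p = [5,2,4,3,1], q = [1,2,3,4,5] with max(5,1)+max(2,2)+max(4,3)+max(3,4)+max(1,5) = 5+2+4+4+5 = 20, which is the same answer as the sample output, just with the roles of p and q exchanged.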

Code:

#include<bits/stdc++.h>
using namespace std;
long long n,m,ans,s,k,a[1000001],b[1000001],f[1000001];
int main()
{
	cin>>n>>m;
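	// The smallest achievable sum is n*(n+1)/2 (pair runner i with runner i on both tracks);
	// if even that exceeds the limit m (= k from the statement), no valid arrangement exists.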
	if(n*(n+1)/2>m)
	cout<<-1;
	else
	{
		s=0;
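		// The largest achievable sum: pair runner i with runner n-i+1, i.e. s = sum of max(i, n-i+1).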
		for(long long i=1;i<=n;i++)
		{
			s+=max(i,n-i+1);
		}
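		// If even the maximum s is below the limit, print s itself: identity order on one track
		// against reversed order on the other.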
		if(s<m)
		{
			cout<<s<<endl;
			for(int i=1;i<=n;i++)
			cout<<i<<" ";
			cout<<endl;
			for(int i=n;i>=1;i--)
			cout<<i<<" ";
			cout<<endl;
		}
		else
		{
			cout<<m<<endl;
			long long x=m-n*(n+1)/2;
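			// x is how much the sum must grow beyond the minimum n*(n+1)/2;
			// start from a = b = identity and perform swaps inside a to gain exactly x.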
			for(int i=1;i<=n;i++)
			{
				a[i]=i;
				b[i]=i;
				f[i]=0;
			}
			long long l=1,r=n;
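			// Greedy: swapping a[i] with a[r] (i < r, b untouched) raises the sum by r - i,
			// since position i now contributes r instead of i while position r still contributes r.
			// Scan i upward so the available gain shrinks by 1 each step until x is matched exactly.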
			for(int i=1;i<=n&&x>0;i++)
			{
				if(x>=a[r]-a[i]&&f[i]==0)
				{
					x-=a[r]-a[i];
					f[i]=1;
					swap(a[r],a[i]);
					r--;
				}
			}
			for(int i=1;i<=n;i++)
			cout<<a[i]<<" ";
			cout<<endl;
			for(int i=1;i<=n;i++)
			cout<<b[i]<<" ";
			cout<<endl;
		}
	}
}
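
The whole construction runs in O(n) time and memory. As a quick sanity check (not part of the original solution), here is a minimal verifier sketch: it assumes the answer was not -1, reads the problem input followed by the three printed lines, and confirms that p and q are permutations of 1..n and that the printed sum equals ∑ max(p_i, q_i) and does not exceed k. The file name check.cpp and the usage "cat input output | ./check" are only illustrative assumptions.

// check.cpp -- hypothetical verifier: feed it the problem input followed by the solver's output,
// e.g.  cat input output | ./check
#include<bits/stdc++.h>
using namespace std;
int main()
{
	long long n,k,sum;
	cin>>n>>k>>sum;                      // problem input, then the printed sum
	vector<long long> p(n),q(n);
	for(auto &x:p) cin>>x;               // first permutation
	for(auto &x:q) cin>>x;               // second permutation
	vector<int> cp(n+1,0),cq(n+1,0);
	long long total=0;
	for(long long i=0;i<n;i++)
	{
		if(p[i]<1||p[i]>n||q[i]<1||q[i]>n){cout<<"FAIL"<<endl;return 0;}
		cp[p[i]]++;
		cq[q[i]]++;
		total+=max(p[i],q[i]);
	}
	bool ok=(total==sum)&&(sum<=k);
	for(long long v=1;v<=n;v++)
		ok=ok&&cp[v]==1&&cq[v]==1;
	cout<<(ok?"OK":"FAIL")<<endl;
}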

 
