【Torch API】torch.index_select() Usage Explained

This article explains PyTorch's index_select function in detail. Through several examples it contrasts the function with direct indexing, helping readers understand how to select tensor data efficiently along different dimensions.

torch.index_select(): tensor indexing

torch.index_select(input, dim, index, *, out=None) → Tensor

Purpose: selects data from input along dimension dim according to the given index. It amounts to a more powerful form of indexing.

Arguments:

  • input: the tensor to index
  • dim: the dimension to index along
  • index: the index values; either a single number or a one-dimensional sequence

Notes:

  • The returned tensor has the same number of dimensions as the original tensor; this is where it differs from direct indexing (see the example code below).
  • The size of dimension dim equals the length of index; all other dimensions keep their original sizes.
  • index must be at most one-dimensional (a scalar or a 1-D tensor), as the sketch after this list shows.
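To back up the last note, here is a minimal sketch (my addition, assuming a standard PyTorch install; the exact error messages may vary by version) showing that index_select rejects an index with more than one dimension and expects an integer index tensor:

import torch

a = torch.arange(40).view(2, 4, 5)

# A 1-D integer index works as expected.
print(torch.index_select(a, dim=0, index=torch.tensor([1])).shape)  # torch.Size([1, 4, 5])

# A 2-D index is rejected: index is supposed to be at most 1-D.
try:
    torch.index_select(a, dim=0, index=torch.tensor([[0, 1]]))
except RuntimeError as e:
    print("2-D index rejected:", e)

# A floating-point index is rejected too; use a long (int64) tensor.
try:
    torch.index_select(a, dim=0, index=torch.tensor([1.0]))
except RuntimeError as e:
    print("float index rejected:", e)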

Example code

When the index is a single number

import torch

a = torch.arange(40).view(2, 4, 5)
index = torch.tensor([1])
select_1 = torch.index_select(a, dim=0, index=index)
select_2 = torch.index_select(a, dim=1, index=index)
select_3 = torch.index_select(a, dim=2, index=index)
print(select_1)
print(select_2)
print(select_3)
print(a.shape)
print(select_1.shape)
print(select_2.shape)
print(select_3.shape)
# dim=0
tensor([[[20, 21, 22, 23, 24],
         [25, 26, 27, 28, 29],
         [30, 31, 32, 33, 34],
         [35, 36, 37, 38, 39]]])
# dim=1
tensor([[[ 5,  6,  7,  8,  9]],

        [[25, 26, 27, 28, 29]]])
# dim=2
tensor([[[ 1],
         [ 6],
         [11],
         [16]],

        [[21],
         [26],
         [31],
         [36]]])
# Shapes after indexing: apart from the indexed dim, all sizes match the original
# original
torch.Size([2, 4, 5])
# dim=0
torch.Size([1, 4, 5])
# dim=1
torch.Size([2, 1, 5])
# dim=2
torch.Size([2, 4, 1])
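Incidentally, with a single-element index, torch.index_select gives the same result as a length-1 slice, which also keeps the dimension (this comparison is my addition, not part of the original example):

import torch

a = torch.arange(40).view(2, 4, 5)
sel = torch.index_select(a, dim=0, index=torch.tensor([1]))

# A length-1 slice preserves the dimension as well, so the two match exactly.
print(torch.equal(sel, a[1:2]))  # True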


When the index is a list

import torch

a = torch.arange(40).view(2, 4, 5)
index = torch.tensor([1, 3])
select_1 = torch.index_select(a, dim=1, index=index)
select_2 = torch.index_select(a, dim=2, index=index)
print(select_1)
print(select_2)
# dim=1
tensor([[[ 5,  6,  7,  8,  9],
         [15, 16, 17, 18, 19]],

        [[25, 26, 27, 28, 29],
         [35, 36, 37, 38, 39]]])
# dim=2
tensor([[[ 1,  3],
         [ 6,  8],
         [11, 13],
         [16, 18]],

        [[21, 23],
         [26, 28],
         [31, 33],
         [36, 38]]])


Selecting from a high-dimensional tensor

import numpy as np
import torch

a = torch.arange(4 * 512 * 28 * 28).view(4, 512, 28, 28)
index = np.random.choice(32, 5)  # randomly draw 5 values from 0 to 31 (with replacement by default)
select = torch.index_select(a, 1, index=torch.tensor(index, dtype=torch.long))
print(index)
print(a.shape)
print(select.shape)
# index sequence
[ 1  4 28  3  3]
# original tensor
torch.Size([4, 512, 28, 28])
# tensor after indexing
torch.Size([4, 5, 28, 28])
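To stay entirely within PyTorch (and to sample without replacement), torch.randperm achieves the same effect; this is a sketch of an alternative, not part of the original example:

import torch

a = torch.arange(4 * 512 * 28 * 28).view(4, 512, 28, 28)
index = torch.randperm(32)[:5]  # 5 distinct values from 0 to 31
select = torch.index_select(a, 1, index)
print(select.shape)  # torch.Size([4, 5, 28, 28])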

The difference between torch.index_select and direct indexing

import torch

a = torch.arange(40).view(2, 4, 5)
index = torch.tensor([1])
select_1 = torch.index_select(a, dim=0, index=index)
select_2 = a[1, :, :]  # direct indexing of the same slice
print(select_1)
print(select_2)
print(select_1.shape)
print(select_2.shape)
# result with torch.index_select
tensor([[[20, 21, 22, 23, 24],
         [25, 26, 27, 28, 29],
         [30, 31, 32, 33, 34],
         [35, 36, 37, 38, 39]]])
# result with direct indexing
tensor([[20, 21, 22, 23, 24],
        [25, 26, 27, 28, 29],
        [30, 31, 32, 33, 34],
        [35, 36, 37, 38, 39]])
# shape with torch.index_select
torch.Size([1, 4, 5])
# shape with direct indexing
torch.Size([4, 5])

It is easy to see that the biggest difference between direct indexing and indexing with torch.index_select is that direct indexing reduces the number of dimensions, while torch.index_select leaves the number of dimensions unchanged. The following example makes the difference easier to grasp.

import torch

a = torch.arange(40).view(2, 4, 5)
select_1 = a[0, :, :]
select_2 = a[0, 0, :]
select_3 = a[0, 0, 0]
print(select_1)
print(select_2)
print(select_3)
print(select_1.shape)
print(select_2.shape)
print(select_3.shape)
tensor([[ 0,  1,  2,  3,  4],
        [ 5,  6,  7,  8,  9],
        [10, 11, 12, 13, 14],
        [15, 16, 17, 18, 19]])
tensor([0, 1, 2, 3, 4])
tensor(0)
torch.Size([4, 5])
torch.Size([5])
torch.Size([])

Direct indexing is a process of progressive narrowing: the more indices you give (from one up to three), the lower the dimensionality of the result (from 2-D down to 0-D) and the more precise the selection. torch.index_select, by contrast, only selects along a single dimension, effectively indexing the tensor as a whole along that dimension.
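For completeness, torch.index_select can also be reproduced with advanced (tensor) indexing, which preserves dimensionality in the same way; a minimal sketch of the equivalence:

import torch

a = torch.arange(40).view(2, 4, 5)
idx = torch.tensor([1, 3])

# Advanced indexing with a 1-D tensor along a single dimension keeps that
# dimension, so it matches index_select along the same dimension.
print(torch.equal(torch.index_select(a, 1, idx), a[:, idx]))     # True
print(torch.equal(torch.index_select(a, 2, idx), a[:, :, idx]))  # True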

Official documentation

torch.index_select():torch.index_select — PyTorch 1.13 documentation
