Batch-process the files in a directory

This post presents a practical shell script for processing specific lines in files. The script takes two arguments: a source folder path and a target folder path. It iterates over every file in the source folder, deletes each line that begins with 'H' or 'T', and saves the processed file to the target folder. The script is useful for batch-processing large numbers of files, especially for data cleaning or preprocessing.

#!/bin/bash
# sourceFolder example: /home/bigdatagfts/pl62716/refdata
# targetFolder example: /home/bigdatagfts/pl62716/refdata_target
if [ $# -ne 2 ]; then
    echo "USAGE: $0 sourceFolder targetFolder"
    echo " e.g.: $0 /home/bigdatagfts/pl62716/refdata /home/bigdatagfts/pl62716/refdata_target"
    exit 1
fi
sourceFolder=$1
targetFolder=$2
if [ ! -d "$sourceFolder" ]; then
    echo "$sourceFolder does not exist, please check!"
    exit 1
elif [ ! -d "$targetFolder" ]; then
    echo "$targetFolder does not exist; creating $targetFolder"
    mkdir -p "$targetFolder"
fi
echo "Deleting lines which begin with H/T and storing the results in $targetFolder"
# Iterate with a glob instead of parsing `ls` output, and quote every
# path so file names containing spaces are handled safely
for file in "$sourceFolder"/*
    do
        if [ -f "$file" ]; then
            name=$(basename "$file")
            # Remove any stale copy in the target folder
            rm -f "$targetFolder/$name"
            # Drop lines starting with H or T, then number the remaining
            # lines (cat -n), preserving the original output format
            sed '/^[HT]/d' "$file" | cat -n > "$targetFolder/$name"
        fi
    done
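To see the filtering step in isolation, the core `sed | cat -n` pipeline can be tried on a single sample file. The directory and file contents below are made up purely for demonstration:

```shell
#!/bin/bash
# Create a sample input file with a header (H), a trailer (T),
# and two data records (hypothetical demo data)
mkdir -p /tmp/refdata_demo
cat > /tmp/refdata_demo/sample.txt <<'EOF'
H header line
D0001 payload record
D0002 payload record
T trailer line
EOF

# Drop lines starting with H or T, then number the surviving lines,
# exactly as the script does for every file in the source folder
sed '/^[HT]/d' /tmp/refdata_demo/sample.txt | cat -n
```

Only the two `D` records survive, each prefixed with a line number by `cat -n`.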


Reposted from: https://www.cnblogs.com/liupuLearning/p/7049492.html
