Test Job Example - Slurm

This page walks through testing the Slurm job scheduler in the RC environment: unloading the Torque and Moab modules, loading the Slurm module, submitting jobs with Slurm commands, understanding Slurm QOS parameters, and running an example job script.

Thanks for helping to test the Slurm job scheduling system in the RC environment.  In addition to this example page, we are building a FAQ page here:  https://www.rc.colorado.edu/support/examples/slurmfaqs

First, make sure you don't have Torque or Moab modules loaded:

$ module unload torque

$ module unload moab

Load the slurm module:

$ module load slurm

Since many Torque commands and directives are supported under Slurm, you may be able to submit your jobs just like you always have but please use Slurm commands when possible, e.g.,

$ qsub -q janus-debug run_script.sh

In many cases you'll find it's easier to use native Slurm commands, such as sbatch, squeue, and sinfo.  These are introduced in the Example section below.  You can also find a translation between Torque commands and their Slurm equivalents at slurm.schedmd.com/rosetta.pdf.
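A few of the most common translations from that reference (the script name here is just a placeholder):

$ qsub my_script.sh    ->  $ sbatch my_script.sh
$ qstat -u ralphie     ->  $ squeue -u ralphie
$ qdel [job id]        ->  $ scancel [job id]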

IMPORTANT: We are working on a job cleanup issue that will put an error in your job output file. The error is safe to ignore and it should not have any ill effects on your job. The error message is the following:

slurmstepd: task/cgroup: unable to remove step memcg : No such file or directory

Slurm QOS Parameters

With the transition to Slurm, we are making the queue structure simpler to help cut down on any confusion the previous queues might have caused. In Slurm, a QOS can be thought of as a queue in Torque/Moab; so when you see QOS in our documentation, think queue. We have created a QOS for each class of nodes we have (himem, serial, gpu, janus), and each has certain limitations on what type of jobs you can run on them.
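If you'd like to inspect these QOS definitions yourself, Slurm's sacctmgr command can list them. The columns available vary with the Slurm version, so treat the format string below as one reasonable choice:

$ sacctmgr show qos format=Name,Priority,MaxWall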

The QOS's for the Janus nodes are the following:
    normal - This is the default QOS for all jobs. Any job using this QOS will run on a Janus compute node with the following restrictions: 24 hours max walltime, 480 max nodes per job
    janus-long - This QOS is for any long jobs on Janus compute nodes. There are a total of 80 nodes available for this QOS. The restrictions on this QOS are: 7 days max walltime, 40 nodes per user (this means you can have 40 single node jobs or one 40 node job or anything in between)
    janus-debug - This QOS is strictly for debugging purposes and has a higher priority than the normal QOS. You should not be doing any production work in this QOS. To prevent users from taking advantage of the higher priority, we made very restrictive settings on this QOS: 1 hour max walltime, 2 running jobs per user, 4 queued jobs per user including running jobs

The QOS's for all other Research Computing resources are the following:
    himem - This QOS allows a job to run on any of the HiMem nodes. The following restrictions apply: 14 day max walltime
    serial - This QOS allows a job to run on any of the serial nodes. The following restrictions apply: 14 day max walltime, 10 nodes per user (this means you can have 10 single node jobs, or a single 10 node job or anything in between)
    gpu - This QOS allows a job to run on any GPU node. The following restrictions apply: 4 hour max walltime, 2 running jobs per user
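
To request one of these QOSes, pass it to sbatch with the --qos flag, either on the command line or as an #SBATCH directive inside your job script (the example script later on this page uses the directive form). For instance, with a placeholder script name:

$ sbatch --qos=himem my_job.sh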


Slurm Test Job Example

The intent of this example is to demonstrate the process of submitting a compute job. The example job runs long enough for you to see it running and watch the output file get created and updated. It may seem overly simple to an experienced Linux user, but it is still useful for seeing how a job gets queued and run, and we'll demonstrate a few commands along the way.

Before you begin, you need an RC account, a registered OTP device, an SSH program (or OSX terminal window) and an allocation of compute time. If you do not have an allocation your jobs will not run.

Please note: The job we are going to run does nothing but waste computer time; please resist the temptation to run it at a larger scale.

First, get logged into an RC Login node as per the Getting Started guide. Once you have done so, you should have a dollar prompt on the bottom of your screen, something like:

-bash-3.2$

What we are about to do is write a script that allows your work to be scheduled and performed by one or more of the available compute resources. Rather than executing a program immediately on demand as one does on their own computer, in this environment you have to request that your program be run sometime in the future as best fits the schedule and available resources. Your job script is how you define what resources you need, for example the number of compute nodes and cores and for how long you intend to use them. Your script then also runs the program, and can also set up notifications, move files, etc. This example is a very simple script that prints out a few lines of text.

Let's begin. When you see text in this blue color, copy it and paste it after the $ prompt in your SSH session to login.rc.colorado.edu. I will leave the prompt off these snippets to make copying easier. Copy and paste with the mouse and right-click, or with control-c and control-v, then hit Enter on the SSH screen once the text is pasted.

For example when you see this:

pwd

You want to highlight and copy it, then paste it into your SSH screen at the dollar prompt. Then you would see this on your SSH terminal screen:

-bash-3.2$ pwd

You would then hit Enter (and in this case you Print your current Working Directory, which should be /home/your-username):

-bash-3.2$ pwd
/home/ralphie

Let's try one. Please highlight and copy the text below shown on a blue background:

echo $SHELL

Then paste it into your SSH screen at the dollar prompt and hit enter.

The output from the commands (when it's interesting) will be highlighted in this gray color:

-bash-3.2$ echo $SHELL
/bin/bash

Does that work? I hope so. We just asked what shell you are using.

Next, in your home directory, where you start when you first log in, we will make a directory for your test job and move into it. First create a 'testjob' directory:

mkdir testjob

And move into it with 'cd' (Change Directory):

cd testjob

If you want to be sure you are in the right place, try a "pwd" to see what directory you are in:

pwd

Do you see something like this?

-bash-3.2$ pwd
/home/ralphie/testjob

Now let's create a shell script that we will submit as a compute job. Copy and paste the following shell script into your SSH screen. The 'cat' command on the top line creates a file that will contain your job script; this method is an alternative to writing the script in a Linux text editor, which would be harder to describe here. Make sure you paste all 40 lines or so at once, NOT one line at a time.

cat > testjob_submit.sh << EOF
#!/bin/bash
# Lines starting with #SBATCH are treated by bash as comments, but interpreted by slurm
# as arguments.  

#
# Set the name of the job
#SBATCH -J test_job

#
# Set a walltime for the job. The time format is HH:MM:SS - In this case we run for 5 minutes.
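# (Slurm also accepts a days field, written D-HH:MM:SS;
#  for example --time=1-00:00:00 would request one full day.)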

#SBATCH --time=0:05:00

#
# Select one node
#

#SBATCH -N 1
# Select one task per node (similar to one processor per node)
#SBATCH --ntasks-per-node 1

# Set output file name with job number
#SBATCH -o testjob-%j.out
# Use the janus-debug QOS
#SBATCH --qos=janus-debug

# The following commands will be executed when this script is run.
echo The job has begun
echo Wait one minute...
sleep 60
echo Wait a second minute...
sleep 60
echo Wait a third minute...
sleep 60
echo Enough waiting. Job completed.
 
# End of example job shell script
# 
EOF

Hit enter when you have pasted all this in. If something goes wrong, you may need to close the here-document manually by typing 'EOF' on a line by itself; then delete the file and start again.
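
If you would rather not use the 'cat' trick, you can create the same file with a terminal text editor instead (assuming one such as nano is available on the login nodes):

nano testjob_submit.sh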

Next do an 'ls' to list the files in this directory and you should see the file we just created.

ls

-bash-3.2$ ls
testjob_submit.sh

Now check the file you just created to be sure that it's all there. Note that the first line, where we used 'cat' to create this file, and the last line with the EOF will not appear; those were commands used to create the file, not part of it.

cat testjob_submit.sh

#!/bin/bash
# Lines starting with #SBATCH are treated by bash as comments, but interpreted by slurm
# as arguments.

#
# Set the name of the job
#SBATCH -J test_job

--snipped for brevity--

echo Enough waiting. Job completed.

# End of example job shell script
#

The file you just created is a Bash shell script that informs the job scheduler of your job's needs and then does some very basic things when executed. All our script does is write out some lines to the output file and wait 60 seconds several times. A proper compute job will do a great deal more than this. This is intended to be a very simple example.

To submit this job, we need to add Slurm to our environment. Slurm is a resource manager that can accept and schedule 10,000 jobs or more a day in our environment. We use sbatch to ask Slurm to accept the job, and after that we will use a command or two to ask Slurm how things are going.

module load slurm

-bash-3.2$ module load slurm
-bash-3.2$

Next let's submit this script to a QOS and waste some supercomputer time.  The 'sbatch' command asks Slurm to schedule the job based on the requirements we put in the #SBATCH lines of our job script. Slurm will work in the background to find resources to run it.

sbatch testjob_submit.sh

We get something like the following response:

-bash-3.2$ sbatch testjob_submit.sh
Submitted batch job 56

The number after "Submitted batch job" is the job ID number that we will use to check on the progress of our job, and our output file will also have this number in the filename.
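
As an aside, if you ever need just the numeric job ID, for use in a script of your own, recent versions of sbatch offer a --parsable option that prints only the ID on submission:

sbatch --parsable testjob_submit.sh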

Let's check on our job.

We'll use squeue to look at all of our jobs. I will use the '-u' flag to look at a single user, and the $USER variable, which is already set to your username. (This is done so the command can be pasted in and will work for anyone; you may also type your username in place of $USER and you should get the same results.)

squeue -u $USER

-bash-3.2$ squeue -u $USER
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
                56     janus test_job ralphie  R       0:06     1 node1701


Here we see we have one job running and how much time it has used.
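
This is also a good time to try sinfo, which was mentioned earlier. It summarizes the partitions and the state of their nodes; its output depends on the cluster configuration, so yours will differ from what anyone else sees.

sinfo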

Since the job is running and we had 'echo' commands in the script, we should see output in our output file. First, an 'ls' to list the files and see if there is an output file. I like 'ls -l' because it formats the output nicely and shows dates and times.

ls -l

-bash-3.2$ ls -l

-rw-r--r-- 1 ralphie ralphiepgrp  310 Feb 28 12:21 testjob-56.out
-rw-r--r-- 1 ralphie ralphiepgrp  572 Feb 28 12:06 testjob_submit.sh

I see the output file has appeared, so let's take a look at it. You will have to use the name of your own output file, which should be similar but with a different job number embedded in it. Cutting and pasting from here won't work this time, but you can copy the name off your own SSH screen or type it in carefully.

cat testjob-[type your job ID here].out

-bash-3.2$ cat testjob-56.out
The job has begun
Wait one minute...
Wait a second minute...
-bash-3.2$

The job has only run for a minute or so in my case, so I only see the first two echo statements.
You can see how the output file is built as the job executes and adds more lines to it. When the job is finished it will look like this:

-bash-3.2$ cat testjob-56.out
The job has begun
Wait one minute...
Wait a second minute...
Wait a third minute...
Enough waiting. Job completed.
-bash-3.2$
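
Rather than re-running 'cat' to poll the file, you can follow it as it grows with 'tail -f' (substitute your own job number, and press control-c to stop following):

tail -f testjob-56.out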

That's our basic job script example. Normally, rather than just printing some text and waiting, your job script would launch a program. Our other examples show both serial and parallel programs run by job scripts.

If you want to run this example again, you can perform the 'sbatch' operation a second time. You will get a different job number and a differently named output file. Here are a few more things you can do to learn about running jobs:

• Use the squeue command to get information about your job while it's running, or use scontrol show job [job id] for more verbose output.
• For a list of options to SBATCH and their corresponding PBS options, see the SLURM Rosetta Stone: slurm.schedmd.com/rosetta.pdf
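• If you submit a job by mistake, or want to end this example early, cancel it with scancel followed by the job ID, e.g. scancel 56.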

To get back to your home directory you can use the Change Directory (cd) command with no arguments:

cd
