Write the code. Change the world.

Talk is cheap. Show me the code.

// Less talk, show me the code.


I've decided to start blogging on 优快云!

I hope I can keep it up!


Do your best!




This is my .mdp file:

title = GROMOS 54A7 BSLA MD simulation
integrator = md
dt = 0.001
nsteps = 50000000 ; 50 ns

; OUTPUT CONTROL OPTIONS
nstxout = 0 ; suppress .trr output
nstvout = 0 ; suppress .trr output
nstlog = 500000 ; Writing to the log file every 500 ps
nstenergy = 500000 ; Writing out energy information every 500 ps
nstxtcout = 500000 ; Writing coordinates every 500 ps

cutoff-scheme = Verlet
nstlist = 10
ns-type = Grid
pbc = xyz
rlist = 1.0

coulombtype = PME
pme_order = 4
fourierspacing = 0.16
rcoulomb = 1.0
vdw-type = Cut-off
rvdw = 1.0

Tcoupl = v-rescale
tc-grps = Protein Non-Protein
tau_t = 0.1 0.1
ref_t = 298 298
DispCorr = EnerPres

Pcoupl = C-rescale
Pcoupltype = Isotropic
tau_p = 2.0
compressibility = 4.5e-5
ref_p = 1.0

gen_vel = no
constraints = none
continuation = yes
constraint_algorithm = lincs
lincs_iter = 2
lincs_order = 2

This is the error output:

Executable:   /public/software/gromacs-2023.2-gpu/bin/gmx_mpi
Data prefix:  /public/software/gromacs-2023.2-gpu
Working dir:  /public/home/zkj/BSLA_in_Menthol_and_Olecicacid/run1
Command line:
  gmx_mpi mdrun -deffnm npt-nopr -v -update gpu -ntmpi 0 -ntomp 1 -nb gpu -bonded gpu -gpu_id 0

Reading file npt-nopr.tpr, VERSION 2023.2 (single precision)
Changing nstlist from 10 to 100, rlist from 1 to 1.065
Update groups can not be used for this system because there are three or more consecutively coupled constraints

Program:     gmx mdrun, version 2023.2
Source file: src/gromacs/taskassignment/decidegpuusage.cpp (line 786)
Function:    bool gmx::decideWhetherToUseGpuForUpdate(bool, bool, PmeRunMode, bool, bool, gmx::TaskTarget, bool, const t_inputrec&, const gmx_mtop_t&, bool, bool, bool, bool, bool, const gmx::MDLogger&)

Inconsistency in user input:
Update task on the GPU was required, but the following condition(s) were not satisfied:
The number of coupled constraints is higher than supported in the GPU LINCS code.

For more information and tips for troubleshooting, please check the GROMACS website at http://www.gromacs.org/Documentation/Errors

MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them.

[warn] Epoll MOD(1) on fd 8 failed. Old events were 6; read change was 0 (none); write change was 2 (del): Bad file descriptor
[warn] Epoll MOD(4) on fd 8 failed. Old events were 6; read change was 2 (del); write change was 0 (none): Bad file descriptor
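One way to read this error: the run forces the integration (update) step onto the GPU with -update gpu, but the system's constraint network couples more constraints together than the GPU LINCS implementation supports. A minimal workaround sketch, assuming everything else in the setup stays as above, is to keep nonbonded and bonded work on the GPU and let only the update step run on the CPU:

  gmx_mpi mdrun -deffnm npt-nopr -v -update cpu -ntmpi 0 -ntomp 1 -nb gpu -bonded gpu -gpu_id 0

Simply dropping -update gpu works too, since the default -update auto then falls back to the CPU when GPU update is unsupported. If GPU update is really needed, the number of coupled constraints has to come down, for example by constraining only hydrogen-containing bonds (constraints = h-bonds in the .mdp); note that this option only affects constraints generated from bonds, not any explicit [ constraints ] entries in the topology, and whether it is appropriate here depends on the force field setup and the 1 fs time step.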