- Get an account on the cluster - and consult the DMSC computing cluster page for how to access it!
- Please ensure that your simulation runs on your own desktop / laptop first - and estimate the runtime required to achieve your necessary statistics (see the timing sketch below)!
- When transferring input data to the cluster - please ensure that all necessary files are present (see the copy example below), i.e. your
- instrument file
- any extra needed components
- any extra c-codes
- any extra datafiles
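A quick way to estimate the required runtime is to time a short run locally and scale linearly with the neutron count. This is only a sketch, reusing the BNL_H8 example instrument and parameters that appear further down this page:
$ time mcrun BNL_H8.instr -n1e6 Lambda=2.37
# If 1e6 neutrons take ~10 s on one core, 1e9 neutrons need ~10000 core-seconds,
# i.e. roughly 10000 s divided by (nodes x cores per node) when run under MPI.
For the transfer itself, copying the whole instrument folder in one go is usually easiest. The commands below are a sketch; the login-node hostname is a placeholder, so use the address given to you by DMSC:
$ scp -r my_instrument_dir/ <username>@<cluster-login-node>:
# or, to allow resuming an interrupted transfer:
$ rsync -av my_instrument_dir/ <username>@<cluster-login-node>:my_instrument_dir/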
- Set up a meaningful MPI for use with mcstas. Where mpi-selector is used, good options are marked below; intel is usually about 2 x faster than gcc. This must be done at least once to set up your account, after which you should log off and back on to the cluster:
[willend@r1n2 ~]$ mpi-selector --list
mvapich2_gcc_qlc-1.6
mvapich2_intel_qlc-1.6
mvapich_gcc_qlc-1.2.0
mvapich_intel_qlc-1.2.0
openmpi_gcc_qlc-1.4.3 <-- Good
openmpi_intel_qlc-1.4.3 <-- Good
qlogicmpi_gnu-3.0.1
qlogicmpi_intel-3.0.1
qlogicmpi_pathscale-3.0.1
qlogicmpi_pgi-3.0.1
[willend@r1n2 ~]$ mpi-selector --set openmpi_gcc_qlc-1.4.3
Defaults already exist; overwrite them? (y/N) y
Where environment modules are used instead, load the MPI module of choice; one of the openmpi modules is recommended:
[willend@r1n2 ~]$ module load openmpi<tab>
openmpi/3.0_gcc1020 openmpi/3.0_intel17 openmpi/4.0_gcc920
openmpi/3.0_gcc831  openmpi/4.0_gcc1020 openmpi/4.0_intel17
openmpi/3.0_gcc920  openmpi/4.0_gcc831
[willend@r1n2 ~]$ module load openmpi/4.0_gcc831
The recommendation is module load openmpi/4.0_gcc831.
- Load the McStas version of choice using e.g. the command below (which will give you the newest installed mcstas):
[willend@r1n2 ~]$ module load mcstas/
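To see which versions are actually installed before picking one, and optionally to re-load your choices automatically at every login, something along the lines below can be used. This is a sketch: module avail is standard environment-modules usage, the 2.7.1 version number is only an example, and it is an assumption that ~/.bashrc is sourced on login to this cluster:
[willend@r1n2 ~]$ module avail mcstas                                 # list the installed mcstas modules
[willend@r1n2 ~]$ module load mcstas/2.7.1                            # pick a specific version instead of the newest
[willend@r1n2 ~]$ echo "module load openmpi/4.0_gcc831" >> ~/.bashrc  # re-load the modules automatically at next login
[willend@r1n2 ~]$ echo "module load mcstas/" >> ~/.bashrc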
- Compile your instrument on the compile node using a command like:
[willend@r1n2 selftest_Abe3]$ mcrun -c --mpi=1 -n0 BNL_H8.instr
or
[willend@r1n2 ~]$ mcrun.pl -c --mpi -n0 BNL_H8.instr
You may experience some error output like the example below, but this can safely be ignored when compiling, since MPI processes are only allowed to run via the SLURM scheduler.
Error obtaining unique transport key from ORTE (orte_precondition_transports not present in
the environment).
Local host: r1n2.esss.dk
--------------------------------------------------------------------------
*** The MPI_Init() function was called before MPI_INIT was invoked.
*** This is disallowed by the MPI standard.
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort. There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems. This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):
PML add procs failed
--> Returned "Error" (-1) instead of "Success" (0)
--------------------------------------------------------------------------
*** Your MPI job will now abort.
[r1n2.esss.dk:11016] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
- On the ESS cluster, and optionally also on your own laptop, you can use the mcsub_slurm script to help you generate a batch file for the cluster. It defaults to running with the newest installed mcstas, so you will have to edit the output for running with previous versions. It takes a few input arguments:
[willend@r1n2 ~]$ mcsub_slurm --help
Usage: /mnt/lustre/apps/mcstas/2.4.1/bin/mcsub_slurm [options] [mcrun params]
-h --help Show this help
-rN --runtime=N Specify maximum runtime (hours) [default 1]
-qQNAME --queue=QNAME Specify wanted SLURM queue [default 'express']
--mpimodule=MODULE Specify wanted MPI module [default 'openmpi/3.0_gcc540']
-e<mail> --email=<mail> Specify address to notify in reg. sim status [default none]
--nodes=NUM Specify wanted number of nodes [default 1]
--name=NAME Specify slurm job name [default "McSub_<USERNAME>_<TIMESTAMP>"]
After running mcsub_slurm, NAME.batch is ready for submission using the sbatch command. Please make a test run on the express queue to see that everything works - it is intended for exactly that purpose! (On McStas installations v. 2.4.1 and newer, the template batchfile writing is also available via File → Configuration in mcgui on Linux / Python → Preferences on macOS - meta-comma is a shortcut.)
To generate a batch file repeating a simulation that worked on your desktop, use the script "in front of" the mcrun command you ran on your desktop, e.g.:
[willend@r1n2 ~]$ mcsub_slurm -qquark mcrun BNL_H8.instr -n1e6 Lambda=2.37
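Once NAME.batch has been written, a typical submit-and-monitor cycle looks like the sketch below; sbatch, squeue and sacct are standard SLURM commands, and the job id is made up:
[willend@r1n2 ~]$ sbatch NAME.batch
Submitted batch job 123456
[willend@r1n2 ~]$ squeue -u $USER      # is the job still pending or already running?
[willend@r1n2 ~]$ sacct -j 123456      # state and elapsed time once it has started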
- Having seen the output from your test run, you should be able to request a production queue and a longer runtime using the -q and -r options of the script. (Or simply edit the script output to your taste and need - see the batch-file sketch below!)
- It is useful to receive output from the cluster jobs on start / termination etc. - please use the -e option to set a relevant recipient address.
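Putting the options together, a production-style invocation could look like the line below; the queue name, runtime, number of nodes, address and simulation parameters are all just examples:
[willend@r1n2 ~]$ mcsub_slurm -qquark -r24 --nodes=4 -eyou@example.org mcrun BNL_H8.instr -n1e9 Lambda=2.37
If you prefer to edit the generated batch file by hand, the sketch below shows the kind of fields you will typically find in it. It is NOT the exact output of mcsub_slurm: the #SBATCH directives are standard SLURM, but the values and the final mcrun line are illustrative only:
#!/bin/bash
#SBATCH --job-name=McSub_willend_example   # what --name sets
#SBATCH --partition=quark                  # the queue, what -q sets
#SBATCH --time=24:00:00                    # maximum runtime, what -r sets
#SBATCH --nodes=4                          # what --nodes sets
#SBATCH --mail-user=you@example.org        # notification address, what -e sets
#SBATCH --mail-type=BEGIN,END,FAIL

module load openmpi/4.0_gcc831 mcstas/

# one MPI process per allocated core (the process count is illustrative)
mcrun --mpi=128 BNL_H8.instr -n1e9 Lambda=2.37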
In case of issues with any of the above steps, feel free to contact support@esss.dk and peter.willendrup@esss.se.