
  1. Please ensure that your simulation runs on your own desktop/laptop first, and estimate the runtime required to achieve the statistics you need!
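    As a rough way to estimate the required runtime, you can time a short, low-statistics run locally and scale up. The instrument name, parameter and ray counts below are placeholders for your own setup:

    # time a short run with reduced statistics on your own machine
    time mcrun -n1e6 MyInstrument.instr par1=1.0
    # if 1e6 rays take e.g. 30 s, then 1e9 rays will need roughly 1000x as long on a single core
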
  2. When transferring input data to the cluster - please ensure that all necessary files are present, i.e. your
    1. instrument file
    2. any extra needed components
    3. any extra c-codes
    4. any extra datafiles
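    For example, the necessary files could be transferred in one go with scp; the username, hostname, file names and target directory below are placeholders for your own account and the cluster login node:

    # copy instrument, components, c-code and datafiles to the cluster
    scp MyInstrument.instr My_component.comp helper.c mydata.dat username@cluster.esss.dk:~/simulations/
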
  3. Use mpi-selector to select a suitable MPI implementation; good options for use with McStas are highlighted below (the Intel builds are usually about 2x faster than GCC). This must be done at least once to set up your account. Afterwards, log off and back on to the cluster.

    [willend@r1n2 ~]$ mpi-selector --list
    mvapich2_gcc_qlc-1.6
    mvapich2_intel_qlc-1.6
    mvapich_gcc_qlc-1.2.0
    mvapich_intel_qlc-1.2.0
    openmpi_gcc_qlc-1.4.3    <-- Good
    openmpi_intel_qlc-1.4.3  <-- Good
    qlogicmpi_gnu-3.0.1
    qlogicmpi_intel-3.0.1
    qlogicmpi_pathscale-3.0.1
    qlogicmpi_pgi-3.0.1
    [willend@r1n2 ~]$ mpi-selector --set openmpi_gcc_qlc-1.4.3
    Defaults already exist; overwrite them? (y/N) y

  4. Load the McStas version of your choice using e.g. the command below (which will give you the newest installed McStas)
    [willend@r1n2 ~]$ module load mcstas/
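    If you prefer a specific version, you can list the installed McStas modules and load one explicitly; the version number below is only an illustration of the syntax:

    [willend@r1n2 ~]$ module avail mcstas
    [willend@r1n2 ~]$ module load mcstas/2.0
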
  5. Compile your instrument on the compile node using a command like

    [willend@r1n2 selftest_Abe3]$ mcrun -c --mpi -n0 BNL_H8.instr

    You may see error output like the text below; this can be safely ignored when compiling, since MPI processes are only allowed to run via the SLURM scheduler.

    Error obtaining unique transport key from ORTE (orte_precondition_transports not present in
    the environment).
    Local host: r1n2.esss.dk
    --------------------------------------------------------------------------
    *** The MPI_Init() function was called before MPI_INIT was invoked.
    *** This is disallowed by the MPI standard.
    --------------------------------------------------------------------------
    It looks like MPI_INIT failed for some reason; your parallel process is
    likely to abort. There are many reasons that a parallel process can
    fail during MPI_INIT; some of which are due to configuration or environment
    problems. This failure appears to be an internal failure; here's some
    additional information (which may only be relevant to an Open MPI
    developer):
    PML add procs failed
    --> Returned "Error" (-1) instead of "Success" (0)
    --------------------------------------------------------------------------
    *** Your MPI job will now abort.
    [r1n2.esss.dk:11016] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
  6. You are welcome to use this Perl script to help you generate a batch file for the cluster. It defaults to running with the newest installed McStas, so you will have to edit the output if you want to run with an earlier version. It takes a few input arguments:
    [willend@r1n2 ~]$ ./ess_slurm_mcsub.pl --help
    Usage: ./ess_slurm_mcsub.pl [options] [mcrun params]
    -h --help Show this help
    -rN --runtime=N Specify maximum runtime (hours) [default 1]
    -qQNAME --queue=QNAME Specify wanted SLURM queue [default 'express']
    -e<mail> --email=<mail> Specify address to notify in reg. sim status [default none]
    --nodes=NUM Specify wanted number of nodes [default 1]
    --name=NAME Specify openPBS job name [default "McSub_<USERNAME>_<TIMESTAMP>"]

    After running ./ess_slurm_mcsub.pl, NAME.batch is ready for submission using the sbatch command.
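    As an illustration, assuming the --name option also sets the name of the generated batch file, an invocation could look like this; the runtime, job name and mcrun parameters are placeholders to be adapted to your own simulation:

    [willend@r1n2 ~]$ ./ess_slurm_mcsub.pl -r2 --name=my_sim -n1e8 BNL_H8.instr
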

  7. Please use step 6 with -qexpress to make your batch file for a test run on the express queue, to verify that everything works!
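    Once the batch file has been generated, it can be submitted and monitored with the standard SLURM tools, e.g. (the batch file name here is whatever the script produced for you in step 6):

    [willend@r1n2 ~]$ sbatch my_sim.batch
    [willend@r1n2 ~]$ squeue -u willend
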

  8. Having seen the output from step 7, you should be able to request a production queue and a longer runtime using the -q and -r options of the script.
  9. It is useful to receive notifications from the cluster jobs on start, termination etc.; please use the -e option to set a relevant recipient address.
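    Putting steps 8 and 9 together, a production submission might be generated along these lines; the queue name, runtime, node count, e-mail address and mcrun parameters below are placeholders, so check which production queues are actually available on the cluster:

    [willend@r1n2 ~]$ ./ess_slurm_mcsub.pl -qlong -r24 --nodes=4 -euser@esss.dk --name=my_production -n1e10 BNL_H8.instr
    [willend@r1n2 ~]$ sbatch my_production.batch
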


In case of issues with any of the above steps, feel free to contact support@esss.dk and peter.willendrup@esss.se.

