wiki:DevelopingWithDlx

How to develop and test code with the Dell dlx

The dlx is the University of Kentucky's Dell commodity cluster. It is described here. The system blog is here.

OpenMPI is available.

Slurm is the batch-mode job scheduler; example commands are sbatch and squeue.

The scratch disk /scratch/user_id provides a large amount of storage but is not backed up. Files are kept for only thirty days.


compilers

We have access to icc 13.0.0, gcc 4.4.6, and Open MPI 1.6.2 by default.

To build a non-MPI executable, run make in source (for g++), sys_gcc (for g++), or sys_icc (for icc).

To build the MPI-enabled executable, run make in source/sys_mpi_icc (the default mpiCC on the dlx is based on icc 13.0.0). The warning "feupdateenv is not implemented and will always fail" is benign and can be ignored.

An email from Ryan in 2012 July says he builds with gcc.

An email from Ge Zhang of 2012 Dec 9 says it is OK to use make -j 8. The login nodes (there are two) currently have 16 cores each, so it should be OK to use up to all 16 cores for building Cloudy; do not use more than 16.
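
As a sketch, a build on a login node could look like this; the checkout path is only an example, and you should pick the system directory that matches the compiler you want:

cd /home/gary/cloudy/trunk/source/sys_mpi_icc   # or sys_icc / sys_gcc for a non-MPI build
make -j 16                                      # the login nodes have 16 cores, so do not go beyond this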

The machine currently has 256 normal nodes with 16 Intel Xeon E5-2670 cores and 64 GiB of memory each, as well as fat nodes with 512 GiB of memory and GPU nodes. Cloudy should be run on the normal nodes, unless you need very large amounts of memory.

running the test suite

To run the test suite as a parallel batch job do

sbatch -n <nn> run_parallel.pl sys_gcc dlx

where <nn> is the number of CPUs you want. The run_parallel.pl script has a complete description of how to run on this machine. The number of processors <nn> is used by the load leveling across the test suite. Peter suggests <nn> = 16 (i.e. one node) on the regular compute nodes for all test suites.
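
For example, following Peter's suggestion of one regular compute node (the directory below is assumed to be one of the test suite directories in your checkout; adjust the path as needed):

cd /home/gary/cloudy/trunk/tsuite/auto
sbatch -n 16 run_parallel.pl sys_gcc dlx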

running a single model in batch mode

I created a script, brun, which is on my path. It contains

sbatch /home/gary/cloudy/trunk/source/sys_icc/cloudy.exe -r $1
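
Written out as a complete file (the shebang line is an assumption; only the sbatch line is given above), brun could look like:

#!/bin/csh
# submit a single Cloudy model; $1 is the input file name without the .in suffix
sbatch /home/gary/cloudy/trunk/source/sys_icc/cloudy.exe -r $1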

Then the input script model.in could be computed with the command

brun model

Note that you should not run single jobs on the dlx; this makes very inefficient use of the compute nodes!

a multi-way MPI grid/optimization run

Make sure you have the correct module loaded. An Intel-based Open MPI version will be loaded by default. To load a different MPI, do something like:

module load mpi/openmpi/gcc/default

You could do this in your login script.
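
To check what is loaded, or to see which MPI builds are offered, the standard module commands can be used (a sketch; the exact module names on the dlx may differ):

module list          # show the currently loaded modules
module avail mpi     # list the available MPI modules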

For the default MPI, use sys_mpi_icc under source. Running the MPI job is a two-step process: first create a batch script, then submit it using the batch processor.

The minimal batch script to run MPI Cloudy would be something like this:

#!/bin/csh
mpirun /home/gary/cloudy/trunk/source/sys_mpi_icc/cloudy.exe -r $1

If this is contained in mpirun.cs then it would be submitted as follows:

sbatch -n <nn> mpirun.cs feii

where <nn> is the number of processors you want. This will use an input file given by the last parameter on the sbatch command, feii.in in this case.
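
For longer runs it can be convenient to put Slurm options into the batch script itself rather than on the sbatch command line. A sketch, assuming standard Slurm directives (the job name and time limit are placeholders):

#!/bin/csh
#SBATCH -J feii_grid        # job name (example)
#SBATCH -t 24:00:00         # wall-clock time limit (example)
mpirun /home/gary/cloudy/trunk/source/sys_mpi_icc/cloudy.exe -r $1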


other Slurm options

squeue -u<name>

will list all jobs in the queue belonging to user <name>

scancel <nn>

will cancel the run with job ID number <nn>
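
For example (the user name is taken from the paths above and the job ID is a placeholder):

squeue -u gary        # list all of gary's queued and running jobs; the first column is the job ID
scancel 123456        # cancel the job with ID 123456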

Other queues

The high-memory (fat node) queue requires "-p FatComp" on the sbatch command line.
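
For example, to send the MPI batch script from above to the fat nodes (the CPU count is only an example):

sbatch -p FatComp -n 16 mpirun.cs feii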


Return to DeveloperPages

Return to main wiki page