Singularity

Contents

  1. Summary and Version Information
  2. What is Singularity and why should I care?
  3. Using containerized applications
    1. Getting information about an existing container
    2. Working with third-party containers
      1. Base home directory does not exist error
      2. Kernel too old error
    3. Running Singularity containers from a script
  4. Building your own container
    1. For Singularity version < 2.4
    2. For Singularity version 2.4 and higher
  5. Useful links, more info re Singularity.

Summary and Version Information

Package: Singularity
Description: Singularity software containers
Categories: Programming/Development

Version  Module tag         Availability*                                     GPU Ready  Notes
2.3.1    singularity/2.3.1  Non-HPC Glue systems, Deepthought2 HPCC, RedHat6  N
2.4.2    singularity/2.4.2  Non-HPC Glue systems, Deepthought2 HPCC, RedHat6  N
2.6.0    singularity/2.6.0  Non-HPC Glue systems, Deepthought2 HPCC, RedHat6  N
3.5.3    singularity/3.5.3  Non-HPC Glue systems, Deepthought2 HPCC, RedHat6  N

Notes:
*: A package labelled as "available" on an HPC cluster can be used on the compute nodes of that cluster. Even software not listed as available on an HPC cluster is generally available on the login nodes of the cluster (assuming it is available for the appropriate OS version, e.g. RedHat Linux 6 for the two Deepthought clusters). This is because the compute nodes do not use AFS and instead have local copies of the AFS software tree, so we only install packages there as requested. Contact us if you need a version listed as not available on one of the clusters.

In general, you need to prepare your Unix environment to be able to use this software. To do this, either:

  • tap TAPFOO
OR
  • module load MODFOO

where TAPFOO and MODFOO are one of the tags in the tap and module columns above, respectively. The tap command will print a short usage text (use -q to suppress this, which is needed in startup dot files); you can get a similar text with module help MODFOO. See the documentation on the tap and module commands for more information.

For packages which are libraries which other codes get built against, see the section on compiling codes for more help.

Tap/module commands listed with a version of current will set up what we consider the most current stable and tested version of the package installed on the system. The exact version is subject to change with little if any notice, and might be platform dependent. Versions labelled new represent a newer version of the package which is still being tested by users; if stability is not a primary concern you are encouraged to use it. Those with versions listed as old set up an older version of the package; you should only use this if the newer versions are causing issues. Old versions may be dropped after a while. Again, the exact versions are subject to change with little if any notice.

In general, you can abbreviate the module tags. If no version is given, the default current version is used. For packages with compiler/MPI/etc. dependencies, if a compiler module or MPI library was previously loaded, the module command will try to load the build of the package matching those dependencies. If you specify the compiler/MPI dependency explicitly, it will attempt to load the corresponding compiler/MPI library for you if needed.
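
For example, to set up one of the Singularity versions listed above for your current shell session, you could run something like:

  module load singularity/3.5.3

or simply module load singularity to get the default (current) version.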

What is Singularity and why should I care?

Singularity is a "containerization" system. Basically, it allows an application to run within a "container" which holds all of its software dependencies. This allows the application to come with its own versions of various system libraries, which can be older or newer than the libraries provided by the operating system. The Deepthought2 cluster, for example, is currently running a release of RedHat Enterprise Linux 6 (RHEL6). Some applications really want libraries that are not available on that version of RedHat, and really want some version of Ubuntu or Debian instead. While one can sometimes get around these constraints by compiling from source, it does not always work.

Furthermore, sometimes applications are very picky about the exact versions of the libraries or other applications that they depend on, and will not work (or perhaps even worse, give erroneous results) if used with even slightly different versions. Containers allow each application to "ship" with the exact versions of everything it wants. They can even make the RHEL6 system running on the Deepthoughts look like Ubuntu 16.04 or some other variant of Linux to the application.

There are limitations, of course. Containers of any type still share the OS kernel of the host system, including all the drivers, and the container cannot change that. Fortunately, most end user applications are not very fussy about the kernel version. The "containment" of containers can also be problematic at times --- containers by design create an isolated environment just for a particular application, containing all of its dependencies. If you need to use libraries from a containerized package "foo" in another application "bar", you basically need a new container which has both "foo" and "bar" installed.

Using containerized applications

We currently have only a few packages distributed as containers, but that number is likely to increase over the coming months, especially in certain fields of research that tend to have more difficult-to-install software. So, depending on your field of study, you might find yourself dealing with containerized applications soon.

The good news is that hopefully you won't notice much of a difference. The container should still be able to access your home directory and lustre directory, and we provide wrapper scripts to launch the program within the container for you that behave very much like the application would in a native install. So with luck, you won't even notice that you are using a containerized application.

The biggest issues arise if you need to have a containerized application interact with another application on the system (containerized or not, unless it happens to exist in the same container image as the first application). In general, this will not work. In such cases, it is best to break the process into multiple steps such that at most one such application is needed in each step, if this is possible. Otherwise, contact us and we will work with you to try to get around this.

Getting information about an existing container

Sometimes one has a Singularity image for a container and would like to know more about how the container was actually built. E.g., you have a container that provides the "foo" application in python, but want to know if the "bar" python module is available in it. Although testing directly is always a possibility, it is not always the most convenient.

As of version 2.3.x of Singularity, there is an inspect subcommand which will display various metainformation about the image. If your container image is in the file /my/dir/foo.img, you can use the command singularity inspect -d /my/dir/foo.img to get the recipe that was used to generate the file.

The command singularity inspect -e /my/dir/foo.img will show the environment that will be used when you run a program in the container. And the commands singularity inspect -l /my/dir/foo.img or simply singularity inspect /my/dir/foo.img will list any labels defined for the container. The labels are defined by the creator of the container to document it, and while they are a good place to find information about a well-documented container, not all containers are as well documented as they should be.
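
To recap (the image path /my/dir/foo.img here is just a placeholder):

  singularity inspect -d /my/dir/foo.img   # show the recipe (definition file) used to build the image
  singularity inspect -e /my/dir/foo.img   # show the environment used when running programs in the container
  singularity inspect -l /my/dir/foo.img   # show the labels (also the default if no flag is given)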

Containers built for Singularity version 2.4 or higher can house multiple applications in the same container using the Standard Container Integration Format (SCI-F). These various "applications" are accessed with the --app or -a flag to the standard singularity subcommands, followed by the application name. To get a list of all defined application names for a container, use the singularity apps /my/dir/foo.img command. Different applications within the container can have different labels, environments, etc., so in the above examples you would want to look at both the environment/labels/etc. of the container as a whole AND those of the specific application.

The above discussion applies for all singularity containers, regardless of their origin. If you are interested in getting more information about a singularity container installed by Division of IT staff on one of the Deepthought Clusters, the following tips are provided.

  • The module load command for a Singularity containerized application will typically define an environmental variable FOO_SING_IMAGEFILE, where FOO is the name (or abbreviated name) of the application. This is what you should give to the singularity inspect command.
  • The containers created by Division of IT staff will typically have a decent amount of information in the labels. In particular, for python based applications, there will typically be a bunch of labels of the sort PYTHONv_PACKAGES_n where v is 2 or 3 and n is an ordinal listing all the python packages provided.
  • E.g., to list all the python2 packages in the "keras" application, one could do something like

    bash
    module load keras
    module load singularity
    # Inspect the container as a whole, then each SCI-F app within it,
    # and pull out the PYTHON2_PACKAGES_* labels from the output
    {
        singularity inspect $KERAS_SING_IMAGEFILE
        for app in `singularity apps $KERAS_SING_IMAGEFILE`
        do
            singularity inspect --app $app $KERAS_SING_IMAGEFILE
        done
    } | grep PYTHON2_PACKAGES | sed -e 's/^.*: //' | sort | uniq
    exit

    Working with third-party containers

    One advantage of using containers is that it tends to make software more portable between systems. While not perfect by any means, it is often possible to use containers made by other people, even at other institutions and built for other clusters and possibly other Linux distributions, on clusters at UMD.

    The Singularity Hub contains many software packages. These can often be used by giving a URL starting with shub:// as the container name. E.g., one could run a simple "hello world" container with code like

    login-1: module load singularity
    login-1: singularity run --home /homes/${USER}:/home/${USER} shub://vsoch/hello-world
    Progress |===================================| 100.0% 
    RaawwWWWWWRRRR!!
    login-1:

    This downloads the "hello-world" container from the Singularity Hub and then runs it, producing the output shown. The --home argument is discussed below.

    Singularity also has some support for importing containers from Docker Hub, using a docker:// prefix in the URL. Personally, I have had mixed results using Docker containers --- generally I can get them to work, but sometimes it requires some effort. This may have improved with newer versions of Singularity.
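
    For example (the ubuntu:16.04 Docker image here is chosen just for illustration), one might try something like:

    login-1: module load singularity
    login-1: singularity exec --home /homes/${USER}:/home/${USER} docker://ubuntu:16.04 cat /etc/lsb-release

    The --home argument is needed for the same reason as in the shub:// example above, and is discussed below.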

    While downloading and running a container on the fly from the Singularity or Docker Hub is convenient for some quick tests, it is really not efficient if you plan to run the container image repeatedly, as the container is downloaded each time it is run. Furthermore, it might even cause reproducibility issues if the author updates the container between your runs (i.e. it is possible that later runs might be using a different version of the container). In general, if you are going to do serious, production style work, it is probably best to download the container to local disk first, and then run the local copy of the container. E.g.

    login-1: module load singularity
    login-1: singularity pull shub://vsoch/hello-world
    Progress |===================================| 100.0% 
    Done. Container is at: /lustre/payerle/singularity-containers/vsoch-hello-world-master-latest.simg
    login-1: singularity run  --home /homes/${USER}:/home/${USER} /lustre/payerle/singularity-containers/vsoch-hello-world-master-latest.simg
    RaawwWWWWWRRRR!!
    login-1: singularity run  --home /homes/${USER}:/home/${USER} /lustre/payerle/singularity-containers/vsoch-hello-world-master-latest.simg
    RaawwWWWWWRRRR!!
    login-1: 

    Note in the above that the container image was downloaded only once, in the pull subcommand, and was run without the download in the run subcommands. You can give full or relative paths to the downloaded container.

    Sometimes there are issues with containers brought in from outside UMD. Some issues can be worked around, others cannot be. We discuss some of these below:

    1. Base home directory does not exist errors
    2. Kernel too old errors
    Base home directory does not exist errors

    The Deepthought2 cluster is currently running Red Hat Enterprise Linux 6. This version of the Linux operating system does not support overlays in Singularity, so when Singularity binds a directory from the host filesystem into the container filesystem, the directory must already exist in the container filesystem or an error will occur. Although these errors can occur for any filesystem bound from the host into the container, the most common issue is with home directories.

    The Singularity run and related commands by default will try to bind your home directory into the container (which is generally what you want, so that you can e.g. store results back to the native host filesystem). However, on Deepthought2 and Glue, home directories are under /homes. Most other systems place home directories under /home (note the plural vs singular), and therefore many containers built outside of UMD have a /home directory, but no /homes. When you try to run such a container on Deepthought2 or Glue with default arguments, it will return an error like

    login-1: module load singularity
    login-1: singularity run  shub://vsoch/hello-world
    Progress |===================================| 100.0% 
    ERROR  : Base home directory does not exist within the container: /homes
    ABORT  : Retval = 255

    This issue can occur with the shell and exec subcommands as well. There are several ways to address this issue; they work for the run, shell and exec subcommands alike, but we only discuss run below; the differences needed for the other subcommands should be straightforward:

    1. You can add the argument --no-home after the run subcommand. This causes singularity to skip the binding of your home directory, which avoids this error. This is usually not desirable because you typically want your home directory available, but it can be useful with the shell subcommand to see where the container expects to put home directories if procedure #2 does not work (see the short example after this list).
    2. You can add the argument --home /homes/${USER}:/home/${USER} to the singularity run command. E.g.
      singularity run  --home /homes/${USER}:/home/${USER} PATH-TO-CONTAINER

      This tells singularity that your home directory (/homes/${USER}) should be mapped to /home/${USER} inside the container. As the /home directory typically exists in the container, this usually resolves the error (if not, you can use procedure #1 to find a suitable "home" directory in the container and replace the second half of the --home bind path accordingly).

    3. If you plan to modify the container anyway, then you can just create a /homes directory at the root of the new container. This new container should then be able to run without the --home binding arguments.
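
    As a short illustration of the first two approaches (PATH-TO-CONTAINER is a placeholder for your container image):

    login-1: singularity exec --no-home PATH-TO-CONTAINER ls -d /home /homes
    login-1: singularity run --home /homes/${USER}:/home/${USER} PATH-TO-CONTAINER

    The first command skips the home directory binding entirely and simply shows which of /home and /homes exist inside the container (complaining about any that do not); the second maps your actual home directory onto /home/${USER} inside the container.
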
    Kernel too old errors

    Singularity containers can allow one to run binaries created for other Linux distributions and/or versions, to a point. Although one can replace most of the libraries, etc. that the binary sees, one cannot change the kernel that is running. So no matter which latest and greatest linux distribution your container was built with, as long as the Deepthought2 cluster is using Red Hat Enterprise Linux 6, your container will be running a RHEL6 kernel. And this is the root of this error.

    Basically, the container is using libraries, and in particular glibc (the core C library for Linux), from the distribution and version of Linux used to bootstrap the container. The glibc library has many functions which interact directly with the kernel, and at some point newer glibc libraries no longer support older kernels.

    Unfortunately, there is no simple fix for this issue. Basically, either the OS on the cluster needs to be upgraded (this is non-trivial and disruptive, and there are no plans for this at this time), or the OS in the container needs to be downgraded. Typically, we find that on the Deepthought2 cluster you can use containers built for:

    • Ubuntu 14 (trusty) or Ubuntu 16 (xenial)
    • I believe Debian 7 (wheezy) or 8 (jessie)
    • I believe RedHat Enterprise Linux 7

    I believe that attempts to use containers built for Ubuntu 18 (cosmic) or Debian 9 (stretch) will fail with the "kernel too old" error.

    Running Singularity containers from a script

    While running Singularity image files interactively can be useful at times, for serious production work you will usually wish to run from within a script. This is particularly true on the HPC clusters, wherein you will typically wish to submit a job which launches an application in a singularity container.

    For containers installed by systems staff and accessible via the modules command, we provide wrapper scripts that tend to make things simple in the most common use cases. E.g., currently the tensorflow application is made available as a singularity container. Tensorflow is a python based package, and the command module load tensorflow adds a wrapper script tensorflow to your path which launches a python interpreter inside the container with the tensorflow application installed. (Actually, several wrapper scripts are installed: the default tensorflow (and the equivalent tensorflow-python2) launches a Python2 interpreter, and tensorflow-python3 does the same but with a Python3 interpreter. Plus there are wrappers for tensorboard and saved_model_cli.) Any arguments given to the wrapper script are passed to the underlying command in the container. So to use it in a slurm job script, you can simply do something like:

    #!/bin/bash
    #SBATCH -n 20
    #SBATCH -t 2:00
    
    . ~/.profile
    module load tensorflow
    tensorflow my-python-script.py

    The tensorflow command starts a python interpreter in the container and passes the my-python-script.py argument to that python interpreter.

    To use a (fictitious) MPI-enabled application foo made available as a container, one would similarly do something like:

    #!/bin/bash
    #SBATCH -n 200
    #SBATCH --mem-per-cpu 2000
    #SBATCH -t 60
    
    . ~/.profile
    module load gcc/6.1.0
    module load openmpi/1.10.2
    module load foo
    
    mpirun /bin/bash foo args-to-foo

    The mpirun command will launch the requested number of copies of the foo script on the requested nodes (the /bin/bash is needed because the foo wrapper is a script, not a binary executable), which will invoke an instance of the real foo binary inside a container for each MPI task.

    Sometimes you might want to use singularity directly instead of using the wrapper scripts, or you are using a third-party image or a container which you built yourself and which does not have a wrapper script. This is not problematic, but requires a slightly more complicated job script, mostly due to setting up the proper directory bindings and because the script when run by Slurm will not have a standard input attached. The latter can cause some confusion because something like

    #!/bin/bash
    #SBATCH -n 1
    #SBATCH --mem 4096
    #SBATCH -t 60
    
    #
    # BAD EXAMPLE: python runs OUTSIDE the container
    #
    
    . ~/.profile
    module load tensorflow
    singularity shell $TFLOW_SING_IMAGEFILE
    python -c "import tensorflow"
    

    will fail reporting that no tensorflow module was found. That is because the python command above does NOT run within the container; the singularity shell command starts a shell in the container, reading commands from stdin. But there is no stdin attached to the script, so the shell exits as soon as it starts, and the script continues (natively on the host, not in a container) to run the native python command, which does not have tensorflow installed.

    Generally, if you wish to have a command in the script process additional content as coming from stdin, you need to do input redirection. This is almost always what you want with the shell subcommand, but might be useful with some of the others. So the above example should more properly be written as

    #!/bin/bash
    #SBATCH -n 1
    #SBATCH --mem 4096
    #SBATCH -t 60
    
    #
    # BETTER EXAMPLE: python runs inside the container, but /lustre, GPU drivers not bound to container
    #
    
    . ~/.profile
    module load tensorflow
    singularity shell $TFLOW_SING_IMAGEFILE <<EOF
    python -c "import tensorflow"
    EOF
    

    Note the <<EOF at the end of the singularity command; this causes everything between the <<EOF and the closing EOF to be passed to the singularity command as standard input. (The EOF label is an arbitrary label; just be sure to use the same label after the << and alone on the line ending the input, with no spaces after the << or at the start of the closing line.) The python command gets passed as part of stdin to the singularity command, so it runs inside the container, as expected.

    The above example still has a couple of issues, though. While singularity will automatically bind your home directory for you (although see the section about changing the home directory binding if you do not have a /homes directory in the container), it does NOT automatically bind your lustre directory. You will likely want to do that, which will require passing the argument --bind /lustre to the singularity command (after the run, exec, or shell subcommand but before the container name).

    If you are planning to use GPUs/cuda, you need to also bind the GPU drivers into the container. The GPU drivers can be different from node to node (and should match the driver in the kernel), so it is best not to place these into the container. System installed containers expect the GPU drivers to be in /usr/local/nvidia in the container; the real drivers are installed in /usr/local/nvidia/${NVIDIA_VERSION} on the host (where $NVIDIA_VERSION should be set for you already on the host). To bind this, you would want something like --bind /usr/local/nvidia/${NVIDIA_VERSION}:/usr/local/nvidia.

    Combining all this, our final version would be

    #!/bin/bash
    #SBATCH -n 1
    #SBATCH --mem 4096
    #SBATCH -t 60
    
    #
    # GOOD EXAMPLE: python runs inside the container, /lustre, and GPU drivers bound to container
    #
    
    . ~/.profile
    module load tensorflow
    singularity shell \
    	--bind /lustre \
    	--bind /usr/local/nvidia/${NVIDIA_VERSION}:/usr/local/nvidia \
    	$TFLOW_SING_IMAGEFILE <<EOF
    python -c "import tensorflow"
    EOF

    Building your own containers

    One advantage of Singularity over other containerization systems is that we can allow you to run containers not built by the Division of Information Technology at the University of Maryland. So you can in theory get Singularity containers built elsewhere, copy them to a Glue or HPC system, and run them. This includes Singularity containers that you build yourself.

    WARNING
    Singularity containers will not run reliably from AFS. HPC users are encouraged to place them in their home directory (if small enough) and/or in lustre. On non-HPC systems, you will probably need to place them on a local disk (e.g. /export).

    In order to build a Singularity container, you will need root access to a system with Singularity installed. Chances are, that is not a Glue/TerpConnect/HPC/DIT maintained system, but something like a desktop you installed Ubuntu or Fedora on. So the first step is to install Singularity on it. You can search for prebuilt binary packages, or you can follow the installation from source instructions; the latter are fairly straightforward.

    Once Singularity is installed, you can generate images for use with Singularity on DIT maintained systems. The steps to do so depend on the version of Singularity you are using (i.e. on the system you have root on). Glue/Terpconnect and the Deepthought HPC clusters support Singularity version 2.4.2 at the time of this writing ("module load singularity/2.4.2") --- images created with earlier versions of singularity should be runnable, but images created with newer versions might not be. (At the time of writing, 2.4.2 is the latest version released.)
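
    For example, on one of the clusters you could verify which version a given module provides with something like:

    login-1: module load singularity/2.4.2
    login-1: singularity --version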

    WARNING
    Singularity containers can to some extent masquerade as other Linux distributions/versions, but there are limits. The native kernel on the host system is the same kernel that appears within the container, and this can cause problems if the Linux distro used in the container is too much newer than the host system. See the section on kernel too old errors for more information. We recommend that for the Deepthought2 cluster, currently running Red Hat Enterprise Linux 6, you stick to images using Ubuntu 16 (xenial), Debian 8 (jessie) or RedHat 6.

    Building images with Singularity versions 2.x < 2.4

    For Singularity versions 2.2 and 2.3, the basic steps are:

    1. Create a blank image
    2. Populate the image

    Creating a blank image is easy: just use the singularity create command. You will need to run this as root; either su to root before running the command, or prefix the command with sudo. Assuming you opt for the latter, do something like

    my-linux-box> sudo singularity create -s 4096 ./my-sing-image.img

    This will create a 4 GiB (4096 MiB) image in the current directory named my-sing-image.img. The default size, if you omit the -s argument, is 768 MiB, which is probably too small. Although this will try to make a sparse file (i.e. the newly created file from the above example will report a size of 4 GiB, but du on the file will report much less; as content is added, the discrepancy becomes smaller), the "sparseness" of the file can be lost as the file is transferred from system to system, and not all filesystems support sparseness to the same degree, so I recommend making the size as small as possible. (I usually create a quite large test image the first time around, install the application, figure out how much space is really needed, and then repeat for the final image with a much more tightly fitting size.)

    There are a number of ways to populate an image. We list some of them here:
    • Singularity maintains the Singularity Hub, a collection of containers much like Docker Hub, although not nearly as well populated at this time.
    • Singularity has some support for importing images from Docker. There are several ways to do this, including:
      • singularity import my-image.img docker://ubuntu:latest
      • Using a bootstrap definition spec file, e.g.
        Bootstrap: docker
        From: ubuntu:latest
      • If you have Docker installed, something like docker export ubuntu:latest | singularity import my-image.img
      • The above is made easier with the script docker2singularity.sh ubuntu:latest
    • Singularity comes with a recipe definition format wherein you can define the commands needed to set up a container and then bootstrap it. Basically, you start with a base OS image (e.g. Ubuntu or Debian) as specified in your header, and you install .deb or .rpm packages as appropriate for the OS. See the Singularity documentation for details on the bootstrap definition spec file format; a minimal example definition file is sketched after this list. After the bootstrap file is created, you simply run sudo singularity bootstrap my-image.img my-bootstrap.def.
    • Finally, you can start with an image created from one of the above mechanisms, and either mount it (sudo singularity mount my-image.img) and modify the contents directly, or start a shell (sudo singularity shell --writable my-image.img) and run yum or apt, etc. commands to set it up manually. This is not recommended, as it makes the container hard to reproduce, and is likely to cause you more problems in the long run when you need to upgrade the application version, etc.
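
    As an illustrative sketch only (the distribution and packages named here are placeholders, not a DIT-provided recipe), a minimal bootstrap definition file might look something like:

    Bootstrap: docker
    From: ubuntu:16.04

    %post
        # Commands run inside the container at build time
        apt-get update
        apt-get install -y python python-numpy
        apt-get clean

    %environment
        export LC_ALL=C

    %runscript
        # What "singularity run" executes; pass any arguments through to python
        exec python "$@"

    Saving this as, say, my-bootstrap.def, you would then run sudo singularity bootstrap my-image.img my-bootstrap.def (or, under version 2.4 and higher, sudo singularity build my-image.simg my-bootstrap.def).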

    Building images with Singularity versions 2.4 and higher

    For Singularity versions 2.4 and higher, the create and bootstrap subcommands have been combined into a single "build" subcommand. There are also a couple of new formats for containers, namely a compressed read-only squashfs format (the default), and a chrooted directory format available with the --sandbox option. The old default ext3 based format has been deprecated, but is available with the --writable option.

    The command for building a container image has been simplified, and is basically

    singularity build [BUILD_OPTIONS] path-to-new-container SOURCE-SPEC

    The command singularity help build will give full details, but basically BUILD_OPTIONS can be --sandbox or --writable to specify the sandbox or ext3 image formats above (if neither is specified, the squashfs format is used). path-to-new-container is the path to the new container to be built. SOURCE-SPEC can be any of the following (a few example invocations are shown after this list):

    • the path to a recipe definition as described in the previous section.
    • the path to an existing Singularity image (which can be used to copy it to a new format)
    • a tar (or .tar.gz) file (must have .tar in the name) which will be used to populate the image
    • or a URI starting with shub:// or docker:// to generate an image from a remote Singularity registry or Docker registry. My experience with Singularity imports from Docker registries is that sometimes it works, but sometimes it is difficult.
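
    For example (the image and recipe file names here are hypothetical):

    sudo singularity build my-image.simg my-recipe.def             # build a squashfs image from a recipe file
    sudo singularity build my-hello.simg shub://vsoch/hello-world  # build a local image from a Singularity Hub URI
    sudo singularity build --writable my-image.img my-image.simg   # copy an existing image into the older ext3 format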

    If you wish to modify an existing container to better suit your requirements, two strategies for doing so are:

    1. You can get the recipe file used to build the original container using the singularity inspect -d path-to-container command. This requires that the image was constructed using a recipe file and was generated under singularity 2.4 or higher, but those conditions should be met for newer images created by DIT. You can then edit that recipe and use it to build the image of your desire.
    2. You can convert an existing image into the sandbox format using singularity build -s path-to-new-container path-to-existing-container. Note that the new container will be a directory. You can then use singularity shell --writable path-to-new-container and you will have a shell in the new container. You can then interactively modify the container (e.g. install packages, etc.) as needed, and the changes will be retained in the image after you exit the shell. If desired, you can use a similar build command, dropping the -s and swapping the order of the sandbox and non-sandbox containers, to convert back to a file image (a short sketch follows this list).
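
    A short sketch of strategy #2 (all paths here are hypothetical), run on the machine where you have root access:

    sudo singularity build --sandbox my-sandbox/ existing-image.simg
    sudo singularity shell --writable my-sandbox/
    # ... inside the container shell: install packages, edit files, etc., then exit ...
    sudo singularity build my-modified-image.simg my-sandbox/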

    If you need to modify a container built by systems staff and accessible via the "module" command on the Deepthought2 cluster, you can use the "module display" command to find the location of the image file. E.g. for package "foo/1.2.3", module display foo/1.2.3 will display what the module load command does, and you will see a setting for an environmental variable named like FOO_SING_IMAGEFILE (the FOO part will change depending on the package you are looking at); you can copy this image file to the system you are using to build singularity packages in order to modify it as needed.
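
    For example (the package and version foo/1.2.3 are just illustrative), on a Deepthought2 login node you could do:

    login-1: module display foo/1.2.3 2>&1 | grep SING_IMAGEFILE

    (The 2>&1 is needed because the module command writes its output to standard error.)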

    Useful links, more information, etc.