VisIt

Contents

  • Summary and Version Information
  • Using VisIt on the HPC clusters

    Summary and Version Information

    Package:     VisIt
    Description: VisIt Visualization and Graphical Analysis tool
    Categories:  Graphics, Programming/Development, Research

    Version  Module tag              Availability*                                         GPU Ready  Notes
    2.9.0    visit/2.9.0             Non-HPC Glue systems, Deepthought HPCC, 64bit-Linux   N
    2.10.2   visit/2.10.2/no-osmesa  Non-HPC Glue systems, Deepthought HPCC, 64bit-Linux   N
    2.10.2   visit/2.10.2/osmesa     Non-HPC Glue systems, Deepthought HPCC, 64bit-Linux   N

    Notes:
    *: A package labelled as "available" on an HPC cluster can be used on the compute nodes of that cluster. Even software not listed as available on an HPC cluster is generally available on the login nodes of the cluster (assuming it is available for the appropriate OS version, e.g. RedHat Linux 6 for the two Deepthought clusters). This is because the compute nodes do not use AFS and instead have local copies of the AFS software tree, so we only install packages on the compute nodes as they are requested. Contact us if you need a version listed as not available on one of the clusters.

    In general, you need to prepare your Unix environment to be able to use this software. To do this, either:

    • tap TAPFOO
    OR
    • module load MODFOO

    where TAPFOO and MODFOO are one of the tags in the tap and module columns above, respectively. The tap command will print a short usage text (use -q to suppress this; this is needed in startup dot files); you can get a similar text with module help MODFOO. See the pages on the tap and module commands for more information.

    For packages that are libraries against which other codes are built, see the section on compiling codes for more help.

    Tap/module commands listed with a version of current will set up for what we consider the most current stable and tested version of the package installed on the system. The exact version is subject to change with little if any notice, and might be platform dependent. Versions labelled new represent a newer version of the package that is still being tested by users; if stability is not a primary concern, you are encouraged to use it. Those with versions listed as old set up for an older version of the package; you should only use these if the newer versions are causing issues. Old versions may be dropped after a while. Again, the exact versions are subject to change with little if any notice.

    In general, you can abbreviate the module tags. If no version is given, the default current version is used. For packages with compiler/MPI/etc. dependencies, if a compiler module or MPI library was previously loaded, the module command will try to load the build of the package matching that compiler/MPI combination. If you specify the compiler/MPI dependency in the tag, it will attempt to load the corresponding compiler/MPI library for you if needed.
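    For example, to set up one of the VisIt builds listed in the table above with the module command (a minimal sketch; run this on a system where the module is available):

        # load the default (current) version of VisIt
        module load visit
        # or load a specific build listed in the table above
        module load visit/2.10.2/osmesa
        # show the help text for that module
        module help visit/2.10.2/osmesa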

    Using VisIt on the HPC clusters

    This section discusses various topics relating to using VisIt on High Performance Computing (HPC) clusters.

    Remote Visualization

    One major concern when using visualization software such as VisIt on HPC clusters is how to display the data. HPC clusters can generate large amounts of data, and visualization tools are useful in enabling researchers to understand the data that was produced. But generally the researchers are not sitting anywhere near the HPC clusters, the clusters generally do not have displays attached, and users usually wish to view the data on their desktop workstations. While users can copy the data files from the HPC clusters to their workstations, this can be time consuming, as the data files are sometimes quite large. And that assumes there is room on the workstation disk.

    In the remainder of this subsection, we discuss some ways to view, from your desktop or a similar system, data sitting on disks attached to an HPC cluster.

    Remote Visualization using X

    If you have a desktop with an X server available, then the easiest solution might be simply to ssh to one of the login nodes and run VisIt there with the X display tunnelled back to your desktop. The help pages on using X11 discuss the mechanics of this; basically, you ssh to the login node with X11 tunnelling enabled, and then run visit in the remote shell.
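    For example, from a Unix-like desktop with an X server running, this might look like the following (a minimal sketch; substitute your own username and the login host for your cluster):

        # ssh to a login node with X11 forwarding enabled
        ssh -X username@login.deepthought2.umd.edu
        # then, on the login node, set up VisIt and launch it with the
        # display tunnelled back to your desktop
        module load visit
        visit &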

    When this works, it can be the simplest way to view data remotely using VisIt. However, even when it works, it can be sluggish. The visit process on the HPC system is sending all of that graphics data back to your desktop for display, and things can become quite unresponsive at times. Furthermore, there can be quirks and incompatibilities between the version of X that VisIt on the HPC cluster was built against and the X server running on your desktop, which can cause all sorts of issues. In general, if you encounter issues, it is probably easiest to just use VisIt in client/server mode.

    Remote Visualization using VisIt Client/Server mode

    VisIt supports a client/server mode wherein you launch the VisIt GUI on your workstation/desktop but the data processing is handled on one or more remote systems. Graphical processing is split between the workstation and the remote systems.

    This is particularly advantageous when working on High Performance Computing (HPC) clusters, as this mode of operation can:

    • enable you to work on large data sets on the HPC cluster, without needing to transfer many GBs (or TBs) of data back to your workstation.
    • allow you to leverage the CPU power of the HPC cluster to speed up the processing for visualization.

    NOTE: Although it should be possible to avail oneself of GPU enabled nodes for hardware accelerated processing of graphical data, this is NOT currently supported on the Deepthought clusters.

    Within VisIt, this client/server mode is controlled by "Host Profiles". The following subsection deals with setting up these profiles (and includes some standard profiles for the Deepthought clusters). After that, we discuss using the profiles for visualization tasks.

    Standard Host Profiles

    Before you can do client/server visualization with VisIt, you need to set up Host Profiles. You can probably do this fairly easily by just copying one or both of our standard profiles for the Deepthought clusters to the appropriate hosts directory on your workstation. The standard profiles can be downloaded at:

    These files should go into the appropriate "hosts" directory on your workstation. For Unix-like systems, this is usually ~/.visit/hosts. On Windows systems, I believe it is something like My Documents\VisIt VISIT_VERSION\hosts. After copying the files there, you will need to restart VisIt for them to be detected.
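    On a Unix-like workstation, the copy might look like the following (a sketch; host_deepthought2.xml is a placeholder for whatever the downloaded profile file is actually named):

        # create the hosts directory if it does not already exist
        mkdir -p ~/.visit/hosts
        # copy the downloaded profile (placeholder filename) into it
        cp ~/Downloads/host_deepthought2.xml ~/.visit/hosts/
        # then restart VisIt so the new profile is detected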

    If you use one of these files, you can probably skip over the manual configuration described below, and proceed on to the section on using the profiles. However, that subsection is still useful if you wish to customize the standard profiles.

    Manually Defining Host Profiles

    (The following instructions are based on VisIt 2.10, but things should be similar for later versions.)

    1. Start by opening the Options | Host Profiles page from the menu bar.
    2. If you copied one of the standard host profiles, they should be visible in the Hosts area to the left, and you can select one of them to edit it. Or you can click the New Host button to create a new host entry. Either way, it will open the entry with fields on the right side. There are two tabs on the right, Host Settings and Launch Profiles. We deal with Host Settings first.
    3. Host Nickname is the name that will be shown to you for the host profile. I suggest something like UMD Deepthought2 Cluster.
    4. Remote hostname is the hostname that VisIt will ssh to in order to open the remote VisIt process. Here you should give the appropriate hostname for the cluster, e.g.
      • login.deepthought2.umd.edu for the Deepthought2 cluster
      • login.juggernaut.umd.edu for the Juggernaut cluster
    5. In the Hostname aliases field, you should include the pattern that will match the hostnames for specific login nodes for the cluster. E.g.:
      • login-*.deepthought2.umd.edu for Deepthought2
      • login-*.juggernaut.umd.edu for Juggernaut
    6. Leave both Maximum nodes and Maximum processors unchecked
    7. For Path to VisIt installation, enter the value /cell_root/software/visit for the Deepthought2 cluster, or /software/visit for the Juggernaut cluster. This will cause it to find custom wrapper scripts for these clusters, which ensure that the correct environment variables are set to run the compute engines, etc. on these clusters.
    8. For Username, enter your username on the cluster. Remember that on Bluecrab, your username includes @umd.edu.
    9. You will probably need to click the box for Tunnel data connections through SSH. This is required if your workstation has any sort of firewall on it, which is typically the case.
    10. The other fields can be left to the defaults.
    11. Now select the Launch Profiles tab. The previous tab gave basic information about connecting to the cluster; we now provide information about how to run on the cluster. You can select an existing launch profile and edit it below, or use the New Profile button to create a new profile. We are going to define three profiles:
      1. serial: this runs VisIt in one process on the login node.
      2. parallel (debug partition): this will run VisIt in a job submitted to the debug partition. I.e., a short job, but run at somewhat higher priority for better interactive use.
      3. parallel: this will run VisIt in a more generic job. You can specify the number of cores/nodes/etc.
    12. The serial launch profile is easiest. Just click the "New Profile" button, and enter its name, e.g. serial. That's it.
    13. The two parallel profiles are defined similarly. Click the "New Profile" button and enter its name. Then select the Parallel tab, and:
      1. Click the Launch parallel engine checkbox.
      2. Click the Parallel launch method checkbox and select sbatch/mpirun in the drop down (probably the last entry).
      3. For the parallel (debug partition) profile, also click the Partition/Pool/Queue checkbox and enter debug in the text box. For the generic parallel profile, you are probably best just leaving this unchecked/blank.
      4. You can adjust the Number of processors value to the desired default value. You will be able to adjust this each time you use the profile, but this will be the default value. I recommend a value of 20 for Deepthought2 and 28 for Juggernaut, as this is what is typically available on a single node.
      5. For the next 4 items (Number of nodes, Bank/Account, Time limit, and Machine file), if you check the checkbox you can set a default value which can be modified each time you use the profile. If left unchecked, you will not be able to modify that setting when using the profile, and it will default to whatever sbatch decides. I would recommend checking the boxes for Number of nodes, Bank/Account, and Time limit, but typically Machine file can be left unchecked.
      6. There is also an Advanced subtab just under the Launch parallel engine checkbox. You normally do not need to set anything here, but for some more complicated cases they might be needed.
        • The Use VisIt script to set up parallel environment checkbox should be checked (it should be checked by default).
        • The remaining arguments can typically be left unchecked except in certain cases. I would recommend that if you need to make modifications here, you create (or copy) a new profile for that specific need. E.g., if the default memory requested for the Slurm job is insufficient, you can create a "parallel-large memory" profile with everything copied from the "parallel" profile and then make the needed changes.
          • Launcher arguments: Here you can provide additional arguments to be passed to the sbatch command. E.g., to request 9 GB of RAM per CPU core (instead of the default 6 GB) you could add here something like --mem-per-cpu=9000.
          • Sublauncher arguments: Here you can provide additional arguments to be passed to the mpirun command.
          • Sublauncher pre-mpi command: Here you can provide a command to be run before the mpirun command in the batch script.
          • Sublauncher post-mpi command: Here you can provide a command to be run after the mpirun command in the batch script.
    14. Once you have things as you like them, click the Apply button to make them effective. If you edited anything (i.e. created new profiles or changed a profile), you should select the new/modified host profiles and use Export Host to ensure they are saved and available in your next VisIt session.
    15. Click the Dismiss button to close the Host Profiles window.

    Using the Host Profiles

    In this section we briefly describe how to use the profiles. I am assuming you have a Host Profile for one of the Deepthought2 or Juggernaut clusters, with the three launch profiles described above, and that you have access to the HPC cluster the profile is for.

    NOTE: I believe the version of VisIt that you are running on your workstation must match what is available on the cluster, at least down to the minor (second/middle) version number. If you do not have a matching version on your workstation, you can try running VisIt on the login node with the display going back to your workstation, but with the heavy work done on the compute nodes using the client/server model discussed here.
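    To check which VisIt versions are installed on a cluster, you can list the available modules from a login node (a minimal sketch):

        # list the VisIt modules installed on the cluster
        module avail visit
        # compare against the version on your workstation
        # (e.g. as shown in the Help | About dialog of the VisIt GUI)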

    In general, using VisIt in client/server mode starts with opening the data file. Just start to open a file as usual, but in the Host dropdown at the top of the file open dialog there should be an option for each of the host profiles you have defined. Select the appropriate host profile. It will likely prompt you for a password (make sure the username given, if any, is correct, and correct it if not; if no username is given, it assumes your username on the remote cluster is the same as on the workstation). Within a few seconds you should see a file list corresponding to your home directory on the cluster. You can then select a file as usual.

    If multiple launch profiles exist for that host, you will be given the option of choosing which profile you wish to use, and what options you wish to use with that launch profile if it supports any. If there is only a single launch profile, you obviously cannot choose a different launch profile, but a pop up will still appear if there are any options for that launch profile. Otherwise, VisIt will just launch the profile with the defaults.

    If you just wish to use VisIt on a file that resides on the HPC cluster (without copying the file to your local workstation) but do not need (or cannot use) the parallel capabilities of VisIt, the serial option is the easiest, and does not take additional options. Just select it and hit OK. It may take a couple of seconds to start the remote engine, but it should then return and you can visualize your data as if it were local.

    The parallel launch options offer more power, but are a bit more complicated to use, even though VisIt does a good job of hiding most of the complexity from the user. VisIt generally uses data-based parallelization, which means you will generally need a parallel (multidomain) mesh data set to use its parallelism effectively.

    To use one of the parallel profiles, just select it after selecting the file. The parallel (debug partition) is good for a short interactive visualization, but is limited in number of processes/nodes and to 15 minutes. However, since it uses the debug partition, it generally will spend less time waiting in the queue. The generic parallel profile is less restrictive, but depends on jobs submitted via sbatch and can have significant wait times before the job starts running.

    When you select the profile, you typically will have the opportunity to change the defaults for wall time, number of nodes, and the allocation account to which the job is to be charged. NOTE: VisIt seems to assume 8 processes per node by default, so e.g. if you request 20 processes on Deepthought2, it will try to spread them over 3 nodes. I strongly advise manually setting the number of nodes appropriately. Note also that the memory of the node is split evenly over all the VisIt processes on the node, so you might need to adjust the node count to use more than the minimal number of nodes in cases where memory requirements are higher.

    When you finish updating options and hit OK, your VisIt GUI will ssh to the login node for the cluster and submit a batch job requesting the desired number of nodes/cores. Typically you will see a pop up showing that VisIt is awaiting a connection from the compute engines --- this will not occur until after the batch job starts. For batch jobs submitted to the debug partition, this should typically be within a minute or two, but it is likely to be significantly longer for the generic parallel profile.
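    If the compute engines seem slow to start, you can log into the cluster and check whether the batch job is still waiting in the queue (a minimal sketch; the job is typically named something like visit.USERNAME, as noted in the example below):

        # list your queued and running jobs on the cluster
        squeue -u $USER
        # or restrict the listing to the debug partition, if that is where the job was sent
        squeue -u $USER -p debug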

    When the job starts, after 20 seconds or so the connection should be made and the pop up will go away. At this point you can use VisIt as normal.

    At some point, the scheduler may terminate your compute engines (e.g. due to exceeding walltime). You should be able to continue using the GUI, and when you try to do something that requires the compute engine, a pop up will appear allowing you to start up a new launch profile.

    Example of using parallel VisIt in client/server mode

    Here we provide a quick example of using VisIt with parallel processing in client/server mode. This example assumes that you have already downloaded the appropriate standard profile above and placed it in the proper VisIt configuration directory. We are going to assume you are going to run VisIt from the cluster login node, and so will put the host profile in your ~/.visit/hosts directory on the cluster.

    1. Ssh to the cluster, and module load visit/VERSION.
    2. Start up VisIt with the visit command.
    3. From the main control window, under the File tab, select Open File.
      1. This will open the File Open dialog.
      2. On the top line (Host), click on the drop down to the right. You should see an option for UMD CLUSTERNAME Cluster matching the profile you downloaded. If not, you did not download the profile or did not put the profile file in the correct location; exit VisIt, fix the issue with the profile, and restart VisIt. If you see it, select it.
      3. The dialog will gray out for a couple of seconds as VisIt sshes to the login node and looks at your home directory. In general, it will ask for a password (and if it does, verify the username is correct), but since you presumably just logged into the login node, your Kerberos tickets should still be valid and it should ssh in without requiring a password. After it sshes you in, it should display a file browser for your home directory.
      4. We are going to load one of the VisIt example files, so in the second line, labelled Path, replace the existing text with either /cell_root/software/visit (for the Deepthought2 cluster) or /software/visit (for the Juggernaut cluster). After the file browser window updates, descend into the directory named after the VisIt version and the related subdirectories (e.g. 2.10.2/osmesa/sys/data on Deepthought2 or 2.13.2/osmesa/linux-rhel7-x86_64/data on Juggernaut); since we are just looking for an example file, the version/build does not need to exactly match what you are running.
      5. Double click on the multi_ucd3d.silo file. You can try other files if you want, but only the multi_* files are multidomain datasets --- if you open a file not starting with multi_*, it will be a single domain dataset and you will not be able to effectively use the parallelism in VisIt.
      6. You will now get the dialog for selecting options on the cluster. Basically, it is a list in which you can select one of the launch profiles for the cluster. Click either parallel (debug partition) or parallel (generally parallel (debug partition) will give you quicker turnaround for this simple example). Set the time limit to something reasonable (like 15 minutes), and select the number of cores and nodes you want. The multi_ucd3d.silo file I believe has 36 data domains, so it is probably best to choose a core count that divides 36 evenly. Be sure to select an appropriate number of nodes (there are 20 cores/node on Deepthought2, 28-40 on Juggernaut). More than 36 cores is wasteful.
      7. Click OK when satisfied. A window should come up showing that it is waiting for the compute engine to start up. This will cause a batch job to be submitted to the scheduler to run your compute engine. It should take only a few minutes if you are using the debug queue. If it is taking a long time, you might wish to check whether there is a job submitted on your behalf on the cluster (the name should be something like visit.USERNAME, and the job number should appear in the terminal from which you launched visit).
    4. Shortly after the job for the compute engine starts, you should be back to the normal VisIt GUI.
    5. To finish our example, define a procid scalar.
      1. From Controls menu, select Expressions
      2. Click New, and enter procid in the Name field.
      3. Click Insert function, go to the Miscellaneous submenu item, and click on procid
      4. Click Insert variable, go to the Meshes submenu item, and click on mesh1.
      5. The definition box should now display procid(mesh1). If not, fix it so it does. When done, click Apply and Dismiss.
    6. Now in the Plots section, click on the Add button, and in the Pseudocolor submenu select procid. If no procid appears, you did not define the procid scalar properly above.
    7. Now in the Plots section, click Draw.
    8. You should see a multicolored 3D half cylinder (if you chose the multi_ucd3d.silo file; other multi_* files will be different shapes but still multicolored. Single domain files will be a solid color). The different colors represent the processor id of the processor responsible for rendering that section of the data.
    9. Congratulations. You just used VisIt in parallel mode.