Child pages
  • Lumerical on the FAS Cannon cluster (SEAS users only)


In the instructions below, the term RC-Cluster refers to the Cannon cluster.

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Important message on Lumerical License server

Starting October 14, 2021, we are using the Ansys license server for Lumerical. This change was necessitated by Ansys's acquisition of Lumerical. As of that date, all previously installed Lumerical software on FASRC and elsewhere that uses the older Lumerical license server has ceased to function and cannot be reconfigured to work with the new license server. A recent version of Lumerical (using the new Ansys license server) has been installed on the RC-Cluster (Cannon). See below for instructions on using it.

New Lumerical License server

The new Lumerical license server is: research-license-ansys.int.seas.harvard.edu

Port: 1055

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Lumerical is a suite of software for simulating photonic components, circuits and systems (https://www.lumerical.com/tcad-products/). 

Lumerical FDTD is a finite-difference time-domain solver for the time-dependent Maxwell's equations. The solver is parallelized using MPI. To use FDTD, you first need to build the model using the Lumerical CAD software. SEAS Computing currently has a limited number of CAD licenses; if you need to use CAD on a regular basis, we encourage you to buy a CAD license directly from Lumerical. Once the CAD-based model is available, you can use FAS RC resources to solve it with FDTD.

Lumerical Mode consists of an eigenmode solver and a propagator (which describes the propagation of light in planar integrated optical systems).

Lumerical has also introduced a multiphysics suite with finite element capabilities (charge, heat, DGTD etc.). Please refer to the above Lumerical page for details.

With the introduction of a convenient virtual desktop (VDI) on the FASRC Cluster (see below), the only supported mode for working with Lumerical is now on FASRC.

Getting started

Obtain a FAS RC account by visiting:

https://portal.rc.fas.harvard.edu/request/account/new

Please consult the documentation available via the above link for connecting to FAS systems and familiarizing yourself with the SLURM queue management system. For any help, contact RC using the information available on:

https://rc.fas.harvard.edu/about/contact/

To use Lumerical, you need to be in the SEAS group. If you are not, request that FASRC add you to the SEAS group (rchelp@rc.fas.harvard.edu).
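As a quick check, you can list the Unix groups your account currently belongs to. The exact SEAS group name on the cluster is site-specific; treat "seas" here as an assumption:

```shell
# Print the groups for the current user; if the SEAS group does not
# appear, email rchelp@rc.fas.harvard.edu to request membership.
MY_GROUPS=$(id -Gn)
echo "$MY_GROUPS"
```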

OnDemand Virtual Desktop (VDI)

You can now work conveniently via a browser on the FASRC cluster using the virtual desktop infrastructure (VDI) known as OnDemand. For more details, please visit:

https://www.rc.fas.harvard.edu/resources/documentation/virtual-desktop/

You may have to adjust the number of cores and memory when you request an OnDemand session. Start with the defaults and change the options once you have a better idea of your resource requirements.
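Once a session starts, a minimal way to confirm the allocation from the session's terminal is to print SLURM's environment variables. These are set only inside a job, hence the "unset" fallback in this sketch:

```shell
# Print what SLURM allocated to this session; outside a job these
# variables are not set and 'unset' is printed instead.
echo "Cores:  ${SLURM_CPUS_ON_NODE:-unset}"
echo "Memory: ${SLURM_MEM_PER_NODE:-unset}"
```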

Lumerical initial setup

Starting October 14, 2021, there is only one version of Lumerical that is operational. You can work with it after loading the corresponding module:

module load lumerical-seas/2021_7bf43e7149-fasrc01
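After loading the module, a quick sanity check is to confirm that a solver engine is on your PATH. The engine name below is the one used in the FDTD batch script later on this page; adjust it if your workflow uses a different engine:

```shell
# 'command -v' prints the resolved path of the engine binary, or nothing
# if the module has not been loaded in this shell.
ENGINE_PATH=$(command -v fdtd-engine-ompi-lcl || true)
echo "${ENGINE_PATH:-fdtd-engine-ompi-lcl not found - load the module first}"
```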

Lumerical – Interactive Simulations

Lumerical can be used interactively on the FASRC cluster via the new VDI. After loading the module as above, you can bring up the software as follows:

launcher

This should open a GUI. You should be able to choose the software component (FDTD, MODE, etc.) by clicking on "Solvers". Start a new project by clicking on "New Project". You should also be able to download examples via the "Examples" item on the main page. You need to have (or create) an account on Lumerical to be able to do this.

Running Lumerical (FDTD, MODE, Device) in batch using SLURM

Important for all batch submissions:

1. Do not submit batch jobs from an OnDemand VDI session. Submit batch jobs from a simple ssh login session to a login node (https://docs.rc.fas.harvard.edu/kb/terminal-access/).

2. Actual memory and time requirements for a job can vary widely depending on the problem; the scripts below are just examples to get you started.

3. Do not run Lumerical interactively from a login node.

Running Lumerical FDTD in batch using SLURM

FDTD parallel run limitations: the number of cores is limited to 128 under academic licensing.

The following SLURM script, 'runscript_lumerical.sh', loads the necessary software using modules and runs the model with the parallel (OpenMPI) version of FDTD. The script requests 16 CPU cores (as indicated by '#SBATCH -n 16') and solves the problem using 16 MPI processes (as indicated by 'srun -n $SLURM_NTASKS ...'):

#!/bin/bash
#
#----------------------------------------------------------------------------
#  This is a sample script. Please examine all the variables to make sure
#  they are relevant for the RC Cluster at the time of submission
#-------------------------------------------------------------------
#SBATCH -n 16        # Number of cores
#SBATCH -t 25 # Runtime in minutes
#SBATCH -p shared # Partition to submit to. Based on your needs, you may need to change this.
#SBATCH --mem=10000 # Total memory for the job in MB (see also --mem-per-cpu)
module load intel/21.2.0-fasrc01 openmpi/4.1.1-fasrc01 lumerical-seas/2021_7bf43e7149-fasrc01

echo "SLURM_NTASKS= " $SLURM_NTASKS

srun -n $SLURM_NTASKS --mpi=pmix fdtd-engine-ompi-lcl -fullinfo <your_input>.fsp

date

exit

You can copy and paste the above into a file named runscript_lumerical.sh and submit it to SLURM as follows:

sbatch ./runscript_lumerical.sh


Running Lumerical Mode in batch using SLURM

Important: For parallel runs with Lumerical Mode, the number of cores is limited to 128 under academic licensing.

The following example SLURM script loads the necessary modules and runs the model with the parallel (OpenMPI) version of the Mode propagator (varfdtd). The script requests 16 CPU cores (as indicated by '#SBATCH -n 16') and solves the problem using 16 MPI processes (as indicated by 'srun -n $SLURM_NTASKS ...'):

#!/bin/bash
#
#----------------------------------------------------------------------------
#  This is a sample script. Please examine all the variables to make sure
#  they are relevant for the RC Cluster at the time of submission
#-------------------------------------------------------------------
#SBATCH -n 16        # Number of cores
#SBATCH -t 25 # Runtime in minutes
#SBATCH -p shared # Partition to submit to
#SBATCH --mem=10000 # Total memory for the job in MB (see also --mem-per-cpu)
module load intel/21.2.0-fasrc01 openmpi/4.1.1-fasrc01 lumerical-seas/2021_7bf43e7149-fasrc01

echo "SLURM_NTASKS= " $SLURM_NTASKS

srun -n $SLURM_NTASKS --mpi=pmix varfdtd-engine-ompi -fullinfo <your_input>.lms 

date
 
exit 

You can copy and paste the above into a file named runscript_mode.sh and submit it to SLURM as follows:

sbatch ./runscript_mode.sh


For further details on using the RC-Cluster and SLURM (such as monitoring or canceling your jobs), please visit https://rc.fas.harvard.edu or the list of convenient SLURM commands at https://rc.fas.harvard.edu/resources/documentation/convenient-slurm-commands/.

Running Lumerical Device in batch using SLURM

Lumerical Device, Thermal, etc. cannot be run across nodes. They can, however, make use of the cores on a single node. The following is a batch script for submitting a Lumerical Device job:

#!/bin/bash
#
#SBATCH -N 1        # Number of nodes
#SBATCH -c 8        # Number of CPUs
#SBATCH -t 0-00:05  # Runtime in D-HH:MM format
#SBATCH -p shared  # Partition to submit to 
# SLURM requires memory specification. For lumerical, this may be found from
# CAD via the "Check" button at the top.
#SBATCH --mem-per-cpu=500

module purge
module load lumerical-seas/2021_7bf43e7149-fasrc01

echo $SLURM_CPUS_PER_TASK

#Note: device-engine cannot use processors across nodes. It can use
#processors (or cores) on the same node.

# With '#SBATCH -c 8', SLURM sets SLURM_CPUS_PER_TASK=8 (SLURM_NTASKS stays 1)
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

#This assumes you have prepared an input file <your_input>.dev
device-engine <your_input>.dev

date

exit                                                                                                                                               
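A note on the thread count in the script above: device-engine parallelizes with OpenMP threads rather than MPI ranks, so the thread count should come from SLURM's per-task CPU allocation (assuming cores were requested with '-c', as in the script). A defensive sketch with a fallback of 1 for use outside a job:

```shell
# SLURM_CPUS_PER_TASK reflects the '-c' request; default to 1 thread
# when running outside a SLURM job.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```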

Documentation and help

Lumerical documentation and examples are available at https://www.lumerical.com/. For cluster questions, contact FASRC via https://rc.fas.harvard.edu/about/contact/ or rchelp@rc.fas.harvard.edu.
