Allocation Category | small |
---|---|
Estimated RAM Usage | 128 GB |
Estimated Disk Usage | 5 GB |
Computational Description | I use MPI for task parallelization of an optimization algorithm (simulated annealing, written in Fortran) that repeatedly
solves a set of differential equations, searching for the "best" parameter values that minimize a user-defined
cost function. I have used the Intel Fortran compilers in the past.
The codes and optimization algorithm libraries themselves are small (a few megabytes at most), as are the output data files that the code generates. |
General Area of Research | human movement simulation |
---|---|
Funded vs Exploratory | exploratory |
Research Description | I use a mathematical model (differential equations) of the human musculoskeletal system that simulates
movements such as walking and running. Variables that cannot be measured on real humans, such as muscle forces
and the contact forces within joints, can be measured on the model, which is relevant to research questions
on knee osteoarthritis and various other orthopaedic conditions. Solving the equations takes seconds with
an efficient code on a modern CPU, but getting the model to walk like a real human requires optimization,
repeatedly solving the equations, sometimes millions of times.
In the past I have used a cluster with 64 nodes; these optimizations typically require a few days of wall time. |
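The optimization loop described above can be sketched as follows. This is an illustrative Python sketch, not the applicant's Fortran/MPI code; `cost` here is a toy function standing in for "solve the ODEs and compare the simulated motion to human data", and all names are hypothetical.

```python
import math
import random

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995, iters=5000):
    """Minimize `cost` by randomly perturbing parameters and accepting
    worse moves with a temperature-dependent probability."""
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a random perturbation of one parameter.
        cand = list(x)
        i = random.randrange(len(cand))
        cand[i] += random.uniform(-step, step)
        fc = cost(cand)
        # Always accept improvements; accept worse moves with prob exp(-df/t).
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy stand-in for the expensive ODE solve: minimum at all parameters = 1.
def cost(p):
    return sum((pi - 1.0) ** 2 for pi in p)

random.seed(0)
params, value = simulated_annealing(cost, [0.0, 0.0, 0.0])
```

In the MPI setting described above, many such annealing chains (or many candidate evaluations) would run concurrently across ranks, since each ODE solve is independent.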
Allocation Category | development |
---|---|
Estimated RAM Usage | 1 GB |
Estimated Disk Usage | 1 GB |
Computational Description | The simulation concerns light scattering by fractal aggregates of 50 primary particles. Given the particle coordinates, a Fortran code performs the calculation; it needs to be compiled and run on a parallel computer cluster. |
General Area of Research | aerosol science |
---|---|
Funded vs Exploratory | exploratory |
Research Description | On the optical properties of fractal aggregates relevant to climate change.
We would like to simulate light scattering by fractal aggregates consisting of 50-100 primary particles using a Fortran 90 code, MSTM. The application is toward a better understanding of the effect of fractal particles in the atmosphere on the Earth's radiation balance, and toward evaluating diagnostic tools under development. MSTM calculates the time-harmonic electromagnetic scattering properties of a group of spheres using the multiple-sphere T-matrix method. The computation is parallelized with MPI, and the program is designed to make optimal use of the memory and processor resources of the parallel platform. With a pre-created coordinate file, we would use this code to calculate the absorption and extinction of light by aggregates, which is of great interest for optically probing the structure of fractal aggregates. The Fortran code is designed for parallel computation, and we know it compiles and runs well on the Deepthought cluster. I would like to apply for an account; these calculations will be part of my PhD dissertation research. According to the manual, the execution time on 128 processors is 45 minutes for aggregates with 375 monomers, so the running time for our calculations is estimated to be less than 10 minutes each. The purpose of the calculation is to model the effect of orientation on light scattering and absorption for an instrument under development in our group, in which electric fields will orient fractal aggregate particles. |
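One common task-parallel pattern for such a scattering code is to assign each MPI rank a contiguous slice of the orientation samples and reduce the partial cross-sections at the end. The sketch below shows only the rank-to-orientation partitioning in Python with hypothetical names; it is an illustrative pattern, not MSTM's actual internals, which are Fortran/MPI.

```python
def orientation_slice(rank, nranks, norient):
    """Return the half-open [start, stop) range of orientation indices
    assigned to `rank`, spreading any remainder over the low ranks."""
    base, extra = divmod(norient, nranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return start, stop

# Example: 100 orientation samples split across 8 ranks.
slices = [orientation_slice(r, 8, 100) for r in range(8)]
```

Each rank would compute scattering for its slice of orientations and the results would be combined with a reduction (e.g. MPI_Reduce) to form the orientation average.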
Allocation Category | development |
---|---|
Estimated RAM Usage | 32 GB |
Estimated Disk Usage | 1000 GB |
Computational Description | For this high-performance computing application, a self-developed Fortran 90 code will be used, together with several Fortran-based libraries such as NCAR-FFT, which can be replaced by FFTW. The source code is written for a single CPU, and I will not use multi-CPU computation for now; the environment I need is simply a Fortran compiler. I will perform a direct numerical simulation of turbulence with on the order of 10^7 grid points, storing 50-100 datasets of the three-dimensional velocity, pressure, and scalar fields on disk, which requires up to 0.5 TB of storage. The data will be retrieved and then deleted from disk soon after they are obtained. |
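As an illustration of the kind of operation FFT libraries like NCAR-FFT or FFTW provide in such a simulation (a minimal Python/NumPy sketch, not the applicant's Fortran code): a spatial derivative on a periodic grid can be computed by multiplying each Fourier mode by ik.

```python
import numpy as np

def spectral_derivative(u, length):
    """First derivative of a periodic, uniformly sampled field `u`
    via FFT: differentiate by multiplying each mode by i*k."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# Check against the known derivative of sin(x), which is cos(x).
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
du = spectral_derivative(np.sin(x), 2.0 * np.pi)
```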
General Area of Research | Environmental Sciences |
---|---|
Funded vs Exploratory | exploratory |
Research Description | This high-performance computing application will be used to investigate the interactions of turbulence with scalar transfer, for example heat, gas, and particle transfer. The number of grid points for this computation will be up to 10^7, and a finite difference method is used to solve the highly nonlinear fluid flow equations. The effect of turbulent structures on scalar transfer in the fluid will be examined through a series of high-performance computations. |
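A finite-difference treatment of scalar transfer of the kind described can be sketched in one dimension (an illustrative Python sketch with assumed parameters, far simpler than the 3D DNS proposed above): central differences in space and forward Euler in time for the advection-diffusion equation.

```python
import numpy as np

def step(c, u, d, dx, dt):
    """One explicit finite-difference step of the 1D advection-diffusion
    equation dc/dt = -u dc/dx + d d2c/dx2 on a periodic grid.
    Central differences in space, forward Euler in time."""
    dcdx = (np.roll(c, -1) - np.roll(c, 1)) / (2.0 * dx)
    d2cdx2 = (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    return c + dt * (-u * dcdx + d * d2cdx2)

# Advect and diffuse a sine profile of the scalar on a periodic domain.
n, dx, dt = 128, 1.0 / 128, 1e-4
c = np.sin(2.0 * np.pi * np.arange(n) * dx)
for _ in range(100):
    c = step(c, u=1.0, d=0.01, dx=dx, dt=dt)
```

The explicit scheme requires the usual CFL and diffusion-number restrictions on `dt`; the parameters above satisfy both, and the periodic central-difference stencil conserves the mean of the scalar.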