HPCE Highlight


Sunetra Sarkar

Department of Aerospace Engineering

Areas of interest: Fluid-structure interactions, Unsteady fluid dynamics, Uncertainty quantification in flow and fluid-structure interactions


Sunetra Sarkar is a Professor at the Department of Aerospace Engineering, Indian Institute of Technology (IIT) Madras, India. She joined IIT Madras in 2007 as an Assistant Professor. She obtained her Ph.D. from the Indian Institute of Science, Bangalore in 2005. Before joining IIT Madras, she worked at the Faculty of Aerospace Engineering, Delft University of Technology, Netherlands as a Postdoctoral Fellow during 2005-2006. She was an NWO Rubicon fellow in the Netherlands during 2006-2007 and a visiting scientist in the Department of Applied Mathematics, Chalmers University, Sweden in 2010. She won the prestigious Amelia Earhart Fellowship award for women scientists in aerospace sciences and engineering in 2001. Her broad research interests include unsteady aerodynamics, nonlinear dynamics, fluid-structure and acoustic-structure interactions, and stochastic uncertainty quantification. Her current projects are in the areas of flow-induced-oscillation-based energy harvesting, stochastic modeling of unsteady flows past fins, fluid-structure interaction of flapping wings, and noise-induced dynamics of bluff bodies in cross-flow.


How does your group keep the HPCE clusters busy?

Over the years, our research has focused on the development of high-fidelity computational models for simulating unsteady fluid flow patterns and nonlinear fluid-structure interaction (FSI) dynamics. We have examined a number of interesting application areas, ranging from biomimetic flapping (birds, insects, and aquatic animals) to the dynamics of rotors, bluff bodies, and turbomachinery components. These are complex problems, many of which involve strong nonlinearity as well as stochasticity. They require long-term, time-accurate simulations using very high-fidelity, large-order models that demand enormous computational resources. One of the major challenges lies in devising algorithms that enhance the efficiency of the solvers without compromising accuracy. This also includes the development of efficient reduced order models, analysis of large time-history datasets, development of stochastic algorithms to propagate uncertainties through high-fidelity models, and application of nonlinear dynamical tools (classical and non-classical) to characterize system response. Our research group uses the Virgo and GNR clusters for serial and parallel computations to investigate these problems, running our simulations on both in-house and open source platforms.

How have you seen the HPCE landscape in your research domain change over the years?

Over the years, we have shifted gears from low- to moderate-order simulations to highly computationally intensive models. It would not have been possible to tackle the type of problems we do without the HPCE support of IITM. Presently, both the Virgo and GNR clusters are being used extensively, especially Virgo. Very recently, the number of processors per user on the HPCE clusters was doubled, which is a very welcome change. Virgo also allows extensive parallelization through the OpenMP and MPI platforms, which is essential for our simulations, and we have often relied on HPCE personnel for tips on parallelizing our codes. Our group has also benefitted from the workshops (such as the high-performance computing and cluster-building workshops) and software training programs organized by the Computer Center. Our experience with the Intel 2013 compiler at HPCE for C++ codes has been excellent; we find it considerably faster than the g++ compiler we use on our lab systems. The prescribed storage space of 30 GB is not enough for our typical datasets, but some of my students have been able to extend it with special permission in the recent past. We have come to realize that cases decomposed onto a single node (up to 16 processors) spend less time in the queue than multi-node cases. I think our group was one of the first to use the open source CFD platform OpenFOAM-Extend at HPCE, although we installed it in our individual accounts. To the best of our knowledge, an older version of OpenFOAM is also available centrally on Virgo.
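To illustrate the single-node observation above, here is a minimal sketch of a batch submission script for a case decomposed onto one full node. This assumes a PBS-style scheduler; the queue name, module name, and solver executable (`fsiSolver`) are all hypothetical placeholders, not actual HPCE settings.

```shell
#!/bin/bash
# Hypothetical PBS script: requesting a single full node (16 cores)
# tends to clear the queue faster than an equivalent multi-node request.
#PBS -N fsi_case
#PBS -l nodes=1:ppn=16
#PBS -l walltime=48:00:00
#PBS -q workq              # queue name is an assumption

cd "$PBS_O_WORKDIR"
module load openmpi        # module name is an assumption

# Run the MPI-parallel solver on the 16 cores of the single node
mpirun -np 16 ./fsiSolver -parallel
```

A multi-node run would instead request, e.g., `nodes=2:ppn=16` and launch with `-np 32`; the trade-off is more cores against a longer wait in the queue.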

What would you suggest to new faculty members and new students in your research area?

HPCE has procured a large number of subscriptions to a variety of commercial platforms useful in our domain of work, such as MATLAB (including stand-alone licenses), Ansys, COMSOL, Abaqus, and SolidWorks, so I would urge people working in related research areas to utilize these. Moreover, the use of open source tools is rapidly gaining popularity in our community. There should be a mechanism to install them at HPCE, centrally if possible, or else at the individual user's level with the necessary help from HPCE personnel. A smooth mechanism for this would ensure that students do not have to divert too much of their time to building the environment needed for their simulations. For example, there is no centralized installation of the OpenFOAM-Extend version that we require, and the recent version of the software shows incompatibility issues. A centralized installation would be a big help, especially as a large number of HPCE users have started relying on OpenFOAM on Virgo. We would also like to request other users to advocate for the installation of Tecplot, an excellent tool for the visualization and analysis of large datasets. The priority at which student users' jobs are executed at HPCE could also be given some fresh thought and, if possible, could be decided based on the seniority level of the students. The above points are based solely on our limited experience, but we think they would be relevant for faculty members and students planning to engage with HPCE in the near future.

(with inputs from: Dr. Chandan Bose, Aswathy M.S., and Dipanjan Majumdar)




Updated on: January 26, 2020


HPCE Highlight showcases the work of IIT Madras faculty members and their groups in High Performance Computing. It is powered by HPCE, Computer Center, IIT Madras.