I didn't have time to write a short letter, so I wrote a long one instead.
- Mark Twain.
My area of research is "Programming Languages and Compilers", and my work will continue to revolve around this domain.
Within this domain, I am particularly interested in the optimization of parallel programs.
Currently, I am trying to make OpenMP threads talk less and work more.
OpenMP is a standard API for writing shared memory parallel programs in C/C++ and Fortran.
I am also involved in the development of the IIT Madras OpenMP (IMOP) framework, which can be used to implement source-to-source transformations
and source-code analyses for OpenMP programs written in C.
To know more about IMOP, click here.
-
Patents and Publications
-
(Accepted for publication) Nougrahiya, Aman, and Nandivada, V. Krishna.
Homeostasis: Design and Implementation of a Self-Stabilizing Compiler.
ACM Transactions on Programming Languages and Systems (ACM TOPLAS), January 2024.
-
(Preprint) Nougrahiya, Aman, and Nandivada, V. Krishna (2021).
Homeostasis: Design and Implementation of a Self-Stabilizing Compiler.
arXiv preprint arXiv:2106.01768.
-
(Patent Granted) Nandivada, V. Krishna, and Nougrahiya, Aman.
System and Method for Performing Self-Stabilizing Compilation (Patent No. 383458). IIT Madras.
Indian Patent Office. (2020)
DBLP | Google Scholar
-
Areas of interest
The following is a (forever incomplete) list of my research interests.
-
Concurrency analysis
Concurrency analysis, also known as May Happen in Parallel (MHP) analysis, is a fundamental analysis for optimization and profiling of parallel programs.
Given a pair of program statements, MHP analysis infers whether they may be executed in parallel by different threads.
Such an analysis serves as a basis for porting various standard serial optimizations into the context of parallel programs.
It is also used in various other analyses that are specific to parallel programs, e.g., data-race detection.
One of my current projects at IIT Madras is the development of an incremental MHP analysis for OpenMP programs written in C.
-
Optimization of thread synchronization
When threads communicate with each other in a multi-core system, the overheads of communication may diminish
(and, in some cases, even overshadow) the benefits of parallelism.
One of my current projects aims to reduce the number of synchronization (communication) operations performed during the execution of a parallel program.
I am also interested in finding new ways to reduce the cost of each synchronization operation.
-
Memory fence optimizations
In various parallel programming models, memory is categorized into thread-private and shared portions.
Threads communicate with each other using the shared memory.
However, threads cache frequently accessed shared-memory locations for faster access.
Consistency between these private caches and the shared memory is maintained with the help of memory fence instructions,
which are costly to execute.
With my PhD advisor, I have found some approaches to remove unnecessary fence operations from a parallel program,
and to reduce the cost of the remaining fence operations.
We plan to explore these ideas in depth in the near future.
-
Other topics of interest
My other interests include, but are not limited to, the following: synchronization optimizations for distributed systems,
performance analysis of parallel programs, and formal verification of parallel programs.
-
Services
-
Served as an Artifact Evaluation PC member for Principles and Practice of Parallel Programming (PPoPP) 2018.
(CORE Rating: A)
-
Served as an Artifact Evaluation PC member for Principles and Practice of Parallel Programming (PPoPP) 2019.
(CORE Rating: A)
-
Associations
My PhD advisor is Prof. V. Krishna Nandivada, Dept. of Computer Science and Engineering, IIT Madras.
It is fun to work under the guidance of Krishna.
He is good at pushing me forward when I get lazy, and pulling me up when I fall.
He is a great source of motivation.
His knowledge, logical reasoning skills, and healthy attitude towards problems
have always helped me in my research.
His honesty towards his work (and towards everything else in life) is one of the primary reasons
why I converted from the MS program to the PhD program at IIT Madras.
I have learnt a lot from him about research (and life), and a lot is still left to be learnt.
In the winter of 2014, I spent 15 weeks as a research intern at Microsoft Research India, Bangalore,
in the Programming Languages and Tools (PLATO) group.
I worked on the region-based memory-management module of an ongoing project, "Broom".
My mentor was Dr. Ganesan Ramalingam, Principal Researcher, MSR Bangalore.
I count myself fortunate to have had the opportunity to work with Rama and the other members of the project.
It was always remarkable to see how Rama explained and solved complicated problems in a simple and intuitive manner.
He is always helpful and humble. I hope to work with him again sometime in the future.