Machine Learning is a broad and fast-growing field in Computer Science. We are engaged in a theoretical study of models such as Restricted Boltzmann Machines (RBMs) and Deep Belief Networks (DBNs) that are prevalent in deep learning. Our work aims to understand and improve the training algorithms used therein.
Constraint Satisfaction Problems (CSPs) encapsulate a huge number of real-world optimization problems. The complexity of solving such problems, or of approximating their optima, is thus an important object of study.
A common thread in much of my work has been the study of convex relaxations for designing approximation algorithms. This yields results of the following flavour: for a fairly general class of problems, a specific relaxation already yields optimal approximations, up to current algorithmic techniques.