Unpublished Manuscripts

    2017

  1. Sharma, S., Suresh, A., Ramesh, R. and Ravindran, B. (2017) "Learning to Factor Policies and Action-Value Functions: Factored Action Space Representations for Deep Reinforcement Learning". arXiv

  2. Sharma, S., Ramesh, S., Raguvir, G., and Ravindran, B. (2017) "Learning to Mix n-Step Returns: Generalizing lambda-Returns for Deep Reinforcement Learning". arXiv

  3. Sharma, S., Jha, A., Hegde, P. and Ravindran, B. (2017) "Learning to Multi-Task by Active Sampling". arXiv (Expanded version of the ICLR 2017 workshop paper)

  4. Deshpande, P., and Ravindran, B. (2017) "MCEIL: An Improved Scoring Function for Overlapping Community Detection using Seed Expansion Methods". Presented at the Sixth International Workshop on Social Networks Analytics in Applications (SNAA 2017) held with the Ninth IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2017), Sydney, Australia.

  5. Mukherjee, S., Naveen, K. P., Sudarsanam, N., and Ravindran, B. (2017) "Thresholding Bandits with Augmented UCB". arXiv

  6. Ganapathy, S., Venkataramani, S., Ravindran, B., and Raghunathan, A. (2017) "DyVEDeep: Dynamic Variable Effort Deep Neural Networks". arXiv

  7. Sharma, S., Ravindran, B. (2017) "Online Multi-Task Learning Using Active Sampling". Accepted at the Fifth International Conference on Learning Representations (ICLR 2017) Workshop Track.

    2016

  8. Mishra, P., and Ravindran, B. (2016) "A Developmental Approach to Learning Affordances". Accepted at the 2016 NIPS workshop on Continual Learning and Deep Learning. PDF

  9. Ansari, G. A., Sagar, J. P., Chandar, S., and Ravindran, B. (2016) "Language Expansion In Text-Based Games". Accepted at the 2016 NIPS workshop on Deep Reinforcement Learning. PDF

  10. Choudhary, M., Muthuravichandran, G., and Ravindran, B. (2016) "Imitation Learning by Programs". Accepted at the 2016 NIPS workshop on Deep Reinforcement Learning. PDF

  11. Bangaru, S. P., Suhas, J., Ravindran, B. (2016) "Exploration for Multi-task Reinforcement Learning with Deep Generative Models". Accepted at the 2016 NIPS workshop on Deep Reinforcement Learning. arXiv

  12. Prasad, V., Singh, S., Pareekutty, N., Ravindran, B. and Krishna, M. (2016) "SLAM-Safe Planner: Preventing Monocular SLAM Failure using Reinforcement Learning". arXiv

  13. Lakshminarayanan, A. S., Sharma, S., and Ravindran, B. (2016) "Dynamic Frame skip Deep Q Network". Accepted at the IJCAI Workshop on Deep Reinforcement Learning: Frontiers and Challenges, New York City, July 2016. arXiv version.

  14. Sudarsanam, N. and Ravindran, B. (2016) "Linear Bandit algorithms using the Bootstrap". Expanded version of the RLDM poster below. arXiv

  15. Krishnamurthy, R., Lakshminarayanan, A. S., Kumar, P., Ravindran, B. (2016) "Hierarchical Reinforcement Learning using Spatio-Temporal Abstractions and Deep Neural Networks". Accepted at the ICML Workshop on Abstraction in Reinforcement Learning, New York City, June 2016. arXiv version.

    2015

  16. Prasanna, P., Chandar, S., and Ravindran, B. (2015) "TSEB: More Efficient Thompson Sampling for Policy Learning". Expanded version of the RLDM poster below. arXiv

  17. Sudarsanam, N., Ravindran, B., and Saha, A. (2015) "Bootstrapped Linear Bandits". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  18. Nagarajan, V., and Ravindran, B. (2015) "KWIK Inverse Reinforcement Learning". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  19. Krishnamurthy, R., Kumar, P., Nainani, N., and Ravindran, B. (2015) "Hierarchical Decision Making using Spatio-Temporal Abstractions In Reinforcement Learning". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  20. Prasanna, P., Chandar, S., and Ravindran, B. (2015) "Thompson Sampling with Adaptive Exploration Bonus for Near-Optimal Policy Learning". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  21. Muralidharan, V., Balasubramani, P., Chakravarthy, S., Ravindran, B., Lewis, S., and Moustafa, A. (2015) "A Computational Model of Gait Changes in Parkinson’s Disease Patients Passing Through Doorways". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  22. Rajendran, J., Prasanna, P., Ravindran, B., and Khapra, M. M. (2015) "ADAAPT: A Deep Architecture for Adaptive Policy Transfer from Multiple Sources". arXiv

  23. Saha, A., Misra, R., and Ravindran, B. (2015) "Scalable Bayesian Matrix Factorization". Accepted at the Sixth International Workshop on Mining Ubiquitous and Social Environments (MUSE), co-located with ECML/PKDD 2015, Porto, Portugal.

  24. Garlapati, A., Raghunathan, A., Nagarajan, V., and Ravindran, B. (2015) "A Reinforcement Learning Approach to Online Learning of Decision Trees". Accepted at the European Workshop on Reinforcement Learning (EWRL 2015). arXiv

    2014

  25. Prasanna, P., Sarath Chandar, A. P., and Ravindran, B. (2014) "iBayes: A Thompson Sampling Approach to Reinforcement Learning with Instructions". Presented at the NIPS workshop on Novel Trends and Applications in Reinforcement Learning.

  26. Pasumarthi, R. K., Narayanam, R., and Ravindran, B. (2014) "Targeted Influence Maximization through a Social Network". Presented at the NIPS workshop on Networks: From Graphs to Rich Data. PDF

  27. Gupte, P. V. and Ravindran, B. (2014) "Scalable Positional Analysis for Studying Evolution of Nodes in Networks". Accepted for presentation in the Workshop on Mining Networks and Graphs, at the SIAM Conference on Data Mining (SDM 14). arXiv

    2013

  28. Sarath Chandar, A. P., Khapra, M., Ravindran, B., Raykar, V., and Saha, A. (2013) "Multilingual Deep Learning". In the Deep Learning Workshop at NIPS 2013.

  29. Kumar, P., Narasimhan, N., and Ravindran, B. (2013) "Spectral Clustering as Mapping to a Simplex". Accepted at the 2013 ICML workshop on Spectral Learning. Atlanta, GA, USA.

  30. Jain, S. K., Satchidanand, S. N., Maurya, A. K., and Ravindran, B. (2013) "Studying Indian Railways Network using Hypergraphs". Accepted as a poster presentation at the International School and Conference on Network Science (NetSci 2013). Copenhagen, Denmark.

    2012

  31. Pradyot, K. V. N., Manimaran, S. S., Ravindran, B., and Natarajan, S. (2012) "Integrating Human Instructions and Reinforcement Learners: An SRL Approach". In the Proceedings of the UAI workshop on Statistical Relational AI (StarAI 2012).

  32. Kumar, P., Mathew, V., and Ravindran, B. (2012) "Abstraction in Reinforcement Learning in Terms of Metastability". In the Proceedings of the European Workshop on Reinforcement Learning (EWRL 2012).

  33. Chaganty, A., and Ravindran, B. (2012) "Discovering Continuous Homomorphisms for Transfer". In the Proceedings of the European Workshop on Reinforcement Learning (EWRL 2012).

  34. Gupte, P. and Ravindran, B. (2012) "Multiple Epsilon Equitable Partitions - Roles and Positional Analysis for Real World Networks". Accepted for presentation at the Thirty Second Sunbelt Social Networks Conference (Sunbelt XXXII).

  35. Dharwez, S., Shivashankar, S., and Ravindran, B. (2012) "Why Collective Classification is Successful?: A Homophily based Analysis on Network Data". Accepted for presentation at the Thirty Second Sunbelt Social Networks Conference (Sunbelt XXXII).

    2011

  36. Pradyot, K. V. N. and Ravindran, B. (2011) "Beyond Rewards: Learning with Richer Supervision". In the Proceedings of the Ninth European Workshop on Reinforcement Learning (EWRL 2011).

  37. Saravanan, M., Bharanidharan, S., and Ravindran, B. (2011) "Collective Learning of the Community Effect on Churn". Accepted at the Workshop on Collective Learning and Inference on Structured Data, held in conjunction with the Twenty Second European Conference on Machine Learning (ECML PKDD 2011).

    2009

  38. Malpani, A., Ravindran, B., and Murthy, H. A. (2009) "Personalized Intelligent Tutoring System Using Reinforcement Learning". Presented at the Multidisciplinary Symposium on Reinforcement Learning. (Preliminary version of the paper at FLAIRS 2011.)

  39. Mohamed, M., Chakravarthy, V. S., Subramanian, D., and Ravindran, B. (2009) "The Role of Basal Ganglia in Performing Simple Reaching Movements: A Computational Model". Presented at the Multidisciplinary Symposium on Reinforcement Learning. (Shorter version of the paper at IGS09.)

    2008

  40. Balaji, L. and Ravindran, B. (2008) "Transfer Learning with Differently-abled Robots". In the IROS Workshop on Robotics Challenges for Machine Learning.

  41. Cheboli, D. and Ravindran, B. (2008) "Detection of Keratoconus by Semi-Supervised Learning". In the ICML/UAI Workshop on Machine Learning in Health Care Applications. Abstract  PDF

    2007

  42. Jayarajan, D., Deodhare, D., Ravindran, B., and Sarkar, S. (2007) "Document Clustering using Lexical Chains". In the Proceedings of the Workshop on Text-Mining & Link-Analysis (TextLink 2007). Abstract  PDF

    2006

  43. Awasthi, P., Rao, D. G., and Ravindran, B. (2006) "Part Of Speech Tagging and Chunking with HMM and CRF". In the Proceedings of the NLPAI Machine Learning Contest 2006, Mumbai, India.

    2005

  44. Saravanan, M., Ravindran, B., and Raman, S. (2005) "A Review of Automatic Summarization". Presented at the Workshop on Optical Character Recognition with Workflow and Document Summarization, IIIT Allahabad, March 19-20.

  45. Saravanan, M., Ravindran, B., and Raman, S. (2005) "Learn to Teach Autistic Children". Presented at the National Conference on Computational Intelligence (St. Joseph's College, Trichy), Feb 16-18. (No printed proceedings.)