Unpublished Manuscripts

    2019

  1. Maurya, D., Ravindran, B., Narasimhan, S. (2019) "Hypergraph Partitioning using Tensor Eigenvalue Decomposition". Accepted as a poster presentation at the Sets & Partitions workshop in NeurIPS 2019.

  2. Saphal, R., Ravindran, B., Mudigere, D., Avancha, S., Kaul, B. (2019) "SEERL: Sample Efficient Ensemble Reinforcement Learning". Accepted as a poster presentation at the Deep Reinforcement Learning workshop in NeurIPS 2019.

  3. Narayanaswami, S. K., Sudarsanam, N., Ravindran, B. (2019) "An active learning framework for efficient robust policy search". Accepted as a poster presentation at the Safety and Robustness in Decision Making workshop in NeurIPS 2019.

  4. Kamarthi, H., Vijayan, P., Wilder, B., Ravindran, B., Tambe, M. (2019) "Network discovery using Reinforcement Learning". Accepted as a poster presentation at the Graph Representation Learning workshop in NeurIPS 2019. PDF

  5. Maurya, D., Ravindran, B., Narasimhan, S. (2019) "Hyperedge Prediction using Tensor Eigenvalue Decomposition". Accepted as a poster presentation at the Tensor Methods for Emerging Data Science Challenges (TMEDSC) workshop in KDD 2019.

  6. Moghe, N., Vijayan, P., Ravindran, B., Khapra, M. (2019) "On Incorporating Structural Information to Improve Dialogue Response Generation". Accepted as a poster presentation at the first annual EurNLP Summit (EurNLP). London.

  7. Kamarthi, H., Vijayan, P., Wilder, B., Ravindran, B., Tambe, M. (2019) "Learning policies for Social network discovery with Reinforcement learning" arXiv

  8. Madan, R., Santara, A., Ravindran, B., Mitra P. (2019) "ExTra: Transfer-guided Exploration" arXiv Accepted as a poster presentation at the Montreal AI Symposium. Montreal, Canada.

  9. Ghose, A., Ravindran, B. (2019) "Learning Interpretable Models Using an Oracle" arXiv

  10. Ghose, A., Ravindran, B. (2019) "Optimal Resampling for Learning Small Models" arXiv

  11. Gurukar, S., Vijayan, P., Srinivasan, A., Bajaj, G., Cai, C., Keymanesh, M., Kumar, S., Maneriker, P., Mitra, A., Patel, V., Ravindran, B., Parthasarathy, S. (2019) "Network Representation Learning: Consolidation and Renewed Bearing" arXiv

  12. Kumar, H., Ravindran, B. (2019) "Polyphonic Music Composition with LSTM Neural Networks and Reinforcement Learning" arXiv

  13. Narayanaswami, S.K., Sudarsanam, N., Ravindran, B. (2019) "An Active Learning Framework for Efficient Robust Policy Search" arXiv

    2018

  14. Kumar, T., Vaidyanathan, S., Ananthapadmanabhan, H., Parthasarathy, S., Ravindran, B. (2018) "Hypergraph Clustering: A Modularity Maximization Approach" arXiv

  15. Deshpande, A., Ravindran, B. (2018) "Discovering hierarchies using Imitation Learning from hierarchy aware policies" arXiv

  16. Vijayan, P., Chandak, Y., Khapra, M.M., Ravindran, B. (2018) "HOPF: Higher Order Propagation Framework for Deep Collective Classification" arXiv Presented at the Eighth International Workshop on Statistical Relational AI at the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018).

  17. Vijayan, P., Chandak, Y., Khapra, M.M., Ravindran, B. (2018) "Fusion Graph Convolutional Networks" arXiv Presented at the 14th International Workshop on Machine Learning with Graphs, 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2018). (Expanded version of NIPS 2016 workshop paper)

  18. Dewangan, P., Phaniteja, S., Krishna, K.M., Sarkar, A., Ravindran, B. (2018) "DiGrad: Multi-Task Reinforcement Learning with Shared Actions" arXiv

    2017

  19. Sudarsanam, N., Kumar, N., Sharma, A., Ravindran, B. (2017) "Rate of Change Analysis for Interestingness Measures" arXiv

  20. Menon, R.R., Ravindran, B. (2017) "Shared Learning: Enhancing Reinforcement in Q-Ensembles" arXiv

  21. Sharma, S., Suresh, A., Ramesh, R. and Ravindran, B. (2017) "Learning to Factor Policies and Action-Value Functions: Factored Action Space Representations for Deep Reinforcement learning". arXiv

  22. Sharma, S., Ramesh S., Raguvir G. and Ravindran, B. (2017) "Learning to Mix n-Step Returns: Generalizing lambda-Returns for Deep Reinforcement Learning". arXiv

  23. Sharma, S., Jha, A., Hegde, P. and Ravindran, B. (2017) "Learning to Multi-Task by Active Sampling". arXiv (Expanded version of the ICLR 2017 workshop paper)

  24. Deshpande, P., and Ravindran, B. (2017) "MCEIL: An Improved Scoring Function for Overlapping Community Detection using Seed Expansion Methods". Presented at the Sixth International Workshop on Social Networks Analytics in Applications (SNAA 2017) held with the Ninth IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM 2017), Sydney, Australia.

  25. Ganapathy, S., Venkataramani, S., Ravindran, B., and Raghunathan, A. (2017) "DyVEDeep: Dynamic Variable Effort Deep Neural Networks". arXiv

  26. Menon, R., and Ravindran, B. (2017) "Prediction Error-based Transfer in Q-Ensembles". Accepted at the 2017 NIPS Deep Reinforcement Learning Symposium. PDF

  27. Sharma, S., Ravindran, B. (2017) "Online Multi-Task Learning Using Active Sampling". Accepted at the Fifth International Conference on Learning Representations (ICLR 2017) Workshop Track.

    2016

  28. Mishra, P., and Ravindran, B. (2016) "A Developmental Approach to Learning Affordances". Accepted at the 2016 NIPS workshop on Continual Learning and Deep Learning. PDF

  29. Ansari, G. A., Sagar, J. P., Chandar, S., and Ravindran, B. (2016) "Language Expansion In Text-Based Games". Accepted at the 2016 NIPS workshop on Deep Reinforcement Learning. PDF

  30. Choudhary, M., Muthuravichandran, G., and Ravindran, B. (2016) "Imitation Learning by Programs". Accepted at the 2016 NIPS workshop on Deep Reinforcement Learning. PDF

  31. Bangaru, S. P., Suhas, J., Ravindran, B. (2016) "Exploration for Multi-task Reinforcement Learning with Deep Generative Models". Accepted at the 2016 NIPS workshop on Deep Reinforcement Learning. arXiv

  32. Prasad, V., Singh, S., Pareekutty, N., Ravindran, B. and Krishna, M. (2016) "SLAM-Safe Planner: Preventing Monocular SLAM Failure using Reinforcement Learning", arXiv

  33. Lakshminarayanan, A. S., Sharma, S., and Ravindran, B. (2016) "Dynamic Frame skip Deep Q Network". Accepted at the IJCAI Workshop on Deep Reinforcement Learning: Frontiers and Challenges, New York City, July 2016. arXiv version.

  34. Sudarsanam, N. and Ravindran, B. (2016) "Linear Bandit algorithms using the Bootstrap". Expanded version of the RLDM poster below. arXiv

  35. Krishnamurthy, R., Lakshminarayanan, A. S., Kumar, P., Ravindran, B. (2016) "Hierarchical Reinforcement Learning using Spatio-Temporal Abstractions and Deep Neural Networks". Accepted at the ICML Workshop on Abstraction in Reinforcement Learning, New York City, June 2016. arXiv version.

    2015

  36. Prasanna, P., Chandar, S., and Ravindran, B. (2015) "TSEB: More Efficient Thompson Sampling for Policy Learning". Expanded version of the RLDM poster below. arXiv

  37. Sudarsanam, N., Ravindran, B., and Saha, A. (2015) "Bootstrapped Linear Bandits". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  38. Nagarajan, V., and Ravindran, B. (2015) "KWIK Inverse Reinforcement Learning". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  39. Krishnamurthy, R., Kumar, P., Nainani, N., and Ravindran, B. (2015) "Hierarchical Decision Making using Spatio-Temporal Abstractions In Reinforcement Learning". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  40. Prasanna, P., Chandar, S., and Ravindran, B. (2015) "Thompson Sampling with Adaptive Exploration Bonus for Near-Optimal Policy Learning". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  41. Muralidharan, V., Balasubramani, P., Chakravarthy, S., Ravindran, B., Lewis, S., and Moustafa, A. (2015) "A Computational Model of Gait Changes in Parkinson’s Disease Patients Passing Through Doorways". Accepted at the 2nd Multidisciplinary Conference on Reinforcement Learning and Decision Making, Alberta, Canada, June 2015.

  42. Rajendran, J., Prasanna, P., Ravindran, B., and Khapra, M. M. (2015) "ADAAPT: A Deep Architecture for Adaptive Policy Transfer from Multiple Sources". arXiv

  43. Saha, A., Misra, R., and Ravindran, B. (2015) "Scalable Bayesian Matrix Factorization". Accepted at the Sixth International Workshop on Mining Ubiquitous and Social Environments (MUSE), co-located with ECML/PKDD 2015, Porto, Portugal.

  44. Garlapati, A., Raghunathan, A., Nagarajan, V., and Ravindran, B. (2015) "A Reinforcement Learning Approach to Online Learning of Decision Trees". Accepted at the European Workshop on Reinforcement Learning (EWRL 2015). arXiv

    2014

  45. Prasanna, P., Sarath Chandar, A. P., and Ravindran, B. (2014) "iBayes: A Thompson Sampling Approach to Reinforcement Learning with Instructions". Presented at the NIPS workshop on Novel Trends and Applications in Reinforcement Learning.

  46. Pasumarthi, R. K., Narayanam, R., and Ravindran, B. (2014) "Targeted Influence Maximization through a Social Network". Presented at the NIPS workshop on Networks: From Graphs to Rich Data. PDF

  47. Gupte, P. V. and Ravindran, B. (2014) "Scalable Positional Analysis for Studying Evolution of Nodes in Networks". Accepted for presentation in the Workshop on Mining Networks and Graphs, at the SIAM Conference on Data Mining (SDM 14). arXiv

    2013

  48. Sarath Chandar, A. P., Khapra, M., Ravindran, B., Raykar, V., and Saha, A. (2013) "Multilingual Deep Learning". In the Deep Learning Workshop at NIPS 2013.

  49. Kumar, P., Narasimhan, N., and Ravindran, B. (2013) "Spectral Clustering as Mapping to a Simplex". Accepted at the 2013 ICML workshop on Spectral Learning. Atlanta, GA, USA.

  50. Jain, S. K., Satchidanand, S. N., Maurya, A. K., and Ravindran, B. (2013) "Studying Indian Railways Network using Hypergraphs". Accepted as a poster presentation at the International School and Conference on Network Science (NetSci 2013). Copenhagen, Denmark.

    2012

  51. Pradyot, K. V. N., Manimaran, S. S., Ravindran, B., and Natarajan, S. (2012) "Integrating Human Instructions and Reinforcement Learners: An SRL Approach". In the Proceedings of the UAI workshop on Statistical Relational AI (StarAI 2012).

  52. Kumar, P., Mathew, V., and Ravindran, B. (2012) "Abstraction in Reinforcement Learning in Terms of Metastability". In the Proceedings of the European Workshop on Reinforcement Learning (EWRL 2012).

  53. Chaganty, A., and Ravindran, B. (2012) "Discovering Continuous Homomorphisms for Transfer". In the Proceedings of the European Workshop on Reinforcement Learning (EWRL 2012).

  54. Gupte, P. and Ravindran, B. (2012) "Multiple Epsilon Equitable Partitions - Roles and Positional Analysis for Real World Networks". Accepted for presentation at the Thirty Second Sunbelt Social Networks Conference (Sunbelt XXXII).

  55. Dharwez, S., Shivashankar, S., and Ravindran, B. (2012) "Why Collective Classification is Successful?: A Homophily based Analysis on Network Data". Accepted for presentation at the Thirty Second Sunbelt Social Networks Conference (Sunbelt XXXII).

    2011

  56. Pradyot, K. V. N. and Ravindran, B. (2011) "Beyond Rewards: Learning with Richer Supervision". In the Proceedings of the Ninth European Workshop on Reinforcement Learning (EWRL 2011).

  57. Saravanan, M., Bharanidharan, S., and Ravindran, B. (2011) "Collective Learning of the Community Effect on Churn". Accepted at the Workshop on Collective Learning and Inference on Structured Data, held in conjunction with the Twenty Second European Conference on Machine Learning (ECML PKDD 2011).

    2009

  58. Malpani, A., Ravindran, B., and Murthy, H. A. (2009) "Personalized Intelligent Tutoring System Using Reinforcement Learning". Presented at the Multidisciplinary Symposium on Reinforcement Learning. (Preliminary version of the paper at FLAIRS 2011.)

  59. Mohamed, M., Chakravarthy, V. S., Subramanian, D., and Ravindran, B. (2009) "The Role of Basal Ganglia in Performing Simple Reaching Movements: A Computational Model". Presented at the Multidisciplinary Symposium on Reinforcement Learning. (Shorter version of the paper at IGS09.)

    2008

  60. Balaji, L. and Ravindran, B. (2008) "Transfer Learning with Differently-abled Robots". In the IROS Workshop on Robotics Challenges for Machine Learning.

  61. Cheboli, D. and Ravindran, B. (2008) "Detection of keratoconus by semi-supervised learning". In the ICML/UAI workshop on Machine Learning in health care applications. Abstract PDF

    2007

  62. Jayarajan, D., Deodhare, D., Ravindran, B., and Sarkar, S. (2007) "Document Clustering using Lexical Chains". In the Proceedings of the Workshop on Text-Mining & Link-Analysis (TextLink 2007). Abstract PDF

    2006

  63. Awasthi, P., Rao, D. G., and Ravindran, B. (2006) "Part Of Speech Tagging and Chunking with HMM and CRF". In the Proceedings of the NLPAI Machine Learning Contest 2006, Mumbai, India.

    2005

  64. Saravanan, M., Ravindran, B., and Raman, S. (2005) "A Review of Automatic Summarization". Presented at the Workshop on Optical Character Recognition with Workflow and Document Summarization, IIIT Allahabad, March 19-20.

  65. Saravanan, M., Ravindran, B., and Raman, S. (2005) "Learn to Teach Autistic Children". Presented at the National Conference on Computational Intelligence (St. Joseph's College, Trichy), Feb 16-18. (No printed proceedings.)