Devi Ganesan

I have listed below some of my AIDB Labtalks along with links to references.

Cargo Cult Science - on Nov 28, 2017

Some remarks on science, pseudoscience, and learning how not to fool yourself. Based on Richard P. Feynman’s 1974 Caltech commencement address.

Labtalk PPT

Semantic Word Embeddings - on June 22, 2017

The popular skip-gram model of word embeddings is completely data-driven. In the paper presented in this talk, the authors propose a general framework for incorporating lexical semantic knowledge into the data-driven neural model: semantic knowledge is represented as ranking inequalities, and learning is posed as a constrained optimization problem solved with standard stochastic gradient descent (SGD).
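
To make this concrete, here is a minimal, hypothetical sketch in Python (my own toy example, not the authors’ code or data): word vectors are trained with plain SGD on a toy co-occurrence objective, plus a hinge penalty that pushes the vectors to satisfy ranking inequalities of the form sim(target, semantically closer word) ≥ sim(target, semantically farther word) + margin.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "nice", "bad", "car"]
idx = {w: i for i, w in enumerate(vocab)}
dim, lr, margin = 8, 0.05, 0.5
V = rng.normal(scale=0.1, size=(len(vocab), dim))  # one vector per word

# Data-driven signal (stand-in for skip-gram): observed pairs should have a high dot product.
cooccur = [("good", "nice"), ("bad", "car")]
# Lexical knowledge as ranking inequalities: (target, closer word, farther word).
constraints = [("good", "nice", "bad")]

def sim(a, b):
    return V[idx[a]] @ V[idx[b]]

for step in range(100):
    # SGD step on the data term: minimize -sim(a, b) for observed pairs.
    for a, b in cooccur:
        V[idx[a]] += lr * V[idx[b]]
        V[idx[b]] += lr * V[idx[a]]
    # SGD step on the hinge penalty: max(0, margin + sim(w, far) - sim(w, near)).
    for w, near, far in constraints:
        if sim(w, near) < sim(w, far) + margin:  # constraint violated
            V[idx[w]] -= lr * (V[idx[far]] - V[idx[near]])
            V[idx[near]] += lr * V[idx[w]]
            V[idx[far]] -= lr * V[idx[w]]

print("sim(good, nice):", round(sim("good", "nice"), 2))
print("sim(good, bad): ", round(sim("good", "bad"), 2))

A real model would replace the toy co-occurrence term with a full skip-gram objective trained over a corpus; the sketch only shows how ranking constraints can enter the same SGD loop.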

Labtalk PPT

References:
J. Bian, B. Gao, and T.-Y. Liu, “Knowledge-Powered Deep Learning for Word Embedding,” ECML PKDD, 2014.
M. Faruqui, J. Dodge, S. K. Jauhar, C. Dyer, E. Hovy, and N. A. Smith, “Retrofitting Word Vectors to Semantic Lexicons,” 2014.
M. Yu and M. Dredze, “Improving Lexical Embeddings with Semantic Knowledge,” ACL, 2014, pp. 545–550.

Word Embeddings - on June 20, 2017

Capturing the meaning of a word is a hard task. Distributional semantics helps capture word meanings by representing a word in terms of its co-occurrences. The term ‘Word Embedding’, which is prevalent in the neural network literature, is synonymous with ‘Distributional Semantic Model’. I discuss various methods for extracting distributional representations of words and highlight the advantage of rich word representations from a feature-extraction perspective.
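
As a toy illustration of the distributional idea (my own example in Python, not taken from the talk slides), the snippet below represents each word by its co-occurrence counts within a ±1-word window, so words used in similar contexts end up with similar vectors.

from collections import Counter, defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 1  # count neighbours within +/- 1 position

counts = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            counts[w][corpus[j]] += 1  # word w is represented by its context counts

def cosine(u, v):
    num = sum(u[k] * v[k] for k in set(u) & set(v))
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

# "cat" and "dog" never co-occur, yet they share contexts ("the", "sat"),
# so their distributional vectors are close; "cat" and "sat" share no contexts.
print(round(cosine(counts["cat"], counts["dog"]), 2))  # 1.0
print(round(cosine(counts["cat"], counts["sat"]), 2))  # 0.0

Prediction-based models such as word2vec (Mikolov et al., 2013, referenced below) learn dense, low-dimensional vectors from the same co-occurrence signal instead of using raw counts.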

Labtalk PPT

References:
CS7015 – Deep Learning – course slides.
Jurafsky, Daniel, and James H. Martin. “Vector Semantics.” Speech and Language Processing.
Mikolov, Tomas, et al. “Efficient estimation of word representations in vector space.” arXiv preprint arXiv:1301.3781 (2013).
Mikolov, Tomas, Wen-tau Yih, and Geoffrey Zweig. “Linguistic Regularities in Continuous Space Word Representations.” HLT-NAACL. Vol. 13. 2013.
LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. “Deep learning.” Nature 521.7553 (2015): 436–444.