A Differentiable Generative Adversarial Network for Open Domain Dialogue
López Zorrilla, Asier
De Velasco Vázquez, Mikel
Torres Barañano, María Inés
IWSDS 2019, Siracusa, Italy, April 24-26, 2019
This work presents a novel methodology for training open domain neural dialogue systems within the framework of Generative Adversarial Networks using gradient-based optimization methods. We avoid the non-differentiability inherent to text-generating networks by approximating the word vector corresponding to each generated token via a top-k softmax. We show that a weighted average of the word vectors of the most probable tokens, computed from the probabilities given by the top-k softmax, yields a good approximation of the word vector of the generated token. Finally, we demonstrate through a human evaluation process that training a neural dialogue system via adversarial learning with this method successfully discourages it from producing generic responses; instead, it tends to produce more informative and varied ones.
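The core idea can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration (not the paper's actual implementation): given the decoder's logits over the vocabulary, we keep the k largest, renormalise them with a softmax, and return the probability-weighted average of the corresponding embedding rows. Because every step is smooth in the logits, gradients from the discriminator can flow back through this approximation.

```python
import numpy as np

def topk_softmax_embedding(logits, embeddings, k=5):
    """Differentiable approximation of a generated token's word vector:
    softmax over the k most probable tokens, then a probability-weighted
    average of their embedding vectors (sketch of the top-k softmax idea)."""
    top_idx = np.argsort(logits)[-k:]            # indices of the k largest logits
    top_logits = logits[top_idx]
    exp = np.exp(top_logits - top_logits.max())  # numerically stable softmax
    probs = exp / exp.sum()
    return probs @ embeddings[top_idx]           # weighted average word vector
```

As k shrinks toward 1 (or the softmax is sharpened), the weighted average approaches the embedding of the argmax token, which is why it serves as a good approximation of the discretely sampled word vector.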