End-to-End Memory Networks at NIPS 2015: Enhancing Neural Models with External Memory
Discover how End-to-End Memory Networks leverage a recurrent attention mechanism to effectively utilize large external memories, advancing neural network capabilities. Presented at NIPS 2015.

Rishabh Kumar
550 views • Aug 16, 2019

About this video
Title: End-To-End Memory Networks
Conference: NIPS 2015
A neural network with a recurrent attention model over a possibly large external memory. The architecture is a form of Memory Network (Weston et al., 2015) but unlike the model in that work, it is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings. It can also be seen as an extension of RNNsearch to the case where multiple computational steps (hops) are performed per output symbol. The flexibility of the model allows us to apply it to tasks as diverse as (synthetic) question answering and to language modeling. For the former our approach is competitive with Memory Networks, but with less supervision. For the latter, on the Penn TreeBank and Text8 datasets our approach demonstrates comparable performance to RNNs and LSTMs. In both cases we show that the key concept of multiple computational hops yields improved results.
Paper: https://arxiv.org/pdf/1503.08895.pdf
Slides: https://docs.google.com/presentation/d/14wuLoMK8jsdJKMhd_sGD6v1Ejfw6QVWDeDlLtdhOJPI/edit?usp=sharing
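The core mechanism the abstract describes, a soft attention read over an external memory repeated for several hops, can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' exact configuration: bag-of-words sentence vectors, weights shared across hops, and the matrix names (A, C, B, W) are illustrative assumptions.

```python
# Minimal sketch of a multi-hop memory read (MemN2N-style), assuming
# bag-of-words inputs and embedding matrices shared across hops.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memn2n_answer(story, query, A, C, B, W, hops=3):
    """One forward pass of the sketch.

    story : (num_sentences, vocab) bag-of-words sentence vectors
    query : (vocab,) bag-of-words question vector
    A, C  : (vocab, dim) input/output memory embeddings
    B     : (vocab, dim) question embedding
    W     : (dim, vocab) final answer projection
    """
    u = query @ B                      # internal state from the question
    for _ in range(hops):              # multiple computational hops
        m = story @ A                  # input memory representations
        c = story @ C                  # output memory representations
        p = softmax(m @ u)             # attention weights over memory slots
        o = p @ c                      # weighted read from memory
        u = u + o                      # update controller state with the read
    return softmax(u @ W)              # distribution over candidate answers

# Toy usage with random data (shapes are the only point here)
rng = np.random.default_rng(0)
vocab, dim, n_sent = 20, 16, 5
story = rng.integers(0, 2, size=(n_sent, vocab)).astype(float)
query = rng.integers(0, 2, size=vocab).astype(float)
A, C, B = (rng.normal(size=(vocab, dim)) for _ in range(3))
W = rng.normal(size=(dim, vocab))
print(memn2n_answer(story, query, A, C, B, W).shape)  # (vocab,)
```

Because each hop only uses the soft attention weights p (no hard memory addressing), the whole computation is differentiable and can be trained end-to-end with backpropagation, which is the source of the reduced supervision the abstract mentions.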
Video Information
Views: 550
Likes: 1
Duration: 7:54
Published: Aug 16, 2019