Yoel Zeldes · 3/22/2018

Gated Multimodal Units for Information Fusion

This technical article details the Gated Multimodal Unit (GMU), a neural network component for multimodal information fusion. It explains the GMU's gating mechanism, which allows a model to dynamically weight input from different modalities (e.g., vision and text) based on their relevance. The post includes the model's equations and a practical implementation on a synthetic dataset to demonstrate how the GMU learns to ignore noisy input channels.
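For a concrete picture of the fusion step, the sketch below renders the two-modality GMU equations from the underlying paper (Arevalo et al., "Gated Multimodal Units for Information Fusion"): each modality is projected through a tanh layer, and a sigmoid gate computed from the concatenated inputs decides how much each modality contributes to the fused representation. This is a minimal PyTorch sketch, not the post's own implementation; the framework choice and the names visual_dim, text_dim, and hidden_dim are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GMU(nn.Module):
    """Minimal two-modality Gated Multimodal Unit (sketch).

    h_v = tanh(W_v x_v)             # hidden representation of the visual input
    h_t = tanh(W_t x_t)             # hidden representation of the textual input
    z   = sigmoid(W_z [x_v; x_t])   # gate weighting the two modalities
    h   = z * h_v + (1 - z) * h_t   # fused representation
    """

    def __init__(self, visual_dim: int, text_dim: int, hidden_dim: int):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim, bias=False)
        self.text_proj = nn.Linear(text_dim, hidden_dim, bias=False)
        self.gate = nn.Linear(visual_dim + text_dim, hidden_dim, bias=False)

    def forward(self, x_v: torch.Tensor, x_t: torch.Tensor) -> torch.Tensor:
        h_v = torch.tanh(self.visual_proj(x_v))
        h_t = torch.tanh(self.text_proj(x_t))
        # The gate sees both raw inputs and outputs a per-dimension weight in (0, 1).
        z = torch.sigmoid(self.gate(torch.cat([x_v, x_t], dim=-1)))
        return z * h_v + (1 - z) * h_t
```

Trained end-to-end on a synthetic dataset like the one described in the post, the gate z can be inspected after training: when one channel carries only noise, its weights are pushed toward zero, which is how the GMU "learns to ignore" that modality.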
