Yoel Zeldes 7/16/2018

Neural Networks gone wild! They can sample from discrete distributions now!


This technical article addresses the challenge of training neural networks with stochastic nodes that sample from discrete distributions, where gradients cannot propagate through the sampling step. It introduces the Gumbel-Max trick and the Gumbel-Softmax (or Concrete) distribution as solutions that make the sampling step differentiable and thus amenable to gradient-based optimization. The piece includes a breakdown of the Gumbel distribution and a practical, coded toy example for implementation.
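For orientation, the core idea can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the article's own toy example: the function names (sample_gumbel, gumbel_max_sample, gumbel_softmax_sample) and the temperature value are assumptions chosen for clarity. The Gumbel-Max trick adds Gumbel(0, 1) noise to the logits and takes an argmax to get an exact categorical sample; the Gumbel-Softmax relaxation replaces the argmax with a temperature-controlled softmax so the result is differentiable with respect to the logits.

    import numpy as np

    def sample_gumbel(shape, eps=1e-20):
        # Gumbel(0, 1) samples via inverse transform: -log(-log(U)), U ~ Uniform(0, 1)
        u = np.random.uniform(size=shape)
        return -np.log(-np.log(u + eps) + eps)

    def gumbel_max_sample(logits):
        # Gumbel-Max trick: argmax of (logits + Gumbel noise) is an exact
        # sample from the categorical distribution defined by the logits
        return np.argmax(logits + sample_gumbel(logits.shape))

    def gumbel_softmax_sample(logits, temperature=0.5):
        # Gumbel-Softmax / Concrete relaxation: a temperature-controlled softmax
        # over the noisy logits yields a near-one-hot, differentiable sample
        y = (logits + sample_gumbel(logits.shape)) / temperature
        e = np.exp(y - y.max())          # numerically stable softmax
        return e / e.sum()

    logits = np.log(np.array([0.1, 0.6, 0.3]))   # unnormalized log-probabilities
    print(gumbel_max_sample(logits))             # hard sample, e.g. 1
    print(gumbel_softmax_sample(logits, 0.5))    # soft, near-one-hot vector

Lowering the temperature makes the relaxed samples closer to one-hot vectors (at the cost of higher-variance gradients), while raising it smooths them out; the article's toy example explores this trade-off in more detail.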

