A Set of Shannon Entropy

Shannon Entropy For a discrete random variable $X$ with events $\{ x_1, \dots, x_n \}$ and probability mass function $P(X)$, we define the Shannon entropy $H(X)$ as $$H(X) = E[-\log_b P(X)] = - \sum_{i=1}^{n} P(x_i) \log_b P(x_i)$$ where $b$ is the base of the logarithm. The unit of Shannon entropy is the bit for $b = 2$ and the nat for $b = e$. The Perspective of Venn Diagram We can illustrate the relations between joint entropy, conditional entropy, and mutual information with the following figure...
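To make the definition concrete, here is a minimal Python sketch of $H(X)$; the function name `shannon_entropy` and the example distributions are illustrative assumptions, not taken from the post:

```python
import math

def shannon_entropy(pmf, base=2):
    """Shannon entropy H(X) = -sum_i P(x_i) * log_b(P(x_i)).

    pmf  : iterable of probabilities P(x_i); events with P(x_i) = 0
           contribute nothing, by the convention 0 * log 0 = 0.
    base : 2 yields bits, math.e yields nats.
    """
    return -sum(p * math.log(p, base) for p in pmf if p > 0)

# A fair coin is maximally uncertain: 1 bit.
print(shannon_entropy([0.5, 0.5]))               # 1.0
# A biased coin is less uncertain, hence lower entropy.
print(shannon_entropy([0.9, 0.1]))               # ~0.4690
# The same fair coin measured in nats (base e).
print(shannon_entropy([0.5, 0.5], base=math.e))  # ~0.6931
```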

February 23, 2021 · 3 min · SY Chou

A Guide Of Variational Lower Bound
  [draft]

Problem Setup The Variational Lower Bound is also known as the Evidence Lower Bound (ELBO) or VLB. It is quite useful because it gives a tractable lower bound for a model containing a hidden variable; furthermore, we can maximize the bound in order to maximize the log probability. We assume that $X$ are the observations (data) and $Z$ are the hidden/latent variables, which are unobservable. In general, we can also view $Z$ as a parameter, and the relationship between $Z$ and $X$ is represented as the following...
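For reference, the bound in question follows in one line from Jensen's inequality applied to an arbitrary variational distribution $q(Z)$ (this is the standard textbook derivation, sketched here rather than quoted from the post): $$\log P(X) = \log \sum_{Z} q(Z) \frac{P(X, Z)}{q(Z)} \geq \sum_{Z} q(Z) \log \frac{P(X, Z)}{q(Z)} = \mathcal{L},$$ with equality when $q(Z) = P(Z \mid X)$, so maximizing $\mathcal{L}$ over $q$ tightens the bound on $\log P(X)$.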

February 23, 2021 · 4 min · SY Chou