[HN Gopher] Maximum Likelihood Estimation and Loss Functions
___________________________________________________________________
Maximum Likelihood Estimation and Loss Functions
Author : snprajwal
Score : 34 points
Date : 2024-12-15 18:07 UTC (4 hours ago)
(HTM) web link (rish-01.github.io)
(TXT) w3m dump (rish-01.github.io)
| skzv wrote:
| To bring things full circle: the cross-entropy loss is the KL
| divergence. So intuitively, when you're minimizing cross-entropy
| loss, you're trying to minimize the "divergence" between the true
| distribution and your model distribution.
|
| This intuition really helped me understand CE loss.
  | sidr wrote:
  | Cross-entropy is not the KL divergence: cross-entropy contains
  | an additional term, the entropy of the data distribution, which
  | does not depend on the model. Since that term is constant with
  | respect to the model, you're still right that minimizing one is
  | equivalent to minimizing the other.
  |
  | https://stats.stackexchange.com/questions/357963/what-is-the...
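The decomposition described above (cross-entropy = data entropy + KL divergence) can be checked numerically; a minimal sketch with made-up discrete distributions:

```python
import math

# Hypothetical distributions over three outcomes (illustrative values).
p = [0.5, 0.3, 0.2]  # "true" data distribution
q = [0.4, 0.4, 0.2]  # model distribution

# Entropy of the data distribution: H(p) = -sum_i p_i * log(p_i)
entropy_p = -sum(pi * math.log(pi) for pi in p)

# Cross-entropy: H(p, q) = -sum_i p_i * log(q_i)
cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# KL divergence: D_KL(p || q) = sum_i p_i * log(p_i / q_i)
kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identity: H(p, q) = H(p) + D_KL(p || q).
# H(p) does not involve q, so minimizing cross-entropy over the
# model q is the same as minimizing the KL divergence.
assert abs(cross_entropy - (entropy_p + kl)) < 1e-12
```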
___________________________________________________________________
(page generated 2024-12-15 23:00 UTC)