
Why compute Bernoulli entropy in this way? #13

Open
lucas-yang256 opened this issue May 1, 2020 · 0 comments

Comments


lucas-yang256 commented May 1, 2020

As written in the code here, the Bernoulli entropy is computed as follows:

def logit_bernoulli_entropy(logits_B):
    ent_B = (1. - tensor.nnet.sigmoid(logits_B)) * logits_B - logsigmoid(logits_B)
    return ent_B
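For context, the snippet is Theano code and relies on a `logsigmoid` helper. A minimal sketch of what that helper presumably does (assuming `logsigmoid(x) = log(sigmoid(x))`, written via softplus for numerical stability; this definition is my assumption, not copied from the repo):

```python
# Assumed helper for the snippet above (definition is an assumption, not from the repo):
# logsigmoid(a) = log(sigmoid(a)) = -softplus(-a), a numerically stable form.
from theano import tensor

def logsigmoid(a):
    return -tensor.nnet.softplus(-a)
```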

But this is different from the standard formula for binary entropy:
$-p\log p - (1-p)\log(1-p)$

Is there a relationship between these two expressions? Why does OpenAI compute the Bernoulli entropy this way? Is there any theoretical justification?
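One quick way to compare the two expressions is numerically. A minimal NumPy/SciPy sketch (using `scipy.special.expit` as the sigmoid; the function names here are illustrative, not taken from the referenced repo):

```python
import numpy as np
from scipy.special import expit  # sigmoid

def logit_bernoulli_entropy(logits):
    # the code's form: (1 - sigmoid(x)) * x - log(sigmoid(x))
    return (1. - expit(logits)) * logits - np.log(expit(logits))

def binary_entropy(p):
    # textbook form: -p*log(p) - (1-p)*log(1-p)
    return -p * np.log(p) - (1. - p) * np.log(1. - p)

logits = np.linspace(-5., 5., 11)
p = expit(logits)

# The two agree, since for p = sigmoid(x) we have log(1 - p) = log(p) - x.
print(np.allclose(logit_bernoulli_entropy(logits), binary_entropy(p)))  # True
```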
