In some applications the gradient–vector product `dot(∇ϕx, θ)` is significantly cheaper to compute than the full gradient `∇ϕx`. For the samplers where supporting this makes sense, it would be nice to implement it.

(It would be pretty cool if `∇ϕx` could be lazy in some sense: `dot(∇ϕx, v)` would evaluate the directional derivative, while `reflect!(∇ϕx, ...` would force a full gradient evaluation.)
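A minimal sketch of what such a lazy gradient could look like; `LazyGradient`, `dirderiv`, `fullgrad`, and `force!` are hypothetical names invented for illustration, and the `reflect!` below is a generic reflection in the hyperplane orthogonal to the gradient, not any sampler's actual method:

```julia
using LinearAlgebra

# Hypothetical sketch: cheap directional derivatives via `dirderiv`,
# full gradient via `fullgrad`, materialized only on demand and memoized.
mutable struct LazyGradient{Fd,Fg}
    dirderiv::Fd                          # v -> dot(∇ϕx, v), cheap
    fullgrad::Fg                          # () -> ∇ϕx, expensive
    cache::Union{Nothing,Vector{Float64}} # memoized full gradient
end
LazyGradient(fd, fg) = LazyGradient(fd, fg, nothing)

# Cheap path: evaluate the directional derivative without forming the
# full gradient (unless it was already forced, then reuse the cache).
LinearAlgebra.dot(g::LazyGradient, v::AbstractVector) =
    g.cache === nothing ? g.dirderiv(v) : dot(g.cache, v)

# Forcing path: compute and memoize the full gradient.
force!(g::LazyGradient) =
    g.cache === nothing ? (g.cache = g.fullgrad()) : g.cache

# Reflecting a velocity v in the hyperplane orthogonal to ∇ϕx needs
# the full gradient, so this forces evaluation.
function reflect!(v::AbstractVector, g::LazyGradient)
    ∇ϕx = force!(g)
    v .-= (2 * dot(∇ϕx, v) / dot(∇ϕx, ∇ϕx)) .* ∇ϕx
    return v
end

# Example with ϕ(x) = ‖x‖²/2, so ∇ϕx == x and dot(∇ϕx, v) == dot(x, v):
x, v = randn(3), randn(3)
g = LazyGradient(u -> dot(x, u), () -> copy(x))
dot(g, v)      # cheap: no full gradient formed
reflect!(v, g) # forces and caches the full gradient
```

With this shape, samplers that only need rate evaluations keep hitting the cheap path, and the full gradient is paid for at most once per event.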