Likelihood gradient acceleration: computing from the Sigma matrix #29
This pull request adds a function to compute the likelihood gradient from the Sigma matrix expression instead of Qff. It needs to be merged after #27.
The `compute_likelihood_gradient` method constructs the `Qff` matrix and inverts it via QR decomposition, at a cost of `N_full^3`, while `compute_likelihood_stable` uses the `Sigma` matrix, which avoids `Qff` entirely.

I added the method `compute_likelihood_gradient_stable`, which computes the gradient of `compute_likelihood_stable`. Now the most expensive part of `compute_likelihood_gradient_stable` is the QR decomposition of the matrix `A` used to obtain `Sigma`, at a cost of `N_full * N_sparse^2`. Therefore, the speedup and the memory saving are both `(N_full / N_sparse)^2`.
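As a rough sketch of where the cost difference comes from (the shapes and names below are purely illustrative and do not follow the actual flare API), the stable route obtains `Sigma` from a QR factorization of a tall stacked matrix rather than forming and factorizing the `N_full x N_full` matrix `Qff`:

```python
import numpy as np

# Illustrative shapes and names only; the actual code builds these blocks
# from the training structures and the chosen kernel.
n_full, n_sparse = 4000, 200
rng = np.random.default_rng(0)
Kuf = rng.normal(size=(n_sparse, n_full))        # sparse-by-full kernel block
Kuu = Kuf @ Kuf.T / n_full + np.eye(n_sparse)    # SPD stand-in for Kuu
noise = np.full(n_full, 0.1)                     # per-label noise variances

# Stable route: QR of a stacked (N_full + N_sparse) x N_sparse matrix A.
# Since A^T A = Kuf diag(1/noise) Kfu + Kuu, the R factor gives
# Sigma = (Kuu + Kuf diag(1/noise) Kfu)^{-1} = R^{-1} R^{-T}
# without ever forming the N_full x N_full matrix Qff = Kfu Kuu^{-1} Kuf.
Luu = np.linalg.cholesky(Kuu)
A = np.vstack([Kuf.T / np.sqrt(noise)[:, None],  # N_full x N_sparse block
               Luu.T])                           # N_sparse x N_sparse block
_, R = np.linalg.qr(A)                           # cost ~ N_full * N_sparse^2
R_inv = np.linalg.inv(R)                         # small N_sparse x N_sparse inverse
Sigma = R_inv @ R_inv.T

# The direct route behind compute_likelihood_gradient instead builds and
# factorizes the dense N_full x N_full matrix Qff, which scales as N_full^3
# in time and N_full^2 in memory; the ratio of the two costs is
# (N_full / N_sparse)^2.
```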
Notes:

For the `NormalizedDotProduct` kernel, the computational cost can be reduced a bit further by using the fact that `Kuf_grad` differs from `Kuf` only by a constant factor. For this kernel, some intermediate results can be precomputed and stored, then reused repeatedly during training instead of being recomputed, so `precompute...` options were added to some functions. This part of the code might be reorganized later in a cleaner way; a minimal sketch of the idea is given after the timing note below.

Below is a timing report from Cameron (on the warm & melt Au dataset), where `tot_time` is the total time for hyperparameter optimization. Each frame selects 5 sparse envs. This comparison is not exact, because different training sizes take different numbers of optimization iterations, but the speedup is still significant.
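Regarding the `NormalizedDotProduct` note above, here is a minimal sketch of the precomputation idea (the function and variable names are hypothetical, not the actual flare API). Because the kernel is `sigma^2` times an environment-dependent factor, `Kuf_grad = (2 / sigma) * Kuf`, and the environment-dependent part can be built once and only rescaled at each optimization step:

```python
import numpy as np

def base_block(X1, X2, power=2):
    """sigma-independent part of the kernel block between rows of X1 and X2."""
    X1n = X1 / np.linalg.norm(X1, axis=1, keepdims=True)
    X2n = X2 / np.linalg.norm(X2, axis=1, keepdims=True)
    return (X1n @ X2n.T) ** power

rng = np.random.default_rng(0)
sparse_envs = rng.normal(size=(20, 8))   # N_sparse descriptors (illustrative)
full_envs = rng.normal(size=(500, 8))    # N_full descriptors (illustrative)

# Precompute once and store: the expensive, environment-dependent piece.
M_uf = base_block(sparse_envs, full_envs)

def kuf_and_grad(sigma):
    # Reused at every hyperparameter-optimization step: both the kernel block
    # and its gradient with respect to sigma are cheap rescalings of M_uf.
    Kuf = sigma**2 * M_uf
    Kuf_grad = 2.0 * sigma * M_uf        # equals (2 / sigma) * Kuf
    return Kuf, Kuf_grad
```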
Todo

The likelihood scales with `n_labels`; maybe we should normalize it with `n_labels`?