Pre-conditioning CG #12

Open
balbasty opened this issue Dec 21, 2020 · 0 comments
Labels: enhancement (New feature or request)

Comments

@balbasty
Collaborator

Hi Mikael,

I think you can get CG to converge dramatically faster by using a relatively simple preconditioner.
From what I understand, CG is essentially gradient descent, except that you never descend along a direction you have already visited (the current gradient is projected onto the subspace conjugate to all previous search directions). This means that without preconditioning the steps can be very tiny, and preconditioning can be seen as using an approximation of the Hessian to take larger (better) steps.

I found that a diagonal preconditioner that is more positive definite than the true Hessian works very well. In your case, the Hessian (the system that you are solving) is t*A'A + lam*D'D, so your preconditioner can be inv(t*diag(A'A*1) + lam*diag(D'D)). Since D'D is just a convolution, its diagonal is constant and equal to the central weight of the membrane-energy kernel, sum(2 / vx ** 2) (see the Dartel paper).
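
To make this concrete, here is a rough NumPy sketch of building that diagonal preconditioner (the names `make_preconditioner` and `AtA` are just for illustration, not from your code):

```python
import numpy as np

def make_preconditioner(AtA, t, lam, vx, shape):
    """Build M^{-1} for the Hessian H = t*A'A + lam*D'D as a callable.

    AtA   : callable applying A'A to an image (illustrative stand-in)
    t     : data-term weight
    lam   : regularisation weight
    vx    : voxel size, e.g. (1., 1., 1.)
    shape : image shape
    """
    vx = np.asarray(vx, dtype=float)
    diag_AtA = AtA(np.ones(shape))          # diag(A'A) ~ A'A applied to a field of ones
    diag_DtD = np.sum(2.0 / vx ** 2)        # constant central weight of the membrane kernel
    diag_H = t * diag_AtA + lam * diag_DtD  # diagonal approximation of the Hessian
    return lambda r: r / diag_H             # applying M^{-1} is a voxel-wise division
```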

So at each step, preconditioning just means dividing voxel-wise by t*(A'A*1) + lam*sum(2 / vx ** 2).
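
For reference, a textbook preconditioned CG loop would look something like this (again just a sketch, assuming `H` is a callable applying the full t*A'A + lam*D'D):

```python
def pcg(H, b, precond, x0=None, max_iter=32, tol=1e-6):
    """Solve H x = b with preconditioned conjugate gradient.

    H       : callable applying the full Hessian t*A'A + lam*D'D
    precond : callable applying M^{-1} (the voxel-wise division above)
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - H(x)                            # residual = negative gradient
    z = precond(r)                          # preconditioned residual
    p = z.copy()                            # first search direction
    rz = np.vdot(r, z)
    bnorm = np.linalg.norm(b)
    for _ in range(max_iter):
        Hp = H(p)
        alpha = rz / np.vdot(p, Hp)         # exact step length along p
        x += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) <= tol * bnorm:
            break
        z = precond(r)
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz) * p           # keep directions H-conjugate
        rz = rz_new
    return x
```

With `precond = lambda r: r` this reduces to plain CG, so it is easy to compare the two and check the speed-up directly.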

Happy Christmas!

balbasty added the enhancement label on Dec 21, 2020