Use Manopt as the optimization package #105

Open · pbrehmer wants to merge 23 commits into master from pb-manopt

Commits (23)
5767ba3  Add rough draft (pbrehmer, Dec 13, 2024)
db9821a  Add new RecordActions and return info in leading_boundary (pbrehmer, Dec 16, 2024)
23b9e28  Add symmetrization (pbrehmer, Dec 17, 2024)
ce68d55  Remove OptimKit, fix PEPSOptimize constructors (pbrehmer, Dec 17, 2024)
1822104  Fix OhMyThreads precompilation warning (pbrehmer, Dec 17, 2024)
9e2b4d6  Update cost function cache (pbrehmer, Dec 18, 2024)
ca54e0b  Make fixedpoint runnable (pbrehmer, Dec 18, 2024)
d36b2ff  Make fixedpoint optimize (kind of) (pbrehmer, Dec 18, 2024)
bb6cdf1  Improve debug printing (pbrehmer, Dec 18, 2024)
443e58b  Add LineSearches, add docstrings (pbrehmer, Dec 19, 2024)
d0d42f6  Update Defaults and Defaults docstring (pbrehmer, Dec 19, 2024)
e38dbce  Adjust scripts to new `leading_boundary` return values (pbrehmer, Dec 19, 2024)
3b1deef  Fix precompilation and deepcopy(env) in fixedpoint (pbrehmer, Dec 20, 2024)
714eefb  Merge branch 'master' into pb-manopt (pbrehmer, Dec 20, 2024)
7832de5  Fix horrible typo in cost function (pbrehmer, Dec 20, 2024)
a8f113d  Merge branch 'pb-manopt' of github.com:QuantumKitHub/PEPSKit.jl into … (pbrehmer, Dec 20, 2024)
6d3f233  Improve optimization debug printing, add default stepsize (pbrehmer, Dec 20, 2024)
6b4a403  Fix tests (mostly), fix _condition_number (pbrehmer, Dec 20, 2024)
7925e8b  Merge branch 'master' into pb-manopt (pbrehmer, Jan 7, 2025)
8872d20  Break up peps_opt.jl file (pbrehmer, Jan 17, 2025)
b020eb4  Update example tests (pbrehmer, Jan 17, 2025)
ab3cc46  Merge branch 'master' into pb-manopt (pbrehmer, Jan 17, 2025)
131e49b  Fix formatting and sequential CTMRG (pbrehmer, Jan 17, 2025)
8 changes: 6 additions & 2 deletions Project.toml
@@ -9,12 +9,14 @@ ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
Compat = "34da2185-b29b-5c13-b0c7-acf172513d20"
FiniteDifferences = "26cc04aa-876d-5657-8c51-4c34ba976000"
KrylovKit = "0b1a1467-8014-51b9-945f-bf0ae24f4b77"
LineSearches = "d3d80556-e9d4-5f37-9878-2ab0fcc64255"
LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e"
LoggingExtras = "e6f89c97-d47a-5376-807f-9c37f3926c36"
MPSKit = "bb1c41ca-d63c-52ed-829e-0820dda26502"
MPSKitModels = "ca635005-6f8c-4cd1-b51d-8491250ef2ab"
Manifolds = "1cead3c2-87b3-11e9-0ccd-23c62b72b94e"
Manopt = "0fc0a36d-df90-57f3-8f93-d78a9fc72bb5"
OhMyThreads = "67456a42-1dca-4109-a031-0a68de7e3ad5"
OptimKit = "77e91f04-9b3b-57a6-a776-40b61faaebe0"
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
@@ -29,12 +31,14 @@ ChainRulesCore = "1.0"
Compat = "3.46, 4.2"
FiniteDifferences = "0.12"
KrylovKit = "0.8"
LineSearches = "7.3.0"
LinearAlgebra = "1"
LoggingExtras = "1"
MPSKit = "0.11"
MPSKitModels = "0.3.5"
Manifolds = "0.10.8"
Manopt = "0.5.4"
OhMyThreads = "0.7"
OptimKit = "0.3"
Printf = "1"
QuadGK = "2.11.1"
Random = "1"
2 changes: 1 addition & 1 deletion README.md
@@ -54,7 +54,7 @@ opt_alg = PEPSOptimize(;

# ground state search
state = InfinitePEPS(2, D)
ctm = leading_boundary(CTMRGEnv(state, ComplexSpace(chi)), state, ctm_alg)
ctm, = leading_boundary(CTMRGEnv(state, ComplexSpace(chi)), state, ctm_alg)
result = fixedpoint(state, H, opt_alg, ctm)

@show result.E # -0.6625...
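A note on the return-value change above (and the identical one in `docs/src/index.md` below): `leading_boundary` now returns the environment together with an info `NamedTuple` (see the `ctmrg.jl` diff further down), which the trailing comma in `ctm, = ...` discards. A minimal sketch of keeping both values; the algorithm construction here is assumed rather than copied from the README:

```julia
using TensorKit, PEPSKit

D, χenv = 2, 20
state = InfinitePEPS(2, D)
ctm_alg = SimultaneousCTMRG()  # assumed default construction

# New convention: environment plus diagnostics instead of just the environment
env, info = leading_boundary(CTMRGEnv(state, ComplexSpace(χenv)), state, ctm_alg)
@show info.truncation_error   # truncation error of the last CTMRG iteration
@show info.condition_number   # projector condition number of the last CTMRG iteration
```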
2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -36,7 +36,7 @@ opt_alg = PEPSOptimize(;

# ground state search
state = InfinitePEPS(2, D)
ctm = leading_boundary(CTMRGEnv(state, ComplexSpace(chi)), state, ctm_alg)
ctm, = leading_boundary(CTMRGEnv(state, ComplexSpace(chi)), state, ctm_alg)
result = fixedpoint(state, H, opt_alg, ctm)

@show result.E # -0.6625...
4 changes: 2 additions & 2 deletions examples/boundary_mps.jl
@@ -32,7 +32,7 @@ mps, envs, ϵ = leading_boundary(mps, T, VUMPS())
N = abs(prod(expectation_value(mps, T)))

# This can be compared to the result obtained using the CTMRG algorithm
ctm = leading_boundary(
ctm, = leading_boundary(
peps, SimultaneousCTMRG(; verbosity=1), CTMRGEnv(peps, ComplexSpace(20))
)
N´ = abs(norm(peps, ctm))
@@ -55,7 +55,7 @@ mps2 = PEPSKit.initializeMPS(T2, fill(ComplexSpace(20), 2, 2))
mps2, envs2, ϵ = leading_boundary(mps2, T2, VUMPS())
N2 = abs(prod(expectation_value(mps2, T2)))

ctm2 = leading_boundary(
ctm2, = leading_boundary(
peps2, SimultaneousCTMRG(; verbosity=1), CTMRGEnv(peps2, ComplexSpace(20))
)
N2´ = abs(norm(peps2, ctm2))
6 changes: 3 additions & 3 deletions examples/heisenberg.jl
@@ -1,6 +1,6 @@
using LinearAlgebra
using TensorKit, OptimKit
using PEPSKit, KrylovKit
using TensorKit, KrylovKit, PEPSKit
using Manopt

# Square lattice Heisenberg Hamiltonian
# We use the parameters (J₁, J₂, J₃) = (-1, 1, -1) by default to capture
@@ -26,6 +26,6 @@ opt_alg = PEPSOptimize(;
# E/N = −0.6694421, which is a QMC estimate from https://arxiv.org/abs/1101.3281.
# Of course there is a noticeable bias for small χbond and χenv.
ψ₀ = InfinitePEPS(2, χbond)
env₀ = leading_boundary(CTMRGEnv(ψ₀, ℂ^χenv), ψ₀, ctm_alg)
env₀, = leading_boundary(CTMRGEnv(ψ₀, ℂ^χenv), ψ₀, ctm_alg)
result = fixedpoint(ψ₀, H, opt_alg, env₀)
@show result.E
2 changes: 1 addition & 1 deletion examples/hubbard_su.jl
@@ -43,7 +43,7 @@ Espace = Vect[fℤ₂](0 => χenv0 / 2, 1 => χenv0 / 2)
envs = CTMRGEnv(randn, Float64, peps, Espace)
for χ in [χenv0, χenv]
ctm_alg = SequentialCTMRG(; maxiter=300, tol=1e-7)
envs = leading_boundary(envs, peps, ctm_alg)
envs, = leading_boundary(envs, peps, ctm_alg)
end

# Benchmark values of the ground state energy from
169 changes: 119 additions & 50 deletions src/PEPSKit.jl
@@ -5,7 +5,8 @@ using Base: @kwdef
using Compat
using Accessors: @set
using VectorInterface
using TensorKit, KrylovKit, MPSKit, OptimKit, TensorOperations
using TensorKit, KrylovKit, MPSKit, TensorOperations
using TensorKit: ℂ, ℝ # To avoid conflict with Manifolds
using ChainRulesCore, Zygote
using LoggingExtras
using MPSKit: loginit!, logiter!, logfinish!, logcancel!
@@ -56,87 +57,153 @@ include("algorithms/time_evolution/simpleupdate.jl")

include("algorithms/toolbox.jl")

include("algorithms/peps_opt.jl")

include("utility/symmetrization.jl")

include("algorithms/optimization/fixed_point_differentiation.jl")
include("algorithms/optimization/manopt.jl")
include("algorithms/optimization/peps_optimization.jl")

"""
module Defaults
const ctmrg_maxiter = 100
const ctmrg_miniter = 4
const ctmrg_tol = 1e-8
const fpgrad_maxiter = 30
const fpgrad_tol = 1e-6
const reuse_env = true
const trscheme = FixedSpaceTruncation()
const fwd_alg = TensorKit.SDD()
const rrule_alg = Arnoldi(; tol=1e-2fpgrad_tol, krylovdim=48, verbosity=-1)
const svd_alg = SVDAdjoint(; fwd_alg, rrule_alg)
const projector_alg_type = HalfInfiniteProjector
const projector_alg = projector_alg_type(svd_alg, trscheme, 2)
const ctmrg_alg = SimultaneousCTMRG(
ctmrg_tol, ctmrg_maxiter, ctmrg_miniter, 2, projector_alg
)
const optimizer = LBFGS(32; maxiter=100, gradtol=1e-4, verbosity=2)
const gradient_linsolver = KrylovKit.BiCGStab(;
maxiter=Defaults.fpgrad_maxiter, tol=Defaults.fpgrad_tol
)
const iterscheme = :fixed
const gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme)
const scheduler = Ref{Scheduler}(Threads.nthreads() == 1 ? SerialScheduler() : DynamicScheduler())
end

Module containing default values that represent typical algorithm parameters.
Module containing default algorithm parameter values and arguments.

- `ctmrg_maxiter`: Maximal number of CTMRG iterations per run
- `ctmrg_miniter`: Minimal number of CTMRG carried out
- `ctmrg_tol`: Tolerance checking singular value and norm convergence
- `fpgrad_maxiter`: Maximal number of iterations for computing the CTMRG fixed-point gradient
- `fpgrad_tol`: Convergence tolerance for the fixed-point gradient iteration
- `reuse_env`: If `true`, the current optimization step is initialized on the previous environment
- `trscheme`: Truncation scheme for SVDs and other decompositions
- `fwd_alg`: SVD algorithm that is used in the forward pass
# CTMRG
- `ctmrg_tol=1e-8`: Tolerance checking singular value and norm convergence
- `ctmrg_maxiter=100`: Maximal number of CTMRG iterations per run
- `ctmrg_miniter=4`: Minimal number of CTMRG iterations carried out
- `trscheme=FixedSpaceTruncation()`: Truncation scheme for SVDs and other decompositions
- `fwd_alg=TensorKit.SDD()`: SVD algorithm that is used in the forward pass
- `rrule_alg`: Reverse-rule for differentiating that SVD
- `svd_alg`: Combination of `fwd_alg` and `rrule_alg`
- `projector_alg_type`: Default type of projector algorithm

```
rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
```

- `svd_alg=SVDAdjoint(; fwd_alg, rrule_alg)`: Combination of `fwd_alg` and `rrule_alg`
- `projector_alg_type=HalfInfiniteProjector`: Default type of projector algorithm
- `projector_alg`: Algorithm to compute CTMRG projectors

```
projector_alg = projector_alg_type(; svd_alg, trscheme, verbosity=0)
```

- `ctmrg_alg`: Algorithm for performing CTMRG runs
- `optimizer`: Optimization algorithm for PEPS ground-state optimization

```
ctmrg_alg = SimultaneousCTMRG(
ctmrg_tol, ctmrg_maxiter, ctmrg_miniter, 2, projector_alg
)
```

# Optimization
- `optim_alg=quasi_Newton`: Manopt optimizer function
- `optim_maxiter=100`: Maximal number of optimization iterations
- `optim_tol=1e-4`: Gradient norm convergence tolerance
- `stopping_criterion`: Manopt stopping criterion

```
stopping_criterion = StopAfterIteration(optim_maxiter) | StopWhenGradientNormLess(optim_tol)
```

- `record_group`: Values (`RecordAction`s) that are recorded during Manopt optimization

```
record_group = [
RecordCost() => :Cost,
RecordGradientNorm() => :GradientNorm,
RecordTruncationError() => :TruncationError,
RecordConditionNumber() => :ConditionNumber,
RecordUnitCellGradientNorm() => :UnitcellGradientNorm,
RecordTime(; mode=:iterative) => :Time,
]
```

- `debug_group = [DebugPEPSOptimize(), :Stop]`: Optimization iteration info printing
- `stepsize = AdaptiveWNGradient()`: Step size rule passed to the Manopt optimizer

- `optim_kwargs`: All keyword arguments that are passed onto a Manopt optimization call

```
optim_kwargs = (;
stopping_criterion, record=record_group, debug=debug_group, stepsize
)
```

- `fpgrad_maxiter=30`: Maximal number of iterations for computing the CTMRG fixed-point gradient
- `fpgrad_tol=1e-6`: Convergence tolerance for the fixed-point gradient iteration
- `iterscheme=:fixed`: Scheme for differentiating one CTMRG iteration
- `gradient_linsolver`: Default linear solver for the `LinSolver` gradient algorithm
- `iterscheme`: Scheme for differentiating one CTMRG iteration

```
gradient_linsolver=KrylovKit.BiCGStab(; maxiter=fpgrad_maxiter, tol=fpgrad_tol)
```

- `gradient_alg`: Algorithm to compute the gradient fixed-point
- `scheduler`: Multi-threading scheduler which can be accessed via `set_scheduler!`

```
gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme)
```

- `reuse_env=true`: If `true`, the current optimization step is initialized on the previous environment

# OhMyThreads scheduler
- `scheduler=Ref{Scheduler}(...)`: Multi-threading scheduler which can be accessed via `set_scheduler!`
"""
module Defaults
using TensorKit, KrylovKit, OptimKit, OhMyThreads
using TensorKit, KrylovKit, OhMyThreads
using Manopt
using PEPSKit:
LinSolver,
FixedSpaceTruncation,
SVDAdjoint,
HalfInfiniteProjector,
SimultaneousCTMRG
SimultaneousCTMRG,
RecordTruncationError,
RecordConditionNumber,
RecordUnitCellGradientNorm,
DebugPEPSOptimize

# CTMRG
const ctmrg_tol = 1e-8
const ctmrg_maxiter = 100
const ctmrg_miniter = 4
const fpgrad_maxiter = 30
const fpgrad_tol = 1e-6
const sparse = false
const reuse_env = true
const trscheme = FixedSpaceTruncation()
const fwd_alg = TensorKit.SDD()
const rrule_alg = Arnoldi(; tol=1e-2fpgrad_tol, krylovdim=48, verbosity=-1)
const rrule_alg = Arnoldi(; tol=ctmrg_tol, krylovdim=48, verbosity=-1)
const svd_alg = SVDAdjoint(; fwd_alg, rrule_alg)
const projector_alg_type = HalfInfiniteProjector
const projector_alg = projector_alg_type(svd_alg, trscheme, 2)
const projector_alg = projector_alg_type(; svd_alg, trscheme, verbosity=0)
const ctmrg_alg = SimultaneousCTMRG(
ctmrg_tol, ctmrg_maxiter, ctmrg_miniter, 2, projector_alg
)
const optimizer = LBFGS(32; maxiter=100, gradtol=1e-4, verbosity=2)
const gradient_linsolver = KrylovKit.BiCGStab(;
maxiter=Defaults.fpgrad_maxiter, tol=Defaults.fpgrad_tol

# Optimization
const optim_alg = quasi_Newton
const optim_maxiter = 100
const optim_tol = 1e-4
const stopping_criterion =
StopAfterIteration(optim_maxiter) | StopWhenGradientNormLess(optim_tol)
const record_group = [
RecordCost() => :Cost,
RecordGradientNorm() => :GradientNorm,
RecordTruncationError() => :TruncationError,
RecordConditionNumber() => :ConditionNumber,
RecordUnitCellGradientNorm() => :UnitcellGradientNorm,
RecordTime(; mode=:iterative) => :Time,
]
const debug_group = [DebugPEPSOptimize(), :Stop]
const stepsize = AdaptiveWNGradient()
const optim_kwargs = (;
stopping_criterion, record=record_group, debug=debug_group, stepsize
)
const fpgrad_maxiter = 30
const fpgrad_tol = 1e-6
const gradient_linsolver = KrylovKit.BiCGStab(; maxiter=fpgrad_maxiter, tol=fpgrad_tol)
const iterscheme = :fixed
const gradient_alg = LinSolver(; solver=gradient_linsolver, iterscheme)
const reuse_env = true

# OhMyThreads scheduler defaults
const scheduler = Ref{Scheduler}()
@@ -192,6 +259,8 @@ export LocalOperator
export expectation_value, costfun, product_peps, correlation_length
export leading_boundary
export PEPSOptimize, GeomSum, ManualIter, LinSolver
export RecordTruncationError, RecordConditionNumber, RecordUnitCellGradientNorm
export DebugPEPSOptimize
export fixedpoint

export absorb_weight
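To make the new optimization defaults concrete, here is a hedged sketch of how the `stopping_criterion`, `record`, `debug`, and `stepsize` entries documented above feed into a plain Manopt `quasi_Newton` call. The manifold, cost, and gradient are toy placeholders; the actual PEPS cost function and call site live in `peps_optimization.jl`, which is not part of this excerpt. PEPSKit's custom `RecordTruncationError`, `RecordConditionNumber`, and `DebugPEPSOptimize` actions slot into `record` and `debug` in the same way as the stock Manopt actions shown here.

```julia
using Manifolds, Manopt

# Toy stand-in problem: minimize a weighted quadratic on the sphere
M = Sphere(2)
w = [1.0, 2.0, 3.0]
f(M, p) = sum(w .* p .^ 2)
grad_f(M, p) = project(M, p, 2 .* w .* p)  # Riemannian gradient via tangent projection

stopping_criterion = StopAfterIteration(100) | StopWhenGradientNormLess(1e-4)
record = [RecordCost() => :Cost, RecordGradientNorm() => :GradientNorm]
debug = [:Iteration, :Cost, "\n", :Stop]

st = quasi_Newton(
    M, f, grad_f, rand(M);
    stopping_criterion, record, debug,
    stepsize=AdaptiveWNGradient(),
    return_state=true,
)
p_opt = get_solver_result(st)   # optimized point on M
history = get_record(st)        # recorded (Cost, GradientNorm) values per iteration
```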
8 changes: 6 additions & 2 deletions src/algorithms/ctmrg/ctmrg.jl
@@ -38,10 +38,14 @@ function MPSKit.leading_boundary(envinit, state, alg::CTMRGAlgorithm)
env = deepcopy(envinit)
log = ignore_derivatives(() -> MPSKit.IterLog("CTMRG"))

truncation_error = 0.0
condition_number = 1.0
return LoggingExtras.withlevel(; alg.verbosity) do
ctmrg_loginit!(log, η, state, envinit)
for iter in 1:(alg.maxiter)
env, = ctmrg_iteration(state, env, alg) # Grow and renormalize in all 4 directions
env, info = ctmrg_iteration(state, env, alg) # Grow and renormalize in all 4 directions
truncation_error = info.truncation_error
condition_number = info.condition_number
η, CS, TS = calc_convergence(env, CS, TS)

if η ≤ alg.tol && iter ≥ alg.miniter
@@ -54,7 +58,7 @@ function MPSKit.leading_boundary(envinit, state, alg::CTMRGAlgorithm)
ctmrg_logiter!(log, iter, η, state, env)
end
end
return env
return env, (; truncation_error, condition_number)
end
end

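The diff above threads per-iteration diagnostics out of the CTMRG convergence loop instead of discarding them. A stripped-down sketch of that pattern (illustrative only; `ctmrg_iteration` is a PEPSKit-internal function shown in these diffs, and the real loop also handles logging and the convergence check):

```julia
using PEPSKit

# Sketch of the new loop shape, not the actual PEPSKit implementation
function run_ctmrg_sketch(state, env, alg; maxiter=100)
    truncation_error = 0.0
    condition_number = 1.0
    for iter in 1:maxiter
        # ctmrg_iteration now returns the environment together with an info NamedTuple
        env, info = PEPSKit.ctmrg_iteration(state, env, alg)
        truncation_error = info.truncation_error   # keep diagnostics of the latest sweep
        condition_number = info.condition_number
        # ... singular-value/norm convergence check omitted ...
    end
    return env, (; truncation_error, condition_number)
end
```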
23 changes: 16 additions & 7 deletions src/algorithms/ctmrg/sequential.jl
@@ -31,18 +31,20 @@
end

function ctmrg_iteration(state, envs::CTMRGEnv, alg::SequentialCTMRG)
ϵ = zero(real(scalartype(state)))
truncation_error = zero(real(scalartype(state)))
condition_number = zero(real(scalartype(state)))

for _ in 1:4 # rotate
for col in 1:size(state, 2) # left move column-wise
projectors, info = sequential_projectors(col, state, envs, alg.projector_alg)
envs = renormalize_sequentially(col, projectors, state, envs)
ϵ = max(ϵ, info.err)
truncation_error = max(truncation_error, info.truncation_error)
condition_number = max(condition_number, info.condition_number)

end
state = rotate_north(state, EAST)
envs = rotate_north(envs, EAST)
end

return envs, (; err=ϵ)
return envs, (; truncation_error, condition_number)

end

"""
@@ -53,21 +55,28 @@
for a specific `coordinate` (where `dir=WEST` is already implied in the `:sequential` scheme).
"""
function sequential_projectors(
col::Int, state::InfiniteSquareNetwork, envs::CTMRGEnv, alg::ProjectorAlgorithm
)
col::Int, state::InfiniteSquareNetwork{T}, envs::CTMRGEnv, alg::ProjectorAlgorithm
) where {T}
# SVD half-infinite environment column-wise
ϵ = Zygote.Buffer(zeros(real(scalartype(envs)), size(envs, 2)))
ϵ = Zygote.Buffer(zeros(real(scalartype(T)), size(envs, 2)))
S = Zygote.Buffer(

zeros(size(envs, 2)), tensormaptype(spacetype(T), 1, 1, real(scalartype(T)))
)
coordinates = eachcoordinate(envs)[:, col]
projectors = dtmap(coordinates) do (r, c)
trscheme = truncation_scheme(alg, envs.edges[WEST, _prev(r, size(envs, 2)), c])
proj, info = sequential_projectors(
(WEST, r, c), state, envs, @set(alg.trscheme = trscheme)
)
ϵ[r] = info.err / norm(info.S)
S[r] = info.S

return proj
end

return (map(first, projectors), map(last, projectors)), (; err=maximum(copy(ϵ)))
truncation_error = maximum(copy(ϵ))
condition_number = maximum(_condition_number, copy(S))
info = (; truncation_error, condition_number)
return (map(first, projectors), map(last, projectors)), info

end
function sequential_projectors(
coordinate::NTuple{3,Int},
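The sequential projector code above reduces the recorded singular-value spectra `S` with an internal `_condition_number` helper whose definition is not part of this excerpt. A plausible sketch, assuming it returns the ratio of the largest to the smallest retained singular value across all symmetry blocks:

```julia
using TensorKit, LinearAlgebra

# Hypothetical stand-in for PEPSKit's internal `_condition_number`
function condition_number_sketch(S::AbstractTensorMap)
    σ = mapreduce(vcat, blocks(S)) do (c, b)
        diag(b)  # each symmetry block stores its singular values on the diagonal
    end
    return maximum(σ) / minimum(σ)
end

# Mirrors the aggregation in the diff: condition_number = maximum(_condition_number, copy(S))
```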