Merged
1 change: 1 addition & 0 deletions docs/make.jl
@@ -28,6 +28,7 @@ links = InterLinks(
joinpath(@__DIR__, "src", "inventories", "TimerOutputs.toml")
),
"QuantumPropagators" => "https://juliaquantumcontrol.github.io/QuantumPropagators.jl/$DEV_OR_STABLE",
"QuantumGradientGenerators" => "https://juliaquantumcontrol.github.io/QuantumGradientGenerators.jl/$DEV_OR_STABLE",
"QuantumControl" => "https://juliaquantumcontrol.github.io/QuantumControl.jl/$DEV_OR_STABLE",
"GRAPE" => "https://juliaquantumcontrol.github.io/GRAPE.jl/$DEV_OR_STABLE",
"Examples" => "https://juliaquantumcontrol.github.io/QuantumControlExamples.jl/$DEV_OR_STABLE",
10 changes: 10 additions & 0 deletions docs/src/refs.bib
@@ -231,3 +231,13 @@ @article{MachnesPRL2018
Pages = {150401},
Volume = {120},
}

@article{GoerzNJP2014,
Author = {Goerz, Michael H. and Reich, Daniel M. and Koch, Christiane P.},
Title = {Optimal control theory for a unitary operation under dissipative evolution},
Journal = njp,
Year = {2014},
Doi = {10.1088/1367-2630/16/5/055012},
Pages = {055012},
Volume = {16},
}
85 changes: 47 additions & 38 deletions src/optimize.jl
@@ -1,10 +1,10 @@
using QuantumControlBase.QuantumPropagators.Generators: Operator
using QuantumControlBase.QuantumPropagators.Controls: evaluate, evaluate!
using QuantumControlBase.QuantumPropagators: prop_step!, set_state!, reinit_prop!
using QuantumControlBase.QuantumPropagators: prop_step!, set_state!, reinit_prop!, propagate
using QuantumControlBase.QuantumPropagators.Storage: write_to_storage!, get_from_storage!
using QuantumGradientGenerators: resetgradvec!
using QuantumControlBase: make_chi, make_grad_J_a, set_atexit_save_optimization
using QuantumControlBase: @threadsif
using QuantumControlBase: @threadsif, Trajectory
using LinearAlgebra
using Printf

@@ -15,9 +15,8 @@ import QuantumControlBase: optimize
result = optimize(problem; method=:GRAPE, kwargs...)
```

optimizes the given
control [`problem`](@ref QuantumControlBase.ControlProblem) via the GRAPE
method, by minimizing the functional
optimizes the given control [`problem`](@ref QuantumControlBase.ControlProblem)
via the GRAPE method, by minimizing the functional

```math
J(\{ϵ_{ln}\}) = J_T(\{|ϕ_k(T)⟩\}) + λ_a J_a(\{ϵ_{ln}\})
@@ -31,19 +30,21 @@ the time grid.
Returns a [`GrapeResult`](@ref).

Keyword arguments that control the optimization are taken from the keyword
arguments used in the instantiation of `problem`.
arguments used in the instantiation of `problem`; any of these can be overridden
with explicit keyword arguments to `optimize`.


# Required problem keyword arguments

* `J_T`: A function `J_T(ϕ, objectives; τ=τ)` that evaluates the final time
* `J_T`: A function `J_T(ϕ, trajectories; τ=τ)` that evaluates the final time
functional from a vector `ϕ` of forward-propagated states and
`problem.objectives`. For all `objectives` that define a `target_state`, the
element `τₖ` of the vector `τ` will contain the overlap of the state `ϕₖ`
with the `target_state` of the `k`'th objective, or `NaN` otherwise.
`problem.trajectories`. For all `trajectories` that define a `target_state`,
the element `τₖ` of the vector `τ` will contain the overlap of the state `ϕₖ`
with the `target_state` of the `k`'th trajectory, or `NaN` otherwise.
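
As an illustration (not part of the diff), a functional matching this documented signature could be sketched as follows; the name `J_T_sm` and the specific square-modulus form are assumptions for the example, not part of the API being changed here:

```julia
using LinearAlgebra: ⋅

# Hypothetical sketch of a final-time functional with the documented
# signature: a square-modulus fidelity built from the overlaps τₖ.
function J_T_sm(ϕ, trajectories; τ=nothing)
    if isnothing(τ)
        τ = [traj.target_state ⋅ ϕₖ for (traj, ϕₖ) in zip(trajectories, ϕ)]
    end
    N = length(τ)
    return 1.0 - abs2(sum(τ)) / N^2  # J_T = 1 - |Σₖ τₖ / N|²
end
```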

# Optional problem keyword arguments

* `chi`: A function `chi!(χ, ϕ, objectives)` what receives a list `ϕ`
* `chi`: A function `chi!(χ, ϕ, trajectories)` that receives a list `ϕ`
of the forward propagated states and must set ``|χₖ⟩ = -∂J_T/∂⟨ϕₖ|``. If not
given, it will be automatically determined from `J_T` via [`make_chi`](@ref)
with the default parameters.
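
As a sketch (assuming the square-modulus functional ``J_T = 1 - |Σₖ τₖ/N|²``; the name `chi_sm!` is hypothetical and not part of this diff), such a function could look like:

```julia
using LinearAlgebra: lmul!, ⋅

# For J_T = 1 - |Σⱼ τⱼ / N|², one has |χₖ⟩ = -∂J_T/∂⟨ϕₖ| = (Σⱼ τⱼ / N²) |ϕₖᵗᵍᵗ⟩.
function chi_sm!(χ, ϕ, trajectories; τ=nothing)
    if isnothing(τ)
        τ = [traj.target_state ⋅ ϕₖ for (traj, ϕₖ) in zip(trajectories, ϕ)]
    end
    N = length(trajectories)
    for (k, traj) in enumerate(trajectories)
        copyto!(χ[k], traj.target_state)
        lmul!(sum(τ) / N^2, χ[k])  # scale the target state by Σⱼ τⱼ / N²
    end
end
```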
@@ -81,7 +82,7 @@ arguments used in the instantiation of `problem`.
* `pulse_options`: A dictionary that maps every control (as obtained by
[`get_controls`](@ref
QuantumControlBase.QuantumPropagators.Controls.get_controls) from the
`problem.objectives`) to a dict with the following possible keys:
`problem.trajectories`) to a dict with the following possible keys:

- `:upper_bounds`: A vector of upper bound values, one for each interval of
the time grid. Values of `Inf` indicate an unconstrained upper bound for
@@ -112,26 +113,34 @@ arguments used in the instantiation of `problem`.
* `optimizer`: An optional Optim.jl optimizer (`Optim.AbstractOptimizer`
instance). If not given, an [L-BFGS-B](https://github.com/Gnimuc/LBFGSB.jl)
optimizer will be used.
* `prop_method`/`fw_prop_method`/`bw_prop_method`: The propagation method to
use for each objective, see below.
* `prop_method`/`fw_prop_method`/`grad_prop_method`: The propagation method to
use for the extended gradient vector for each objective, see below.
* `prop_method`: The propagation method to use for each trajectory, see below.
* `verbose=false`: If `true`, print information during initialization

The propagation method for the forward propagation of each objective is
determined by the first available item of the following:
# Trajectory propagation

GRAPE may involve three types of propagation:

* A forward propagation for every [`Trajectory`](@ref) in the `problem`
* A backward propagation for every trajectory
* A backward propagation of a
[gradient generator](@extref QuantumGradientGenerators.GradGenerator)
for every trajectory.

The keyword arguments for each propagation (see [`propagate`](@ref)) are
determined from any properties of each [`Trajectory`](@ref) that have a `prop_`
prefix, cf. [`init_prop_trajectory`](@ref).

* a `fw_prop_method` keyword argument
* a `prop_method` keyword argument
* a property `fw_prop_method` of the objective
* a property `prop_method` of the objective
* the value `:auto`
In situations where different parameters are required for the forward and
backward propagation, instead of the `prop_` prefix, the `fw_prop_` and
`bw_prop_` prefix can be used, respectively. These override any setting with
the `prop_` prefix. Similarly, properties for the backward propagation of the
gradient generators can be set with properties that have a `grad_prop_` prefix.
These prefixes apply both to the properties of each [`Trajectory`](@ref) and
the problem keyword arguments.

The propagation method for the backward propagation is determined similarly,
but with `bw_prop_method` instead of `fw_prop_method`. The propagation method
for the backward propagation of the extended gradient vector for each objective
is determined from `grad_prop_method`, `fw_prop_method`, `prop_method` in order
of precedence.
Note that the propagation method for each propagation must be specified. In
most cases, it is sufficient (and recommended) to pass a global `prop_method`
problem keyword argument.
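
To sketch how this might look in user code (a hypothetical setup; `Cheby`, `ControlProblem`, and `Trajectory` follow the naming of the QuantumControl ecosystem, but the exact constructor signatures shown here are assumptions):

```julia
using QuantumControl: ControlProblem, Trajectory
using QuantumPropagators: Cheby

# A single global `prop_method` covers the forward, backward, and gradient
# propagations; prefixed keywords would override it selectively.
problem = ControlProblem(
    [Trajectory(Ψ₀, H; target_state=Ψtgt)],
    tlist;
    J_T=my_J_T,            # user-defined final-time functional
    prop_method=Cheby,     # default for all propagations
    bw_prop_method=Cheby,  # override for the backward propagation only
)
```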
"""
optimize(problem, method::Val{:GRAPE}) = optimize_grape(problem)
optimize(problem, method::Val{:grape}) = optimize_grape(problem)
@@ -156,10 +165,10 @@ function optimize_grape(problem)
wrk = GrapeWrk(problem; verbose)

χ = wrk.chi_states
Ψ₀ = [obj.initial_state for obj ∈ wrk.objectives]
Ψ₀ = [traj.initial_state for traj ∈ wrk.trajectories]
Ψtgt = Union{eltype(Ψ₀),Nothing}[
(hasproperty(obj, :target_state) ? obj.target_state : nothing) for
obj ∈ wrk.objectives
(hasproperty(traj, :target_state) ? traj.target_state : nothing) for
traj ∈ wrk.trajectories
]

J = wrk.J_parts
@@ -173,7 +182,7 @@ function optimize_grape(problem)
chi! = wrk.kwargs[:chi]
else
# we only want to evaluate `make_chi` if `chi` is not a kwarg
chi! = make_chi(J_T_func, wrk.objectives)
chi! = make_chi(J_T_func, wrk.trajectories)
end
grad_J_a! = nothing
if !isnothing(J_a_func)
@@ -183,7 +192,7 @@ function optimize_grape(problem)
τ = wrk.result.tau_vals
∇τ = wrk.tau_grads
N_T = length(tlist) - 1
N = length(wrk.objectives)
N = length(wrk.trajectories)
L = length(wrk.controls)
Φ = wrk.fw_storage

@@ -215,7 +224,7 @@ function optimize_grape(problem)
τ[k] = isnothing(Ψtgt[k]) ? NaN : (Ψtgt[k] ⋅ Ψₖ)
end
Ψ = [p.state for p ∈ wrk.fw_propagators]
J[1] = J_T_func(Ψ, wrk.objectives; τ=τ)
J[1] = J_T_func(Ψ, wrk.trajectories; τ=τ)
if !isnothing(J_a_func)
J[2] = λₐ * J_a_func(pulsevals, tlist)
end
@@ -239,7 +248,7 @@ function optimize_grape(problem)

# backward propagation of combined χ-state and gradient
Ψ = [p.state for p ∈ wrk.fw_propagators]
chi!(χ, Ψ, wrk.objectives; τ=τ) # τ from f(...)
chi!(χ, Ψ, wrk.trajectories; τ=τ) # τ from f(...)
@threadsif wrk.use_threads for k = 1:N
local Ψₖ = wrk.fw_propagators[k].state
local χ̃ₖ = wrk.bw_grad_propagators[k].state
@@ -283,12 +292,12 @@ function optimize_grape(problem)

# backward propagation of χ-state
Ψ = [p.state for p ∈ wrk.fw_propagators]
chi!(χ, Ψ, wrk.objectives; τ=τ) # τ from f(...)
chi!(χ, Ψ, wrk.trajectories; τ=τ) # τ from f(...)
@threadsif wrk.use_threads for k = 1:N
local Ψₖ = wrk.fw_propagators[k].state
reinit_prop!(wrk.bw_propagators[k], χ[k]; transform_control_ranges)
local χₖ = wrk.bw_propagators[k].state
local Hₖ⁺ = wrk.adjoint_objectives[k].generator
local Hₖ⁺ = wrk.adjoint_trajectories[k].generator
local Hₖₙ⁺ = wrk.taylor_genops[k]
for n = N_T:-1:1 # N_T is the number of time slices
# TODO: It would be cleaner to encapsulate this in a
@@ -491,7 +500,7 @@ end
# minus sign in front of the derivative, compensated by the minus sign in the
# factor ``(-2)`` of the final ``(∇J_T)_{ln}``.
function _grad_J_T_via_chi!(∇J_T, τ, ∇τ)
N = length(τ) # number of objectives
N = length(τ) # number of trajectories
L, N_T = size(∇τ[1]) # number of controls/time intervals
∇J_T′ = reshape(∇J_T, L, N_T) # writing to ∇J_T′ modifies ∇J_T
for l = 1:L
8 changes: 4 additions & 4 deletions src/result.jl
@@ -25,17 +25,17 @@ mutable struct GrapeResult{STST}

function GrapeResult(problem)
tlist = problem.tlist
controls = get_controls(problem.objectives)
controls = get_controls(problem.trajectories)
iter_start = get(problem.kwargs, :iter_start, 0)
iter_stop = get(problem.kwargs, :iter_stop, 5000)
iter = iter_start
secs = 0
tau_vals = zeros(ComplexF64, length(problem.objectives))
tau_vals = zeros(ComplexF64, length(problem.trajectories))
guess_controls = [discretize(control, tlist) for control in controls]
J_T = 0.0
J_T_prev = 0.0
optimized_controls = [copy(guess) for guess in guess_controls]
states = [similar(obj.initial_state) for obj in problem.objectives]
states = [similar(traj.initial_state) for traj in problem.trajectories]
start_local_time = now()
end_local_time = now()
records = Vector{Tuple}()
@@ -74,7 +74,7 @@ Base.show(io::IO, ::MIME"text/plain", r::GrapeResult) = print(
GRAPE Optimization Result
-------------------------
- Started at $(r.start_local_time)
- Number of objectives: $(length(r.states))
- Number of trajectories: $(length(r.states))
- Number of iterations: $(max(r.iter - r.iter_start, 0))
- Number of pure func evals: $(r.f_calls)
- Number of func/grad evals: $(r.fg_calls)