A PyTorch toolbox for solving learning tasks with neural ODEs. An important feature of this toolbox is its support for time-dependent weights (controls) and for costs involving integrals of the state. More generally, it offers flexibility in the choice of cost functionals and weight penalties.
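To illustrate what time-dependent weights and integral state costs mean in the discrete setting, here is a self-contained sketch in plain PyTorch (it does not use this toolbox's API; all names are illustrative): a forward Euler discretization of a neural ODE in which each time step carries its own weight matrix, together with a Riemann-sum approximation of an integral-of-the-state cost.

```python
import torch
import torch.nn as nn

class TimeDependentODENet(nn.Module):
    """Forward Euler discretization of x'(t) = tanh(W(t) x + b(t)),
    with one (W_k, b_k) pair per time step, i.e. time-dependent controls."""
    def __init__(self, dim, T=10.0, time_steps=20):
        super().__init__()
        self.dt = T / time_steps
        # One linear layer per time step: the discretized control u_k = (W_k, b_k).
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(time_steps))

    def forward(self, x):
        states = [x]
        for layer in self.layers:
            x = x + self.dt * torch.tanh(layer(x))  # Euler step
            states.append(x)
        return x, states

def integral_state_cost(states, dt):
    # Riemann-sum approximation of the integral of ||x(t)||^2 over [0, T]
    return dt * sum((s ** 2).sum(dim=1).mean() for s in states[:-1])

net = TimeDependentODENet(dim=2, T=10.0, time_steps=20)
x0 = torch.randn(8, 2)
xT, states = net(x0)
cost = integral_state_cost(states, net.dt)
```

The integral cost above can be combined with any terminal loss on `xT`; the toolbox's cost functionals follow the same pattern.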
A sample experiment can be found in `generate_fig.py`. Its main steps are an instantiation of the neural ODE model
```python
model = NeuralODE(device,
                  data_dim=2,
                  hidden_dim=5,
                  augment_dim=1,
                  non_linearity='relu',
                  architecture='bottleneck',
                  T=10,
                  time_steps=20,
                  fixed_projector=False,
                  cross_entropy=False)
```
and then of the training algorithm
```python
trainer = Trainer(model,
                  optimizer_anode,  # a torch.optim optimizer over model.parameters()
                  device,
                  cross_entropy=False,
                  turnpike=True,
                  bound=0.,
                  fixed_projector=False)
```
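For intuition about the kind of objective the trainer minimizes, the following self-contained sketch (plain PyTorch, hypothetical names, not the toolbox's `Trainer`) performs one gradient step on an empirical risk plus an L2 penalty on the time-dependent weights:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the model: one linear layer per time step (time-dependent weights).
steps, dim, dt = 20, 2, 0.5
layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(steps))
params = [p for layer in layers for p in layer.parameters()]
optimizer = torch.optim.Adam(params, lr=1e-2)

x = torch.randn(32, dim)                                   # inputs
y = torch.randint(0, 2, (32,))                             # binary labels
target = torch.stack([1.0 - y.float(), y.float()], dim=1)  # one-hot targets

# Forward Euler pass through the discretized neural ODE.
state = x
for layer in layers:
    state = state + dt * torch.tanh(layer(state))

# Empirical risk plus a (Riemann-sum) L2 penalty on the controls.
risk = ((state - target) ** 2).mean()
penalty = dt * sum((p ** 2).sum() for p in params)
objective = risk + 1e-3 * penalty

optimizer.zero_grad()
objective.backward()
optimizer.step()
```

Swapping the squared loss for cross-entropy, or the L2 penalty for an L1 penalty, corresponds to the `cross_entropy` flag and the alternative weight penalties mentioned above.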
If you use this toolbox in a scientific publication, we would appreciate it if you cited one of the following articles on this topic.
- Turnpike in optimal control of PDEs, ResNets, and beyond

```bibtex
@article{geshkovski2022turnpike,
  title={Turnpike in optimal control of PDEs, ResNets, and beyond},
  author={Geshkovski, Borjan and Zuazua, Enrique},
  journal={Acta Numerica},
  volume={31},
  pages={135--263},
  year={2022},
  publisher={Cambridge University Press}
}
```

- Large-time asymptotics in deep learning

```bibtex
@article{esteve2021large,
  title={Large-time asymptotics in deep learning},
  author={Esteve-Yag{\"u}e, Carlos and Geshkovski, Borjan and Pighin, Dario and Zuazua, Enrique},
  year={2021}
}
```

- Sparsity in long-time control of neural ODEs

```bibtex
@article{esteve2023sparsity,
  title={Sparsity in long-time control of neural ODEs},
  author={Esteve-Yag{\"u}e, Carlos and Geshkovski, Borjan},
  journal={Systems \& Control Letters},
  volume={172},
  pages={105452},
  year={2023},
  publisher={Elsevier}
}
```