There are some **major breaking changes** to support users providing previous calibration results, e.g. from previous model selection runs. The following changes are reflected in the notebook examples.
- **breaking change** Previously, calibration tools would call `candidates` at each iteration of model selection. `candidates` has now been renamed to `start_iteration`, and tools are now expected to run `end_iteration` after calibrating the iteration's models. This structure also simplifies the codebase for other features of PEtab Select.
- **breaking change** Previously, calibration tools would determine whether to continue model selection based on whether the candidate space contains any models. Now, calibration tools should rely on the `TERMINATE` signal provided by `end_iteration` to determine whether to continue model selection.
- **breaking change** PEtab Select hides user-calibrated models from the calibration tool until `end_iteration` is called. Hence, if a calibration tool does some analysis on the calibrated models of the current iteration, the tool should use the `MODELS` provided by `end_iteration`, and not the `MODELS` provided by `start_iteration`.

In summary, here's some pseudocode showing the old way.
```python
from petab_select.ui import candidates

while True:
    # Get the candidate models for the current iteration
    candidate_space = candidates(...)
    models = candidate_space.models

    # End model selection when the candidate space contains no models
    if not models:
        break

    # Calibrate iteration models
    for model in models:
        calibrate(model)

    # Print a summary/analysis of current iteration models (dummy code)
    print_summary_of_iteration_models(models)
```

And here's the new way. Full working examples are given in the updated notebooks, including how to handle the candidate space.
```python
from petab_select.ui import start_iteration, end_iteration

while True:
    # Start the iteration, get the iteration's models to calibrate
    iteration = start_iteration(...)

    # Calibrate iteration models
    for model in iteration[MODELS]:
        calibrate(model)

    # Finalize iteration, get all iteration models and results
    iteration_results = end_iteration(...)

    # Print a summary/analysis of all iteration models (dummy code)
    print_summary_of_iteration_models(iteration_results[MODELS])

    # End model selection when `end_iteration` provides the `TERMINATE` signal
    if iteration_results[TERMINATE]:
        break
```

- GitHub CI fixes and GHA deployments to PyPI and Zenodo
- fixed a bug introduced in 0.1.8, where FAMoS "jump to most distant" moves were not handled correctly
- `candidates` has been renamed to `start_iteration`:
  - no longer accepts `calibrated_models`, as they are now automatically stored in the `CandidateSpace` with each `end_iteration`
  - `calibrated_models` and `newly_calibrated_models` no longer need to be tracked between iterations; they are now tracked by the candidate space
  - exclusions via `exclude_models` are no longer supported; exclusions can instead be supplied with `set_excluded_hashes`
- model hashes are more readable and composed of two parts:
  1. the model subspace ID
  2. the location of the model in its subspace (the model subspace indices)
- users can now provide model calibrations from previous model selection runs. This enables them to skip re-calibration of the same models.
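The hash-based exclusion mentioned above can be sketched with a toy stand-in. The class and the hash strings below are hypothetical illustrations, not the real `petab_select` API; only the method name `set_excluded_hashes` and the subspace-ID-plus-indices hash structure come from this changelog.

```python
# Hypothetical sketch: a minimal stand-in candidate space that skips
# candidates whose hash was supplied via set_excluded_hashes.
class ToyCandidateSpace:
    def __init__(self):
        self.excluded_hashes = set()

    def set_excluded_hashes(self, hashes):
        # Replace the exclusion set with the provided model hashes.
        self.excluded_hashes = set(hashes)

    def accept(self, model_hash):
        # A candidate is accepted only if its hash is not excluded.
        return model_hash not in self.excluded_hashes

space = ToyCandidateSpace()
# e.g. hashes of models calibrated in a previous model selection run
space.set_excluded_hashes(["subspace1-0_1", "subspace1-1_1"])

print(space.accept("subspace1-0_1"))  # prints False: already calibrated
print(space.accept("subspace1-1_0"))  # prints True: a new model
```

Because exclusions are keyed on hashes rather than model objects, previously calibrated models can be excluded without reloading their full PEtab problems.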
## 0.1.13
- fixed bug when no predecessor model is provided, introduced in 0.1.11 (#83)