@@ -12,6 +12,7 @@ This cookbook provides an overview of using the results.
- **Model Fit**: Perform a simple model-fit to create a ``Result`` object.
- **Info**: Print the ``info`` attribute of the ``Result`` object to display a summary of the model-fit.
+ - **Loading From Hard-disk**: Loading results from hard-disk to Python variables via the aggregator.
- **Samples**: The ``Samples`` object contained in the ``Result``, containing all non-linear samples (e.g. parameters, log likelihoods, etc.).
- **Maximum Likelihood**: The maximum likelihood model instance.
- **Posterior / PDF**: The median PDF model instance and PDF vectors of all model parameters via 1D marginalization.
@@ -98,6 +99,69 @@ The output appears as follows:
normalization 24.79 (24.65, 24.94)
sigma 9.85 (9.78, 9.90)
+ Loading From Hard-disk
+ ----------------------
+
+ When performing fits which output results to hard-disk, a ``files`` folder is created containing .json / .csv files of
+ the model, samples, search, etc. If you have not already, check it out now for a completed fit on your hard-disk!
+
+ These files can be loaded from hard-disk to Python variables via the aggregator, making them accessible in a
+ Python script or Jupyter notebook. They are loaded as the internal **PyAutoFit** objects we are familiar with,
+ for example the ``model`` is loaded as the ``Model`` object we passed to the search above.
+
+ Below, we will access these results using the aggregator's ``values`` method. A full list of what can be loaded is
+ as follows:
+
+ - ``model``: The ``model`` defined above and used in the model-fit (``model.json``).
+ - ``search``: The non-linear search settings (``search.json``).
+ - ``samples``: The non-linear search samples (``samples.csv``).
+ - ``samples_info``: Additional information about the samples (``samples_info.json``).
+ - ``samples_summary``: A summary of key results of the samples (``samples_summary.json``).
+ - ``info``: The info dictionary passed to the search (``info.json``).
+ - ``covariance``: The inferred covariance matrix (``covariance.csv``).
+ - ``data``: The 1D noisy data that is fitted (``data.json``).
+ - ``noise_map``: The 1D noise-map fitted (``noise_map.json``).
+
+ The ``samples`` and ``samples_summary`` results contain a lot of repeated information. The ``samples`` result contains
+ the full non-linear search samples, for example every parameter sample and its log likelihood. The ``samples_summary``
+ contains a summary of the results, for example the maximum log likelihood model and error estimates on parameters
+ at 1 and 3 sigma confidence.
+
+ Accessing results via the ``samples_summary`` is much faster, because it does not re-perform calculations using the
+ full list of samples. Therefore, if the result you want is accessible via the ``samples_summary`` you should use it,
+ but if not you can fall back to the ``samples``.
+
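The speed difference is easy to picture with a toy sketch: a summary stores a value that was computed once, whereas the full samples require the calculation to be re-run every time. The names ``parameter_samples`` and ``median_sigma`` below are hypothetical illustrations, not the real contents of the result files.

.. code-block:: python

    import statistics

    # Hypothetical stand-ins for the two result files: the full list of
    # parameter samples and a precomputed summary dictionary.
    parameter_samples = [9.7, 9.8, 9.9, 9.85, 9.8]
    samples_summary = {"median_sigma": statistics.median(parameter_samples)}

    # Via the full samples: the median is recomputed from every sample.
    median_from_samples = statistics.median(parameter_samples)

    # Via the summary: the stored value is simply read back, with no
    # recomputation over the sample list.
    median_from_summary = samples_summary["median_sigma"]

    assert median_from_samples == median_from_summary == 9.8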
+ .. code-block:: python
+
+     from os import path
+
+     from autofit.aggregator.aggregator import Aggregator
+
+     agg = Aggregator.from_directory(
+         directory=path.join("output", "cookbook_result"),
+     )
+
+ Before using the aggregator to inspect results, let's discuss Python generators.
+
+ A generator is an object that iterates over a function when it is called. The aggregator creates all of the objects
+ that it loads from the database as generators (as opposed to a list, or dictionary, or another Python type).
+
+ This is because generators are memory efficient, as they do not store the entries of the database in memory
+ simultaneously. This contrasts with objects like lists and dictionaries, which store all entries in memory at once.
+ If you fit a large number of datasets, lists and dictionaries will use a lot of memory and could crash your computer!
+
+ Once we use a generator in the Python code, it cannot be used again. To perform the same task twice, the
+ generator must be remade. This cookbook therefore rarely stores generators as variables and instead uses the
+ aggregator to create each generator at the point of use.
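This one-shot behaviour is not specific to the aggregator; it can be seen with any plain Python generator, as in the minimal sketch below.

.. code-block:: python

    # A generator expression yields its entries lazily, one at a time.
    squares = (x ** 2 for x in range(3))

    # The first pass consumes every entry.
    print(list(squares))  # [0, 1, 4]

    # A second pass yields nothing: the generator is now exhausted.
    print(list(squares))  # []

    # To iterate again, the generator must be remade.
    squares = (x ** 2 for x in range(3))
    print(list(squares))  # [0, 1, 4]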
+
+ To create a generator of a specific set of results, we use the ``values`` method. This takes the ``name`` of the
+ object we want to create a generator of, for example inputting ``name="samples"`` will return the results ``Samples``
+ object (which is illustrated in detail below).
+
+ .. code-block:: python
+
+     for samples in agg.values("samples"):
+         print(samples.parameter_lists[0])
+
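Because ``values`` builds a new generator on each call, repeating a task simply means calling it again at the point of use. The ``StubAggregator`` below is a hypothetical stand-in written for this sketch, not the real aggregator; it only mimics the remake-per-call pattern described above.

.. code-block:: python

    # A minimal, hypothetical stand-in for the aggregator: like the real
    # one, its values() method remakes a generator on every call.
    class StubAggregator:
        def __init__(self, entries):
            self.entries = entries

        def values(self, name):
            return (entry[name] for entry in self.entries)

    agg_stub = StubAggregator([{"samples": [1.0, 2.0]}, {"samples": [3.0, 4.0]}])

    # Each call creates a fresh generator, so a second pass is performed
    # by simply calling values() again.
    first_pass = list(agg_stub.values("samples"))
    second_pass = list(agg_stub.values("samples"))
    assert first_pass == second_pass == [[1.0, 2.0], [3.0, 4.0]]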
Samples
-------