diff --git a/conda-recipe/meta.yaml b/conda-recipe/meta.yaml
index 6f0c42007a..75f288a57c 100644
--- a/conda-recipe/meta.yaml
+++ b/conda-recipe/meta.yaml
@@ -45,7 +45,7 @@ requirements:
- prompt_toolkit
- sphinx # documentation
- recommonmark
- - sphinx_rtd_theme
+ - sphinx_rtd_theme >0.5
test:
requires:
diff --git a/doc/sphinx/source/QA/dummy.md b/doc/sphinx/source/QA/dummy.md
deleted file mode 100644
index f42ba720fa..0000000000
--- a/doc/sphinx/source/QA/dummy.md
+++ /dev/null
@@ -1,2 +0,0 @@
-## F.A.Q.
-
diff --git a/doc/sphinx/source/QA/index.rst b/doc/sphinx/source/QA/index.rst
deleted file mode 100644
index b0a38e0d9d..0000000000
--- a/doc/sphinx/source/QA/index.rst
+++ /dev/null
@@ -1,7 +0,0 @@
-Q&A
-===
-
-.. toctree::
- :maxdepth: 1
-
- ./dummy.md
diff --git a/doc/sphinx/source/_static/LogoNNPDF.png b/doc/sphinx/source/_static/LogoNNPDF.png
new file mode 100644
index 0000000000..97cb77f74d
Binary files /dev/null and b/doc/sphinx/source/_static/LogoNNPDF.png differ
diff --git a/doc/sphinx/source/_static/css/custom.css b/doc/sphinx/source/_static/css/custom.css
new file mode 100644
index 0000000000..856ef85fa7
--- /dev/null
+++ b/doc/sphinx/source/_static/css/custom.css
@@ -0,0 +1,8 @@
+.wy-nav-content {
+ max-width: 80%;
+}
+
+.wy-side-nav-search, .wy-nav-top {
+ background: #77C3EC;
+}
+
diff --git a/doc/sphinx/source/code/buildmaster.md b/doc/sphinx/source/buildmaster.md
similarity index 84%
rename from doc/sphinx/source/code/buildmaster.md
rename to doc/sphinx/source/buildmaster.md
index a029d8579c..9fde1acb1a 100644
--- a/doc/sphinx/source/code/buildmaster.md
+++ b/doc/sphinx/source/buildmaster.md
@@ -1,14 +1,13 @@
-## Buildmaster
-
+## Handling experimental data: Buildmaster
Buildmaster is the code that allows the user to generate the ``DATA`` and
-``SYSTYPE`` files that contain, respectively, the experimental data and the
-information pertaining to the treatment of systematic errors. It is available
+``SYSTYPE`` files that contain, respectively, the experimental data and the
+information pertaining to the treatment of systematic errors. It is available
as a folder within the nnpdf project
```
https://github.com/NNPDF/nnpdf/buildmaster
```
-The structure of the files generated by the buildmaster project
+The structure of the files generated by the buildmaster project
is documented in [Experimental data files](exp_data_files).
Once the remainder of the nnpdf project is compiled and installed, the buildmaster code can
@@ -16,25 +15,23 @@ be compiled and installed as follows
```
mkdir build
cd build
-cmake ..
+cmake ..
make -j && make install
```
This will generate the `buildmaster` executable which shall be run as
```
./buildmaster
```
-The `DATA_` and `SYSTYPE_` files will be generated, respectively, in
+The `DATA_` and `SYSTYPE_` files will be generated, respectively, in
```
results/
results/systypes/
```
-They should then be copied into
+They should then be copied into
```
nnpdf/nnpdfcpp/data/commondata
nnpdf/nnpdfcpp/data/commondata/systypes/
```
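+A minimal sketch of this copy step, assuming it is run from the `buildmaster` folder inside the
+`nnpdf` checkout (the glob patterns are illustrative):
+```
+cp results/DATA_* ../nnpdfcpp/data/commondata/
+cp results/systypes/SYSTYPE_* ../nnpdfcpp/data/commondata/systypes/
+```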
Whenever a new data set is implemented, the buildmaster code should be
-updated accordingly. Detailed instructions on how to do so are provided in
+updated accordingly. Detailed instructions on how to do so are provided in
the tutorial [Implementing a new experiment in buildmaster](../tutorials/buildmaster.md).
-
-
diff --git a/doc/sphinx/source/code/index.rst b/doc/sphinx/source/code/index.rst
deleted file mode 100644
index ab05d0026a..0000000000
--- a/doc/sphinx/source/code/index.rst
+++ /dev/null
@@ -1,9 +0,0 @@
-Code
-====
-
-.. toctree::
- :maxdepth: 1
-
- ./dummy.md
- ./buildmaster.md
- ./apfelcomb.md
diff --git a/doc/sphinx/source/conf.py b/doc/sphinx/source/conf.py
index d29d8b0e4c..8c3601ae79 100644
--- a/doc/sphinx/source/conf.py
+++ b/doc/sphinx/source/conf.py
@@ -103,12 +103,19 @@
# further. For a list of options available for each theme, see the
# documentation.
#
-# html_theme_options = {}
+html_theme_options = {'logo_only': True,
+                      'display_version': False}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = []
+html_static_path = ["_static"]
+
+html_css_files = [
+ 'css/custom.css',
+]
+
+html_logo = "_static/LogoNNPDF.png"
# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
diff --git a/doc/sphinx/source/data/index.rst b/doc/sphinx/source/data/index.rst
index d4ed511d83..d18ed4b31d 100644
--- a/doc/sphinx/source/data/index.rst
+++ b/doc/sphinx/source/data/index.rst
@@ -1,6 +1,9 @@
Storage of data and theory predictions
======================================
+This section describes how two crucial types of information are stored in the NNPDF system:
+the data files themselves and the corresponding files containing theoretical predictions.
+
.. toctree::
:maxdepth: 1
diff --git a/doc/sphinx/source/code/apfelcomb.md b/doc/sphinx/source/external-code/apfelcomb.md
similarity index 88%
rename from doc/sphinx/source/code/apfelcomb.md
rename to doc/sphinx/source/external-code/apfelcomb.md
index 90479f0490..87a9a3c9c4 100644
--- a/doc/sphinx/source/code/apfelcomb.md
+++ b/doc/sphinx/source/external-code/apfelcomb.md
@@ -1,4 +1,7 @@
-## APFELcomb
+```eval_rst
+.. _apfelcomb:
+```
+# Using APFELcomb
APFELcomb is the project that allows the user to generate ``FK`` tables. These
are lookup tables that contain the relevant information to compute theoretical
@@ -17,19 +20,19 @@ nnpdf/nnpdfcpp/data/doc/data_layout.pdf
```
APFELcomb depends on the following libraries
-* [APFEL](github.com/scarrazza/apfel.git)
-* [nnpdf](github.com/NNPDF/nnpdf)
-* [APPLgrid](github.com/NNPDF/external/applgrid-1.4.70-nnpd)
+* [APFEL](https://github.com/scarrazza/apfel)
+* [nnpdf](https://github.com/NNPDF/nnpdf)
+* [APPLgrid](https://github.com/NNPDF/external/tree/master/applgrid-1.4.70-nnpdf)
and data files from
-* [applgrids](github.com/NNPDF/applgrids)
+* [applgrids](https://github.com/NNPDF/applgrids)
There are various ways of generating the latter, as explained in [How to
generate applgrids](../tutorials/APPLgrids.md).
Once the above libraries and data files are set up, the APFELcomb project can be
-compield as follows
+compiled as follows
```
make
```
diff --git a/doc/sphinx/source/external-code/cross-secs.md b/doc/sphinx/source/external-code/cross-secs.md
new file mode 100644
index 0000000000..355e1795b1
--- /dev/null
+++ b/doc/sphinx/source/external-code/cross-secs.md
@@ -0,0 +1,35 @@
+# Partonic cross section generation
+
+Many programmes exist to evaluate partonic cross sections. Some are general purpose, such as
+MadGraph5\_aMC@NLO and MCFM, in that they compute predictions for a variety of physical processes
+(e.g. Drell-Yan production, single top production, ...) up to a given order. Others are more
+specific, such as Top++, which makes predictions for top quark pair production only. Some of these
+programmes will be briefly outlined here. Note that to produce predictions at NNLO in QCD, which is
+the highest order used in NNPDF fits, one usually produces APPLgrids at NLO in QCD, and then
+supplements these with NNLO QCD corrections, known as C-factors, which are computed with a code
+that has NNLO capabilities. These C-factors are often provided to the collaboration by external
+parties, rather than the code being run in-house.
+
+[MadGraph5\_aMC@NLO](https://launchpad.net/mg5amcnlo) is the programme that will be used for most of
+the future NNPDF calculations of partonic cross sections. This is in large part due to its ability
+to compute predictions at NLO in QCD with NLO EW corrections. To generate APPLgrids from
+MadGraph5\_aMC@NLO, one can use [aMCfast](https://amcfast.hepforge.org/), which interfaces between
+the two formats.
+
+## Other codes
+
+[MCFM](https://mcfm.fnal.gov/) ('Monte Carlo for FeMtobarn processes') is an alternative programme
+to MadGraph5\_aMC@NLO, which instead uses mcfm-bridge as an interface to generate APPLgrids.
+
+[FEWZ](https://arxiv.org/abs/1011.3540) ('Fully Exclusive W and Z Production') is a programme for
+calculating (differential) cross sections for the Drell-Yan production of lepton pairs up to NNLO
+in QCD.
+
+[NLOjet++](http://www.desy.de/~znagy/Site/NLOJet++.html) is a programme that can compute cross
+sections for a variety of processes up to NLO. The processes include electron-positron annihilation,
+deep-inelastic scattering (DIS), photoproduction in electron-proton collisions, and a variety of
+processes in hadron-hadron collisions.
+
+[Top++](http://www.precision.hep.phy.cam.ac.uk/top-plus-plus/) is a programme for computing top
+quark pair production inclusive cross sections at NNLO in QCD with soft gluon resummation included
+up to next-to-next-to-leading log (NNLL).
diff --git a/doc/sphinx/source/external-code/grids.md b/doc/sphinx/source/external-code/grids.md
new file mode 100644
index 0000000000..d7d2e95924
--- /dev/null
+++ b/doc/sphinx/source/external-code/grids.md
@@ -0,0 +1,32 @@
+# Grid generation
+
+Grids play a crucial role in NNPDF fits. This is because they enable otherwise very time-consuming
+computations to be performed on the fly during an NNPDF fit. The guiding principle behind producing
+grids is that the maximum possible amount of information should be computed before a PDF fit, so
+that the smallest possible number of operations has to be carried out during a fit. There are two
+particularly important types of grid: APPLgrids and FK ('Fast Kernel') tables. APPLgrids contain
+information on the partonic cross section (otherwise known as hard cross sections or coefficient
+functions) while FK tables combine APPLgrids with DGLAP evolution kernels from APFEL. This therefore
+means that FK tables can simply be combined with PDFs at the fitting scale to produce predictions
+for observables at the scale of the process.
+
+[APPLgrid](https://applgrid.hepforge.org/) is a C++ programme that allows the user to change certain
+settings within observable calculations a posteriori. Most importantly, the user can change the PDF
+set used, but they can also alter the renormalisation scale, factorisation scale and the strong
+coupling constant. Without APPLgrids, such changes would usually require a full rerun of the code,
+which is very time consuming. Moreover, these features are crucial for PDF fits, where hard cross
+sections must be convolved with different PDFs on the fly many times. APPLgrid works for hadron
+collider processes up to NLO in QCD, although work is ongoing to also include NLO electroweak
+corrections in the APPLgrid format. In addition to the standard version of APPLgrid, a modified
+version of APPLgrid exists which includes photon channels. This is known as APPLgridphoton. To
+learn how to generate APPLgrids, please see the tutorial [here](../tutorials/APPLgrids.md).
+
+APFELcomb generates FK tables for NNPDF fits. Information on how to use it can be found
+[here](./apfelcomb.md). You can read about the mechanism behind APFELcomb
+[here](https://arxiv.org/abs/1605.02070) and find more information about the theory behind FK tables
+in the [Theory section](../Theory/FastInterface.rst).
+
+## Other codes
+
+[fastNLO](https://fastnlo.hepforge.org/) is an alternative code to APPLgrid, which is currently not
+used by NNPDF, since the grids produced by fastNLO are not interfaced with the NNPDF code.
diff --git a/doc/sphinx/source/external-code/index.rst b/doc/sphinx/source/external-code/index.rst
new file mode 100644
index 0000000000..d5340ef808
--- /dev/null
+++ b/doc/sphinx/source/external-code/index.rst
@@ -0,0 +1,21 @@
+External codes
+==============
+
+This section details the external codes which are useful in producing theory predictions.
+
+
+**"What does any of this have to do with apples?"**
+
+There are many ingredients that go into a PDF fit. For example, one must generate the partonic cross
+sections for use in the fit, convert these predictions into a format that is suitable for PDF
+fits (i.e. one that allows on-the-fly convolutions), and evolve the PDFs according to the
+DGLAP equations. To perform each of these steps, different codes are used. In this section
+various external codes that you will frequently encounter are described.
+
+.. toctree::
+ :maxdepth: 1
+
+ ./pdf-codes.md
+ ./grids.md
+ ./cross-secs.md
+ ./apfelcomb
diff --git a/doc/sphinx/source/external-code/pdf-codes.md b/doc/sphinx/source/external-code/pdf-codes.md
new file mode 100644
index 0000000000..ecddfb979d
--- /dev/null
+++ b/doc/sphinx/source/external-code/pdf-codes.md
@@ -0,0 +1,36 @@
+```eval_rst
+.. _lhapdf:
+```
+# PDF set storage and interpolation
+
+*Author: Cameron Voisey, 13/10/2019*
+
+
+[LHAPDF](https://lhapdf.hepforge.org/) is a C++ library that evaluates PDFs by interpolating the
+discretised PDF 'grids' that PDF collaborations produce. It also gives its users access to proton
+and nuclear PDF sets from a variety of PDF collaborations, including NNPDF, MMHT and CTEQ. A list
+of all currently available PDF sets can be found on their
+[website](https://lhapdf.hepforge.org/pdfsets.html). Particle physics programmes that typically make
+use of PDFs, such as Monte Carlo event generators, will usually be interfaced with LHAPDF, to allow
+a user to easily specify the PDF set that they wish to use in their calculations. You can read
+more about LHAPDF in the [paper](https://arxiv.org/abs/1412.7420) that marked their latest
+release.
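+
+As a minimal sketch, the `lhapdf` command-line script that ships with the library can be used to
+manage locally installed grids (the set name below is purely illustrative):
+```
+lhapdf list                          # list the PDF sets known to LHAPDF
+lhapdf install NNPDF31_nnlo_as_0118  # download a set into the LHAPDF data directory
+lhapdf-config --datadir              # print where the grid files are stored
+```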
+
+## PDF evolution
+
+[APFEL](https://apfel.hepforge.org/) ('A PDF Evolution Library') is the PDF evolution code currently
+used by the NNPDF Collaboration. In addition to its PDF evolution capabilities, it also produces
+predictions of deep-inelastic scattering structure functions. In recent years it has been developed
+alongside NNPDF, and so it therefore contains the features and settings required in an NNPDF fit.
+That is, it includes quark masses in the MSbar scheme, the various FONLL heavy quark schemes, scale
+variations up to NLO, etc. Note that at the time of writing, a more streamlined code, currently
+dubbed EKO ('Evolution Kernel Operator'), is being written to replace APFEL. To find more
+general information about PDF evolution and the DGLAP equations, you can go to the [Theory
+section](dglap.md).
+
+### Other codes
+
+[Hoppet](https://hoppet.hepforge.org/) ('Higher Order Perturbative Parton Evolution Toolkit') is an
+alternative PDF evolution code which is capable of evolving unpolarised PDFs to NNLO and linearly
+polarised PDFs to NLO. The unpolarised evolution includes heavy-quark thresholds in the MSbar
+scheme.
diff --git a/doc/sphinx/source/get-started/index.rst b/doc/sphinx/source/get-started/index.rst
index d5deee29e2..743cf6c82e 100644
--- a/doc/sphinx/source/get-started/index.rst
+++ b/doc/sphinx/source/get-started/index.rst
@@ -1,15 +1,22 @@
-Getting Started
+.. _getstarted:
+
+Getting started
===============
+This section provides an introduction to the NNPDF code and workflow.
+
+Essential first steps
+---------------------
.. toctree::
:maxdepth: 1
./access
- ./sphinx-documentation.md
- ./installation
- ./installation-source
./git
+ ./installation
+
+Necessary for developers
+------------------------
+.. toctree::
+ :maxdepth: 1
+
./rules
- ./prs
- ./tools
./python-tools
diff --git a/doc/sphinx/source/get-started/installation-source.md b/doc/sphinx/source/get-started/installation-source.md
deleted file mode 100644
index fe117cd4e1..0000000000
--- a/doc/sphinx/source/get-started/installation-source.md
+++ /dev/null
@@ -1,94 +0,0 @@
-```eval_rst
-.. _source:
-```
-## Installation from source
-
-If you intend to work on the NNPDF code, then building from source is the recommended installation
-procedure. However, you can still use conda to get all the dependencies and setup the validphys and
-C++ development environment. Further information is available in the
-[vp-guide](https://data.nnpdf.science/validphys-docs/guide.html#development-installs). Note that
-the `binary-bootstrap.sh` should be downloaded and run as explained above, if the user has not
-already done so.
-
-1. Create an NNPDF developer environment `nnpdf-dev` and install all relevant dependencies using
-
- conda create -n nnpdf-dev
- conda activate nnpdf-dev
- conda install --only-deps nnpdf
-
- Note that the user should be in the conda environment `nnpdf-dev` whenever they wish to work on
- NNPDF code. The conda environment can be exited using `conda deactivate`.
-
-
- Important note for Mac users
-
- If you are a macOS user, you will need to download the
- [Mac Software Development Kit](https://github.com/phracker/MacOSX-SDKs) or
- SDK for short. This is necessary to get the correct C compiler. Assuming
- that you already have access to the [server](NNPDF-server), you can
- download the version of the SDK used by the [Continuous Integration](CI)
- system by doing
-
- curl -L -O https://data.nnpdf.science/MacOSX10.9.sdk.tar.xz
-
- You can then unpack it into your root conda directory by running
-
- tar xfz MacOSX10.9.sdk.tar.xz -C
-
- where you can find `` by typing
- `echo $CONDA_PREFIX` when your base conda environment is activated. You
- should then export the following path
-
- export CONDA_BUILD_SYSROOT=/MacOSX10.9.sdk
-
- which you may wish to write to one of your `~/.bashrc` or
- `~/.bash_profile` scripts so that the SDK is easily accessible from the
- shell.
-
-
-
-2. Install the appropriate C++ compilers using
-
- conda install gxx_linux-64
-
- macOS users should replace `gxx_linux-64` with `clangxx_osx-64`.
-
-3. Ensure that the NNPDF repositories `nnpdf` and `apfel` are in the `nnpdfgit` directory. These
- are required to be able to run fits and can be obtained respectively by
-
- git clone git@github.com:NNPDF/nnpdf.git
- git clone https://github.com/scarrazza/apfel.git
-
-4. Obtain the dependencies of the code you want to build. Where to find those depends on the
- particular code. For example, something linking to `libnnpdf` will likely require `pkg-config`.
- Projects based on `autotools` (those that have a `./configure` script) will additionally
- require `automake` and `libtool`. Similarly projects based on `cmake` will require installing
- the `cmake` package. In the case of `nnpdf` itself, the build dependencies can be found in
- `/conda-recipe/meta.yaml`. We have to install the remaining ones manually:
-
- conda install pkg-config swig=3.0.10 cmake
-
-5. We now need to make the installation prefix point to our `nnpdf-dev` environment, this can be
- done using:
-
- $CONDA_PREFIX=~/miniconda3/envs/nnpdf-dev/
-
- this assumes `miniconda3` is installed in the default place which is the home directory.
-
-6. Navigate to the `nnpdf` directory obtained from the Github repository and create a folder called
- `conda-bld` by
-
- nnpdf$ mkdir conda-bld
- nnpdf$ cd conda-bld
-
- Note that it is important that for the following step to be executed while the user is in the
- `nnpdf-dev` conda environment. The project can be built using:
-
- nnpdf/conda-bld$ cmake .. -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX
-
-7. When the user wishes to work on the NNPDF code, they should do so in, for example,
- `'/nnpdfgit/nnpdf/libnnpdf'`. To compile the code navigate to the `conda-bld` directory created
- above and run
-
- make
- make install
diff --git a/doc/sphinx/source/get-started/installation.md b/doc/sphinx/source/get-started/installation.md
index 7d485ace0a..50b3f8fe00 100644
--- a/doc/sphinx/source/get-started/installation.md
+++ b/doc/sphinx/source/get-started/installation.md
@@ -1,3 +1,10 @@
+# Installing the code
+
+There are two methods for installing the code, both of which require conda. You can either install
+the code entirely with conda or install the code from source, with the dependencies still being
+installed via conda. The [first method](conda) is preferable if you simply want to run the code,
+while the [second](source) is necessary if you want to develop the code.
+
```eval_rst
.. _conda:
```
@@ -91,3 +98,98 @@ want to keep around the older installation). The command for symlinking would be
This will avoid symlinking the existing LHAPDF configuration, which may be corrupted or
incompatible. You should make sure only the grid folders are transferred if you copy or move
instead.
+
+```eval_rst
+.. _source:
+```
+## Installation from source
+
+If you intend to work on the NNPDF code, then building from source is the recommended installation
+procedure. However, you can still use conda to get all the dependencies and setup the validphys and
+C++ development environment. Further information is available in the
+[vp-guide](https://data.nnpdf.science/validphys-docs/guide.html#development-installs). Note that
+the `binary-bootstrap.sh` should be downloaded and run as explained above, if the user has not
+already done so.
+
+1. Create an NNPDF developer environment `nnpdf-dev` and install all relevant dependencies using
+
+ conda create -n nnpdf-dev
+ conda activate nnpdf-dev
+ conda install --only-deps nnpdf
+
+ Note that the user should be in the conda environment `nnpdf-dev` whenever they wish to work on
+ NNPDF code. The conda environment can be exited using `conda deactivate`.
+
+
+ Important note for Mac users
+
+ If you are a macOS user, you will need to download the
+ [Mac Software Development Kit](https://github.com/phracker/MacOSX-SDKs) or
+ SDK for short. This is necessary to get the correct C compiler. Assuming
+ that you already have access to the [server](NNPDF-server), you can
+ download the version of the SDK used by the [Continuous Integration](CI)
+ system by doing
+
+ curl -L -O https://data.nnpdf.science/MacOSX10.9.sdk.tar.xz
+
+ You can then unpack it into your root conda directory by running
+
+ tar xfz MacOSX10.9.sdk.tar.xz -C
+
+ where you can find `` by typing
+ `echo $CONDA_PREFIX` when your base conda environment is activated. You
+ should then export the following path
+
+ export CONDA_BUILD_SYSROOT=/MacOSX10.9.sdk
+
+ which you may wish to write to one of your `~/.bashrc` or
+ `~/.bash_profile` scripts so that the SDK is easily accessible from the
+ shell.
+
+
+
+2. Install the appropriate C++ compilers using
+
+ conda install gxx_linux-64
+
+ macOS users should replace `gxx_linux-64` with `clangxx_osx-64`.
+
+3. Ensure that the NNPDF repositories `nnpdf` and `apfel` are in the `nnpdfgit` directory. These
+ are required to be able to run fits and can be obtained respectively by
+
+ git clone git@github.com:NNPDF/nnpdf.git
+ git clone https://github.com/scarrazza/apfel.git
+
+4. Obtain the dependencies of the code you want to build. Where to find those depends on the
+ particular code. For example, something linking to `libnnpdf` will likely require `pkg-config`.
+ Projects based on `autotools` (those that have a `./configure` script) will additionally
+ require `automake` and `libtool`. Similarly projects based on `cmake` will require installing
+ the `cmake` package. In the case of `nnpdf` itself, the build dependencies can be found in
+ `/conda-recipe/meta.yaml`. We have to install the remaining ones manually:
+
+ conda install pkg-config swig=3.0.10 cmake
+
+5. We now need to make the installation prefix point to our `nnpdf-dev` environment, which can be
+ done using:
+
+ CONDA_PREFIX=~/miniconda3/envs/nnpdf-dev/
+
+ This assumes `miniconda3` is installed in its default location, i.e. the home directory.
+
+6. Navigate to the `nnpdf` directory obtained from the Github repository and create a folder called
+ `conda-bld` by
+
+ nnpdf$ mkdir conda-bld
+ nnpdf$ cd conda-bld
+
+ Note that it is important for the following step to be executed while the user is in the
+ `nnpdf-dev` conda environment. The project can be built using:
+
+ nnpdf/conda-bld$ cmake .. -DCMAKE_INSTALL_PREFIX=$CONDA_PREFIX
+
+7. When the user wishes to work on the NNPDF code, they should do so in, for example,
+ `'/nnpdfgit/nnpdf/libnnpdf'`. To compile the code navigate to the `conda-bld` directory created
+ above and run
+
+ make
+ make install
diff --git a/doc/sphinx/source/get-started/prs.md b/doc/sphinx/source/get-started/prs.md
deleted file mode 100644
index b5cc1d07b4..0000000000
--- a/doc/sphinx/source/get-started/prs.md
+++ /dev/null
@@ -1,75 +0,0 @@
-```eval_rst
-.. _reviews:
-```
-# Reviewing pull requests
-
-All changes to the code [should](rules) be reviewed by at least one person (and ideally
-at least two). The expected benefits of the policy are:
-
- - It should improve the overall quality of the code.
-
- - It should provide the author of the change with a reasonably quick feedback
- loop to discuss the technical details involved in the changes.
-
- - It should make at least two people (the author and the reviewer) familiar
- with the changes. It should also ensure that the changes are easy to read
- and maintain in the future, and conform to the structure of the rest of the
- project.
-
-## Reviewing guidelines
-
-The following approach has been found helpful for reviewers, when reviewing pull
-requests:
-
- - Make sure you actually understand what the changes are about. Unclear
- details should not pass code review. Ask for clarifications, documentation,
- and changes in the code that make it more clear. If you are not in the
- position of taking the time, consider asking somebody else to help reviewing
- the changes. If the changes are big and difficult to comprehend at once,
- consider requesting that the author breaks them down in easier to
- understand, self contained, pull requests. Note that it is for the authors
- to proactively discuss the proposed changes before they become too difficult
- for anyone else to follow, and, failing that, it is fair to ask them to go
- through the work of making them intelligible.
-
- - Look at the big picture first. Think about whether the overall idea and
- implementation is sound or instead could benefit from going in a different
- direction. Ideally before a lot of work has gone into fine tuning details.
-
-
- - Review the code in detail. Try to identify areas where the changes
- could be clearly improved in terms of clarity, speed or style. Consider
- implementing minor changes yourself, although note that there are
- trade-offs: People are more likely to assimilate good patterns if they
- implement them a few times, which may be a win long term, even if it takes
- longer to ship this particular code change.
-
- - Ideally changes should come with automatic tests supporting their
- correctness.
-
- - Use [automated tools](pytoolsqa) which could catch a few extra
- problems. In particular
- * Do look at the automated tests that run with the PR.
- New code should not break them.
- * Use [`pylint`](https://www.pylint.org/) with [our default
- configuration](https://github.com/NNPDF/nnpdf/blob/master/.pylintrc) to
- catch common problems with Python code.
- * New Python code should come formatted with
- [`black` tool](https://github.com/psf/black).
- * Changes in compiled code should be tested in debug mode, with
- the address sanitizer enabled. This is done with the
- `-DCMAKE_BUILD_TYPE=Debug -DENABLE_ASAN=ON` options in `cmake`.
-
- - Regardless of automated tests, always run code with the new changes
- manually. This gives great insight into possible pitfalls and areas of
- improvement.
-
- - Make sure the changes are appropriately documented: Interface functions
- should come with rich docstrings, ideally with examples, larger pieces of
- functionality should come with some prose explaining what they are for.
-
- - Consider the effects on the larger system: Did this change make some example
- or piece of documentation obsolete and therefore mean needs to be updated?
- Did it break compatibility with something that we rely on? Should an email
- be sent around announcing the change? Does the change solve or unblock some
- outstanding issues?
diff --git a/doc/sphinx/source/get-started/rules.md b/doc/sphinx/source/get-started/rules.md
index a8576a3f2b..3690c8bc52 100644
--- a/doc/sphinx/source/get-started/rules.md
+++ b/doc/sphinx/source/get-started/rules.md
@@ -1,30 +1,113 @@
```eval_rst
.. _rules:
```
-# Code development rules
+# Code development
*Author: Cameron Voisey, 13/10/2019*
-Developers must never commit modifications directly to the master version of the code. Instead, the
-modifications you have made should be put into a branch and then a corresponding pull request (PR)
-should be opened. This PR should adhere to the following rules:
+Code development is carried out using GitHub.
+For more information on the Git workflow that NNPDF adopts, see the [Git and GitHub](./git.md) section.
-* A clear explanation of the aims of the PR should be given, i.e. what issue(s) are you trying to
+## Rules
+
+**Never commit modifications directly to the master version of the code! Instead,
+open a new branch of the code and make modifications on that branch. To merge
+changes into master, a pull request (PR) should be opened.**
+
+This PR should adhere to the following rules:
+
+* **A clear explanation of the aims of the PR** should be given, i.e. what issue(s) are you trying to
address? If the reason for the PR has already been detailed in an issue, then this issue should be
linked in the PR.
-* The PR should contain documentation describing the new features, if applicable. This obviously
-does not apply if the PR is itself proposing an addition or an alteration to the documentation.
+* The PR should contain **documentation describing the new features**, if applicable. This obviously
+does not apply if the PR is itself proposing an addition or an alteration to the documentation. For
+information on adding to the documentation see [this section](../sphinx-documentation.md).
* If the PR is fixing a bug, information should be given such that a reviewer can reproduce the bug.
-* The PR should have at least one developer assigned to it, whose task it is to [review](reviews) the
+* The PR should have **at least one developer assigned to it**, whose task it is to [review](reviews) the
code. The PR cannot be merged into master before the reviewer has approved it.
-* Before a PR can be merged into master, the Travis build for it must pass. Practically, this means
-that you should find a green tick next to your PR on the relevant [PR
+* Before a PR can be merged into master, the **Travis build for it must pass** (see [here](../ci/index.md)).
+Practically, this means that you should find a green tick next to your PR on the relevant [PR
page](https://github.com/NNPDF/nnpdf/pulls). If you instead find a red cross next to your PR, the
reason for the failure must be investigated and dealt with appropriately.
-For more information on the Git workflow that NNPDF adopts, see the [Git and GitHub](./git.md)
-section.
+* Please use the recommended resources detailed [here](../vp/examples.rst).
+
+```eval_rst
+.. _reviews:
+```
+## Reviewing pull requests
+
+All changes to the code [should](rules) be reviewed by at least one person (and ideally
+at least two). The expected benefits of the policy are:
+
+ - It should improve the overall quality of the code.
+
+ - It should provide the author of the change with a reasonably quick feedback
+ loop to discuss the technical details involved in the changes.
+
+ - It should make at least two people (the author and the reviewer) familiar
+ with the changes. It should also ensure that the changes are easy to read
+ and maintain in the future, and conform to the structure of the rest of the
+ project.
+
+### Guidelines for reviewing
+
+The following approach has been found helpful for reviewers, when reviewing pull
+requests:
+
+ - Make sure you actually understand what the changes are about. Unclear
+ details should not pass code review. Ask for clarifications, documentation,
+ and changes in the code that make it more clear. If you are not in the
+ position of taking the time, consider asking somebody else to help reviewing
+ the changes. If the changes are big and difficult to comprehend at once,
+ consider requesting that the author breaks them down in easier to
+ understand, self contained, pull requests. Note that it is for the authors
+ to proactively discuss the proposed changes before they become too difficult
+ for anyone else to follow, and, failing that, it is fair to ask them to go
+ through the work of making them intelligible.
+
+ - Look at the big picture first. Think about whether the overall idea and
+ implementation is sound or instead could benefit from going in a different
+ direction. Ideally before a lot of work has gone into fine tuning details.
+
+
+ - Review the code in detail. Try to identify areas where the changes
+ could be clearly improved in terms of clarity, speed or style. Consider
+ implementing minor changes yourself, although note that there are
+ trade-offs: People are more likely to assimilate good patterns if they
+ implement them a few times, which may be a win long term, even if it takes
+ longer to ship this particular code change.
+
+ - Ideally changes should come with automatic tests supporting their
+ correctness.
+
+ - Use [automated tools](pytoolsqa) which could catch a few extra
+ problems. In particular
+ * Do look at the automated tests that run with the PR.
+ New code should not break them.
+ * Use [`pylint`](https://www.pylint.org/) with [our default
+ configuration](https://github.com/NNPDF/nnpdf/blob/master/.pylintrc) to
+ catch common problems with Python code.
+ * New Python code should come formatted with the
+ [`black`](https://github.com/psf/black) tool.
+ * Changes in compiled code should be tested in debug mode, with
+ the address sanitizer enabled. This is done with the
+ `-DCMAKE_BUILD_TYPE=Debug -DENABLE_ASAN=ON` options in `cmake` (see the sketch
+ at the end of this section).
+
+ - Regardless of automated tests, always run code with the new changes
+ manually. This gives great insight into possible pitfalls and areas of
+ improvement.
+
+ - Make sure the changes are appropriately documented: Interface functions
+ should come with rich docstrings, ideally with examples, larger pieces of
+ functionality should come with some prose explaining what they are for.
+
+ - Consider the effects on the larger system: Did this change make some example
+ or piece of documentation obsolete and therefore mean it needs to be updated?
+ Did it break compatibility with something that we rely on? Should an email
+ be sent around announcing the change? Does the change solve or unblock some
+ outstanding issues?
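+
+As a rough sketch, these checks might be run locally as follows; the Python file path is
+illustrative and the `cmake` options are the ones quoted above:
+```
+# Python checks
+pylint --rcfile=.pylintrc path/to/changed_module.py
+black path/to/changed_module.py
+
+# compiled code in debug mode with the address sanitizer enabled
+mkdir build-debug && cd build-debug
+cmake .. -DCMAKE_BUILD_TYPE=Debug -DENABLE_ASAN=ON
+make && make install
+```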
diff --git a/doc/sphinx/source/get-started/tools.md b/doc/sphinx/source/get-started/tools.md
deleted file mode 100644
index 3ab416f028..0000000000
--- a/doc/sphinx/source/get-started/tools.md
+++ /dev/null
@@ -1,119 +0,0 @@
-# Tools, or "What does any of this have to do with apples?"
-
-*Author: Cameron Voisey, 13/10/2019*
-
-There are many ingredients that go into a PDF fit. For example, one must generate the partonic cross
-sections for use in the fit, convert these predictions into a format that is suitable for PDF fits,
-i.e. the format must be suitable for on-the-fly convolutions, and evolve the PDFs according to the
-DGLAP equations. To perform each of these steps, different codes are used. In this subsection
-various codes that you will frequently encounter are described. Note that to find out more about
-collinear factorisation, which is the method that enables this separation of a calculation into
-distinct parts, you can refer to the [Theory section](collinear.md).
-
-## NNPDF specific codes
-
-validphys and reportengine are two internal NNPDF codes that form the basis of much of the work that
-NNPDF does. To read about what validphys and reportengine are and what they do, please refer to the
-introductory section of the [vp-guide](./../vp/introduction.md).
-
-```eval_rst
-.. _lhapdf:
-```
-## PDF set storage and interpolation
-
-[LHAPDF](https://lhapdf.hepforge.org/) is a C++ library that evaluates PDFs by interpolating the
-discretised PDF 'grids' that PDF collaborations produce. It also gives its users access to proton
-and nuclear PDF sets from a variety of PDF collaborations, including NNPDF, MMHT and CTEQ. A list
-of all currently available PDF sets can be found on their
-[website](https://lhapdf.hepforge.org/pdfsets.html). Particle physics programmes that typically make
-use of PDFs, such as Monte Carlo event generators, will usually be interfaced with LHAPDF, to allow
-a user to easily specify the PDF set that they wish to use in their calculations. You can read more
-about LHAPDF by reading the [paper](https://arxiv.org/abs/1412.7420) that marked their latest
-release.
-
-## PDF evolution
-
-[APFEL](https://apfel.hepforge.org/) ('A PDF Evolution Library') is the PDF evolution code currently
-used by the NNPDF Collaboration. In addition to its PDF evolution capabilities, it also produces
-predictions of deep-inelastic scattering structure functions. In recent years it has been developed
-alongside NNPDF, and so it therefore contains the features and settings required in an NNPDF fit.
-That is, it includes quark masses in the MSbar scheme, the various FONLL heavy quark schemes, scale
-variations up to NLO, etc. Note that at the time of writing, a more streamlined code is being
-written to replace APFEL, which is currently dubbed EKO ('Evolution Kernel Operator'). To find more
-general information about PDF evolution and the DGLAP equations, you can go to the [Theory
-section](dglap.md).
-
-### Other codes
-
-[Hoppet](https://hoppet.hepforge.org/) ('Higher Order Perturbative Parton Evolution Toolkit') is an
-alternative PDF evolution code which is capable of evolving unpolarised PDFs to NNLO and linearly
-polarised PDFs to NLO. The unpolarised evolution includes heavy-quark thresholds in the MSbar
-scheme.
-
-## Grid generation
-
-Grids play a crucial role in NNPDF fits. This is because they enable otherwise very time consuming
-computations to be computed on the fly during an NNPDF fit. The guiding principle behind producing
-grids is that the maximum possible amount of information should be computed before a PDF fit, so
-that the smallest possible number of operations has to be carried out during a fit. There are two
-particularly important types of grid: APPLgrids and FK ('Fast Kernel') tables. APPLgrids contain
-information on the partonic cross section (otherwise known as hard cross sections or coefficient
-functions) while FK tables combine APPLgrids with DGLAP evolution kernels from APFEL. This therefore
-means that FK tables can simply be combined with PDFs at the fitting scale to produce predictions
-for observables at the scale of the process.
-
-[APPLgrid](https://applgrid.hepforge.org/) is a C++ programme that allows the user to change certain
-settings within observable calculations a posteriori. Most importantly, the user can change the PDF
-set used, but they can also alter the renormalisation scale, factorisation scale and the strong
-coupling constant. Without APPLgrids, such changes would usually require a full rerun of the code,
-which is very time consuming. Moreover, these features are crucial for PDF fits, where hard cross
-sections must be convolved with different PDFs on the fly many times. APPLgrid works for hadron
-collider processes up to NLO in QCD, although work is ongoing to also include NLO electroweak
-corrections in the APPLgrid format. In addition to the standard version of APPLgrid, a modified
-version of APPLgrid exists which includes photon channels. This is known as APPLgridphoton. To
-learn how to generate APPLgrids, please see the tutorial [here](../tutorials/APPLgrids.md).
-
-APFELcomb generates FK tables for NNPDF fits. You can read about the mechanism behind APFELcomb
-[here](https://arxiv.org/abs/1605.02070) and find more information about the theory behind FK tables
-in the [Theory section](../Theory/FastInterface.rst).
-
-#### Other codes
-
-[fastNLO](https://fastnlo.hepforge.org/) is an alternative code to APPLgrid, which is currently not
-used by NNPDF, since the grids produced by fastNLO are not interfaced with the NNPDF code.
-
-## Partonic cross section generation
-
-Many programmes exist to evaluate partonic cross sections. Some are general purpose, such as
-MadGraph5\_aMC@NLO and MCFM, in that they compute predictions for a variety of physical processes
-(e.g. drell-yan production, single top production, ...) up to a given order. Others are more
-specific, such as top++, which makes predictions for top quark pair production only. Some of these
-programmes will be briefly outlined here. Note that to produce predictions at NNLO in QCD, which is
-the highest order used in NNPDF fits, one usually produces APPLgrids at NLO in QCD, and then
-supplements these with NNLO QCD corrections which are computed with a code with NNLO capabilities.
-These C-factors are often provided to the collaboration by external parties, rather than the code
-being run in-house.
-
-[MadGraph5\_aMC@NLO](https://launchpad.net/mg5amcnlo) is the programme that will be used for most of
-the future NNPDF calculations of partonic cross sections. This is in large part due to its ability
-to compute predictions at NLO in QCD with NLO EW corrections. To generate APPLgrids from
-MadGraph5\_aMC@NLO, one can use [aMCfast](https://amcfast.hepforge.org/), which interfaces between
-the two formats.
-
-### Other codes
-
-[MCFM](https://mcfm.fnal.gov/) ('Monte Carlo for FeMtobarn processes') is an alternative programme
-to MadGraph5\_aMC@NLO, which instead uses mcfm-bridge as an interface to generate APPLgrids.
-
-[FEWZ](https://arxiv.org/abs/1011.3540) ('Fully Exclusive W and Z Production') is a programme for
-calculating (differential) cross sections for the Drell-Yan production of lepton pairs up to NNLO
-in QCD.
-
-[NLOjet++](http://www.desy.de/~znagy/Site/NLOJet++.html) is a programme that can compute cross
-sections for a variety of processes up to NLO. The processes include electron-positron annihilation,
-deep-inelastic scattering (DIS), photoproduction in electron-proton collisions, and a variety of
-processes in hadron-hadron collisions.
-
-[Top++](http://www.precision.hep.phy.cam.ac.uk/top-plus-plus/) is a programme for computing top
-quark pair production inclusive cross sections at NNLO in QCD with soft gluon resummation included
-up to next-to-next-to-leading log (NNLL).
diff --git a/doc/sphinx/source/index.rst b/doc/sphinx/source/index.rst
index 09901a7a02..19deef147a 100644
--- a/doc/sphinx/source/index.rst
+++ b/doc/sphinx/source/index.rst
@@ -5,20 +5,31 @@
NNPDF documentation
===================
+* The `NNPDF collaboration <https://nnpdf.mi.infn.it/>`_ is an organisation performing
+ research in high-energy physics to determine the structure of the proton by producing
+ **parton distribution functions (PDFs)**.
+* This documentation is for the `NNPDF code <https://github.com/NNPDF/nnpdf>`_, which
+ allows the user to perform PDF fits and analyse the output.
+
+* If you are a new user head along to :ref:`getstarted` and check out the :ref:`tutorials`.
+
+Contents
+========
.. toctree::
:maxdepth: 2
get-started/index
- theory/index
- data/index
- vp/index
n3fit/index
- code/index
- serverconf/index
+ vp/index
+ ./buildmaster.md
+ data/index
+ theory/index
ci/index
+ serverconf/index
+ external-code/index
tutorials/index
- QA/index
+ ./sphinx-documentation.md
Indices and tables
==================
diff --git a/doc/sphinx/source/n3fit/index.rst b/doc/sphinx/source/n3fit/index.rst
index 9f1d542e35..e84cedb586 100644
--- a/doc/sphinx/source/n3fit/index.rst
+++ b/doc/sphinx/source/n3fit/index.rst
@@ -1,10 +1,24 @@
-n3fit
-=====
+Fitting code: n3fit
+===================
+- `n3fit` is the next generation fitting code for NNPDF developed by the
+ N3PDF team - see `(hep-ph/1907.05075) <https://arxiv.org/abs/1907.05075>`_
+- `n3fit` is responsible for fitting PDFs from NNPDF4.0 onwards.
+- The code is implemented in Python using `TensorFlow <https://www.tensorflow.org/>`_
+ and `Keras <https://keras.io/>`_.
+- The sections below are an overview of the `n3fit` design.
+
+n3fit design
+------------
.. toctree::
:maxdepth: 1
- introduction
methodology
hyperopt
runcard_detailed
+
+.. important::
+ If you just want to know how to run a fit using `n3fit`, head to :ref:`n3fit-usage`.
+
+
+
diff --git a/doc/sphinx/source/n3fit/introduction.md b/doc/sphinx/source/n3fit/introduction.md
deleted file mode 100644
index b130e378fc..0000000000
--- a/doc/sphinx/source/n3fit/introduction.md
+++ /dev/null
@@ -1,17 +0,0 @@
-Introduction to `n3fit`
-=============================
-
-`n3fit` is the next generation fitting code for NNPDF developed by the N3PDF team.
-
-We present a new regression model for the determination of parton distribution functions (PDF) using techniques inspired from deep learning projects. In the context of the NNPDF methodology, we implement a new efficient computing framework based on graph generated models for PDF parametrization and gradient descent optimization. The best model configuration is derived from a robust cross-validation mechanism through a hyperparametrization tune procedure. We show that results provided by this new framework outperforms the current state-of-the-art PDF fitting methodology in terms of best model selection and computational resources usage.
-
-References: Towards a new generation of parton densities with deep learning models [hep-ph/1907.05075](https://arxiv.org/abs/1907.05075)
-
-The documentation for the code has been autogenerated using sphinx and can be found [here](../modules/n3fit-code/n3fit).
-
-There are currently two example fits using the new code `n3fit` of the NNPDF 3.1 dataset uploaded to the validphys server that can be obtained with `vp-get`, these correspond to hyperoptimized DIS and Global runs with a chi2 similar to the corresponding `nnfit` server:
-
-- Global: `PN3_Global_ada_150519`
-- DIS: `PN3_DIS_130519`
-
-In the next sections we provide a methodological overview about the `n3fit` design.
diff --git a/doc/sphinx/source/serverconf/index.md b/doc/sphinx/source/serverconf/index.md
index 85c7f2f4d0..656af250bf 100644
--- a/doc/sphinx/source/serverconf/index.md
+++ b/doc/sphinx/source/serverconf/index.md
@@ -1,8 +1,8 @@
```eval_rst
.. _server:
```
-Server configuration
-====================
+Servers
+=======
The NNPDF collaboration employs a storage server that host various data files,
meant for both public and internal consumption. It hosts the following URLs:
@@ -19,9 +19,9 @@ SSH is used to interact with the server, as described in [Access](#access)
below.
-The NNPDF server is a virtual machine (VM) maintained by
-the Centro Calcolo at the physics department of the
-University of Milan. The machine has 2 CPUs, 4GB of RAM,
+The NNPDF server is a virtual machine (VM) maintained by
+the Centro Calcolo at the physics department of the
+University of Milan. The machine has 2 CPUs, 4GB of RAM,
1 TB of disk and it is running CentOS7.
The full disk is backed up every week by the Centro Calcolo.
@@ -41,7 +41,7 @@ The access to the server is provided by
`ssh`/[`vp-upload`](upload) with the following restrictions:
- `ssh` access to `root` is forbidden.
-- There is a shared `nnpdf` user with low privileges. In order to login
+- There is a shared `nnpdf` user with low privileges. In order to login
the user must send his public ssh key (usually in `~/.ssh/id_rsa.pub`) to SC.
The `nnpdf` is not allowed to login with password.
@@ -85,7 +85,7 @@ The relevant passwords can be found
```eval_rst
.. _web-scripts:
```
-Web Scripts
+Web scripts
-----------
Validphys2 interacts with the NNPDF server by [downloading resources](download)
@@ -198,7 +198,7 @@ server {
listen 80;
listen [::]:80;
server_name vp.nnpdf.science;
-
+
root /home/nnpdf/WEB/validphys-reports;
location / {
try_files $uri $uri/ =404;
@@ -258,7 +258,7 @@ packages.nnpdf.science. 1799 IN A 159.149.47.24
SSL encription is provided by [Let's Encrypt](https://letsencrypt.org).
The certificates are created using the `certbot` program with the `nginx` module.
-In order to create new ssl certificates, first prepare the `nginx` server block
+In order to create new ssl certificates, first prepare the `nginx` server block
configuration file and then run the interactive command:
```
sudo certbot --nginx -d
@@ -267,8 +267,3 @@ This will ask you several questions, including if you would like to automaticall
update the `nginx` server block file. We fully recommend this approach.
The certificate is automatically renewed by a [cron job](#cron-jobs).
-
-
-
-
-
diff --git a/doc/sphinx/source/get-started/sphinx-documentation.md b/doc/sphinx/source/sphinx-documentation.md
similarity index 97%
rename from doc/sphinx/source/get-started/sphinx-documentation.md
rename to doc/sphinx/source/sphinx-documentation.md
index 3df7a3337f..94df2bbaa1 100644
--- a/doc/sphinx/source/get-started/sphinx-documentation.md
+++ b/doc/sphinx/source/sphinx-documentation.md
@@ -1,8 +1,7 @@
-# NNPDF code and standards documentation
-
-## Sphinx Documentation
-
-### Generating the Documentation
+```eval_rst
+.. _add_docs:
+```
+# Adding to the Documentation
The NNPDF documentation is produced by the
[sphinx](http://www.sphinx-doc.org/en/master/) resource. To generate the sphinx
@@ -11,8 +10,6 @@ html`, ensuring one is inside the appropriate `nnpdf` conda environment. This pr
documentation in the `build/index/` directory. The `index.html` can be viewed with any appropriate
browser.
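+
+A minimal sketch of the build cycle, assuming the sphinx sources live in `doc/sphinx` as in this
+repository:
+```
+conda activate nnpdf-dev   # any conda environment with the documentation dependencies installed
+cd doc/sphinx
+make html                  # then open the generated index.html in a browser
+```
+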
-### Adding to the Documentation
-
New documentation can be added in markdown, naming the source files with the `.md` suffix, or
restructured text, with the `.rst` suffix formats.
```eval_rst
diff --git a/doc/sphinx/source/theory/collinear.md b/doc/sphinx/source/theory/collinear.md
deleted file mode 100644
index dbcf8bf8a0..0000000000
--- a/doc/sphinx/source/theory/collinear.md
+++ /dev/null
@@ -1 +0,0 @@
-## Collinear Factorisation
diff --git a/doc/sphinx/source/theory/dglap.md b/doc/sphinx/source/theory/dglap.md
deleted file mode 100644
index 5759e7cd86..0000000000
--- a/doc/sphinx/source/theory/dglap.md
+++ /dev/null
@@ -1 +0,0 @@
-## DGLAP evolution
diff --git a/doc/sphinx/source/theory/index.rst b/doc/sphinx/source/theory/index.rst
index 38bf6eae91..fa8da85280 100644
--- a/doc/sphinx/source/theory/index.rst
+++ b/doc/sphinx/source/theory/index.rst
@@ -1,12 +1,17 @@
Theory
======
+This section contains information specific to the way theoretical predictions are stored in the
+NNPDF system. First, theoretical background is given on the FK table format, which is the file
+format used to store theoretical predictions. Next, the parameters used to define a set of FK
+tables, otherwise known as a theory, are given. A table showing the correspondence between the
+available theories and these parameters is then given in the next section. Finally, a way to access
+this information using the NNPDF code itself is detailed.
+
.. toctree::
:maxdepth: 1
- ./collinear
- ./dglap
- ./theoryparamsinfo
+ ./FastInterface
./theoryparamsdefinitions
./theoryindex
- ./FastInterface
+ ./theoryparamsinfo
diff --git a/doc/sphinx/source/tutorials/closuretest.md b/doc/sphinx/source/tutorials/closuretest.md
index f69b21e83d..3cd187abba 100644
--- a/doc/sphinx/source/tutorials/closuretest.md
+++ b/doc/sphinx/source/tutorials/closuretest.md
@@ -1,3 +1,7 @@
+```eval_rst
+.. _tut_closure:
+```
+
# How to run a closure test
Closure tests are a way to validate methodology by fitting on pseudodata
diff --git a/doc/sphinx/source/tutorials/datthcomp.md b/doc/sphinx/source/tutorials/datthcomp.md
index 563a7b4b78..1d110610ba 100644
--- a/doc/sphinx/source/tutorials/datthcomp.md
+++ b/doc/sphinx/source/tutorials/datthcomp.md
@@ -1,4 +1,5 @@
```eval_rst
+.. _tut_datthcomp:
.. _datthcomp:
```
# How to do a data theory comparison
diff --git a/doc/sphinx/source/tutorials/index.rst b/doc/sphinx/source/tutorials/index.rst
index 0abcb3c476..821d2fe57b 100644
--- a/doc/sphinx/source/tutorials/index.rst
+++ b/doc/sphinx/source/tutorials/index.rst
@@ -1,29 +1,57 @@
+.. _tutorials:
+
Tutorials
=========
+This section contains tutorials for common things you might want to do using the code.
+If you think of something which is missing please open an issue or, better still,
+a pull request (see :ref:`add_docs` and :ref:`reviews`).
+
+Running fits
+------------
.. toctree::
:maxdepth: 1
./run-fit.md
./run-legacy-fit.rst
./run-iterated-fit.rst
+
+Analysing results
+-----------------
+.. toctree::
+ :maxdepth: 1
+
./compare-fits.md
- ./list-resources.md
./report.md
+ ./plot_pdfs.rst
+ ./pdfbases.rst
+ ./datthcomp.md
+
+Adding new data
+---------------
+.. toctree::
+ :maxdepth: 1
+
./buildmaster.md
./APPLgrids.md
./APPLgrids_comp.md
./apfelcomb.md
- ./datthcomp.md
+
+Closure tests
+-------------
+.. toctree::
+ :maxdepth: 1
+
./closuretest.md
./closureestimators.rst
- ./addspecialgrouping.rst
- ./conda.md
- ./pseudodata.md
- ./plot_pdfs.rst
- ./pdfbases.rst
- ./newplottingfn.rst
-=======
+
+Miscellaneous
+-------------
+.. toctree::
+ :maxdepth: 1
+
+ ./list-resources.md
+ ./pseudodata.md
+ ./newplottingfn.rst
+ ./addspecialgrouping.rst
+ ./conda.md
diff --git a/doc/sphinx/source/tutorials/report.md b/doc/sphinx/source/tutorials/report.md
index f3023e3497..4b7f432bd3 100644
--- a/doc/sphinx/source/tutorials/report.md
+++ b/doc/sphinx/source/tutorials/report.md
@@ -1,3 +1,7 @@
+```eval_rst
+.. _tut_report:
+```
+
# How to generate a report
Suppose that we want to generate a custom report that includes plots and
diff --git a/doc/sphinx/source/tutorials/run-legacy-fit.rst b/doc/sphinx/source/tutorials/run-legacy-fit.rst
index 720d06ad8f..d91d193f4e 100644
--- a/doc/sphinx/source/tutorials/run-legacy-fit.rst
+++ b/doc/sphinx/source/tutorials/run-legacy-fit.rst
@@ -1,7 +1,7 @@
.. _nnfit-usage:
-How to run a legacy PDF fit
----------------------------
+How to run a legacy PDF fit (NNPDF3.1 style)
+============================================
This tutorial explains how to run a PDF fit using the legacy code,
:code:`nnfit`. To find out how to run a PDF fit with the code that is currently
diff --git a/doc/sphinx/source/vp/design.md b/doc/sphinx/source/vp/design.md
index e1aed7e2ed..4e4aa3b9b4 100644
--- a/doc/sphinx/source/vp/design.md
+++ b/doc/sphinx/source/vp/design.md
@@ -1,3 +1,7 @@
+```eval_rst
+.. _design:
+```
+
The design of `validphys 2`
==========================
@@ -9,7 +13,7 @@ underpinning the design of `validphys 2`, and of `reportengine`, the code it is
based on. It should be useful for anyone aiming to understand the general
philosophy and design goals of the project or take in in qualitatively different
directions. More concrete information on how to use the code can be found in
-the **USAGE** section.
+the sections under Using validphys.
Some specific issues of scientific code
---------------------------------------
diff --git a/doc/sphinx/source/vp/download.md b/doc/sphinx/source/vp/download.md
index b5a1d60a93..121ed344a5 100644
--- a/doc/sphinx/source/vp/download.md
+++ b/doc/sphinx/source/vp/download.md
@@ -8,9 +8,9 @@ Downloading resources
`validphys` is designed so that, by default, resources stored in known remote
locations are downloaded automatically and seamlessly used where necessary.
Available resources include PDF sets, completed fits, theories, and results of
-past `validphys` runs [uploaded to the server](upload). The `vp-get` tool,
-[described below](#the-vp-get-tool), can be used to download the same items
-manually.
+past `validphys` runs that have been [uploaded to the server](upload).
+The `vp-get` tool, [described below](#the-vp-get-tool),
+can be used to download the same items manually.
Automatic operation
-------------------
diff --git a/doc/sphinx/source/vp/examples.rst b/doc/sphinx/source/vp/examples.rst
index 0bd5e90818..62484b5385 100644
--- a/doc/sphinx/source/vp/examples.rst
+++ b/doc/sphinx/source/vp/examples.rst
@@ -1,3 +1,5 @@
+.. _vpexamples:
+
========
Examples
========
@@ -14,6 +16,29 @@ within ``validphys``, which by convention are lower case.
Here we detail the examples that already exist and list the resources which it is recommended that
you use when writing a new example runcard.
+Existing examples
+=================
+
+============================= =========================== =========================================================
+Runcard/folder name Tutorial What it does
+============================= =========================== =========================================================
+API_example.ipynb :ref:`vpapi` Jupyter notebook example with API
+closure_templates/ :ref:`tut_closure` Running closure tests
+cuts_options.yaml N/A Shows results for different cuts policites
+dataspecs.yaml N/A Shows how to use ``dataspecs``
+data_theory_comparison.yaml :ref:`tut_datthcomp` Data theory comparison
+export_data.yaml N/A Makes tables of experimental data and covariance matrices
+generate_a_report.yaml :ref:`tut_report` Shows how to generate a report
+kiplot.yaml N/A Plot kinematic coverage of data
+looping_example.yaml N/A Shows how to do actions in a loop over resources
+mc_gen_example.yaml N/A Analysis of pseudodata generation
+new_data_specification.yaml N/A Shows how to specify data in runcards
+pdfdistanceplots.yaml How to plot PDFs Distance PDF plots
+simple_runcard.yaml N/A Simple runcard example
+taking_data_from_fit.yaml N/A Shows how to take ``theoryids`` and ``pdfs`` from a fit
+theory_covariance/ :ref:`vptheorycov-index` Runcards for the ``theorycovariance`` module
+============================= =========================== =========================================================
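+
+Several of these runcards illustrate general ``validphys`` mechanisms rather
+than a particular analysis. For instance, the ``dataspecs`` mechanism lets a
+single runcard repeat the same actions over several combinations of resources.
+A rough sketch of the idea is shown below; the keys and values are indicative
+only, so refer to ``dataspecs.yaml`` itself for a working version.
+
+.. code:: yaml
+
+    dataspecs:
+        - theoryid: 52
+          pdf: NNPDF31_nlo_as_0118
+          speclabel: "NLO"
+        - theoryid: 53
+          pdf: NNPDF31_nnlo_as_0118
+          speclabel: "NNLO"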
+
Recommended resources
=====================
diff --git a/doc/sphinx/source/vp/getting-started.rst b/doc/sphinx/source/vp/getting-started.rst
new file mode 100644
index 0000000000..ff4d465882
--- /dev/null
+++ b/doc/sphinx/source/vp/getting-started.rst
@@ -0,0 +1,60 @@
+Getting started with validphys
+==============================
+
+To use ``validphys`` you must provide an input runcard which includes
+
+* The resources you need (PDFs, fits, etc.)
+* The actions (functions) you would like to be carried out
+* Additional flags and parameters for fine-tuning
+* Metadata describing the author, title and keywords
+
+To get an idea of the layout, :ref:`vpexamples` details the example runcards that can be found in
+`this folder `_. The :ref:`tutorials`
+section also takes you through how to make runcards for various tasks.
+
+Once you have created a runcard (e.g. ``runcard.yaml``), simply run
+
+.. code::
+
+ validphys runcard.yaml
+
+to set the ball rolling.
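+
+For orientation, a minimal runcard could be laid out roughly as follows. This
+is only an illustrative sketch: the resource names, the ``Q`` value and the
+``plot_pdfs`` action are indicative, so check the runcards in :ref:`vpexamples`
+for versions that are known to work.
+
+.. code:: yaml
+
+    meta:
+        title: A minimal validphys example
+        author: Your Name
+        keywords: [example]
+
+    # Resources to analyse
+    pdfs:
+        - NNPDF31_nlo_as_0118
+
+    # Scale (in GeV) at which the PDFs are sampled
+    Q: 10
+
+    # Actions (functions) to be carried out
+    actions_:
+        - plot_pdfs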
+
+Another useful command to be aware of is ``vp-comparefits -i``, which launches an interactive
+session to compare two fits. See the tutorial :ref:`compare-fits` for more information.
+
+For more tailored analysis, the API provides a high level interface to the code, allowing you to
+extract objects and play around with them. See :ref:`vpapi`.
+
+Finally, the ``validphys --help`` command can give you information on modules and specific actions, e.g.
+
+.. code::
+
+ $ validphys --help fits_chi2_table
+
+ fits_chi2_table
+
+ Defined in: validphys.results
+
+ Generates: table
+
+ fits_chi2_table(fits_total_chi2_data, fits_datasets_chi2_table,
+ fits_groups_chi2_table, show_total: bool = False)
+
+    Show the chi² of each and number of points of each dataset and
+    experiment of each fit, where experiment is a group of datasets
+    according to the `experiment` key in the PLOTTING info file, computed
+    with the theory corresponding to the fit. Datasets that are not
+    included in some fit appear as `NaN`
+
+
+
+    The following additional arguments can be used to control the
+ behaviour. They are set by default to sensible values:
+
+ show_total(bool) = False
+ per_point_data(bool) = True [Used by fits_groups_chi2_table]
+
diff --git a/doc/sphinx/source/vp/index.rst b/doc/sphinx/source/vp/index.rst
index 23876c17cc..d82dc2d008 100644
--- a/doc/sphinx/source/vp/index.rst
+++ b/doc/sphinx/source/vp/index.rst
@@ -1,19 +1,74 @@
.. _vp-index:
-vp-guide
-========
+Code for data: validphys
+========================
+Introduction to ``validphys 2``
+-------------------------------
+
+* ``validphys 2`` is a Python code that implements the data model of NNPDF
+ resources.
+
+* It provides an executable, called ``validphys`` which is used to
+ analyze NNPDF specific data, which takes runcards written in
+ `YAML `_ as an input and can produce plots,
+ tables or entire reports as an output.
+
+* The code also provides a Python library
+ (also called ``validphys``) which is used to implement executables providing
+  interfaces to more specific analyses, such as ``vp-comparefits``, and to
+  serve as a basis for other NNPDF codes such as ``n3fit``.
+
+* ``validphys 2`` is implemented on top of the ``reportengine`` framework.
+ ``reportengine`` provides the logic to process the runcards by building task
+ execution graphs based on individual actions (which are Python functions). The
+  runcards can execute complex analyses and parameter scans with the appropriate
+ use of namespaces.
+
+* Some parts of ``validphys`` use the ``libnnpdf`` library in C++, through SWIG
+ wrappers.
+
+* The ideas behind the design of the code are explained in the
+ :ref:`Design ` section.
+
+Some things which ``validphys`` does
+-------------------------------------
+
+* Download resources (``vp-get``) - see :ref:`download`
+* Upload resources (``vp-upload``, ``wiki-upload`` and ``--upload`` flag) - see :ref:`upload`
+* Prepare fits for running with ``n3fit`` (``vp-setupfit``) - see :ref:`scripts`
+* Postprocess a fit (``postfit``) - see :ref:`scripts`
+* Rename a fit or PDF (``vp-fitrename`` and ``vp-pdfrename``) - see :ref:`scripts`
+* Sample a PDF (``vp-pdffromreplicas``) - see :ref:`scripts`
+* Generate a report with information about possible inefficiencies in fitting methodology (``vp-deltachi2``) - see :ref:`scripts`
+* Allow analysis via a high level interface - see :ref:`vpapi`
+* Analyse results - see :ref:`tutorials`
+
+Using validphys
+---------------
.. toctree::
- :maxdepth: 2
+ :maxdepth: 1
- ./introduction.md
- ./design.md
- ./api.md
- ./filters.md
+ ./getting-started.rst
./download.md
./upload.md
./nnprofile.md
./scripts.rst
- ./dataspecification.rst
- ./theorycov/index
- ./pydataobjs.rst
+ ./api.md
./examples.rst
+
+How validphys handles data
+--------------------------
+.. toctree::
+ :maxdepth: 1
+
+ ./pydataobjs.rst
+ ./filters.md
+ ./theorycov/index
+ ./dataspecification.rst
+
+Structure and design of validphys
+---------------------------------
+.. toctree::
+ :maxdepth: 1
+
+ ./design.md
diff --git a/doc/sphinx/source/vp/introduction.md b/doc/sphinx/source/vp/introduction.md
deleted file mode 100644
index 7208a02b96..0000000000
--- a/doc/sphinx/source/vp/introduction.md
+++ /dev/null
@@ -1,24 +0,0 @@
-Introduction to `validphys 2`
-=============================
-
-`validphys 2` is a Python code that implements the data model of NNPDF
-resources. It provides an executable, called `validphys` which is used to
-analyze NNPDF specific data, which takes runcards written in
-[YAML](https://en.wikipedia.org/wiki/YAML) as an input and can produce plots,
-tables or entire reports as an output. The code also provides a Python library
-(also called `validphys`) which is used to implement executables providing
-interfaces to more specific analyses such as the `vp-comparefits`, and to
-serve as basis to other NNPDF codes such as `n3fit`.
-
-`validphys 2` is implemented on top of the `reportengine` framework.
-`reportengine` provides the logic to process the runcards by building task
-execution graphs based on individual actions (which are Python functions). The
-runcards can execute complex analysis and parameter scans with the appropriate
-use of namespaces.
-
-Some parts of validphys use the `libnnpdf` library in C++, through SWIG
-wrappers.
-
-The ideas behind the design of the code are explained in the
-[Design](./design.md) section.
-
diff --git a/doc/sphinx/source/vp/scripts.rst b/doc/sphinx/source/vp/scripts.rst
index 22d10c122d..5225e3610d 100644
--- a/doc/sphinx/source/vp/scripts.rst
+++ b/doc/sphinx/source/vp/scripts.rst
@@ -1,4 +1,4 @@
-.. _upload:
+.. _scripts:
=================
Validphys scripts
diff --git a/doc/sphinx/source/vp/theorycov/index.rst b/doc/sphinx/source/vp/theorycov/index.rst
index 8868b47e35..3db4b1439a 100644
--- a/doc/sphinx/source/vp/theorycov/index.rst
+++ b/doc/sphinx/source/vp/theorycov/index.rst
@@ -1,11 +1,67 @@
.. _vptheorycov-index:
-The ``theorycovariance`` module
-=============================
+
+The ``theorycovariance`` module
+===============================
+
+:Author: Rosalyn Pearson (r.l.pearson@ed.ac.uk)
+
+The ``theorycovariance`` module deals with constructing, testing and
+outputting theory covariance matrices (covmats). Primarily, it is concerned
+with scale variation covariance matrices used to model missing higher order
+uncertainties. See the `short
+`_ and `long
+`_ NNPDF papers for in-depth information.
+
+Summary
+-------
+
+- The module of ``validphys2`` which deals with computation and
+ interpretation of theoretical covariance matrices can be found in
+ ``nnpdf/validphys2/src/validphys/theorycovariance/``, which consists
+ of three files:
+
+ #. ``construction.py``: deals with construction of covariance
+ matrices and associated quantities
+
+ #. ``output.py``: plots and tables
+
+ #. ``tests.py``: actions for validating the covariance matrices against
+ the NNLO-NLO shift
+
+- Theoretical covariance matrices are built according to the various prescriptions
+ in :ref:`prescrips`.
+
+- The prescription must be one of 3 point, 5 point, :math:`\bar{5}` point, 7 point or 9 point.
+
+- As input you need theories for the relevant scale combinations
+  corresponding to the prescription. This information is taken from the
+  ``scalevariations`` module, which consists of two files (an illustrative
+  sketch of both is given after this list):
+
+ #. ``pointprescriptions.yaml``: correspondence between each point prescription
+ and the scale combinations that are used to construct it
+
+ #. ``scalevariationtheoryids.yaml``: correspondence between each scale combination
+ and a theoryid for a given central theoryid
+
+- Renormalisation scales should be correlated within each
+ process type. These process types are categorised as {DIS CC, DIS NC,
+ Drell-Yan, Jets, Top}.
+
+- **Outputs** include tables and heat plots of theoretical and combined
+ (theoretical + experimental) covariance matrices, comparisons of
+ theoretical and experimental errors, and plots and tables of
+ :math:`\chi^2` values.
+
+- Various **validation** outputs also exist, including tables of eigenvalues,
+ plots of eigenvectors and shift vs theory comparisons.
+
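+As an illustration of the role played by these two files, hypothetical entries
+could look roughly as follows. Both the keys and the numbers below are
+indicative only and are not the actual file contents:
+
+.. code:: yaml
+
+    # pointprescriptions.yaml: prescription -> (k_F, k_R) scale combinations
+    "3 point":
+        - [1.0, 1.0]
+        - [2.0, 2.0]
+        - [0.5, 0.5]
+
+    # scalevariationtheoryids.yaml: for a central theoryid, the theoryid
+    # implementing each scale combination
+    scale_variations_for:
+        - theoryid: 163
+          variations:
+              "(1, 1)": 163
+              "(2, 2)": 164
+              "(0.5, 0.5)": 165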
+
+More information
+-----------------
.. toctree::
:maxdepth: 1
- ./summary.rst
./runcard_layout.rst
./outputs.rst
./point_prescrip.rst
diff --git a/doc/sphinx/source/vp/theorycov/point_prescrip.rst b/doc/sphinx/source/vp/theorycov/point_prescrip.rst
index bf994251e8..0a1712811a 100644
--- a/doc/sphinx/source/vp/theorycov/point_prescrip.rst
+++ b/doc/sphinx/source/vp/theorycov/point_prescrip.rst
@@ -1,3 +1,5 @@
+.. _prescrips:
+
Point prescriptions for theory covariance matrices
==================================================
diff --git a/doc/sphinx/source/vp/theorycov/summary.rst b/doc/sphinx/source/vp/theorycov/summary.rst
deleted file mode 100644
index 2df834ef25..0000000000
--- a/doc/sphinx/source/vp/theorycov/summary.rst
+++ /dev/null
@@ -1,65 +0,0 @@
-========
-Summary
-========
-
-:Author: Rosalyn Pearson (r.l.pearson@ed.ac.uk)
-
-.. raw:: latex
-
- \maketitle
-
-.. raw:: latex
-
- \tableofcontents
-
-See the `short
-`_ and `long
-`_ papers for reference.
-
-- The module of ``validphys2`` which deals with computation and
- interpretation of theoretical covariance matrices can be found in
- ``nnpdf/validphys2/src/validphys/theorycovariance/``, which consists
- of three files:
-
- #. ``construction.py``: deals with construction of covariance
- matrices and associated quantities
-
- #. ``output.py``: plots and tables
-
- #. ``tests.py``: actions for validating the covariance matrices against
- the NNLO-NLO shift
-
-- Theoretical covariance matrices are built according to the various prescriptions.
-
-- As input you need theories for the relevant scale combinations which
- correspond to the prescription. This information is taken from the
- ``scalevariations`` module, which consists of two files:
-
- #. ``pointprescriptions.yaml``: correspondence between each point prescription
- and the scale combinations that are used to construct it
-
- #. ``scalevariationtheoryids.yaml``: correspondence between each scale combination
- and a theoryid for a given central theoryid
-
-- The prescription must be one of 3 point, 5 point, 7 point or 9 point.
-
-- In the case of 5 theories, you must further specify whether the 5 or
- :math:`\bar{5}` prescription is required. You can do this by
- allocating the flag ``fivetheories`` to ``nobar`` or ``bar`` in the
- runcard.
-
-- In the case of 7 theories, there are two options. The default is the
- modified prescription that Gavin Salam proposed. To use the original
- prescription instead, specify ``seventheories: original`` in the runcard.
-
-- Currently the renormalisation scales are correlated within each
- process type. These process types are categorised as {DIS CC, DIS NC,
- Drell-Yan, Jets, Top}.
-
-- Outputs include tables and heat plots of theoretical and combined
- (theoretical + experimental) covariance matrices, comparisons of
- theoretical and experimental errors, and plots and tables of
- :math:`\chi^2` values.
-
-- Various validation outputs also exist, including tables of eigenvalues,
- plots of eigenvectors and shift vs theory comparisons.
diff --git a/validphys2/examples/W Plot.ipynb b/validphys2/examples/API_example.ipynb
similarity index 100%
rename from validphys2/examples/W Plot.ipynb
rename to validphys2/examples/API_example.ipynb
diff --git a/validphys2/examples/mc_gen_checks_example.yaml b/validphys2/examples/mc_gen_example.yaml
similarity index 90%
rename from validphys2/examples/mc_gen_checks_example.yaml
rename to validphys2/examples/mc_gen_example.yaml
index ef255ff136..ff38aa993f 100644
--- a/validphys2/examples/mc_gen_checks_example.yaml
+++ b/validphys2/examples/mc_gen_example.yaml
@@ -10,7 +10,7 @@ fit: NNPDF31_nlo_as_0118
theoryid: 52
-template: mc_gen_checks_report.md
+template: mc_gen_report.md
actions_:
- report(main=true)
diff --git a/validphys2/examples/mc_gen_checks_report.md b/validphys2/examples/mc_gen_report.md
similarity index 100%
rename from validphys2/examples/mc_gen_checks_report.md
rename to validphys2/examples/mc_gen_report.md