Some sphinx fixes and removed private and special members from sphinx… #1296

Merged: 3 commits, Aug 16, 2017
Changes from 2 commits
3 changes: 1 addition & 2 deletions doc/sphinx/conf.py.in
@@ -300,8 +300,7 @@ texinfo_documents = [
# http://stackoverflow.com/questions/12206334/sphinx-autosummary-toctree-contains-reference-to-nonexisting-document-warnings
numpydoc_show_class_members = False
autodoc_mock_imports = ['featuredefs', ]
-autodoc_default_flags = ['members', 'private-members',
-                         'special-members',
+autodoc_default_flags = ['members',
'show-inheritance', 'undoc-members']
# Replacements
rst_epilog = """
13 changes: 12 additions & 1 deletion doc/sphinx/dg.rst
@@ -45,6 +45,7 @@ Required Development Tools
the distributed versioning control system Git [1]_.

- The documentation is currently being converted from LaTeX to Sphinx. To build the old user and developer guides, you will need LaTeX. For building the sphinx documentation, you will need the Python packages listed in ``requirements.txt`` in the top-level source directory. To install them, issue::

pip install --user -r requirements.txt

Note that some distributions now use ``pip`` for Python 3 and ``pip2`` for Python 2.
@@ -195,12 +196,15 @@ developers.
Source code structure
---------------------
The source tree has the following structure:

* src: The actual source code

* core: The C++ source code of the simulation core
* python/espressomd: Source of the espressomd Python module and its submodules
* script_interface: C++ source code of the script_interface component, which links Python classes to functionality in the simulation core

* doc: Documentation

* sphinx: The Sphinx-based documentation, consisting of the user and developer guides.
* tutorials/python: Source and pdf files for the introductory tutorials
* doxygen: Build directory for the C++ in-code documentation
@@ -211,6 +215,7 @@ The source tree has the following structure:

* libs: External dependencies (at this point h5xx)
* maintainer: Files used by the maintainers

* configs: Collection of myconfig.hpp files which activate different sets of features for testing.
* docker: Definitions of the docker images for various distributions used for continuous integration testing
* travis: Support files for the continuous integration testing run on the Travis-CI service.
@@ -225,7 +230,9 @@ Espresso uses two communication models, namely master-slave and synchronous.
runs the Python script, whereas all other nodes are idle until they receive a command from the head node. Such commands include particle creation,
changing of particle properties and changing global simulation parameters.
When a Python command such as::

system.part.add(pos=(1,2,3))

is issued, the head node determines which node is responsible for the given position and then sends that node the command to place the particle.

* When an integration is started in Python on the head node, a command to start the integration is sent to all nodes, in the master-slave framework described above.
@@ -236,7 +243,7 @@ Espresso uses two communication models, namely master-slave and synchronous.
involved in the communication. If this is not done, a deadlock will result.
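
In practice this means a simulation script is written as ordinary serial Python: when started under MPI, only the head node executes it and transparently drives the other ranks. A minimal sketch (the box length, time step and skin values are illustrative placeholders, not taken from this guide)::

    # Run with e.g.: mpirun -n 4 python script.py
    # Only the head node executes this script; the other ranks wait for
    # commands (particle creation, integration, ...) issued by it.
    import espressomd

    system = espressomd.System()
    system.box_l = [10, 10, 10]
    system.time_step = 0.01
    system.cell_system.skin = 0.4

    # The head node forwards this to the rank owning the position:
    system.part.add(pos=(1, 2, 3))

    # Integration runs synchronously on all ranks:
    system.integrator.run(100)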

Adding calls to the master-slave framework
------------------------------------------
+------------------------------------------

Using an instance of MpiCallback
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Expand All @@ -252,6 +259,7 @@ Using an instance of MpiCallback
    void register_my_callback() {
      Communication::mpiCallbacks().add(my_callback);
    }

You can, e.g., call your registration from initialize.cpp:on_program_start().
Instead of a static function, anything from which a ``std::function<void(int, int)>``
can be constructed can be used. For example::
@@ -260,13 +268,15 @@ Using an instance of MpiCallback
    void register_my_callback() {
      Communication::mpiCallbacks().add([](int, int){ /* Do something */ });
    }

This adds a lambda function as the callback.
* Then, you can use your callback from the head node::

    #include "MpiCallbacks.hpp"

    void call_my_callback() {
      Communication::mpiCallbacks().call(my_callback, param1, param2);
    }

This only works outside the integration loop. After the callback has been called, synchronous MPI communication can be done.

Legacy callbacks
@@ -278,6 +288,7 @@ Adding New Bonded Interactions
------------------------------

To add a new bonded interaction, the following steps have to be taken:

* Simulation core:

* Define a structure holding the parameters (prefactors, etc.) of the interaction
206 changes: 5 additions & 201 deletions doc/sphinx/io.rst
@@ -150,13 +150,15 @@ re-registered when the same checkpoint id is used later.
Following the example above, the next example loads the last checkpoint,
restores the state of all checkpointed objects and registers a signal.

-.. code:: python
+.. code::

import espressomd from espressomd import checkpointing import signal

-checkpoint = checkpointing.Checkpointing(checkpoint\_id=“mycheckpoint”)
+checkpoint = checkpointing.Checkpointing(checkpoint_id=“mycheckpoint”)
Review comment (Member):
There are smart quotes used here (“”) instead of straight ones (""), which makes this invalid Python code. Since you're touching the line anyway, please fix this.

checkpoint.load()

-system = espressomd.System() system.cell\_system.skin = skin
+system = espressomd.System()
+system.cell_system.skin = skin
system.actors.add(p3m)

#signal.SIGINT: signal 2, is sent when ctrl+c is pressed
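
For reference, here is the hunk's snippet reassembled with one statement per line and straight quotes. The final ``register_signal`` call is an assumption about how the signal mentioned in the text is registered, and ``skin`` and ``p3m`` are assumed to have been restored from the checkpoint:

.. code::

    import espressomd
    from espressomd import checkpointing
    import signal

    checkpoint = checkpointing.Checkpointing(checkpoint_id="mycheckpoint")
    checkpoint.load()

    # skin and p3m are assumed to have been restored by checkpoint.load()
    system = espressomd.System()
    system.cell_system.skin = skin
    system.actors.add(p3m)

    # signal.SIGINT (signal 2) is sent when Ctrl+C is pressed;
    # register_signal is an assumption about the registration call.
    checkpoint.register_signal(signal.SIGINT)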
@@ -237,88 +239,6 @@ h5.write()
After the last write call, you have to call the close() method to remove
the backup file and close the datasets.

Writing and reading binary files
--------------------------------

Binary files are written using the command

writemd …

This will write out particle data to the Tcl channel for all particles
in binary format. Apart from the mandatory particle id, only limited
information can be stored: the coordinates, velocities and forces.
Other information should be stored in a blockfile
or reconstructed differently. Note that since both ``blockfile`` and
``writemd`` are using a Tcl channel, it is actually possible to mix
them, so that you can write a single checkpoint file. However, the
``blockfile read auto`` mechanism cannot handle the binary section, thus
you need to read this section manually. Reading of binary particle data
happens through

readmd

For the exact format of the written binary sequence, see
``src/tcl/binary_file_tcl.cpp``.

MPI-IO
------

When using Espresso with MPI, blockfiles and writemd have the disadvantage that
the master node does *all* the output. This is done by sequentially
communicating all particle data to the master node. MPI-IO offers the
possibility to write out particle data in parallel using binary IO. To
output variables and other non-array information, use normal blockfiles
(section [sec:structured-file-format]).

To dump data using MPI-IO, use the following syntax:

mpiio …

This command writes data to several files sharing a common filename
prefix. Note that the prefix must not be a Tcl channel but a string, and
must not contain colons. The data can be positions, velocities,
particle types and particle bonds, or any combination of these. The
particle ids are always dumped. For safety reasons, MPI-IO will not
overwrite existing files, so if the command fails and prints
``MPI_ERR_IO`` make sure the files are non-existent.

The files produced by this command are (assuming the prefix is "1"):

1.head
Internal information (Dumped fields, bond partner num); always
produced

1.pref
Internal information (Exscan results of nlocalparts); always
produced

1.ids
Particle ids; always produced

1.type
Particle types; optional

1.pos
Particle positions; optional

1.vel
Particle velocities; optional

1.bond
Bond information; optional

1.boff
Internal bond prefix information; optional, necessary to read 1.bond

Currently, these files have to be read by exactly the same number of MPI
processes that was used to dump them; otherwise an error is signalled.
Also, the same type of machine (endianness, byte order) has to be used;
otherwise only garbage will be read. The read command replaces the
particles, i.e. all previously existing particles will be *deleted*.

There is a Python script (``tools/mpiio2blockfile.py``) which converts
MPI-IO snapshots to regular blockfiles.
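
As a rough illustration of the per-file layout listed above, the ids file can be inspected with a few lines of Python. This is a sketch only: the int32 dtype is an assumption, and the authoritative layout is defined by the dumping code (cf. ``tools/mpiio2blockfile.py``)::

    # Sketch: inspect the particle ids of an MPI-IO dump with prefix "1".
    import numpy as np

    # Assumes ids were written as native-endian 4-byte integers.
    ids = np.fromfile("1.ids", dtype=np.int32)
    print(len(ids), "particle ids:", ids[:10])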

Writing VTF files
-----------------

@@ -431,122 +351,6 @@ vtfpid
Given the id of a particle as used in Espresso, this command returns the
atom id used in the VTF, VSF or VCF formats.

``writevtk``: Particle Visualization in paraview
------------------------------------------------

This feature allows exporting the particle positions to a paraview [3]_
compatible VTK file. Paraview is a powerful and easy-to-use open-source
visualization program for scientific data. Since Espresso can export the
lattice-Boltzmann velocity field [ssec:LBvisualization] in the VTK
format as well, and paraview can visualize particles with glyphs and
vector fields with stream lines, glyphs, contour plots, etc., one can
use it to completely visualize a coupled lattice-Boltzmann MD
simulation. It can also create videos without much effort if one exports
data of individual time steps into separate files with filenames
including a running index (``data_0.vtk``, ``data_1.vtk``, ...).

writevtk

Name of the file to export the particle positions into.

Specifies a list of particle types which should be exported. The default
is . Alternatively, a list of type numbers can be given. Exporting the
positions of all particles, but in separate files, might make sense if one
wants to distinguish the different particle types in the visualization
(e.g. by color or size). To export type ``1``, use something like
``writevtk test.tcl 1``. To export types ``1``, ``5``, ``7``, which are
not to be distinguished in the visualization, use
``writevtk test.tcl 7 1 5``. The order in the list is arbitrary, but
duplicates are *not* ignored!
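
The legacy VTK format written here is simple enough to reproduce by hand. The following Python sketch writes a minimal point-only VTK file of the kind paraview reads; ``positions`` is a hypothetical list of (x, y, z) tuples, not an espressomd call::

    # Minimal sketch of a legacy-format VTK point file for paraview.
    def write_vtk(filename, positions):
        with open(filename, "w") as f:
            f.write("# vtk DataFile Version 2.0\n")
            f.write("particle positions\nASCII\nDATASET POLYDATA\n")
            f.write("POINTS %d float\n" % len(positions))
            for x, y, z in positions:
                f.write("%f %f %f\n" % (x, y, z))

    write_vtk("data_0.vtk", [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)])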

Reading and Writing PDB/PSF files
---------------------------------

The PDB (Brookhaven Protein DataBase) format is a widely used format for
describing atomistic configurations. PSF is a format that is used to
describe the topology of a PDB file.

When visualizing your system with VMD, it is recommended to use the VTF
format instead (see section [sec:vtf]), as it was specifically designed
for visualizations with VMD. In contrast to the PDB/PSF formats, in VTF
files it is possible to specify the VDW radii of the particles, to have
a varying simulation box size, etc.

``writepsf``: Writing the topology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

writepsf

Writes the current topology to the file (here, the argument is not a
channel, since additional information cannot be written anyway). The
remaining parameters describe a system consisting of equally long charged
polymers, counterions and salt. This information is used to set the
residue name and can be used to color the atoms in VMD. If you specify ,
the residue name is taken from the molecule identity of the particle. Of
course, different kinds of topologies can also be handled by modified
versions of ``writepsf``.

``writepdb``: Writing the coordinates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

writepdb writepdbfoldchains writepdbfoldtopo

Variant ``writepdb`` writes the corresponding particle data.

Variant ``writepdbfoldchains`` writes folded particle data where the
folding is performed on chain centers of mass rather than single
particles. In order to fold in this way, the chain topology and box
length must be specified. Note that this method is outdated; use variant
``writepdbfoldtopo`` instead.

Variant ``writepdbfoldtopo`` writes folded particle data where the
folding is performed on chain centers of mass rather than single
particles. This method uses the internal box length and topology
information from espresso. If you wish to shift particles prior to
folding, supply the optional shift information: a three-member Tcl list
consisting of the x, y and z shifts respectively, where each number
should be a floating point (i.e. with a decimal point).

``readpdb``: Reading the coordinates and interactions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

| readpdb pdb\_file type first\_id

Reads the positions and possibly charges, types and Lennard-Jones
interactions from the file and a corresponding Gromacs topology file.
The topology file must contain the ``atoms`` and ``atomtypes`` sections;
it may be necessary to use the Gromacs preprocessor to obtain a complete
file from a system configuration and a force field.

Any offset of the particle positions is removed, such that the lower-left
corner of the particles' bounding box is at the origin. If
``fit_to_box`` is given, the box size is increased to hold the particles
if necessary. If it is not set and the particles do not fit into the
box, the behavior is undefined.
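
The offset removal amounts to a simple bounding-box shift, sketched here with NumPy (the array contents are made-up example positions)::

    import numpy as np

    # Shift all positions so the lower-left corner of their bounding
    # box lies at the origin, as described above.
    positions = np.array([[1.0, 2.0, 3.0], [4.0, 0.5, 2.5]])
    positions -= positions.min(axis=0)
    print(positions)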

The ``type`` argument sets the particle type for the added particles. If
a topology file is given that contains types for the particles, the
particles get their types from the order in the topology file, offset by
``type``. If the corresponding type in the topology file has a charge, it
is used; otherwise the particle charge defaults to zero.

The particles get consecutive ids in the order of the pdb file, starting
at ``first_id``. Please be aware that existing particles get overwritten
by values from the file.

The ``lj_with`` section produces Lennard-Jones interactions between the
type and the types defined by the topology file. The interaction
parameters are calculated as :math:`\epsilon_{\text{othertype},j} =
\sqrt{\epsilon_{\text{othertype}} \epsilon_j}` and
:math:`\sigma_{\text{othertype},j}
=\frac{1}{2}\left( \sigma_{\text{othertype}} + \sigma_j \right)`, where
:math:`j` runs over the atomtypes defined in the topology file. This
corresponds to the combination rule 2 of Gromacs. There may be multiple
such sections. The cutoff is specified in a relative fashion, as
:math:`\text{cutoff}\times \sigma_{ij}`. The
potential is shifted so that it vanishes at the cutoff. The command
returns the number of particles that were successfully added.
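
For concreteness, the combination rule and relative cutoff can be written out in a few lines. This is a sketch of the arithmetic only, not of the readpdb implementation::

    import math

    # Gromacs combination rule 2 (Lorentz-Berthelot): geometric mean for
    # epsilon, arithmetic mean for sigma; the cutoff scales with sigma_ij.
    def combine_lj(eps_other, sig_other, eps_j, sig_j, rel_cutoff):
        eps_ij = math.sqrt(eps_other * eps_j)
        sig_ij = 0.5 * (sig_other + sig_j)
        return eps_ij, sig_ij, rel_cutoff * sig_ij

    print(combine_lj(0.8, 1.0, 0.2, 1.2, 2.5))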

Reading bonded interactions and dihedrals is currently not supported.

Online-visualisation with Mayavi or OpenGL
------------------------------------------