Fix typos in the documentation [ci skip] (#3441)

Signed-off-by: Marcel Ribeiro-Dantas <mribeirodantas@seqera.io>
mribeirodantas authored Nov 28, 2022
1 parent 811e7ca commit ae95d90
Showing 11 changed files with 16 additions and 16 deletions.
2 changes: 1 addition & 1 deletion docs/aws.rst
@@ -405,7 +405,7 @@ of a specific job e.g. to define custom mount paths or other Batch Job special s

To do that first create a *Job Definition* in the AWS Console (or with other means). Note the name of the *Job Definition*
you created. You can then associate a process execution with this *Job definition* by using the :ref:`process-container`
- directive and specifing, in place of the container image name, the Job definition name prefixed by the
+ directive and specifying, in place of the container image name, the Job definition name prefixed by the
``job-definition://`` string, as shown below::

process.container = 'job-definition://your-job-definition-name'
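
For instance, a minimal ``nextflow.config`` sketch scoping this to a single process with a ``withName`` selector (the Job Definition and process names below are placeholders)::

    process {
        withName: bigAlign {
            // assumes a Job Definition named 'my-jobdef' already exists in AWS Batch
            container = 'job-definition://my-jobdef'
        }
    }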
2 changes: 1 addition & 1 deletion docs/cli.rst
@@ -934,7 +934,7 @@ execution metadata.
+---------------------------+------------+--------------------------------------------------------------------------------+
| -fields, -f | | Comma separated list of fields to include in the printed log. |
+---------------------------+------------+--------------------------------------------------------------------------------+
- | -filter, -F               |            | Filter log entires by a custom expression                                       |
+ | -filter, -F               |            | Filter log entries by a custom expression                                       |
| | | e.g. ``process =~ /foo.*/ && status == 'COMPLETED'`` |
+---------------------------+------------+--------------------------------------------------------------------------------+
| -help, -h | false | Print the command usage. |
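
As an illustration of the ``-filter`` option above, a sketch of a possible invocation using the expression from the table (the run name ``goofy_kilby`` is a placeholder)::

    nextflow log goofy_kilby -filter "process =~ /foo.*/ && status == 'COMPLETED'"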
2 changes: 1 addition & 1 deletion docs/config.rst
@@ -167,7 +167,7 @@ Name Description
cliPath The path where the AWS command line tool is installed in the host AMI.
jobRole The AWS Job Role ARN that needs to be used to execute the Batch Job.
logsGroup The name of the logs group used by Batch Jobs (default: ``/aws/batch``, requires ``22.09.0-edge`` or later).
- volumes One or more container mounts. Mounts can be specified as simple e.g. `/some/path` or canonical format e.g. ``/host/path:/mount/path[:ro|rw]``. Multiple mounts can be specifid separating them with a comma or using a list object.
+ volumes One or more container mounts. Mounts can be specified as simple e.g. `/some/path` or canonical format e.g. ``/host/path:/mount/path[:ro|rw]``. Multiple mounts can be specified separating them with a comma or using a list object.
delayBetweenAttempts Delay between download attempts from S3 (default `10 sec`).
maxParallelTransfers Max parallel upload/download transfer operations *per job* (default: ``4``).
maxTransferAttempts Max number of downloads attempts from S3 (default: `1`).
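
Putting a few of the options from this table together, a hedged ``nextflow.config`` sketch (paths and values are illustrative only)::

    aws {
        batch {
            cliPath = '/home/ec2-user/miniconda/bin/aws'
            volumes = ['/tmp', '/host/scratch:/scratch:rw']
            maxParallelTransfers = 8
        }
    }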
2 changes: 1 addition & 1 deletion docs/container.rst
@@ -455,7 +455,7 @@ Multiple containers

It is possible to specify a different Singularity image for each process definition in your pipeline script. For example,
let's suppose you have two processes named ``foo`` and ``bar``. You can specify two different Singularity images
- specifing them in the ``nextflow.config`` file as shown below::
+ specifying them in the ``nextflow.config`` file as shown below::

process {
withName:foo {
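
The configuration above is truncated in this hunk; a completed sketch (image names are placeholders) might read::

    process {
        withName:foo {
            container = 'image_name_one.img'
        }
        withName:bar {
            container = 'image_name_two.img'
        }
    }
    singularity.enabled = true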
2 changes: 1 addition & 1 deletion docs/dsl2.rst
@@ -510,7 +510,7 @@ Finally, we have a third project B with a workflow that includes again P1 and P2
└-main.nf

With the possibility to keep the template files inside the project L, A and B can use the modules defined in L without any changes.
- A future prject C would do the same, just cloning L (if not available on the system) and including its module script.
+ A future project C would do the same, just cloning L (if not available on the system) and including its module script.

Beside promoting sharing modules across pipelines, there are several advantages in keeping the module template under the script path:

2 changes: 1 addition & 1 deletion docs/faq.rst
@@ -94,7 +94,7 @@ and ``datasetFile``):
In our example above would now have the folder ``broccoli`` in the results directory which would
contain the file ``broccoli.aln``.

- If the input file has multiple extensions (e.g. ``brocolli.tar.gz``), you will want to use
+ If the input file has multiple extensions (e.g. ``broccoli.tar.gz``), you will want to use
``file.simpleName`` instead, to strip all of them.
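
A quick sketch of the difference, assuming a hypothetical ``broccoli.tar.gz`` input::

    def f = file('broccoli.tar.gz')
    println f.baseName    // broccoli.tar  -- strips only the last extension
    println f.simpleName  // broccoli      -- strips all extensions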


2 changes: 1 addition & 1 deletion docs/flux.rst
@@ -68,7 +68,7 @@ Here is an a demo workflow ``demo.nf`` of a job we want to run!
}
We will be using these files to run our test workflow. Next, assuming you don't have one handy,
- let's set up an envrionment with Flux.
+ let's set up an environment with Flux.

Container Environment
---------------------
6 changes: 3 additions & 3 deletions docs/operator.rst
@@ -1535,7 +1535,7 @@ It prints the following output::
result = 15

.. tip::
- A common use case for this operator is to use the first paramter as an `accumulator`
+ A common use case for this operator is to use the first parameter as an `accumulator`
the second parameter as the `i-th` item to be processed.

Optionally you can specify a `seed` value in order to initialise the accumulator parameter
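
A minimal sketch of the accumulator pattern described in the tip above, reproducing the ``result = 15`` output shown earlier::

    Channel
        .of( 1, 2, 3, 4, 5 )
        .reduce { acc, v -> acc + v }     // acc is the accumulator, v the i-th item
        .view { "result = $it" }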
@@ -1812,7 +1812,7 @@ the required fields, or just specify ``record: true`` as in the example shown be
.view { record -> record.readHeader }

Finally the ``splitFastq`` operator is able to split paired-end read pair FASTQ files. It must be applied to a channel
- which emits tuples containing at least two elements that are the files to be splitted. For example::
+ which emits tuples containing at least two elements that are the files to be split. For example::

Channel
.fromFilePairs('/my/data/SRR*_{1,2}.fastq', flat: true)
@@ -1833,7 +1833,7 @@ Available parameters:
Field Description
=========== ============================
by Defines the number of *reads* in each `chunk` (default: ``1``)
- pe When ``true`` splits paired-end read files, therefore items emitted by the source channel must be tuples in which at least two elements are the read-pair files to be splitted.
+ pe When ``true`` splits paired-end read files, therefore items emitted by the source channel must be tuples in which at least two elements are the read-pair files to be split.
limit Limits the number of retrieved *reads* for each file to the specified value.
record Parse each entry in the FASTQ file as record objects (see following table for accepted values)
charset Parse the content by using the specified charset e.g. ``UTF-8``
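
Putting the ``pe`` and ``by`` parameters together, a sketch of paired-end splitting (the file pattern repeats the one shown earlier; the chunk size is illustrative)::

    Channel
        .fromFilePairs('/my/data/SRR*_{1,2}.fastq', flat: true)
        .splitFastq(by: 100_000, pe: true, file: true)
        .view()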
4 changes: 2 additions & 2 deletions docs/plugins.rst
@@ -41,7 +41,7 @@ Alternatively, plugins can be required using the ``-plugins`` command line optio
nextflow run <PIPELINE NAME> -plugins nf-hello@0.1.0

Multiple plugins can be specified by separating them with a comma.
- When specifiying plugins via the command line, any plugin declarations in the Nextflow config file are ignored.
+ When specifying plugins via the command line, any plugin declarations in the Nextflow config file are ignored.


Index
@@ -95,7 +95,7 @@ And this function can be used by the pipeline::

channel.of( reverseString('hi') )

- The above snipped includes a function from the plugin and allows the channel to call it directly.
+ The above snippet includes a function from the plugin and allows the channel to call it directly.

In the same way as operators, functions can be aliased::

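
The aliased example itself is elided from this hunk; a sketch, assuming the ``nf-hello`` function shown above is being renamed on import::

    include { reverseString as reverse } from 'plugin/nf-hello'

    channel.of( reverse('hi') )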
6 changes: 3 additions & 3 deletions docs/process.rst
@@ -749,7 +749,7 @@ each time a new value is received. For example::

workflow {
sequences = Channel.fromPath('*.fa')
-         methods = ['regular', 'expresso', 'psicoffee']
+         methods = ['regular', 'espresso', 'psicoffee']

alignSequences(sequences, methods)
}
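
The matching process definition is elided from this hunk; a rough sketch using the ``each`` input qualifier (the ``t_coffee`` command line is illustrative)::

    process alignSequences {
        input:
        path seq
        each mode

        script:
        """
        t_coffee -in $seq -mode $mode > result
        """
    }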
@@ -774,14 +774,14 @@ Input repeaters can be applied to files as well. For example::

workflow {
sequences = Channel.fromPath('*.fa')
-         methods = ['regular', 'expresso']
+         methods = ['regular', 'espresso']
libraries = [ file('PQ001.lib'), file('PQ002.lib'), file('PQ003.lib') ]

alignSequences(sequences, methods, libraries)
}

In the above example, each sequence input file emitted by the ``sequences`` channel triggers six alignment tasks,
- three with the ``regular`` method against each library file, and three with the ``expresso`` method.
+ three with the ``regular`` method against each library file, and three with the ``espresso`` method.

.. note::
When multiple repeaters are defined, the process is executed for each *combination* of them.
2 changes: 1 addition & 1 deletion docs/script.rst
@@ -221,7 +221,7 @@ Name Description
``launchDir`` The directory where the workflow is run (requires version ``20.04.0`` or later).
``moduleDir`` The directory where a module script is located for DSL2 modules or the same as ``projectDir`` for a non-module script (requires version ``20.04.0`` or later).
``nextflow`` Dictionary like object representing nextflow runtime information (see :ref:`metadata-nextflow`).
- ``params`` Dictionary like object holding workflow parameters specifing in the config file or as command line options.
+ ``params`` Dictionary like object holding workflow parameters specifying in the config file or as command line options.
``projectDir`` The directory where the main script is located (requires version ``20.04.0`` or later).
``workDir`` The directory where tasks temporary files are created.
``workflow`` Dictionary like object representing workflow runtime information (see :ref:`metadata-workflow`).
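
As a small illustration, a pipeline script could inspect a few of these variables (the ``params.greeting`` parameter is hypothetical)::

    println "launchDir  : $launchDir"
    println "projectDir : $projectDir"
    println "Nextflow   : ${nextflow.version}"
    println "greeting   : ${params.greeting ?: 'not set'}"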
