Massive cleanup of trivial linking issues and some typos #284


Merged: 2 commits, Oct 4, 2012
9 changes: 5 additions & 4 deletions source/administration/backups.txt
@@ -447,7 +447,7 @@ With ":option:`--oplog <mongodump --oplog>`" , :program:`mongodump`
copies all the data from the source database, as well as all of the
:term:`oplog` entries from the beginning of the backup procedure to
until the backup procedure completes. This backup procedure, in
conjunction with :option:`mongorestore --oplogReplay`, allows you to
conjunction with :option:`mongorestore --oplogReplay <mongorestore --oplogReplay>`, allows you to
restore a backup that reflects a consistent and specific moment in
time.

@@ -491,7 +491,7 @@ the ``dump-2011-10-25`` directory to the :program:`mongod` instance
running on the localhost interface. By default, :program:`mongorestore`
will look for a database dump in the ``dump/`` directory and restore
that. If you wish to restore to a non-default host, the
":option:`--host <mongod>`" and ":option:`--port <mongod --port>`"
":option:`--host <mongorestore --host>`" and ":option:`--port <mongorestore --port>`"
options allow you to specify a non-local host to connect to capture
the dump. Consider the following example:

@@ -504,8 +504,9 @@ username and password credentials as above.

If you created your database dump using the :option:`--oplog
<mongodump --oplog>` option to ensure a point-in-time snapshot, call
:program:`mongorestore` with the ":option:`--oplogReplay <mongorestore
--oplogReplay>`" option as in the following example:
:program:`mongorestore` with the
:option:`--oplogReplay <mongorestore --oplogReplay>`
option as in the following example:

.. code-block:: sh

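For readers skimming the diff, the point-in-time workflow that the corrected links describe looks roughly like the following; the hostnames and dump path are placeholders, not values from the original docs.

.. code-block:: sh

   # capture the data plus the oplog entries written while the dump runs
   mongodump --host mongodb1.example.net --port 27017 --oplog --out /srv/backup/dump-2011-10-25

   # replay the captured oplog on restore for a consistent point-in-time snapshot
   mongorestore --host mongodb2.example.net --port 27017 --oplogReplay /srv/backup/dump-2011-10-25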
2 changes: 1 addition & 1 deletion source/administration/import-export.txt
@@ -201,7 +201,7 @@ documents will return on standard output.

By default, :program:`mongoexport` returns one :term:`JSON document`
per MongoDB document. Specify the ":option:`--jsonArray <mongoexport
--jsonArrray>`" argument to return the export as a single :term:`JSON`
--jsonArray>`" argument to return the export as a single :term:`JSON`
Contributor: Another problem here is that the link is split across different lines.

Contributor (author): Yeah, that's a problem throughout the docs. Where links would be too long I started moving them onto separate lines, but I didn't actively try to do that throughout.

array. Use the ":option:`--csv <mongoexport --csv>`" file to return
the result in CSV (comma separated values) format.

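A quick sketch of the two export modes the corrected options refer to; the database, collection, and field names here are invented for illustration.

.. code-block:: sh

   # export the collection as a single JSON array rather than one document per line
   mongoexport --db sales --collection contacts --jsonArray --out contacts.json

   # export selected fields in CSV format; --csv needs an explicit field list
   mongoexport --db sales --collection contacts --csv --fields name,phone --out contacts.csv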
4 changes: 2 additions & 2 deletions source/administration/indexes.txt
@@ -22,7 +22,7 @@ Create an Index
~~~~~~~~~~~~~~~

To create an index, use :method:`db.collection.ensureIndex()` or a similar
:api:`method your driver <>`. For example
:api:`method from your driver <>`. For example
the following creates [#ensure]_ an index on the ``phone-number`` field
of the ``people`` collection:

@@ -383,4 +383,4 @@ operation is an index build. The ``msg`` field also indicates the
percent of the build that is complete.

If you need to terminate an ongoing index build, You can use the
:method:`db.killOP()` method in the :program:`mongo` shell.
:method:`db.killOp()` method in the :program:`mongo` shell.
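To make the corrected method names concrete, a minimal mongo shell session might look like this; the opid value is hypothetical.

.. code-block:: javascript

   // build the index described above on the people collection
   db.people.ensureIndex( { "phone-number": 1 } )

   // find the opid of the running index build, then terminate it
   db.currentOp()
   db.killOp(612)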
8 changes: 4 additions & 4 deletions source/administration/sharding-architectures.txt
@@ -86,15 +86,15 @@ instance or replica set (i.e. a :term:`shard`.)
Every database has a "primary" [#overloaded-primary-term]_ shard that
holds all un-sharded collections in that database. All collections
that *are not* sharded reside on the primary for their database. Use
the :dbcommand:`moveprimary` command to change the primary shard for a
the :dbcommand:`movePrimary` command to change the primary shard for a
database. Use the :dbcommand:`printShardingStatus` command or the
:method:`sh.status()` to see an overview of the cluster, which contains
information about the chunk and database distribution within the
cluster.

.. warning::

The :dbcommand:`moveprimary` command can be expensive because
The :dbcommand:`movePrimary` command can be expensive because
it copies all non-sharded data to the new shard, during which
that data will be unavailable for other operations.

@@ -103,9 +103,9 @@ the primary for all databases before enabling sharding. Databases
created subsequently, may reside on any shard in the cluster.

.. [#sharding-databases] As you configure sharding, you will use the
:dbcommand:`enablesharding` command to enable sharding for a
:dbcommand:`enableSharding` command to enable sharding for a
database. This simply makes it possible to use the
:dbcommand:`shardcollection` on a collection within that database.
:dbcommand:`shardCollection` on a collection within that database.

.. [#overloaded-primary-term] The term "primary" in the context of
databases and sharding, has nothing to do with the term
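For reference, the corrected command names are used roughly as follows in the mongo shell; the database, collection, key, and shard names are placeholders.

.. code-block:: javascript

   // enable sharding for a database, then shard one of its collections
   db.adminCommand( { enableSharding: "records" } )
   db.adminCommand( { shardCollection: "records.people", key: { zipcode: 1 } } )

   // move the un-sharded collections of a database to a different shard
   db.adminCommand( { movePrimary: "records", to: "shard0001" } )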
4 changes: 2 additions & 2 deletions source/administration/sharding.txt
@@ -316,7 +316,7 @@ The procedure to remove a shard is as follows:
You must specify the name of the shard. You may have specified this
shard name when you first ran the :dbcommand:`addShard` command. If not,
you can find out the name of the shard by running the
:dbcommand:`listshards` or :dbcommand:`printShardingStatus`
:dbcommand:`listShards` or :dbcommand:`printShardingStatus`
commands or the :method:`sh.status()` shell helper.

The following examples will remove a shard named ``mongodb0`` from the cluster.
@@ -637,7 +637,7 @@ To migrate chunks, use the :dbcommand:`moveChunk` command.

.. note::

To return a list of shards, use the :dbcommand:`listshards`
To return a list of shards, use the :dbcommand:`listShards`
command.

Specify shard names using the :dbcommand:`addShard` command
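A sketch of the shard-removal checks that the corrected names refer to, using the ``mongodb0`` shard name from the surrounding example.

.. code-block:: javascript

   // list shard names, or print a cluster overview
   db.adminCommand( { listShards: 1 } )
   sh.status()

   // begin draining the shard; re-run the command to monitor progress
   db.adminCommand( { removeShard: "mongodb0" } )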
8 changes: 5 additions & 3 deletions source/core/indexes.txt
@@ -74,7 +74,7 @@ You cannot delete the index on ``_id``.
The ``_id`` field is the :term:`primary key` for the collection, and
every document *must* have a unique ``_id`` field. You may store any
unique value in the ``_id`` field. The default value of ``_id`` is
:term:`ObjectID` on every :method:`insert` operation. An :term:`ObjectId`
:term:`ObjectID` on every :method:`insert() <db.collection.insert()>` operation. An :term:`ObjectId`
is a 12-byte unique identifiers suitable for use as the value of an
``_id`` field.

@@ -114,8 +114,10 @@ In general, you should have secondary indexes that support all of your
primary, common, and user-facing queries and require MongoDB to scan
the fewest number of documents possible.

To create a secondary index, use the :method:`ensureIndex()`
method. The argument to :method:`ensureIndex() <db.collection.ensureIndex()>`
To create a secondary index, use the
:method:`ensureIndex() <db.collection.ensureIndex()>`
method. The argument to
:method:`ensureIndex() <db.collection.ensureIndex()>`
will resemble the following in the MongoDB shell:

.. code-block:: javascript
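The argument mentioned in this hunk is not visible in the diff; it presumably resembles a key-specification document such as the following, with an illustrative collection and field name.

.. code-block:: javascript

   // create an ascending secondary index on a single field
   db.records.ensureIndex( { "user_id": 1 } )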
2 changes: 1 addition & 1 deletion source/reference/collection-statistics.txt
@@ -152,7 +152,7 @@ Fields
Reports the flags on this collection set by the user. In version
2.2 the only user flag is ``usePowerOf2Sizes``.

See :dbcommand:`collMod`` for more information on setting user
See :dbcommand:`collMod` for more information on setting user
flags and :ref:`usePowerOf2Sizes <usePowerOf2Sizes>`.

.. stats:: totalIndexSize
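For context, setting the user flag discussed in this hunk looks roughly like this; the collection name is a placeholder.

.. code-block:: javascript

   // enable the usePowerOf2Sizes allocation strategy on a collection
   db.runCommand( { collMod: "contacts", usePowerOf2Sizes: true } )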
26 changes: 13 additions & 13 deletions source/reference/command/cloneCollection.txt
@@ -9,19 +9,19 @@ cloneCollection
The :dbcommand:`cloneCollection` command copies a collection from a
remote server to the server where you run the command.

:opt from: Specify a resolvable hostname, and optional port number
of the remote server where the specified collection resides.

:opt query: Optional. A query document, in the form of a
:term:`document`, that filters the documents
in the remote collection that
:dbcommand:`cloneCollection` will copy to the
current database. See :method:`db.collection.find()`.

:opt Boolean copyIndexes: Optional. ``true`` by default. When set
to ``false`` the indexes on the
originating server are *not* copied with
the documents in the collection.
:param from: Specify a resolvable hostname, and optional port number
of the remote server where the specified collection resides.

:param query: Optional. A query document, in the form of a
:term:`document`, that filters the documents
in the remote collection that
:dbcommand:`cloneCollection` will copy to the
current database. See :method:`db.collection.find()`.

:param Boolean copyIndexes: Optional. ``true`` by default. When set
to ``false`` the indexes on the
originating server are *not* copied with
the documents in the collection.

Consider the following example:

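A hedged example of the parameters documented above; the hostname, namespace, and query are invented for illustration.

.. code-block:: javascript

   // copy matching documents, but not indexes, from a remote server
   db.runCommand( { cloneCollection: "users.profiles",
                    from: "mongodb.example.net:27017",
                    query: { active: true },
                    copyIndexes: false } )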
4 changes: 2 additions & 2 deletions source/reference/command/compact.txt
@@ -125,7 +125,7 @@ compact
maintenance periods.

- If you terminate the operation with the
:method:`db.killOp() <db.killOP()>` method or restart the
:method:`db.killOp() <db.killOp()>` method or restart the
server before it has finished:

- If you have journaling enabled, the data remains consistent and
@@ -152,7 +152,7 @@ space while running but unlike :dbcommand:`repairDatabase` it does
space while running but unlike :dbcommand:`repairDatabase` it does
*not* free space on the file system.

- You may also wish to run the :dbcommand:`collstats` command before and
- You may also wish to run the :dbcommand:`collStats` command before and
after compaction to see how the storage space changes for the
collection.

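To make the corrected command names concrete, a before-and-after check might look like this; the collection name is a placeholder.

.. code-block:: javascript

   // check storage statistics, compact, then compare
   db.runCommand( { collStats: "contacts" } )
   db.runCommand( { compact: "contacts" } )
   db.runCommand( { collStats: "contacts" } )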
2 changes: 1 addition & 1 deletion source/reference/command/fsync.txt
@@ -78,7 +78,7 @@ fsync
helpers in the shell.

In the :program:`mongo` shell, you may use the
:method:`db.fsyncLock()` and :method:`db.fsyncUnLock()` wrappers for
:method:`db.fsyncLock()` and :method:`db.fsyncUnlock()` wrappers for
the :dbcommand:`fsync` lock and unlock process:

.. code-block:: javascript
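The shell wrappers referenced in this hunk are used roughly as follows.

.. code-block:: javascript

   // flush writes to disk and block further writes, e.g. for a filesystem snapshot
   db.fsyncLock()

   // ...perform the backup, then release the lock
   db.fsyncUnlock()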
60 changes: 30 additions & 30 deletions source/reference/command/getLastError.txt
@@ -21,36 +21,36 @@ getLastError

The following options are available:

:option boolean j: If ``true``, wait for the next journal commit
before returning, rather than a full disk
flush. If :program:`mongod` does not have
journaling enabled, this option has no effect.

:option w: When running with replication, this is the number of
servers to replica to before returning. A ``w`` value of
1 indicates the primary only. A ``w`` value of 2
includes the primary and at least one secondary, etc.
In place of a number, you may also set ``w`` to
``majority`` to indicate that the command should wait
until the latest write propagates to a majority of
replica set members. If using ``w``, you should also use
``wtimeout``. Specifying a value for ``w`` without also
providing a ``wtimeout`` may cause
:dbcommand:`getLastError` to block indefinitely.

:option boolean fsync: If ``true``, wait for :program:`mongod` to write this
data to disk before returning. Defaults to
false. In most cases, use the ``j`` option
to ensure durability and consistency of the
data set.

:option integer wtimeout: (Milliseconds; Optional.) Specify a value
in milliseconds to control how long the
to wait for write propagation to
complete. If replication does not
complete in the given timeframe, the
:dbcommand:`getLastError` command will
return with an error status.
:param boolean j: If ``true``, wait for the next journal commit
before returning, rather than a full disk
flush. If :program:`mongod` does not have
journaling enabled, this option has no effect.

:param w: When running with replication, this is the number of
servers to replica to before returning. A ``w`` value of
1 indicates the primary only. A ``w`` value of 2
includes the primary and at least one secondary, etc.
In place of a number, you may also set ``w`` to
``majority`` to indicate that the command should wait
until the latest write propagates to a majority of
replica set members. If using ``w``, you should also use
``wtimeout``. Specifying a value for ``w`` without also
providing a ``wtimeout`` may cause
:dbcommand:`getLastError` to block indefinitely.

:param boolean fsync: If ``true``, wait for :program:`mongod` to write this
data to disk before returning. Defaults to
false. In most cases, use the ``j`` option
to ensure durability and consistency of the
data set.

:param integer wtimeout: (Milliseconds; Optional.) Specify a value
in milliseconds to control how long the
to wait for write propagation to
complete. If replication does not
complete in the given timeframe, the
:dbcommand:`getLastError` command will
return with an error status.

.. seealso:: ":ref:`Replica Set Write Concern <replica-set-write-concern>`"
and ":method:`db.getLastError()`."
2 changes: 1 addition & 1 deletion source/reference/command/isMaster.txt
@@ -66,7 +66,7 @@ isMaster
.. versionadded:: 2.1.1

Returns the local server time in UTC. This value is a
:term:`ISOdate`. You can use the :method:`toString()`
:term:`ISOdate`. You can use the :js:func:`toString()`
JavaScript method to convert this value to a local date string,
as in the following example:

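As a rough illustration of the toString() conversion mentioned in this hunk, assuming the ``localTime`` field described by the surrounding docs:

.. code-block:: javascript

   // convert the UTC localTime field to a local date string
   db.runCommand( { isMaster: 1 } ).localTime.toString()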
8 changes: 4 additions & 4 deletions source/reference/command/moveChunk.txt
@@ -17,10 +17,10 @@ moveChunk
to : <destination>,
<options> } )

:param command moveChunk: The name of the :term:`collection` which
the :term:`chunk` exists. Specify the
collection's full namespace, including
the database name.
:param moveChunk: The name of the :term:`collection` which
the :term:`chunk` exists. Specify the
collection's full namespace, including
the database name.

:param find: A query expression that will select a document within
the chunk you wish to move. The query need not specify
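For reference, a moveChunk invocation matching the parameters above might look like this; the namespace, query, and shard name are placeholders.

.. code-block:: javascript

   // move the chunk containing the matching document to another shard
   db.adminCommand( { moveChunk: "records.people",
                      find: { zipcode: "53187" },
                      to: "shard0019" } )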
4 changes: 2 additions & 2 deletions source/reference/command/movePrimary.txt
@@ -24,15 +24,15 @@ movePrimary

When the command returns, the database's primary location will
shift to the designated :term:`shard`. To fully decommission a
shard, use the :dbcommand:`removeshard` command.
shard, use the :dbcommand:`removeShard` command.

.. warning::

Before running :dbcommand:`movePrimary` you must ensure that
*no* sharded data exists on this shard. You must drain this
shard before running this command because it will move *all*
data in this database from this shard. Use the
:dbcommand:`removeshard` command to migrate sharded data from
:dbcommand:`removeShard` command to migrate sharded data from
this shard.

If you do not remove all sharded data collections before running
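A hedged sketch of the decommissioning order described above; the database and shard names are invented.

.. code-block:: javascript

   // first drain sharded data off the shard...
   db.adminCommand( { removeShard: "shard0000" } )

   // ...then move any database whose primary is that shard
   db.adminCommand( { movePrimary: "records", to: "shard0001" } )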
5 changes: 3 additions & 2 deletions source/reference/glossary.txt
@@ -349,7 +349,8 @@ Glossary

padding
The extra space allocated to document on the disk to prevent
moving a document when it grows as the result of :method:`update`
moving a document when it grows as the result of
:method:`update() <db.collection.update()>`
operations.

record size
@@ -555,7 +556,7 @@ Glossary
from one :term:`shard` to another. Administrators must drain
shards before removing them from the cluster.

.. seealso:: :dbcommand:`removeshard`, :term:`sharding`.
.. seealso:: :dbcommand:`removeShard`, :term:`sharding`.

single-master replication
A :term:`replication` topology where only a single database
2 changes: 1 addition & 1 deletion source/reference/method/db.killOp.txt
@@ -4,7 +4,7 @@ db.killOp()

.. default-domain:: mongodb

.. method:: db.killOP(opid)
.. method:: db.killOp(opid)

:param opid: Specify an operation ID.

6 changes: 6 additions & 0 deletions source/reference/method/rs.conf.txt
@@ -8,3 +8,9 @@ rs.conf()

:returns: a :term:`document` that contains the current
:term:`replica set` configuration object.

.. method:: rs.config()

:method:`rs.config()` is an alias of :method:`rs.conf()`.
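Per the added text, both spellings return the current replica set configuration document, for example:

.. code-block:: javascript

   // rs.config() is simply an alias for rs.conf()
   rs.conf()
   rs.config()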


4 changes: 2 additions & 2 deletions source/reference/mongoexport.txt
@@ -3,7 +3,7 @@
.. default-domain:: mongodb

======================
:program:`mongoexport`
mongoexport
======================

Synopsis
@@ -146,7 +146,7 @@ Options

.. option:: --jsonArray

Modifies the output of :program:`mongoexport` so that to write the
Modifies the output of :program:`mongoexport` to write the
entire contents of the export as a single :term:`JSON` array. By
default :program:`mongoexport` writes data using one JSON document
for every MongoDB document.
2 changes: 1 addition & 1 deletion source/reference/mongoimport.txt
@@ -3,7 +3,7 @@
.. default-domain:: mongodb

======================
:program:`mongoimport`
mongoimport
======================

Synopsis
2 changes: 1 addition & 1 deletion source/reference/mongorestore.txt
@@ -3,7 +3,7 @@
.. default-domain:: mongodb

=======================
:program:`mongorestore`
mongorestore
=======================

Synopsis
9 changes: 9 additions & 0 deletions source/reference/mongos.txt
@@ -130,6 +130,15 @@ Options
data to the end of the logfile rather than overwriting the content
of the log when the process restarts.

.. option:: --syslog

.. versionadded: 2.1.0

Sends all logging output to the host's :term:`syslog` system rather
than to standard output or a log file as with :option:`--logpath`.

.. warning:: You cannot use :option:`--syslog` with :option:`--logpath`.

.. option:: --pidfilepath <path>

Specify a file location to hold the ":term:`PID`" or process ID of the
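An illustrative launch command for the new option; the config server hostname is a placeholder, and per the warning above :option:`--syslog` cannot be combined with :option:`--logpath`.

.. code-block:: sh

   # send mongos log output to the host's syslog facility instead of a log file
   mongos --configdb cfg0.example.net:27019 --syslog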
2 changes: 1 addition & 1 deletion source/reference/operator/pull.txt
@@ -16,5 +16,5 @@ $pull
:operator:`$pull` removes the value ``value1`` from the array in ``field``,
in the document that matches the query statement ``{ field: value
}`` in ``collection``. If ``value1`` existed multiple times in the
``field`` array, :operator:`pull` would remove all instances of
``field`` array, :operator:`$pull` would remove all instances of
``value1`` in this array.
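A concrete form of the update described above; the collection, field, and values are placeholders.

.. code-block:: javascript

   // in documents matching the query, remove every "expired" entry from the tags array
   db.profiles.update( { status: "inactive" }, { $pull: { tags: "expired" } } )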