From 22143656f989e4dd1406437e0cf530893e493764 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Wed, 3 Oct 2012 23:46:30 -0400 Subject: [PATCH 1/2] epc-prod: fix a ton of broken links and some formatting things in the neverending quest for a better, faster, build --- source/administration/backups.txt | 4 +- source/administration/import-export.txt | 2 +- .../administration/sharding-architectures.txt | 8 +-- source/administration/sharding.txt | 4 +- source/core/indexes.txt | 8 ++- source/reference/collection-statistics.txt | 2 +- source/reference/command/cloneCollection.txt | 26 ++++---- source/reference/command/compact.txt | 4 +- source/reference/command/fsync.txt | 2 +- source/reference/command/getLastError.txt | 60 +++++++++---------- source/reference/command/isMaster.txt | 2 +- source/reference/command/moveChunk.txt | 8 +-- source/reference/command/movePrimary.txt | 4 +- source/reference/glossary.txt | 5 +- source/reference/method/db.killOp.txt | 2 +- source/reference/method/rs.conf.txt | 6 ++ source/reference/mongoimport.txt | 2 +- source/reference/mongorestore.txt | 2 +- source/reference/mongos.txt | 9 +++ source/reference/operator/pull.txt | 2 +- source/reference/operator/within.txt | 2 +- source/release-notes/2.2.txt | 2 +- .../change-hostnames-in-a-replica-set.txt | 13 ++-- ...eplica-set-to-replicated-shard-cluster.txt | 2 +- .../convert-secondary-into-arbiter.txt | 16 ++--- .../convert-standalone-to-replica-set.txt | 2 +- ...geographically-distributed-replica-set.txt | 4 +- ...ce-unique-keys-for-sharded-collections.txt | 10 ++-- source/tutorial/expand-replica-set.txt | 4 +- .../tutorial/install-mongodb-on-windows.txt | 4 +- ...e-replica-set-with-unavailable-members.txt | 14 ++--- .../tutorial/remove-shards-from-cluster.txt | 2 +- source/use-cases/hierarchical-aggregation.txt | 4 +- source/use-cases/pre-aggregated-reports.txt | 16 ++--- source/use-cases/product-catalog.txt | 4 +- source/use-cases/storing-log-data.txt | 2 +- 36 files changed, 141 insertions(+), 122 deletions(-) diff --git a/source/administration/backups.txt b/source/administration/backups.txt index 746c5015446..ae5ef0a10ac 100644 --- a/source/administration/backups.txt +++ b/source/administration/backups.txt @@ -447,7 +447,7 @@ With ":option:`--oplog `" , :program:`mongodump` copies all the data from the source database, as well as all of the :term:`oplog` entries from the beginning of the backup procedure to until the backup procedure completes. This backup procedure, in -conjunction with :option:`mongorestore --oplogReplay`, allows you to +conjunction with :option:`mongorestore --oplogReplay `, allows you to restore a backup that reflects a consistent and specific moment in time. @@ -491,7 +491,7 @@ the ``dump-2011-10-25`` directory to the :program:`mongod` instance running on the localhost interface. By default, :program:`mongorestore` will look for a database dump in the ``dump/`` directory and restore that. If you wish to restore to a non-default host, the -":option:`--host `" and ":option:`--port `" +":option:`--host `" and ":option:`--port `" options allow you to specify a non-local host to connect to capture the dump. Consider the following example: diff --git a/source/administration/import-export.txt b/source/administration/import-export.txt index 024f97c148e..200e2bfdc75 100644 --- a/source/administration/import-export.txt +++ b/source/administration/import-export.txt @@ -201,7 +201,7 @@ documents will return on standard output. 
By default, :program:`mongoexport` returns one :term:`JSON document` per MongoDB document. Specify the ":option:`--jsonArray `" argument to return the export as a single :term:`JSON` +--jsonArray>`" argument to return the export as a single :term:`JSON` array. Use the ":option:`--csv `" file to return the result in CSV (comma separated values) format. diff --git a/source/administration/sharding-architectures.txt b/source/administration/sharding-architectures.txt index 6e8211395d1..0f9a5394d12 100644 --- a/source/administration/sharding-architectures.txt +++ b/source/administration/sharding-architectures.txt @@ -86,7 +86,7 @@ instance or replica set (i.e. a :term:`shard`.) Every database has a "primary" [#overloaded-primary-term]_ shard that holds all un-sharded collections in that database. All collections that *are not* sharded reside on the primary for their database. Use -the :dbcommand:`moveprimary` command to change the primary shard for a +the :dbcommand:`movePrimary` command to change the primary shard for a database. Use the :dbcommand:`printShardingStatus` command or the :method:`sh.status()` to see an overview of the cluster, which contains information about the chunk and database distribution within the @@ -94,7 +94,7 @@ cluster. .. warning:: - The :dbcommand:`moveprimary` command can be expensive because + The :dbcommand:`movePrimary` command can be expensive because it copies all non-sharded data to the new shard, during which that data will be unavailable for other operations. @@ -103,9 +103,9 @@ the primary for all databases before enabling sharding. Databases created subsequently, may reside on any shard in the cluster. .. [#sharding-databases] As you configure sharding, you will use the - :dbcommand:`enablesharding` command to enable sharding for a + :dbcommand:`enableSharding` command to enable sharding for a database. This simply makes it possible to use the - :dbcommand:`shardcollection` on a collection within that database. + :dbcommand:`shardCollection` on a collection within that database. .. [#overloaded-primary-term] The term "primary" in the context of databases and sharding, has nothing to do with the term diff --git a/source/administration/sharding.txt b/source/administration/sharding.txt index 0e799d6563e..bf30cb187c6 100644 --- a/source/administration/sharding.txt +++ b/source/administration/sharding.txt @@ -316,7 +316,7 @@ The procedure to remove a shard is as follows: You must specify the name of the shard. You may have specified this shard name when you first ran the :dbcommand:`addShard` command. If not, you can find out the name of the shard by running the - :dbcommand:`listshards` or :dbcommand:`printShardingStatus` + :dbcommand:`listShards` or :dbcommand:`printShardingStatus` commands or the :method:`sh.status()` shell helper. The following examples will remove a shard named ``mongodb0`` from the cluster. @@ -637,7 +637,7 @@ To migrate chunks, use the :dbcommand:`moveChunk` command. .. note:: - To return a list of shards, use the :dbcommand:`listshards` + To return a list of shards, use the :dbcommand:`listShards` command. Specify shard names using the :dbcommand:`addShard` command diff --git a/source/core/indexes.txt b/source/core/indexes.txt index 9a9d8e4afab..fbfc83b80a6 100644 --- a/source/core/indexes.txt +++ b/source/core/indexes.txt @@ -74,7 +74,7 @@ You cannot delete the index on ``_id``. The ``_id`` field is the :term:`primary key` for the collection, and every document *must* have a unique ``_id`` field. 
You may store any unique value in the ``_id`` field. The default value of ``_id`` is -:term:`ObjectID` on every :method:`insert` operation. An :term:`ObjectId` +:term:`ObjectID` on every insert() ` +To create a secondary index, use the +:method:`ensureIndex() ` +method. The argument to +:method:`ensureIndex() ` will resemble the following in the MongoDB shell: .. code-block:: javascript diff --git a/source/reference/collection-statistics.txt b/source/reference/collection-statistics.txt index cd7af1422a2..b008fa6668d 100644 --- a/source/reference/collection-statistics.txt +++ b/source/reference/collection-statistics.txt @@ -152,7 +152,7 @@ Fields Reports the flags on this collection set by the user. In version 2.2 the only user flag is ``usePowerOf2Sizes``. - See :dbcommand:`collMod`` for more information on setting user + See :dbcommand:`collMod` for more information on setting user flags and :ref:`usePowerOf2Sizes `. .. stats:: totalIndexSize diff --git a/source/reference/command/cloneCollection.txt b/source/reference/command/cloneCollection.txt index 78caf1fc050..3cb946b77fb 100644 --- a/source/reference/command/cloneCollection.txt +++ b/source/reference/command/cloneCollection.txt @@ -9,19 +9,19 @@ cloneCollection The :dbcommand:`cloneCollection` command copies a collection from a remote server to the server where you run the command. - :opt from: Specify a resolvable hostname, and optional port number - of the remote server where the specified collection resides. - - :opt query: Optional. A query document, in the form of a - :term:`document`, that filters the documents - in the remote collection that - :dbcommand:`cloneCollection` will copy to the - current database. See :method:`db.collection.find()`. - - :opt Boolean copyIndexes: Optional. ``true`` by default. When set - to ``false`` the indexes on the - originating server are *not* copied with - the documents in the collection. + :param from: Specify a resolvable hostname, and optional port number + of the remote server where the specified collection resides. + + :param query: Optional. A query document, in the form of a + :term:`document`, that filters the documents + in the remote collection that + :dbcommand:`cloneCollection` will copy to the + current database. See :method:`db.collection.find()`. + + :param Boolean copyIndexes: Optional. ``true`` by default. When set + to ``false`` the indexes on the + originating server are *not* copied with + the documents in the collection. Consider the following example: diff --git a/source/reference/command/compact.txt b/source/reference/command/compact.txt index ff5ddaa4762..c6d6b152496 100644 --- a/source/reference/command/compact.txt +++ b/source/reference/command/compact.txt @@ -125,7 +125,7 @@ compact maintenance periods. - If you terminate the operation with the - :method:`db.killOp() ` method or restart the + :method:`db.killOp() ` method or restart the server before it has finished: - If you have journaling enabled, the data remains consistent and @@ -152,7 +152,7 @@ compact space while running but unlike :dbcommand:`repairDatabase` it does *not* free space on the file system. - - You may also wish to run the :dbcommand:`collstats` command before and + - You may also wish to run the :dbcommand:`collStats` command before and after compaction to see how the storage space changes for the collection. 
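As a companion to the :dbcommand:`compact` change above, a minimal :program:`mongo` shell sketch of comparing storage statistics before and after compaction might look like the following; the ``records`` collection name is illustrative only:

.. code-block:: javascript

   // report storage statistics (sizes in bytes) before compaction
   db.records.stats()

   // compact the collection; run this during a scheduled maintenance
   // period, since the operation blocks other activity on the database
   db.runCommand( { compact: "records" } )

   // compare storageSize and totalIndexSize once the command returns
   db.records.stats()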
diff --git a/source/reference/command/fsync.txt b/source/reference/command/fsync.txt index 60adeeef6e1..769933f1fe6 100644 --- a/source/reference/command/fsync.txt +++ b/source/reference/command/fsync.txt @@ -78,7 +78,7 @@ fsync helpers in the shell. In the :program:`mongo` shell, you may use the - :method:`db.fsyncLock()` and :method:`db.fsyncUnLock()` wrappers for + :method:`db.fsyncLock()` and :method:`db.fsyncUnlock()` wrappers for the :dbcommand:`fsync` lock and unlock process: .. code-block:: javascript diff --git a/source/reference/command/getLastError.txt b/source/reference/command/getLastError.txt index 0e875c0eed7..d08ccee31c5 100644 --- a/source/reference/command/getLastError.txt +++ b/source/reference/command/getLastError.txt @@ -21,36 +21,36 @@ getLastError The following options are available: - :option boolean j: If ``true``, wait for the next journal commit - before returning, rather than a full disk - flush. If :program:`mongod` does not have - journaling enabled, this option has no effect. - - :option w: When running with replication, this is the number of - servers to replica to before returning. A ``w`` value of - 1 indicates the primary only. A ``w`` value of 2 - includes the primary and at least one secondary, etc. - In place of a number, you may also set ``w`` to - ``majority`` to indicate that the command should wait - until the latest write propagates to a majority of - replica set members. If using ``w``, you should also use - ``wtimeout``. Specifying a value for ``w`` without also - providing a ``wtimeout`` may cause - :dbcommand:`getLastError` to block indefinitely. - - :option boolean fsync: If ``true``, wait for :program:`mongod` to write this - data to disk before returning. Defaults to - false. In most cases, use the ``j`` option - to ensure durability and consistency of the - data set. - - :option integer wtimeout: (Milliseconds; Optional.) Specify a value - in milliseconds to control how long the - to wait for write propagation to - complete. If replication does not - complete in the given timeframe, the - :dbcommand:`getLastError` command will - return with an error status. + :param boolean j: If ``true``, wait for the next journal commit + before returning, rather than a full disk + flush. If :program:`mongod` does not have + journaling enabled, this option has no effect. + + :param w: When running with replication, this is the number of + servers to replica to before returning. A ``w`` value of + 1 indicates the primary only. A ``w`` value of 2 + includes the primary and at least one secondary, etc. + In place of a number, you may also set ``w`` to + ``majority`` to indicate that the command should wait + until the latest write propagates to a majority of + replica set members. If using ``w``, you should also use + ``wtimeout``. Specifying a value for ``w`` without also + providing a ``wtimeout`` may cause + :dbcommand:`getLastError` to block indefinitely. + + :param boolean fsync: If ``true``, wait for :program:`mongod` to write this + data to disk before returning. Defaults to + false. In most cases, use the ``j`` option + to ensure durability and consistency of the + data set. + + :param integer wtimeout: (Milliseconds; Optional.) Specify a value + in milliseconds to control how long the + to wait for write propagation to + complete. If replication does not + complete in the given timeframe, the + :dbcommand:`getLastError` command will + return with an error status. .. seealso:: ":ref:`Replica Set Write Concern `" and ":method:`db.getLastError()`." 
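To make the :dbcommand:`getLastError` options above concrete, here is a hedged sketch of two invocations from the :program:`mongo` shell; the ``wtimeout`` value is illustrative:

.. code-block:: javascript

   // wait until the last write replicates to a majority of set members,
   // or return an error status after 5000 milliseconds
   db.runCommand( { getLastError: 1, w: "majority", wtimeout: 5000 } )

   // wait only for the next journal commit rather than a full disk flush
   db.runCommand( { getLastError: 1, j: true } )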
diff --git a/source/reference/command/isMaster.txt b/source/reference/command/isMaster.txt index 078d6decd3a..07b281d9645 100644 --- a/source/reference/command/isMaster.txt +++ b/source/reference/command/isMaster.txt @@ -66,7 +66,7 @@ isMaster .. versionadded:: 2.1.1 Returns the local server time in UTC. This value is a - :term:`ISOdate`. You can use the :method:`toString()` + :term:`ISOdate`. You can use the :js:func:`toString()` JavaScript method to convert this value to a local date string, as in the following example: diff --git a/source/reference/command/moveChunk.txt b/source/reference/command/moveChunk.txt index ccd5012d4c1..d251b494bea 100644 --- a/source/reference/command/moveChunk.txt +++ b/source/reference/command/moveChunk.txt @@ -17,10 +17,10 @@ moveChunk to : , } ) - :param command moveChunk: The name of the :term:`collection` which - the :term:`chunk` exists. Specify the - collection's full namespace, including - the database name. + :param moveChunk: The name of the :term:`collection` which + the :term:`chunk` exists. Specify the + collection's full namespace, including + the database name. :param find: A query expression that will select a document within the chunk you wish to move. The query need not specify diff --git a/source/reference/command/movePrimary.txt b/source/reference/command/movePrimary.txt index 610caa58923..bd8e5cbe165 100644 --- a/source/reference/command/movePrimary.txt +++ b/source/reference/command/movePrimary.txt @@ -24,7 +24,7 @@ movePrimary When the command returns, the database's primary location will shift to the designated :term:`shard`. To fully decommission a - shard, use the :dbcommand:`removeshard` command. + shard, use the :dbcommand:`removeShard` command. .. warning:: @@ -32,7 +32,7 @@ movePrimary *no* sharded data exists on this shard. You must drain this shard before running this command because it will move *all* data in this database from this shard. Use the - :dbcommand:`removeshard` command to migrate sharded data from + :dbcommand:`removeShard` command to migrate sharded data from this shard. If you do not remove all sharded data collections before running diff --git a/source/reference/glossary.txt b/source/reference/glossary.txt index 565e67e18c6..1962fca9376 100644 --- a/source/reference/glossary.txt +++ b/source/reference/glossary.txt @@ -349,7 +349,8 @@ Glossary padding The extra space allocated to document on the disk to prevent - moving a document when it grows as the result of :method:`update` + moving a document when it grows as the result of + :method:`update() ` operations. record size @@ -555,7 +556,7 @@ Glossary from one :term:`shard` to another. Administrators must drain shards before removing them from the cluster. - .. seealso:: :dbcommand:`removeshard`, :term:`sharding`. + .. seealso:: :dbcommand:`removeShard`, :term:`sharding`. single-master replication A :term:`replication` topology where only a single database diff --git a/source/reference/method/db.killOp.txt b/source/reference/method/db.killOp.txt index 780c825a100..941ee010c74 100644 --- a/source/reference/method/db.killOp.txt +++ b/source/reference/method/db.killOp.txt @@ -4,7 +4,7 @@ db.killOp() .. default-domain:: mongodb -.. method:: db.killOP(opid) +.. method:: db.killOp(opid) :param opid: Specify an operation ID. 
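Since the :method:`db.killOp()` signature above takes an operation ID, a brief sketch of the typical workflow follows; the ``opid`` value is illustrative and would come from your own :method:`db.currentOp()` output:

.. code-block:: javascript

   // list in-progress operations and note the opid of the one to terminate
   db.currentOp()

   // terminate that operation (the opid shown here is illustrative)
   db.killOp( 3478 )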
diff --git a/source/reference/method/rs.conf.txt b/source/reference/method/rs.conf.txt index d5158466c80..375e59560a2 100644 --- a/source/reference/method/rs.conf.txt +++ b/source/reference/method/rs.conf.txt @@ -8,3 +8,9 @@ rs.conf() :returns: a :term:`document` that contains the current :term:`replica set` configuration object. + +.. method:: rs.config() + + :method:`rs.config()` is an alias of :method:`rs.conf()`. + + diff --git a/source/reference/mongoimport.txt b/source/reference/mongoimport.txt index d27a194d37a..a3b36864cab 100644 --- a/source/reference/mongoimport.txt +++ b/source/reference/mongoimport.txt @@ -3,7 +3,7 @@ .. default-domain:: mongodb ====================== -:program:`mongoimport` +mongoimport ====================== Synopsis diff --git a/source/reference/mongorestore.txt b/source/reference/mongorestore.txt index a316574af6c..5091f0df21a 100644 --- a/source/reference/mongorestore.txt +++ b/source/reference/mongorestore.txt @@ -3,7 +3,7 @@ .. default-domain:: mongodb ======================= -:program:`mongorestore` +mongorestore ======================= Synopsis diff --git a/source/reference/mongos.txt b/source/reference/mongos.txt index 374febebdbd..b23b645e224 100644 --- a/source/reference/mongos.txt +++ b/source/reference/mongos.txt @@ -130,6 +130,15 @@ Options data to the end of the logfile rather than overwriting the content of the log when the process restarts. +.. option:: --syslog + + .. versionadded: 2.1.0 + + Sends all logging output to the host's :term:`syslog` system rather + than to standard output or a log file as with :option:`--logpath`. + + .. warning:: You cannot use :option:`--syslog` with :option:`--logpath`. + .. option:: --pidfilepath Specify a file location to hold the ":term:`PID`" or process ID of the diff --git a/source/reference/operator/pull.txt b/source/reference/operator/pull.txt index e778ed4385a..7e0e60b05b1 100644 --- a/source/reference/operator/pull.txt +++ b/source/reference/operator/pull.txt @@ -16,5 +16,5 @@ $pull :operator:`$pull` removes the value ``value1`` from the array in ``field``, in the document that matches the query statement ``{ field: value }`` in ``collection``. If ``value1`` existed multiple times in the - ``field`` array, :operator:`pull` would remove all instances of + ``field`` array, :operator:`$pull` would remove all instances of ``value1`` in this array. diff --git a/source/reference/operator/within.txt b/source/reference/operator/within.txt index 62022b64807..279d479b6b9 100644 --- a/source/reference/operator/within.txt +++ b/source/reference/operator/within.txt @@ -49,7 +49,7 @@ $within although this is subject to the imprecision of floating point numbers. - Use :operator:`uniqueDocs` to control whether documents with + Use :operator:`$uniqueDocs` to control whether documents with many location fields show up multiple times when more than one of its fields match the query. diff --git a/source/release-notes/2.2.txt b/source/release-notes/2.2.txt index 8f1951d10da..6947eecf86b 100644 --- a/source/release-notes/2.2.txt +++ b/source/release-notes/2.2.txt @@ -36,7 +36,7 @@ Synopsis - You can upgrade the members of a replica set one-by-one [#secondaries-first]_ **except** when using authentication. In deployments that use authentication (i.e. where the - :program:`mongod` runs with :option:`--keyFile `), + :program:`mongod` runs with :option:`--keyFile `), you must shut down the entire set and perform the upgrade at once. The 2.2.1 release will remove this requirement. 
See diff --git a/source/tutorial/change-hostnames-in-a-replica-set.txt b/source/tutorial/change-hostnames-in-a-replica-set.txt index 0c76035d903..a3668259890 100644 --- a/source/tutorial/change-hostnames-in-a-replica-set.txt +++ b/source/tutorial/change-hostnames-in-a-replica-set.txt @@ -68,7 +68,7 @@ Given a :term:`replica set` with three members: - ``database2.example.com:27017`` -And with the following :method:`rs.config()` output: +And with the following :method:`rs.conf()` output: .. code-block:: javascript @@ -170,7 +170,7 @@ This procedure uses the above :ref:`assumptions ` run-time option. Changing + :option:`--replSet ` run-time option. Changing the port number during maintenance prevents clients from connecting - to this host while you perform maintenance. Use the member's usual :option:`--dbpath`, which in this + to this host while you perform maintenance. Use the member's usual + :option:`--dbpath `, which in this example is ``/data/db1``. Use a command that resembles the following: .. code-block:: sh @@ -252,7 +253,7 @@ This procedure uses the above :ref:`assumptions ` option. For + number and use the :option:`--replSet ` option. For example: .. code-block:: sh @@ -266,7 +267,7 @@ This procedure uses the above :ref:`assumptions :") #. Verify that the replica set no longer includes the secondary by - calling the :method:`rs.config()` method in the :program:`mongo` shell: + calling the :method:`rs.conf()` method in the :program:`mongo` shell: .. code-block:: javascript - rs.config() + rs.conf() #. Move the secondary's data directory to an archive folder. For example: @@ -97,11 +97,11 @@ Convert a Secondary to an Arbiter and Reuse the Port Number rs.addArb(":") #. Verify the arbiter belongs to the replica set by calling the - :method:`rs.config()` method in the :program:`mongo` shell. + :method:`rs.conf()` method in the :program:`mongo` shell. .. code-block:: javascript - rs.config() + rs.conf() The arbiter member should include the following: @@ -143,11 +143,11 @@ Convert a Secondary to an Arbiter Running on a New Port Number rs.addArb(":") #. Verify the arbiter has been added to the replica set by calling the - :method:`rs.config()` method in the :program:`mongo` shell. + :method:`rs.conf()` method in the :program:`mongo` shell. .. code-block:: javascript - rs.config() + rs.conf() The arbiter member should include the following: @@ -165,11 +165,11 @@ Convert a Secondary to an Arbiter Running on a New Port Number rs.remove(":") #. Verify that the replica set no longer includes the old secondary by - calling the :method:`rs.config()` method in the :program:`mongo` shell: + calling the :method:`rs.conf()` method in the :program:`mongo` shell: .. code-block:: javascript - rs.config() + rs.conf() #. Move the secondary's data directory to an archive folder. For example: diff --git a/source/tutorial/convert-standalone-to-replica-set.txt b/source/tutorial/convert-standalone-to-replica-set.txt index cb186d781a4..f7bf360d937 100644 --- a/source/tutorial/convert-standalone-to-replica-set.txt +++ b/source/tutorial/convert-standalone-to-replica-set.txt @@ -28,7 +28,7 @@ installed. If you have not already installed MongoDB, see the :ref:`installation tutorials `. 1. Shut down the your MongoDB instance and then restart using - the :option:`--replSet ` option and the name of the + the :option:`--replSet ` option and the name of the :term:`replica set`, which is ``rs0`` in the example below. 
Use a command similar to the following: diff --git a/source/tutorial/deploy-geographically-distributed-replica-set.txt b/source/tutorial/deploy-geographically-distributed-replica-set.txt index 69a802ce3ae..b51f5433aa0 100644 --- a/source/tutorial/deploy-geographically-distributed-replica-set.txt +++ b/source/tutorial/deploy-geographically-distributed-replica-set.txt @@ -198,7 +198,7 @@ To deploy a geographically distributed three-member set: .. code-block:: javascript - rs.config() + rs.conf() #. In the :data:`member array `, save the :data:`members[n]._id` value. The example in the next step assumes @@ -400,7 +400,7 @@ To deploy a geographically distributed four-member set: .. code-block:: javascript - rs.config() + rs.conf() #. In the :data:`member array `, save the :data:`members[n]._id` value. The example in the next step assumes diff --git a/source/tutorial/enforce-unique-keys-for-sharded-collections.txt b/source/tutorial/enforce-unique-keys-for-sharded-collections.txt index 4a142cc6c1d..0f677f0b642 100644 --- a/source/tutorial/enforce-unique-keys-for-sharded-collections.txt +++ b/source/tutorial/enforce-unique-keys-for-sharded-collections.txt @@ -7,7 +7,7 @@ Enforce Unique Keys for Sharded Collections Overview -------- -The :dbcommand:`unique ` constraint on indexes ensures +The :method:`unique ` constraint on indexes ensures that only one document can have a value for a field in a :term:`collection`. For :ref:`sharded collections these unique indexes cannot enforce uniqueness ` because @@ -58,7 +58,7 @@ Process ~~~~~~~ To shard a collection -using the ``unique`` constraint, specify the :dbcommand:`shardcollection` command in the following form: +using the ``unique`` constraint, specify the :dbcommand:`shardCollection` command in the following form: .. code-block:: javascript @@ -72,7 +72,7 @@ use this as the shard key. To use the .. code-block:: javascript - db.runCommand( { shardcollection : "test.users" } ) + db.runCommand( { shardCollection : "test.users" } ) .. warning:: @@ -135,7 +135,7 @@ using the ``email`` field as the :term:`shard key`: .. code-block:: javascript - db.runCommand( { shardcollection : "records.proxy" , key : { email : 1 } , unique : true } ); + db.runCommand( { shardCollection : "records.proxy" , key : { email : 1 } , unique : true } ); If you do not need to shard the proxy collection, use the following command to create a unique index on the ``email`` field: @@ -176,7 +176,7 @@ continue by inserting the actual document into the ``information`` collection. .. see:: The full documentation of: :method:`db.collection.ensureIndex()`, - :dbcommand:`ensureIndex`, and :dbcommand:`shardcollection`. + :dbcommand:`ensureIndex`, and :dbcommand:`shardCollection`. Considerations ~~~~~~~~~~~~~~ diff --git a/source/tutorial/expand-replica-set.txt b/source/tutorial/expand-replica-set.txt index fc8f12d7bd7..5241acf12d1 100644 --- a/source/tutorial/expand-replica-set.txt +++ b/source/tutorial/expand-replica-set.txt @@ -129,12 +129,12 @@ This procedure uses the above :ref:`example configuration `: .. code-block:: javascript - rs.config() + rs.conf() You can use the :method:`rs.status()` function to provide an overview of :doc:`replica set status `. diff --git a/source/tutorial/install-mongodb-on-windows.txt b/source/tutorial/install-mongodb-on-windows.txt index 63c010c211b..66787a31143 100644 --- a/source/tutorial/install-mongodb-on-windows.txt +++ b/source/tutorial/install-mongodb-on-windows.txt @@ -115,7 +115,7 @@ command sequence: .. 
note:: You may specify an alternate path for ``\data\db`` with the - :setting:`dbpath` setting for :program:`mongod.ext`, as in the + :setting:`dbpath` setting for :program:`mongod.exe`, as in the following example: .. code-block:: powershell @@ -267,7 +267,7 @@ Run all of the following commands in :guilabel:`Command Prompt` with If you wish to use an alternate path for your :setting:`dbpath` specify it in the config file (e.g. ``C:\mongodb\mongod.cfg``) on that you specified in the :option:`--install ` - operation. You may also specify :option:`--dbpath ` + operation. You may also specify :option:`--dbpath ` on the command line; however, always prefer the configuration file. If the :setting:`dbpath`` directory does not exist, diff --git a/source/tutorial/reconfigure-replica-set-with-unavailable-members.txt b/source/tutorial/reconfigure-replica-set-with-unavailable-members.txt index f3653534beb..2ca7df12c1c 100644 --- a/source/tutorial/reconfigure-replica-set-with-unavailable-members.txt +++ b/source/tutorial/reconfigure-replica-set-with-unavailable-members.txt @@ -47,7 +47,7 @@ To force reconfiguration: .. code-block:: javascript - cfg = rs.config() + cfg = rs.conf() printjson(cfg) @@ -111,10 +111,10 @@ This option replaces the :term:`replica set` with a :term:`standalone` server. mongod --dbpath /data/db/ --shutdown - Set :option:`--dbpath` to the data directory of your + Set :option:`--dbpath ` to the data directory of your :program:`mongod` instance. -#. Move the data directory (i.e. :setting:`dbpath`) from each +#. Move the data directory (i.e. :setting:`dbpath `) from each surviving member to an archive folder. For example: .. code-block:: sh @@ -125,7 +125,7 @@ This option replaces the :term:`replica set` with a :term:`standalone` server. this data. #. Restart one of the :program:`mongod` instances *without* the - :option:`--replSet ` parameter. + :option:`--replSet ` parameter. You are back online with a single server that is not a replica set member. Clients can use this server for both reads and writes. @@ -145,10 +145,10 @@ members must resync from this new primary. mongod --dbpath /data/db/ --shutdown - Set :option:`--dbpath` to the data directory of your + Set :option:`--dbpath ` to the data directory of your :program:`mongod` instance. -#. Move the data directory (i.e. :setting:`dbpath`) from each +#. Move the data directory (i.e. :setting:`dbpath `) from each surviving member to an archive. For example: .. code-block:: sh @@ -166,7 +166,7 @@ members must resync from this new primary. mongo --replSet rs1 - See :setting:`replSet` and :option:`--replSet ` + See :setting:`replSet` and :option:`--replSet ` for more information. #. On the new primary, add the other instances as members of the replica diff --git a/source/tutorial/remove-shards-from-cluster.txt b/source/tutorial/remove-shards-from-cluster.txt index 5011f979c82..51629dd102a 100644 --- a/source/tutorial/remove-shards-from-cluster.txt +++ b/source/tutorial/remove-shards-from-cluster.txt @@ -36,7 +36,7 @@ Complete this procedure by connecting to any :program:`mongos` in the cluster using the :program:`mongo` shell. You can only remove a shard by its shard name. To discover or -confirm the name of a shard, use the :dbcommand:`listshards` command, +confirm the name of a shard, use the :dbcommand:`listShards` command, :dbcommand:`printShardingStatus` command, or :method:`sh.status()` shell helper. The example commands in this document remove a shard named ``mongodb0``. 
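As a sketch of the shard-name discovery step described above, run the following against a :program:`mongos` instance; ``mongodb0`` matches the shard name used in the tutorial's examples:

.. code-block:: javascript

   // confirm the shard names registered in the cluster
   db.adminCommand( { listShards: 1 } )

   // or print a full cluster overview with the shell helper
   sh.status()

   // begin draining the shard named mongodb0
   db.adminCommand( { removeShard: "mongodb0" } )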
diff --git a/source/use-cases/hierarchical-aggregation.txt b/source/use-cases/hierarchical-aggregation.txt index dd035739ef6..1949476b36f 100644 --- a/source/use-cases/hierarchical-aggregation.txt +++ b/source/use-cases/hierarchical-aggregation.txt @@ -33,7 +33,7 @@ process until you have generated all required views. The solution uses several collections: the raw data (i.e. ``events``) collection as well as collections for aggregated hourly, daily, weekly, monthly, and yearly statistics. All aggregations use the -:dbcommand:`mapreduce` :term:`command `, in a +:dbcommand:`mapReduce` :term:`command `, in a hierarchical process. The following figure illustrates the input and output of each job: @@ -140,7 +140,7 @@ Aggregation Although this solution uses Python and :api:`PyMongo ` to connect with MongoDB, you must pass JavaScript functions (i.e. ``mapf``, ``reducef``, and ``finalizef``) to the - :dbcommand:`mapreduce` command. + :dbcommand:`mapReduce` command. Begin by creating a map function, as below: diff --git a/source/use-cases/pre-aggregated-reports.txt b/source/use-cases/pre-aggregated-reports.txt index df6a108c62b..9099529a26b 100644 --- a/source/use-cases/pre-aggregated-reports.txt +++ b/source/use-cases/pre-aggregated-reports.txt @@ -570,11 +570,11 @@ your deployment, consider using ``{ metadata.site: 1, metadata.page: event) will lead to a well balanced cluster for most deployments. Enable sharding for the daily statistics collection with the following -:dbcommand:`shardcollection`` command in the Python/PyMongo console: +:dbcommand:`shardCollection` command in the Python/PyMongo console: .. code-block:: pycon - >>> db.command('shardcollection', 'stats.daily', { + >>> db.command('shardCollection', 'stats.daily', { ... key : { 'metadata.site': 1, 'metadata.page' : 1 } }) Upon success, you will see the following response: @@ -584,12 +584,12 @@ Upon success, you will see the following response: { "collectionsharded" : "stats.daily", "ok" : 1 } Enable sharding for the monthly statistics collection with the -following :dbcommand:`shardcollection`` command in the Python/PyMongo +following :dbcommand:`shardCollection` command in the Python/PyMongo console: .. code-block:: pycon - >>> db.command('shardcollection', 'stats.monthly', { + >>> db.command('shardCollection', 'stats.monthly', { ... key : { 'metadata.site': 1, 'metadata.page' : 1 } }) Upon success, you will see the following response: @@ -607,22 +607,22 @@ unavoidable, since all update for a single page are going to a single You may wish to include the date in addition to the site, and page fields so that MongoDB can split histories so that you can serve different historical ranges with different shards. Use the following -:dbcommand:`shardcollection`` command to shard the daily statistics +:dbcommand:`shardCollection` command to shard the daily statistics collection in the Python/PyMongo console: .. code-block:: pycon - >>> db.command('shardcollection', 'stats.daily', { + >>> db.command('shardCollection', 'stats.daily', { ... 'key':{'metadata.site':1,'metadata.page':1,'metadata.date':1}}) { "collectionsharded" : "stats.daily", "ok" : 1 } Enable sharding for the monthly statistics collection with the -following :dbcommand:`shardcollection`` command in the Python/PyMongo +following :dbcommand:`shardCollection` command in the Python/PyMongo console: .. code-block:: pycon - >>> db.command('shardcollection', 'stats.monthly', { + >>> db.command('shardCollection', 'stats.monthly', { ... 
'key':{'metadata.site':1,'metadata.page':1,'metadata.date':1}}) { "collectionsharded" : "stats.monthly", "ok" : 1 } diff --git a/source/use-cases/product-catalog.txt b/source/use-cases/product-catalog.txt index 0db11551d73..0ada92bc337 100644 --- a/source/use-cases/product-catalog.txt +++ b/source/use-cases/product-catalog.txt @@ -487,12 +487,12 @@ actual activity and distribution. Consider that: In the following example, assume that the ``details.genre`` field is the second-most queried field after ``type``. Enable sharding using -the following :dbcommand:`shardcollection` operation at the +the following :dbcommand:`shardCollection` operation at the Python/PyMongo console: .. code-block:: pycon - >>> db.command('shardcollection', 'product', { + >>> db.command('shardCollection', 'product', { ... key : { 'type': 1, 'details.genre' : 1, 'sku':1 } }) { "collectionsharded" : "details.genre", "ok" : 1 } diff --git a/source/use-cases/storing-log-data.txt b/source/use-cases/storing-log-data.txt index c5c7dc2a118..00822091f59 100644 --- a/source/use-cases/storing-log-data.txt +++ b/source/use-cases/storing-log-data.txt @@ -491,7 +491,7 @@ The :term:`aggregation framework` provides the capacity for queries that select, process, and aggregate results from large numbers of documents. The :method:`aggregate()` (and :dbcommand:`aggregate` :term:`command `) offers greater flexibility, -capacity with less complexity than the existing :dbcommand:`mapreduce` +capacity with less complexity than the existing :dbcommand:`mapReduce` and :dbcommand:`group` aggregation. Consider the following aggregation :term:`pipeline`: [#sql-aggregation-equivalents]_ From c4f327ef6a229ecbe5fb1cb969923493bdbda320 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Thu, 4 Oct 2012 00:44:20 -0400 Subject: [PATCH 2/2] minor tweaks in links and a typo edit in indexes.txt --- source/administration/backups.txt | 5 +++-- source/administration/indexes.txt | 4 ++-- source/reference/mongoexport.txt | 4 ++-- 3 files changed, 7 insertions(+), 6 deletions(-) diff --git a/source/administration/backups.txt b/source/administration/backups.txt index ae5ef0a10ac..e330ad22448 100644 --- a/source/administration/backups.txt +++ b/source/administration/backups.txt @@ -504,8 +504,9 @@ username and password credentials as above. If you created your database dump using the :option:`--oplog ` option to ensure a point-in-time snapshot, call -:program:`mongorestore` with the ":option:`--oplogReplay `" option as in the following example: +:program:`mongorestore` with the +:option:`--oplogReplay ` +option as in the following example: .. code-block:: sh diff --git a/source/administration/indexes.txt b/source/administration/indexes.txt index c6e31ff991b..520758a7747 100644 --- a/source/administration/indexes.txt +++ b/source/administration/indexes.txt @@ -22,7 +22,7 @@ Create an Index ~~~~~~~~~~~~~~~ To create an index, use :method:`db.collection.ensureIndex()` or a similar -:api:`method your driver <>`. For example +:api:`method from your driver <>`. For example the following creates [#ensure]_ an index on the ``phone-number`` field of the ``people`` collection: @@ -383,4 +383,4 @@ operation is an index build. The ``msg`` field also indicates the percent of the build that is complete. If you need to terminate an ongoing index build, You can use the -:method:`db.killOP()` method in the :program:`mongo` shell. +:method:`db.killOp()` method in the :program:`mongo` shell. 
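A minimal sketch, assuming an in-progress index build, of how you might locate its operation ID and terminate it as the indexes.txt hunk above describes; the filter on the ``msg`` field is an assumption about how the build reports its progress:

.. code-block:: javascript

   // find operations whose msg field reports an index build in progress
   db.currentOp().inprog.forEach( function ( op ) {
      if ( op.msg && /index/i.test( op.msg ) ) {
         printjson( { opid: op.opid, msg: op.msg } );
      }
   } )

   // pass the opid reported above to db.killOp() (value is illustrative)
   db.killOp( 11234 )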
diff --git a/source/reference/mongoexport.txt b/source/reference/mongoexport.txt index 63e984111df..6c7169bfc2a 100644 --- a/source/reference/mongoexport.txt +++ b/source/reference/mongoexport.txt @@ -3,7 +3,7 @@ .. default-domain:: mongodb ====================== -:program:`mongoexport` +mongoexport ====================== Synopsis @@ -146,7 +146,7 @@ Options .. option:: --jsonArray - Modifies the output of :program:`mongoexport` so that to write the + Modifies the output of :program:`mongoexport` to write the entire contents of the export as a single :term:`JSON` array. By default :program:`mongoexport` writes data using one JSON document for every MongoDB document.