From be3c2d72416f9284cf2571aaa7507d905b13820f Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Fri, 14 Jun 2013 12:17:32 -0400 Subject: [PATCH 1/3] minor: remove dup config-database entry from toc --- source/sharding.txt | 1 - 1 file changed, 1 deletion(-) diff --git a/source/sharding.txt b/source/sharding.txt index 07e3d4fd28d..6ba0ef96e67 100644 --- a/source/sharding.txt +++ b/source/sharding.txt @@ -36,7 +36,6 @@ Reference - :doc:`/reference/sharding-commands` - :doc:`/reference/config-database` - :doc:`/reference/program/mongos` -- :doc:`/reference/config-database` .. toctree:: :hidden: From 16809f122fb51ce0c4cd5ef3f66083bf10a567f4 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Fri, 14 Jun 2013 18:15:03 -0400 Subject: [PATCH 2/3] minor: typos and copy-editing --- source/administration/production-notes.txt | 2 +- source/core/aggregation.txt | 4 ++-- source/core/replication.txt | 14 ++++++------- source/faq/fundamentals.txt | 2 +- source/reference/command/compact.txt | 2 +- source/reference/command/replSetSyncFrom.txt | 2 +- source/reference/glossary.txt | 22 ++++++++++---------- source/tutorial/getting-started.txt | 2 +- source/tutorial/shard-gridfs-data.txt | 4 ++-- 9 files changed, 27 insertions(+), 27 deletions(-) diff --git a/source/administration/production-notes.txt b/source/administration/production-notes.txt index d141b16bbc7..b643135e076 100644 --- a/source/administration/production-notes.txt +++ b/source/administration/production-notes.txt @@ -408,7 +408,7 @@ here: get some reporting back of its occurrence. #. Or, if you want to fail over manually, you can set your secondaries - to ``priority:0`` in their configuration. Then manual action would be + to ``priority:0`` in their configurations. Then manual action would be required for a failover. This is practical for a small cluster; for a large cluster you will want automation. 
diff --git a/source/core/aggregation.txt b/source/core/aggregation.txt index c8d7b2c61da..1e4b484cedb 100644 --- a/source/core/aggregation.txt +++ b/source/core/aggregation.txt @@ -250,7 +250,7 @@ Pipeline Sequence Optimization .. versionchanged:: 2.4 :term:`Aggregation` operations have an optimization phase which -attempts to re-arrange the pipeline for improved performance. +attempts to re-arrange the pipeline for improved performance. ``$sort`` + ``$skip`` + ``$limit`` Sequence Optimization ```````````````````````````````````````````````````````` @@ -284,7 +284,7 @@ the following: ``$limit`` + ``$skip`` + ``$limit`` + ``$skip`` Sequence Optimization ````````````````````````````````````````````````````````````````````` -When you have continuous sequence of :pipeline:`$limit` pipeline +When you have a continuous sequence of a :pipeline:`$limit` pipeline stage followed by a :pipeline:`$skip` pipeline stage, the aggregation will attempt to re-arrange the pipeline stages to combine the limits together and the skips together. For example, if the diff --git a/source/core/replication.txt b/source/core/replication.txt index 13dabb91b40..0044058098f 100644 --- a/source/core/replication.txt +++ b/source/core/replication.txt @@ -80,7 +80,7 @@ different usage patterns than the other members and require separation from normal traffic. Typically, hidden members provide reporting, dedicated backups, and dedicated read-only testing and integration support. - + To configure a member to be a hidden member, see :doc:`/tutorial/configure-a-hidden-replica-set-member`. @@ -97,7 +97,7 @@ the latest entry in this member's oplog will not be more recent than one hour old, and the state of data for the member will reflect the state of the set an hour earlier. -.. example:: If the current time is 09:52 and the secondary is a +.. example:: If the current time is 09:52 and the secondary is delayed by an hour, no operation will be more recent than 08:52. 
Delayed members may help recover from various kinds of human error. Such @@ -147,7 +147,7 @@ interactions with the rest of the replica set: documentation :doc:`/tutorial/configure-ssl` for more information. As with all MongoDB components, run arbiters on secure networks. - + To add an arbiter to the replica set, see :doc:`/tutorial/add-replica-set-arbiter`. @@ -209,7 +209,7 @@ that member remains available and accessible to a majority of the replica set, there will be no rollback. Rollbacks remove those operations from the instance that were never -replicated to the set so that the data set is in a consistent state. +replicated so that the data set is in a consistent state. The :program:`mongod` program writes rolled back data to a :term:`BSON` file that you can view using :program:`bsondump`, applied manually using :program:`mongorestore`. @@ -518,7 +518,7 @@ The following factors affect how MongoDB uses space in the oplog: - If a significant portion of your workload entails in-place updates. In-place updates create a large number of operations but do not - change the quantity data on disk. + change the quantity of data on disk. If you can predict your replica set's workload to resemble one of the above patterns, then you may want to consider creating an oplog @@ -541,7 +541,7 @@ Replica Set Deployment Without replication, a standalone MongoDB instance represents a single point of failure and any disruption of the MongoDB system will render the database unusable and potentially unrecoverable. Replication -increase the reliability of the database instance, and replica sets +increases the reliability of the database instance, and replica sets are capable of distributing reads to :term:`secondary` members depending on :term:`read preference`. For database work loads dominated by read operations, (i.e. "read heavy") replica sets can greatly increase the @@ -646,7 +646,7 @@ other. 
The content of the key file is arbitrary but must be the same on all members of the replica set and on all :program:`mongos` instances that connect to the set. -The key file must be less one kilobyte in size and may only contain +The key file must be less than one kilobyte in size and may only contain characters in the base64 set. The key file must not have group or "world" permissions on UNIX systems. Use the following command to use the OpenSSL package to generate "random" content for use in a key file: diff --git a/source/faq/fundamentals.txt b/source/faq/fundamentals.txt index e2548f8d14d..a604d491d45 100644 --- a/source/faq/fundamentals.txt +++ b/source/faq/fundamentals.txt @@ -9,7 +9,7 @@ FAQ: MongoDB Fundamentals :local: This document addresses basic high level questions about MongoDB and -it's use. +its use. If you don't find the answer you're looking for, check the :doc:`complete list of FAQs ` or post your question to the diff --git a/source/reference/command/compact.txt b/source/reference/command/compact.txt index 0b209cc8b6e..130c8547e8a 100644 --- a/source/reference/command/compact.txt +++ b/source/reference/command/compact.txt @@ -85,7 +85,7 @@ compact :dbcommand:`compact` compacts existing documents, but does not reset ``paddingFactor`` statistics for the collection. After the - :dbcommand:`compact` MongoDB will use the existing + :dbcommand:`compact` operation, MongoDB will use the existing ``paddingFactor`` when allocating new records for documents in this collection. diff --git a/source/reference/command/replSetSyncFrom.txt b/source/reference/command/replSetSyncFrom.txt index cf15540913c..3233b943cae 100644 --- a/source/reference/command/replSetSyncFrom.txt +++ b/source/reference/command/replSetSyncFrom.txt @@ -32,7 +32,7 @@ replSetSyncFrom behind the current member, :program:`mongod` will return and log a warning, but it still *will* replicate from the member that is behind. 
- If you run :method:`rs.syncFrom()` during initial sync, MongoDB + If you run :dbcommand:`replSetSyncFrom` during initial sync, MongoDB produces no error messages, but the sync target will not change until after the initial sync operation. diff --git a/source/reference/glossary.txt b/source/reference/glossary.txt index 92e94b0f2be..da4ee70eb4b 100644 --- a/source/reference/glossary.txt +++ b/source/reference/glossary.txt @@ -116,21 +116,21 @@ Glossary total data set. In production, all shards should be replica sets. See :term:`sharding`. - .. seealso:: The documents in the :doc:`/sharding` section of manual. + .. seealso:: The documents in the :doc:`/sharding` section of this manual. sharding A database architecture that enable horizontal scaling by splitting data into key ranges among two or more replica sets. This architecture is also known as "range-based partitioning." See :term:`shard`. - .. seealso:: The documents in the :doc:`/sharding` section of manual. + .. seealso:: The documents in the :doc:`/sharding` section of this manual. sharded cluster The set of nodes comprising a :term:`sharded ` MongoDB deployment. A sharded cluster consists of three config processes, one or more replica sets, and one or more :program:`mongos` routing processes. - .. seealso:: The documents in the :doc:`/sharding` section of manual. + .. seealso:: The documents in the :doc:`/sharding` section of this manual. partition A distributed system architecture that splits data into ranges. @@ -268,18 +268,18 @@ Glossary btree A data structure used by most database management systems - for to store indexes. MongoDB uses b-trees for its indexes. + to store indexes. MongoDB uses b-trees for its indexes. ISODate The international date format used by :program:`mongo` to display dates. E.g. ``YYYY-MM-DD HH:MM.SS.milis``. 
journal - A sequential, binary transaction used to bring the database into + A sequential, binary transaction log used to bring the database into a consistent state in the event of a hard shutdown. MongoDB enables journaling by default for 64-bit builds of MongoDB version 2.0 and newer. Journal files are pre-allocated and will - exist as three 1GB file in the data directory. To make journal + exist as three 1GB files in the data directory. To make journal files smaller, use :setting:`smallfiles`. When enabled, MongoDB writes data first to the journal and then @@ -371,7 +371,7 @@ Glossary haystack index In the context of :term:`geospatial` queries, haystack indexes - enhance searches by creating "bucket" of objects grouped by a second + enhance searches by creating "buckets" of objects grouped by a second criterion. For example, you might want all geospatial searches to first select along a non-geospatial dimension and then match on location. See :doc:`/core/geohaystack` for more @@ -478,13 +478,13 @@ Glossary shard key In a sharded collection, a shard key is the field that MongoDB - uses to distribute documents among members of the + uses to distribute documents among members of the :term:`sharded cluster`. hashed shard key A :ref:`hashed shard key ` is a special type of :term:`shard key` that uses a hash of the value in the shard - key field is uses to distribute documents among members of the + key field to distribute documents among members of the :term:`sharded cluster`. query @@ -672,7 +672,7 @@ Glossary .. seealso:: :ref:`Replica Set Failover `. data-center awareness - A property that allows clients to address members in a system to + A property that allows clients to address members in a system based upon their location. :term:`Replica sets ` implement data-center @@ -695,7 +695,7 @@ Glossary ``/etc/rc.d/`` directories. 
map-reduce - A data and processing and aggregation paradigm consisting of a + A data processing and aggregation paradigm consisting of a "map" phase that selects data, and a "reduce" phase that transforms the data. In MongoDB, you can run arbitrary aggregations over data using map-reduce. diff --git a/source/tutorial/getting-started.txt b/source/tutorial/getting-started.txt index 107c5df3131..b188e1f0ea2 100644 --- a/source/tutorial/getting-started.txt +++ b/source/tutorial/getting-started.txt @@ -484,7 +484,7 @@ For more information on querying for documents, see the Limit the Number of Documents in the Result Set ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -To increase perfomance, you can constrain the size of the result by +To increase performance, you can constrain the size of the result by limiting the amount of data your application must receive over the network. diff --git a/source/tutorial/shard-gridfs-data.txt b/source/tutorial/shard-gridfs-data.txt index 0f71ce3a643..c5e1a008bad 100644 --- a/source/tutorial/shard-gridfs-data.txt +++ b/source/tutorial/shard-gridfs-data.txt @@ -32,7 +32,7 @@ issue commands similar to the following: db.runCommand( { shardCollection : "test.fs.chunks" , key : { files_id : 1 , n : 1 } } ) -You may also want to shard using just the ``file_id`` field, as in +You may also want to shard using just the ``files_id`` field, as in the following operation: .. code-block:: javascript @@ -54,5 +54,5 @@ The default ``files_id`` value is an :term:`ObjectId`, as a result the values of ``files_id`` are always ascending, and applications will insert all new GridFS data to a single chunk and shard. If your write load is too high for a single server to handle, consider -a different shard key or use a different value for different value +a different shard key or use a different value for ``_id`` in the ``files`` collection. 
From fb7b58873b64188867506e91db8c9af6bb255570 Mon Sep 17 00:00:00 2001 From: Ed Costello Date: Fri, 14 Jun 2013 18:15:25 -0400 Subject: [PATCH 3/3] minor: read from / write to stylistic nit --- source/reference/glossary.txt | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/source/reference/glossary.txt b/source/reference/glossary.txt index da4ee70eb4b..404592d46ca 100644 --- a/source/reference/glossary.txt +++ b/source/reference/glossary.txt @@ -572,8 +572,8 @@ Glossary A property of a distributed system requiring that all members always reflect the latest changes to the system. In a database system, this means that any system that can provide data must - reflect the latest writes at all times. In MongoDB, reads to a - primary have :term:`strict consistency`; reads to secondary + reflect the latest writes at all times. In MongoDB, reads from a + primary have :term:`strict consistency`; reads from secondary members have :term:`eventual consistency`. write concern
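As a sanity check on the ``$skip``/``$limit`` re-ordering that patch 2's aggregation hunks describe, the equivalence can be sketched outside MongoDB with plain arrays. This is an illustration only, not MongoDB internals; the helper names below are invented for this sketch:

```javascript
// Array analogue of the pipeline re-ordering described in
// source/core/aggregation.txt; illustration only, not MongoDB code.
const docs = Array.from({ length: 20 }, (_, i) => ({ _id: i }));

// { $skip: s } followed by { $limit: l } ...
const skipThenLimit = (arr, s, l) => arr.slice(s).slice(0, l);

// ... is equivalent to { $limit: s + l } followed by { $skip: s },
// which is what allows the limit to move ahead of the skip.
const limitThenSkip = (arr, s, l) => arr.slice(0, s + l).slice(s);

console.log(JSON.stringify(skipThenLimit(docs, 5, 3)));
console.log(JSON.stringify(limitThenSkip(docs, 5, 3)));
// Both print the same three documents: _id 5, 6, and 7.
```

The same identity is why, in a ``$sort`` + ``$skip`` + ``$limit`` sequence, the coalesced limit can sit directly behind the sort.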