Collection of minor copy edits for typos and style #1045

Closed · wants to merge 3 commits
2 changes: 1 addition & 1 deletion source/administration/production-notes.txt
@@ -408,7 +408,7 @@ here:
get some reporting back of its occurrence.

#. Or, if you want to fail over manually, you can set your secondaries
-to ``priority:0`` in their configuration. Then manual action would be
+to ``priority:0`` in their configurations. Then manual action would be
required for a failover. This is practical for a small cluster; for a
large cluster you will want automation.
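
As a hedged aside on the manual-failover setup this hunk describes, a
minimal mongo shell sketch might look like the following (member
indexes are assumptions, not part of the patch):

.. code-block:: javascript

   // Make two secondaries ineligible for automatic election by
   // giving them priority 0; failover then requires a manual
   // reconfiguration.
   cfg = rs.conf()
   cfg.members[1].priority = 0
   cfg.members[2].priority = 0
   rs.reconfig(cfg)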

4 changes: 2 additions & 2 deletions source/core/aggregation.txt
@@ -250,7 +250,7 @@ Pipeline Sequence Optimization
.. versionchanged:: 2.4

:term:`Aggregation` operations have an optimization phase which
-attempts to re-arrange the pipeline for improved performance. 
+attempts to re-arrange the pipeline for improved performance.

``$sort`` + ``$skip`` + ``$limit`` Sequence Optimization
````````````````````````````````````````````````````````
@@ -284,7 +284,7 @@ the following:
``$limit`` + ``$skip`` + ``$limit`` + ``$skip`` Sequence Optimization
`````````````````````````````````````````````````````````````````````

-When you have continuous sequence of :pipeline:`$limit` pipeline
+When you have a continuous sequence of a :pipeline:`$limit` pipeline
stage followed by a :pipeline:`$skip` pipeline stage, the
aggregation will attempt to re-arrange the pipeline stages to combine
the limits together and the skips together. For example, if the
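
The worked example above is truncated by the diff view. As an assumed
sketch of the coalescing this hunk describes (collection name and
values are illustrative only):

.. code-block:: javascript

   // The optimizer can combine the two limits into
   // min(100, 5 + 10) = 15 and the two skips into 5 + 2 = 7.
   db.records.aggregate([
       { $limit: 100 }, { $skip: 5 },
       { $limit: 10 },  { $skip: 2 }
   ])
   // ...selects the same documents as:
   db.records.aggregate([ { $limit: 15 }, { $skip: 7 } ])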
14 changes: 7 additions & 7 deletions source/core/replication.txt
@@ -80,7 +80,7 @@ different usage patterns than the other members and require separation
from normal traffic. Typically, hidden members provide reporting,
dedicated backups, and dedicated read-only testing and integration
support.

To configure a member to be a hidden member, see
:doc:`/tutorial/configure-a-hidden-replica-set-member`.

@@ -97,7 +97,7 @@ the latest entry in this member's oplog will not be more recent than
one hour old, and the state of data for the member will reflect the state of the
set an hour earlier.

-.. example:: If the current time is 09:52 and the secondary is a
+.. example:: If the current time is 09:52 and the secondary is
delayed by an hour, no operation will be more recent than 08:52.
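
A hedged sketch of configuring such a delayed member (member index and
delay value are assumptions, not part of this patch):

.. code-block:: javascript

   // Hide a secondary and delay its replication by one hour
   // (3600 seconds); delayed members should also be priority 0
   // so they can never become primary.
   cfg = rs.conf()
   cfg.members[2].priority = 0
   cfg.members[2].hidden = true
   cfg.members[2].slaveDelay = 3600
   rs.reconfig(cfg)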

Delayed members may help recover from various kinds of human error. Such
@@ -147,7 +147,7 @@ interactions with the rest of the replica set:
documentation :doc:`/tutorial/configure-ssl` for more
information. As with all MongoDB components, run arbiters on secure
networks.

To add an arbiter to the replica set, see
:doc:`/tutorial/add-replica-set-arbiter`.
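
A hedged one-liner for the tutorial step referenced above (the host
name is an assumption):

.. code-block:: javascript

   // Add an arbiter, which votes in elections but holds no data.
   rs.addArb( "arbiter.example.net:27017" )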

@@ -209,7 +209,7 @@ that member remains available and accessible to a majority of the
replica set, there will be no rollback.

Rollbacks remove those operations from the instance that were never
-replicated to the set so that the data set is in a consistent state.
+replicated to so that the data set is in a consistent state.
The :program:`mongod` program writes rolled back data to a :term:`BSON`
file that you can view using :program:`bsondump`, applied manually
using :program:`mongorestore`.
@@ -518,7 +518,7 @@ The following factors affect how MongoDB uses space in the oplog:
- If a significant portion of your workload entails in-place updates.

In-place updates create a large number of operations but do not
-change the quantity data on disk.
+change the quantity of data on disk.
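
As a hedged illustration of an in-place update (collection and field
names are assumptions):

.. code-block:: javascript

   // Each $inc writes an oplog entry but does not grow the
   // document on disk.
   db.metrics.update( { _id: "home-page" }, { $inc: { views: 1 } } )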

If you can predict your replica set's workload to resemble one
of the above patterns, then you may want to consider creating an oplog
@@ -541,7 +541,7 @@ Replica Set Deployment
Without replication, a standalone MongoDB instance represents a single
point of failure and any disruption of the MongoDB system will render
the database unusable and potentially unrecoverable. Replication
-increase the reliability of the database instance, and replica sets
+increases the reliability of the database instance, and replica sets
are capable of distributing reads to :term:`secondary` members depending
on :term:`read preference`. For database work loads dominated by read
operations, (i.e. "read heavy") replica sets can greatly increase the
@@ -646,7 +646,7 @@ other. The content of the key file is arbitrary but must be the same
on all members of the replica set and on all :program:`mongos`
instances that connect to the set.

-The key file must be less one kilobyte in size and may only contain
+The key file must be less than one kilobyte in size and may only contain
characters in the base64 set. The key file must not have group or "world"
permissions on UNIX systems. Use the following command to use the
OpenSSL package to generate "random" content for use in a key file:
2 changes: 1 addition & 1 deletion source/faq/fundamentals.txt
@@ -9,7 +9,7 @@ FAQ: MongoDB Fundamentals
:local:

This document addresses basic high level questions about MongoDB and
-it's use.
+its use.

If you don't find the answer you're looking for, check
the :doc:`complete list of FAQs </faq>` or post your question to the
2 changes: 1 addition & 1 deletion source/reference/command/compact.txt
@@ -85,7 +85,7 @@ compact

:dbcommand:`compact` compacts existing documents, but does not
reset ``paddingFactor`` statistics for the collection. After the
-:dbcommand:`compact` MongoDB will use the existing
+:dbcommand:`compact` operation, MongoDB will use the existing
``paddingFactor`` when allocating new records for documents in
this collection.
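
A hedged sketch of the behavior this hunk documents (collection name
and padding value are assumptions):

.. code-block:: javascript

   // Compact a collection and set a new padding factor explicitly,
   // since compact by itself keeps the existing one.
   db.runCommand( { compact: "orders", paddingFactor: 1.1 } )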

2 changes: 1 addition & 1 deletion source/reference/command/replSetSyncFrom.txt
@@ -32,7 +32,7 @@ replSetSyncFrom
behind the current member, :program:`mongod` will return and log a
warning, but it still *will* replicate from the member that is behind.

-If you run :method:`rs.syncFrom()` during initial sync, MongoDB
+If you run :dbcommand:`replSetSyncFrom` during initial sync, MongoDB
produces no error messages, but the sync target will not change
until after the initial sync operation.
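
For reference, a hedged sketch of invoking the command (the host name
is an assumption):

.. code-block:: javascript

   // Direct this member to sync from a specific set member.
   db.adminCommand( { replSetSyncFrom: "mdb2.example.net:27017" } )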

26 changes: 13 additions & 13 deletions source/reference/glossary.txt
@@ -116,21 +116,21 @@ Glossary
total data set. In production, all shards should be replica sets.
See :term:`sharding`.

-.. seealso:: The documents in the :doc:`/sharding` section of manual.
+.. seealso:: The documents in the :doc:`/sharding` section of this manual.

sharding
A database architecture that enable horizontal scaling by splitting
data into key ranges among two or more replica sets. This architecture
is also known as "range-based partitioning." See :term:`shard`.

-.. seealso:: The documents in the :doc:`/sharding` section of manual.
+.. seealso:: The documents in the :doc:`/sharding` section of this manual.

sharded cluster
The set of nodes comprising a :term:`sharded <sharding>` MongoDB deployment. A sharded cluster
consists of three config processes, one or more replica sets, and one or more
:program:`mongos` routing processes.

-.. seealso:: The documents in the :doc:`/sharding` section of manual.
+.. seealso:: The documents in the :doc:`/sharding` section of this manual.

partition
A distributed system architecture that splits data into ranges.
@@ -268,18 +268,18 @@ Glossary

btree
A data structure used by most database management systems
-for to store indexes. MongoDB uses b-trees for its indexes.
+to store indexes. MongoDB uses b-trees for its indexes.

ISODate
The international date format used by :program:`mongo`
to display dates. E.g. ``YYYY-MM-DD HH:MM.SS.milis``.

journal
-A sequential, binary transaction used to bring the database into
+A sequential, binary transaction log used to bring the database into
a consistent state in the event of a hard shutdown. MongoDB
enables journaling by default for 64-bit builds of MongoDB
version 2.0 and newer. Journal files are pre-allocated and will
-exist as three 1GB file in the data directory. To make journal
+exist as three 1GB files in the data directory. To make journal
files smaller, use :setting:`smallfiles`.

When enabled, MongoDB writes data first to the journal and then
@@ -371,7 +371,7 @@ Glossary

haystack index
In the context of :term:`geospatial` queries, haystack indexes
-enhance searches by creating "bucket" of objects grouped by a second
+enhance searches by creating "buckets" of objects grouped by a second
criterion. For example, you might want all geospatial searches
to first select along a non-geospatial dimension and then match
on location. See :doc:`/core/geohaystack` for more
@@ -478,13 +478,13 @@ Glossary

shard key
In a sharded collection, a shard key is the field that MongoDB
-uses to distribute documents among members of the 
+uses to distribute documents among members of the
:term:`sharded cluster`.

hashed shard key
A :ref:`hashed shard key <index-type-hashed>` is a special type
of :term:`shard key` that uses a hash of the value in the shard
-key field is uses to distribute documents among members of the
+key field to distribute documents among members of the
:term:`sharded cluster`.
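
A hedged sketch of creating such a key (the namespace is an
assumption, and the database must already have sharding enabled):

.. code-block:: javascript

   // Shard a collection on a hashed _id field.
   sh.shardCollection( "records.users", { _id: "hashed" } )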

query
@@ -572,8 +572,8 @@ Glossary
A property of a distributed system requiring that all members
always reflect the latest changes to the system. In a database
system, this means that any system that can provide data must
-reflect the latest writes at all times. In MongoDB, reads to a
-primary have :term:`strict consistency`; reads to secondary
+reflect the latest writes at all times. In MongoDB, reads from a
+primary have :term:`strict consistency`; reads from secondary
members have :term:`eventual consistency`.

write concern
@@ -672,7 +672,7 @@ Glossary
.. seealso:: :ref:`Replica Set Failover <replica-set-failover>`.

data-center awareness
-A property that allows clients to address members in a system to
+A property that allows clients to address members in a system
based upon their location.

:term:`Replica sets <replica set>` implement data-center
@@ -695,7 +695,7 @@ Glossary
``/etc/rc.d/`` directories.

map-reduce
-A data and processing and aggregation paradigm consisting of a
+A data processing and aggregation paradigm consisting of a
"map" phase that selects data, and a "reduce" phase that
transforms the data. In MongoDB, you can run arbitrary aggregations
over data using map-reduce.
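
As a hedged sketch of the paradigm (collection and field names are
assumptions):

.. code-block:: javascript

   // Count orders per customer with map-reduce.
   db.orders.mapReduce(
       function() { emit( this.cust_id, 1 ); },
       function( key, values ) { return Array.sum( values ); },
       { out: "order_counts" }
   )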
1 change: 0 additions & 1 deletion source/sharding.txt
@@ -36,7 +36,6 @@ Reference
- :doc:`/reference/sharding-commands`
- :doc:`/reference/config-database`
- :doc:`/reference/program/mongos`
-- :doc:`/reference/config-database`

.. toctree::
:hidden:
2 changes: 1 addition & 1 deletion source/tutorial/getting-started.txt
@@ -484,7 +484,7 @@ For more information on querying for documents, see the
Limit the Number of Documents in the Result Set
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-To increase perfomance, you can constrain the size of the result by
+To increase performance, you can constrain the size of the result by
limiting the amount of data your application must receive over the
network.
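
A hedged one-line sketch of the idea (the collection name is an
assumption):

.. code-block:: javascript

   // Return at most 10 matching documents over the wire.
   db.things.find().limit( 10 )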

4 changes: 2 additions & 2 deletions source/tutorial/shard-gridfs-data.txt
@@ -32,7 +32,7 @@ issue commands similar to the following:

db.runCommand( { shardCollection : "test.fs.chunks" , key : { files_id : 1 , n : 1 } } )

-You may also want to shard using just the ``file_id`` field, as in 
+You may also want to shard using just the ``file_id`` field, as in
the following operation:

.. code-block:: javascript
@@ -54,5 +54,5 @@ The default ``files_id`` value is an :term:`ObjectId`, as a result
the values of ``files_id`` are always ascending, and applications
will insert all new GridFS data to a single chunk and shard. If
your write load is too high for a single server to handle, consider
-a different shard key or use a different value for different value
+a different shard key or use a different value
for ``_id`` in the ``files`` collection.