
Commit 78bfcf7

DOCSP-42969 - remove nested admonitions (#204) (#206)
(cherry picked from commit b37226e) Co-authored-by: Mike Woofter <108414937+mongoKart@users.noreply.github.com>
1 parent 209a23e · commit 78bfcf7

File tree

3 files changed: +20 -35 lines changed

source/includes/note-trigger-method.rst

Lines changed: 0 additions & 4 deletions
This file was deleted.

source/streaming-mode/streaming-read-config.txt

Lines changed: 14 additions & 22 deletions
@@ -78,12 +78,10 @@ You can configure the following properties when reading data from MongoDB in streaming mode

         [{"$match": {"closed": false}}, {"$project": {"status": 1, "name": 1, "description": 1}}]

-      .. important::
-
-         Custom aggregation pipelines must be compatible with the
-         partitioner strategy. For example, aggregation stages such as
-         ``$group`` do not work with any partitioner that creates more than
-         one partition.
+      Custom aggregation pipelines must be compatible with the
+      partitioner strategy. For example, aggregation stages such as
+      ``$group`` do not work with any partitioner that creates more than
+      one partition.

   * - ``aggregation.allowDiskUse``
     - | Specifies whether to allow storage to disk when running the
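For illustration, here is a minimal PySpark sketch of passing a custom pipeline such as the one above to the connector; the connection URI, database, and collection names are placeholder assumptions, not values from this commit:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read from MongoDB with a custom aggregation pipeline. $match and $project
# stay compatible with any partitioner; stages such as $group do not when
# the partitioner creates more than one partition.
df = (
    spark.read.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")  # placeholder URI
    .option("database", "sales")      # hypothetical database
    .option("collection", "orders")   # hypothetical collection
    .option(
        "aggregation.pipeline",
        '[{"$match": {"closed": false}}, '
        '{"$project": {"status": 1, "name": 1, "description": 1}}]',
    )
    .option("aggregation.allowDiskUse", "true")  # let pipeline stages spill to disk
    .load()
)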
@@ -131,14 +129,12 @@ You can configure the following properties when reading a change stream from MongoDB
       original document and updated document, but it also includes a copy of the
       entire updated document.

+      For more information on how this change stream option works,
+      see the MongoDB server manual guide
+      :manual:`Lookup Full Document for Update Operation </changeStreams/#lookup-full-document-for-update-operations>`.
+
       **Default:** "default"

-      .. tip::
-
-         For more information on how this change stream option works,
-         see the MongoDB server manual guide
-         :manual:`Lookup Full Document for Update Operation </changeStreams/#lookup-full-document-for-update-operations>`.
-
   * - ``change.stream.micro.batch.max.partition.count``
     - | The maximum number of partitions the {+connector-short+} divides each
       micro-batch into. Spark workers can process these partitions in parallel.
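A short sketch of the lookup option documented in this row, with the same placeholder connection details:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# With "updateLookup", update events also carry a copy of the entire
# updated document, as described above.
stream_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")  # placeholder
    .option("database", "sales")      # hypothetical names
    .option("collection", "orders")
    .option("change.stream.lookup.full.document", "updateLookup")
    .load()
)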
@@ -147,11 +143,9 @@ You can configure the following properties when reading a change stream from MongoDB
       |
       | **Default**: ``1``

-      .. warning:: Event Order
-
-         Specifying a value larger than ``1`` can alter the order in which
-         the {+connector-short+} processes change events. Avoid this setting
-         if out-of-order processing could create data inconsistencies downstream.
+      :red:`WARNING:` Specifying a value larger than ``1`` can alter the order in which
+      the {+connector-short+} processes change events. Avoid this setting
+      if out-of-order processing could create data inconsistencies downstream.

   * - ``change.stream.publish.full.document.only``
     - | Specifies whether to publish the changed document or the full
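A sketch of the partition-count option with the same placeholder names; the value ``4`` is illustrative:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Divide each micro-batch into up to 4 partitions that Spark workers can
# process in parallel. As the warning notes, values above 1 can reorder
# change events.
stream_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")  # placeholder
    .option("database", "sales")      # hypothetical names
    .option("collection", "orders")
    .option("change.stream.micro.batch.max.partition.count", "4")
    .load()
)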
@@ -170,12 +164,10 @@ You can configure the following properties when reading a change stream from MongoDB
       - If you don't specify a schema, the connector infers the schema
         from the change stream document rather than from the underlying collection.

-      **Default**: ``false``
+      This setting overrides the ``change.stream.lookup.full.document``
+      setting.

-      .. note::
-
-         This setting overrides the ``change.stream.lookup.full.document``
-         setting.
+      **Default**: ``false``

   * - ``change.stream.startup.mode``
     - | Specifies how the connector starts up when no offset is available.
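And a sketch of the full-document-only option whose text this hunk reorders, again with placeholder names:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Publish only the full changed document. As the moved sentence states, this
# overrides change.stream.lookup.full.document; the schema is then inferred
# from the change stream documents rather than the underlying collection.
stream_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")  # placeholder
    .option("database", "sales")      # hypothetical names
    .option("collection", "orders")
    .option("change.stream.publish.full.document.only", "true")
    .load()
)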

source/streaming-mode/streaming-write.txt

Lines changed: 6 additions & 9 deletions
@@ -51,7 +51,8 @@ Write to MongoDB in Streaming Mode

   * - ``writeStream.trigger()``
     - Specifies how often the {+connector-short+} writes results
-      to the streaming sink.
+      to the streaming sink. Call this method on the ``DataStreamWriter`` object
+      you create from the ``DataStreamReader`` you configure.

       To use continuous processing, pass ``Trigger.Continuous(<time value>)``
       as an argument, where ``<time value>`` is how often you want the Spark
@@ -62,8 +63,6 @@ Write to MongoDB in Streaming Mode

       To view a list of all supported processing policies, see the `Java
       trigger documentation <https://spark.apache.org/docs/latest/api/java/org/apache/spark/sql/streaming/Trigger.html>`__.
-
-      .. include:: /includes/note-trigger-method

   The following code snippet shows how to use the previous
   configuration settings to stream data to MongoDB:
@@ -119,7 +118,8 @@ Write to MongoDB in Streaming Mode

   * - ``writeStream.trigger()``
     - Specifies how often the {+connector-short+} writes results
-      to the streaming sink.
+      to the streaming sink. Call this method on the ``DataStreamWriter`` object
+      you create from the ``DataStreamReader`` you configure.

       To use continuous processing, pass the function a time value
       using the ``continuous`` parameter.
@@ -130,8 +130,6 @@ Write to MongoDB in Streaming Mode
       To view a list of all supported processing policies, see
       the `pyspark trigger documentation <https://spark.apache.org/docs/latest/api/python/reference/pyspark.ss/api/pyspark.sql.streaming.DataStreamWriter.trigger.html>`__.

-      .. include:: /includes/note-trigger-method
-
   The following code snippet shows how to use the previous
   configuration settings to stream data to MongoDB:

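A PySpark sketch of the flow the new sentence describes (the Java section above and the Scala section below follow the same pattern); the connection details, checkpoint path, and one-second interval are assumptions for illustration:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Configure the DataStreamReader first...
stream_df = (
    spark.readStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")  # placeholder
    .option("database", "sales")            # hypothetical source
    .option("collection", "orders")
    .load()
)

# ...then call trigger() on the DataStreamWriter you create from it.
query = (
    stream_df.writeStream.format("mongodb")
    .option("connection.uri", "mongodb://localhost:27017")
    .option("database", "sales")
    .option("collection", "orders_mirror")  # hypothetical sink collection
    .option("checkpointLocation", "/tmp/checkpoints")  # required for streaming writes
    .outputMode("append")
    .trigger(continuous="1 second")  # continuous processing, per the docs above
    .start()
)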
@@ -186,7 +184,8 @@ Write to MongoDB in Streaming Mode

   * - ``writeStream.trigger()``
     - Specifies how often the {+connector-short+} writes results
-      to the streaming sink.
+      to the streaming sink. Call this method on the ``DataStreamWriter`` object
+      you create from the ``DataStreamReader`` you configure.

       To use continuous processing, pass ``Trigger.Continuous(<time value>)``
       as an argument, where ``<time value>`` is how often you want the Spark
@@ -198,8 +197,6 @@ Write to MongoDB in Streaming Mode
       To view a list of all
       supported processing policies, see the `Scala trigger documentation <https://spark.apache.org/docs/latest/api/scala/org/apache/spark/sql/streaming/DataStreamWriter.html#trigger(trigger:org.apache.spark.sql.streaming.Trigger):org.apache.spark.sql.streaming.DataStreamWriter[T]>`__.

-      .. include:: /includes/note-trigger-method
-
   The following code snippet shows how to use the previous
   configuration settings to stream data to MongoDB:
