There are two kinds of receivers based on their reliability and fault-tolerance semantics.

1. *Reliable Receiver* - For *reliable sources* that allow sent data to be acknowledged, a
   *reliable receiver* correctly acknowledges to the source that the data has been received
   and stored in Spark reliably (that is, replicated successfully). Usually,
   implementing this receiver involves careful consideration of the semantics of source
   acknowledgements.
1. *Unreliable Receiver* - These are receivers for unreliable sources that do not support
   acknowledging. Even for reliable sources, one may implement an unreliable receiver that
   does not go into the complexity of acknowledging correctly.

To implement a *reliable receiver*, you have to use `store(multiple-records)` to store data.
This flavour of `store` is a blocking call which returns only after all the given records have
been stored inside Spark. If replication is enabled in the receiver's configured storage level
(as it is by default), then this call returns only after replication has completed.
Thus it ensures that the data is reliably stored, and the receiver can now acknowledge the
source appropriately. This ensures that no data is lost when the receiver fails in the middle
of replicating data -- the buffered data will not be acknowledged and hence will be later resent
by the source.

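The pattern above can be sketched as a receiver that acknowledges only after the blocking `store` call returns. This is a minimal sketch, not a complete implementation: `SourceConnection`, `receiveBatch()`, and `ack()` are hypothetical stand-ins for a real source's client API.

```scala
import scala.collection.mutable.ArrayBuffer

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Sketch of a reliable receiver. SourceConnection, receiveBatch() and ack()
// are hypothetical placeholders for a real source's client API.
class ReliableCustomReceiver(host: String, port: Int)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  def onStart(): Unit = {
    new Thread("Reliable Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = { }  // close the connection here in a real receiver

  private def receive(): Unit = {
    val connection = new SourceConnection(host, port)  // hypothetical
    while (!isStopped) {
      val batch: ArrayBuffer[String] = connection.receiveBatch()  // hypothetical
      store(batch)           // blocking: returns only after Spark has stored
                             // (and, if configured, replicated) these records
      connection.ack(batch)  // acknowledge only after store() has returned
    }
  }
}
```

If the receiver fails before `store(batch)` returns, `ack` is never called, so the source will resend that batch on reconnection.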
An *unreliable receiver* does not have to implement any of this logic. It can simply receive
records from the source and insert them one-at-a-time using `store(single-record)`. While it does
not get the reliability guarantees of `store(multiple-records)`, it has the following advantages:

- The system takes care of chunking that data into appropriately sized blocks (look for block
  interval in the [Spark Streaming Programming Guide](streaming-programming-guide.html)).
- The system takes care of controlling the receiving rates if the rate limits have been specified.
- Because of these two, *unreliable receivers are simpler to implement than reliable receivers*.

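For illustration, an unreliable receiver reading lines from a TCP socket might look like the following sketch (threading and error handling are simplified; a production receiver would call `restart(...)` on connection errors):

```scala
import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import java.nio.charset.StandardCharsets

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

// Sketch of an unreliable receiver: each line read from the socket is
// stored one-at-a-time; Spark handles block generation and rate limiting.
class UnreliableSocketReceiver(host: String, port: Int)
  extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

  def onStart(): Unit = {
    new Thread("Socket Receiver") {
      override def run(): Unit = receive()
    }.start()
  }

  def onStop(): Unit = { }

  private def receive(): Unit = {
    val socket = new Socket(host, port)
    val reader = new BufferedReader(
      new InputStreamReader(socket.getInputStream, StandardCharsets.UTF_8))
    var line = reader.readLine()
    while (!isStopped && line != null) {
      store(line)  // single-record store: simple, but no delivery guarantee
      line = reader.readLine()
    }
    reader.close()
    socket.close()
  }
}
```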
The following table summarizes the characteristics of both types of receivers.

<table class="table">
<tr>
  <th>Receiver Type</th>
  <th>Characteristics</th>
</tr>
<tr>
  <td><b>Unreliable Receivers</b></td>
  <td>
    Simple to implement.<br/>
    System takes care of block generation and rate control.<br/>
    No fault-tolerance guarantees, can lose data on receiver failure.
  </td>
</tr>
<tr>
  <td><b>Reliable Receivers</b></td>
  <td>
    Strong fault-tolerance guarantees, can ensure zero data loss.<br/>
    Block generation and rate control to be handled by the receiver implementation.<br/>
    Implementation complexity depends on the acknowledgement mechanisms of the source.
  </td>
</tr>
</table>

## Implementing and Using a Custom Actor-based Receiver
Custom [Akka Actors](http://doc.akka.io/docs/akka/2.2.4/scala/actors.html) can also be used to
receive data. The [`ActorHelper`](api/scala/index.html#org.apache.spark.streaming.receiver.ActorHelper)
trait can be mixed into any Akka actor, allowing received data to be stored in Spark using its `store(...)` methods.
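As a sketch of the idea, an actor that stores every string message it receives could be as small as the following (a minimal sketch; any message handling beyond plain strings is up to the implementation):

```scala
import akka.actor.Actor
import org.apache.spark.streaming.receiver.ActorHelper

// Sketch of an actor-based receiver: every string message received is
// pushed into Spark via the store(...) method provided by ActorHelper.
class CustomActor extends Actor with ActorHelper {
  def receive = {
    case data: String => store(data)
  }
}
```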
A new input stream can then be created using this custom actor:

```scala
val lines = ssc.actorStream[String](Props(new CustomActor()), "CustomReceiver")
```

See [ActorWordCount.scala](https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala) for an end-to-end example.