6 changes: 4 additions & 2 deletions docs/structured-streaming-kafka-integration.md
@@ -393,10 +393,12 @@ The following configurations are optional:
<td>int</td>
<td>none</td>
<td>streaming and batch</td>
<td>Minimum number of partitions to read from Kafka.
<td>Desired minimum number of partitions to read from Kafka.
By default, Spark has a 1-1 mapping of topicPartitions to Spark partitions consuming from Kafka.
If you set this option to a value greater than your topicPartitions, Spark will divvy up large
Kafka partitions to smaller pieces.</td>
Kafka partitions into smaller pieces. Please note that this configuration is a `hint`: the
number of Spark tasks will be **approximately** `minPartitions`. It can be fewer or more depending on
rounding errors or Kafka partitions that didn't receive any new data.</td>
</tr>
<tr>
<td>groupIdPrefix</td>
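The "approximately `minPartitions`" behavior the diff describes can be illustrated with a simplified sketch. This is not Spark's actual implementation; the function name `split_partitions` and the proportional-rounding scheme are assumptions made for illustration. Each Kafka partition's unread offset range gets a share of splits proportional to its size, rounded to a whole number, and empty partitions produce no tasks, which is why the final task count can drift above or below the requested minimum.

```python
def split_partitions(offset_ranges, min_partitions):
    """Simplified sketch (not Spark's real code) of splitting Kafka
    topic-partitions into roughly min_partitions pieces.

    offset_ranges: dict mapping topic-partition name -> (start, end) offsets.
    Returns a list of (topic_partition, start, end) splits.
    """
    total = sum(end - start for start, end in offset_ranges.values())
    splits = []
    for tp, (start, end) in offset_ranges.items():
        size = end - start
        if size == 0:
            # Partitions with no new data contribute no tasks,
            # so the total can fall below min_partitions.
            continue
        # Proportional share, rounded to a whole number of pieces;
        # this rounding is why the result is only approximate.
        pieces = max(1, round(size / total * min_partitions))
        step = size / pieces
        for i in range(pieces):
            lo = start + int(i * step)
            hi = start + int((i + 1) * step) if i < pieces - 1 else end
            splits.append((tp, lo, hi))
    return splits


# Example: ask for 6 splits, but get 7 -- the large partition rounds up to 6
# pieces, the tiny one still needs at least 1, and the empty one yields none.
ranges = {"topic-0": (0, 1000), "topic-1": (0, 10), "topic-2": (5, 5)}
result = split_partitions(ranges, min_partitions=6)
print(len(result))  # -> 7, not 6
```

Every offset is still covered exactly once; only the number of pieces is approximate, matching the hedged wording ("hint", "approximately") added in this change.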