7 changes: 5 additions & 2 deletions docs/configuration.md
@@ -639,6 +639,7 @@ Apart from these, the following properties are also available, and may be useful
<td>false</td>
<td>
Whether to compress logged events, if <code>spark.eventLog.enabled</code> is true.
+      Compression will use <code>spark.io.compression.codec</code>.
</td>
</tr>
<tr>
@@ -773,14 +774,15 @@ Apart from these, the following properties are also available, and may be useful
<td>true</td>
<td>
Whether to compress broadcast variables before sending them. Generally a good idea.
+      Compression will use <code>spark.io.compression.codec</code>.
</td>
</tr>
<tr>
<td><code>spark.io.compression.codec</code></td>
<td>lz4</td>
<td>
-      The codec used to compress internal data such as RDD partitions, broadcast variables and
-      shuffle outputs. By default, Spark provides three codecs: <code>lz4</code>, <code>lzf</code>,
+      The codec used to compress internal data such as RDD partitions, event log, broadcast variables
+      and shuffle outputs. By default, Spark provides three codecs: <code>lz4</code>, <code>lzf</code>,
and <code>snappy</code>. You can also use fully qualified class names to specify the codec,
e.g.
<code>org.apache.spark.io.LZ4CompressionCodec</code>,
@@ -881,6 +883,7 @@ Apart from these, the following properties are also available, and may be useful
<code>StorageLevel.MEMORY_ONLY_SER</code> in Java
and Scala or <code>StorageLevel.MEMORY_ONLY</code> in Python).
Can save substantial space at the cost of some extra CPU time.
+      Compression will use <code>spark.io.compression.codec</code>.
</td>
</tr>
<tr>
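Taken together, the properties this diff documents could be set in `spark-defaults.conf` roughly as follows. This is a sketch, not part of the patch: the property names for the first three hunks come from the diff itself, `spark.rdd.compress` is assumed to be the property behind the truncated `StorageLevel.MEMORY_ONLY_SER` hunk, and `snappy` is just one of the three built-in codec choices:

```
# All of the compression settings below share one codec,
# selected by spark.io.compression.codec (lz4 by default).
spark.eventLog.enabled        true
spark.eventLog.compress       true
spark.broadcast.compress      true
spark.rdd.compress            true
spark.io.compression.codec    snappy
```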