Commit 8eb181c

Update doc
1 parent d72a3af commit 8eb181c

File tree

1 file changed (+2, -2 lines)


docs/sparkr.md

Lines changed: 2 additions & 2 deletions
@@ -148,7 +148,7 @@ printSchema(people)
 </div>

 The data sources API can also be used to save out DataFrames into multiple file formats. For example we can save the DataFrame from the previous example
-to a Parquet file using `write.df` (Before spark 1.7, mode's default value is 'append', we change it to 'error' to be consistent with scala api)
+to a Parquet file using `write.df` (Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API)

 <div data-lang="r" markdown="1">
 {% highlight r %}
@@ -393,4 +393,4 @@ You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-ma

 ## Upgrading From SparkR 1.6 to 1.7

-- Before Spark 1.7, the default save mode is `append` in api saveDF/write.df/saveAsTable, it is changed to `error` to be consistent with scala api.
+- Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API.
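
For reference, a minimal sketch of the behavior the updated text describes, using the SparkR `write.df` API; the `people` DataFrame and the output path are illustrative placeholders, not taken from this commit.

{% highlight r %}
# Illustrative sketch, not part of the commit: saving a SparkR DataFrame to Parquet.
# Per the change described above, omitting `mode` now defaults to "error", so the
# write fails if the target path already exists rather than appending to it.
write.df(people, path = "people.parquet", source = "parquet")

# To keep the pre-1.7 behavior, request append mode explicitly:
write.df(people, path = "people.parquet", source = "parquet", mode = "append")
{% endhighlight %}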
