docs/sparkr.md: 2 additions & 2 deletions
@@ -148,7 +148,7 @@ printSchema(people)
 </div>
 
 The data sources API can also be used to save out DataFrames into multiple file formats. For example we can save the DataFrame from the previous example
-to a Parquet file using `write.df` (Before spark 1.7, mode's default value is 'append', we change it to 'error' to be consistent with scala api)
+to a Parquet file using `write.df` (Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API)
 
 <div data-lang="r" markdown="1">
 {% highlight r %}
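
For context, a minimal sketch of the `write.df` call this paragraph describes, assuming the `people` DataFrame from the guide's earlier example; the output path is illustrative:

```r
# Save the SparkR DataFrame to a Parquet file via the data sources API.
# Under the new default (mode = "error") the write fails if the target
# path already exists; the mode is spelled out here for clarity.
write.df(people, path = "people.parquet", source = "parquet", mode = "error")
```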
@@ -393,4 +393,4 @@ You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-ma
 
 ## Upgrading From SparkR 1.6 to 1.7
 
-- Before Spark 1.7, the default save mode is `append` in api saveDF/write.df/saveAsTable, it is changed to `error` to be consistent with scala api.
+- Until Spark 1.6, the default mode for writes was `append`. It was changed in Spark 1.7 to `error` to match the Scala API.
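
A short sketch of what this migration note means in practice, assuming a DataFrame `df` plus an illustrative output path and table name: code that relied on the old implicit `append` default should now pass the mode explicitly to any of the affected functions.

```r
# Pre-1.7 behavior: writes appended by default. To keep that behavior
# under the new "error" default, request append explicitly.
write.df(df, path = "output.parquet", source = "parquet", mode = "append")
saveDF(df, path = "output.parquet", source = "parquet", mode = "append")  # saveDF is an alias of write.df
saveAsTable(df, tableName = "people_table", source = "parquet", mode = "append")
```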