From 859d3ba63094917af03208f6ba1e0c1649149c70 Mon Sep 17 00:00:00 2001
From: Sean Owen
Date: Fri, 1 Mar 2019 15:40:09 -0600
Subject: [PATCH] Clarify that Pyspark is on PyPi now

---
 docs/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/index.md b/docs/index.md
index 8864239eb164..a85dd9e553ed 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -20,7 +20,7 @@ Please see [Spark Security](security.html) before downloading and running Spark.
 Get Spark from the [downloads page](https://spark.apache.org/downloads.html) of the project website. This documentation is for Spark version {{site.SPARK_VERSION}}. Spark uses Hadoop's client libraries for HDFS and YARN. Downloads are pre-packaged for a handful of popular Hadoop versions. Users can also download a "Hadoop free" binary and run Spark with any Hadoop version [by augmenting Spark's classpath](hadoop-provided.html).
 
-Scala and Java users can include Spark in their projects using its Maven coordinates and in the future Python users can also install Spark from PyPI.
+Scala and Java users can include Spark in their projects using its Maven coordinates and Python users can install Spark from PyPI.
 
 If you'd like to build Spark from