
Conversation

@sitalkedia

What changes were proposed in this pull request?

Upgrade snappy to 1.1.2.4 to improve snappy read/write performance.

How was this patch tested?

Tested by running a job on the cluster; we saw a 7.5% CPU saving after this change.
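For context, a dependency bump like this is typically a one-line change to the build file. The sketch below is illustrative only (it assumes Spark declares the version through the `org.xerial.snappy:snappy-java` coordinates in its root `pom.xml`; the exact property name is hypothetical):

```xml
<!-- Illustrative sketch of the kind of change this PR makes in pom.xml. -->
<!-- Before: an older snappy-java release pinned in the dependency list. -->
<dependency>
  <groupId>org.xerial.snappy</groupId>
  <artifactId>snappy-java</artifactId>
  <!-- Bumped to pick up upstream read/write performance improvements. -->
  <version>1.1.2.4</version>
</dependency>
```

Because the jar name changes with the version, the `dev/deps/spark-deps-hadoop-*` manifest files listed later in this thread also need to be regenerated, which is why the cherry-pick below reports conflicts in those files.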

@JoshRosen
Contributor

Jenkins, this is ok to test.

@JoshRosen
Contributor

LGTM pending Jenkins. Thanks!

@JoshRosen
Contributor

(Can you remove the "(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)" section from the PR Description?)

@SparkQA

SparkQA commented Mar 31, 2016

Test build #54663 has finished for PR 12096 at commit e5a35fc.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@JoshRosen
Contributor

Merging to master. Thanks!

(By the way, please keep reporting these hotspots that your profiling is uncovering; this is very useful feedback.)

@asfgit asfgit closed this in 8de201b Mar 31, 2016
@sitalkedia
Author

@JoshRosen - Thanks a lot for helping out on this. We are trying to port some of our large Hadoop jobs to Spark, and we are seeing these performance bottlenecks. We will definitely keep reporting them as we discover more.

zzcclp pushed a commit to zzcclp/spark that referenced this pull request Apr 1, 2016
Upgrade snappy to 1.1.2.4 to improve snappy read/write performance.

Tested by running a job on the cluster and saw 7.5% cpu savings after this change.

Author: Sital Kedia <skedia@fb.com>

Closes apache#12096 from sitalkedia/snappyRelease.

(cherry picked from commit 8de201b)

Conflicts:
	dev/deps/spark-deps-hadoop-1
	dev/deps/spark-deps-hadoop-2.2
	dev/deps/spark-deps-hadoop-2.3
	dev/deps/spark-deps-hadoop-2.4
	dev/deps/spark-deps-hadoop-2.6
	pom.xml
