
Commit c819572
2 parents: 5fa1760 + 43ef1e5

Merge remote-tracking branch 'upstream/master' into constraint-cast

Conflicts:
	sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/planning/patterns.scala

File tree: 248 files changed, +4921 -3468 lines


R/README.md

Lines changed: 5 additions & 5 deletions
@@ -40,7 +40,7 @@ To set other options like driver memory, executor memory etc. you can pass in th
 If you wish to use SparkR from RStudio or other R frontends you will need to set some environment variables which point SparkR to your Spark installation. For example
 ```
 # Set this to where Spark is installed
-Sys.setenv(SPARK_HOME="/Users/shivaram/spark")
+Sys.setenv(SPARK_HOME="/Users/username/spark")
 # This line loads SparkR from the installed directory
 .libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
 library(SparkR)

@@ -51,7 +51,7 @@ sc <- sparkR.init(master="local")
 
 The [instructions](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark) for making contributions to Spark also apply to SparkR.
 If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using `R/install-dev.sh` and test your changes.
-Once you have made your changes, please include unit tests for them and run existing unit tests using the `run-tests.sh` script as described below.
+Once you have made your changes, please include unit tests for them and run existing unit tests using the `R/run-tests.sh` script as described below.
 
 #### Generating documentation
 

@@ -60,17 +60,17 @@ The SparkR documentation (Rd files and HTML files) are not a part of the source
 ### Examples, Unit tests
 
 SparkR comes with several sample programs in the `examples/src/main/r` directory.
-To run one of them, use `./bin/sparkR <filename> <args>`. For example:
+To run one of them, use `./bin/spark-submit <filename> <args>`. For example:
 
-    ./bin/sparkR examples/src/main/r/dataframe.R
+    ./bin/spark-submit examples/src/main/r/dataframe.R
 
 You can also run the unit-tests for SparkR by running (you need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first):
 
     R -e 'install.packages("testthat", repos="http://cran.us.r-project.org")'
     ./R/run-tests.sh
 
 ### Running on YARN
-The `./bin/spark-submit` and `./bin/sparkR` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
+The `./bin/spark-submit` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
 ```
 export YARN_CONF_DIR=/etc/hadoop/conf
 ./bin/spark-submit --master yarn examples/src/main/r/dataframe.R

common/network-common/src/main/java/org/apache/spark/network/TransportContext.java

Lines changed: 2 additions & 1 deletion
@@ -43,7 +43,8 @@
 
 /**
  * Contains the context to create a {@link TransportServer}, {@link TransportClientFactory}, and to
- * setup Netty Channel pipelines with a {@link org.apache.spark.network.server.TransportChannelHandler}.
+ * setup Netty Channel pipelines with a
+ * {@link org.apache.spark.network.server.TransportChannelHandler}.
  *
  * There are two communication protocols that the TransportClient provides, control-plane RPCs and
  * data-plane "chunk fetching". The handling of the RPCs is performed outside of the scope of the
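
For orientation, the Javadoc above is also a compact description of how the class is used: one TransportContext produces both the server and the client factory, which then share the same pipeline setup. A minimal usage sketch, assuming a TransportConf and an RpcHandler are already constructed (their setup, and the port number, are placeholders):

```java
import org.apache.spark.network.TransportContext;
import org.apache.spark.network.client.TransportClient;
import org.apache.spark.network.client.TransportClientFactory;
import org.apache.spark.network.server.RpcHandler;
import org.apache.spark.network.server.TransportServer;
import org.apache.spark.network.util.TransportConf;

class TransportContextSketch {
  // conf and rpcHandler are assumed to exist; building them is out of scope here.
  static void wireUp(TransportConf conf, RpcHandler rpcHandler) throws Exception {
    TransportContext context = new TransportContext(conf, rpcHandler);

    // Both endpoints get the Netty channel pipeline the Javadoc mentions.
    TransportServer server = context.createServer(12345);
    TransportClientFactory clientFactory = context.createClientFactory();
    TransportClient client = clientFactory.createClient("localhost", 12345);

    client.close();
    server.close();
  }
}
```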

common/network-common/src/main/java/org/apache/spark/network/buffer/NettyManagedBuffer.java

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@
 /**
  * A {@link ManagedBuffer} backed by a Netty {@link ByteBuf}.
  */
-public final class NettyManagedBuffer extends ManagedBuffer {
+public class NettyManagedBuffer extends ManagedBuffer {
   private final ByteBuf buf;
 
   public NettyManagedBuffer(ByteBuf buf) {
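
Dropping `final` makes the class extensible. One plausible use — a hypothetical sketch, not something this commit adds — is a subclass that instruments the buffer for reference-counting tests:

```java
import io.netty.buffer.ByteBuf;
import org.apache.spark.network.buffer.ManagedBuffer;
import org.apache.spark.network.buffer.NettyManagedBuffer;

// Hypothetical subclass, only possible once NettyManagedBuffer is non-final.
// It counts release() calls, which is handy when testing buffer lifecycles.
class CountingNettyManagedBuffer extends NettyManagedBuffer {
  private int releases = 0;

  CountingNettyManagedBuffer(ByteBuf buf) {
    super(buf);
  }

  @Override
  public ManagedBuffer release() {
    releases++;
    return super.release();
  }

  int releaseCount() {
    return releases;
  }
}
```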

common/network-common/src/main/java/org/apache/spark/network/client/StreamCallback.java

Lines changed: 3 additions & 3 deletions
@@ -21,9 +21,9 @@
 import java.nio.ByteBuffer;
 
 /**
- * Callback for streaming data. Stream data will be offered to the {@link #onData(String, ByteBuffer)}
- * method as it arrives. Once all the stream data is received, {@link #onComplete(String)} will be
- * called.
+ * Callback for streaming data. Stream data will be offered to the
+ * {@link #onData(String, ByteBuffer)} method as it arrives. Once all the stream data is received,
+ * {@link #onComplete(String)} will be called.
  * <p>
  * The network library guarantees that a single thread will call these methods at a time, but
  * different call may be made by different threads.
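
The rewrapped Javadoc doubles as a small spec for implementers: chunks arrive via onData, completion via onComplete, and only one thread calls in at a time. A buffering implementation sketch (the onFailure method is an assumption here; it is not shown in this hunk):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.spark.network.client.StreamCallback;

// Sketch: accumulate a whole stream into memory. Per the Javadoc above,
// calls arrive one thread at a time, so no extra locking is added here.
class BufferingStreamCallback implements StreamCallback {
  private final ByteArrayOutputStream out = new ByteArrayOutputStream();

  @Override
  public void onData(String streamId, ByteBuffer buf) throws IOException {
    byte[] chunk = new byte[buf.remaining()];
    buf.get(chunk);  // copy out: the incoming ByteBuffer may be reused
    out.write(chunk);
  }

  @Override
  public void onComplete(String streamId) throws IOException {
    System.out.println(streamId + ": received " + out.size() + " bytes");
  }

  @Override
  public void onFailure(String streamId, Throwable cause) throws IOException {
    System.err.println(streamId + " failed: " + cause);
  }
}
```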

common/network-common/src/main/java/org/apache/spark/network/client/TransportClientFactory.java

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ private static class ClientPool {
     TransportClient[] clients;
     Object[] locks;
 
-    public ClientPool(int size) {
+    ClientPool(int size) {
      clients = new TransportClient[size];
      locks = new Object[size];
      for (int i = 0; i < size; i++) {

common/network-common/src/main/java/org/apache/spark/network/protocol/Message.java

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -33,15 +33,15 @@ public interface Message extends Encodable {
3333
boolean isBodyInFrame();
3434

3535
/** Preceding every serialized Message is its type, which allows us to deserialize it. */
36-
public static enum Type implements Encodable {
36+
enum Type implements Encodable {
3737
ChunkFetchRequest(0), ChunkFetchSuccess(1), ChunkFetchFailure(2),
3838
RpcRequest(3), RpcResponse(4), RpcFailure(5),
3939
StreamRequest(6), StreamResponse(7), StreamFailure(8),
4040
OneWayMessage(9), User(-1);
4141

4242
private final byte id;
4343

44-
private Type(int id) {
44+
Type(int id) {
4545
assert id < 128 : "Cannot have more than 128 message types";
4646
this.id = (byte) id;
4747
}
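
Both edits in this hunk drop modifiers the Java language already implies: a nested type declared in an interface is implicitly public and static, and an enum constructor is implicitly private. A standalone sketch with hypothetical names illustrating the equivalence:

```java
// Hypothetical interface mirroring the Message.Type pattern above.
interface Shape {
  // Inside an interface, "public static enum Kind" and "enum Kind"
  // declare exactly the same type: nested types here are implicitly
  // public and static.
  enum Kind {
    CIRCLE(0), SQUARE(1);

    private final byte id;

    // Enum constructors are implicitly private, so writing "private"
    // (as the old Message.Type did) is redundant.
    Kind(int id) {
      this.id = (byte) id;
    }

    byte id() {
      return id;
    }
  }
}
```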

common/network-common/src/main/java/org/apache/spark/network/protocol/RequestMessage.java

Lines changed: 0 additions & 2 deletions
@@ -17,8 +17,6 @@
 
 package org.apache.spark.network.protocol;
 
-import org.apache.spark.network.protocol.Message;
-
 /** Messages from the client to the server. */
 public interface RequestMessage extends Message {
   // token interface

common/network-common/src/main/java/org/apache/spark/network/protocol/ResponseMessage.java

Lines changed: 0 additions & 2 deletions
@@ -17,8 +17,6 @@
 
 package org.apache.spark.network.protocol;
 
-import org.apache.spark.network.protocol.Message;
-
 /** Messages from the server to the client. */
 public interface ResponseMessage extends Message {
   // token interface
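
Both deleted imports were redundant for the same reason: Message is declared in org.apache.spark.network.protocol, the same package as RequestMessage and ResponseMessage, and Java resolves same-package types without an import. A minimal sketch with hypothetical names, shown as two files:

```java
// File: com/example/proto/Message.java (hypothetical package)
package com.example.proto;

public interface Message {}

// File: com/example/proto/RequestMessage.java
// No "import com.example.proto.Message;" is needed: types in the
// same package are always in scope.
package com.example.proto;

public interface RequestMessage extends Message {}
```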

common/network-common/src/main/java/org/apache/spark/network/sasl/SaslMessage.java

Lines changed: 2 additions & 2 deletions
@@ -36,11 +36,11 @@ class SaslMessage extends AbstractMessage {
 
   public final String appId;
 
-  public SaslMessage(String appId, byte[] message) {
+  SaslMessage(String appId, byte[] message) {
     this(appId, Unpooled.wrappedBuffer(message));
   }
 
-  public SaslMessage(String appId, ByteBuf message) {
+  SaslMessage(String appId, ByteBuf message) {
     super(new NettyManagedBuffer(message), true);
     this.appId = appId;
   }

common/network-common/src/main/java/org/apache/spark/network/server/OneForOneStreamManager.java

Lines changed: 2 additions & 2 deletions
@@ -32,8 +32,8 @@
 import org.apache.spark.network.client.TransportClient;
 
 /**
- * StreamManager which allows registration of an Iterator&lt;ManagedBuffer&gt;, which are individually
- * fetched as chunks by the client. Each registered buffer is one chunk.
+ * StreamManager which allows registration of an Iterator&lt;ManagedBuffer&gt;, which are
+ * individually fetched as chunks by the client. Each registered buffer is one chunk.
  */
 public class OneForOneStreamManager extends StreamManager {
   private final Logger logger = LoggerFactory.getLogger(OneForOneStreamManager.class);
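
The rewrapped Javadoc describes the one-buffer-per-chunk model: each ManagedBuffer in the registered iterator becomes one client-fetchable chunk. A usage sketch, assuming a registerStream(appId, Iterator&lt;ManagedBuffer&gt;) method that returns the stream id (signature hedged from this era of the codebase; the buffers are placeholders):

```java
import java.util.Arrays;
import java.util.List;
import org.apache.spark.network.buffer.ManagedBuffer;
import org.apache.spark.network.server.OneForOneStreamManager;

class StreamRegistrationSketch {
  // Register three buffers; each becomes one fetchable chunk
  // (chunk indices 0, 1, 2 under the returned streamId).
  static long register(OneForOneStreamManager manager, String appId,
                       ManagedBuffer b0, ManagedBuffer b1, ManagedBuffer b2) {
    List<ManagedBuffer> chunks = Arrays.asList(b0, b1, b2);
    return manager.registerStream(appId, chunks.iterator());
  }
}
```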
