
[SPARK-50382][CONNECT] Add documentation for general information on application development with/extending Spark Connect #48922

Closed

Conversation


@vicennial vicennial commented Nov 21, 2024

What changes were proposed in this pull request?

Adds a new page, `app-dev-spark-connect.md`, which is hyperlinked from the "Use Spark Connect in standalone applications" section of `spark-connect-overview`.

Why are the changes needed?

Documentation on application development with Spark Connect is lacking, especially on extending Spark Connect with custom logic, libraries, or plugins.

Does this PR introduce any user-facing change?

Yes, a new page titled "Application Development with Spark Connect".

Render screenshot: [screenshot of the rendered page]

How was this patch tested?

Local rendering

Was this patch authored or co-authored using generative AI tooling?

No

@github-actions github-actions bot added the DOCS label Nov 21, 2024
@vicennial

Thanks for the review @HyukjinKwon! I've addressed the feedback and updated the rendering in the PR description


@grundprinzip grundprinzip left a comment


Thanks for doing the write-up!

@HyukjinKwon

Merged to master.

@nchammas

@grundprinzip - Does this work replace the WIP you had in #45340?

@vicennial

@nchammas Not completely. There is more information in #45340 that makes quite a lot of sense to include, especially the bits that explain the concepts of relations/expressions/commands. I've been meaning to integrate this information but haven't gotten to it yet; it's on my radar.

asfgit pushed a commit that referenced this pull request Feb 17, 2025
…ct Server Libraries

### What changes were proposed in this pull request?

This PR adds a sample project, `server-library-example` (under a new directory, `connect-examples`), to demonstrate how to build and use Spark Connect Server Libraries (see #48922 for context).
The sample project contains several modules (`common`, `server` and `client`) to showcase how a user may choose to extend the Spark Connect protocol with custom functionality.

### Why are the changes needed?

Currently, there are limited resources and documentation to help users build their own Spark Connect Server Libraries. This PR aims to bridge that gap by providing a skeleton project to work from.

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
N/A

### Was this patch authored or co-authored using generative AI tooling?

Generated-by: Copilot

-------------------- Render of `README.md` below ----------------
# Spark Server Library Example - Custom Datasource Handler

This example demonstrates a modular Maven-based project architecture with separate client, server,
and common components. It leverages the extensibility of Spark Connect to create a server library
that can be attached to the server to extend the functionality of the Spark Connect server as a whole. Below is a detailed overview of the setup and functionality.

## Project Structure

```
├── common/                # Shared protobuf/utilities/classes
├── client/                # Sample client implementation
│   ├── src/               # Source code for client functionality
│   ├── pom.xml            # Maven configuration for the client
├── server/                # Server-side plugin extension
│   ├── src/               # Source code for server functionality
│   ├── pom.xml            # Maven configuration for the server
├── resources/             # Static resources
├── pom.xml                # Parent Maven configuration
```

## Functionality Overview

To demonstrate the extensibility of Spark Connect, a custom datasource handler, `CustomTable`, is
implemented in the server module. The class handles reading, writing, and processing data stored in
a custom format; here, we simply use the `.custom` extension (which itself is a wrapper over `.csv`
files).
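
Since the `.custom` format is just a wrapper over CSV, a hypothetical `dummy_data.custom` matching the `id: int, name: string` schema shown in the explain output later in this README might contain:

```
id,name
1,Alice
2,Bob
```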

First and foremost, the client and the server must be able to communicate with each other through
custom messages that 'understand' our custom data format. This is achieved by defining custom
protobuf messages in the `common` module. The client and server modules both depend on the `common`
module to access these messages.
- `common/src/main/protobuf/base.proto`: Defines the base `CustomTable` message, which is simply
represented by a path and a name.
```protobuf
message CustomTable {
  string path = 1;
  string name = 2;
}
```
- `common/src/main/protobuf/commands.proto`: Defines the custom commands that the client can send
to the server. These commands are typically operations that the server can perform, such as cloning
an existing custom table.
```protobuf
message CustomCommand {
  oneof command_type {
    CreateTable create_table = 1;
    CloneTable clone_table = 2;
  }
}
```
- `common/src/main/protobuf/relations.proto`: Defines custom relations, a mechanism through which an
  optional input dataset is transformed into an output dataset (for example, a `Scan`).
```protobuf
message Scan {
  CustomTable table = 1;
}
```
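
These custom messages reach the server inside the generic `extension` fields (of type `google.protobuf.Any`) that the Spark Connect protocol reserves for third-party payloads. Below is a minimal Scala sketch of the packing step, assuming the Java classes generated from the protos above (`CustomTable` and `Scan`; their imports are omitted) and Spark Connect's generated `proto.Relation` builder:

```scala
import com.google.protobuf.{Any => ProtoAny}
import org.apache.spark.connect.proto

object RelationPacking {
  // Build a Spark Connect Relation that carries our custom Scan message.
  // CustomTable and Scan are the classes generated from the protos above.
  def buildScanRelation(path: String, name: String): proto.Relation = {
    val table = CustomTable.newBuilder().setPath(path).setName(name).build()
    val scan = Scan.newBuilder().setTable(table).build()
    // Pack the custom message into the Any-typed `extension` field; the
    // server routes extensions to the registered relation plugins.
    proto.Relation.newBuilder().setExtension(ProtoAny.pack(scan)).build()
  }
}
```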

On the client side, the `CustomTable` class mimics the style of Spark's `Dataset` API, allowing the
user to perform and chain operations on a `CustomTable` object.
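
For instance, a client program in this style might chain operations as follows; the method names are illustrative assumptions, not the sample project's exact API:

```scala
object Main {
  def main(args: Array[String]): Unit = {
    // Hypothetical fluent API over the custom protobuf messages defined above.
    val table = CustomTable.from("resources/dummy_data.custom", "sample_table")
    table.explain()                               // ask the server to explain the plan
    val cloned = table.cloneTable("cloned_table") // send a CloneTable command
    cloned.explain()
  }
}
```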

On the server side, a similar `CustomTable` class is implemented to handle the core functionality of
reading, writing and processing data in the custom format. The plugins (`CustomCommandPlugin` and
`CustomRelationPlugin`) are responsible for processing the custom protobuf messages sent from the client
(those defined in the `common` module) and delegating the appropriate actions to the `CustomTable`.
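
In outline, a relation plugin checks whether the incoming message is one it understands, unpacks it, and returns a logical plan. A minimal sketch, assuming the Spark 3.5-style `RelationPlugin` interface (the exact signature may differ across Spark versions) and a hypothetical `CustomTable.toLogicalPlan` helper:

```scala
import com.google.protobuf.{Any => ProtoAny}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.connect.planner.SparkConnectPlanner
import org.apache.spark.sql.connect.plugin.RelationPlugin

class CustomRelationPlugin extends RelationPlugin {
  override def transform(
      relation: ProtoAny,
      planner: SparkConnectPlanner): Option[LogicalPlan] = {
    if (relation.is(classOf[Scan])) {
      // Unpack our custom Scan and delegate to the server-side CustomTable;
      // toLogicalPlan is a hypothetical helper that reads the wrapped CSV.
      val scan = relation.unpack(classOf[Scan])
      Some(CustomTable.toLogicalPlan(scan.getTable))
    } else {
      None // not our message; let other registered plugins handle it
    }
  }
}
```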

## Build and Run Instructions

1. **Navigate to the sample project from `SPARK_HOME`**:
   ```bash
   cd connect-examples/server-library-example
   ```

2. **Build and package the modules**:
   ```bash
   mvn clean package
   ```

3. **Download the `4.0.0-preview2` release to use as the Spark Connect Server**:
   - Choose a distribution from https://archive.apache.org/dist/spark/spark-4.0.0-preview2/.
   - Example: `curl -L https://archive.apache.org/dist/spark/spark-4.0.0-preview2/spark-4.0.0-preview2-bin-hadoop3.tgz | tar xz`

4. **Copy relevant JARs to the root of the unpacked Spark distribution**:
   ```bash
    cp \
    <SPARK_HOME>/connect-examples/server-library-example/resources/spark-daria_2.13-1.2.3.jar \
    <SPARK_HOME>/connect-examples/server-library-example/common/target/spark-server-library-example-common-1.0-SNAPSHOT.jar \
    <SPARK_HOME>/connect-examples/server-library-example/server/target/spark-server-library-example-server-extension-1.0-SNAPSHOT.jar \
    .
   ```
5. **Start the Spark Connect Server with the relevant JARs**:
   ```bash
   bin/spark-connect-shell \
   --jars spark-server-library-example-server-extension-1.0-SNAPSHOT.jar,spark-server-library-example-common-1.0-SNAPSHOT.jar,spark-daria_2.13-1.2.3.jar \
   --conf spark.connect.extensions.relation.classes=org.example.CustomRelationPlugin \
   --conf spark.connect.extensions.command.classes=org.example.CustomCommandPlugin
   ```
6. **In a different terminal, navigate back to the root of the sample project and start the client**:
   ```bash
   java -cp client/target/spark-server-library-client-package-scala-1.0-SNAPSHOT.jar org.example.Main
   ```
7. **Notice the printed output in the client terminal as well as the creation of the cloned table**:
```
Explaining plan for custom table: sample_table with path: <SPARK_HOME>/spark/connect-examples/server-library-example/client/../resources/dummy_data.custom
== Parsed Logical Plan ==
Relation [id#2,name#3] csv
== Analyzed Logical Plan ==
id: int, name: string
Relation [id#2,name#3] csv
== Optimized Logical Plan ==
Relation [id#2,name#3] csv
== Physical Plan ==
FileScan csv [id#2,name#3] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/Users/venkata.gudesa/spark/connect-examples/server-library-example/resou..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int,name:string>
Explaining plan for custom table: cloned_table with path: <SPARK_HOME>/connect-examples/server-library-example/client/../resources/cloned_data.custom
== Parsed Logical Plan ==
Relation [id#2,name#3] csv
== Analyzed Logical Plan ==
id: int, name: string
Relation [id#2,name#3] csv
== Optimized Logical Plan ==
Relation [id#2,name#3] csv
== Physical Plan ==
FileScan csv [id#2,name#3] Batched: false, DataFilters: [], Format: CSV, Location: InMemoryFileIndex(1 paths)[file:/Users/venkata.gudesa/spark/connect-examples/server-library-example/resou..., PartitionFilters: [], PushedFilters: [], ReadSchema: struct<id:int,name:string>
```

Closes #49604 from vicennial/connectExamples.

Authored-by: vicennial <venkata.gudesa@databricks.com>
Signed-off-by: Herman van Hovell <herman@databricks.com>
asfgit pushed a commit that referenced this pull request Feb 17, 2025
…ct Server Libraries

(cherry picked from commit bd2b478; same commit message and `README.md` render as the commit above)
Signed-off-by: Herman van Hovell <herman@databricks.com>