Closed
2 changes: 1 addition & 1 deletion docs/building-spark.md
Original file line number Diff line number Diff line change
@@ -284,7 +284,7 @@ If use an individual repository or a repository on GitHub Enterprise, export bel

### Related environment variables

<table class="table table-striped">
<table>
<thead><tr><th>Variable Name</th><th>Default</th><th>Meaning</th></tr></thead>
<tr>
<td><code>SPARK_PROJECT_URL</code></td>
2 changes: 1 addition & 1 deletion docs/cluster-overview.md
@@ -89,7 +89,7 @@ The [job scheduling overview](job-scheduling.html) describes this in more detail

The following table summarizes terms you'll see used to refer to cluster concepts:

<table class="table table-striped">
<table>
<thead>
<tr><th style="width: 130px;">Term</th><th>Meaning</th></tr>
</thead>
38 changes: 19 additions & 19 deletions docs/configuration.md
@@ -135,7 +135,7 @@ of the most common options to set are:

### Application Properties

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.app.name</code></td>
@@ -528,7 +528,7 @@ Apart from these, the following properties are also available, and may be useful

### Runtime Environment

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.driver.extraClassPath</code></td>
@@ -915,7 +915,7 @@ Apart from these, the following properties are also available, and may be useful

### Shuffle Behavior

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.reducer.maxSizeInFlight</code></td>
@@ -1290,7 +1290,7 @@ Apart from these, the following properties are also available, and may be useful

### Spark UI

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.eventLog.logBlockUpdates.enabled</code></td>
@@ -1682,7 +1682,7 @@ Apart from these, the following properties are also available, and may be useful

### Compression and Serialization

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.broadcast.compress</code></td>
@@ -1880,7 +1880,7 @@ Apart from these, the following properties are also available, and may be useful

### Memory Management

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.memory.fraction</code></td>
@@ -2005,7 +2005,7 @@ Apart from these, the following properties are also available, and may be useful

### Execution Behavior

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.broadcast.blockSize</code></td>
@@ -2250,7 +2250,7 @@ Apart from these, the following properties are also available, and may be useful

### Executor Metrics

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.eventLog.logStageExecutorMetrics</code></td>
@@ -2318,7 +2318,7 @@ Apart from these, the following properties are also available, and may be useful

### Networking

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.rpc.message.maxSize</code></td>
@@ -2481,7 +2481,7 @@ Apart from these, the following properties are also available, and may be useful

### Scheduling

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.cores.max</code></td>
@@ -2962,7 +2962,7 @@ Apart from these, the following properties are also available, and may be useful

### Barrier Execution Mode

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.barrier.sync.timeout</code></td>
@@ -3009,7 +3009,7 @@ Apart from these, the following properties are also available, and may be useful

### Dynamic Allocation

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.dynamicAllocation.enabled</code></td>
@@ -3151,7 +3151,7 @@ finer granularity starting from driver and executor. Take RPC module as example
like shuffle, just replace "rpc" with "shuffle" in the property names except
<code>spark.{driver|executor}.rpc.netty.dispatcher.numThreads</code>, which is only for RPC module.

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.{driver|executor}.rpc.io.serverThreads</code></td>
@@ -3294,7 +3294,7 @@ External users can query the static sql config values via `SparkSession.conf` or

### Spark Streaming

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.streaming.backpressure.enabled</code></td>
@@ -3426,7 +3426,7 @@ External users can query the static sql config values via `SparkSession.conf` or

### SparkR

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.r.numRBackendThreads</code></td>
@@ -3482,7 +3482,7 @@ External users can query the static sql config values via `SparkSession.conf` or

### GraphX

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.graphx.pregel.checkpointInterval</code></td>
@@ -3519,7 +3519,7 @@ copy `conf/spark-env.sh.template` to create it. Make sure you make the copy exec
The following variables can be set in `spark-env.sh`:


<table class="table table-striped">
<table>
<thead><tr><th style="width:21%">Environment Variable</th><th>Meaning</th></tr></thead>
<tr>
<td><code>JAVA_HOME</code></td>
@@ -3656,7 +3656,7 @@ Push-based shuffle helps improve the reliability and performance of spark shuffl

### External Shuffle service(server) side configuration options

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.shuffle.push.server.mergedShuffleFileManagerImpl</code></td>
@@ -3690,7 +3690,7 @@ Push-based shuffle helps improve the reliability and performance of spark shuffl

### Client side configuration options

<table class="table table-striped">
<table>
<thead><tr><th>Property Name</th><th>Default</th><th>Meaning</th><th>Since Version</th></tr></thead>
<tr>
<td><code>spark.shuffle.push.enabled</code></td>
13 changes: 13 additions & 0 deletions docs/css/custom.css
@@ -1111,5 +1111,18 @@ img {
table {
width: 100%;
overflow-wrap: normal;
border-collapse: collapse; /* Ensures that the borders collapse into a single border */
}

table th, table td {
border: 1px solid #cccccc; /* Adds a border to each table header and data cell */
Member Author: I asked ChatGPT to recover the CSS from the doc. It looked good, so I kept it.

Member Author: The resulting changes were done by myself.

Contributor: Why drop the striped style? This is exactly what the Bootstrap theme already provides us. The easier way would simply be:

    .table {
      border-collapse: collapse;
      border: 1px solid #ccc;
    }

Member Author: I tried the striped style, but it was not very clear when there is long text (you can find it in the commit history):
[image]

padding: 6px 13px; /* Optional: Adds padding inside each cell for better readability */
}

table tr {
background-color: white; /* Sets a default background color for all rows */
}

table tr:nth-child(2n) {
background-color: #F1F4F5; /* Sets a different background color for even rows */
}
14 changes: 7 additions & 7 deletions docs/ml-classification-regression.md
@@ -703,7 +703,7 @@ others.

### Available families

<table class="table table-striped">
<table>
<thead>
<tr>
<th>Family</th>
@@ -1224,7 +1224,7 @@ All output columns are optional; to exclude an output column, set its correspond

### Input Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -1251,7 +1251,7 @@ All output columns are optional; to exclude an output column, set its correspond

### Output Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -1326,7 +1326,7 @@ All output columns are optional; to exclude an output column, set its correspond

#### Input Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -1353,7 +1353,7 @@ All output columns are optional; to exclude an output column, set its correspond

#### Output Columns (Predictions)

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -1407,7 +1407,7 @@ All output columns are optional; to exclude an output column, set its correspond

#### Input Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -1436,7 +1436,7 @@ Note that `GBTClassifier` currently only supports binary labels.

#### Output Columns (Predictions)

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
8 changes: 4 additions & 4 deletions docs/ml-clustering.md
@@ -40,7 +40,7 @@ called [kmeans||](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf).

### Input Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -61,7 +61,7 @@ called [kmeans||](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf).

### Output Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -204,7 +204,7 @@ model.

### Input Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
@@ -225,7 +225,7 @@ model.

### Output Columns

<table class="table table-striped">
<table>
<thead>
<tr>
<th align="left">Param name</th>
2 changes: 1 addition & 1 deletion docs/mllib-classification-regression.md
@@ -26,7 +26,7 @@ classification](http://en.wikipedia.org/wiki/Multiclass_classification), and
[regression analysis](http://en.wikipedia.org/wiki/Regression_analysis). The table below outlines
the supported algorithms for each type of problem.

<table class="table table-striped">
<table>
<thead>
<tr><th>Problem Type</th><th>Supported Methods</th></tr>
</thead>
2 changes: 1 addition & 1 deletion docs/mllib-decision-tree.md
@@ -51,7 +51,7 @@ The *node impurity* is a measure of the homogeneity of the labels at the node. T
implementation provides two impurity measures for classification (Gini impurity and entropy) and one
impurity measure for regression (variance).

<table class="table table-striped">
<table>
<thead>
<tr><th>Impurity</th><th>Task</th><th>Formula</th><th>Description</th></tr>
</thead>
2 changes: 1 addition & 1 deletion docs/mllib-ensembles.md
@@ -191,7 +191,7 @@ Note that each loss is applicable to one of classification or regression, not bo

Notation: $N$ = number of instances. $y_i$ = label of instance $i$. $x_i$ = features of instance $i$. $F(x_i)$ = model's predicted label for instance $i$.

<table class="table table-striped">
<table>
<thead>
<tr><th>Loss</th><th>Task</th><th>Formula</th><th>Description</th></tr>
</thead>
10 changes: 5 additions & 5 deletions docs/mllib-evaluation-metrics.md
@@ -76,7 +76,7 @@ plots (recall, false positive rate) points.

**Available metrics**

<table class="table table-striped">
<table>
<thead>
<tr><th>Metric</th><th>Definition</th></tr>
</thead>
@@ -179,7 +179,7 @@ For this section, a modified delta function $\hat{\delta}(x)$ will prove useful

$$\hat{\delta}(x) = \begin{cases}1 & \text{if $x = 0$}, \\ 0 & \text{otherwise}.\end{cases}$$

<table class="table table-striped">
<table>
<thead>
<tr><th>Metric</th><th>Definition</th></tr>
</thead>
@@ -296,7 +296,7 @@ The following definition of indicator function $I_A(x)$ on a set $A$ will be nec

$$I_A(x) = \begin{cases}1 & \text{if $x \in A$}, \\ 0 & \text{otherwise}.\end{cases}$$

<table class="table table-striped">
<table>
<thead>
<tr><th>Metric</th><th>Definition</th></tr>
</thead>
@@ -447,7 +447,7 @@ documents, returns a relevance score for the recommended document.

$$rel_D(r) = \begin{cases}1 & \text{if $r \in D$}, \\ 0 & \text{otherwise}.\end{cases}$$

<table class="table table-striped">
<table>
<thead>
<tr><th>Metric</th><th>Definition</th><th>Notes</th></tr>
</thead>
@@ -553,7 +553,7 @@ variable from a number of independent variables.

**Available metrics**

<table class="table table-striped">
<table>
<thead>
<tr><th>Metric</th><th>Definition</th></tr>
</thead>
4 changes: 2 additions & 2 deletions docs/mllib-linear-methods.md
@@ -72,7 +72,7 @@ training error) and minimizing model complexity (i.e., to avoid overfitting).
The following table summarizes the loss functions and their gradients or sub-gradients for the
methods `spark.mllib` supports:

<table class="table table-striped">
<table>
<thead>
<tr><th></th><th>loss function $L(\wv; \x, y)$</th><th>gradient or sub-gradient</th></tr>
</thead>
@@ -105,7 +105,7 @@ The purpose of the
encourage simple models and avoid overfitting. We support the following
regularizers in `spark.mllib`:

<table class="table table-striped">
<table>
<thead>
<tr><th></th><th>regularizer $R(\wv)$</th><th>gradient or sub-gradient</th></tr>
</thead>
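
A note on mechanics: a sweep of this shape, replacing `<table class="table table-striped">` with `<table>` across many docs files, is usually done with a one-liner rather than by hand. A hedged sketch of the same swap, assuming GNU sed and demonstrated on a throwaway file rather than the real repo:

```shell
# Sketch: the markup swap this PR performs, applied to a hypothetical sample file.
# Assumes GNU sed (on macOS, use `sed -i ''` instead of `sed -i`).
mkdir -p /tmp/docs-demo
printf '<table class="table table-striped">\n<thead><tr><th>Property Name</th></tr></thead>\n</table>\n' > /tmp/docs-demo/sample.md
sed -i 's/<table class="table table-striped">/<table>/g' /tmp/docs-demo/sample.md
grep -n 'table-striped' /tmp/docs-demo/sample.md || echo 'no striped tables left'
```

On a real checkout the same substitution would be run over `docs/` (for example via `grep -rl ... | xargs sed -i ...`), then verified with a final `grep` as above.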