[Doc] Add missing yaml styling for code snippets #4170

Merged: 1 commit, Jun 24, 2022
2 changes: 1 addition & 1 deletion doc/source/analytics/explainers.md
@@ -53,7 +53,7 @@ http://<ingress-gateway>/seldon/<namespace>/<deployment name>/<predictor name>/a

So for example if you deployed:

-```
+```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
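For orientation, a minimal sketch of what a SeldonDeployment with an attached explainer can look like is shown below; the deployment name, namespace, model URI and explainer type are illustrative placeholders rather than values from the snippet above.

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: income            # placeholder deployment name
  namespace: seldon       # placeholder namespace
spec:
  predictors:
  - name: default
    graph:
      name: classifier
      implementation: SKLEARN_SERVER
      modelUri: gs://my-bucket/income/model        # placeholder model artifact
    explainer:
      type: AnchorTabular                          # placeholder explainer type
      modelUri: gs://my-bucket/income/explainer    # placeholder explainer artifact
    replicas: 1
```

With names like these, the explainer endpoint is resolved from the `<namespace>`, `<deployment name>` and `<predictor name>` parts of the URL template quoted above (here `seldon`, `income` and `default`).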
4 changes: 2 additions & 2 deletions doc/source/graph/graph-modes.md
@@ -9,7 +9,7 @@ by their name in constructing the inference graph in `spec.componentSpecs.graph`

The following is an example of a Seldon Core inference graph with a single predictor.
-```bash
+```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
@@ -53,7 +53,7 @@ seldon-c71cc2d950d44db1bc6afbeb0194c1da-5d8dddb8cb-xx4gv 5/5 Running 0
Another way of deployment is to implement each node of the inference graph in a separate predictor, which results in a separate pod for
each inference graph node.

-```bash
+```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
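One way to arrive at separate pods per graph node is to give each container its own entry under `componentSpecs`; the sketch below is illustrative, with placeholder names, images and a two-step graph.

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: graph-separate-pods            # placeholder name
spec:
  predictors:
  - name: default
    componentSpecs:
    - spec:                            # first pod
        containers:
        - name: step-one
          image: registry.example.com/step-one:0.1   # placeholder image
    - spec:                            # second pod
        containers:
        - name: step-two
          image: registry.example.com/step-two:0.1   # placeholder image
    graph:
      name: step-one
      type: MODEL
      children:
      - name: step-two
        type: MODEL
        children: []
```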
2 changes: 1 addition & 1 deletion doc/source/graph/inference-graph.md
@@ -91,7 +91,7 @@ You can learn more about the SeldonDeployment YAML definition by reading the con

We provide an environment variable DEFAULT_USER_ID (set in the Helm chart install with `.Values.defaultUserID`) which allows you to set the default user ID the images will run under. This defaults to 8888. If you wish to override this for a specific Pod/Container rather than globally, you can change it as shown in the example below:

-```
+```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
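One plausible shape for such an override, sketched here with a placeholder container name and image, is a per-container `securityContext` that sets `runAsUser`:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: custom-user-id                 # placeholder name
spec:
  predictors:
  - name: default
    componentSpecs:
    - spec:
        containers:
        - name: classifier                             # placeholder container name
          image: registry.example.com/classifier:0.1   # placeholder image
          securityContext:
            runAsUser: 1000                            # overrides the global default of 8888
    graph:
      name: classifier
      type: MODEL
```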
4 changes: 2 additions & 2 deletions doc/source/graph/scaling.md
@@ -14,7 +14,7 @@ If you use the annotation `seldon.io/engine-separate-pod` you can also set the n

As an illustration, a contrived example showing various options is shown below:

-```
+```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
@@ -71,7 +71,7 @@ For more details see [a worked example for the above replica settings](../exampl

It is possible to use the `kubectl scale` command to set the `replicas` value of the SeldonDeployment. For simple inference graphs this can be an easy way to scale them up and down. For example:

-```
+```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
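As a minimal sketch of the replica settings discussed above (the deployment name, model URI and counts are illustrative assumptions), `replicas` can be given a deployment-wide default and then overridden per predictor:

```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: scale-example                  # placeholder name
spec:
  replicas: 1                          # default for all predictors
  predictors:
  - name: default
    replicas: 3                        # overrides the deployment-wide default
    graph:
      name: classifier
      implementation: SKLEARN_SERVER
      modelUri: gs://my-bucket/model   # placeholder model artifact
```

For a simple graph like this, a command of the form `kubectl scale --replicas=2 sdep/scale-example` is then one way to adjust the count after deployment, assuming the `sdep` short name is available in your cluster.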
12 changes: 6 additions & 6 deletions doc/source/ingress/istio.md
@@ -12,7 +12,7 @@ helm install seldon-core seldon-core-operator --set istio.enabled=true --repo ht

You need an Istio gateway installed in the `istio-system` namespace. By default we assume one called `seldon-gateway`. For example, you can create this with the following YAML:

-```
+```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
@@ -45,7 +45,7 @@ kubectl create -n istio-system secret tls seldon-ssl-cert --key=privkey.pem --ce

and create the SSL Istio Gateway using the following YAML file

-```
+```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
@@ -105,7 +105,7 @@ More information can be found [here](https://istio.io/latest/docs/reference/conf

You can set the policies to target all the models belonging to a specific namespace, but you must be using the Istio sidecar proxy
and ensure your Seldon operator configuration has the following:
-```
+```yaml
istio:
  enabled: true
  tlsMode: STRICT
@@ -116,7 +116,7 @@ resolve this issue are:
- You can specify that you want to allow GET requests to the prometheus endpoint in the `AuthorizationPolicy`

Example:
-```
+```yaml
- to:
  - operation:
      methods: ["GET"]
@@ -125,7 +125,7 @@ Example:
```
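Fleshed out into a complete (but purely illustrative) policy, such a rule might sit in an `AuthorizationPolicy` along these lines; the policy name, namespace and metrics path are assumptions:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-prometheus-scrape        # placeholder policy name
  namespace: seldon                    # placeholder model namespace
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
        paths: ["/prometheus"]         # assumed metrics path
```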

- You can also exclude ports in your Istio Operator configuration
-```
+```yaml
proxy:
  autoInject: enabled
  clusterDomain: cluster.local
@@ -140,7 +140,7 @@ If you saw errors like `Failed to generate bootstrap config: mkdir ./etc/istio/p
The Istio proxy sidecar by default needs to run as root (this changed in version >= 1.7, where non-root is the default).
You can fix this by setting `defaultUserID=0` in your Helm chart, or by adding the following `securityContext` to your Istio proxy sidecar.

-```
+```yaml
securityContext:
  runAsUser: 0
```
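For reference, a plain-HTTP `seldon-gateway` of the kind assumed at the top of this page typically looks like the sketch below; the `istio: ingressgateway` selector and port 80 server are common defaults and are assumptions here:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: seldon-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway              # assumes the default Istio ingress gateway labels
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```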
6 changes: 3 additions & 3 deletions doc/source/ingress/openshift.md
@@ -8,7 +8,7 @@ If you run with Openshift RedHat Service Mesh you can work with Seldon by follow

Ensure you create a Gateway in istio-system. For example:

-```
+```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
@@ -30,7 +30,7 @@ spec:

1. Update the Seldon Core CSV to activate istio. Add:

-```
+```yaml
config:
  env:
  - name: ISTIO_ENABLED
@@ -44,7 +44,7 @@ If you install Seldon Core in a particular namespace you will need to:

1. Add a NetworkPolicy to allow the webhooks to run. For the namespace you are running the operator in, create:

-```
+```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
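A sketch of such a NetworkPolicy is shown below; the policy name, namespace and operator pod label are assumptions and should be matched to your install:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: seldon-webhook-allow                       # placeholder policy name
  namespace: seldon-system                         # assumes the operator namespace
spec:
  podSelector:
    matchLabels:
      control-plane: seldon-controller-manager     # assumed operator pod label
  policyTypes:
  - Ingress
  ingress:
  - {}                                             # allow all ingress so the API server can reach the webhook
```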
6 changes: 3 additions & 3 deletions doc/source/servers/tensorflow.md
@@ -11,7 +11,7 @@ For REST you need to specify parameters for:
* signature_name
* model_name

-```
+```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
@@ -45,7 +45,7 @@ For gRPC you need to specify the following parameters:
* model_input
* model_output

-```
+```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
@@ -86,7 +86,7 @@ Try out a [worked notebook](../examples/server_examples.html)

You can utilize Tensorflow Serving's functionality to load multiple models from one model repository as shown in this [example notebook](../examples/protocol_examples.html). You should follow the configuration details as discussed in the [Tensorflow Serving documentation on advanced configuration](https://www.tensorflow.org/tfx/serving/serving_config).

-```
+```yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
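Put together, a REST-oriented spec carrying the `signature_name` and `model_name` parameters mentioned above could look roughly like this; the deployment name, model URI and parameter values are placeholders:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: tfserving-rest                 # placeholder name
spec:
  predictors:
  - name: default
    graph:
      name: mnist-model
      implementation: TENSORFLOW_SERVER
      modelUri: gs://my-bucket/mnist   # placeholder model artifact
      endpoint:
        type: REST
      parameters:
      - name: signature_name
        type: STRING
        value: predict_images          # placeholder signature
      - name: model_name
        type: STRING
        value: mnist-model
      children: []
    replicas: 1
```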
6 changes: 3 additions & 3 deletions doc/source/streaming/kafka.md
@@ -81,7 +81,7 @@ To allow TLS connections to Kafka for the consumer and producer use the following

An example spec that gets values from secrets is shown below and comes from the [Kafka KEDA demo](../examples/kafka_keda.html).

-```
+```yaml
svcOrchSpec:
  env:
  - name: KAFKA_BROKER
@@ -118,7 +118,7 @@ An example spec that gets values from secrets is shown below and comes from the [

KEDA can be used to scale Kafka SeldonDeployments by looking at the consumer lag.

-```
+```yaml
kedaSpec:
  pollingInterval: 15
  minReplicaCount: 1
@@ -146,7 +146,7 @@ In the above we:

The authentication trigger we used for this was extracting the TLS details from secrets, e.g.

-```
+```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
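A fuller sketch of such a `kedaSpec`, with placeholder broker address, topic, consumer group and `TriggerAuthentication` name, might look like:

```yaml
kedaSpec:
  pollingInterval: 15
  minReplicaCount: 1
  maxReplicaCount: 5
  triggers:
  - type: kafka
    metadata:
      bootstrapServers: kafka-broker.kafka:9092   # placeholder broker address
      consumerGroup: model.default                # placeholder consumer group
      topic: model-input                          # placeholder input topic
      lagThreshold: "50"
    authenticationRef:
      name: seldon-kafka-auth                     # placeholder TriggerAuthentication name
```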
2 changes: 1 addition & 1 deletion operator/openshift.md
@@ -12,7 +12,7 @@

Kustomize v4 seems to apply patches differently, which results in fields not being removed, as seen in the patch `config/openshift/patch_manager_env.yaml`, which tries to get rid of the `securityContext` with:

-```
+```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
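One workaround sometimes used under Kustomize v4 is to remove the field explicitly with a JSON6902-style patch instead of relying on a strategic merge; the target name and path below are assumptions, not the repository's actual patch:

```yaml
# kustomization.yaml fragment (illustrative)
patches:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: seldon-controller-manager               # assumed Deployment name
  patch: |-
    - op: remove
      path: /spec/template/spec/containers/0/securityContext
```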