Packages:

- serving.kubeflow.org/v1alpha2

# serving.kubeflow.org/v1alpha2

Package v1alpha2 contains API Schema definitions for the serving v1alpha2 API group.

Resource Types:

- InferenceService

## InferenceService

InferenceService is the Schema for the services API.

| Field | Description |
| ----- | ----------- |
| `apiVersion`<br>string | `serving.kubeflow.org/v1alpha2` |
| `kind`<br>string | `InferenceService` |
| `metadata`<br>Kubernetes meta/v1.ObjectMeta | Refer to the Kubernetes API documentation for the fields of the `metadata` field. |
| `spec`<br>InferenceServiceSpec | |
| `spec.default`<br>EndpointSpec | Default defines the default InferenceService endpoints. |
| `spec.canary`<br>EndpointSpec | (Optional) Canary defines an alternate set of endpoints to route a percentage of traffic to. |
| `spec.canaryTrafficPercent`<br>int | (Optional) CanaryTrafficPercent defines the percentage of traffic going to the canary InferenceService endpoints. |
| `status`<br>InferenceServiceStatus | |
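To make the schema concrete, here is a minimal sketch of an InferenceService manifest; the resource name and `storageUri` are hypothetical placeholders:

```yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: sklearn-iris                              # hypothetical name
spec:
  default:
    predictor:
      sklearn:
        storageUri: gs://example-bucket/sklearn/iris  # hypothetical model location
```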

## AlibiExplainerSpec

(Appears on: ExplainerSpec)

AlibiExplainerSpec defines the arguments for configuring an Alibi explanation server.

| Field | Description |
| ----- | ----------- |
| `type`<br>AlibiExplainerType | The type of Alibi explainer. |
| `storageUri`<br>string | The location of a trained explanation model. |
| `runtimeVersion`<br>string | Defaults to the latest Alibi version. |
| `resources`<br>Kubernetes core/v1.ResourceRequirements | Defaults to requests and limits of 1 CPU and 2Gb of memory. |
| `config`<br>map[string]string | Inline custom parameter settings for the explainer. |
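As a sketch, an Alibi explainer can be attached alongside a predictor. The `type` value, storage URIs, and `config` key below are illustrative assumptions — consult the Alibi documentation for the supported AlibiExplainerType values:

```yaml
spec:
  default:
    predictor:
      sklearn:
        storageUri: gs://example-bucket/sklearn/income      # hypothetical
    explainer:
      alibi:
        type: AnchorTabular                                 # assumed AlibiExplainerType value
        storageUri: gs://example-bucket/explainers/income   # hypothetical
        config:
          threshold: "0.95"                                 # hypothetical inline parameter
```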

## AlibiExplainerType (string alias)

(Appears on: AlibiExplainerSpec)

## CustomSpec

(Appears on: ExplainerSpec, PredictorSpec, TransformerSpec)

CustomSpec provides a hook for arbitrary container configuration.

| Field | Description |
| ----- | ----------- |
| `container`<br>Kubernetes core/v1.Container | |
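A hedged sketch of a custom predictor using CustomSpec's `container` hook; the image name and environment variable are hypothetical:

```yaml
spec:
  default:
    predictor:
      custom:
        container:
          image: example.com/my-model-server:0.1   # hypothetical image
          env:
            - name: MODEL_NAME                     # hypothetical env var
              value: my-model
```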

## DeploymentSpec

(Appears on: ExplainerSpec, PredictorSpec, TransformerSpec)

DeploymentSpec defines the configuration for a given InferenceService service component.

| Field | Description |
| ----- | ----------- |
| `serviceAccountName`<br>string | (Optional) ServiceAccountName is the name of the ServiceAccount to use to run the service. |
| `minReplicas`<br>int | (Optional) Minimum number of replicas; pods will not scale down to 0 when there is no traffic. |
| `maxReplicas`<br>int | (Optional) The upper bound that the autoscaler can scale to. |
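Because DeploymentSpec is embedded in the component specs, its fields sit directly alongside the framework spec. A sketch, with a hypothetical ServiceAccount name and model URI:

```yaml
spec:
  default:
    predictor:
      serviceAccountName: model-sa             # hypothetical ServiceAccount
      minReplicas: 1                           # keep at least one pod running
      maxReplicas: 5                           # upper bound for the autoscaler
      tensorflow:
        storageUri: gs://example-bucket/model  # hypothetical
```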

## EndpointSpec

(Appears on: InferenceServiceSpec)

| Field | Description |
| ----- | ----------- |
| `predictor`<br>PredictorSpec | Predictor defines the model serving spec. |
| `explainer`<br>ExplainerSpec | (Optional) Explainer defines the model explanation service spec; the explainer service calls the predictor, or the transformer if one is specified. |
| `transformer`<br>TransformerSpec | (Optional) Transformer defines the pre/post processing before and after the predictor call; the transformer service calls the predictor service. |

## EndpointStatusMap (map alias; values are `*StatusConfigurationSpec`)

(Appears on: InferenceServiceStatus)

EndpointStatusMap defines the observed state of InferenceService endpoints.

## Explainer

## ExplainerConfig

(Appears on: ExplainersConfig)

| Field | Description |
| ----- | ----------- |
| `image`<br>string | |
| `defaultImageVersion`<br>string | |
| `allowedImageVersions`<br>[]string | |

## ExplainerSpec

(Appears on: EndpointSpec)

ExplainerSpec defines the arguments for a model explanation server. The following fields follow a "1-of" semantic: users must specify exactly one spec.

| Field | Description |
| ----- | ----------- |
| `alibi`<br>AlibiExplainerSpec | Spec for the Alibi explainer. |
| `custom`<br>CustomSpec | Spec for a custom explainer. |
| `DeploymentSpec`<br>DeploymentSpec | (Members of DeploymentSpec are embedded into this type.) |

## ExplainersConfig

| Field | Description |
| ----- | ----------- |
| `alibi`<br>ExplainerConfig | |

## InferenceServiceSpec

(Appears on: InferenceService)

InferenceServiceSpec defines the desired state of InferenceService.

| Field | Description |
| ----- | ----------- |
| `default`<br>EndpointSpec | Default defines the default InferenceService endpoints. |
| `canary`<br>EndpointSpec | (Optional) Canary defines an alternate set of endpoints to route a percentage of traffic to. |
| `canaryTrafficPercent`<br>int | (Optional) CanaryTrafficPercent defines the percentage of traffic going to the canary InferenceService endpoints. |
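A canary rollout sketch using `canary` and `canaryTrafficPercent`; the name and storage URIs are placeholders. Here 10% of traffic would go to the canary endpoints:

```yaml
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: my-model                                    # hypothetical
spec:
  default:
    predictor:
      tensorflow:
        storageUri: gs://example-bucket/model/v1    # hypothetical current model
  canary:
    predictor:
      tensorflow:
        storageUri: gs://example-bucket/model/v2    # hypothetical candidate model
  canaryTrafficPercent: 10
```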

## InferenceServiceStatus

(Appears on: InferenceService)

InferenceServiceStatus defines the observed state of InferenceService.

| Field | Description |
| ----- | ----------- |
| `Status`<br>knative.dev/pkg/apis/duck/v1beta1.Status | (Members of Status are embedded into this type.) |
| `url`<br>string | URL of the InferenceService. |
| `traffic`<br>int | Traffic percentage that goes to the default services. |
| `canaryTraffic`<br>int | Traffic percentage that goes to the canary services. |
| `default`<br>EndpointStatusMap | Statuses for the default endpoints of the InferenceService. |
| `canary`<br>EndpointStatusMap | Statuses for the canary endpoints of the InferenceService. |

## ONNXSpec

(Appears on: PredictorSpec)

ONNXSpec defines arguments for configuring ONNX model serving.

| Field | Description |
| ----- | ----------- |
| `storageUri`<br>string | The location of the trained model. |
| `runtimeVersion`<br>string | Allowed runtime versions are [v0.5.0, latest]; defaults to the version specified in the kfservice config map. |
| `resources`<br>Kubernetes core/v1.ResourceRequirements | Defaults to requests and limits of 1 CPU and 2Gb of memory. |

## Predictor

## PredictorConfig

(Appears on: PredictorsConfig)

| Field | Description |
| ----- | ----------- |
| `image`<br>string | |
| `defaultImageVersion`<br>string | |
| `defaultGpuImageVersion`<br>string | |
| `allowedImageVersions`<br>[]string | |

## PredictorSpec

(Appears on: EndpointSpec)

PredictorSpec defines the configuration for a predictor. The following fields follow a "1-of" semantic: users must specify exactly one spec.

| Field | Description |
| ----- | ----------- |
| `custom`<br>CustomSpec | Spec for a custom predictor. |
| `tensorflow`<br>TensorflowSpec | Spec for Tensorflow Serving (https://github.com/tensorflow/serving). |
| `tensorrt`<br>TensorRTSpec | Spec for TensorRT Inference Server (https://github.com/NVIDIA/tensorrt-inference-server). |
| `xgboost`<br>XGBoostSpec | Spec for the XGBoost predictor. |
| `sklearn`<br>SKLearnSpec | Spec for the SKLearn predictor. |
| `onnx`<br>ONNXSpec | Spec for ONNX runtime (https://github.com/microsoft/onnxruntime). |
| `pytorch`<br>PyTorchSpec | Spec for the PyTorch predictor. |
| `DeploymentSpec`<br>DeploymentSpec | (Members of DeploymentSpec are embedded into this type.) |
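Since exactly one framework spec may be set, a predictor picks a single block. This sketch pins `runtimeVersion` and overrides the default `resources`; the storage URI is hypothetical:

```yaml
spec:
  default:
    predictor:
      xgboost:
        storageUri: gs://example-bucket/xgboost/model  # hypothetical
        runtimeVersion: "0.2.0"                        # an allowed XGBoost runtime version
        resources:
          requests:
            cpu: "1"
            memory: 2Gi
          limits:
            cpu: "2"
            memory: 4Gi
```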

## PredictorsConfig

| Field | Description |
| ----- | ----------- |
| `tensorflow`<br>PredictorConfig | |
| `tensorrt`<br>PredictorConfig | |
| `xgboost`<br>PredictorConfig | |
| `sklearn`<br>PredictorConfig | |
| `pytorch`<br>PredictorConfig | |
| `onnx`<br>PredictorConfig | |

## PyTorchSpec

(Appears on: PredictorSpec)

PyTorchSpec defines arguments for configuring PyTorch model serving.

| Field | Description |
| ----- | ----------- |
| `storageUri`<br>string | The location of the trained model. |
| `modelClassName`<br>string | The PyTorch model class name; defaults to 'PyTorchModel'. |
| `runtimeVersion`<br>string | Allowed runtime versions are [0.2.0, latest]; defaults to the version specified in the kfservice config map. |
| `resources`<br>Kubernetes core/v1.ResourceRequirements | Defaults to requests and limits of 1 CPU and 2Gb of memory. |
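A PyTorch predictor sketch overriding the default model class name; the class name and storage URI are hypothetical:

```yaml
spec:
  default:
    predictor:
      pytorch:
        storageUri: gs://example-bucket/pytorch/cifar  # hypothetical
        modelClassName: Net                            # hypothetical class; defaults to 'PyTorchModel'
```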

## SKLearnSpec

(Appears on: PredictorSpec)

SKLearnSpec defines arguments for configuring SKLearn model serving.

| Field | Description |
| ----- | ----------- |
| `storageUri`<br>string | The location of the trained model. |
| `runtimeVersion`<br>string | Allowed runtime versions are [0.2.0, latest]; defaults to the version specified in the kfservice config map. |
| `resources`<br>Kubernetes core/v1.ResourceRequirements | Defaults to requests and limits of 1 CPU and 2Gb of memory. |

## StatusConfigurationSpec

StatusConfigurationSpec describes the state of the configuration receiving traffic.

| Field | Description |
| ----- | ----------- |
| `name`<br>string | Latest revision name that is in ready state. |
| `host`<br>string | Host name of the service. |
| `replicas`<br>int | |

## TensorRTSpec

(Appears on: PredictorSpec)

TensorRTSpec defines arguments for configuring TensorRT model serving.

| Field | Description |
| ----- | ----------- |
| `storageUri`<br>string | The location of the trained model. |
| `runtimeVersion`<br>string | Allowed runtime versions are [19.05-py3]; defaults to the version specified in the kfservice config map. |
| `resources`<br>Kubernetes core/v1.ResourceRequirements | Defaults to requests and limits of 1 CPU and 2Gb of memory. |

## TensorflowSpec

(Appears on: PredictorSpec)

TensorflowSpec defines arguments for configuring Tensorflow model serving.

| Field | Description |
| ----- | ----------- |
| `storageUri`<br>string | The location of the trained model. |
| `runtimeVersion`<br>string | Allowed runtime versions are [1.11.0, 1.12.0, 1.13.0, 1.14.0, latest], or [1.11.0-gpu, 1.12.0-gpu, 1.13.0-gpu, 1.14.0-gpu, latest-gpu] if a GPU resource is specified; defaults to the version specified in the kfservice config map. |
| `resources`<br>Kubernetes core/v1.ResourceRequirements | Defaults to requests and limits of 1 CPU and 2Gb of memory. |
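When a GPU resource is requested, the `-gpu` runtime versions apply. This sketch assumes the standard `nvidia.com/gpu` extended resource name and a hypothetical storage URI:

```yaml
spec:
  default:
    predictor:
      tensorflow:
        storageUri: gs://example-bucket/tf/model  # hypothetical
        runtimeVersion: 1.14.0-gpu                # a GPU runtime version
        resources:
          limits:
            nvidia.com/gpu: 1                     # requesting a GPU selects the -gpu image variants
```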

## Transformer

The Transformer interface is implemented by all transformers.

## TransformerConfig

(Appears on: TransformersConfig)

| Field | Description |
| ----- | ----------- |
| `image`<br>string | |
| `defaultImageVersion`<br>string | |
| `allowedImageVersions`<br>[]string | |

## TransformerSpec

(Appears on: EndpointSpec)

TransformerSpec defines the transformer service for pre/post processing.

| Field | Description |
| ----- | ----------- |
| `custom`<br>CustomSpec | Spec for a custom transformer. |
| `DeploymentSpec`<br>DeploymentSpec | (Members of DeploymentSpec are embedded into this type.) |
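A transformer sketch placing a custom pre/post-processing container in front of the predictor; the image and storage URI are hypothetical:

```yaml
spec:
  default:
    transformer:
      custom:
        container:
          image: example.com/my-transformer:0.1   # hypothetical pre/post-processing image
    predictor:
      tensorflow:
        storageUri: gs://example-bucket/tf/model  # hypothetical
```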

## TransformersConfig

| Field | Description |
| ----- | ----------- |
| `feast`<br>TransformerConfig | |

## VirtualServiceStatus

VirtualServiceStatus captures the status of the virtual service.

| Field | Description |
| ----- | ----------- |
| `URL`<br>string | |
| `CanaryWeight`<br>int | |
| `DefaultWeight`<br>int | |
| `Status`<br>knative.dev/pkg/apis/duck/v1beta1.Status | |

## XGBoostSpec

(Appears on: PredictorSpec)

XGBoostSpec defines arguments for configuring XGBoost model serving.

| Field | Description |
| ----- | ----------- |
| `storageUri`<br>string | The location of the trained model. |
| `runtimeVersion`<br>string | Allowed runtime versions are [0.2.0, latest]; defaults to the version specified in the kfservice config map. |
| `resources`<br>Kubernetes core/v1.ResourceRequirements | Defaults to requests and limits of 1 CPU and 2Gb of memory. |


---

*Generated with `gen-crd-api-reference-docs` on git commit `52e137b`.*