2. Kubeflow Pipelines
Containerized implementations of ML tasks
•Pre-built components: just provide parameters or code snippets (e.g., training code)
•Create your own components from code or libraries
•Use any runtime, framework, or data types
•Attach k8s objects - volumes, secrets
Specification of the sequence of steps
•Specified via Python DSL
•Inferred from data dependencies on input/output
Input parameters
•A “Run” = a pipeline invoked with specific parameters
•Can be cloned with different parameters
Schedules
•Invoke a single run or create a recurring scheduled pipeline
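The "inferred from data dependencies" point can be illustrated with a small pure-Python sketch (a toy model only, not the actual Kubeflow Pipelines implementation; `execution_order` and the step names are hypothetical): each step declares its inputs and outputs, and a valid execution order falls out of a topological walk over who produces what.

```python
# Toy model of dependency-inferred step ordering (illustrative only;
# the real KFP DSL derives ordering from component inputs/outputs).
def execution_order(steps):
    """steps: {name: (inputs, outputs)} -> names in dependency order."""
    produced = {}
    for name, (_, outputs) in steps.items():
        for out in outputs:
            produced[out] = name  # which step produces each artifact

    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        inputs, _ = steps[name]
        for inp in inputs:
            if inp in produced:      # visit the producer first
                visit(produced[inp])
        order.append(name)

    for name in steps:
        visit(name)
    return order

steps = {
    'train':      (['dataset'], ['model']),
    'preprocess': ([],          ['dataset']),
    'deploy':     (['model'],   []),
}
# 'preprocess' runs before 'train', which runs before 'deploy'
print(execution_order(steps))
```

Note this toy version does no cycle detection; it exists only to show how an explicit step sequence is unnecessary when I/O dependencies already imply one.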
3. KFServing Component
● Allows usage of KFServing within a Kubeflow pipeline.
● Uses the KFServing Python package (v0.5.1) and the v1beta1 API.
● Can easily deploy InferenceServices and perform canary rollouts.
● Supports passing in raw InferenceService YAML.
● Source Code
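A canary rollout ultimately comes down to splitting request traffic between the default and canary revisions. The sketch below is a pure-Python illustration of that semantics only (the function name and structure are hypothetical); in KFServing the split is declared via the `canaryTrafficPercent` field and enforced by the serving infrastructure, not by user code.

```python
import random

# Toy traffic splitter mirroring canary-percentage semantics
# (illustrative only; real routing is handled by KFServing/the mesh).
def route(canary_traffic_percent, rng=random.random):
    """Send roughly canary_traffic_percent% of requests to the canary."""
    return 'canary' if rng() * 100 < canary_traffic_percent else 'default'

counts = {'default': 0, 'canary': 0}
for _ in range(10_000):
    counts[route(10)] += 1
# roughly 10% of requests land on the canary revision
```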
7. Analysis
● Pipeline components rely on passing specific args to some CLI program.
● The primary support is for predictors, with limited customization options.
● KFServing supports many PodSpec fields that aren’t first-class component arguments.
● Transformers and explainers are also not first-class.
8. Analysis
● Users can still deploy InferenceServices with full customizability by passing the component their YAML definitions.
○ All PodSpec fields.
○ Transformers/explainers.
○ Specific annotations/labels.
● This is the most flexible format.
○ Perhaps this should be the recommended way?
○ Offers format consistency with kubectl/KFServing interactions.
# Raw InferenceService definition; YAML is indentation-sensitive,
# so the nesting below must be preserved when passed to the component.
isvc_yaml = '''
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: torchserve-transformer
spec:
  transformer:
    containers:
      - image: kfserving/torchserve-image-transformer:latest
        name: kfserving-container
  predictor:
    nodeSelector:
      disktype: ssd
    pytorch:
      storageUri: gs://torchserve/image_classifier
'''

# Apply the InferenceService via the KFServing pipeline component.
kfserving_op(
    action='apply',
    inferenceservice_yaml=isvc_yaml
)