Kubeflow Tutorial: Simplified ML Model Serving | by Josh Bottum | Nov 2020 | Medium
Serving an ML model can be complicated. This new Kubeflow tutorial (from KubeCon) shows how data scientists can deploy their models directly from a Jupyter Notebook with a simplified, Kale-based workflow.

According to this Gartner blog post, the majority of ML projects will run into challenges: “Through 2022, only 20% of analytic insights will deliver business outcomes”. Mapping this to Kubeflow workflows, I have noticed that data scientists often find ML model serving to be complicated. Since KubeCon on Friday (11/20), I have been using this tutorial to learn the new Kubeflow workflow for serving a model with a predictor and a transformer.

Although this workflow is the most complex of any of our deliveries — it teaches you to build a pipeline that trains, tunes, and deploys an ML model — the instructions are easy to follow, and you can complete the tutorial in a few hours. Quite frankly, the number of commands you have to run is relatively low; most of the time is spent waiting for hyperparameter tuning to identify the best parameters for your deployment, and you can leave that process running and return to it when it finishes.
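For context on what serving "with a predictor and transformer" looks like under the hood, here is a minimal sketch of the kind of KFServing `InferenceService` manifest that a Kale pipeline generates for you. This is an illustration, not taken from the tutorial itself: the model name, storage URI, and transformer image are all hypothetical placeholders, and the exact fields depend on your KFServing version (the `serving.kubeflow.org/v1alpha2` API shown here is the one current around late 2020).

```shell
# Hypothetical example: deploy an sklearn model with a custom transformer.
# The name, storageUri, and image below are placeholders, not from the tutorial.
kubectl apply -f - <<EOF
apiVersion: serving.kubeflow.org/v1alpha2
kind: InferenceService
metadata:
  name: my-model            # placeholder name
  namespace: kubeflow-user  # placeholder namespace
spec:
  default:
    predictor:
      sklearn:
        # Model artifact produced by the training/tuning pipeline (placeholder URI)
        storageUri: "gs://my-bucket/models/my-model"
    transformer:
      custom:
        container:
          # Pre/post-processing image (placeholder); transforms raw requests
          # into the tensor format the predictor expects
          image: "example.io/my-transformer:latest"
EOF
```

The appeal of the Kale workflow described in the post is that you never write this YAML by hand: annotating notebook cells lets Kale compile the training, tuning, and serving steps into a pipeline that creates a resource like this for you.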