Machine Learning Ops Workshop with SageMaker and CodePipeline: lab guides and materials.
Data scientists and ML developers need more than a Jupyter notebook to create an ML model, test it, put it into production, and integrate it with a portal and/or a basic web/mobile application in a reliable and flexible way.
There are two basic questions you should consider when you start developing an ML model for a real business case:
So, if you're not happy with the answers you have, MLOps is a concept that can help you: a) create or improve the organizational culture for CI/CD applied to ML; b) create an automated infrastructure that will support your processes.
In this workshop you'll see how to create and operate an automated ML pipeline using a traditional CI/CD tool, AWS CodePipeline, to orchestrate the ML workflow. During the exercises you'll see how to create a Docker container from scratch with your own algorithm, start a training/deployment job just by copying a .zip file to an S3 bucket, run A/B tests, and more. This is a reference architecture that can serve as inspiration for your own solution.
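As a rough illustration of the "copy a .zip to S3" trigger, the snippet below uploads a packaged training job with boto3. The bucket and key names are placeholders, not the workshop's actual values; in the exercises, CodePipeline watches the S3 source location and starts a new run whenever the object changes.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key names -- replace with your pipeline's source location.
s3.upload_file(
    Filename="trainingjob.zip",            # packaged training assets
    Bucket="mlops-artifacts-<account-id>",
    Key="training_jobs/iris/trainingjob.zip",
)
```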
Amazon SageMaker, a service that supports the whole lifecycle of ML model development, is the heart of this solution. Around it you can add several other services, such as the AWS Code* family, to create an automated pipeline, build your Docker images, and train/test/deploy/integrate your models.
Another AWS service that can be used for this purpose is AWS Step Functions. In its documentation you'll also find the Python library (the Step Functions Data Science SDK) that can be used directly from your Jupyter notebook.
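For a feel of that approach, here is a minimal sketch using the Step Functions Data Science SDK (`pip install stepfunctions`). The estimator settings, role ARNs, and S3 paths are placeholders, not values from this workshop.

```python
from sagemaker.sklearn import SKLearn
from stepfunctions.steps import Chain, TrainingStep
from stepfunctions.workflow import Workflow

# Placeholder estimator: a scikit-learn training script run by SageMaker.
estimator = SKLearn(
    entry_point="train.py",
    role="<sagemaker-execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.large",
    framework_version="0.23-1",
)

training_step = TrainingStep(
    "Train iris model",
    estimator=estimator,
    data={"train": "s3://<bucket>/iris/train"},
    job_name="iris-training-job",
)

workflow = Workflow(
    name="iris-training-workflow",
    definition=Chain([training_step]),
    role="<step-functions-execution-role-arn>",
)
workflow.create()    # registers the state machine
workflow.execute()   # starts a run, right from the notebook
```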
Already have a Kubernetes cluster and want to integrate SageMaker with it and manage the ML pipeline from the cluster? No problem: take a look at the SageMaker Operators for Kubernetes.
Anyway, there are lots of workflow managers that can be integrated with SageMaker to do the same job! Pick yours and use your creativity to create your own MLOps platform!
You should have some basic experience with:
- Training/testing an ML model
- Python (scikit-learn)
- Jupyter Notebook
- AWS CodePipeline
- AWS CodeCommit
- AWS CodeBuild
- Amazon ECR
- Amazon SageMaker
- AWS CloudFormation
Some experience working with the AWS console is helpful as well.
In order to complete this workshop you'll need an AWS account with access to the services above. Some of the resources required by this workshop are eligible for the AWS Free Tier if your account is less than 12 months old. See the AWS Free Tier page for more details.
In this workshop you'll implement and experiment with a basic MLOps process, supported by an automated infrastructure for training, testing, deploying, and integrating ML models. It is organized into four parts:
Parts 2 and 3 are supported by automated pipelines that read the assets produced by the ML developer and execute/control the whole process.
For part 2, the following architecture will support the process. In part 2 you'll create a Docker image that contains your own implementation of a RandomForest classifier, using Python 3.7 and scikit-learn; a sketch of the container's training entry point appears below. Remember that if you're happy with the built-in XGBoost algorithm, you can skip this part.
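As a reference, here is a minimal sketch of what the training entry point inside such a container might look like. The `/opt/ml/*` paths are SageMaker's documented container contract; the dataset file name, column names, and hyperparameter names are assumptions for illustration only.

```python
# train.py -- minimal training entry point for a custom SageMaker container.
import json
import os

import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

PREFIX = "/opt/ml"
TRAIN_DIR = os.path.join(PREFIX, "input/data/train")              # "train" channel
PARAMS_PATH = os.path.join(PREFIX, "input/config/hyperparameters.json")
MODEL_DIR = os.path.join(PREFIX, "model")                         # artifacts saved here


def train():
    # SageMaker passes hyperparameters as a JSON dict of strings.
    with open(PARAMS_PATH) as f:
        params = json.load(f)

    # Assumed file/column names for the iris dataset.
    df = pd.read_csv(os.path.join(TRAIN_DIR, "iris.csv"))
    X, y = df.drop(columns=["label"]), df["label"]

    clf = RandomForestClassifier(
        n_estimators=int(params.get("n_estimators", 100)),
        max_depth=int(params.get("max_depth", 10)),
    )
    clf.fit(X, y)

    # Anything written to /opt/ml/model is packaged as model.tar.gz in S3.
    joblib.dump(clf, os.path.join(MODEL_DIR, "model.joblib"))


if __name__ == "__main__":
    train()
```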
For part 3, you'll make use of the following structure for training the model, testing it, and deploying it to two different environments: DEV, a QA/development environment (simple endpoint), and PRD, production (an HA/elastic endpoint).
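To make the A/B testing mentioned earlier concrete, here is a sketch of a production endpoint that splits traffic 90/10 between two model variants, using the boto3 SageMaker client. The model, config, and endpoint names are placeholders, not the names the workshop's pipeline creates.

```python
import boto3

sm = boto3.client("sagemaker")

# Two variants behind one endpoint: the current model and a candidate.
sm.create_endpoint_config(
    EndpointConfigName="iris-prd-config",
    ProductionVariants=[
        {
            "VariantName": "model-a",
            "ModelName": "iris-model-a",      # current production model
            "InitialInstanceCount": 2,        # HA: more than one instance
            "InstanceType": "ml.m5.large",
            "InitialVariantWeight": 0.9,      # 90% of traffic
        },
        {
            "VariantName": "model-b",
            "ModelName": "iris-model-b",      # candidate under A/B test
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.large",
            "InitialVariantWeight": 0.1,      # 10% of traffic
        },
    ],
)
sm.create_endpoint(EndpointName="iris-prd", EndpointConfigName="iris-prd-config")
```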
Although there is an ETL stage in the architecture, we won't use AWS Glue or any other ETL tool in this workshop. The idea is just to show how simple it is to integrate this architecture with your data lake and/or legacy databases through an ETL process.
It is important to mention that the process above is based on an industry-standard process for data mining and machine learning called CRISP-DM.
CRISP-DM stands for "Cross-Industry Standard Process for Data Mining" and is an excellent skeleton to build a data science project around.
There are six phases in CRISP-DM:
- Business understanding: Don't dive into the data immediately! First take some time to understand the business objectives, the surrounding context, and the ML problem category.
- Data understanding: Exploring the data gives us insights about the paths we should follow.
- Data preparation: Data cleaning, normalization, feature selection, feature engineering, etc.
- Modeling: Select the algorithms, train your model, and optimize it as necessary.
- Evaluation: Test your model with different samples, with real data if possible, and decide whether the model fits the requirements of your business case.
- Deployment: Deploy into production, integrate it, and run A/B tests, integration tests, etc.
Notice the arrows in the diagram, though. CRISP-DM frames data science as a cyclical endeavor: more insight leads to better business understanding, which kicks off the process again.
First, you need to execute a CloudFormation script to create all the components required for the exercises.
Region: US East (N. Virginia)
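If you prefer to launch the stack from code rather than the console, a minimal sketch with boto3 follows. The template URL is a placeholder; the stack name matches the `AIWorkshop` example used in the cleanup section below.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Placeholder TemplateURL -- point it at the workshop's CloudFormation template.
cfn.create_stack(
    StackName="AIWorkshop",
    TemplateURL="https://<bucket>.s3.amazonaws.com/mlops-workshop.yaml",
    Capabilities=["CAPABILITY_IAM"],  # the template creates IAM roles
)
cfn.get_waiter("stack_create_complete").wait(StackName="AIWorkshop")
```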
Then open the Jupyter notebook instance in Amazon SageMaker and start doing the exercises:
First delete the following stacks:
- mlops-deploy-iris-model-dev
- mlops-deploy-iris-model-prd
- mlops-training-iris-model-job
Then delete the stack you created at the beginning. If you named it AIWorkshop, find that stack in the CloudFormation console and delete it.
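The same cleanup can be scripted with boto3, as sketched below. The three model stack names come from the list above; the base stack name depends on what you chose at launch (AIWorkshop in this example).

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

for stack in [
    "mlops-deploy-iris-model-dev",
    "mlops-deploy-iris-model-prd",
    "mlops-training-iris-model-job",
    "AIWorkshop",  # delete the base stack last
]:
    cfn.delete_stack(StackName=stack)
    cfn.get_waiter("stack_delete_complete").wait(StackName=stack)
```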
WARNING: All the assets will be deleted, including the S3 Bucket and the ECR Docker images created during the execution of this workshop.
This sample code is made available under a modified MIT license. See the LICENSE file.