Amazon SageMaker – Build, Train, & Deploy Machine Learning Models at Scale
SageMaker is a fully managed, end-to-end machine learning service from Amazon Web Services (AWS). You can work in Jupyter notebooks by launching a notebook instance from the SageMaker console. SageMaker can also run model training in secure containers and manage the deployment work for hosting predictive models.
SageMaker provides services for managing the model-development process of machine learning, taking over the complicated and troublesome parts of that process. It lowers the barrier to entry for engineers who plan to start with machine learning, while letting data scientists, artificial intelligence engineers, and machine learning experts build models quickly, train them at scale, and release (deploy) them rapidly.
Integrated Jupyter authoring notebook instances provide easy access for exploring and analyzing data sources, eliminating the need to manage servers. Common machine learning algorithms are also built in, optimized for efficient execution on very large volumes of data in distributed environments. Because your own algorithms and frameworks are natively supported as well, Amazon SageMaker provides flexible training that can be tailored to your specific workflow.
From the Amazon SageMaker console you can launch with one click and deploy the model in a secure, scalable environment. Training and hosting are billed per minute of usage, with no minimum fee or prepayment obligation.
How to use Amazon SageMaker
It is convenient to be able to handle the whole flow, from creating a notebook instance to operating and deploying a model, within one product. You can also use it simply as a hosted notebook, and stop the instance whenever you are not using it.
Amazon SageMaker is a service that allows users to control the algorithms to some degree.
You can also bring an existing machine learning framework (such as TensorFlow).
SageMaker is divided into several phases, described below: build, train (tuning), and deploy.
SageMaker Consists of Three Modules
Build: You can perform preprocessing such as cleansing (for example, removing outliers) before loading data into the model. Notebook instances can also run on GPU instances, and you are free to add Python libraries and author your own data-processing code.
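As a sketch of the kind of cleansing step mentioned above, outliers can be dropped before the data is loaded into a model. The data and z-score threshold here are illustrative assumptions, not part of any SageMaker API:

```python
import statistics

def remove_outliers(values, z_threshold=3.0):
    """Drop points farther than z_threshold standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        # All values identical: nothing can be an outlier.
        return list(values)
    return [v for v in values if abs(v - mean) / stdev <= z_threshold]

# Example: 500.0 is far from the other readings and gets dropped.
readings = [10.1, 9.8, 10.3, 9.9, 10.0, 500.0]
clean = remove_outliers(readings, z_threshold=2.0)
```

In a notebook instance this kind of function would typically run over a pandas DataFrame instead of a plain list, but the logic is the same.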
Train: You can train models using SageMaker's built-in algorithms, a deep learning framework, or your own training environment in a Docker container. The generated model is saved in S3. This model can be hosted in SageMaker as-is, or it can be taken out of AWS and deployed to an IoT device, for example.
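The training step described above boils down to one API call. The sketch below only assembles the request body that boto3's `create_training_job` expects; the bucket, image URI, and role ARN are placeholder assumptions, not real resources:

```python
def training_job_params(job_name, image_uri, role_arn, bucket):
    """Build the request body for sagemaker.create_training_job."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,  # built-in algorithm or custom Docker image
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/train/",
            }},
        }],
        # The generated model artifact is saved under this S3 path.
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/output/"},
        "ResourceConfig": {
            "InstanceType": "ml.m5.xlarge",
            "InstanceCount": 1,
            "VolumeSizeInGB": 10,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

params = training_job_params(
    "demo-job",
    "example-image-uri",
    "arn:aws:iam::123456789012:role/ExampleRole",
    "example-bucket",
)
# A real run would then call:
#   boto3.client("sagemaker").create_training_job(**params)
```

The `S3OutputPath` is where SageMaker writes the model artifact (`model.tar.gz`), which is what gets hosted or exported later.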
Deploy: You can create an HTTPS endpoint for real-time inference, using a model trained in SageMaker or one brought in from outside AWS. Endpoints can be scaled, multiple models can be deployed at once for A/B testing, and custom Docker images containing your own applications can also be used.
Starting a Notebook Instance
- From the SageMaker console, you can start one easily by clicking ‘Create notebook instance’ and entering an instance name
- As long as the notebook instance is running, usage charges accrue, just as with an EC2 instance
- You can change its status to Stopped while you are not using it
- After creating the notebook instance, click ‘Open’ from the console to access Jupyter.
- Create a new notebook with the ‘conda_python3’ kernel
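The console steps above map onto a single API call. As a sketch (the instance name, type, and role ARN are placeholder assumptions), the parameters boto3's `create_notebook_instance` takes look like this; stopping the instance later, to pause charges, is a second call:

```python
def notebook_instance_params(name, role_arn, instance_type="ml.t3.medium"):
    """Build the request body for sagemaker.create_notebook_instance.
    A GPU type such as ml.p3.2xlarge can be substituted when needed."""
    return {
        "NotebookInstanceName": name,
        "InstanceType": instance_type,
        "RoleArn": role_arn,
    }

params = notebook_instance_params(
    "demo-notebook", "arn:aws:iam::123456789012:role/ExampleRole")
# Real calls, given credentials:
#   sm = boto3.client("sagemaker")
#   sm.create_notebook_instance(**params)
#   sm.stop_notebook_instance(NotebookInstanceName="demo-notebook")  # pause charges
```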
Operation on Jupyter
- Upload training data from the Jupyter notebook to an S3 bucket
- Start the training container from the Jupyter notebook
- The container fetches the training data and trains the model
- Upload the trained model to the S3 bucket
- For prediction, activate an instance for hosting the model from the Jupyter notebook
- Host the model uploaded to the S3 bucket
- Obtain predictions by sending requests to the endpoint’s API
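The hosting steps above (host the model from S3, then query the endpoint) chain three API calls: `create_model`, `create_endpoint_config`, and `create_endpoint`. The sketch below only assembles their request bodies, with all names, URIs, and ARNs as placeholder assumptions:

```python
def hosting_requests(model_name, image_uri, model_data_url, role_arn):
    """Request bodies for the three calls that stand up an HTTPS endpoint,
    to be issued in order: create_model, create_endpoint_config, create_endpoint."""
    create_model = {
        "ModelName": model_name,
        # Inference image plus the model artifact uploaded to S3 by training.
        "PrimaryContainer": {"Image": image_uri, "ModelDataUrl": model_data_url},
        "ExecutionRoleArn": role_arn,
    }
    create_endpoint_config = {
        "EndpointConfigName": f"{model_name}-config",
        # Multiple variants here is how A/B tests across models are set up.
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }],
    }
    create_endpoint = {
        "EndpointName": f"{model_name}-endpoint",
        "EndpointConfigName": f"{model_name}-config",
    }
    return create_model, create_endpoint_config, create_endpoint

model_req, config_req, endpoint_req = hosting_requests(
    "demo-model",
    "example-image-uri",
    "s3://example-bucket/output/model.tar.gz",
    "arn:aws:iam::123456789012:role/ExampleRole",
)
```

Once the endpoint is `InService`, predictions are obtained by sending requests to it via `invoke_endpoint` on the `sagemaker-runtime` client.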