Deployment is the process of making a trained ML model available to users or other systems.
This stage mainly concerns software engineering and is one of the most difficult stages in the life cycle of a machine learning project.
With SmartPredict, you don't need to be an expert to deploy your trained model from the Build space, as the Deploy space is also a full drag and drop workspace.
Of course, you no longer need to code or install a framework to push your trained model to the cloud and put it into production.
This guide explains how it works. After reading it, you will be able to build a flowchart in the Deploy space.
Specifically, it covers the following:
- How to deploy a trained ML/DL model in SmartPredict?
- Tips to successfully deploy a trained ML model in SmartPredict
- Example of flowchart to deploy a trained model
- Make a prediction in the Prediction space
- Benefits of deploying a model in SP
How to deploy a model in SmartPredict?
Deploying a trained model with SmartPredict consists of building, manually (Manualflow) or automatically (Autoflow), a flowchart that represents the deployment pipeline in the Deploy space.
Once the flowchart is launched, the entire pipeline is deployed as a web service.
In addition, a URL is generated automatically in the Monitor space; it is used to access the web service from any kind of IT environment.
Furthermore, you can directly test the deployed model in the Prediction space, where you can submit new data that the model will receive as input and use to make a prediction.
The video below shows you the deployment stage in forecasting problems with AutoML.
Note that the flowchart is generated automatically with the Autoflow.
video (forecasting autoflow deployment and generation of API in monitor space)
Tips to successfully deploy a trained ML model
In SmartPredict, the flowchart to deploy a trained model is created manually if the project is processed with Manualflow. Otherwise, it is generated automatically by the Autoflow process, but you can still modify it to suit your needs.
To make a successful flowchart, there is only one rule to know: “The type (array, dataframe, …) and format (encoded, …) of the data provided as input to the deployed model must be the same as those used during the training stage.”
- Also know that the entire flowchart will be deployed as a Web Service.
- Moreover, all modules, including the user's Custom Modules, can be used to build the flowchart.
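The rule above can be sketched in plain Python: a categorical encoder fitted at training time must be reused as-is at prediction time, so the deployed model sees exactly the representation it was trained on. The field names and encoding scheme below are illustrative, not SmartPredict's actual modules.

```python
# Sketch: reuse the training-time encoding at prediction time
# (illustrative field names; not SmartPredict's internal API).

def fit_encoder(values):
    """Map each category seen during training to an integer code."""
    return {v: i for i, v in enumerate(sorted(set(values)))}

def encode(record, encoders):
    """Apply the *training-time* encoders to a new record."""
    return {k: (encoders[k][v] if k in encoders else v)
            for k, v in record.items()}

# Training stage: fit the encoders on the training data once.
train_jobs = ["admin.", "technician", "admin.", "services"]
encoders = {"job": fit_encoder(train_jobs)}

# Deployment stage: the same encoders transform the incoming data,
# so its type and format match what the model saw during training.
new_record = {"job": "technician", "age": 41}
print(encode(new_record, encoders))  # {'job': 2, 'age': 41}
```

If the new data were encoded with a different mapping (or not encoded at all), the deployed model would receive inputs it was never trained on, which is exactly what the rule guards against.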
The flowchart in the Deploy space is fully customizable, so you can even build a serverless application here. See the contact email application in the next article as an example.
Thus, there is no predefined flowchart in this space: it depends on the type of data you send to the web service and the modifications needed so that the trained model can handle it.
You can also apply transformations to the prediction output, such as data merging, depending on the result you want the web service to return.
For more information on how to use the web service once the flowchart is deployed, see the How to use the deployed model? article.
By way of illustration, let’s look at the flowchart built to deploy the trained model in a Bank Marketing project, and the prediction stage we run on it.
Example of flowchart to deploy a trained model
The following figure shows the flowchart built to deploy a model in the Bank Marketing project.
The whole project is described in the guide to classification project.
Once it is deployed, the web service can make predictions from new data in JSON format.
The prediction stage is described in the next section.
( figure of the deployment flowchart in BM project sample dataset)
Deployment flowchart in Bank Marketing project
The workflow behind this flowchart is explained in the following figure:
Hence, to deploy a trained model, the flowchart in the Deploy space should contain the following modules:
- the Web Service IN
- the modules saved during model training: the trained model and the data preprocessing pipeline
- the Predictor module
- the Web Service OUT
Once the flowchart is built, just click the rocket icon to deploy it. A URL is generated automatically in the Monitor space, which you use to access the web service.
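As a minimal sketch of calling the web service from code, the snippet below prepares a JSON request for the generated URL. The URL is a placeholder for the one shown in your Monitor space, and the field names are illustrative, not SmartPredict's actual schema; the actual send (commented out) would use the `requests` package.

```python
# Sketch: preparing a JSON call to the deployed web service.
# SERVICE_URL is a placeholder -- copy the real URL from the Monitor space.
import json

SERVICE_URL = "https://<your-generated-url>"

def build_request(record):
    """Prepare the headers and JSON body for a POST to the web service."""
    return {"headers": {"Content-Type": "application/json"},
            "data": json.dumps(record)}

req = build_request({"age": 41, "job": "technician"})  # illustrative fields

# To actually send it (requires the `requests` package and a live service):
#   import requests
#   response = requests.post(SERVICE_URL, **req)
#   print(response.json())
print(req["data"])
```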
Make a prediction in the Prediction space
Once the entire pipeline is deployed as a web service, we can directly make a prediction in the Prediction space.
The web service then works as shown in the figure below: it receives input data (in JSON, file, or dataset format) and returns prediction data (as JSON, a table, or a line chart).
Input data is received by the "Web Service IN" module, and successively processed by the modules making up the entire deployed flowchart. The result thus obtained is a prediction made by the trained model module, and returned by the "Web Service OUT" module.
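This chain of modules can be sketched as plain function composition: each stage's output feeds the next, from Web Service IN through preprocessing and the predictor to Web Service OUT. All stage functions below are illustrative stand-ins for SmartPredict modules, with a stubbed model.

```python
# Sketch of the deployed flowchart as a chain of stages, mirroring
# Web Service IN -> preprocessing -> predictor -> Web Service OUT.
# Every function here is an illustrative stand-in, not a SmartPredict API.

def web_service_in(payload):            # receives the raw input
    return dict(payload)

def preprocess(record):                 # the saved preprocessing pipeline
    record["age"] = record["age"] / 100  # e.g. the scaling used in training
    return record

def predict(record):                    # the trained model module (stubbed)
    return {"prediction": "yes" if record["age"] > 0.3 else "no"}

def web_service_out(result):            # returns the response to the caller
    return result

def deployed_pipeline(payload):
    """Each module's output feeds the next, exactly as in the flowchart."""
    return web_service_out(predict(preprocess(web_service_in(payload))))

print(deployed_pipeline({"age": 41}))  # {'prediction': 'yes'}
```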
The deployed model in production
By way of illustration, we’re going to make a prediction with the deployed model in the Bank Marketing project.
Let’s predict whether a customer will subscribe to a bank account: enter the customer's information in JSON format in the INPUT space, then click the arrow to send it to the web service. The predicted result returned by the deployed model appears in the OUTPUT space.
As we can see, the customer is going to create a “term deposit” account.
(Image to predict if a customer will subscribe to the account in BM.)
The prediction made on a client
We can also make predictions for many customers at once by dragging and dropping files containing the new dataset.
(Image of making predictions with file and dataset)
Prediction from an input file.
If the input format is JSON, the Web Service IN module returns a dictionary, so a Dataframe loader/converter module is needed to convert it into a DataFrame. If the input is a file or a dataset, the Web Service IN module returns a DataFrame directly, so the Dataframe loader/converter module is not required.
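The conversion that the Dataframe loader/converter performs for JSON input can be sketched with pandas: one JSON record (a dictionary) becomes a one-row DataFrame. The payload fields are illustrative.

```python
# Sketch of the dictionary-to-DataFrame conversion applied to JSON input
# (illustrative payload; not SmartPredict's internal implementation).
import pandas as pd

payload = {"age": 41, "job": "technician"}  # one decoded JSON record

# A single record becomes a one-row DataFrame; a list of records
# would become one row per record.
df = pd.DataFrame([payload])
print(df.shape)  # (1, 2)
```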
In case of errors, check the logs in the Deploy space, fix the flowchart, and deploy again. You can use a Data/Object/Type logger module to inspect any module's output.
You will now be able to deploy a model and make predictions in SmartPredict.
Read this section again if anything is unclear, or ask the community your questions.
Let's sum up.
Benefits of deploying a model in SP
To wrap up, the following table highlights the benefits you can gain by deploying your model with SmartPredict.