Step 6. Deploy the project

This page describes how to deploy a project after it has been built.

Deployment module inventory

The deployment stage is the ultimate goal of all the configuration steps performed so far.

Deploying our Machine Learning model generates a REST API to which we can send our data and from which we receive the predictions returned by our model.
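For illustration, here is a minimal sketch of what calling such a REST API from a client could look like. The endpoint URL and API key below are hypothetical placeholders; the actual address, authentication scheme, and payload format are provided by the platform once the web service is deployed.

```python
import requests

# Hypothetical endpoint URL and API key: the real values are provided by the
# platform once the web service has been deployed. Illustrative sketch only.
ENDPOINT = "https://<platform-host>/webservices/svc_model_iris/predict"
API_KEY = "<your-api-key>"

# One input record; the keys must match the features configured in the
# deployment flowchart (see the configuration steps below).
payload = {
    "sepal.length": 5.1,
    "sepal.width": 3.5,
    "petal.length": 1.4,
    "petal.width": 0.2,
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()

# The response carries the prediction returned by the trained model.
print(response.json())
```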

To deploy our model, we need to assemble a new set of modules. Once again, they can be dragged and dropped from the right-pane toolbar, in the same location as before.

This is when our pipeline finally becomes active and performs its function within the use cases and applications we designed it for.

Within the Deploy tab, two predefined modules are already present:

  • Web service in

  • Web service out

However, to make our workflow functional, we still need to add the remaining modules: the trained model, the DataFrame loader/converter, the Predictor, and the Features selector.

>> The detailed configuration of these modules, as well as where to find them, is covered in the next section about the flowchart structure.

Deployment configurations

To deploy a project, we need to design a new flowchart based on our prior build. However, this new flowchart differs in content, so we need to pay attention to the changes.

To structure our flowchart, we need to find each of the required modules and bring them into the workspace. Again, they are dragged and dropped. To find them, we can type their names directly into the search bar.

  1. Let us begin by finding our model under ‘Trained models’. In our case, it is still the same ‘SVC_model_Iris’ created earlier. Once found, place it in the workspace.

  2. Now, look for the DataFrame loader/converter, which is located in the ‘Data retrieval’ module.

  3. Select the DataFrame loader/converter >> Click on Parameters. Here, we need to specify the data input type.

  4. Within the "Column selection" section, choose ‘Dictionary’, as it is the type of input data we are dealing with.

  5. For "keys to keep", add all the features without any exception: sepal.length, sepal.width,petal.length, petal.width AND variety. Validate the new setting by clicking on the "Save" button.

  6. For the Predictor, let us choose the Predictor ML models module. Go to the right pane and click on Core modules >> Training and prediction >> Predictor ML models.

  7. Finally, for the Features selector, click on Core modules >> Features selector >> Menu >> Parameters. Once more, we are asked to specify which features to include.

In our project, we want to include all features. This time, choose ‘Enter columns name manually’ >> As a value, enter "All" >> Click on ‘Save’.

As an alternative to entering all the features one by one, we can simply enter ‘all’ instead.
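To make the ‘Dictionary’ input type and the "keys to keep" setting more concrete, here is a minimal sketch of what one input record could look like. The key names come from the configuration above; the values and the placeholder for variety are illustrative assumptions, not output from the platform.

```python
# A single input record shaped as a dictionary whose keys match the
# "keys to keep" configured above. The values are illustrative only;
# ``variety`` is included because the configuration keeps every column,
# and its value here is just a placeholder.
record = {
    "sepal.length": 5.1,
    "sepal.width": 3.5,
    "petal.length": 1.4,
    "petal.width": 0.2,
    "variety": "",  # placeholder value
}
```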

That is it, we have finished the hardest part of our project! And it was not that hard after all, was it? All we need to do next is launch the model as a REST API web service.

To do so, click on the ‘Rocket’ icon in the upper left corner to initiate the process.
