  1. Getting started
  2. Getting started (Part 1): Iris classification project

Step 6. Deploy the project

This page describes how to deploy a project after it has been built.

Deployment module inventory

The deployment stage is, in a sense, the culmination of all the configuration steps performed so far.

Deploying our machine learning model generates a REST API to which we can send our data and from which we receive the predictions returned by our model.
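To make this concrete, here is a minimal sketch of what a client call to such a generated REST API could look like. The endpoint URL, token, and response format below are placeholders, not SmartPredict's actual API: the real values are shown in the platform once the project is deployed.

```python
import json
from urllib import request

# Placeholder endpoint and token -- SmartPredict displays the real values
# for your project after deployment; these are illustrative only.
API_URL = "https://example.com/v1/predict/my-iris-project"
API_TOKEN = "<your-api-token>"

def build_payload(sepal_length, sepal_width, petal_length, petal_width):
    """Assemble one observation as the dictionary of features to send."""
    return {
        "sepal.length": sepal_length,
        "sepal.width": sepal_width,
        "petal.length": petal_length,
        "petal.width": petal_width,
    }

def predict(payload):
    """POST the observation to the deployed pipeline and return its answer."""
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_TOKEN}",
        },
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_payload(5.1, 3.5, 1.4, 0.2)
# predict(payload) would return the predicted variety for this observation.
```

The payload keys mirror the dataset's column names used throughout this tutorial; the actual request schema is determined by the deployment flowchart configured below.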

To deploy our model, we need to assemble a new set of modules. Once again, they can be dragged and dropped, and they are accessible from the right-pane toolbar in the same location as before.

This is when our pipeline finally becomes active and can serve the use cases and applications we designed it for.

Within the Deploy tab, the predefined modules are already present:

  • Web service in and

  • Web service out.

However, to make our workflow functional, we still need to add the following to it:

  • our trained model (‘SVC_model_Iris’),

  • the Data frame loader/converter,

  • the Features selector,

  • the Predictor ML models module.

The detailed configuration of these modules, as well as where to find them, is explored in the next section on the flowchart structure.

Deployment configurations

To deploy a project, we are required to design a new flowchart on the basis of our prior build. However, this new flowchart differs in content, so we need to take note of the changes.

To structure our flowchart, we need to find each of the required modules and move them into the workspace. Again, they are dragged and dropped. To find them, we can type their names directly into the search bar.

  1. Let us begin by finding our model under ‘Trained models’. In our case, it is still the same ‘SVC_model_Iris’ created before. Once found, let us place it into the workspace.

  2. Now, look for the Data frame loader/converter, which is located in the ‘Data retrieval’ category.

  3. Select the DataFrame loader/converter >> Click on Parameters. Here, we need to specify the data input type.

  4. Within the "Column selection" section, choose ‘Dictionary’, as it is the type of input data we are dealing with.

  5. For "keys to keep", add all the features without exception: sepal.length, sepal.width, petal.length, petal.width AND variety. Validate the new setting by clicking the "Save" button.

  6. Concerning the predictor, let us choose the Predictor ML models module. Go to the right pane and click on Core modules >> Training and prediction >> Predictor ML models.

  7. Finally, for the features selector, click on Core modules >> Features selector >> Menu >> Parameters. Once more, we are asked to specify which features to include.

In our project, we want to include all features. This time, we choose ‘Enter columns name manually’ >> As the value, we put "All" >> Click on ‘Save’.

As an alternative to entering all the features one by one, we can simply put ‘all’ instead.
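Conceptually, the DataFrame loader/converter configured above takes the incoming dictionary, keeps only the listed keys, and turns it into a one-row data frame for the rest of the pipeline. The module's actual internals are not exposed in the GUI; the snippet below is only an illustrative sketch of that behavior using pandas, with a hypothetical `load_dictionary` helper.

```python
import pandas as pd

# The "keys to keep" configured in the DataFrame loader/converter.
KEYS_TO_KEEP = ["sepal.length", "sepal.width",
                "petal.length", "petal.width", "variety"]

def load_dictionary(data, keys_to_keep):
    """Keep only the listed keys and convert the dict to a one-row DataFrame.

    Illustrative stand-in for the loader's 'Dictionary' input mode; the
    real module may behave differently in edge cases.
    """
    filtered = {k: data[k] for k in keys_to_keep if k in data}
    return pd.DataFrame([filtered])

sample = {
    "sepal.length": 5.1, "sepal.width": 3.5,
    "petal.length": 1.4, "petal.width": 0.2,
    "variety": "Setosa",
    "extra.field": "ignored",  # keys not listed are dropped
}
df = load_dictionary(sample, KEYS_TO_KEEP)
print(df.shape)  # one row, five columns
```

Choosing ‘all’ in the features selector then simply passes every one of these columns through to the predictor.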

Launching our model

That is it, we have finished the hardest part of our project! It was not that hard in the end, was it? All we need to do next is launch the model as a REST API web service.

To do so, click the ‘Rocket’ icon, in the upper-left corner, to initiate the process.


Last updated 5 years ago



[Figure: Arranging the components of the deployment flowchart.]
[Figure: The deployment flowchart finished.]
[Figure: The DataFrame loader's new configuration.]
[Figure: In the features selector, choose "all" features.]
[Figure: To deploy, click on the 'Rocket' icon.]