SmartPredict
  • Documentation
  • OVERVIEW
    • Presentation
      • Key features
      • Who may benefit from its use
      • The SmartPredict Graphical User Interface (GUI)
        • The SmartPredict Modules
        • Focus on the Notebooks
        • Practical shortcuts
    • Prerequisites
  • Getting started
    • Getting started (Part 1): Iris classification project
      • Project description
      • Step 1. Create the project
      • Step 2. Upload the dataset
      • Step 3. Preprocess the dataset
      • Step 4. Build the flowchart
        • Set up the flowchart
        • Configure the modules
      • Step 5. Run the build
      • Step 6. Deploy the project
      • Step 7. Make inferences with our model
      • Conclusion
    • Getting started (Part 2): Predicting the passengers' survival in the Titanic shipwreck
      • Project description
      • Step 1. Create the project
      • Step 2. Upload the dataset
      • Step 3. Preprocess the dataset
      • Step 4. Build the flowchart
      • Step 5. Run the build
      • Step 6. Deploy the pipeline
      • Step 7. Make inferences with our pipeline
  • MODULE REFERENCE
    • CORE MODULES
      • Introduction
      • Basic Operations
        • Item Saver
      • Web Services
        • Web Service IN and OUT
      • Data retrieval
        • Data fetcher
        • Data frame loader/converter
        • Image data loader
      • Data preprocessing
        • Introduction
        • Array Reshaper
        • Generic Data Preprocessor
        • Missing data handler
        • Normalizer
        • One Hot Encoder
        • Ordinal Encoder
      • Data selection
        • Features selector
        • Generic data splitter
        • Labeled data splitter
      • Training and Prediction
        • Predictor DL models
        • Predictor ML models
        • Predictor ML models (Probabilistic models)
        • Trainer ML models
        • Trainer/Evaluator DL models
      • Evaluation and fine-tuning
        • Cross Validator for ML
        • Evaluator for ML models
      • Machine Learning algorithms
        • ML modules in SmartPredict
        • Decision Tree Regressor
        • KNeighbors Classifier
        • KNeighbors Regressors
        • Linear Regressor
        • Logistic Regressor
        • MLP Regressor
        • Naive Bayes Classifier
        • Random Forest Classifier
        • Random Forest Regressor
        • Support Vector Classifier
        • Support Vector Regressor
        • XGBoost Classifier
        • XGBoost Regressor
      • Deep learning algorithms
        • Dense Neural Network
        • Recurrent Neural Networks
      • Computer Vision
        • Convolutional Recurrent Networks
        • Fully Convolutional Neural Networks
        • Face detector
        • Image IO
        • Image matcher
        • Yolo
      • Natural Language Processing
        • Introduction
        • Text cleaner
        • Text vectorizer
      • Time Series processing
        • TS features selector
      • TensorFlow API
        • LSTM Layer
        • Dense Layer
      • Helpers
        • Data/Object Logger
        • Object Selector (5 ports)
      • Conclusion
    • CUSTOM MODULES
      • Function
      • Class
      • Use cases

Set up the flowchart

In this part, we are going to remove the elements we no longer need and complete the flowchart by adding the remaining components of our model: the processing pipeline and the initial dataset.

1. Deleting the dataframe loader

As we have already processed our dataset, the dataframe loader is no longer needed, so we can delete it. To delete a module, click on the module menu (the three dots on its right), then on Delete. An alert box asks for confirmation; click on OK.

2. Adding the processing pipeline and the dirty dataset

With all the other flowchart modules already in place, let us attach the last two components.

1. From the right sidebar, click on the processing pipeline icon. A list of processing pipelines appears. Look for the one we created earlier and drag and drop it into the workflow.

2. Then, above the pipeline, let us place the dirty dataset we initially uploaded (remember the dirty Iris dataset?).

Going back to the main menu, click on the Dashboard icon. Then, from the right panel, find and click the Dataset icon (second from the left) and look for our initial dataset, iris_dataset_dirty.

Once you have found it, connect this initial dataset to the top of the processing pipeline so that the processing steps are applied to it.

As always, a dashed line shows the path for connecting the output of one module to the input of another, which makes the task effortless.

3. Finally, connect the whole set (processing pipeline + dirty dataset) to the feature selector; a code analogue of this data flow is sketched at the end of this page.

[Screenshot: Deleting unused modules is easy.]
[Screenshot: Adding the pipeline and the initial dataset.]
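
For readers who like to see the equivalent data flow in code, below is a rough scikit-learn sketch of what this wiring accomplishes: the dirty dataset goes through a processing pipeline (here, a simple imputer), and the cleaned output is handed to a feature selector. The file name, the "species" label column, the imputation strategy, and the number of selected features are assumptions made for illustration; in SmartPredict these steps are performed by the flowchart modules themselves, not by this code.

```python
# Rough scikit-learn analogue of the flowchart wiring above.
# Illustrative sketch only: file name, "species" label column,
# mean imputation, and k=3 are assumptions, not SmartPredict settings.
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.feature_selection import SelectKBest, f_classif

# The "dirty" dataset uploaded at the start of the tutorial (hypothetical file name).
df = pd.read_csv("iris_dataset_dirty.csv")
X = df.drop(columns=["species"])   # assumed feature columns
y = df["species"]                  # assumed label column

# The processing pipeline: here it simply fills in missing values.
imputer = SimpleImputer(strategy="mean")
X_clean = pd.DataFrame(imputer.fit_transform(X), columns=X.columns)

# The (processing pipeline + dirty dataset) output feeds the feature selector.
selector = SelectKBest(score_func=f_classif, k=3)
X_selected = selector.fit_transform(X_clean, y)
print(X_selected.shape)
```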