Step 6. Deploy the pipeline

This section describes the steps to deploy the pipeline.

Our build has run successfully; let us now deploy our pipeline.

After clicking the "Prepare for deployment" icon in the build tab, we land on a new tab: the deployment workspace.

Click on Prepare for deployment to initiate the process.

In this workspace, we are presented with a flowchart containing the built-in Web services IN and OUT modules.

We need to move the remaining modules into the deployment workspace.

To complete our flowchart, we need to add a few more modules. To find them quickly, type their names directly into the search field.

The required modules are:

  1. Dataframe loader (to see how to set it up, check the previous section, 'Building a flowchart'); note, however, that its input type will be "Dictionary" instead of "Dataframe".

  2. Ordinal encoder (same configuration as in the build flowchart)

  3. Features selector (same configuration)

  4. ML predictor (the core module)

  5. Our trained model (found in the 'Trained Models' sub-tab).

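The modules above chain together into one computation. As a minimal sketch of what the deployment flowchart does end to end (this is an assumption expressed in plain pandas/scikit-learn, not the platform's API; the column names and toy model are invented for illustration):

```python
# Sketch of the deployment flowchart: a dictionary arrives via Web services IN,
# the Dataframe loader wraps it in a dataframe, the Ordinal encoder encodes it,
# the Features selector filters columns, and the ML predictor applies the
# trained model. All names below are hypothetical.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# A toy trained model standing in for the 'Trained Models' module.
train = pd.DataFrame({
    "color": ["red", "blue", "red", "green"],
    "size":  ["S", "M", "L", "M"],
    "label": [0, 1, 0, 1],
})
encoder = OrdinalEncoder()
X_train = encoder.fit_transform(train[["color", "size"]])
model = DecisionTreeClassifier(random_state=0).fit(X_train, train["label"])

# Web services IN delivers a dictionary (input type: Dictionary).
payload = {"color": "blue", "size": "M"}
frame = pd.DataFrame([payload])                        # Dataframe loader

encoded = encoder.transform(frame[["color", "size"]])  # Ordinal encoder
selected = encoded                                     # Features selector: "all"
prediction = model.predict(selected)                   # ML predictor

print(prediction.tolist())
```

The decision tree here is only a placeholder; in the platform, the trained model dragged in from the 'Trained Models' sub-tab plays that role.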

The deployment flowchart resembles the build flowchart, with only a few changes.

The Dataframe loader configuration:

This time, notice that we choose "Dictionary" as the input type.

The Dataframe loader's deployment configuration.
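To make the "Dictionary" input type concrete, here is a hedged example of what such an input might look like and how a loader could turn it into a one-row dataframe (the field names are hypothetical; the actual keys depend on your training data):

```python
import pandas as pd

# Hypothetical Dictionary input: one key per column of the training dataframe.
sample_input = {
    "color": "blue",
    "size": "M",
}

# Conceptually, the Dataframe loader turns this single record into a
# one-row dataframe so downstream modules can process it.
frame = pd.DataFrame([sample_input])
print(frame.shape)  # one row, one column per key
```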

The Features selector's configuration:

Instead of selecting features one by one, we can simply enter "all" in the 'Selected features' field.

The deployment configuration of the features selector.
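The "all" shortcut can be pictured as follows (a sketch under the assumption that the selector either keeps every column or only the listed ones; the function and column names are invented):

```python
import pandas as pd

def select_features(frame, selected):
    # Assumed behaviour of the Features selector: the literal string "all"
    # keeps every column, otherwise only the listed columns are kept.
    if selected == "all":
        return frame
    return frame[selected]

df = pd.DataFrame({"color": ["blue"], "size": ["M"], "price": [10]})
print(select_features(df, "all").columns.tolist())
print(select_features(df, ["color", "size"]).columns.tolist())
```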

Deploying the pipeline

To deploy the project, click on the 🚀 rocket icon.

Deploy the pipeline by clicking on the rocket.

Afterwards, the pipeline deployment flowchart is displayed in the deploy tab. It follows the same principle as the build, except that the Dataframe loader is now present.
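Once deployed, the Web services IN and OUT modules mean the pipeline is reachable over HTTP. A hedged sketch of a client call (the endpoint URL, field names, and response format are assumptions, not the platform's documented API):

```python
import json
from urllib import request

def call_pipeline(payload, endpoint):
    """POST a dictionary to the deployed pipeline and return its JSON reply.

    Both the endpoint URL and the payload keys are hypothetical; consult
    your deployment's actual web-service details.
    """
    req = request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (not executed here; the URL below is a placeholder):
# call_pipeline({"color": "blue", "size": "M"},
#               "https://example.com/pipelines/my-pipeline/predict")
```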

For a better understanding of each module's function, a glossary of Machine Learning terms is provided here, here, and there.