Step 6. Deploy the pipeline

This section walks through the steps to deploy the pipeline.

Our build has run successfully; let us now deploy our pipeline.

After clicking the "Prepare for deployment" icon in the build tab, we land on a new tab: the deployment workspace.

In this workspace, we are presented with a flowchart containing the built-in Web service IN and Web service OUT modules.

To complete the flowchart, we need to add a few more modules. To find them quickly, type their names directly into the search field.

The required modules are:

  1. Dataframe loader (to see how to set it up, check the previous section, 'Building a flowchart'); however, its input type will be "Dictionary" instead of "Dataframe".

  2. Ordinal encoder (same configuration as in the build flowchart)

  3. Features selector (likewise)

  4. ML predictor (the core module)

  5. and, of course, our trained model (found in the 'Trained Models' sub-tab).
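To make the role of each module concrete, here is a minimal scikit-learn sketch of an equivalent pipeline. This is an illustrative analogy only, not the platform's actual internals: the column names, feature values, and model type are all assumptions.

```python
# Illustrative analogy of the deployment flowchart using scikit-learn.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Training data (made-up columns, for illustration only)
train = pd.DataFrame({
    "color": ["red", "blue", "red", "green"],  # categorical feature
    "size":  [1, 2, 3, 2],                     # numeric feature
    "label": [0, 1, 0, 1],
})

# "Ordinal encoder": categorical columns become integer codes;
# remainder="passthrough" plays the role of a Features selector set to "all"
encode = ColumnTransformer(
    [("ordinal", OrdinalEncoder(), ["color"])],
    remainder="passthrough",
)

# "ML predictor" wrapping the trained model
pipeline = Pipeline([
    ("encode", encode),
    ("model", DecisionTreeClassifier(random_state=0)),
])
pipeline.fit(train[["color", "size"]], train["label"])

# At serving time, the "Dictionary" input is one record, wrapped
# by the Dataframe loader into a one-row dataframe
record = pd.DataFrame([{"color": "blue", "size": 2}])
print(pipeline.predict(record)[0])
```

The chain mirrors the flowchart: encode the categorical inputs, keep all features, then hand the result to the trained model for prediction.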

The Dataframe loader configuration:

This time, notice that we choose "Dictionary" as the input type.
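Conceptually, a "Dictionary" input is a single record of key/value pairs, as a web service would receive it in a JSON body, and the Dataframe loader wraps it into a one-row dataframe. A minimal pandas sketch (field names are illustrative):

```python
import pandas as pd

# One record as key→value pairs — the "Dictionary" input shape
record = {"color": "blue", "size": 2}

# The Dataframe loader's job, conceptually: wrap it in a one-row frame
df = pd.DataFrame([record])
print(df.shape)  # → (1, 2)
```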

The Features selector's configuration:

Instead of selecting features one by one, we can simply enter "all" in 'Selected features'.

Deploying the pipeline

To deploy the project, click the 🚀 rocket icon.

Afterwards, the pipeline deployment flowchart is displayed in the deploy tab. It follows the same principle as the build flowchart, except that the Dataframe loader is now present.
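Once deployed, the pipeline is typically called over HTTP by posting the dictionary input as JSON. The sketch below only builds such a request with the standard library; the endpoint URL and payload fields are hypothetical placeholders, to be replaced with the values shown in your deployment workspace.

```python
import json
from urllib import request

# Hypothetical endpoint URL — substitute your deployment's actual URL
url = "https://example.com/deployed-pipeline/predict"

# The "Dictionary" input, sent as a JSON body (illustrative fields)
payload = {"color": "blue", "size": 2}

req = request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# request.urlopen(req) would send it; not executed here.
print(req.get_method(), req.full_url)
```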

For a better understanding of each module's function, a glossary of Machine Learning terms is provided here, here, and there.
