Step 6. Deploy the pipeline
This section describes the steps to deploy the pipeline.
Our build has run successfully; let us now deploy our pipeline.
After clicking the "Prepare for deployment" icon in the build tab, we land on a new tab: the deployment workspace.
In this workspace, we are presented with a flowchart that already contains the built-in Web service IN and Web service OUT modules.
To complete our flowchart, we need to add a few more modules. To find them quickly, type their names directly in the search field.
The required modules are:
Dataframe loader (to see how to set it up, check the previous section 'Building a flowchart'); note that its input type will be "Dictionary" instead of "Dataframe".
Ordinal encoder (same configuration as in the build flowchart)
Features selector (same configuration as in the build flowchart)
ML predictor (the core module)
and, of course, our trained model (found in the 'Trained Models' sub-tab).
Notice that this time we choose "Dictionary" as the input type.
Instead of selecting features one by one, we can simply enter "all" in the 'Selected features' field.
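To make the role of each module concrete, here is a minimal, purely conceptual sketch of what the deployment flowchart does, written with pandas and scikit-learn as an analogy. The column names, the toy training data and the stand-in model are assumptions for illustration only; this is not the platform's code or API.

```python
# Conceptual analogy of the deployment flowchart, NOT the platform's code.
# Column names, the toy training data and the model are illustrative only.
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the trained model found in the 'Trained Models' sub-tab.
train = pd.DataFrame({"color": ["red", "blue", "red"], "size": ["S", "M", "L"]})
labels = [0, 1, 0]
encoder = OrdinalEncoder().fit(train)
model = DecisionTreeClassifier().fit(encoder.transform(train), labels)

# Dataframe loader with "Dictionary" as input type: one record arrives as a dict.
record = {"color": "blue", "size": "M"}
frame = pd.DataFrame([record])

# Ordinal encoder, then features selector set to "all": keep every column.
encoded = pd.DataFrame(encoder.transform(frame), columns=frame.columns)
selected = encoded[list(frame.columns)]  # "all" keeps every feature

# ML predictor: apply the trained model to the prepared record.
prediction = model.predict(selected)
print(prediction)
```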
Afterwards, the pipeline deployment flowchart is displayed in the deploy tab. It follows the same principle as the build flowchart, except that the dataframe loader is now present.
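Once deployed, the pipeline is reachable as a web service that takes a single record passed as a dictionary and returns the model's prediction. The snippet below is only a generic HTTP example: the URL, the payload shape and the response format are placeholders and depend on your actual deployment, not something defined in this guide.

```python
# Generic example of calling a deployed prediction web service.
# The URL and the payload/response shapes are placeholders, not the platform's actual API.
import requests

url = "https://example.com/deployed-pipeline/predict"  # placeholder endpoint
record = {"color": "blue", "size": "M"}                 # one record as a dictionary

response = requests.post(url, json=record, timeout=30)
response.raise_for_status()
print(response.json())  # prediction returned by the service
```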