Step 7. Make inferences with our pipeline

Having built and deployed our pipeline, we can finally make inferences with it.

🏁 Perform testing with SmartPredict

Making inferences with our pipeline is the ultimate objective of all the previous stages. As an all-in-one platform, SmartPredict is designed to include every tool needed to manage an AI project from end to end, up to and including the testing stage.

To check whether our model performs as expected, let us run a test right from SmartPredict's Test tab.

SmartPredict's Test tab lets us either upload a JSON file directly or type in the JSON payload ourselves.

For our Titanic project, to check whether a given passenger survived, let us pick a passenger from the test dataset at random.
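For instance, picking a passenger at random takes only a few lines of Python. The rows below are the first three records of the standard Kaggle Titanic test set (ticket and cabin columns omitted for brevity):

```python
import csv
import io
import json
import random

# First three rows of the standard Titanic test set (illustrative sample).
csv_text = """PassengerId,Pclass,Name,Sex,Age,SibSp,Parch,Fare,Embarked
892,3,"Kelly, Mr. James",male,34.5,0,0,7.8292,Q
893,3,"Wilkes, Mrs. James (Ellen Needs)",female,47,1,0,7,S
894,2,"Myles, Mr. Thomas Francis",male,62,0,0,9.6875,Q
"""

# Parse the CSV into a list of dicts and pick one passenger at random.
rows = list(csv.DictReader(io.StringIO(csv_text)))
passenger = random.choice(rows)

# Print the record as JSON, ready to paste into the Test tab.
print(json.dumps(passenger, indent=2))
```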

Paste the code below into the Test tab:
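As an illustration (the exact field names depend on the features your pipeline expects; these are the standard Titanic test-set columns), the record of the first passenger in the test set, a third-class man traveling alone, could serve as input:

```json
{
  "PassengerId": 892,
  "Pclass": 3,
  "Name": "Kelly, Mr. James",
  "Sex": "male",
  "Age": 34.5,
  "SibSp": 0,
  "Parch": 0,
  "Fare": 7.8292,
  "Embarked": "Q"
}
```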


The anticipated result is 'Not Survived', since this passenger is a typical third-class passenger; moreover, he was a man who traveled alone. The obtained result should therefore be 0, i.e. this ill-fated passenger did not survive.

Let us test if our model provides the expected output.

The output of our model is Not Survived, just as we expected.

N.B. Testing with external software

As an alternative, we can test with the help of external tools such as Postman, a handy API testing tool.

To do so, we need the URL of our API, which is what lets us call the model anytime we need to inject it into the target use case.

Open the Monitor tab and copy the following information to the clipboard:

  • The active URL

  • The access token

Then, paste them into the corresponding fields of the testing software.

For the details of a typical test operation, check the previous walkthrough, and place the earlier code block in the request body within the Postman workspace.

If we test with external software, do not forget to include the access token in the request.
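The same call can also be scripted instead of using Postman. The Python sketch below builds the request with the standard library; the URL, the access token, and the header name carrying the token are placeholders (assumptions), to be replaced with the values copied from the Monitor tab:

```python
import json
import urllib.request

# Placeholders -- replace with the active URL and access token copied
# from SmartPredict's Monitor tab. The header name used to carry the
# token is an assumption; check how your deployment expects it.
API_URL = "https://example-smartpredict-deployment/predict"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"

# The same passenger record we tested in the Test tab.
payload = {
    "PassengerId": 892,
    "Pclass": 3,
    "Name": "Kelly, Mr. James",
    "Sex": "male",
    "Age": 34.5,
    "SibSp": 0,
    "Parch": 0,
    "Fare": 7.8292,
    "Embarked": "Q",
}

# Build a POST request with the JSON body and the access token header.
request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "access_token": ACCESS_TOKEN,
    },
    method="POST",
)

# Uncomment once real credentials are in place:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
```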