Imagine spending hours writing code to train several object detection models, only to have to write yet another chunk of code to inspect and compare them before selecting your best choice. This, coupled with the lack of data visualisation tools, is a bane for anyone heavily invested in machine learning, and a huge hurdle for newcomers intending to enter this field.
We have consolidated these obstacles faced by machine learning practitioners and created an open-source library to address them: Portal.
Portal is the fastest way to load and visualize your deep learning models. Say goodbye to the extensive hours spent wrangling with cv2 or matplotlib code to test your models. With Portal’s sleek interface, you can test your model on images and videos while interactively changing inference confidence thresholds, IoU values and much more!
Using Portal, you can visualise the model inspection and comparison process and make a better-informed decision when selecting the best model you have trained. For more information, refer to our user guide here.
Let's walk through the steps. To inspect a model, you will first have to onboard your model and datasets on Portal. Portal supports Datature Hub, TensorFlow and DarkNet models (PyTorch support is incoming for you fans). You can register any of these models and load them.
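If you haven't set Portal up yet, one way to get it running locally is via the `portal.py` route mentioned below (a sketch, not authoritative setup instructions — the `requirements.txt` filename is an assumption; see the repository README for the exact steps):

```shell
# Clone the Portal repository and launch it locally
# (assumes Python 3 and pip are available)
git clone https://github.com/datature/portal.git
cd portal
pip install -r requirements.txt   # install dependencies (assumed filename)
python portal.py                  # launch Portal in your browser
```

Alternatively, the packaged Electron app skips these steps entirely.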
Register and Load Your Model
Register your model by clicking the "+" icon at the top of the application. This will bring you to the Model Registration interface, where you then input the directory of your model. Currently models from TensorFlow and DarkNet are supported, but support for more types of models will soon be available. Successfully loaded models will appear in the Model Loading interface.
Upon registering your model, you can load it, and it will then be ready to perform inferences. To access the model loading interface, click on the Load Model box located at the top bar. At this point, it is also worth noting that you can register multiple models on Portal and swap between them to perform comparisons later on.
Add Your Images Folder
To load your test images onto Portal, click on the “Open Folder” button located on the right toolbar, or simply press “O”. This will bring you to the folder selection interface, where you can browse through your directories (Electron app only) or paste the folder path (if you ran portal.py) into the dialog to select the folder containing your images.
Performing Predictions / Inferences
Now you’re all set to make your first prediction using Portal!
The images you loaded previously should appear at the bottom of the application. Select one of them, then click the “Analyze” button located on the left bar. Predictions may take a while, depending on the complexity of the model used and the specifications of your computer.
Adjusting confidence thresholds is easy: simply slide the Confidence Threshold bar, and predictions above or below the new threshold will appear or disappear. This will help you determine the best trade-off before deploying your model and provide better insights into how your model is performing, especially on video inferences.
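Under the hood, thresholding amounts to filtering detections by their score. A minimal sketch of the idea (not Portal's actual code; the `(label, score, box)` detection format here is hypothetical, for illustration only):

```python
def filter_by_confidence(detections, threshold):
    """Keep only detections whose confidence score meets the threshold.

    `detections` is a list of (label, score, box) tuples — a hypothetical
    format for illustration, not Portal's internal representation.
    """
    return [d for d in detections if d[1] >= threshold]

detections = [
    ("cat", 0.92, (10, 10, 50, 50)),
    ("dog", 0.41, (60, 20, 90, 70)),
    ("cat", 0.75, (15, 80, 55, 120)),
]

# Raising the threshold from 0.4 to 0.8 drops the weaker predictions
print(len(filter_by_confidence(detections, 0.4)))  # 3 detections survive
print(len(filter_by_confidence(detections, 0.8)))  # only 1 detection survives
```

Sliding the threshold bar in Portal re-runs this kind of filter instantly, without re-running the model itself.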
It is also important to note that Portal works for mask predictions as well, which is especially useful when you have Mask R-CNN models that you'd like to analyze. Potential use cases include crack segmentation, anomaly detection in biomedical images, and many other industrial applications.
There are many parameters that can be tuned when visualizing inferences. Advanced options such as IoU adjustment, class filtering, and hiding of classes are all available via the control panel on the right, along with the video frame interval for video predictions.
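For reference, the IoU (intersection over union) value that Portal lets you tune measures how much two boxes overlap, from 0 (disjoint) to 1 (identical). A small sketch of the standard formula (the `(x1, y1, x2, y2)` corner format is an assumption for illustration):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-shifted boxes -> ~0.333
```

A higher IoU setting keeps only detections that overlap more tightly, which is useful for suppressing duplicate boxes over the same object.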
Wait… did we mention that Portal supports video inferences?
Portal provides frame-by-frame predictions for videos of any duration. Once the video prediction is loaded, Portal lets you change the inference settings, such as classes and thresholds, without reloading the entire prediction.
This is a huge convenience over the traditional cv2 and matplotlib scripts used to analyze model performance on videos. Simply click Analyze on your video asset, and a progress bar will indicate our engine's progress in running inferences on it.
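The video frame interval mentioned above controls how densely the video is sampled: rather than inferencing every frame, only every n-th frame is analyzed. A rough sketch of that sampling logic (hypothetical, not Portal's implementation):

```python
def frames_to_infer(total_frames, interval):
    """Return the indices of frames that would be analyzed,
    sampling every `interval`-th frame starting from frame 0."""
    return list(range(0, total_frames, interval))

# A 300-frame clip (~10 s at 30 fps) with an interval of 30
# means one inference per second instead of one per frame
print(frames_to_infer(300, 30))       # [0, 30, 60, ..., 270]
print(len(frames_to_infer(300, 30)))  # 10 inferences instead of 300
```

Larger intervals trade temporal resolution for much faster analysis, which matters when the model itself is slow.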
Finally, to improve the presentation of your analysis or to highlight particular annotations, click on the cogwheel at the bottom-right corner of the canvas to adjust the visuals. Quality-of-life features such as opacity, annotation outline and image colour settings do not affect your underlying data; they are purely visual changes.
There are several hotkeys with Portal, such as `L` to show/hide labels, to aid your workflow. Press `?` to show a list of possible hotkeys to use!
So, What's Next?
Fork it. Clone it. Improve it. Portal is an open-source project built to give you the flexibility to contribute the features you want to see. More importantly, you can edit it and ship it with your product for free. If you have a super-duper custom model that you absolutely would love to use with Portal - no worries! We have included documentation on How You Can Contribute and Add Your Custom Model Into Portal.
If you have more questions, feel free to join our Community Slack and post them there. If you have trouble building your own model, simply use our platform, Nexus, to build one in a couple of hours for free!
Either way, head down to https://github.com/datature/portal to begin your experiments today!