Training an Instance Segmentation Model with Custom Data

Learn how to train an instance segmentation model in 5 easy steps. Try out new network architectures and augmentation techniques quickly and easily. Read more!

Keechin
Editor

Introduction

Have you ever trained an AI to recognize objects in a photo? If so, you might have come across the popular technique of identifying objects in images using instance segmentation models such as MaskRCNN. MaskRCNN models enable an entire spectrum of industries to benefit from artificial intelligence with their pixel-level outlines of detected objects and the corresponding class for each. This opens up possibilities such as lesion segmentation in medical images, identifying defects in materials and measuring their sizes, and performing aerial imagery analysis on satellite imagery tiles.

Mask-Based Insights

Training a Custom MaskRCNN Model

We created Datature to remove all the hassle and complexity involved in getting neural network models like MaskRCNN to work with your applications. This is especially important since training a MaskRCNN model with a sufficiently large batch size typically requires orchestrating multiple GPUs, and combining losses from those GPUs is a hassle. Our platform gives you the power to do this without having to worry about infrastructure or code.
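For readers curious about what that orchestration involves when done by hand, here is a minimal sketch of multi-GPU training in raw TensorFlow 2 using `tf.distribute.MirroredStrategy`. The model and dataset are placeholders, and this illustrates the general approach rather than how Datature implements training internally.

```python
import tensorflow as tf

# Illustration only: MirroredStrategy replicates the model on each available GPU
# and handles gradient/loss aggregation across replicas for you.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# The global batch size is split across the available GPUs.
GLOBAL_BATCH_SIZE = 8

with strategy.scope():
    # Placeholder classifier; a real MaskRCNN pipeline is considerably more involved.
    model = tf.keras.applications.ResNet50(weights=None, classes=10)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# `train_dataset` is assumed to be a tf.data.Dataset of (image, label) pairs:
# model.fit(train_dataset.batch(GLOBAL_BATCH_SIZE), epochs=10)
```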

In this video, we'll demonstrate how to train an instance segmentation model using MaskRCNN on the Datature platform.

Using Your Own Data

By onboarding your data to the Datature platform, you can use your own data to train this exact model for your application. Additionally, you get the ability to customize and tweak the pipeline such as introducing image augmentations and customizing the classes which are all built into the the platform. The transfer learning method is a powerful technique Datature leverages to allow you to dramatically speed up the learning time of our neural networks. This means as long as you have your own data, you are just a few clicks away from building your own MaskRCNN model to solve real-world problems!
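If you are new to transfer learning, the sketch below illustrates the general idea in Keras terms: reuse a backbone pretrained on a large dataset and only train a small new head on your own classes. It is an illustration of the concept, not Datature's training code.

```python
import tensorflow as tf

# General transfer-learning pattern: freeze a pretrained backbone and
# train only a lightweight head on your own (much smaller) dataset.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
backbone.trainable = False  # keep the pretrained features fixed

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 custom classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
# Training only the small head converges far faster than training from scratch.
```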

Tutorial Recap

With Datature, you can simply onboard your image dataset as well as existing annotations to the platform. We ingest LabelMe and COCO Mask formats for teams looking to move their projects onto our platform. For more information, you can refer to our user guide here.
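For teams unfamiliar with the COCO Mask format, the snippet below shows the rough shape of a single polygon annotation as a Python dictionary. The field names follow the public COCO specification; the values are made up for illustration.

```python
# Rough shape of a COCO-style instance annotation (values are illustrative).
coco_annotation = {
    "id": 1,
    "image_id": 42,
    "category_id": 3,                                            # index into the "categories" list
    "segmentation": [[120.5, 80.0, 150.0, 85.5, 140.0, 130.0]],  # polygon as x, y pairs
    "bbox": [120.5, 80.0, 29.5, 50.0],                           # [x, y, width, height]
    "area": 1040.0,
    "iscrowd": 0,
}
```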

In the tutorial video, we show how you can collaboratively annotate masks and polygons on the Datature platform. We support polygonal tools and a range of tag management interfaces to ensure that teams can build their datasets fast.

Labelling Experience on the Datature Platform

Creating a MaskRCNN Training Pipeline

Using our drag-and-drop interface, as shown in the video, you will be greeted with our latest offering: MaskRCNN Inception V2. Simply use it in conjunction with our augmentation options, connect the boxes, and you are ready to go.

MaskRCNN Option on Datature Workflows

With the data, labels, and pipeline all set up, it is time to select an infrastructure to run the training on. We offer multi-GPU training and recommend it especially when running a batch size of more than 2 with the latest MaskRCNN model. Luckily, selecting this option is as easy as checking a few boxes.

Running Multi-GPU Training on Nexus

Monitoring Multiple Experiments

We feature a built-in TensorBoard where users can monitor their model training and determine very early on if there are signs of overfitting, convergence, or errors in their labels. After starting the training above, you will be brought to the following interface.

Teams Can Monitor Training and Export Results

Finally, we have a checkpoint system where the best weights are saved as the model training runs. This means that even if the training starts to diverge towards the end, users will still find their best set of weights (the ones with the lowest validation loss) sitting in their artifacts folder, patiently waiting for deployment.
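The idea is the same as the standard "save best only" checkpointing pattern. The sketch below shows it in Keras terms as a point of reference; it is not Datature's internal implementation, and the file path is hypothetical.

```python
import tensorflow as tf

# General "keep the best weights" pattern: write a checkpoint only when the
# validation loss improves, so a later divergence cannot overwrite it.
best_ckpt = tf.keras.callbacks.ModelCheckpoint(
    filepath="artifacts/best_weights.h5",   # hypothetical output path
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=True,
)
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[best_ckpt])
```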

Artifacts Management in one Project

Visualizing Your Model on Portal

We introduced Portal to help users take a closer look at the models they have trained. With Portal, you will be able to run inferences on a test dataset and see how the model might perform prior to deploying it. Portal features many tools, such as confidence threshold settings, video inference, tag filtering, and more, to help your team analyze your results.

Our tutorial on Portal can be found here, and if you'd like to propose features or contribute, head over to Portal's GitHub page!

Running Inferences with Portal

Deploying Your Model

Once users download or select a set of weights, they can deploy it in their own applications or on an IoT device such as a smart camera. Alternatively, Datature has an open-source model loader library named Hub, and we will soon be releasing our Model API Endpoints for users who would like to host their trained models online.
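Hub's own API is documented on its GitHub page. As a general illustration, assuming your trained model is exported as a TensorFlow 2 SavedModel, running a local inference looks roughly like the sketch below; the file paths, input dtype, and output keys are assumptions that depend on the actual export.

```python
import tensorflow as tf

# Assumption for this sketch: the trained model was exported as a TF2 SavedModel
# into ./exported_model (path is hypothetical).
model = tf.saved_model.load("exported_model")
infer = model.signatures["serving_default"]

# Load a single test image and add a batch dimension
# (expected shape and dtype depend on the export).
image = tf.io.decode_jpeg(tf.io.read_file("test.jpg"))
image = tf.expand_dims(image, axis=0)

outputs = infer(tf.cast(image, tf.uint8))
# Typical detection outputs include boxes, classes, scores, and masks;
# the exact keys depend on the exported signature.
print(list(outputs.keys()))
```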

Datature's Open-Source Model Loader Library on GitHub

If you'd like a demo on how to run those scripts to visualize your model's performance, definitely check out the video above, where I made a bunch of predictions on squirrels and butterflies!

Finally, we will be releasing an open-source app for visualizing models from Datature as well as other TensorFlow v2 based models; you will be able to interactively drop images into it, share the results, and essentially do a bunch of data science wizardry.

If you'd like to have an update on when these features get released, please consider subscribing to our fortnightly newsletter! ❤️

Want To Use This For Your Project?

Feel free to sign up for a free account to try it out here; we would love to hear your feedback!
