Label Your Data with Segment Anything on Nexus

Leverage Segment Anything with IntelliBrush and our Label Everything tool to efficiently label your data on Nexus' Annotator.

Leonard So
Editor

What is Segment Anything?

Segment Anything (SAM) is a segmentation model released by Meta earlier in 2023, designed to segment objects in images through multi-modal prompts. SAM’s performance rivals, and in many cases exceeds, that of other models across a variety of task formats, from automatic mask annotation to interactive segmentation.

Segment Anything was designed to be a foundation model in the computer vision space, meaning that the model learns from a generalized task such that its feature space can be adapted to more specific vision tasks. To accommodate as many tasks as possible, the Segment Anything model was trained to predict valid masks given multi-modal prompts. Additionally, the researchers behind Segment Anything recognized the importance of data quality and quantity in training a foundation model. As such, they constructed a data engine to produce their open-source dataset of 11 million images and over 1 billion masks, the SA-1B dataset. The data engine was composed of three stages:

  1. Assisted Manual Annotation: Annotators annotate alongside SAM to pick out all masks in an image
  2. Semi-Automatic Annotation: Annotators are asked to annotate only the masks that SAM is unsure of or has missed
  3. Fully Automatic Annotation: SAM predicts masks on its own, grading and sorting them by quantitative confidence scores
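To make the idea of prompt-conditioned mask prediction more concrete, here is a minimal sketch using Meta’s open-source segment-anything package rather than the Nexus integration itself. It assumes the package is installed and a ViT-H checkpoint (sam_vit_h_4b8939.pth) has been downloaded; the image path and prompt coordinates are placeholders.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load the released ViT-H weights and wrap them in a predictor that
# handles image embedding, prompt encoding, and mask decoding.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# Embed the image once; prompts are then decoded against this embedding.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt SAM with a single foreground point. multimask_output=True returns
# several candidate masks (e.g. subpart, part, whole object) with quality
# scores, reflecting that one click can correspond to multiple valid masks.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),  # (x, y) pixel coordinates
    point_labels=np.array([1]),           # 1 = foreground point
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]

# Box prompts are also supported, e.g. predictor.predict(box=np.array([x0, y0, x1, y1]))
```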

The result of training Segment Anything is a model that has reached state-of-the-art benchmarks across many tasks, and its impact will only grow as more use cases are built around the fundamental semantic understanding of visual features that SAM provides.

How to Use SAM on Nexus

Datature strives to make state-of-the-art technology accessible and easy to use for all. Segment Anything, as the name implies, adapts well to visual features in most contexts and industry use cases without any additional training. With such generalized utility out of the box, it is the perfect tool to accelerate annotation on Nexus’ Annotator. With this in mind, we’re integrating SAM into our Annotator in two main ways.

IntelliBrush 2.0

The first is the option to use SAM in our flagship IntelliBrush feature, a model-assisted interactive segmentation tool driven by positive and negative clicks. Overall performance between the original IntelliBrush and the SAM-backed version does not differ by much, but users will have the option to use either one, as each has its own characteristics with pros and cons depending on the use case.

Here are some interesting differences that we have observed from our own use:

  • Subsections vs. Whole Objects: SAM prefers subsections of an object while IntelliBrush prefers to annotate the entire object, e.g. where IntelliBrush would annotate the entire car, SAM would annotate just the windshield.
  • Distinguishing Shadows: SAM outlines the object without shadow, while IntelliBrush is more inclined to outline the object and the shadow. However, SAM may not always respond well to darkly shaded areas.
Example of SAM being integrated into IntelliBrush

When you select IntelliBrush on an image for the first time, the model computes the image embeddings. After a few seconds, you can use IntelliBrush as usual. In addition to the normal IntelliBrush annotations with left and right clicks, when you hover your mouse over parts of the image, you will also see what IntelliBrush proposes as an initial object mask.
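For intuition on why the embedding step only happens once, here is a rough sketch of the interaction pattern using the open-source segment-anything package (not the IntelliBrush implementation itself): the heavy image encoder runs a single time per image, and each subsequent positive or negative click only re-runs the lightweight prompt and mask decoders. The click coordinates below are placeholders.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# The expensive step: run the image encoder once and cache the embedding.
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# First interaction: a positive (foreground) point, analogous to a left click.
points = np.array([[420, 300]])
labels = np.array([1])
masks, scores, logits = predictor.predict(
    point_coords=points, point_labels=labels, multimask_output=True
)

# Second interaction: add a negative (background) point, analogous to a
# right click, and feed back the previous low-resolution mask logits so
# the new prediction refines the earlier one rather than starting over.
points = np.array([[420, 300], [560, 310]])
labels = np.array([1, 0])  # 1 = foreground click, 0 = background click
mask_input = logits[np.argmax(scores)][None, :, :]
refined_mask, _, _ = predictor.predict(
    point_coords=points,
    point_labels=labels,
    mask_input=mask_input,
    multimask_output=False,
)
```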

Compilation of IntelliBrush being used in various industry use cases

What about Domain-Specific Images?

This method of labelling also works for medical images, especially for tasks such as outlining masses, skull-stripping, and much more. It allows domain experts to label 10x faster by combining their expertise with the “predictive” nature of IntelliBrush. Here’s an example of IntelliBrush selecting a cell outline in one click.

Example of IntelliBrush selecting a cell outline in one click

Everything

The second part of SAM’s integration into our platform is the Everything tool. Selecting the tool will automatically create predicted masks for every object detected in the image. 
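This behaviour is similar to SAM’s automatic mask generation mode. As a rough illustration, using the open-source segment-anything package rather than the Nexus implementation, the generator samples a grid of point prompts over the image and filters the resulting masks by predicted IoU and stability score; the threshold values shown are the library defaults.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")

# Sample point prompts on a grid and keep only confident, stable masks.
mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,           # density of the prompt grid
    pred_iou_thresh=0.88,         # model's own mask-quality estimate
    stability_score_thresh=0.95,  # robustness to threshold perturbation
)

image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)

# Each entry is a dict with a binary 'segmentation' array plus metadata
# such as 'area', 'bbox', 'predicted_iou', and 'stability_score'.
masks = sorted(masks, key=lambda m: m["area"], reverse=True)
```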

Example of SAM being integrated into the Everything Tool

These will appear as greyed-out masks with a dashed outline. From here, you can choose the appropriate class tag and assign it to the masks you deem accurate. These will then appear as solid masks in their assigned tag colors. Once satisfied, you can confirm the annotations.

Select the appropriate class tag and assign masks, annotations will appear as solid masks in their assigned tag colors

This is a quick way to reduce annotation time, and users will notice similarities with our Model-Assisted Labelling tool. However, the underlying SAM model identifies most objects in a class-agnostic manner, whereas Model-Assisted Labelling leverages your previously trained models on Nexus to label your data with classes assigned as well, drawing on the contextual knowledge gathered from prior trainings.

Datature makes it easy to try all these tools on the Nexus platform. With Datature’s Starter plan, users can sign up for free, with no credit card required, and receive 500 IntelliBrush credits to use on IntelliBrush and Everything mode, as well as on the rest of the intelligent tools available, including AI mask refinement, video interpolation, and video tracking. To learn more about the credit system and how to purchase more credits, please visit our pricing page.

What’s Next?

If you want to try out any of the features described above, please feel free to sign up for an account and use our Annotator to try out IntelliBrush and the Everything tool.

Our Developer’s Roadmap

Datature is always invested in applying the latest research to improve our platform. Given how rapidly machine learning is evolving in both the generative AI and computer vision spaces, we are closely monitoring and reviewing new research for potential features. We are internally testing another one of Meta’s latest projects, DINOv2, to examine its ability to capture features in even more generalized computer vision tasks.

Want to Get Started?

If you have questions, feel free to join our Community Slack to post them, or contact us about how intelligent annotation tools fit into your workflow.

For more detailed information about the SAM functionality, customization options, or answers to any common questions you might have, read more about the process on our Developer Portal.
