Building a Simple Inference Dashboard with Streamlit

Streamlit lets you visualise inference results from custom-trained computer vision models without ever leaving the comfort of a Python environment.

Wei Loon Cheng
Editor

What is Streamlit?

Streamlit is an open-source Python app framework that allows you to easily convert your data scripts into informative, clutter-free web applications. It provides a front-end interface for visualising data through interactive dashboards, with an easy-to-use API that synergises well with other popular data manipulation and visualisation tools in the Python ecosystem.

Why Use Streamlit?

Python is the language typically used in the machine learning inference pipeline, which includes image pre-processing, model loading, model inference, and image post-processing. Since Streamlit is also built in Python, you can attach an elegantly simple front-end that displays inference results without worrying about routes or HTTP requests, and without learning a front-end framework like React.

Streamlit is also compatible with many commonly used data science libraries, such as NumPy, Pandas, and Matplotlib, so it can be quickly integrated into most data science applications. As an app framework, Streamlit continues to expand its customisation options, making it an increasingly popular choice for app development among a broader audience, including those not familiar with front-end development.

Streamlit aligns well with Datature’s goal of democratising the power of code and computer vision by offering a clean, simple app framework to those who want custom apps to suit their needs but lack front-end development experience. Datature takes this a step further by providing Streamlit sample code as a plug-and-play base for further development. For our example of building a basic model inference dashboard, Streamlit is the perfect tool.

How Do You Set Up Streamlit for Inference with TensorFlow?

To get started, it is recommended that you do this in a virtual environment (using virtualenv or virtualenvwrapper). Once your virtual environment is activated, you can install Streamlit.

To install Streamlit, run the following command:


`$ pip install streamlit`

To test that Streamlit is working, you can run the following command:


`streamlit hello`

A browser window similar to the image below should open. If it does not open automatically, you can access the app at `http://localhost:8501`.

How Do You Create Dashboard Widgets for Inference?

Streamlit contains widgets that represent the various components of a typical interface that users can interact with, such as text fields and buttons. Defining a widget is as simple as declaring a variable in Python; each widget call returns its current value directly, so there is no need to wire up callbacks as you would in JavaScript.

For this example, we will use the open source Python library, Datature Hub, to retrieve a trained model from Datature Nexus for inference.

To retrieve your artifact, all that’s needed is to input your project secret and model key, which we can parse using two text fields.

How to input your project secret and model key:


st.text_input("Project Secret", key="project_secret")
st.text_input("Model Key", key="model_key")

We then initialise Datature Hub to automatically download and load your model into memory, ready for inference.

The actual code to perform inference should be customised based on the type of model you have trained and the type of data to be processed. All these prediction scripts for our model offerings can be found on our GitHub. For this example, we will be using a custom-trained EfficientDet-D1-640x640 TensorFlow object detection model to identify red blood cells, white blood cells, and platelets in images.

Initialise Datature Hub to automatically download and load your model into memory:


from datature_hub.hub import HubModel

## Retrieve the values entered in the text fields above
project_secret = st.session_state["project_secret"]
model_key = st.session_state["model_key"]

## Initialise Datature Hub
hub: HubModel = HubModel(
    model_key=model_key,
    project_secret=project_secret,
)

## Load TensorFlow model
trained_model = hub.load_tf_model().signatures["serving_default"]
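Before the loaded signature can be called, each image needs to be resized to the model's expected 640x640 input and given a batch dimension. The `preprocess` helper below is an illustrative sketch (a nearest-neighbour resize with NumPy), not Datature's actual prediction script; the real pre-processing for your model may differ.

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Nearest-neighbour resize to (size, size) and add a batch dimension."""
    height, width = image.shape[:2]
    rows = np.arange(size) * height // size
    cols = np.arange(size) * width // size
    resized = image[rows][:, cols]
    return resized[np.newaxis, ...].astype(np.uint8)

## The batched tensor can then be fed to the loaded signature, e.g.:
## detections = trained_model(tf.convert_to_tensor(preprocess(origi_image)))
```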

To upload images for prediction, we can define a file uploader widget in one line of code. We can also restrict the allowed file types to .jpg, .png, and .jpeg, and allow multiple files to be uploaded at once.

How to upload images for predictions:


## Upload images
st.sidebar.file_uploader(
    "Image uploader",
    type=['jpg', 'png', 'jpeg'],
    accept_multiple_files=True,
    label_visibility="collapsed",
    help="Upload image(s) to predict. Supported file types: jpg, png, jpeg",
    key="uploaded_imgs",
)
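The uploader stores a list of in-memory file objects under its `key`. One way to turn each of them into an array suitable for inference is sketched below, assuming Pillow is available for decoding; `load_uploaded_image` is a hypothetical helper, not part of Streamlit or Datature Hub.

```python
import io

import numpy as np
from PIL import Image

def load_uploaded_image(uploaded_file) -> np.ndarray:
    """Decode an uploaded file-like object into an RGB image array."""
    image = Image.open(io.BytesIO(uploaded_file.read())).convert("RGB")
    return np.asarray(image)

## Iterate over the uploader widget's value, e.g.:
## for uploaded in st.session_state["uploaded_imgs"]:
##     origi_image = load_uploaded_image(uploaded)
```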

After the model has run predictions on an image, we can display a side-by-side comparison of the original and output images with predictions drawn on them, alongside a JSON list of the predictions and their attributes. We also include a slider to adjust the confidence threshold of the predictions in real time, which streamlines the visualisation of results (similar to the image below) without a separate post-processing step. The following code displays the before-and-after image comparison, with the JSON output on the right.

How to display the before and after image comparison and a display of the JSON file:


## Create threshold slider
st.slider(label="Threshold",
          min_value=0.0,
          max_value=1.0,
          value=0.7,
          step=0.01,
          label_visibility="collapsed",
          help="Threshold")

## Create three columns
col1, col2, col3 = st.columns([0.4, 0.4, 0.2])

## Display original image
with col1:
    st.markdown("Original")
    st.image(origi_image, use_column_width="auto")

## Display output image with predictions overlaid
with col2:
    st.markdown("Output")
    st.image(output_image, use_column_width="auto")

## Display JSON output of predictions
with col3:
    st.markdown("JSON Output")
    st.json(json_output, expanded=False)
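The slider value can be used to drop low-confidence detections before drawing them or building `json_output`. The sketch below assumes the model returns the standard TensorFlow object detection output keys (`detection_scores`, `detection_boxes`, `detection_classes`); adapt the key names to your model's actual signature.

```python
import numpy as np

def filter_predictions(detections: dict, threshold: float) -> list:
    """Keep detections whose confidence meets the slider threshold,
    returning a JSON-serialisable list of predictions."""
    scores = np.asarray(detections["detection_scores"])
    boxes = np.asarray(detections["detection_boxes"])
    classes = np.asarray(detections["detection_classes"])
    keep = scores >= threshold
    return [
        {"class": int(cls), "score": float(score), "bound": box.tolist()}
        for cls, score, box in zip(classes[keep], scores[keep], boxes[keep])
    ]
```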
    
    
Example of a Datature Inference Dashboard

To perform model inference on another image, simply upload a different image in the file uploader. To change the model being used, replace the Project Secret and Model Key with those of the new model and submit the inputs. The new model will then be loaded, and you can upload new images for inference.

Check out the full inference dashboard code and follow the instructions to set up and run the app.

How Can You Deploy Your Inference Dashboard?

Streamlit allows you to deploy your inference dashboard directly via their Community Cloud. All that is required is to push your Streamlit app to a GitHub repository, and sign in to Streamlit using the same GitHub account. Once you are signed in, you can choose the repository containing your Streamlit app, ensure that the main file path is the same file path that you ran the `streamlit run` local command with, and click Deploy. You can always view your app deployment limits in the settings tab.
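Community Cloud installs your app's dependencies from a `requirements.txt` file in the repository root, so it is worth committing one alongside your script. The package names below are assumptions based on the imports used in this walkthrough; match them to whatever your script actually imports.

```text
streamlit
tensorflow
datature-hub
numpy
Pillow
```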

Additionally, you can deploy your apps on cloud providers such as Google Cloud Platform, AWS, and Azure. You can learn more about how to achieve this here!

Additional Deployment Capabilities

Once your inference dashboard has been deployed, you can fully utilise your deep learning model to visualise predictions with any images at hand, taking the usage of your deep learning pipeline to the next level. With other interesting but useful features in Streamlit’s API reference, you have the freedom to further customise your inference dashboard in any way you desire. You can also visit Streamlit’s project showcase gallery for some inspiration.

If you decide to deploy your inference dashboard for public use, you can consider platform deployment on Datature Nexus to send data to a hosted model for predictions instead, so that you do not have to worry about handling high traffic. With our Inference API, you can always alter the deployment’s capability as needed.

Our Developer’s Roadmap

This is just the start of our efforts to provide plug-and-play code with Streamlit apps. Moving forward, we will continue to release Streamlit apps that display and provide base functionality for new features, giving users in our community a solid foundation and head start in developing innovative solutions for various use cases. We hope you will share your creations with us as well!

Want to Get Started?

If you have questions, feel free to join our Community Slack to post your questions or contact us about how the customizable inference dashboard fits in with your usage.

For more detailed information about Streamlit’s dashboard functionality, customization options, or answers to any common questions you might have, read more about Streamlit’s API reference.
