How to Load Vision Models on Raspberry Pi for Edge Deployment

We are excited to introduce Datature Edge, a simple way for users to deploy their trained deep learning models on edge devices like the Raspberry Pi.

Wei Loon Cheng
Editor

What is Edge Deployment?

Ever wondered how you are able to use Google Lens to translate a foreign restaurant's menu even while offline? Instead of sending the image to Google's servers for translation, the Google Translate app has a tiny built-in prediction model that runs entirely on your phone's processor. This is an example of edge deployment, where deep learning models are brought forward to mobile and embedded devices. In other words, prediction tasks run entirely on the local device, without the data ever leaving it.

Why is Edge Deployment Important?

Deep learning is becoming increasingly prevalent in our society. From shopping recommendations to identifying famous landmarks in pictures, people are increasingly reliant on such features in their mobile phones. Yet many state-of-the-art deep learning models require powerful hardware in the form of servers with multiple GPUs, which not everyone has access to, and mobile devices offer only limited computing power of their own. Hence, liberating deep learning from static servers has plenty of benefits.

  • Reduced Latency: Server communication incurs latency, as data needs to be sent to the servers for processing and the results sent back to the device to be displayed. This latency can become significant depending on the type of data (e.g. a 4K image vs a text file) or the amount of data (e.g. a video feed running at 60 FPS). Running prediction tasks on-device minimises the waiting time for a task to complete, ensuring a smoother, more responsive experience.
  • Connectivity: Since prediction models are hosted on-device, edge deployment gives devices the freedom to operate whether or not they are connected to the Internet, since there is no need for any external data transmission. This is crucial for applications like legged robots and drones mapping terrain in remote areas, or even in outer space.
  • Privacy: Data transmission signals are prone to interception. By removing the need to transmit data, edge deployment creates a privacy shield where all data collected by the device can only be accessed from the device itself. This is important to protect sensitive user data, especially when running on personal devices.

What Are the Applications of Vision Model Inference on Raspberry Pi?

Many devices are designed to be small and portable. Take Amazon's Echo or the Google Home Mini, for example: it would be impractical to install multiple GPUs in these devices simply for voice recognition. Other devices, like drones, have strict payload weight limits. The capability to run lightweight vision models on single-board computers like the Raspberry Pi allows drones to perform tasks like terrain mapping and surveillance.


Raspberry Pi offers integrations with a wide range of peripherals, some of which include controllers, displays, and speakers. With the right set of accessories, you can implement a deep learning solution for just about any use case. If you would like to integrate your Raspberry Pi with your drone, do check out this cool tutorial!

Why Datature Edge?

Our edge deployment of trained models furthers Datature’s mission to democratise the power of computer vision through low code requirements and ease of use.

Edge deployment, coupled with the Datature Nexus platform, gives users uninterrupted access to their trained models for inference without having to reload or manage the models themselves. This takes the responsibility of deployment off of you so that you can focus on putting the predictions to the most effective use. We make this simple by streamlining the entire process, from model loading to inference and, finally, visualisation.

How to Set Up Datature Edge on Your Raspberry Pi

For this example, we will be using the Raspberry Pi 4 Model B running the 32-bit Raspbian Buster OS. Please note that the steps involved may differ if you have a different architecture or operating system.
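
If you are unsure what your Pi is running, you can verify the architecture and OS release from Python before proceeding (the expected values in the comments below assume 32-bit Raspbian Buster):

# Print the CPU architecture and OS release to check compatibility.
import platform

print(platform.machine())  # expect "armv7l" on 32-bit Raspbian
with open("/etc/os-release") as f:
    print(f.read())  # expect VERSION_CODENAME=buster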

If you have not set up your camera, please refer to this tutorial. Ensure that your camera is enabled and you are able to capture a still image with `raspistill`. The camera will be initialised for 5 seconds before the image is captured.


raspistill -o image.jpg -w 640 -h 480
xdg-open image.jpg

We will be using Python 3.7 for the environment, PiCamera for streaming, TFLite for inference, and OpenCV for visualisation. To learn how to convert your trained TensorFlow model to TFLite, check out our handy guide.
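
To give a sense of how these pieces fit together before we automate everything, here is a minimal, illustrative sketch of a capture-inference-visualisation loop. It assumes a quantised TFLite object detection model with the common (boxes, classes, scores, count) output ordering; the model path and tensor ordering here are assumptions for illustration, and the Datature Edge scripts below handle all of this for you.

# Illustrative PiCamera -> TFLite -> OpenCV loop (not the Datature Edge script).
import cv2
import numpy as np
from picamera import PiCamera
from picamera.array import PiRGBArray
from tflite_runtime.interpreter import Interpreter

INPUT_SHAPE = (320, 320)  # must match your model's expected input size
THRESHOLD = 0.7           # minimum confidence for a prediction to be drawn

# Load the TFLite model ("model.tflite" is a placeholder path).
interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

camera = PiCamera(resolution=(640, 480), framerate=32)
raw_capture = PiRGBArray(camera, size=(640, 480))

for frame in camera.capture_continuous(raw_capture, format="bgr",
                                       use_video_port=True):
    image = frame.array
    # Resize and batch the frame to match the model's input tensor.
    resized = cv2.resize(image, INPUT_SHAPE)
    interpreter.set_tensor(input_details[0]["index"],
                           np.expand_dims(resized, axis=0))
    interpreter.invoke()

    # Output ordering assumes the common SSD signature; verify for your model.
    boxes = interpreter.get_tensor(output_details[0]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    h, w = image.shape[:2]
    for box, score in zip(boxes, scores):
        if score < THRESHOLD:
            continue
        # Boxes are normalised (ymin, xmin, ymax, xmax); scale to pixels.
        ymin, xmin, ymax, xmax = box
        cv2.rectangle(image, (int(xmin * w), int(ymin * h)),
                      (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)

    cv2.imshow("Inference", image)
    raw_capture.truncate(0)  # clear the buffer for the next frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()
camera.close()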

The first step is to download some handy scripts that should minimise any chances of throwing your brand-new Raspberry Pi out of the window (yes, we know it can be quite frustrating at times).


git clone https://github.com/datature/edge.git
cd edge/raspberry-pi

Run the setup script to set up your environment. This updates your firmware, configures your camera using `raspi-config`, and installs the necessary packages, such as Datature Hub, TensorFlow, PiCamera, and OpenCV, for model loading, inference, and visualisation.


chmod u+x setup.sh
./setup.sh

Once the script has run to completion, reboot your Raspberry Pi for the camera configuration settings to take effect. Then, check that you have the following four files in their respective directories.

  • /usr/bin/datature-edge: this is the binary executable compiled from `datature-edge.sh` that allows you to start and stop the camera streaming and inference, and switch between models by specifying the model key and project secret from Datature Nexus.
  • /usr/src/datature-edge/datature_edge.py: this is the Python script that grabs frames from the Raspberry Pi camera stream, performs inference, and displays the prediction results in real-time. The parent directory should contain other supporting files as well.
  • /etc/datature_edge.conf: this is a configuration file that stores user parameters such as the confidence threshold and model input size. They will be passed on to the Python script upon invocation (a sketch of how the script might consume them follows the service file below).

MODEL_KEY=
PROJECT_SECRET=
HUB_DIR=/home/pi/.dataturehub
MODEL_FORMAT=tflite
INPUT_SHAPE=320,320
THRESHOLD=0.7
FRAME_SIZE=640,480
FRAMERATE=32
MAX_BUFFER=100

  • /etc/systemd/system/datature_edge.service: this is a system-level file that is invoked upon startup. This allows the Python inference script to be executed automatically even after the Raspberry Pi has been rebooted. The inference script will also be automatically restarted upon failure (such as an out-of-memory error or an accidental KeyboardInterrupt) to ensure that minimal user intervention is required.

[Unit]
Description=Datature Edge Daemon Script
Wants=graphical.target
After=graphical.target

[Service]
Environment=DISPLAY=:0
Environment=XAUTHORITY=/home/pi/.Xauthority
EnvironmentFile=/etc/datature_edge.conf
Type=simple
ExecStart=sudo /usr/bin/python3 /usr/src/datature-edge/datature_edge.py \
    --model_key $MODEL_KEY \
    --project_secret $PROJECT_SECRET \
    --hub_dir $HUB_DIR \
    --model_format $MODEL_FORMAT \
    --threshold $THRESHOLD \
    --input_shape $INPUT_SHAPE \
    --frame_size $FRAME_SIZE \
    --framerate $FRAMERATE \
    --max_buffer $MAX_BUFFER

Restart=on-failure
RestartSec=5s

[Install]
WantedBy=graphical.target
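
As referenced above, here is a hypothetical sketch of how datature_edge.py might consume the parameters that the service passes in. The argument names mirror the unit file, but the actual script's argument handling may differ:

# Hypothetical argument parsing for datature_edge.py; the real script may differ.
import argparse

parser = argparse.ArgumentParser(description="Datature Edge inference script")
parser.add_argument("--model_key", help="model key from Datature Nexus")
parser.add_argument("--project_secret", help="project secret from Datature Nexus")
parser.add_argument("--hub_dir", default="/home/pi/.dataturehub",
                    help="cache directory for downloaded models")
parser.add_argument("--model_format", choices=["tf", "tflite"], default="tflite")
parser.add_argument("--threshold", type=float, default=0.7,
                    help="minimum confidence for a prediction to be displayed")
parser.add_argument("--input_shape", default="320,320",
                    help="model input size as width,height")
parser.add_argument("--frame_size", default="640,480",
                    help="camera frame size as width,height")
parser.add_argument("--framerate", type=int, default=32)
parser.add_argument("--max_buffer", type=int, default=100,
                    help="maximum number of frames held in the stream buffer")
args = parser.parse_args()

# Comma-separated sizes are split into integer pairs before use.
input_width, input_height = (int(v) for v in args.input_shape.split(","))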

Check the status of the Datature Edge service by running:


sudo systemctl status datature_edge.service

To disable this service so that you can run the script manually, run:


sudo systemctl disable datature_edge.service

How to Run Datature Edge on a Live Camera Stream

To initialise the camera stream, load your model, and begin the inference process, run `datature-edge` with your specified model key and project secret from Datature Nexus. This process can take some time depending on the size of your model. The model format and input size of the model are also required fields. Currently, the only model formats we support are TensorFlow (tf) and TFLite (tflite), but we plan to expand to more formats in the future.


#!/bin/bash
# MODEL_FORMAT is one of: tf, tflite
# INPUT_SHAPE is width,height (e.g. 320,320)
datature-edge \
    --model_key MODEL_KEY \
    --project_secret PROJECT_SECRET \
    --model_format MODEL_FORMAT \
    --input_shape INPUT_SHAPE
datature-edge --help # to view all options

The executable will download your model using our open-source model loader, Datature Hub, and load it into memory. If you would like to use a custom model, you can change the execution mode by adding the option `--local`. You will then need to specify a path to your custom model and a path to the label map, as shown below.


#!/bin/bash
# MODEL_FORMAT is one of: tf, tflite
# INPUT_SHAPE is width,height (e.g. 320,320)
datature-edge \
    --local \
    --model_path MODEL_PATH \
    --label_path LABEL_PATH \
    --model_format MODEL_FORMAT \
    --input_shape INPUT_SHAPE
datature-edge --help # to view all options

The camera will be initialised and start capturing frames. The model will then analyse each frame and return the predictions, if any. You should be able to see a window displaying the output from the camera stream. To test if your model works, grab a relevant image on your phone or laptop and point the camera at it. If your model has been trained well, you should be able to see the predictions overlaid on the camera feed.

To stop Datature Edge, run:


datature-edge --stop

Voila! You now have a working edge-deployed inference service!

Additional Deployment Capabilities

Once inference on your Raspberry Pi is up and running, you can put your deep learning model to full use at the edge. If latency is not a priority, you can also consider platform deployment on Datature Nexus, sending data to a hosted model for predictions instead. With our Inference API, you can alter the deployment's capability as needed.

Our Developer’s Roadmap

We have a roadmap in place to make Datature Edge more versatile by adding compatibility with other edge deployment formats such as ONNX. This will allow Datature Edge to serve a wider suite of devices and applications. We are also looking at integrating a simple frontend inference dashboard with Streamlit to stream the camera feed and prediction results for convenient visualisation.

Want to Get Started?

If you have questions, feel free to join our Community Slack to post them, or contact us about how edge deployment fits in with your use case.

For more detailed information about Datature Edge’s functionality, customization options, or answers to any common questions you might have, read more about Datature Edge on our Developer Portal.
