Embark on the Journey of Vision Board with TensorFlow Lite Neural Networks

RT-Thread IoT OS
5 min read · Apr 26, 2024
1. Introduction

With the advancement of artificial intelligence, neural networks, and machine learning, demand for processing data at the edge is growing. In response, lightweight inference frameworks tailored for IoT and embedded devices have emerged, and TensorFlow Lite was born in this context.

Although TensorFlow Lite is compact and fast, it still faces challenges when deployed on resource-constrained microcontrollers, especially for tasks like image processing. Fortunately, the RT-Thread Vision Board offers strong performance and ample SDRAM for data processing, making it well suited to image-recognition tasks on small volumes of low-resolution image data.

If you’re interested in the RT-Thread Vision Board, you can find more details here: https://medium.com/@rt-thread/pre-order-arm-cortex-m85-vision-board-15fb4ceabf8c

Note: This article demonstrates how to use the edgeimpulse.com website to train a neural network model yourself and deploy machine-learning functionality on the board.

2. Introduction to Vision Board (Cortex-M85 Core)
Core: 480 MHz Arm Cortex-M85, including Helium and TrustZone technologies
Storage: Integrated 2MB/1MB flash and 1MB SRAM (including TCM; 512KB ECC protected)
Peripherals: xSPI-compatible OSPI (with XIP and on-the-fly decryption, DOTF), CAN-FD, Ethernet, USB FS/HS, a 16-bit camera interface, I3C, and more
Advanced Security Features: strong cryptographic algorithms, TrustZone, immutable storage, anti-tampering features with DPA/SPA attack protection, secure debugging, secure factory programming, and lifecycle management support

The core delivers 6.39 CoreMark/MHz, supporting demanding IoT applications that require high computational performance and DSP or ML capabilities.

3. Preliminary Setup
Below are the software and reference materials required for this experiment:
Development Tools: MDK5 V5.38, OpenMV IDE V4.0.14
Demo Code: https://github.com/RT-Thread-Studio/sdk-bsp-ra8d1-vision-board
Edge Impulse Website: https://studio.edgeimpulse.com
Image Materials: https://github.com/JiaBing912/VisionBoard-Picture-training-material

4. Development Environment Setup

This experiment is based on the vision_board_openmv demo. Double-click mklinks.bat to run the script, which will generate two folders: rt-thread and libraries.

Run the Env tool, enter menuconfig, navigate to Enable OpenMV for RT-Thread → Enable tflite support, and enable this option.

Save and exit, then run scons --target=mdk5 to regenerate the MDK5 project.

Next, open the MDK5 project, compile it, and flash it to the board.

5. Uploading to Edge Impulse for Training
5.1 Register an Account and Create an Edge Impulse Project
Open the Edge Impulse website, register or log in, and then create a new project under the Projects tab.

For example, let’s consider digit recognition.

5.2 Uploading Training Dataset
Follow these steps: Dashboard → Add existing data → Upload data. (You can obtain training-set image samples from the Image Materials link in the Preliminary Setup section. For this experiment, we use the handwritten-digit image samples from the mnist_lite folder.)

Note: If images in the dataset are not labeled (refer to the official documentation), add labels manually or select Enter label to define them. The dataset in the image materials consists of 28x28-pixel BMP files, which the Edge Impulse web page does not accept, so a Python script is needed to convert the images; the images in the mnist_lite folder have already been converted to 256x256 PNG format. A minimal conversion sketch is shown below.
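If you need to convert your own dataset, a script along these lines will do the job. This is a minimal sketch assuming the Pillow library; the folder names are illustrative, not from the original article:

```python
# Convert 28x28 BMP digit images to 256x256 PNGs for Edge Impulse upload.
# Minimal sketch using Pillow; SRC_DIR/DST_DIR are hypothetical paths.
import os
from PIL import Image

SRC_DIR = "mnist_bmp"   # hypothetical folder of 28x28 BMP files
DST_DIR = "mnist_png"   # hypothetical output folder

os.makedirs(DST_DIR, exist_ok=True)
for name in os.listdir(SRC_DIR):
    if not name.lower().endswith(".bmp"):
        continue
    img = Image.open(os.path.join(SRC_DIR, name)).convert("RGB")
    # NEAREST upscaling keeps the digit strokes crisp instead of blurring them
    img = img.resize((256, 256), Image.NEAREST)
    img.save(os.path.join(DST_DIR, os.path.splitext(name)[0] + ".png"))
```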

5.3 Generating Features
Click Impulse design → Create impulse. Then click Add a processing block and Add a learning block, connect them to the input data, select the training model, and save.

Continue to the Image section on the left, choose RGB for Color depth, and save. You will automatically be redirected to the feature-generation page. Click Generate features and wait for the process to complete; afterward, a 3D visualization of the features will appear.

5.4 Transfer Learning
Click Transfer learning on the left, then set training parameters such as the number of training cycles and the learning rate. Next, select the model best suited to your experiment and click Start training.

If the final results and accuracy do not meet your experimental requirements, try retraining by adjusting parameters and selecting a different training model.

5.5 Deployment on Vision Board
Click Deployment on the left, search for OpenMV Library, click Build, and wait for the package to be generated.

Unzip the downloaded archive, which contains trained.tflite, labels.txt, and ei_image_classification.py. Rename ei_image_classification.py to main.py, then copy all three files to the SD card (make sure the SD card contains no other files).

Connect the Vision Board’s USB-OTG port with a Type-C cable, drag main.py into the OpenMV IDE, open it, and run it. You will see the recognition results and confidence scores in the serial terminal.
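For reference, the exported script generally follows the pattern below. This is a simplified sketch based on Edge Impulse’s standard OpenMV image-classification example; the exact content of your ei_image_classification.py may differ depending on the firmware and export version:

```python
# Simplified sketch of an Edge Impulse OpenMV classification script.
# Loads trained.tflite and labels.txt from the SD card and classifies
# each camera frame, printing label/confidence pairs to the terminal.
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)   # RGB, matching the impulse's color depth
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((240, 240))      # square crop for the model input
sensor.skip_frames(time=2000)         # let the camera settle

net = "trained.tflite"
labels = [line.rstrip('\n') for line in open("labels.txt")]

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # tf.classify() scales the image to the model input and runs inference
    for obj in tf.classify(net, img, min_scale=1.0, scale_mul=0.8,
                           x_overlap=0.5, y_overlap=0.5):
        for label, score in zip(labels, obj.output()):
            print("%s = %f" % (label, score))
    print(clock.fps(), "fps")
```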

This translation is derived from an original article written by RT-Thread Club user “Jiabing” and is subject to the CC BY-SA 4.0 license. For reprinting, please include the original source link and this disclaimer. Original article link.

To order the RT-Thread Vision Board, go to: https://www.aliexpress.com/item/1005006676753692.html
