Red Vision Examples
Now that we've got the Red Vision firmware loaded and the kit assembled on our RedBoard IoT, let's take a look at a few of the examples. We'll be demonstrating how to open and run the examples using the Thonny IDE. If you've never used Thonny before, you can download it and find documentation on its use on the Thonny website.
Thonny Setup
We'll need to start by opening the Thonny IDE and selecting the RedBoard IoT's port and MicroPython interpreter. Open Thonny and check the bottom-right corner of the window to confirm that Thonny has automatically detected the RedBoard IoT's COM port and is set to either "MicroPython (generic)" or "MicroPython (RP2040)". With the RedBoard IoT detected, it should read "MicroPython (generic) Board CDC @ COM#". If you don't see this, click on that message, select the correct COM port, and set the interpreter to MicroPython.
Example 01 - Hello OpenCV
The first example is set up as a hardware test for the Touch Display board and the LCD. It initializes the display and generates a template image of text and shapes. Click the folder icon in the top-left and select "MicroPython device" from the pop-up menu to open the Red Vision examples loaded on the RedBoard. In this menu, open the Red Vision Examples folder and double-click on "ex01_hello_opencv.py". Once it opens, click the green "Run" button at the top of the window. You should see the display initialize and then show the image below:
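If you're curious what that kind of drawing code involves, below is a minimal sketch of the same idea using standard OpenCV-style calls (cv2 with numpy arrays). The display call at the end is a hypothetical placeholder; the actual driver names come from the Red Vision firmware, so treat this as an illustration rather than the example's own code.

    # Illustrative sketch only -- "display" is a hypothetical driver object,
    # not part of the actual Red Vision example code.
    import cv2
    import numpy as np

    # Blank 240x240 greyscale canvas matching the LCD resolution
    img = np.zeros((240, 240), dtype=np.uint8)

    # Draw a test pattern of shapes and text, like the example's template image
    cv2.rectangle(img, (20, 20), (220, 220), 255, 2)
    cv2.circle(img, (120, 120), 60, 255, 2)
    cv2.putText(img, "Hello OpenCV", (60, 120), cv2.FONT_HERSHEY_PLAIN, 1.0, 255, 1)

    # display.show(img)  # hypothetical call to push the canvas to the LCD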
Example 02 - Camera
The second example performs a similar hardware test, but this time for the Camera Board and the HM01B0. It initializes both the camera and display, shows a quick splash image, and then streams whatever the camera sees to the display. In the Red Vision Examples folder, double-click on "ex02_camera.py". After opening the file, click the "Run" button and you should see the display initialize with the splash screen pictured below, followed by a greyscale stream of what the camera is viewing.
Try moving the camera around or placing objects in front of it to test the video stream.
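Under the hood, an example like this boils down to a capture-and-display loop. The sketch below shows that general shape using desktop OpenCV, with VideoCapture standing in for the kit's HM01B0 driver and a commented-out placeholder for the display call; it's an illustration, not the example's actual code.

    # Illustrative capture loop -- VideoCapture stands in for the HM01B0 driver,
    # and "display" is a hypothetical placeholder for the kit's LCD driver.
    import cv2

    cam = cv2.VideoCapture(0)      # open the default camera

    while True:
        ok, frame = cam.read()     # grab one frame
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # match the kit's greyscale stream
        # display.show(gray)       # hypothetical: push the frame to the LCD
        cv2.imshow("Camera", gray)
        if cv2.waitKey(1) == 27:   # press Esc to quit
            break

    cam.release()
    cv2.destroyAllWindows()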
Example 03 - Touch Screen
The third example tests the touch screen functionality by "turning on" pixels touched by a finger or other capacitive source (stylus, etc.). Double-click on "ex03_touch_screen.py" in the Red Vision Examples folder and click the "Run" button after it opens. The screen initializes with the same splash display as before, followed by a black screen with "Touch to Draw" printed in white text at the top. Try moving your finger across the display to draw or write something; the display should update along that path. It's not extremely precise, as it's just a 240x240px display, but you can still draw a frumpy smiley face like the one in the image below:
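If you'd like to experiment with the same turn-on-the-touched-pixel idea away from the hardware, here's a small desktop prototype where the mouse stands in for the touch screen. It's an illustrative sketch built on standard OpenCV calls, not the Red Vision example's code.

    # Desktop prototype: the mouse stands in for the touch screen.
    import cv2
    import numpy as np

    canvas = np.zeros((240, 240), dtype=np.uint8)   # black 240x240 canvas

    def on_mouse(event, x, y, flags, param):
        # Draw while the left button is held, mimicking a finger on the screen
        if event == cv2.EVENT_MOUSEMOVE and flags & cv2.EVENT_FLAG_LBUTTON:
            canvas[y, x] = 255                      # "turn on" the touched pixel

    cv2.namedWindow("Touch to Draw")
    cv2.setMouseCallback("Touch to Draw", on_mouse)
    while cv2.waitKey(1) != 27:                     # run until Esc is pressed
        cv2.imshow("Touch to Draw", canvas)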
Example 06 - Detect SFE Logo
The sixth example demonstrates how to use a basic vision processing pipeline. A pipeline is a sequence of steps used to extract meaningful data from an image. In this example, the pipeline attempts to detect the SparkFun flame logo in the camera's view using contour matching. Open the Red Vision Examples folder again, double-click "ex06_detect_sfe_logo.py" to open the example, and click the "Run" button.
While the example is running, if the camera detects the SparkFun logo in frame, it outlines the logo on the display and draws a bounding box along with a target over the logo's center, demonstrating how to extract useful numerical data from an image, such as the position and size of an object:
Now, if you've got something with the SparkFun flame logo handy (maybe the red box your order arrived in, or another SparkFun board), position it so the logo is in view of the camera. The example should identify it, outline it, and report the size and location of the logo, similar to the photo above.
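To make the pipeline idea concrete, here's a simplified sketch of contour matching with standard OpenCV calls. The reference image path "sfe_logo.png" and the 0.1 match cutoff are assumptions made for illustration; this shows the technique in general, not the example's actual code. One nice property of this approach: matchShapes compares contours via Hu moment invariants, so the match is reasonably tolerant of translation, scale, and rotation.

    # Illustrative contour-matching pipeline -- the reference image path and
    # match threshold are assumptions, not values from the Red Vision example.
    import cv2

    # Build a template contour from a reference image of the logo
    ref = cv2.imread("sfe_logo.png", cv2.IMREAD_GRAYSCALE)
    _, ref_bin = cv2.threshold(ref, 128, 255, cv2.THRESH_BINARY)
    ref_contours, _ = cv2.findContours(ref_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    template = max(ref_contours, key=cv2.contourArea)   # largest contour is the logo

    def detect_logo(gray_frame):
        # Threshold the frame and extract candidate contours
        _, binary = cv2.threshold(gray_frame, 128, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            # Lower matchShapes scores mean a closer shape match
            if cv2.matchShapes(c, template, cv2.CONTOURS_MATCH_I1, 0) < 0.1:
                x, y, w, h = cv2.boundingRect(c)            # position and size
                cv2.rectangle(gray_frame, (x, y), (x + w, y + h), 255, 2)
                cv2.drawMarker(gray_frame, (x + w // 2, y + h // 2), 255,
                               cv2.MARKER_CROSS, 20, 2)     # target over the center
                print("Logo at", (x, y), "size", (w, h))
        return gray_frame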



