Some promotional ConnectCore 8M Mini development kits include a free Google Coral mini PCIe accelerator card. However, you can use Google Coral with any ConnectCore development board that has a PCI Express interface. Contact your local Digi representative or Digi Technical Support for more information on adding Google Coral functionality to your ConnectCore project. To purchase a Google Coral mini PCIe accelerator separately, go to https://www.coral.ai/products/pcie-accelerator/.
The following sample applications allow you to test Google Coral with the ConnectCore 8M Mini.
Still image classification
This demo runs an inference on the Edge Tensor Processing Unit (TPU) to classify an image of a bird.
The root file system contains a pre-trained TensorFlow Lite model and a text file listing almost a thousand bird species.
Test the model against the included image of a parrot:
# cd /opt/pycoral
# python3 classify_image.py \
--model mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels inat_bird_labels.txt \
--input parrot.jpg
You see results like this:
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
13.1ms
3.3ms
3.3ms
3.2ms
3.3ms
-------RESULTS--------
Ara macao (Scarlet Macaw): 0.75781
The example repeats the same inference five times. Inference speeds might differ on your device.
The last line shows the top classification label for the bird species with its confidence score, from 0 to 1. In the parrot.jpg example above, the image was classified as the species Ara macao (Scarlet Macaw) with a confidence of 75.8%.
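The classify_image.py script is built on the PyCoral Python library. The following is a minimal sketch of the kind of code it runs, assuming the public pycoral API; the script shipped in the root file system may differ in detail:
# Minimal PyCoral classification sketch (assumes the public pycoral API;
# the classify_image.py shipped on the device may differ in detail).
from PIL import Image

from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

labels = read_label_file('inat_bird_labels.txt')

# make_interpreter() loads the model and delegates execution to the Edge TPU.
interpreter = make_interpreter('mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input image to the size the model expects (224x224 for this model).
image = Image.open('parrot.jpg').convert('RGB').resize(
    common.input_size(interpreter), Image.LANCZOS)
common.set_input(interpreter, image)

# Run the inference and print the top result with its confidence score.
interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=1):
    print('%s: %.5f' % (labels.get(c.id, c.id), c.score))
Models compiled for the Edge TPU (the _edgetpu.tflite suffix) contain a custom operator that only runs through the Edge TPU delegate, which make_interpreter() loads.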
To test the model with other images:
- Print the available species in the labels text file:
# cat /opt/pycoral/inat_bird_labels.txt
0 Haemorhous cassinii (Cassin's Finch)
1 Aramus guarauna (Limpkin)
2 Rupornis magnirostris (Roadside Hawk)
3 Cyanocitta cristata (Blue Jay)
4 Cyanocitta stelleri (Steller's Jay)
5 Balearica regulorum (Grey Crowned Crane)
...
961 Mitrephanes phaeocercus (Tufted Flycatcher)
962 Ardenna creatopus (Pink-footed Shearwater)
963 Ardenna gravis (Great Shearwater)
964 background
- Select a species and search the Internet for images of it.
- Transfer the image to the target (for example with scp, as shown after this list).
- Re-run the classification model with the new image (in the example tigrina.jpg):
# python3 classify_image.py \
--model mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels inat_bird_labels.txt \
--input tigrina.jpg
----INFERENCE TIME----
Note: The first inference on Edge TPU is slow because it includes loading the model into Edge TPU memory.
13.1ms
3.3ms
3.3ms
3.3ms
3.3ms
-------RESULTS--------
Setophaga tigrina (Cape May Warbler): 0.36328
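To copy a new image from your development computer to the target, you can use, for example, scp over the network (here <target-ip> is a placeholder for the IP address of your device):
$ scp tigrina.jpg root@<target-ip>:/opt/pycoral/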
For more information on this and other image classification models, see https://coral.ai/models/image-classification/.
Speech recognition
This demo runs an inference on the Edge Tensor Processing Unit (TPU) to classify short English phrases spoken into a microphone.
Connect a microphone to the MIC jack on the ConnectCore 8M Mini Development Kit.
The root file system contains a pre-trained TensorFlow Lite model and a text file listing over 140 short phrases.
To test the model on your ConnectCore 8M Mini Development Kit:
- Check the list of recognizable phrases with this command:
# cat /opt/libedgetpu/keyword/config/labels_gc2.raw.txt
what_can_i_say
what_can_you_do
yes
no
start_window
start_application
...
twelve_o_clock
channel_twelve
position_twelve
- Run the demo with the following commands:
# cd /opt/libedgetpu/keyword/
# python3 run_model.py
The application automatically selects the audio input device and starts speech recognition (to check which capture devices are available, see the tip after these steps).
- Speak into the microphone: "yes". The console shows something like this:
negative (0.996)
negative (0.996)
negative (0.996)
*yes* (0.918) negative (0.082)
*yes* (0.832) negative (0.168)
negative (0.914) yes (0.082)
negative (0.996)
The left column highlights the dominant recognized phrase along with its confidence score, from 0 to 1.
- Speak into the microphone: "What can I say". The console shows something like this:
negative (0.996)
negative (0.988)
*what_can_i_say* (0.996)
*what_can_i_say* (0.996)
*what_can_i_say* (0.992) negative (0.008)
negative (0.996)
negative (0.996)
You can try other phrases from the list.
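If the application does not pick up your microphone, you can list the available ALSA capture devices with the standard arecord utility (part of alsa-utils; whether it is included depends on your image):
# arecord -l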
For more information on this speech classification model, see https://coral.ai/models/speech-recognition/.
Object detection with video camera
This demo runs an inference on the Edge Tensor Processing Unit (TPU) to classify objects appearing in front of a video camera.
Connect a camera and a display (or monitor) to your ConnectCore 8M Mini Development Kit to run this demo.
To test the model on your ConnectCore 8M Mini Development Kit:
- Check the list of recognizable objects with this command:
# cat /opt/libedgetpu/camera/all_models/imagenet_labels.txt
0 background
1 tench, Tinca tinca
2 goldfish, Carassius auratus
...
806 soccer ball
807 sock
808 solar dish, solar collector, solar furnace
809 sombrero
810 soup bowl
- Run the following command, passing the video device that corresponds to your camera (in the example /dev/video0; see the tip after these steps if you are unsure which device node to use):
# cd /opt/libedgetpu/camera/gstreamer
# python3 classify.py --videosrc /dev/video0
You see the camera video streaming to your display.
- Place an object from the list in front of the camera (for best results, use a uniform background). The display shows a text label with the result of the inference together with its confidence score, from 0 to 1.
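If you are unsure which device node corresponds to your camera, you can list the video devices available on the target:
# ls /dev/video*
If your image includes the v4l-utils package, v4l2-ctl --list-devices also shows which node belongs to which camera.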
For more information on this and other object detection models, see https://coral.ai/models/object-detection/.
Pose estimation with video camera
This demo runs an inference on the Edge Tensor Processing Unit (TPU) to identify key points on human bodies captured by a camera.
Connect a camera and a display (or monitor) to your ConnectCore 8M Mini Development Kit to run this demo.
To test the model on your ConnectCore 8M Mini Development Kit:
- Run this command, passing the video device that corresponds to your camera (in the example /dev/video0):
# cd /opt/libedgetpu/bodypix
# python3 bodypix.py --videosrc /dev/video0
You see the camera video streaming to your display.
- Stand in front of the camera and watch the model draw nodes at several points on your body and face.
For more information on this and other pose estimation models, see https://coral.ai/models/pose-estimation/.
Additional demos
See Add new examples to your device for information on adding and testing other Coral demos.