Some promotional ConnectCore Mini development kits include a free Google Coral Mini PCIe Accelerator card. However, you can use Google Coral with any ConnectCore development board that has a PCIe interface. Contact your local Digi representative or Digi Technical Support for more information on adding Google Coral functionality to your ConnectCore project. To purchase a Google Coral Mini PCIe Accelerator separately, go to https://www.coral.ai/products/pcie-accelerator/.
To add Google Coral to your Digi Embedded Yocto project, edit your project’s conf/local.conf file and add the following lines:
# Pycoral libraries and packages
IMAGE_INSTALL_append = " python3-pycoral"
# Coral examples
IMAGE_INSTALL_append = " libedgetpu-camera libedgetpu-keyword libedgetpu-bodypix"
# Package installer for Python
IMAGE_INSTALL_append = " python3-pip"
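After saving these changes, rebuild your project image so the new packages are included, for example (assuming your project builds the dey-image-qt image):
$ bitbake dey-image-qt
Each of the three configuration lines serves a distinct purpose: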
- The first line adds the python3-pycoral package, which provides the libraries and packages required to exercise the Google Coral Mini PCIe Accelerator.
- The second line adds several examples that Digi has integrated into Digi Embedded Yocto (see Test Google Coral sample applications).
- The third line installs pip, the package installer for Python, which provides an easy way to add Python libraries without rebuilding your project. This is useful for testing Coral demos other than the ones Digi Embedded Yocto provides, as shown in the example below.
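For example, once the new image is running on the target, you can install an additional Python library directly on the device. The package name here is only an illustration; any package with a compatible distribution on PyPI works the same way:
# python3 -m pip install pillow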
Add new examples to your device
This section describes how to add an example, such as the Semantic segmentation demo, from the official Google Coral website.
This example performs semantic segmentation on an image. It takes an image as input and creates a new version of that image showing which pixels correspond to each recognized object.
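Before walking through the steps, the following sketch shows roughly what such a demo does with the pycoral API. It is a simplified illustration rather than the demo's actual source code, and it assumes the model and image files that the steps below download:
# simplified_segmentation.py: minimal pycoral inference sketch
import numpy as np
from PIL import Image
from pycoral.adapters import common, segment
from pycoral.utils.edgetpu import make_interpreter

# Load the Edge TPU-compiled model and allocate its tensors.
interpreter = make_interpreter('deeplabv3_mnv2_pascal_quant_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the input image to the size the model expects.
width, height = common.input_size(interpreter)
image = Image.open('bird.bmp').resize((width, height), Image.LANCZOS)
common.set_input(interpreter, image)

# Run inference; the output is a per-pixel class map.
interpreter.invoke()
result = segment.get_output(interpreter)
if result.ndim == 3:  # some models emit per-class scores instead of labels
    result = np.argmax(result, axis=-1)
print('Segmentation map shape:', result.shape)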
- Clone the pycoral git repository on your host PC:
$ git clone https://github.com/google-coral/pycoral.git
- Run the Bash script to install the requirements for the chosen demo:
$ cd pycoral
$ bash examples/install_requirements.sh semantic_segmentation.py
DOWNLOAD: deeplabv3_mnv2_pascal_quant_edgetpu.tflite
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   172  100   172    0     0    345      0 --:--:-- --:--:-- --:--:--   344
100 2841k  100 2841k    0     0  2941k      0 --:--:-- --:--:-- --:--:-- 2941k
DOWNLOAD: bird.bmp
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   138  100   138    0     0    277      0 --:--:-- --:--:-- --:--:--   277
100  514k  100  514k    0     0   643k      0 --:--:-- --:--:-- --:--:-- 6126k
- Create a folder on your target device to upload all the binaries:
# mkdir /opt/semantic
- Copy the required binaries from the host to your device, for example with scp as shown after this list:
  - The Python demo itself: semantic_segmentation.py
  - All the files downloaded by the requirements script:
    - deeplabv3_mnv2_pascal_quant_edgetpu.tflite
    - bird.bmp
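For example, using scp from the pycoral directory on your host (the IP address is a placeholder for your device's address, and the requirements script typically saves its downloads under test_data):
$ scp examples/semantic_segmentation.py \
      test_data/deeplabv3_mnv2_pascal_quant_edgetpu.tflite \
      test_data/bird.bmp \
      root@192.168.1.100:/opt/semantic/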
- Run the example on your target device (check the demo parameters with --help):
# cd /opt/semantic
# python3 semantic_segmentation.py \
    --model deeplabv3_mnv2_pascal_quant_edgetpu.tflite \
    --input bird.bmp \
    --keep_aspect_ratio \
    --output ./segmentation_result.jpg
You see results like this:
Done. Results saved at ./segmentation_result.jpg
- Display the resulting image using GStreamer:
# gst-launch-1.0 filesrc location=segmentation_result.jpg ! jpegdec ! imagefreeze ! waylandsink
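If your device has no display attached, you can instead copy the result back to your host PC and open it there (again, replace the placeholder IP address with your device's):
$ scp root@192.168.1.100:/opt/semantic/segmentation_result.jpg .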