Both allow engineers, designers and manufacturers to easily build on-device Artificial Intelligence (AI) and Machine Learning (ML) inferencing into projects and products, and to take application ideas from prototype to production much faster.
These new Google products simplify the development of intelligent devices that are capable of neural network inferencing.
The entry-level local AI project kits for voice and vision give designers a simple way to experiment with machine learning, and enable developers to harness the power of AI and on-device inferencing to build intelligent solutions for a wide range of industries, including smart cities, manufacturing, automotive, healthcare and agriculture. Users can also bring high-speed ML inferencing to a range of existing systems using Coral’s USB Accelerator.
Commenting, Lee Turner, Global Head of Semiconductors and SBC at Farnell, said, “Many of our customers have expressed interest in integrating AI into their projects but often don’t know where to begin. To address this knowledge gap, we have invested in a selected range of AIY project kits and Coral USB Accelerators from Google to enable students, professional engineers, makers and manufacturers to easily develop intelligent devices that can solve real-world problems using AI and ML.
“Our customers can now use Google’s advanced technology to add speech and image recognition to their projects, taking creative on-device AI application ideas from prototype to production much faster.”
Google’s AIY and Coral technology provides a complete platform of hardware components, software tools and pre-compiled models for building devices with local AI. The in-stock range now available from Farnell includes:
- Google AIY Voice Kit (G950-00865-01) allows users to experiment with machine learning and AI by building their own natural language processor and connecting it to the Google Assistant, turning the kit into a voice assistant that responds to questions and commands. The kit can also add speech recognition and AI processing to Raspberry Pi projects. Users can work from the sample code or use the Google Cloud Speech-to-Text service, which converts spoken commands into text that triggers actions in a program’s code. Key phrase detection can be used in voice-controlled projects such as robots, music, games and more.
- Google AIY Vision Kit (G950-00866-01) contains all the components and software required to experiment with image recognition using neural networks. Users can build their own intelligent camera that can see and recognize up to 1,000 common objects; detect faces, emotions and poses; and carry out object segmentation using advanced image detection models. The kit, powered by Raspberry Pi, achieves computer vision without a cloud connection because real-time deep neural networks run directly on the device.
- Coral USB Accelerator (G950-01456-01) is an easy-to-deploy accessory that brings high-accuracy custom image classification to intelligent devices with AutoML Vision Edge. Users can connect the Google Edge TPU coprocessor to existing systems through a USB port to enable high-speed machine learning inferencing on a wide range of systems. The on-board Edge TPU is a small ASIC designed by Google that accelerates TensorFlow Lite models in a power-efficient manner: it can perform 4 trillion operations per second (4 TOPS), drawing 0.5 watts per TOPS, and can execute state-of-the-art mobile vision models at almost 400 frames per second. This on-device ML processing reduces latency, increases data privacy and removes the need for a constant internet connection.