Raspberry Pi is a capable little machine, but if you’re interested in developing your own embedded machine-learning applications, training custom models on the platform has historically been tricky due to the Pi’s limited processing power.

But things have just taken a big step forward. Yesterday, Edge Impulse, the cloud-based development platform for machine learning on edge devices, announced its foray into embedded Linux with full, official support for the Raspberry Pi 4. As a result, users can now upload data and train their own custom machine-learning algorithms in the cloud, and then deploy them back to their Raspberry Pi.


Four new machine-learning software development kits (SDKs) for Raspberry Pi are available this week, covering C++, Go, Node.js and Python, allowing users to program their own custom applications for inferencing. Support for object detection has also been added, meaning Raspberry Pi owners can use camera data captured on their device to train their own custom object detection algorithms, instead of having to rely on ‘stock’ classification models.
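For a sense of what inferencing with the new Python SDK involves, here is a minimal sketch that loads a trained model exported from Edge Impulse as a .eim file and classifies one window of raw features. The package name (edge_impulse_linux), the ImpulseRunner class and its init(), classify() and stop() methods follow the SDK's published example scripts, but treat the exact names, the model path and the feature values as placeholders to check against the current documentation.

# Minimal sketch, assuming the Edge Impulse Linux Python SDK
# (pip install edge_impulse_linux) and a .eim model exported from Edge Impulse.
from edge_impulse_linux.runner import ImpulseRunner

MODEL_PATH = "modelfile.eim"  # placeholder: path to a model downloaded from Edge Impulse

runner = ImpulseRunner(MODEL_PATH)
try:
    model_info = runner.init()  # returns project and model metadata
    print("Loaded model:", model_info["project"]["name"])

    # One window of raw features, e.g. flattened accelerometer samples;
    # the length must match the window size the model was trained with.
    features = [0.0] * 375  # placeholder values

    result = runner.classify(features)
    for label, score in result["result"]["classification"].items():
        print(label, round(score, 2))
finally:
    runner.stop()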

This will allow developers to build their own computer vision applications, either by using a Raspberry Pi camera or by plugging a webcam into one of the Raspberry Pi 4’s USB ports. Edge Impulse demonstrated the new machine-learning capabilities in a video that showed one of its engineers building, from scratch, a system capable of recognizing multiple objects through a camera, before deploying it back to a Raspberry Pi.
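As a rough illustration of that workflow, the sketch below grabs a frame from a USB webcam with OpenCV and passes it to an object-detection model. The ImageImpulseRunner class and its get_features_from_image() helper follow the Edge Impulse Linux Python SDK's example scripts, and the model path is a placeholder.

# Sketch of webcam object detection, assuming OpenCV and the Edge Impulse
# Linux Python SDK; names follow the SDK's example scripts.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

runner = ImageImpulseRunner("modelfile.eim")  # placeholder model path
try:
    runner.init()
    cap = cv2.VideoCapture(0)  # first USB webcam (or the Pi camera via V4L2)

    ok, frame = cap.read()
    if ok:
        # The SDK expects an RGB image and handles resizing/cropping itself.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features, cropped = runner.get_features_from_image(rgb)
        result = runner.classify(features)

        # Object-detection models return bounding boxes rather than a single label.
        for box in result["result"].get("bounding_boxes", []):
            print(box["label"], box["value"], box["x"], box["y"])

    cap.release()
finally:
    runner.stop()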

As well as collecting data from a camera or microphone, the new SDKs allow users to capture data from any other type of sensor that can be connected to a Raspberry Pi, including accelerometers, magnetometers, motion sensors, and humidity and temperature sensors — the list goes on.
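As an illustrative sketch of that sensor workflow, the snippet below collects a two-second window of accelerometer samples and classifies it with the same runner interface as above. Here read_accelerometer() is a hypothetical stand-in for whatever driver your particular sensor uses, and the sampling rate and window length must match what the model was trained with.

# Sketch of classifying a window of sensor data, assuming the same
# ImpulseRunner interface as above.
import time
from edge_impulse_linux.runner import ImpulseRunner

SAMPLE_HZ = 62.5   # must match the sampling rate used during training
WINDOW_MS = 2000   # must match the window length used during training

def read_accelerometer():
    # Hypothetical helper: return one (x, y, z) sample from your sensor driver.
    return (0.0, 0.0, 0.0)  # placeholder

runner = ImpulseRunner("modelfile.eim")  # placeholder model path
try:
    runner.init()
    samples = int(SAMPLE_HZ * WINDOW_MS / 1000)

    # Flatten the window as x0, y0, z0, x1, y1, z1, ... as the model expects.
    features = []
    for _ in range(samples):
        features.extend(read_accelerometer())
        time.sleep(1.0 / SAMPLE_HZ)

    print(runner.classify(features)["result"]["classification"])
finally:
    runner.stop()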

Alasdair Allan, Raspberry Pi’s technical documentation manager, said that while performance metrics for Edge Impulse were “promising”, they still fell a little short of what the company had seen using Google’s TensorFlow Lite framework, which also allows users to build machine-learning models for deep-learning tasks like image and speech recognition on the Raspberry Pi.
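For comparison, image classification with TensorFlow Lite on a Raspberry Pi typically follows the pattern sketched below, using the tflite_runtime interpreter; the model file and the dummy input are placeholders for whatever converted model you are benchmarking.

# Minimal TensorFlow Lite inference sketch for Raspberry Pi,
# using the tflite_runtime package (pip install tflite-runtime).
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # placeholder model file
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the model's expected shape and dtype.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class index:", int(np.argmax(scores)))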

SEE: Raspberry Pi and machine learning: How to get started (TechRepublic)

However, Allan noted that the huge variety in data types and use cases for machine-learning applications made it “really hard to compare performance across even very similar models.”

He added: “The new Edge Impulse announcement offers two very vital things: a cradle-to-grave framework for collecting data and training models, then deploying these custom models at the edge, together with a layer of abstraction.

“Increasingly we’re seeing deep learning eating software as part of a general trend towards increasing abstraction, sometimes termed lithification, in software. Which sounds intimidating, but means that we can all do more, with less effort. Which isn’t a bad thing at all.”
