Machine Learning at the Edge on the Raspberry Pi - On Demand Webinar
The worlds of Machine Learning and Edge Computing are merging faster than anyone may have imagined even a couple of years ago. Today you can build custom ML models that are optimized to run on low-power microcontrollers or single-board computers like the Raspberry Pi.
Plus, if you use a low-power cellular module like the Blues Wireless Notecard, you can securely route ML-derived inferences to your cloud of choice!
In this webinar, we walk through two somewhat impractical but super fun projects that test the limits of ML on the Raspberry Pi:
- The "Remote Birding with TensorFlow Lite and Raspberry Pi" project will show you how to use ML on an RPi in a remote environment (complete with cellular connectivity and solar power!).
- In "Busted! Create an ML-Powered Speed Trap" we will walk through building a portable "speed trap" that uses ML to identify vehicles, a radar sensor to measure speed, and the Notecard to report data to the cloud.
By the end of the webinar, you will have a basic understanding of some common image-related ML concepts and how to start implementing them in an edge computing scenario on the Raspberry Pi.
Webinar Transcript - Machine Learning at the Edge on the Raspberry Pi
Speaker: Rob Lauer - Director of Developer Relations 00:00
Hi, everyone. Good morning, good afternoon, and good evening to all. Welcome to this webinar today as we cover a useful and pragmatic topic, that being machine learning on the Raspberry Pi, but through the lens of two kind of silly or impractical, but really fun, projects: remote bird detection and a portable traffic monitoring/speed trap solution. Now, we've only budgeted a short 45-ish minutes for this webinar, yet we're going to be cramming a lot of topics and concepts into this time, including those you see here. Before we jump in, let's cover some quick logistics. This webinar you see is actually pre-recorded. Now I usually like doing them live, but since I'm going to be demoing a project running on a Raspberry Pi to my right over here, I wanted to be able to switch views seamlessly. Therefore, while the Rob you're looking at now is pre-recorded, the real Rob is ready to answer your questions. Just use the Q&A panel provided and I will answer them as quickly as possible. Likewise, if you run into any issues with the audio or video, have no fear, there will be a recording posted on YouTube in the coming days. You'll find it on the Blues Wireless YouTube channel. You will also receive a link in a follow-up email. While you're there on our YouTube channel, be sure to check out our brand-new tongue-in-cheek Blues Wireless TV videos.
So quick intros: My name is Rob Lauer, and I'm Developer Relations lead at Blues Wireless. If you're a Twitter person, you can find me there @RobLauer. With me today is special guest Alessandro Grande. Alessandro is Director of Technology at Edge Impulse, and he's going to be providing some more detailed info about how Edge Impulse can accelerate your machine learning capabilities. It just so happens this service is what I used for the portable speed trap project. Super happy to have Alessandro here. You can also find him on Twitter @Alessandrodevs.
I mentioned I work for this company called Blues Wireless. Now we are not a cellular service provider, but rather an IoT company that is focused on bringing cellular connectivity to the masses. Now since I'm guessing many of you are new to the Blues Wireless ecosystem of hardware and services, first off, welcome. At Blues, our mission is to make cellular IoT easy for developers and affordable for all. I promise this is relevant to what we're talking about today. Our focus is on securing your data from the moment it's acquired by a sensor all the way through to landing on your cloud application of choice. All of our hardware solutions are low power out of the box, to the tune of about eight microamps when idle. We're also a developer-focused company. Now most of us at Blues are current or former engineers, so we know the pains that we all experience and are building solutions to really ease your IoT connectivity burdens. Now, to help visualize where Blues Wireless fits in your current IoT solution, let's start with your sensors. You're acquiring data of all types: temperature, humidity, motion, gases, and so on. This data is likely processed by a microcontroller or single-board computer, but you need to get that data to your cloud. You'd then use the Blues Wireless Notecard to securely deliver data over cellular to our cloud service Notehub. That then lets you route data to your cloud of choice. Notehub really takes a lot of the pain out of securely transferring data and conforming to whatever data structures your endpoint is anticipating.
Now zooming in on our hardware quickly, the Notecard is the core of what we provide. It’s a low power cellular and GPS module measuring a tiny 30 by 35 millimeters. It has an M.2 edge connector for embedding in your project. The API, the way you interact with a Notecard, is all JSON. Gone are the days of archaic AT commands to manage your cellular modem. We also have SDKs for Python, Go, Arduino, and C/C++ as well. There are Notecard varieties that work globally using NB-IoT, LTE-M, and Cat-1 cellular standards. To make it easier to use your Notecard when you're prototyping, or when you're ready to embed in a permanent solution, we provide Notecarriers. These are development boards that allow you to snap in a Notecard and connect it to virtually any solution you can dream up. So the Notecarrier AF, for example, includes a Feather-compatible socket. The A model includes headers that let you hook it up to any microcontroller you want. The B is a small form factor Notecarrier, and the Notecarrier Pi is a Pi HAT for working with any Raspberry Pi-compatible single-board computer. That's what we're going to be using today.
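To make the JSON-in, JSON-out idea concrete, here is a minimal sketch of the kinds of requests you send a Notecard. The request shapes (`hub.set`, `note.add`) follow Blues' documented JSON API, but the product UID and timing values below are placeholder assumptions, and the I2C transport (via the `note-python` library) is left in comments since it requires real hardware.

```python
import json

# Hypothetical product UID identifying a Notehub project (placeholder).
PRODUCT_UID = "com.example.rob:birding"

def hub_set_request(product_uid, mode="periodic", outbound_mins=2):
    """Build a hub.set request that points the Notecard at a Notehub project."""
    return {
        "req": "hub.set",
        "product": product_uid,
        "mode": mode,               # "periodic" conserves power vs. "continuous"
        "outbound": outbound_mins,  # how often (in minutes) to sync outbound data
    }

def note_add_request(notefile, body):
    """Build a note.add request that queues an event (a "Note") for Notehub."""
    return {"req": "note.add", "file": notefile, "body": body}

req = note_add_request("bird.qo", {"bird": "Junco hyemalis", "probability": 0.89})
print(json.dumps(req))

# On a real Raspberry Pi you would send these over I2C, e.g. with note-python:
#   from periphery import I2C
#   import notecard
#   card = notecard.OpenI2C(I2C("/dev/i2c-1"), 0, 0)
#   rsp = card.Transaction(hub_set_request(PRODUCT_UID))
```

Every interaction follows this same pattern: build a small JSON object, send it, get a JSON response back.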
Again, Notehub is the Blues Wireless cloud service that receives data from the Notecard and in turn routes it to your cloud application of choice. Also, within Notehub, you can manage fleets of devices, you can update both microcontroller and Notecard firmware over the air, and Notehub is all about securely transmitting data, both from the Notecard and to your cloud. It's all accessible via a developer-friendly, JSON-based API. Everything's JSON in, JSON out. Now today, again, we're looking at these topics of ML and cellular IoT and the Raspberry Pi through the lens of a backyard birding solution and a speed trap. Two pretty different scenarios. However, the Venn diagram of topics does match up incredibly well when we look at the core issues and concepts we're tackling with each.
The first step in this process, for me was to answer this question, what is machine learning, actually? I'll be honest, if I knew anything about machine learning before I started this project, it's that about 60% of the time it works every time, so it's imperfect to say the least. Case in point, as an engineer, my instinct wasn't really to read any silly guides or tutorials, it was just to download some code and start using it. Literally, for one of my first attempts at an ML concept called image classification, I took one of these little squishy stress balls here and ran it through a publicly available image classification model. And the results, as you can see, were sketchy at best, so I knew I'd have to do at least a little bit of research. So I started my ML journey by reading quotes like this: “Machine learning is the study of computer algorithms that improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence.”
Okay, I kind of get it. It sounds a bit complicated, though, and I'm kind of an idiot, so I sort of reframed that last quote into “Machine learning lets computers learn from data… versus being explicitly told what to do.” Now, if you think of what we as software engineers do all day every day, it's to explicitly tell a computer to do something. ML is a little more like, “Hey computer, here’s a vague idea of a problem I want you to solve. Let me know how you interpret future questions that I send your way.” Now, to use machine learning in practice, you need a model. At its core, an ML model is a file that's been trained to recognize patterns in data. It could be images, could be text, audio, you name it, but these model files are only as smart as we train them to be. For instance, if you train a model on cats, will this cat be recognized? It's certainly a cat, but it has no fur and it's wearing a robe for some reason. Finding success in machine learning is about providing the right instructions, or the right blueprint, to enable your model to learn appropriately.
A lot of it is garbage in, garbage out. For today, think of machine learning as a means of providing some guideposts for a computer to learn from and make its own inferences. A subset of machine learning is this field of TinyML. Now this is the idea of utilizing small and highly optimized ML models on low power devices or smartphones, like we're going to be doing today with the Raspberry Pi. This also ties into another concept you've likely heard about: edge computing. Edge computing is a super important concept in this realm. With edge computing, you are physically moving computation closer to the source of data—think smart meters, smart streetlights, or rain gauges in the field for irrigation. Accumulating and processing data on these devices is also a more secure process than transferring everything back and forth from the cloud.
You obviously have reduced bandwidth requirements when you are making these calculations and inferences on a device versus sending all that data to the cloud for processing. To throw yet another machine learning term at you, we get to TensorFlow Lite. TensorFlow Lite is a platform from Google that enables on-device machine learning, and you can create inferences on extremely low powered devices like microcontrollers and even smartphones. There are TensorFlow Lite SDKs available that work with a variety of languages and run on MCUs and single-board computers like the Raspberry Pi, which makes it a pretty good place to start. To build on the story and to make the usage of TensorFlow Lite even easier, we have solutions like Edge Impulse.
I'm going to save the proper description of Edge Impulse for Alessandro, but I will say, Edge Impulse is my favorite ML SaaS product available today. They didn't even have to pay me to say that. With mostly web-based tooling, you can pretty easily create, train, and test your ML models in a really engaging toolset; I highly recommend checking it out. One last explanatory slide about what we're tackling today when we talk about machine learning in the context of these two projects. We're talking about image-based machine learning. At a high level, there are three types of image-based ML problems to be solved. The first is image classification. We're answering a single question. Is this a dog? What single object does this image represent? This is what we're going to be using in the birding project. Next is object detection. How many objects are in this image? What are the objects in this image? This is used with Edge Impulse in the speed trap project. The last is image segmentation. What are the objects in the image? Where are they precisely in the image?
Each level, as you work up, requires more horsepower from your microcontroller or single-board computer, as you might imagine.
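One way to internalize the difference between these three problem types is to look at the shape of the answer each one gives back. This toy sketch is pure illustration: the labels, scores, and boxes are invented and not tied to any particular framework.

```python
# Illustrative outputs only; all labels, scores, and coordinates are made up.

# 1) Image classification: one answer for the whole image.
classification = {"label": "dog", "score": 0.91}

# 2) Object detection: many answers, each with a bounding box (x, y, w, h).
detection = [
    {"label": "car", "score": 0.88, "box": (12, 40, 150, 90)},
    {"label": "bicycle", "score": 0.67, "box": (200, 60, 70, 45)},
]

# 3) Image segmentation: an answer for every pixel. Here, a tiny 4x4 "image"
# where 0 = background and 1 = dog.
segmentation = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]

print(classification["label"])                            # one label
print(len(detection))                                     # a count of objects
print(sum(cell for row in segmentation for cell in row))  # pixels labeled "dog"
```

The growing size of these outputs hints at why each level demands more compute than the last.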
Speaker: Rob 10:28
With that high-level intro into some of the concepts we're covering today, let's dive into the projects themselves, starting with remote birding on the Raspberry Pi. Now I'm sure there are a million simpler ways I could have approached this problem, say with a simple motion sensor at a bird feeder. But not only would our false-positive rates be through the roof, it’s just not that fun either. In the IoT world, we are all about fun and impractical. Right? My story really began when one day this Tweet came across my feed: “And just like that, I turned into a person that calls people to come look at the bird feeder.” And I thought, yes, this is totally me. Especially during the pandemic, I could be caught in a trance-like state just staring outside at the birds, coffee in hand, robe around me. But I also thought, hey, I could build a device to do this watching for me, which is a little bit depressing if you let yourself think about it. But I like to reserve my share of ridiculousness for these projects, and this one is no exception. The question I had to ask was whether I could build a device that would recognize birds and then subsequently notify me which bird was at the feeder. So I started thinking about how I was going to build this project. Now, it ended up being a bit of overkill, but I knew I wanted to use a Raspberry Pi, as it's really powerful and could handle virtually any problem I threw at it. I knew I'd be using something called machine learning, which I barely understood. I'd need a camera to take a picture of a bird and a language like Python to program my logic. Then there was the issue of running it off the power grid, and what about connectivity? It'd be outside of Wi-Fi range, to say nothing of notifications. I got myself all in a tizzy pretty quickly. I then channeled my internal Bob Ross, took a deep breath, and broke it down into some more tangible pieces.
The first issue to tackle was creating a valid machine learning model or, really, ideally for me in this scenario, just using one that already existed, because I'm really lazy. This is, in fact, what I did. Now, Google provides a resource called TensorFlow Hub that contains a variety of free and open ML models. What did I find? Well, there's a bird image classification model that uses images from every bird in existence. Now, this sounds good in theory, right? No gaps in my data streams. It's 100% guaranteed that birds at my backyard feeder will be in this model, true. However, remember my whole garbage in, garbage out point. That also applies to having too much data, as we'll see in a bit. Still, getting to start this project with an existing ML model was a big shortcut for me. At this point in the process, I'd only checked off one box: the ML model to use. I had a lot more questions to answer about the hardware I needed to assemble and the software I needed to write.
As you know by now, I did decide on the Raspberry Pi 4. If you're familiar with the Raspberry Pi ecosystem, you'll know the 4 is the most powerful model. It runs Raspberry Pi OS, which is based on the Debian flavor of Linux. It's kind of a power hog, though, which had to be addressed. Now, I effectively solved that problem by powering my Pi with a 30,000 mAh USB power bank. This one is so big it's not legal to fly with. I wanted this to be a slightly more sustainable solution, so I added a 42-watt solar array, which in theory could add just enough juice to my battery pack on sunny days to keep it going a little bit longer. Now, it says 42 watts, but there was a significant amount of loss from the panel to the battery, and you need full sun to take advantage of all those solar cells, but it was enough to extend the lifetime of the project a tiny bit.
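To put rough numbers on that power problem, here is a back-of-the-envelope sketch. The Pi's average draw, the panel's effective yield, and the usable sun-hours are my own illustrative estimates, not measured figures from the project.

```python
# All figures below are illustrative assumptions, not measurements.
battery_mah = 30_000   # 30,000 mAh USB power bank
battery_volts = 5.0    # USB output voltage
pi_watts = 4.0         # assumed average Raspberry Pi 4 draw under light load

battery_wh = battery_mah / 1000 * battery_volts  # stored energy in watt-hours
runtime_hours = battery_wh / pi_watts            # battery-only runtime estimate

# A "42 W" panel rarely delivers its rating: assume 25% effective yield
# over 5 usable sun-hours per day.
panel_watts, yield_factor, sun_hours = 42, 0.25, 5
solar_wh_per_day = panel_watts * yield_factor * sun_hours

extra_hours_per_day = solar_wh_per_day / pi_watts
print(round(runtime_hours, 1), round(extra_hours_per_day, 1))
```

Under these assumptions the bank alone buys a day and a half or so, and the panel stretches that by several hours per sunny day, which lines up with "extend the lifetime a tiny bit."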
Finally, I used a passive infrared (PIR) motion sensor to limit the amount of time I had to power the camera. So, a bird flies to the feeder, and this low-power sensor tells the camera to wake up and snap a picture. At this point, at the risk of taking a detour, I did want to talk briefly about the feasibility of using a Raspberry Pi in a remote setting. Turning a Pi into an edge device can be incredibly useful when you're doing a lot of processing, but it comes at a cost. I actually wrote up a couple of tutorials that I posted on hackster.io. They cover solar-powered crypto mining, which, yes, is ridiculous and dumb, and optimizing a Pi for off-grid power consumption. If you want to learn more about those, head over to my Hackster page at hackster.io/rob-lauer. Now, back to my problems. Since I'd be out of the range of Wi-Fi, I went with the Blues Wireless Notecard to add low-power cellular connectivity.
This is what we do at Blues, so you've already heard me talk about using the Notecard with a Pi. I used the Raspberry Pi HAT that we affectionately call the Notecarrier Pi. It literally plugs into the 40-pin connector on top of every Raspberry Pi. Again, it ended up being great power-wise, since when idle it sips only about eight microamps, so it barely even registers on the Pi in terms of power consumption. I've got the cellular module that can relay data from my Pi, but where does that data go? Well, at Blues Wireless, we have this service called Notehub.io that will route data to any cloud—could be Azure, AWS, Google Cloud, or any IoT SaaS platform. Really, any RESTful endpoint can receive data from Notehub. I kept this simple though; the only thing I wanted to do was send myself a text message. To do that, I used Notehub to relay data to Twilio.
Now I'm starting to get these puzzle pieces of hardware and services together, so what can I start to build here? Well, we can take a look at the high level workflow here. Again, we have a PIR motion sensor that will detect motion from a bird at the feeder with infrared. That's going to wake up the Pi camera module and take a quick picture of whatever's detected. We're going to feed that image into the TensorFlow Lite runtime using Python. If the TF Lite runtime says, “Yep, it's a bird,” we pass that info on to Notehub over cellular. Then Notehub simply tells Twilio, “Hey, send a text message.” I'll show some more code in a second, but I wanted to show off how few lines of Python it took to utilize that TF Lite runtime—not too many. When it was all said and done, the total lines of code for the entire project ended up being about 118. Take out line breaks and comments, and it's easily less than 100. Here's a final look at the deployment. You know, not exactly ready to commercialize, but after assembling it in my basement and writing the necessary Python code and doing some basic testing, I was certainly ready for the big reveal outside.
All right, drumroll please. This is when the magic struck. A bird was spotted in my deployment outside. Awesome, unreal, an 89% match even; this felt amazing. However, remember back when I talked about how broad that bird model was—every bird in the world? That ended up being a critical flaw in v1 of this project. See that scientific name there, Gallus gallus? That is actually a tropical bird, known more commonly as a red jungle fowl, not too likely to be in my backyard here in the States. However, at the very least, I felt pretty good because I knew that at least something was happening that was in the realm of what I could consider success for v1. Now you can find this project along with the other ones on Hackster, if you want to dive into some more of the code and the details, but let's switch gears here a little bit and walk through some of that Python code involved in this project. We can also do a little demo here in my basement office using the Raspberry Pi I've mounted right next to me here.
Okay, so let's briefly look at some of the code here for this birding project. It's all Python and should be pretty intuitive, more or less. In terms of the file structure, quite simple. There's one main script that holds the vast majority of the logic. There are a couple of files here related to the machine learning model. That's the TF Lite file; here is the ML model. This label file is what translates the inference results to the actual name of the bird. I do have some private keys stored here, some very basic information with phone numbers, and a link or a key to my project in Notehub.io. Otherwise, everything important is in our bird.py file. We do some imports, because we need to configure a variety of pieces of hardware here, that being the PIR motion sensor and the Raspberry Pi camera. You'll notice the ML model here anticipates a 224 by 224 image, so a pretty small image size.
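As a rough sketch of how that TF Lite model file and label file come together at inference time, here is an abbreviated version of the flow. This is not the project's exact code: the function names are mine, details like input size and quantization vary by model, and the interpreter portion needs the `tflite-runtime` package plus a real `.tflite` file, while the post-processing helper runs anywhere.

```python
def top_match(scores, labels):
    """Map raw model output scores to the best label and its probability."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best], scores[best]

def classify_image(model_path, labels, image):
    """Sketch of the TF Lite runtime flow (requires tflite-runtime + a model)."""
    from tflite_runtime.interpreter import Interpreter  # pip install tflite-runtime
    interpreter = Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    input_index = interpreter.get_input_details()[0]["index"]
    output_index = interpreter.get_output_details()[0]["index"]
    interpreter.set_tensor(input_index, image)  # e.g. a 1x224x224x3 array
    interpreter.invoke()
    scores = interpreter.get_tensor(output_index)[0]
    return top_match(list(scores), labels)

# Post-processing alone, with made-up scores and labels:
print(top_match([0.05, 0.89, 0.06], ["sparrow", "crow", "junco"]))
```

The label file mentioned above is what supplies the `labels` list, so index 1 in the scores maps back to a human-readable bird name.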
We also need to initialize our Notecard for cellular. I'm going to configure that Notecard to speak to a specific Notehub.io project, and our program is going to communicate with the Notecard over I2C. We’re going to send our first JSON request, that being the hub.set request that is going to configure that Notecard to talk to our Notehub project. We're going to set the cellular mode to periodic to be power conscious and only connect every 120 seconds if data is required to be sent. Since we're working with Twilio, we need to specify a from and to phone number. The from will be from Twilio, the to being my own personal number. We're specifying some paths to local file assets, two of them being the machine learning files, the third being simply a path to where we're going to save those images of birds that come in. Finally, the probability threshold: This is simply a threshold at which I want to be notified that a bird exists at the feeder. I’m setting it very low for our demos here today, at a 40% chance that we have a match, just to make sure that some data comes through. In our main function here, I’m only checking for motion every 30 seconds. I’m simply asking, “Is movement detected at the feeder? If so, run this check_for_bird function.” When we look at this check_for_bird function, we are initializing our interpreter from TensorFlow Lite, we are starting the camera, giving it a couple of seconds to adjust light balance and capture an image, then we are using the classify_image function to actually get the results out. We get the results and stop the camera preview. If the probability returned is greater than that threshold I mentioned, then we can safely say that we have a bird. We can extract the necessary data about the bird and send a Note, or an event, to Notehub that will in turn send us that text message. Now, I’m going to skip over some of this, as this has to do with the TensorFlow Lite runtime.
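Putting those pieces together, the control flow just described can be sketched in a hardware-free form. The sensor, camera, and classifier below are injectable stand-ins of my own invention; the real script wires them up to the PIR sensor, the Pi camera, and the TensorFlow Lite interpreter, and sleeps 30 seconds between polls.

```python
PROBABILITY_THRESHOLD = 0.4  # notify on a 40% match, as in the demo

def check_for_bird(capture_image, classify, on_bird, threshold=PROBABILITY_THRESHOLD):
    """Capture a frame, classify it, and report a bird if confident enough."""
    image = capture_image()
    label, probability = classify(image)
    if probability >= threshold:
        on_bird({"bird": label, "probability": probability})
        return True
    return False

def run(motion_detected, capture_image, classify, on_bird, cycles):
    """One pass per polling cycle; the real script sleeps 30 s between cycles."""
    for _ in range(cycles):
        if motion_detected():
            check_for_bird(capture_image, classify, on_bird)

# Stand-ins for a quick dry run (no hardware required):
events = []
run(
    motion_detected=lambda: True,
    capture_image=lambda: "fake-frame",
    classify=lambda img: ("Junco hyemalis", 0.89),
    on_bird=events.append,
    cycles=1,
)
print(events)
```

In the real project, `on_bird` is where the Note gets composed and handed to the Notecard.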
Let’s skip forward to the most important part, and that is the send_note function. This again is going to send data to Notehub.io, and we are going to compose the body of our Note, or the body of our event, with the bird that was found, the probability match, and the from and to numbers. That's it. This is where our project starts. We run this on a Raspberry Pi every 30 seconds, we grab a picture, and we check whether a bird was recognized by our machine learning model. If so, we send that data to Notehub. Now, to securely sync data stored in Notehub to your cloud application of choice, you use a feature called routes in Notehub. In this case, I'm creating a very simple route that has a general HTTP/HTTPS request response. I added some additional headers to handle my authorization and a content type that's required by the Twilio route, and here's my endpoint URL provided by Twilio. I'm saying only route certain Note files. If you recall from my code, I'm labeling this note file bird.qo.
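Since the route's whole job is to reshape each Note into what Twilio expects, here is an illustrative version of that transformation. Twilio's Messages API takes form fields like `Body`, `From`, and `To`; the phone numbers and message wording below are made up, and the exact transformation in my actual route may differ. The Python function just shows the same mapping the route performs.

```python
# A Note arriving at Notehub might carry a body like this (values invented):
note_body = {
    "bird": "Junco hyemalis oreganus",
    "probability": 0.12,
    "from": "+15550001111",  # Twilio-provided number
    "to": "+15552223333",    # my personal number
}

def to_twilio_params(body):
    """Reshape a Note body into the form fields Twilio's Messages API expects."""
    return {
        "Body": f"Bird spotted: {body['bird']} ({body['probability']:.0%} match)",
        "From": body["from"],
        "To": body["to"],
    }

print(to_twilio_params(note_body)["Body"])
```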
I'm using JSONata, which is a really powerful language for transforming JSON on the fly, so I can convert this into the format that Twilio expects in order to send a text message. That's about it. Then there are some request rate limitations, or I can specify timeouts as well. With this configuration, any data that comes into Notehub will be securely sent to Twilio, and then Twilio can send me a text message. Let's see this in action. All right, let's see if we can pull this off. Again, I've got a Raspberry Pi mounted to my wall here with a Notecarrier Pi HAT and the Notecard, and I have my Pi camera module mounted to the wall. The only thing missing from this setup is the PIR sensor. I wanted to eliminate that complexity from this, just to make sure my demo would work as well as it possibly could. Speaking of the demo, we're going to run this printed image of a crow past it and we'll see how well that works.
Kind of a mockup of our bird feeder scenario here a little bit. Let's see how this works. When I run the script, it should pop up with the image here. Let's see if it’s going to recognize that bird. Happy birds sitting at the feeder. All right, well, we've got something here. We got a 12% probability match that it’s a Junco hyemalis oreganus, if I can pronounce that correctly. If I just stop that there, now we're going to look at this data in Notehub. We can see that my device was last seen recently, and I can go to my events tab here. If I bring this over, we can see that my Junco hyemalis oreganus bird was detected and relayed, so I should be able to grab my phone here. Sure enough, I actually got two messages. I think an extra one was captured; both the Ardenna creatopus and the Junco hyemalis oreganus came through.
Speaker: Rob 24:27
All right, so with that demo, I'd like to turn this now over to Alessandro Grande. Again, Alessandro is Director of Technology at Edge Impulse and a big reason why so many of us new to ML have been successful at creating some really engaging and creative projects.
Speaker: Alessandro Grande - Director of Technology at Edge Impulse
Hi, Rob, thank you for having me on the show. I'm really excited to be here today and talk about how ML and IoT really come together and show you some of the cool things that we at Edge Impulse have been doing in the last few months. First of all, I just wanted to give you a quick introduction and let your audience know who I am. I'm Alessandro Grande. I'm the Director of Technology at Edge Impulse. My background is in embedded software engineering. I was working for ARM, where I spent many years. I was lucky enough to be involved in TinyML and embedded ML from the start a few years ago, when TensorFlow Lite Micro came about. It was really cool to see everything getting started and this traction around really doing more computing at the edge. One of the cool things was that ARM and Google really did a lot of great work together. After a few years of working on this at ARM, I became so passionate about it that I wanted to do more in this space, and that's how I came to learn about Edge Impulse and eventually move to Edge Impulse, where I'm now spending my time. It's really a cool place to be, and the reason for that is that we are the number one developer platform for everything embedded ML. Anything from an ARM Cortex-M microcontroller all the way up to Cortex-A; think Arduino to Raspberry Pi. From there, we also support any other Linux-based platform; for example, we support the NVIDIA Jetson as well. We allow developers to do ML development for any target, as I mentioned, from Cortex-M to Linux-based devices. The cool thing is that what we do at Edge Impulse is really developer tools, empowering engineers all over the world to bring ML to production in real embedded systems. Okay, so I wanted to show you some of the cool stuff that we've done in the last few months. Here we go. Let me share my screen for a second.
Okay, so one of the cool announcements from the last few months is actually the EON Tuner. I'm not sure if you're familiar with it, but this is another piece of the puzzle, I guess. It is “AutoML” that allows developers and engineers to engineer their machine learning pipeline, because it allows you to pick the best combination of feature extraction algorithms and ML models that, combined, could give you the best results that you want. What's really cool is that you can set the constraints that you have in your system, and the tuner will find the best feature extraction and model combination to fit your needs.
Okay, so I'll show you here the docs on the Edge Impulse website. You can see here, docs.edgeimpulse.com/docs/edge-impulse-studio/eon-tuner, and on here you can find a lot of information on what the EON Tuner does. More importantly, you can find two public projects. You can see them here: “Recognize sounds from audio” or “Responding to your voice.” Okay, let's click on “Recognize sounds from audio,” for example. What is a public project, first of all? A public project is the evolution of pre-trained models. Why do I say this? Because pre-trained models are effectively a way for developers to share what they've done with other developers in a format that is usable, because you can use a pre-trained model in your flow. But it doesn't give you the capability of sharing everything from data to models and feature extraction algorithms all in one. A public project is effectively a way of sharing all the work that you've done, from the data collection to the feature extraction to the model, all in one project with everyone you want. What I can do, actually, is clone this project.
It will ask me what name I want to give it. I will call it “Alessandro recognize sound from audio.” It will clone this project. This is my Alessandro project. Now that you've got this project cloned into your workspace (you can see here that it's part of my projects on Edge Impulse itself), you can actually make modifications to it. As usual, you can see on the left all the different steps that you can walk through. First of all, you can pick the device that you want to connect. In this case, we're not picking a specific target, because my objective today is to show you how to go from a public project to your own custom project in your workspace. Even more interestingly, in this public project, I will show you what the EON Tuner is capable of. This project is actually recognizing two different classes. It is recognizing the difference between water noise from a faucet and normal white noise. You can see that, for example, we've collected 13 minutes of data overall, and you can see that there are a bunch of different noises that we've collected. We've collected seven minutes of faucet sounds that are fairly different from each other to give a good example of data here. Then you would move to the Impulse design, where you can actually create your Impulse. In this case, because it's a cloned project, you already have all the blocks chosen for you that were chosen in the prebuilt project. We've chosen our feature extraction algorithm, an MFE block, followed by an NN classifier here. Obviously, you can dig down, as usual, into the different blocks, and you can see the spectrogram of this. You can actually move around throughout the different sounds, and you can see how the spectrogram changes.
Same for the classifier: you can see all the different aspects of how this is built. But what's more interesting is actually what the EON Tuner shows you. You can see here that the EON Tuner is giving you basically different combinations of feature extraction and classifiers to give you the response that you want within your constraints. In this case, the time per inference that we set is 100 milliseconds. We've set a target RAM of 340 kilobytes and a target ROM of 1024 kilobytes, and we're actually running the models here on an M7 clocked at 216 megahertz. You can see that the EON Tuner has actually gone through a bunch of different ML pipelines and gives you a lot of different results. What's really cool here is that it shows you, for example, that this first one is actually doing a lot of DSP. You can see here that it's keeping well within that 100 milliseconds: it’s doing 52 milliseconds of DSP and only 17 milliseconds of NN.
Then you can see various other models or pipelines. This one, for example, is splitting almost half and half between the DSP and NN, and so on. There are some that are doing more DSP and some that are doing more NN. For example, this one is doing way more neural network processing than it is DSP. But the cool thing about all this is that the EON Tuner gives you the ability to search the state space and find the best model and DSP pipeline for your use case. Now, these results here are based on the validation dataset, but you can also look at how the model is performing on the test dataset. You can see here that all three are performing at 100%. The test dataset we have here is quite small; this is just an example, but this allows you to really look at how well your model is performing on your test dataset and potentially on your production dataset.
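The selection logic Alessandro describes can be illustrated with a toy filter: given candidate pipelines with DSP and NN timings plus memory use, keep only the ones that fit the stated constraints (100 ms latency, 340 KB RAM, 1024 KB ROM). The candidate names and numbers here are invented; the real EON Tuner searches, trains, and profiles these pipelines for you rather than filtering a hand-written list.

```python
# Invented candidate pipelines: (name, dsp_ms, nn_ms, ram_kb, rom_kb)
candidates = [
    ("mfe-small-nn", 52, 17, 180, 420),
    ("mfcc-large-nn", 15, 120, 300, 900),  # too slow: 135 ms total
    ("raw-huge-nn", 2, 60, 512, 700),      # too much RAM
]

# The constraints from the demo: latency, RAM, and ROM budgets.
LATENCY_MS, RAM_KB, ROM_KB = 100, 340, 1024

def fits(pipeline):
    """True if a candidate stays within all three budgets."""
    name, dsp, nn, ram, rom = pipeline
    return dsp + nn <= LATENCY_MS and ram <= RAM_KB and rom <= ROM_KB

viable = [p[0] for p in candidates if fits(p)]
print(viable)
```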
As usual, you can then go through retraining and live classification. Finally, you can deploy to all the targets that we have available as a pre-built binary. In the same way, you can also create a library, either a C++ library or an Arduino library, to include in your project. With that, I'm happy to have shown you the latest and greatest parts of our tool and what it does. As always, if you have any questions, or if you want to chat more about this, reach out to me or to Edge Impulse on our forum, and we'll happily answer any of your questions. I really want to thank you, Rob, for having me on the show. It's been a pleasure to share the latest insights about Edge Impulse with your community, and I'm really looking forward to having you and the rest of the Blues Wireless team at Imagine 2021 in September. I'm looking forward to that event and to chatting about IoT and embedded development all coming together. Thank you.
Speaker: Rob 35:20
All right. Hopefully, it's clear how valuable Edge Impulse can be when you're working with machine learning on microcontrollers and single-board computers like the Raspberry Pi, which happens to be a great transition into the second project I wanted to talk about today. To start out, part of the pandemic for me and my family meant filling this newfound time void with walking around the neighborhood. I started to notice that certain roads really do get a lot more traffic, and people are clearly speeding through them. I wanted to stop assuming what I saw was some relatively reckless driving behavior and start proving it. I thought, with the right hardware and services, could I build an IoT project that would do just that? And this, of course, became what I've termed a portable speed trap. This project uses a machine learning model to identify vehicle types, not individual vehicles, so we're not looking at vehicle details or license plates. Just: is it a car, a van, a bike, or a person? This project also uses a Doppler radar sensor. Now, they're not very cheap, but for $250, I was able to get a sensor that could quickly and accurately sense the speed of an object in front of it. I used a Notecard to relay data to the cloud over cellular, and a cloud-based dashboard to pretty up the data and make sense of everything I was sending its way. Now, I can't continue talking about this project without acknowledging some of the awesome comments I've received about it: "Rob seems kind of like a narc" at the top, that's probably my favorite one. Yes, it is kind of a Karen-esque project, I will admit. But I can't emphasize enough that this project does not accumulate any private data. No license plate info and no pictures are saved.
Not even tracking anything like the make, model, or color of a car. It's purely aggregated sensor data. I'll throw that disclaimer out there. So, fast forwarding in time, my initial cobbled-together solution looked something like this spaghetti monstrosity. Again, it uses a Raspberry Pi and a Pi camera module; the radar sensor is what you see dangling down at the bottom. In the upper right is a seven-segment display that just provides some active feedback in the field so I know something is going on and the program hasn't crashed. For this project, I used Edge Impulse to create an object detection model. I ended up supplying my model with about 200-ish images. If I were building a production-ready version of this model, I'd probably want to 10x that number, to be honest. But my model, even with only 200 or so images, had an accuracy of about 87%, which is not too bad. Now, unlike my birding project, this time I did create the model from scratch. To do that, I sat down for about a half-hour or so and took pictures of every car that drove by on the street. And yes, even a cop at one point, which was pretty awesome. You'll notice the box drawn around the vehicle in the image; this is from the Edge Impulse web-based UI. Part of the process of creating your own Edge Impulse object detection model is to label what and where objects are in your training and testing images, and there's a super slick UI provided to do this.
This ended up being my deployment on this road. The speed limit is about 30 miles an hour, but people routinely go considerably faster. Anyway, my assembly was a little cleaner than my test implementation, but still not exactly ready to be commercialized. As for the results, I used Ubidots as my cloud reporting provider, again sending data from the Notecard to our Notehub service and on to Ubidots. I counted a speeding event as going more than five miles per hour over the speed limit. I found that about half the cars were technically speeding, with the fastest clocking in at about 54. Super fun project. You can also read more about this project on my Hackster page. Again, let's take a quick look at the code and do as good of a demo as we can possibly do here in my office. Okay, so here's the code for the speed trap project. A lot of the same concepts are going on as in the birding project, and there's the same general feel: we're initializing some variables for Edge Impulse, and we've got a path to our Edge Impulse-generated model file. We're initializing our seven-segment display and, again, initializing the Notecard. The code you see is a little bit different because in this project I'm actually using the note-python SDK, which provides a fluent API that makes it a little bit easier to configure and work with your Notecard. We're also using GPS location tracking, which is pretty handy with this kind of project so you can actually tell where you are using that GPS module on the Notecard.
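The configuration steps just described (pointing the Notecard at Notehub and enabling GPS) might look roughly like this with note-python. The product UID, sync cadence, and GPS timing below are placeholders, not the webinar's actual values:

```python
# A rough sketch of the Notecard setup described above, assuming the
# note-python library. All specific values here are placeholders.
def notecard_setup_requests(product_uid):
    """Build the JSON requests that associate the Notecard with a
    Notehub project and enable its onboard GPS module."""
    return [
        # Route data to a Notehub project; sync outbound every 5 min.
        {"req": "hub.set", "product": product_uid,
         "mode": "periodic", "outbound": 5},
        # Sample a GPS location periodically (every 300 s here).
        {"req": "card.location.mode", "mode": "periodic", "seconds": 300},
    ]

# On the Raspberry Pi these would be sent over I2C, roughly:
#   import notecard
#   from periphery import I2C
#   card = notecard.OpenI2C(I2C("/dev/i2c-1"), 0, 0)
#   for req in notecard_setup_requests("com.example.rob:speedtrap"):
#       card.Transaction(req)
```

The fluent API wraps the same requests (e.g., `hub.set` becomes `notecard.hub.set(card, ...)`), which is the convenience Rob mentions here.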
Let's fast forward here a little bit. The Edge Impulse runtime is going to return an array of data that we can sort through, so we can actually get the type of vehicle returned by my model, which is going to be something like a car, van, or bike, along with the confidence of the result. For debugging purposes, we print out what it is and how confident we are in the result. Here I'm saying that if the detected vehicle type is a car and we're at greater than or equal to 60% confidence, we can go ahead and get the speed. So we grab the current speed, which is retrieved from this speed.py script here (this all comes from the OPS radar module), and we get the current GPS location from the Notecard. I defaulted to a specific latitude and longitude in this case, in case I wasn't able to get a good GPS reading. There's also some simple logic to check if the vehicle is speeding.
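A minimal sketch of this detect-then-report logic might look like the following. The result shape mirrors what the Edge Impulse Linux runner returns for object detection (a list of bounding boxes, each with a `label` and a confidence `value`), but the helper names, Notefile name, and body field names are my own assumptions, not the webinar's actual code:

```python
import time

def pick_vehicle(bounding_boxes, min_confidence=0.60):
    """Return (label, confidence) for the highest-confidence detection
    at or above the threshold, or None if nothing qualifies."""
    qualifying = [b for b in bounding_boxes if b["value"] >= min_confidence]
    if not qualifying:
        return None
    best = max(qualifying, key=lambda b: b["value"])
    return best["label"], best["value"]

def is_speeding(speed_mph, limit_mph, tolerance_mph=5):
    """A speeding event is more than 5 mph over the posted limit."""
    return speed_mph > limit_mph + tolerance_mph

def build_speed_note(label, confidence, lat, lon, speed_mph, limit_mph):
    """Package one event as a note.add request for Notehub.io; on the
    Pi this would be sent with note-python's card.Transaction(req)."""
    return {
        "req": "note.add",
        "file": "speed.qo",   # assumed outbound Notefile name
        "sync": True,         # push to Notehub right away
        "body": {
            "timestamp": int(time.time()),
            "vehicle": label,
            "confidence": confidence,
            "lat": lat,
            "lon": lon,
            "speed_mph": speed_mph,
            "speed_limit_mph": limit_mph,
            "is_speeding": is_speeding(speed_mph, limit_mph),
        },
    }

boxes = [{"label": "car", "value": 0.87}, {"label": "bike", "value": 0.41}]
print(pick_vehicle(boxes))   # ('car', 0.87)
print(is_speeding(54, 30))   # True
```

The confidence threshold (60%) and the five-mile-per-hour tolerance match the values Rob describes; everything else is illustrative.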
Then I add a Note, an event, to Notehub.io. I send a bunch of data like the timestamp, the percentage confidence, the location, the speed, the current speed limit, and a Boolean for whether or not they are actually speeding. Then we repeat this ad nauseam. Again, it's less than 200 lines of code to run this project from scratch. All right, time for demos round two. Now, if you thought the first demo was a little bit tricky, with holding up a printed picture of a bird in front of our camera in order to fake out an ML model, well, this time we're gonna try to fake it out with a matchbox car, with the added variable of also trying to register speed with our OmniPreSense OPS243 speed sensor. To take one step back, we're using the same Raspberry Pi, the same Notecarrier Pi and Notecard, and the Pi camera module. Now we're going to be pulling in images at about 10 frames per second.
That's a lot of data for the Edge Impulse runtime to handle, so we'll see how that holds up. Let's jump right into it and see what we can do about faking everything out here. If you look at the shell, you'll see that we're initializing our Doppler speed sensor. In order to register a speed, I'm going to start flailing my arms here while holding up a matchbox car. We'll see if we can possibly match a car zooming in here. Okay, we got one there. Let's see if we can get another one. Okay, got another one in. Maybe I can get another one. Just ridiculous. I'm gonna get one more, then I have to quit. Okay, I'll have to be happy with the two, I guess. Well, let's pop over to Notehub here. We can see a device is active. Let's check out our events and take a look at this event. We've got a Note that was captured with 52% confidence, and we've captured a speed of two miles an hour in a zone with a speed limit of 25 miles an hour. So our data is coming through Notehub. Let's see what this looks like in Ubidots. Let's refresh to make sure we're looking at the most recent data.
Sure enough, here's my event. Here's my location on the map along with a confidence level, the speed, and the speed limit. Most importantly, we can head back over to our dashboard and see what our latest data looks like. Now, this is great. The location shows up on the map. Here's our average recorded speed, which is about 1.6 miles an hour. Nobody's ever speeding because we're not able to get up to 25 miles an hour. The fastest speed I was able to record with my flailing arms was about four miles an hour. So a fun little project to do. I'm glad we were able to fake out our system just a tiny bit to see this all in action.
Speaker: Rob 44:07
That's it. Thank you all very much for attending. Take a look at dev.blues.io for a variety of developer resources to learn more about Blues Wireless and the Notecard. Take 15% off any starter kit in the Blues Wireless store at bit.ly/blues-edgeimpulse. If and when you want to learn more about Edge Impulse, head to edgeimpulse.com. Also, there is a big event coming up at the end of September called Imagine that the Edge Impulse folks are putting on. Learn more about that at edgeimpulse.com/imagine.
Thanks again everyone, and have a great rest of your day.