AI on the Edge: What it is, and Data Storage Challenges it Poses

AI on the edge refers to doing the AI processing and equations at the site of the object collecting the data.

“AI on the edge” is not a term I was familiar with when I listened to the podcast, “A Conversation with Marc Staimer of Dragon Slayer Consulting,” where Marc was being interviewed by BrainChip Vice President Rob Telson. To be perfectly honest, I was most interested in the fact that Marc’s business was called “Dragon Slayer” Consulting – I was thinking that if he really wanted to do some wicked “AI on the edge”, he should team up with the DethWench!

But in all seriousness, by the end of the podcast, I was thinking maybe he should team up with me in real life. This is because this thing called “AI on the edge” is basically a huge data storage problem I didn’t know about until I heard the podcast. Let me explain the problem as I understand it from what Marc said, then provide what I believe could theoretically solve it.

AI on the Edge: What Does that Mean?

I’ll explain to you what I understand “AI on the edge” to be after listening to Marc explain it in the podcast. Imagine you have a highly technological object that is producing a lot of data that can be collected – like a Tesla car. The Tesla will have data coming from sensors (e.g., cooling system, speedometer, fuel use), and also from video (e.g., back of car, driver’s view). You will undoubtedly have other data streams, but those are a few that come to mind. (Personally, I take public transportation and walk…)

The trick is that those data are all produced at the Tesla, and are stuck there until they are moved. The data are stuck at the physical car. I would have called that “local” data, but Marc calls it “data on the edge” – meaning that the data are stuck at the site where the data sources are.

Now let’s say you want to use those data in an artificial intelligence (AI) equation – say, the airbag needs to know whether or not to deploy. Obviously, you need to take that “data on the edge” and do your “AI on the edge”. Marc’s point is that you really don’t have time to send the data back to some warehouse, grind them through a little extract-transform-load (ETL) protocol, serve them up to the AI engine, get the decision result from the AI engine (deploy or not deploy), and send that back to the Tesla’s airbag! It would all be over by then! (Which is why I take public transportation and walk…)
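Just to show how hopeless that round trip is, here is a quick back-of-envelope calculation in Python. Every millisecond figure in it is an illustrative assumption of mine – not a measurement from a real car or a real network – but the conclusion holds even if you quibble with the exact numbers.

```python
# Back-of-envelope latency budget for the airbag decision.
# All millisecond figures below are illustrative assumptions, not measurements.

DECISION_BUDGET_MS = 30    # assumed: the airbag decision must happen within ~30 ms

edge_path_ms = 5           # assumed: inference on hardware inside the car

# Assumed cloud round trip: uplink + warehouse ETL + inference + downlink
cloud_path_ms = 80 + 40 + 10 + 80

print(f"Edge path:  {edge_path_ms} ms  -> within budget: {edge_path_ms <= DECISION_BUDGET_MS}")
print(f"Cloud path: {cloud_path_ms} ms -> within budget: {cloud_path_ms <= DECISION_BUDGET_MS}")
```

Under these assumptions, the edge path fits in the budget with room to spare, and the cloud round trip blows past it several times over.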

Solving the “AI on the Edge” Data Storage Problem

I could tell from listening to Marc that he was expecting people like me – data warehouse designers – to solve the data storage problem. In my opinion, he sounded a little exasperated by the problem. I admit, it sounded unsolvable when he was talking about it! Think of all that data on the Tesla. Even if you could store it all locally – er, “on the edge” – how could you do the minimum necessary ETL and feed it into an AI equation while you were having a car accident, and have it return a result in time for your airbag to intelligently deploy? I started to have more appreciation for the recent Tesla accidents we hear about that people suspect are due to faulty AI.

Theoretical Solution to the “AI on the Edge” Data Storage Problem

Well, here is my theoretical solution. Let me describe it in two diagrams. Here is the first one.

This diagram shows how video feed data could be scraped off and added to a local data closet without maxing out space.

Let me explain this diagram for you. Here, we are having a car accident. The blue car on the right is the Tesla. I have labeled four data sources: two sensors in the car (Sensor 1 and Sensor 2), and two video feeds (Video 1 and Video 2).

In this diagram, we will concentrate on just one of those data sources – Video 2. Let us think about video data, and how it moves through the pipeline in packets. I made a dotted line because I want you to imagine each packet marching out of the Video 2 source one by one. The packet gets to a routing mechanism, which decides whether or not any data from Video 2 needs to be included in an AI equation. Basically, I assume all of the sources will have at least some data that is needed for AI on the edge, but each piece of data will need to be evaluated, because you only want to keep the data needed for the AI equation.

The data that are not needed for the equation can be lazily sent to a warehouse off site. But the data that are needed are copied off and go through the ETL required to reshape them for the “data closet” (I think this is what Marc meant when he said that term in the podcast). I think we are using the term “closet” because it is little, and we really don’t have much storage space at the physical car. (Although I wish he’d have used another word, because it is Pride month right now, and we really do not need our data to be in the closet!)
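To make the routing concrete, here is a minimal Python sketch of what that per-packet routing mechanism might look like. Everything in it is hypothetical – the needed_for_ai() predicate, the etl_reshape() function, and the field names are stand-ins for whatever the real on-car software would actually use.

```python
from queue import Queue

closet_queue = Queue()     # ETL-prepared data headed for the local data closet
warehouse_queue = Queue()  # data lazily shipped to the off-site warehouse

def needed_for_ai(packet: dict) -> bool:
    """Assumed predicate: does the AI equation use this piece of data?"""
    return packet.get("feature") in {"closing_speed", "impact_g", "object_class"}

def etl_reshape(packet: dict) -> dict:
    """Assumed minimal ETL: keep only the fields the AI engine expects."""
    return {"t": packet["timestamp"], "feature": packet["feature"], "value": packet["value"]}

def route(packet: dict) -> None:
    # Each packet marching out of Video 2 is evaluated one by one.
    if needed_for_ai(packet):
        closet_queue.put(etl_reshape(packet))  # keep locally for the equation
    else:
        warehouse_queue.put(packet)            # no rush; send off site lazily

route({"feature": "impact_g", "timestamp": 0.013, "value": 47.0})    # -> closet
route({"feature": "cabin_temp", "timestamp": 0.013, "value": 21.5})  # -> warehouse
```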

I call this a “Monte Carlo data closet”. I guess I am creative. I call it that because I envision doing essentially Monte Carlo sampling on the fly. A baby dataset of incoming ETL data to feed the AI algorithm is added to the closet, while old data of the same size are stripped off the closet and sent to the data warehouse. That way, we don’t run out of room in the closet. (Note: This is not my actual clothing closet, which has no room left!)
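Here is a minimal sketch of the closet itself, assuming a simple first-in-first-out eviction rule. The class is my own invention for this post, not an established library, so treat the names and the capacity as placeholders.

```python
import random
from collections import deque

class MonteCarloDataCloset:
    """Fixed-size store of recent ETL'd rows; evicted rows go off site."""

    def __init__(self, capacity: int, warehouse_queue):
        self.capacity = capacity
        self.rows = deque()
        self.warehouse_queue = warehouse_queue

    def add(self, row: dict) -> None:
        self.rows.append(row)
        # When a new baby dataset pushes us past capacity, strip old rows
        # off the closet and send them to the warehouse, so we never run
        # out of room at the physical car.
        while len(self.rows) > self.capacity:
            self.warehouse_queue.put(self.rows.popleft())

    def snapshot(self, n: int) -> list:
        """Monte Carlo sampling on the fly: a random subset for the AI engine."""
        return random.sample(list(self.rows), min(n, len(self.rows)))
```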

Then, the refreshed Monte Carlo dataset is sent from Video 2’s data closet to the AI engine (which is “on the edge”) for the equation. The result comes out – deploy airbag or don’t deploy airbag – and is sent off to the AI result warehouse for safekeeping. That way, we can go to the warehouse and recreate anything we need later, after the accident.

Envisioning my Solution to the “AI on the Edge” Data Storage Problem

I wanted to make a second diagram for you to compare to the first one, so you can see what is specific to the data source (Video 2) and what is shared by all the data sources. In this one, instead of the data from Video 2, we follow the data from Sensor 1.

This second diagram shows another data feed coming from a sensor in the car.

Sensor 1 has its own data closet that just holds the ETL-prepared data needed from Sensor 1 for the AI engine. But as you can see, the AI engine is using data from the data closets of all the sources. This solves the issue that Marc was pointing out, which is getting the AI result quickly and locally so the decision can be acted on right away.
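Here is one last hypothetical sketch tying it together: the shared AI engine pulls a snapshot from every source’s closet, makes the decision, and ships the result off site for safekeeping. It builds on the MonteCarloDataCloset sketch above, and the stand-in decision rule (a 40 g threshold on Sensor 1) is a placeholder, not a real airbag algorithm.

```python
def decide_airbag(closets: dict, result_warehouse_queue) -> str:
    """closets maps a source name (e.g., "sensor_1", "video_2") to its
    MonteCarloDataCloset from the sketch above."""
    features = {}
    for source_name, closet in closets.items():
        rows = closet.snapshot(n=32)
        if rows:
            # assumed aggregation: average each source's recent values
            features[source_name] = sum(r["value"] for r in rows) / len(rows)

    # Stand-in for the AI equation: a placeholder threshold, not a real model.
    decision = "deploy" if features.get("sensor_1", 0) > 40 else "no_deploy"

    # Ship the result to the off-site AI result warehouse for safekeeping,
    # so everything can be recreated and audited after the accident.
    result_warehouse_queue.put({"decision": decision, "features": features})
    return decision
```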

Marc was concerned about the AI engine having the data available and being able to chew through it locally fast enough so that a decision could be acted upon. I agree with Marc, but my concern is that any local solution won’t be replicable. This is solved in my design by the off-site warehouses, which store the data and results so they can be retrieved in the future.

Updated June 5, 2022. Added banner October 26, 2022. Revised banners July 10, 2023.
