
Forbes: Qualcomm’s Boom Highlights AI Shift To The Edge

Written by Marc Austin | May 30, 2024 10:05:12 PM

I encourage you to read industry observer Scott Raynovich's article about the AI Edge. It was a great read, and it quotes my co-founder Mike Dvorkin.

“We have only scratched the surface of AI as it moves out into verticals, private AI, edge, and distributed cloud. There's more to AI than LLMs and SLMs, and vertical/domain-specific models will dominate the new deployments outside of the large cloud players,” Mike Dvorkin, a cofounder and CTO of cloud networking company Hedgehog, told me in a recent interview. “The opportunity is immense, and it will require new thinking about infrastructure and how it's consumed.”

Wireless + mobile + IoT + cloud infrastructure + edge infrastructure = AI infrastructure

The key market news in Scott's article is Qualcomm's announcement of the Snapdragon® X80 5G Modem-RF System.  Why the heck is a new 5G device radio chip relevant to AI?   

I recall an early Qualcomm chipset in 2000 offering the promise of location-based services for my Mobiquity mobile ride-sharing app. It turned out Mobiquity was about seven years too early: it needed 3G and iPhone infrastructure to be a viable app. That started my journey of building mobile applications for pretty much any device with enough processing power and a network connection. My career evolved into running one of the first smartphone businesses, then circled back to where it started, under a new term: the Internet of Things.

Almost all of the IoT projects we enabled over the years involved some kind of data collection for machine learning. Many also involved AI inference. Tesla Autopilot is a great example, and probably worth a story.

When I was at Jasper, our first smart car customer was Tesla. They used Jasper Control Center (now Cisco IoT Control Center) to add 4G LTE connectivity to their cars. Tesla used this connectivity to collect data like vehicle diagnostics, road conditions and driving activity. The data traveled over the LTE network into a Tesla private cloud data center, where Tesla used it to train their Autopilot self-driving AI model. Once trained, they used the LTE network again to deploy the Autopilot model to compute resources in the car to enable the self-driving feature. Autopilot needs to run in the car because it needs immediate access to new data from vehicle sensors and real-time road conditions to make safe driving decisions. In this situation, the latency of running AI inference back in the cloud quite literally kills. You have to run the inference next to the source of the data. Low-latency inference, and executing actions on that inference, are the primary requirements for AI infrastructure at the data edge.
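
To make the latency point concrete, here is a minimal sketch of why the inference has to live in the car. Everything in it is a hypothetical stand-in (the function names, the sensor fields, the deadline numbers), not Tesla's actual stack; the point is simply that the driving decision runs on local compute against a hard deadline, while training data heads to the cloud off the critical path.

```python
import time

# Illustrative budget only: a driving decision loop has roughly tens of
# milliseconds to act, while a cloud round trip over a cellular network
# can easily cost hundreds.
CONTROL_DEADLINE_MS = 50

def read_sensors():
    """Stand-in for camera, radar and diagnostic inputs."""
    return {"speed_kmh": 80, "obstacle_distance_m": 35.0}

def local_inference(frame):
    """Stand-in for the tuned model deployed to in-car compute."""
    return "brake" if frame["obstacle_distance_m"] < 40 else "cruise"

def apply_action(action):
    """Stand-in for the vehicle actuation interface."""
    print("action:", action)

upload_queue = []

def queue_for_upload(frame):
    # Training data goes to the cloud asynchronously; it is never on the
    # critical path of a driving decision.
    upload_queue.append(frame)

def control_loop(iterations=3):
    for _ in range(iterations):
        start = time.monotonic()
        frame = read_sensors()
        action = local_inference(frame)  # inference next to the data source
        apply_action(action)
        elapsed_ms = (time.monotonic() - start) * 1000
        assert elapsed_ms < CONTROL_DEADLINE_MS, "missed the safety deadline"
        queue_for_upload(frame)

control_loop()
```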

What is the AI Edge?

The AI Edge feels like a frustratingly vague term, but it generally means a location, like a car or a factory, where IoT sensors collect data sets for AI training and fine-tuning. Once training and tuning are complete, the AI Edge becomes the place where you use the tuned models for low-latency AI inference on new data inputs, driving some kind of autonomous action. Often the inference data, or the automated action, comes from the same IoT devices that collected the data for training and fine-tuning.
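
A rough sketch of that lifecycle, using only hypothetical names (EdgeSite, train_in_cloud and the toy "model" are illustrations, not any real product's API):

```python
from dataclasses import dataclass, field

@dataclass
class EdgeSite:
    """A location like a car or a factory that first produces training
    data and later runs the tuned model."""
    name: str
    samples: list = field(default_factory=list)

    def collect(self, reading):
        # Phase 1: IoT sensors at the edge gather data sets for
        # training and fine-tuning.
        self.samples.append(reading)

def train_in_cloud(all_samples):
    # Phase 2: data from many sites is aggregated and a model is
    # trained centrally; the "model" here is just a threshold rule.
    threshold = sum(all_samples) / len(all_samples)
    return lambda x: "act" if x > threshold else "ignore"

# Phase 3: the tuned model is pushed back to the edge, where new sensor
# inputs get low-latency inference and trigger autonomous actions.
site = EdgeSite("factory-7")
for reading in (3.0, 9.0, 6.0):
    site.collect(reading)

model = train_in_cloud(site.samples)
print(model(8.5))  # inference happens next to the device, not in the cloud
```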

We designed the Hedgehog AI network for exactly these kinds of distributed cloud use cases. To make it all work, you need IoT devices connected to GPU clusters in the cloud for training and fine-tuning, and then smaller GPU clusters next to the IoT devices for low-latency inference. Hedgehog is a key piece of the AI infrastructure equation. It will be interesting when the investor community comes to that realization, the same way it suddenly realized Qualcomm is a relevant piece of the AI puzzle.