In late 2022, ChatGPT had its “iPhone moment” and quickly became the poster child of the Gen AI movement after going viral within days of its release. For LLMs’ next wave, many technologists are eyeing a different kind of opportunity: going small and hyper-local.

The core factors driving this next big shift are familiar ones: a better customer experience tied to our expectation of immediate gratification, and more privacy and security baked into user queries. Keeping those queries within smaller, local networks – on the devices we hold in our hands, or in our cars and homes – avoids the round trip to data server farms in the cloud and back, along with the inevitable lag that comes with it.

While there are doubts about how quickly local LLMs could catch up with GPT-4’s capabilities – such as its reported 1.8 trillion parameters across 120 layers running on a cluster of 128 GPUs – some of the world’s best-known tech innovators are working on bringing AI “to the edge” so that new services like faster, more intelligent voice assistants, localized computer imaging that rapidly produces image and video effects, and other types of consumer apps can run on the devices themselves.

For example, Meta and Qualcomm announced in July that they have teamed up to run big AI models on smartphones. The goal is to enable Meta’s new large language model, Llama 2, to run on Qualcomm chips in phones and PCs starting in 2024. That promises new LLMs that can avoid the cloud’s data centers, whose massive data crunching and computing power is both costly and becoming a sustainability eyesore for big tech companies – one of the budding AI industry’s “dirty little secrets” in the wake of climate-change concerns and the other natural resources it consumes, such as water for cooling.

The challenges of Gen AI running on the edge

Like the path we’ve seen for years with many types of consumer technology devices, we’ll most certainly see more powerful processors and memory chips with smaller footprints, driven by innovators such as Qualcomm. The hardware will keep evolving, following Moore’s Law. On the software side, there has also been a lot of research, development, and progress in how to miniaturize and shrink neural networks so they fit on smaller devices such as smartphones, tablets, and computers.

Neural networks are big and heavy. They consume huge amounts of memory and need a lot of processing power to execute, because they consist of many equations involving multiplications of matrices and vectors – similar in some ways to how the human brain is designed to think, imagine, dream, and create.
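To make the matrices-and-vectors point concrete, here is a minimal sketch in Python with NumPy (the layer sizes are illustrative, not taken from any particular model) of a single fully connected layer, which boils down to one big matrix-vector multiplication plus an activation:

```python
import numpy as np

# A single fully connected layer: y = activation(W @ x + b).
# A large model is mostly stacks of weight matrices like W,
# which is why memory and compute add up so quickly.
rng = np.random.default_rng(0)

input_size, output_size = 4096, 4096
W = rng.standard_normal((output_size, input_size)).astype(np.float32)  # weights
b = np.zeros(output_size, dtype=np.float32)                            # biases
x = rng.standard_normal(input_size).astype(np.float32)                 # input vector

y = np.maximum(W @ x + b, 0.0)  # matrix-vector multiply plus a ReLU activation

# Each float32 weight costs 4 bytes, so this one layer alone is ~67 MB.
print(f"weights for this single layer: {W.nbytes / 1e6:.0f} MB")
```

Multiply that by dozens of layers and billions of weights, and it becomes clear why these models strain the memory and compute budgets of a phone.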

There are two approaches broadly used to reduce the memory and processing power required to deploy neural networks on edge devices: quantization and vectorization.

Quantization means converting floating-point arithmetic into fixed-point arithmetic, which is more or less like simplifying the calculations being made: where floating-point performs calculations with decimal numbers, fixed-point performs them with integers. This lets neural networks take up less memory, since floating-point numbers occupy four bytes while fixed-point numbers generally occupy two or even just one byte.
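As a rough illustration of the idea – a simplified sketch, not a production quantization pipeline – the snippet below maps float32 weights to int8 using a single symmetric scale factor and then converts them back for use, cutting the storage for those weights by a factor of four:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric linear quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0           # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for use at inference time."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)

q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)

print(f"float32 size: {w.nbytes / 1e6:.1f} MB, int8 size: {q.nbytes / 1e6:.1f} MB")
print(f"max absolute error: {np.abs(w - w_approx).max():.4f}")
```

Real toolchains are more sophisticated – per-channel scales, calibration data, quantization-aware training – but the memory math is the same: one byte per weight instead of four.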

Vectorization, in turn, uses special processor instructions to execute one operation over several pieces of data at once (Single Instruction, Multiple Data, or SIMD, instructions). This speeds up the mathematical operations performed by neural networks, because additions and multiplications can be carried out on several pairs of numbers at the same time.
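A rough way to feel the difference – here via NumPy, which dispatches array operations to optimized routines that use SIMD instructions under the hood – is to compare a one-pair-at-a-time loop with the equivalent vectorized call; the exact speedup depends on the hardware and libraries:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000).astype(np.float32)
b = rng.standard_normal(1_000_000).astype(np.float32)

# Scalar version: one multiply-add per loop iteration.
start = time.perf_counter()
total = 0.0
for i in range(len(a)):
    total += a[i] * b[i]
scalar_time = time.perf_counter() - start

# Vectorized version: the whole arrays are handed to optimized routines
# that process many pairs of numbers per instruction.
start = time.perf_counter()
total_vec = np.dot(a, b)
vector_time = time.perf_counter() - start

print(f"scalar: {scalar_time:.3f}s  vectorized: {vector_time:.5f}s")
```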

Other approaches gaining ground for running neural networks on edge devices include the use of Tensor Processing Units (TPUs) and Digital Signal Processors (DSPs), which are processors specialized in matrix operations and signal processing, respectively, as well as pruning and low-rank factorization techniques, which analyze the network and remove parts that don’t make a relevant difference to the result.
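As a hedged sketch of those last two ideas, the snippet below applies simple magnitude pruning (zeroing the smallest weights) and a low-rank factorization via truncated SVD to a single weight matrix; real systems choose what to prune and which rank to keep based on accuracy measurements, which this toy example skips:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512)).astype(np.float32)

# Pruning: zero out the smallest-magnitude weights (here, the bottom 80%).
threshold = np.quantile(np.abs(W), 0.80)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)
print(f"weights kept after pruning: {np.count_nonzero(W_pruned) / W.size:.0%}")

# Low-rank factorization: approximate W (512 x 512) with two thin
# matrices A (512 x r) and B (r x 512), where r is much smaller than 512.
r = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * S[:r]   # fold the singular values into the left factor
B = Vt[:r, :]
print(f"parameters: {W.size} -> {A.size + B.size}")
```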

Taken together, these techniques for shrinking and accelerating neural networks could make it possible to have Gen AI running on edge devices in the near future.

The killer applications that could be unleashed soon 

Smarter automations

By combining Gen AI running locally – on devices or within networks in the home, office or car – with the various IoT sensors connected to them, it will be possible to perform data fusion on the edge. For example, smart sensors paired with devices could listen to and understand what’s happening in your environment, creating an awareness of context and enabling intelligent actions to happen on their own – such as automatically turning down music playing in the background during incoming calls, turning on the AC or heat if it becomes too hot or cold, and other automations that occur without a user programming them.
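To make that less abstract, here is a toy sketch of the kind of rule an on-device model could trigger; the device classes and event names are hypothetical placeholders, not a real smart-home API:

```python
# Toy context-aware automation on a local hub. Everything below is a
# hypothetical stand-in for whatever a real on-device assistant exposes.

class Speaker:
    def __init__(self):
        self.volume = 0.8
        self.playing = True

class Thermostat:
    def __init__(self):
        self.mode = "off"

def handle_event(event: dict, speaker: Speaker, thermostat: Thermostat) -> None:
    """React to locally detected context without a round trip to the cloud."""
    if event["type"] == "incoming_call" and speaker.playing:
        speaker.volume = 0.2          # duck the background music during the call
    elif event["type"] == "temperature" and event["celsius"] > 28:
        thermostat.mode = "cool"      # too hot: switch on the AC
    elif event["type"] == "temperature" and event["celsius"] < 16:
        thermostat.mode = "heat"      # too cold: switch on the heat

speaker, thermostat = Speaker(), Thermostat()
handle_event({"type": "incoming_call"}, speaker, thermostat)
print(speaker.volume)  # 0.2
```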

Public safety 

From a public-safety perspective, there’s a lot of potential to improve on what we have today by connecting the growing number of sensors in our cars to sensors in the streets, so they can intelligently communicate and interact with us over local networks connected to our devices.

For example, for an ambulance trying to reach a hospital with a patient who needs urgent care to survive, a connected intelligent network of devices and sensors could automate traffic lights and in-car alerts to make room for the ambulance to arrive on time. This type of connected, smart system could be tapped to “see” and alert people if they are too close together in the case of a pandemic such as COVID-19, or to understand suspicious activity caught on networked cameras and alert the police. 

Telehealth 

Extending the Apple Watch model to LLMs that can monitor health and provide initial advice, smart sensors with Gen AI on the edge could make it easier to identify potential health issues – from unusual heart rates and increased temperature to sudden falls followed by limited or no movement. Paired with video surveillance for those who are elderly or sick at home, Gen AI on the edge could be used to send urgent alerts to family members and physicians, or to provide healthcare reminders to patients.

Live events + smart navigation

IoT networks paired with Gen AI at the edge have great potential to improve the experience at live events such as concerts and sports in big venues and stadiums. For those without floor seats, the combination could let them tap into a networked camera and watch the live event from a particular angle and location, or even instantly re-watch a moment or play, much as you can today with a TiVo-like recording device paired with your TV.

That same networked intelligence in the palm of your hand could help navigate large venues – from stadiums to retail malls – so visitors can find where a specific service or product is available within that location simply by asking for it.

While these new innovations are at least a few years out, there’s a sea change ahead of us: valuable new services that can be rolled out once the technical challenges of shrinking LLMs for use on local devices and networks have been addressed. Between the added speed and boost in customer experience, and the reduced privacy and security concerns of keeping it all local versus the cloud, there’s a lot to love.