The edge is growing, and cloud providers know it. That’s why they’re creating more tools to help with embedded programming.
According to IDC’s research, edge computing is growing: 73% of companies surveyed in 2021 said that edge computing is a strategic initiative for them and that they are already making investments to adopt it. Last year in particular saw a lot of that growth, according to Dave McCarthy, research vice president of Cloud and Edge Infrastructure Services at IDC.
Major cloud providers have already realized the technology’s potential and are adding edge capabilities to their toolkits, changing the way developers can build for it.
“AWS was trying to ignore what was happening in the on-premises and edge world thinking that everything would go to the cloud,” McCarthy said. “So they finally kind of realized that in some cases, cloud technologies, the cloud mindset, I think works in a lot of different places, but the location of where those resources are has to change.”
For example, in December 2020, AWS came out with AWS Wavelength, a service that enables users to deliver ultra-low-latency applications for 5G devices. In a way, AWS is embedding parts of its cloud platform inside telco networks such as Verizon’s, McCarthy explained.
Also last year, AWS rewrote Greengrass, its open-source edge runtime, to be friendlier to cloud-native environments. Microsoft is doing the same with its own IoT platform.
“This distribution of infrastructure is becoming more and more relevant. And the good news for developers is it gives them so much more flexibility than they had in the past; flexibility about saying, I don’t have to compromise anymore because my cloud native kind of development strategy is limited to certain deployment locations. I can go all-in on cloud native, but now I have that freedom to deploy anywhere,” McCarthy said.
Development for these types of devices has also significantly changed since its early stages.
At first, the world of embedded systems was about intelligent devices gathering information about the world around them. Then AI was introduced, and all of that acquired data began being processed in the cloud. Now, the world of edge computing is about moving that real-time analysis to the edge itself.
“Where edge computing came in was to marry the two worlds of IoT and AI or just this intelligence system concept in general, but to do it completely autonomously in these locations,” McCarthy said. “Not only were you collecting that data, but you had the ability to understand it and take action, all within that sort of edge location. That opened the door to so many more things.”
In the early days of the embedded software world, every project seemed unique, requiring specialized frameworks and a firm understanding of how to develop for embedded operating systems. That has now changed with the adoption of standardized development platforms, according to McCarthy.
Support for edge deployments
A lot more support for deployments at the edge can now be seen in cloud native and container-based applications.
“The fact that the industry, in general, has started to align around Kubernetes as being the main orchestration platform for being able to do this just means that now it’s easier for developers to think about building applications using that microservices mindset, they’re putting that code in containers with the ability to place those out at the edge,” McCarthy said. “Before, if you were an embedded developer, you had to have this specialized skill set. Now, this is becoming more available to a wider set of developers that maybe didn’t have that background.”
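That container-first workflow can be sketched concretely. Below is a minimal, hedged example using the official Kubernetes Python client to place a containerized microservice onto nodes labeled as edge nodes; the image name, namespace, node label, and resource limits are illustrative assumptions rather than details from any platform mentioned in this article.

```python
# Minimal sketch: deploy a containerized microservice to edge-labeled nodes
# using the official `kubernetes` Python client. Image, label, and namespace
# are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

container = client.V1Container(
    name="edge-inference",
    image="registry.example.com/edge-inference:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        limits={"cpu": "500m", "memory": "256Mi"}  # small footprint for edge hardware
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="edge-inference"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "edge-inference"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-inference"}),
            spec=client.V1PodSpec(
                containers=[container],
                # Schedule only onto nodes labeled as edge nodes (hypothetical label)
                node_selector={"node-role.kubernetes.io/edge": "true"},
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The point of the pattern is that the deployment looks the same whether the target node sits in a cloud region or in a store’s back room; only the node label changes.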
More traditional enterprise vendors, like VMware and Red Hat, have also been looking at how to extend their platforms to the edge. Their strategy, however, has been to take their existing products and figure out how to make them more edge-friendly.
In many cases, that means supporting smaller configurations and handling situations where the edge environment might be disconnected.
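To make that disconnected-operation requirement concrete, here is a minimal store-and-forward sketch in Python: readings are buffered in a local SQLite file and flushed upstream only when connectivity returns. The endpoint URL, table layout, and batch size are assumptions for illustration.

```python
# Store-and-forward sketch for an intermittently connected edge site.
# Readings are buffered locally in SQLite and forwarded when the link returns.
import json
import sqlite3
import time

import requests

UPSTREAM_URL = "https://cloud.example.com/ingest"  # hypothetical endpoint
db = sqlite3.connect("edge_buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, payload TEXT)")

def record(reading: dict) -> None:
    """Always write locally first, regardless of connectivity."""
    db.execute("INSERT INTO readings (payload) VALUES (?)", (json.dumps(reading),))
    db.commit()

def flush() -> None:
    """Attempt to forward buffered readings; keep them if the upload fails."""
    rows = db.execute("SELECT id, payload FROM readings ORDER BY id LIMIT 500").fetchall()
    if not rows:
        return
    try:
        batch = [json.loads(payload) for _, payload in rows]
        requests.post(UPSTREAM_URL, json=batch, timeout=5).raise_for_status()
    except requests.RequestException:
        return  # still disconnected; try again on the next cycle
    db.executemany("DELETE FROM readings WHERE id = ?", [(row_id,) for row_id, _ in rows])
    db.commit()

if __name__ == "__main__":
    while True:
        record({"sensor": "line-3-temp", "value": 71.2, "ts": time.time()})
        flush()
        time.sleep(10)
```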
This is different from the approach of a company like SUSE, whose strategy is to create some edge-specific offerings, according to McCarthy. SUSE, for instance, has created a micro version of its Enterprise Linux that is specifically designed for the edge.
“These are two different ways of tackling the same problem,” McCarthy said. “Either way, I think they’re both trying to attack this from that perspective of let’s create standardization with familiar tools so that developers don’t have to relearn how to do things. In some respects, what you’re doing is abstracting some of the complexity of what might be at the edge, but give them that flexibility of deployment.”
This standardization has proven essential because the further you move toward the edge, the greater the diversity in hardware types. Depending on the types of sensors being dealt with, there can be issues with communication protocols and data formats.
This happens especially in vertical industries such as manufacturing, which already have legacy technology that needs to be brought into this new world, McCarthy said. However, that level of uniqueness is becoming rarer, with less that is truly one-off and more that is standardized.
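As a hedged illustration of that protocol-and-format diversity, the sketch below normalizes readings from two hypothetical sensors, a legacy controller reporting Fahrenheit with millisecond timestamps and a newer device reporting Celsius in seconds, into one common record at the gateway. All field names and unit conventions here are assumptions.

```python
# Normalize heterogeneous sensor payloads into one common record at the edge gateway.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    temperature_c: float
    timestamp_s: float

def from_legacy_plc(payload: dict) -> Reading:
    """Legacy controller: Fahrenheit values, millisecond timestamps, terse keys."""
    return Reading(
        sensor_id=payload["id"],
        temperature_c=(payload["tempF"] - 32) * 5 / 9,
        timestamp_s=payload["ts_ms"] / 1000.0,
    )

def from_modern_sensor(payload: dict) -> Reading:
    """Newer device: already Celsius with second-resolution timestamps."""
    return Reading(
        sensor_id=payload["device"],
        temperature_c=payload["celsius"],
        timestamp_s=payload["timestamp"],
    )

readings = [
    from_legacy_plc({"id": "plc-4", "tempF": 122.0, "ts_ms": 1700000000000}),
    from_modern_sensor({"device": "s-9", "celsius": 49.5, "timestamp": 1700000001}),
]
print(readings)
```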
Development requirements differ
Developing for the edge is different from developing for other form factors because edge devices have a longer lifespan than the equipment found in a data center, something that has always been true in the embedded world. Developers now have to think about the longer lifespan of both the hardware and the software that sits on top of it.
At the same time, though, the fast pace of today’s development world has driven the demand to deliver new features and functionalities faster, even for these devices, according to McCarthy.
That’s why the edge space has seen a proliferation of device management capabilities offered by cloud providers, which give enterprises the ability to turn a device off, update its firmware, or change its configuration.
In addition to managing the device life cycle, device management also helps with security, because it offers guidance on what data to pull back to a centralized location versus what can potentially be left out at the edge.
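The device-management pattern described here can be illustrated with a small desired-state reconciliation sketch. This mirrors the device twin/shadow concept found in cloud IoT platforms, but it is not any specific vendor’s API; the fields and commands are assumptions for illustration.

```python
# Desired-state reconciliation sketch: the edge device applies commands
# (power off, firmware update, config change) pushed from a central control plane.
from dataclasses import dataclass, field

@dataclass
class DeviceState:
    device_id: str
    firmware_version: str
    config: dict = field(default_factory=dict)
    powered_on: bool = True

def apply_desired_state(state: DeviceState, desired: dict) -> DeviceState:
    """Reconcile the reported device state with the desired state from the cloud."""
    if desired.get("powered_on") is False:
        state.powered_on = False  # remote power-down
    if desired.get("firmware_version", state.firmware_version) != state.firmware_version:
        state.firmware_version = desired["firmware_version"]  # stand-in for a real OTA update
    state.config.update(desired.get("config", {}))  # partial configuration changes
    return state

# Example: the control plane pushes a firmware bump and a new sampling interval.
device = DeviceState("camera-07", firmware_version="1.4.2", config={"sample_ms": 500})
device = apply_desired_state(device, {"firmware_version": "1.5.0", "config": {"sample_ms": 250}})
print(device)
```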
“This is so you can get a little bit more of that agility that you’ve seen in the cloud, and try to bring it to the edge,” McCarthy said. “It will never be the same, but it’s getting closer.”
Decentralization a challenge
Developing for the edge still faces challenges due to its decentralized nature, which requires more monitoring and control than a traditional centralized computing model would need, according to Mrudul Shah, CTO of Technostacks, a mobile app development company in the United States and India.
Connectivity issues can cause major setbacks for operations, and often the data that is processed at the edge is not discarded, which leads to unnecessary data piling up, Shah added.
The demand for applications in these different edge environments is also extending the need for developers to consider the requirements of each environment and its particular vertical industry, according to Michele Pelino, a principal analyst at Forrester.
The industry has also seen a lot of device fragmentation, so there is a wide range of vendors saying they can help with an organization’s edge requirements.
“You need to be sure you know what your requirements are first, so that you can really have an apples to apples conversation because they are going to be each of those vendor categories that are going to come from their own areas of expertise to say, ‘of course, we can answer your question,’ but that may not be what you need,” Pelino said.
Currently, for most enterprise use cases for edge computing, commodity hardware and software will suffice. When sampling rates are measured in milliseconds or slower, the norms are low-power CPUs, consumer-grade memory and storage, and familiar operating systems like Linux and Windows, according to Brian Gilmore, director of IoT Product Management at InfluxData, the company behind the open-source time series database InfluxDB.
The analytics here are applied to data and events measured in human time, not scientific time, and vendors building for the enterprise edge are likely able to adapt applications and architectures built for desktops and servers to this new form factor.
“Any developer building for the edge needs to evaluate which of these edge models to support in their applications. This is especially important when it comes to time series data, analytics, and machine learning,” Gilmore said. “Edge autonomy, informed by centralized — currently in the cloud — evaluation and coordination, and right-place right-time task execution in the edge, cloud, or somewhere in between, is a challenge that we, as developers of data analytics infrastructure and applications, take head on.”
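One way to picture the edge-versus-cloud split Gilmore describes is to keep high-rate raw data local and forward only compact summaries upstream. The sketch below uses the open-source influxdb-client package for Python; the URLs, tokens, organizations, and bucket names are placeholders, and the downsampling choice (per-window mean, max, and count) is only one of many possible policies.

```python
# Keep high-rate raw data at the edge; forward only downsampled summaries to a
# central instance. URLs, tokens, orgs, and buckets are placeholder assumptions.
import statistics

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

edge = InfluxDBClient(url="http://localhost:8086", token="EDGE_TOKEN", org="edge-org")
cloud = InfluxDBClient(url="https://cloud.example.com", token="CLOUD_TOKEN", org="central-org")
edge_write = edge.write_api(write_options=SYNCHRONOUS)
cloud_write = cloud.write_api(write_options=SYNCHRONOUS)

def ingest_window(sensor_id: str, samples: list[float]) -> None:
    # Raw samples stay at the edge for local, low-latency analytics.
    for value in samples:
        edge_write.write(
            bucket="raw-local",
            record=Point("vibration").tag("sensor", sensor_id).field("value", value),
        )

    # Only a compact summary of the window crosses the network to the cloud.
    summary = (
        Point("vibration_1m")
        .tag("sensor", sensor_id)
        .field("mean", statistics.fmean(samples))
        .field("max", max(samples))
        .field("count", len(samples))
    )
    cloud_write.write(bucket="summaries", record=summary)
```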
No two edge deployments the same
An edge architecture deployment requires comprehensive monitoring, careful planning, and strategy, as no two edge deployments are the same. It is next to impossible to get IT staff to a physical edge site, so deployments should be designed for remote configuration and provide resilience, fault tolerance, and self-healing capabilities, Technostacks’ Shah explained.
In general, a lot of the requirements that developers need to account for will depend on the environment that edge use case is being developed for, according to Forrester’s Pelino.
“It’s not that everybody is going in one specific direction when it comes to this. So you sort of have to think about the individual enterprise requirements for these edge use cases and applications with their developer approach, and sort of what makes sense,” Pelino said.
To get started with their edge strategy, organizations need to first make sure that they have their foundation in place, usually starting with their infrastructure, IDC’s McCarthy explained.
“So it means making sure that you have the ability to place applications where you need so that you have the management and control planes to address the hardware, the data, and the applications,” McCarthy explained.
Companies also need to build that foundation with future expansion in mind as the technology becomes even more prevalent.
“Start with the use cases that you need to address for analytics, for insight for different kinds of applications, where those environments need to be connected and enabled, and then say ok, these are the types of edge requirements I have in my organization,” Forrester’s Pelino said. “Then you can speak to your vendor ecosystem about do I have the right security, analytics, and developer capabilities in-house, or do I need some additional help?”
When adopted correctly, edge environments can provide many benefits.
Low latency is one of the key benefits of computing at the edge, along with the ability to run AI and ML analytics in locations where that might not have been possible before, which can also cut costs by not sending everything to the cloud.
At the edge, data collection speeds can approach the near-continuous output of analog-to-digital signal conversion, millions of values per second, and maintaining that precision is key to many advanced use cases in signal processing and anomaly detection. In theory, this requires specific hardware and software considerations: FPGAs, ASICs, DSPs, and other custom processors, highly accurate internal clocks, hyper-fast memory, real-time operating systems, and low-level programming that eliminates internal latency, InfluxData’s Gilmore explained.
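A scaled-down illustration of that kind of edge-side signal analysis is a rolling z-score detector: the device keeps a window of recent samples and flags only outliers, so the raw stream never has to leave the hardware. The window size and threshold below are arbitrary assumptions, and a production system on the hardware Gilmore mentions would implement this far closer to the metal.

```python
# Rolling z-score anomaly detection over a fixed window of recent samples.
# Only flagged samples would need to be reported upstream.
import math
from collections import deque

WINDOW = 1024      # recent samples kept in memory
THRESHOLD = 4.0    # flag samples more than 4 standard deviations from the window mean

window = deque(maxlen=WINDOW)

def is_anomaly(sample: float) -> bool:
    """Return True when the sample deviates strongly from the recent window."""
    anomalous = False
    if len(window) == WINDOW:
        mean = sum(window) / WINDOW
        std = math.sqrt(sum((x - mean) ** 2 for x in window) / WINDOW)
        anomalous = std > 0 and abs(sample - mean) / std > THRESHOLD
    window.append(sample)
    return anomalous

# Example: a sudden spike in an otherwise steady signal is flagged locally.
for i, value in enumerate([0.01 * (n % 7) for n in range(2000)] + [5.0]):
    if is_anomaly(value):
        print(f"anomaly at sample {i}: {value}")
```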
Despite popular opinion, the edge is beneficial for security
Security has come up as a key challenge for edge adoption because there are more connected assets that contain data, and there is also an added physical avenue through which those devices can be hacked. But the edge can also improve security.
“You see people are concerned about the fact that you’re increasing the attack surface, and there’s all of this chance for somebody to insert malware into the device. And unfortunately, we’ve seen examples of this in the news where devices have been compromised. But, there’s another side of that story,” IDC’s McCarthy said. “If you look at people who are concerned about data sovereignty, like having more control about where data lives and limiting the movement of data, there is another storyline here about the fact that edge actually helps security.”
Security comes into play at many different levels of the edge environment. It is necessary at the point of connecting the device to the network, at the data and analytics layer in terms of ensuring who gets access to the insights, and at the level of the device itself, Forrester’s Pelino explained.
Also, these devices now operate in global ecosystems, so organizations need to determine whether they meet the regulatory requirements of each region.
Security capabilities to address many of these concerns are now coming from the different cloud providers, and chipset manufacturers also offer different levels of security in their components.
In edge computing, any data traversing the network back to the cloud or data center can also be secured through encryption against malicious attacks, Technostacks’ Shah added.
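As a hedged sketch of that idea, the example below encrypts a reading at the edge with the cryptography package’s Fernet tokens (AES-based and authenticated) before it is sent upstream. In practice, TLS or mTLS is the baseline for data in transit; this shows payload-level encryption layered on top, with key handling simplified for illustration.

```python
# Payload-level encryption of an edge reading before it traverses the network.
import json

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real deployment, provisioned and stored securely
cipher = Fernet(key)

reading = {"sensor": "gateway-12", "temp_c": 48.7, "ts": 1700000000}
token = cipher.encrypt(json.dumps(reading).encode("utf-8"))

# The opaque token is what travels back to the cloud or data center.
print(token[:32], b"...")

# The receiving side, holding the same key, recovers the original payload.
restored = json.loads(cipher.decrypt(token).decode("utf-8"))
assert restored == reading
```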
What constitutes edge is now expanding
Edge computing is now expanding into areas such as autonomous driving, real-time insight into what’s going on in a plant or a manufacturing environment, or even what’s happening with critical systems in buildings or in spaces such as transportation and logistics, according to Pelino. It is growing in any business that has a real-time need or has distributed operations.
“When it comes to the more distributed operations, you see a lot happening in retail. If you think about typical physical retailers that are trying to close that gap between the commerce world, they have so much technology now being inserted into those environments, whether it’s just the point of sale system, and digital signage, and inventory tracking,” IDC’s McCarthy said.
The edge is being applied to new use cases as well. For example, Auterion builds drones that it can then give to fire services. Whenever there’s a fire, a drone immediately captures and sends back footage of what is happening in the area before the fire department gets there, indicating what kind of fire to prepare for and scanning whether there are any people inside. Another new edge use case is Boeing’s unmanned MQ-25 aircraft, which can connect with a fighter jet autonomously at over 500 miles per hour.
“While edge is getting a lot of attention, it is still not a replacement for cloud or other computing models; it’s really a complement,” McCarthy said. “The more that you can distribute some of these applications and the infrastructure underneath, it just enables you to do things that maybe you were constrained on before.”
Also, with remote work on the rise and businesses aggressively accelerating their use of digital services, edge computing is imperative for a cheaper and more reliable data processing architecture, according to Technostacks’ Shah.
Companies are seeing benefits in moving to the edge
Infinity Dish
Infinity Dish, which offers satellite television packages, has adopted edge computing in the wake of the transition to the remote workplace.
“We’ve found that edge computing offers comparable results to the cloud-based solutions we were using previously, but with some added benefits,” said Laura Fuentes, operator of Infinity Dish. “In general, we’ve seen improved response times and latency during data processing.”
Further, by processing data on a local device, Fuentes added that the company doesn’t need to worry nearly as much when it comes to data leaks and breaches as it did using cloud solutions.
Lastly, the transmission costs were substantially less than they would be otherwise.
However, Fuentes noted that there were some challenges with the adoption of edge.
“On the flip side, we have noticed some geographic discrepancies when attempting to process data. Additionally, we had to put down a lot of capital to get our edge systems up and running—a challenge not all businesses will have the means to solve,” Fuentes said.
Memento Memorabilia
Kane Swerner, the CEO and co-founder of Memento Memorabilia, said that as her company began implementing edge throughout the organization, hurdles and opportunities began to emerge.
Memento Memorabilia is a company that offers private signing sessions to guarantee authentic memorabilia from musicians, celebrities, actors, and athletes to fans.
“We can simply target desired areas by collaborating with local edge data centers without engaging in costly infrastructure development,” Swerner said. “To top it all off, edge computing enables industrial and enterprise-level companies to optimize operating efficiency, improve performance and safety, automate all core business operations, and guarantee availability most of the time.”
However, she said that one significant worry regarding IoT edge computing devices is that they might be exploited as an entrance point for hackers. Malware or other breaches can infiltrate the whole network via a single weak spot.
There are 4 critical markers for success at the edge
A recent report by Wind River, a company that provides software for intelligent connected systems, found that there are four critical markers for successful intelligent systems: true compute on the edge, a common workflow platform, AI/ML capabilities, and ecosystems of real-time applications.
The report “13 Characteristics of an Intelligent Systems Future” surveyed technology executives across various mission-critical industries and revealed the 13 requirements of the intelligent systems world for which industry leaders must prepare. The research found that 80% of these technology leaders desire intelligent systems success in the next five years.
True compute at the edge, by far the largest of the characteristics in the survey at 25.5% of the total share, is the ability of devices to function fully in a near-latency-free mode on the farthest edge of the cloud, such as on a 5G network, in an autonomous vehicle, or in a highly remote sensor in a factory system.
The report stated that by 2030, $7 trillion of the U.S. economy will be driven by the machine economy, in which systems and business models increasingly engage in unlocking the power of data and new technology platforms. Intelligent systems are helping to drive the machine economy and more fully realize IoT, according to the report.
Sixty-two percent of technology leaders are putting into place strategies to move to an intelligent systems future, and 16% are already committed, investing, and performing strongly. It’s estimated that this 16% could realize at least four times higher ROI than their peers who are equally committed but not organized for success in the same way.
The report also found that the two main challenges for adopting an intelligent systems infrastructure are a lack of skills in this field and security concerns.
“So when we did the simulation work with about 500 executives, and said, look, here are the characteristics, play with them, we got like 4,000-plus simulations, things like common workflow platform, having an ecosystem for applications that matter, were really important parts of trying to break that lack of skill or lack of human resource in this journey,” said Michael Gale, Chief Marketing Officer at Wind River.
For some industries, the move to edge is essential for digital transformation, Gale added.
“Digital Transformation was an easy construct in finance, retail services business. It’s really difficult to understand in industrial because you don’t really have to have a lot of humans to be part of it. It’s a machine-based environment,” Gale said. “I think it’s a realization intelligence systems model is the transformation moment for the industrial sector. If you’re going to have a full lifecycle intelligence systems business, you’re going to be a leader. If you’re still trying to do old things, and wrap them with intelligent systems, you’re not going to succeed, you have to undergo this full transformational workflow.”