While the first thing that comes to mind when picturing the near-future applications of augmented reality might be snazzier Snapchat filters, the major players in AR development see serious potential for the technology in business environments, with a definite roadmap and use cases that have already been demonstrated.

Hardware manufacturer Stanley Black & Decker, for instance, has used Meta’s proprietary “immersive AR” headset for training and to display schematics of high-value equipment right before its engineers’ eyes during repairs. This sort of real-time 3D informational visualization is a major focus for developers of enterprise AR who hope to make the technology a common workplace tool, Meta’s chief revenue officer Joe Mikhail said.

“We’ve been at it for several years, and generation over generation, we have identified the general productivity needs in the workforce so that every professional in the office can use [AR], which is 3D presentation,” Mikhail said. “So we can all tell stories and transfer knowledge by actually building experiences that are immersive and interactive, so you and I across the world can have a full-blown experience where we share ideas on other products or concepts and we can both manipulate and touch them and have a real conversation in real time. That is our first-party generalizable application, which we’re seeing a lot of engagement from the market around.”

Mikhail says the company has whittled down the list to the use cases most sought after by enterprises looking to bring AR experiences into their work environments.

Three key use cases
“The first of the three use cases we see that are general for the entire market is sales and marketing presentations,” Mikhail said. “It changes how we sell products and it improves conversion rates — that’s what our customers are saying — and it lets companies sell products that are expensive to build at a lower cost, without building a lot of heavy prototypes. With Stanley Black & Decker, in addition to consumer toolsets, they build industrial tooling solutions for railroads, maintenance and things like that: multi-million-dollar pieces of equipment, $500,000 prototypes. They can’t even ship those around the world to show customers, so they go back to blueprints and slides. This changes the entire experience when you can see it at 1:1 scale in context, and it helps them sell more.”

The second use case is design review, he continued. “If we can shorten the design cycle by not waiting for 3D prints, lowering the costs, again having decision makers be able to see a high fidelity prototype virtually across the world and save the cost and the time for design reviews, that’s a huge use case.”

The third use case is corporate training in pre-production, when the manufacturing line is being built and a staff is being assembled before the project launches. “On day one, they’re productive, because they’ve trained in full on the project virtually.”

For Paul Reynolds, founder of the augmented and virtual reality prototyping platform Torch3D, these sorts of applications will be invaluable and eventually ubiquitous, just as other ways of sharing information visually have, with advancement, become mundane.

“If you think about the way modern smartphones have changed how we work and how we communicate with each other, even in an internal, professional setting, snapping a photo of a whiteboard is kind of a no-brainer these days, as is showing a video you recorded on your phone as a way of helping people see what you’re trying to communicate to them,” Reynolds said. “If you think about that metaphor and you think about 3D, my ability to put a virtual 3D object in the meeting room, or my ability to quickly put together a spatially arranged concept in collaboration with my stakeholders — if you think about how much conversation time that saves, it’s pretty huge … it could be a physical spatial thing where maybe you and I are brainstorming a new retail display or a new warehouse layout.”

So while the ideas are fully formed and the first steps toward bringing AR into the workplace have been taken, industry professionals say that, despite rapid advancements in the technology, broad adoption of AR in workplaces outside of heavy industry (where proprietary headsets like Meta’s Meta 2, and the upcoming Meta 3, are already in use) is still a few years away, for a few reasons.

The first is a major skill gap that must be bridged before development can move at a pace rapid enough for enterprise applications that use AR. Many skilled developers are ready for the rapid delivery and turnaround that enterprise development requires, but they’re missing a key ingredient that will rocket AR development forward: the ability to think in three dimensions. That’s where expertise from the video game industry comes in handy, says Tim Huckaby, founder of Xenoholographic, a subsidiary of his company InterKnowlogy, a Microsoft partner. Xenoholographic is currently developing AI-powered applications for Microsoft’s HoloLens augmented reality platform and for mobile devices, building on the company’s earlier work creating applications for Microsoft’s Kinect motion sensor for the Xbox line.

“An enterprise developer, typically, would be very good at the software architecture for CRUD applications, but this 3D world is a world that most engineers and most developers have never lived in,” Huckaby said. “They don’t have any formal training in it, they didn’t go to school for it.”

Gaming leads the way
It doesn’t help, Huckaby said, that the primary tools for developing any kind of XR application, whether games, educational experiences or what have you, are video game engines, especially the C#-based Unity: environments in which those outside of game development have little to no experience. The only answer, in Huckaby’s mind, is collaboration between the companies responsible for developing these environments.

“Game engines have facilitated [AR development], but we’re now going to need that type of high-level runtime engine for augmented reality so that the common enterprise developer doesn’t have to have a math background or a 3D background,” Huckaby said. “They essentially can program with all of the tools forming the platform, like they have right now in Visual Studio and some of the other lifecycle tools we have, but there’s a huge, huge gap between enterprise programming right now in Visual Studio and lifecycle tools — and frankly on the other platforms too — and what Unity is, which is specifically designed for games and 3D creations. There’s a giant grand canyon between the two and they barely talk. And I don’t know how it’s going to be fixed unless Microsoft and Unity do a joint venture — or anything — together.”
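The gulf Huckaby describes is easy to see in code. Even a trivial AR behavior in Unity, such as keeping a hologram turned to face the user, drops the developer into vectors, quaternions and per-frame update loops, concepts with no analogue in CRUD work. Here is a minimal sketch in Unity-style C# (the Unity API calls are real; the scenario is our illustration, not drawn from any of the companies mentioned):

    using UnityEngine;

    // Keeps a hologram turned toward the user every frame:
    // routine in game development, foreign to CRUD work.
    public class FaceUser : MonoBehaviour
    {
        void Update()
        {
            // Direction from this object to the AR camera (the user's head).
            Vector3 toUser = Camera.main.transform.position - transform.position;
            toUser.y = 0f; // stay upright instead of tilting toward the head

            if (toUser.sqrMagnitude > 0.0001f)
            {
                // Quaternion math: aim the object's forward axis at the user,
                // easing there smoothly rather than snapping.
                Quaternion target = Quaternion.LookRotation(toUser);
                transform.rotation = Quaternion.Slerp(
                    transform.rotation, target, Time.deltaTime * 5f);
            }
        }
    }

Nothing here is hard for a game developer, but every line assumes comfort with 3D math and the engine’s frame loop that line-of-business tooling never required.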

The idea of game developers leading the charge in XR design isn’t unique to Huckaby’s experience; Reynolds echoed many of his points, adding that the skill gap came as a bit of a surprise to XR frontrunners like himself. Reynolds comes from a long career in AAA game development and advisory roles at companies like Electronic Arts and Atari, Inc., and got his start in XR development on the Magic Leap platform in 2014, where his role evolved from game developer to non-game application development lead.

“We all just kind of assumed everyone was going to use game engines and didn’t really pay attention to the fact that that’s a difficult learning curve and probably not necessary for most people and the ideas that they want to build,” Reynolds explained. “The deepest well of interactive 3D experience we have is in the gaming world. There are some things that make sense between making entertainment and game experiences and making 3D applications. I came from the same mentality — ‘we’ll just support popular game engines for the Magic Leap — we’ll support Unity and Unreal.’ And the mentality for a lot of people was, well, Unity is the easiest version of true game development that there’s been, but that doesn’t mean it’s actually easy to use. It’s just the easiest version of game development and, in particular, we watched a lot of talented user experience people and designers who just couldn’t iterate with the technology to help us figure out what is this next generation of interaction that makes sense for the most people.”

Huckaby thinks this leaves a big opening for another company to step in. He mused that it could be Epic Games, with its C++-based Unreal Engine, but Amazon has already taken steps to make XR development friendlier to those with different backgrounds, and even to those with little to no coding experience, through its browser-based Amazon Sumerian tool running on Amazon Web Services. Proprietary solutions like these come with their own set of hurdles, though, Reynolds said.

“If an engineer is fully committed and comfortable with the AWS ecosystem, then Sumerian could be a viable alternative to game engines like Unity or Unreal,” Reynolds said. “It does require interacting with AWS administration tools which makes it even less accessible to creatives and developers not familiar with the AWS Management Console.”

But, Reynolds explained, “We have found that a lot of these ‘easy to jump into’ 3D development environments and frameworks are fine for simple experiences, but they fall down pretty quickly if you’re building more sophisticated applications.”

Beyond knowing how to develop 3D experiences, there’s also a technological hurdle to clear. Everyone agreed that while consumer smartphones, and even specialized AR headsets like the HoloLens or Meta 2, are plenty powerful for novelty AR experiences, they won’t be able to deliver the most exciting, impressive or truly useful AR experiences on their own hardware for quite a while. That’s why much of the XR industry is banking on the recently finalized 5G standard for delivering its content from the cloud.

“Generations forward, especially with 5G of course, will be more distributed architecture or cloud OS concepts, where really all of the encryption, authentication, reconciliation of who’s talking to whom and where the content resides and who’s looking at what from what angle, all is operated in the cloud,” Meta’s Mikhail said. “Because if we all believe, which we do, that this technology is headed towards a pair of glasses, not a head-mounted display per se, it would have to be a very thin client. 5G solves one part of it, and then doing all of the heavy compute on the cloud is the other part of it, and then you’re dealing with really just an optical engine and a communication module on your head.”
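The split Mikhail describes, with the headset reduced to tracking and display while the cloud does the heavy compute, can be sketched at the client level. The following C# sketch shows the basic loop; the endpoint, message layout and helper functions are invented for illustration and do not represent any vendor’s actual protocol:

    using System.IO;
    using System.Net.Sockets;

    // Hypothetical thin-client loop for cloud-rendered AR:
    // upload the current head pose, download the frame the cloud
    // rendered for that pose. All names here are illustrative.
    class CloudRenderClient
    {
        static void Main()
        {
            using var tcp = new TcpClient("render.example.com", 9000); // invented endpoint
            using var stream = tcp.GetStream();
            var writer = new BinaryWriter(stream);
            var reader = new BinaryReader(stream);

            while (true)
            {
                // 1. Send the latest 6DoF head pose: position (x,y,z) + quaternion (x,y,z,w).
                float[] pose = ReadPoseFromTracker(); // stub: would come from headset tracking
                foreach (float f in pose) writer.Write(f);

                // 2. Receive one encoded frame rendered in the cloud for that pose.
                int frameLength = reader.ReadInt32();
                byte[] frame = reader.ReadBytes(frameLength);

                DisplayFrame(frame); // stub: decode and present on the optical engine
            }
        }

        static float[] ReadPoseFromTracker() => new float[7]; // placeholder
        static void DisplayFrame(byte[] frame) { /* placeholder */ }
    }

The hard parts Mikhail lists, such as encryption, authentication and reconciling many viewers of one shared scene, would all live on the server side of this loop.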

Meta is taking steps toward this goal with its pending Meta 3 model, which will enable real-time sharing of remotely delivered visualizations between two remote Meta 3 headsets, but for now the technology is still in its beginning stages.

Innovations in headsets
“It’s happening now in different flavors,” Mikhail said. “Last year, we announced a partnership with Zoom where you can take the feed from the Meta 2 headset and share what you’re looking at, in terms of the digital holograms, models, etc., with anyone on a Zoom call. So today you and I can be on a Zoom call, I can put on a Meta headset, and I can load up a model, and instead of looking at a 2D slide, you’ll see a 3D hologram, you’ll see my hands around it and I can explain things in 3D. But the consumption is in 2D. You’re still consuming the content on a 2D screen, whether it’s your laptop or a conference room screen. Moving forward, the next generation of this technology will be two users in the Meta headset seeing everything in 3D, and then further out, and this is the longer-term vision, is going to be telepresence, where I see you as a hologram in front of me, and the content, and vice versa. Today you’re getting 3D content shared and consumed on 2D screens.”

In the near future, he continued, “meetings are happening where we’re collaborating around a 3D model for board meetings, executive meetings, design reviews, sales and training all happening in this. Short-term, I’d say two or three quarters out.”

Huckaby says that Xenoholographic has much the same aim for remotely delivered content on the platforms it develops for.

“We’ve built, over the last nine months, a cloud-based AR system in Azure for the lightweight conversion of 3D and holographic content up in the cloud so it can be delivered to the device — and the device is either a smartphone or a HoloLens. That’s what we came out on the market with this year. And obviously if you’re wearing the glasses, you just get this incredible immersive experience where a teleconference with people all over the world totally makes sense.

“The cool things about this tech and I think what you’re going to see in innovation over the next couple of years, and frankly where we have both of our patents pending, is being able to deliver what is historically really heavyweight content.”
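From the client’s side, a cloud conversion step like the one Huckaby describes boils down to uploading a heavyweight model and getting back a lightweight, device-ready one. Here is a hypothetical sketch in C# using .NET’s HttpClient; the endpoint, query parameter and file formats are invented, since Xenoholographic’s actual pipeline isn’t public:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Hypothetical client for a cloud 3D-conversion service:
    // upload a heavyweight CAD model, receive a lightweight,
    // device-ready version. Endpoint and formats are invented.
    class ConversionClient
    {
        static async Task Main()
        {
            using var http = new HttpClient();

            // Upload the source model to the (hypothetical) conversion endpoint.
            byte[] cadModel = await File.ReadAllBytesAsync("turbine.step");
            var response = await http.PostAsync(
                "https://convert.example.com/api/optimize?target=hololens",
                new ByteArrayContent(cadModel));
            response.EnsureSuccessStatusCode();

            // Save the decimated, mobile-friendly result for runtime loading.
            byte[] lightweight = await response.Content.ReadAsByteArrayAsync();
            await File.WriteAllBytesAsync("turbine_lightweight.glb", lightweight);
            Console.WriteLine($"Converted: {cadModel.Length} -> {lightweight.Length} bytes");
        }
    }

Whatever the real service looks like, the pattern is the same: the heavy geometry processing happens server-side, and the device only ever sees the lightweight result.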

But even the major leap in bandwidth awarded by a 5G connection doesn’t fully solve the problem, Huckaby explained.

“In HoloLens [and mobile] development, there’s a huge issue where you basically have to compile the 3D objects and holographic content into the application itself,” Huckaby said. “That’s why in the past couple of years, the only AR apps that you see are pretty narrow in focus and pretty static in content.

“Even something like Pokémon Go — every time they release a new thing to find, you essentially have to download an entire new app from the App Store, which is absolutely unacceptable. It’s not enterprise software. I truly believe that this genesis we’re going through in mixed reality and augmented reality for the business part of the world is building enterprise-level software as opposed to these one-off, garage-developer type things. The only way this is going to succeed in the broad business categories is to build the software like we build enterprise software. And that means with performance and scale and structure and easy obtainability and all of that stuff, which is not what’s happening right now if you download an AR app on your iPhone.”
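The enterprise-grade alternative Huckaby is pointing at is fetching content at runtime rather than compiling it into the binary. In Unity, the stock mechanism for this is the AssetBundle: the app ships as a thin shell and pulls new 3D content from a server as it appears. A minimal sketch, with a placeholder URL and asset name, and with bundle building, hosting and versioning left out:

    using System.Collections;
    using UnityEngine;
    using UnityEngine.Networking;

    // Loads a 3D model from a server at runtime instead of compiling it
    // into the app, so new content ships without an app-store update.
    // The URL and asset name below are placeholders.
    public class RemoteContentLoader : MonoBehaviour
    {
        IEnumerator Start()
        {
            // Download an AssetBundle built ahead of time and hosted in the cloud.
            using (UnityWebRequest request =
                UnityWebRequestAssetBundle.GetAssetBundle("https://example.com/bundles/turbine"))
            {
                yield return request.SendWebRequest();

                if (request.result != UnityWebRequest.Result.Success)
                {
                    Debug.LogError(request.error);
                    yield break;
                }

                AssetBundle bundle = DownloadHandlerAssetBundle.GetContent(request);

                // Instantiate a prefab from the bundle into the scene.
                GameObject prefab = bundle.LoadAsset<GameObject>("TurbineModel");
                Instantiate(prefab, Vector3.zero, Quaternion.identity);
                bundle.Unload(false); // keep loaded assets, free the bundle itself
            }
        }
    }

With content delivered this way, releasing a new model is a server-side upload rather than an app-store resubmission, which is exactly the enterprise property Huckaby is asking for.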

So while the technology is rapidly approaching the point where it can deliver a useful experience in a work environment, there are still plenty of kinks to work out, and plenty of changes to be made to the tools and practices, before AR, or any XR technology, is developer-friendly enough to become a common part of a business’s operations.

“The interesting part about these new 3D technologies is they give us this capability of communicating not only visually, but communicating at physical scale,” Torch3D’s Reynolds said. “I can see it becoming a fundamental way that we communicate concepts and ideas if we do it visually. There’s a lot of interesting anecdotal data backing up that if I can show you what I’m talking about, we can both be aligned much quicker and actually get to the meat of the conversation. When you talk about internal productivity, it’s all about taking advantage of the visuals and the scale and the spatial relationships that these new technologies provide us. I don’t think we can overspeculate where all of these technologies are going to find their real utility and value. That timeline’s a ways off; something as serious as a shareholder type of meeting is hard enough to do with video conferencing. We’ll know the technology has hit the mainstream when it’s so ubiquitous and reliable that you would actually use it for that particular type of use case.”