When talking about the Internet of Things, people usually think of the software in their cars, or smart appliances that communicate with back-end systems to keep the house running smoothly, or robots on factory floors doing manufacturing. When talk turns to wearable devices, though, the conversation seems to begin and end with watches, wristbands, glasses and now virtual reality headsets.

And today, it seems, the Internet of Things is leapfrogging past wearables, as industrial uses for systems-on-chip present more opportunity for developers. The ultimate goal of the Internet of Things is not to create a world where machines act alone, as on a factory floor or embedded in an HVAC system. Quite the contrary. The goal is to create a world in which people can interact with their technologically savvy machines and appliances to improve the human condition.

Telematics—the use of software and hardware technology in cars—can help us avoid accidents, can place our own entertainment at our fingertips, can improve our gas mileage, and can now even hit the brakes for us when we get too close to the car ahead. It can alert us when it’s time for service, sense when it’s raining so it can automatically turn on the wipers, and even track how we drive to help us reduce our insurance costs if we drive cautiously enough.

There are so many other applications in so many other verticals: We derive healthcare benefits when sensors can communicate with medical professionals, hospitals and even pharmacies to keep us healthy.

Unfortunately, healthcare seems to be the only place where wearables are having an impact. There are now all manner of health-related wearable devices—from Fitbit and Pebble to offerings from Apple, Garmin and Rogue Fitness—that count our steps, monitor our heart rate and breathing, and in some cases even check the blood sugar levels of diabetics at regular intervals.

We do not mean to belittle the value of these devices. They are perhaps the most important kind of human-device interaction that we can have: little machines that help us stay healthy, fit and alive. But to truly gain wide traction beyond the fitness buffs among us, wearable devices are going to have to offer more. Google’s first attempt at putting a computer literally in front of your face with its Glass project stalled; the company is only now beginning to revisit the feasibility and value of information delivered by eyewear.

That’s not to say there is no room for wearables. The Apple Watch, which had tremendous uptake upon its release, offers a wide array of applications in its App Store and takes into consideration that these devices need to be stylish if they’re to be worn. But for wearable technology to really take off, we want to see a whole lot more functionality, or more vertical use cases, that scream “MUST HAVE.” Bring the world to us in our wearable devices. Augment our reality beyond buzzes and vibrations. Reimagine the form factor. Then, perhaps, we’ll be motivated to put one on.

It’s been five years since the Kinect hit the market, and almost two since it stopped being relevant. A three-year lifespan is pretty bad for any video game product, but it’s especially bad for a product that was touted as far more than just a video game peripheral.

The Kinect’s ability to see and hear users was a major technological feat. And launching it initially tied to a game console was a no-brainer, demonstrating its capabilities with motion-capture games like Dance Central. But Microsoft never intended Kinect to remain in the living room. In 2011, the company released Kinect for Windows, hoping to put the technology into the hands of developers, who would then spread it elsewhere.

On that front, the only notable practical use we could find was at NASA’s Jet Propulsion Laboratory. Beyond that, Kinect’s non-Xbox uses were confined largely to hobbyists.

What ultimately sidelined the Kinect was its lack of usefulness. It turns out there’s only so much that video games can make of motion controls before they become tedious. That, combined with a series of faux pas during the Kinect for Xbox One’s rollout, led Microsoft to decide it was better to ship the console without the peripheral.

But now a new peripheral is here: the augmented reality headset known as HoloLens. Again tied to the Xbox, the HoloLens was demonstrated at this year’s E3 as a new way to play Minecraft. The demonstration received mixed reviews, especially after questions were raised about how accurately it depicted what HoloLens could actually do. And regardless, this time around the skepticism seems to center on just how fun it really is.

Aside from the usual questions about how well it entertains users (how can one play with virtual Minecraft blocks if one can’t feel them in one’s hand, for instance), Microsoft hasn’t laid out a solid non-Xbox use case for the technology. NASA is using it to help train astronauts and virtually explore Mars, but NASA is a niche case.

Microsoft probably does not want the HoloLens to be another pure video game peripheral, and it certainly does not want it to be the next Kinect. Wider adoption will rely on a successful launch with the Xbox, though, and that means creating games that can use it in an engaging way. That’s a highly dubious proposition coming from the Xbox, which has struggled to get anyone interested in it, period. An impressive but impractical technology is not going to change anyone’s mind.