Microsoft really wants to blur the line between the digital and real worlds. While HoloLens can stick humans in a bizarro universe filled with holograms and Minecraft blocks, a new program could eventually help robots and self-driving cars better “see” their surroundings.
It’s called SemanticPaint, and it’s a new project by the Microsoft Research team. Using Kinect, it allows users to digitally scan their surroundings, while simultaneously labeling objects by “painting” them different colors. Users do so simply by touching the item or surface and using vocal identification, like saying “banana” out loud. Then, those objects are individually separated into categories: books, chairs, cups, whatever. The more objects you interact with, the better the system gets at automatically categorizing them, even if you move into another room.
You eventually end up with a program that can automatically identify stuff in its environment. And because SemanticPaint learns online — updating its model the moment you label something — that learning happens almost instantly, versus offline batch systems, which can take hours or days to retrain.
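SemanticPaint's actual machinery is far more sophisticated than anything that fits here, but the core "online" idea — update the model with each labeled touch instead of retraining from scratch — can be sketched with a toy incremental classifier. Everything below (the class, the labels, the made-up feature vectors) is hypothetical illustration, not Microsoft's code:

```python
class OnlineLabeler:
    """Toy nearest-centroid classifier updated one labeled sample at a time."""

    def __init__(self):
        self.centroids = {}  # label -> running mean feature vector
        self.counts = {}     # label -> number of samples seen so far

    def observe(self, label, features):
        # Incremental mean update: no stored dataset, no batch retraining.
        n = self.counts.get(label, 0)
        if n == 0:
            self.centroids[label] = list(features)
        else:
            c = self.centroids[label]
            for i, x in enumerate(features):
                c[i] += (x - c[i]) / (n + 1)
        self.counts[label] = n + 1

    def classify(self, features):
        # Return the label whose centroid is nearest (squared distance).
        best, best_d = None, float("inf")
        for label, c in self.centroids.items():
            d = sum((a - b) ** 2 for a, b in zip(c, features))
            if d < best_d:
                best, best_d = label, d
        return best


labeler = OnlineLabeler()
labeler.observe("banana", [0.9, 0.8, 0.1])  # user touches a banana, says "banana"
labeler.observe("chair", [0.2, 0.3, 0.7])   # touches a chair, says "chair"
print(labeler.classify([0.85, 0.75, 0.15]))  # a similar yellow patch -> "banana"
```

The point of the sketch is the `observe` method: each labeled interaction improves the model immediately, which is why an online system can keep up with you as you walk from room to room.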
Those kinds of scanning-and-identifying powers are absolutely crucial for tech like self-driving cars, where researchers are currently exploring lasers and lidar to gauge what's on the road. Robots, too, can benefit from auto-updating AR programs like SemanticPaint, since such systems make them smarter and more aware of what's around them.
Between stuff like this and HoloLens, I'm starting to think Microsoft wants all our lives to feel like an episode of ReBoot. I'm cool with it.