AR glasses with multimodal AI net funding from Pokémon GO creator

In a week when gadget lovers around the world are enchanted by the Vision Pro, a bold young startup is trying to carve out space for its own augmented reality device, one with a form factor starkly different from Apple's.

Today, Singapore-based Brilliant Labs announced its new product, Frame, a pair of lightweight AR glasses powered by a multimodal AI assistant called Noa. The glasses have captured the attention and investment of John Hanke, CEO of Niantic, the augmented reality platform behind games like Pokémon GO. Brilliant Labs declined to disclose the amount of funding it received from Hanke.

In a video demo seen by TechCrunch, one of Brilliant Labs' founders asked Noa, by voice, to introduce itself. After about three seconds, the assistant generated an answer and projected it as text onto the lenses.

In addition to voice commands, Noa is capable of visual processing, image generation and translation, thanks to the handful of AI models it integrates: conversational search engine Perplexity AI; Stability AI's text-to-image model Stable Diffusion; OpenAI's latest text-generation model, GPT-4; and OpenAI's speech recognition system, Whisper. Frame's lenses have a resolution of 640 x 400 for displaying videos and photos.

With these features, a user shopping at a mall can ask Noa to check the online prices of a pair of shoes they are looking at through Frame, for instance.

“The future of human/AI interaction will come to life in innovative wearables and new devices, and I’m so excited to be bringing Perplexity’s real-time answer engine to Brilliant Labs’ Frame,” Aravind Srinivas, CEO and founder of Perplexity, said in a statement.

The question is whether Frame will be responsive enough for any of its AI-generated responses to be helpful. Today, Brilliant Labs' Bluetooth-enabled glasses rely on a paired smartphone to access the various AI models. Eventually, though, the founders want to do away with the phone host and embed lightweight machine-learning models directly into the glasses.
