Memories AI is building a specialized visual memory layer that lets wearables and robotic systems store, index, and recall visual experiences, effectively giving machines human-like memory.
The Foundation of Visual Intelligence
Shen explained that building this layer rests on two critical pillars: constructing the infrastructure to embed and index video into a searchable data format, and gathering the specific training data required to make that system work.
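Memories AI has not published implementation details, but the general pattern behind "embed and index video into a searchable format" can be sketched. The snippet below is a hypothetical illustration only: the stand-in embedder (a fixed random projection over pooled pixel statistics) substitutes for a learned model like the LVMM, and the in-memory index substitutes for production vector-search infrastructure.

```python
import numpy as np

def embed_clip(frames: list[np.ndarray], dim: int = 64) -> np.ndarray:
    """Collapse a clip (list of HxWx3 frames) into one L2-normalized vector.

    Stand-in for a learned visual memory model: pool per-frame color
    statistics, then apply a fixed random projection to `dim` dimensions.
    """
    pooled = np.stack([f.mean(axis=(0, 1)) for f in frames]).mean(axis=0)
    rng = np.random.default_rng(0)  # fixed seed so embeddings are reproducible
    proj = rng.standard_normal((pooled.size, dim))
    vec = pooled @ proj
    return vec / np.linalg.norm(vec)

class VisualMemoryIndex:
    """Minimal in-memory vector index: store clip embeddings, recall by cosine similarity."""

    def __init__(self) -> None:
        self.vectors: list[np.ndarray] = []
        self.labels: list[str] = []

    def add(self, label: str, frames: list[np.ndarray]) -> None:
        self.vectors.append(embed_clip(frames))
        self.labels.append(label)

    def search(self, frames: list[np.ndarray], k: int = 1) -> list[str]:
        query = embed_clip(frames)
        # All vectors are unit-length, so the dot product is cosine similarity.
        sims = np.stack(self.vectors) @ query
        return [self.labels[i] for i in np.argsort(-sims)[:k]]
```

A production system would replace the toy embedder with a trained video model and the linear scan with an approximate-nearest-neighbor index, but the store-embed-recall loop is the same.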
In July 2025, the company debuted its large visual memory model (LVMM). Shen compared the technology to a more compact, specialized version of Gemini Embedding 2, the multimodal retrieval model recently launched by Google.
Proprietary Hardware for Data Training
To fuel the model's training, the company developed LUCI, a proprietary wearable device used by its internal "data collectors" to capture relevant video footage. Shen clarified that Memories AI has no intention of becoming a hardware manufacturer or selling these devices. The team built its own hardware because existing off-the-shelf recorders failed to meet its needs, often prioritizing high-definition output over the power efficiency required for long-duration data collection.
Scaling Through Strategic Partnerships
The company has already released the second generation of its LVMM and secured a partnership with Qualcomm. This collaboration will enable the model to run directly on Qualcomm processors starting later this year.
While Shen declined to name specific partners, he confirmed that Memories AI is already collaborating with several major wearable technology companies. Despite this early traction, the leadership team remains focused on the long-term potential of the robotics and wearables sectors.
“In terms of commercialization, we are more focused on the model and the infrastructure, because ultimately we think the wearables and robotics market will come, but it’s probably just not now,” Shen noted.
