**Memories.ai is building the visual memory layer for wearables and robots**
Shawn Shen believes that artificial intelligence will need to remember what it sees in order to succeed in the physical world. Shen's company, Memories.ai, uses Nvidia AI tools to build the infrastructure that lets wearables and robots store and retrieve visual memories.
Memories.ai announced its collaboration with semiconductor giant Nvidia at the GTC conference on Monday. Through this partnership, Memories.ai is using Nvidia's Cosmos Reason 2, a reasoning vision language model, and Nvidia Metropolis, a video search and summarization application, to further develop its visual memory technology.
Shen (pictured above left) told TechCrunch that he and his co-founder and CTO Ben Zhou (pictured above right) got the idea for the company while building the AI system behind Meta's Ray-Ban glasses. Building the AI glasses got them thinking about how people would actually use the technology in real life if it couldn't remember the video data it was recording.
They looked around to see if anyone was already building this type of visual memory solution for AI. When they couldn't find one, they decided to leave Meta and build it themselves.
“AI is already doing well in the digital world, so what about the physical world?” Shen said. “AI wearables, robots need memories too. … Ultimately, you need AI to have visual memories. We believe in this future.”
The ability of AI systems to remember in general is relatively new. OpenAI updated ChatGPT to start remembering past chats in 2024 and fine-tuned this feature in 2025. Both Elon Musk’s xAI and Google Gemini have also launched their own memory tools in the past two years.
But these developments have largely focused on textual memory, Shen said. Text-based memory is more structured and easier to index, but it is not useful for physical AI applications, which largely interact with the world through sight and visuals.
Memories.ai launched in 2024 and has raised $16 million to date, through an $8 million seed round in July 2025 and an $8 million extension. The round was led by Susa Ventures and included Seedcamp, Fusion Fund, and Crane Venture Partners, among others.
Shen said successfully building the visual memory layer requires two things: building the infrastructure needed to embed and index videos into a data format that can be stored and retrieved, and capturing the data needed to train the model to do so.
The company launched the Large Visual Memory Model (LVMM) in July 2025. Shen said it is comparable to a smaller version of Gemini Embedding 2, a multimodal indexing and retrieval model, released earlier this month.
To collect the data, the company created LUCI, a device worn by the company's “data collectors” that records the video used to train the model. Shen said they do not plan to become a hardware company, and they do not sell the devices; they built their own because they were not satisfied with off-the-shelf video recorders, which focused on battery-hungry, high-definition video formats.
The company has released the second generation of the LUCI device and has signed a partnership with Qualcomm to run on Qualcomm processors starting later this year.
Shen said Memories.ai is already working with some large wearable companies as well, though he declined to name them. While there is some demand now, Shen sees the bigger opportunity in wearables and robotics as still to come.
“In terms of commercialization, we are focusing more on the model and infrastructure, because we believe eventually the market for wearables and robotics will come, but maybe not now,” Shen said.
**Posted on**: March 16, 2026
