Multiverse Computing pushes its compressed AI models into the mainstream

Why it matters: This shift toward on-device AI could strengthen data privacy and reduce dependence on cloud infrastructure.
- Lux Capital advises companies to secure compute capacity commitments in writing, citing financial instability and rising default risk among private companies in the AI supply chain.
- Multiverse Computing offers an alternative by compressing AI models from major labs like OpenAI and Meta, enabling them to run locally on user devices.
- CompactifAI app showcases Multiverse's technology with an offline, local AI chat model named Gilda, prioritizing user privacy by keeping data on-device.
- The Ash Nazg system automatically routes processing to cloud-based models when a device's specifications are insufficient for local inference, though this fallback compromises the privacy advantage.
- Multiverse's self-serve API portal targets businesses and developers, providing direct access to its compressed models without requiring third-party marketplaces like AWS.
Amid rising financial instability in the AI supply chain, Multiverse Computing is pushing its quantum-inspired compression technology, CompactifAI, into the mainstream. The technology lets major AI models run locally on user devices, reducing reliance on external compute infrastructure and addressing privacy concerns, though device limitations currently restrict mass adoption.
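The local-versus-cloud fallback described above can be sketched as a simple capability check. This is a hypothetical illustration only; the class names, fields, and thresholds below are invented and are not Multiverse's actual Ash Nazg implementation:

```python
# Hypothetical sketch of a local-first routing policy: run the compressed
# model on-device when the hardware can handle it, otherwise fall back to a
# cloud model (giving up the on-device privacy guarantee on that path).
from dataclasses import dataclass

@dataclass
class DeviceSpecs:
    ram_gb: float          # available device memory
    has_accelerator: bool  # GPU/NPU present

@dataclass
class ModelRequirements:
    min_ram_gb: float      # minimum memory the compressed model needs
    needs_accelerator: bool

def route(specs: DeviceSpecs, reqs: ModelRequirements) -> str:
    """Return 'local' if the device meets the model's requirements, else 'cloud'."""
    if specs.ram_gb >= reqs.min_ram_gb and (
        specs.has_accelerator or not reqs.needs_accelerator
    ):
        return "local"  # data stays on-device
    return "cloud"      # privacy advantage is lost on this path

# A machine with enough memory runs locally; an underpowered one falls back.
print(route(DeviceSpecs(16.0, False), ModelRequirements(8.0, False)))  # local
print(route(DeviceSpecs(4.0, False), ModelRequirements(8.0, False)))   # cloud
```

The point of the sketch is that the routing decision is a pure function of device capability versus model requirements, which is also why the privacy guarantee only holds on the "local" branch.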