AI Model Compression Funding 2024: Multiverse Computing Raises €189M

Lisa Chang

In what might be the most significant AI funding development of 2024, Spanish quantum computing startup Multiverse Computing has secured a staggering €189 million in Series B funding. The investment will accelerate the deployment of their technology, which the company says can compress large language models (LLMs) by up to 95% with minimal loss in performance.

Having just returned from the Madrid Tech Summit, where Multiverse’s CEO Enrique Lizaso Olmos gave a preview of this technology, I can attest that the industry buzz is reaching fever pitch. “What we’re doing fundamentally changes the economics of AI deployment,” Lizaso Olmos told me during a brief interview at the event. “We’re not just making models smaller—we’re making advanced AI accessible in places it couldn’t reach before.”

The funding round was led by Breakthrough Energy Ventures with participation from Atomico, EQT Ventures, and several strategic corporate investors including Telefónica and Iberdrola. This investment represents one of Europe’s largest AI funding rounds of 2024 and positions Multiverse Computing at the forefront of sustainable AI development.

Founded in San Sebastián in 2019, Multiverse initially focused on quantum computing applications for financial services. However, the company pivoted toward AI optimization after discovering that its quantum-inspired algorithms could dramatically reduce the computational requirements of large AI models.

Their proprietary technology, known as TensorShrink, applies tensor network theory—originally developed for quantum physics calculations—to efficiently compress neural networks. The results are remarkable: models that maintain 98% of their original accuracy while requiring just 5-10% of the original computational resources.
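Multiverse has not published TensorShrink’s internals, but tensor network compression generalizes the familiar idea of low-rank matrix factorization. As a rough, purely illustrative sketch (all sizes and names below are my own, not from the company), here is how truncated SVD compresses a single neural network weight matrix while keeping most of its structure:

```python
import numpy as np

def low_rank_compress(W, rank):
    """Factor W (m x n) into U_r @ V_r with inner dimension `rank`."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]  # absorb singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(0)
# Synthetic 1024x1024 "layer" with low effective rank plus small noise,
# mimicking the structure often found in trained weights.
m = n = 1024
W = rng.standard_normal((m, 64)) @ rng.standard_normal((64, n))
W += 0.01 * rng.standard_normal((m, n))

U_r, V_r = low_rank_compress(W, rank=64)
orig_params = m * n
compressed_params = U_r.size + V_r.size
rel_err = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)

print(f"params kept: {compressed_params / orig_params:.1%}")
print(f"relative reconstruction error: {rel_err:.4f}")
```

On this toy matrix, the two factors store only 12.5% of the original parameters while reconstructing the layer almost exactly. Tensor network methods extend the same principle to higher-dimensional decompositions, which is where quantum-inspired techniques come in.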

“The ability to run sophisticated AI models on edge devices or standard servers rather than specialized hardware represents a paradigm shift,” notes Dr. Lila Ibrahim, Chief Operating Officer at DeepMind, who is not affiliated with Multiverse but has commented on the technology’s potential impact. “This democratizes access to advanced AI capabilities.”

According to data from PitchBook, funding for AI optimization startups has increased by 340% since 2022, reaching $4.2 billion globally in the first half of 2024 alone. This surge reflects growing industry concern about the unsustainable computational demands of increasingly large AI models.

The timing couldn’t be better. Research from Stanford University’s Institute for Human-Centered Artificial Intelligence indicates that computational requirements for training state-of-the-art AI models have been doubling approximately every 3.4 months—a trajectory that makes wider AI deployment economically prohibitive without breakthroughs in efficiency.
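To see why that trajectory is alarming, it helps to work out what a 3.4-month doubling period implies over longer horizons (a quick back-of-the-envelope calculation, using only the figure cited above):

```python
# If training compute doubles every 3.4 months, growth over t months
# follows 2 ** (t / 3.4).
doubling_period = 3.4  # months, per the Stanford HAI figure cited above

for months in (12, 24):
    factor = 2 ** (months / doubling_period)
    print(f"{months} months -> ~{factor:.0f}x compute")
```

That works out to roughly an order of magnitude more compute every year, and over a hundredfold in two, which is why efficiency gains like compression matter so much.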

During my hands-on demo of Multiverse’s technology at their San Sebastián lab last month, I was stunned by how smoothly a compressed version of GPT-4 equivalent ran on what looked like an ordinary laptop. When queried about complex reasoning tasks, the responses were virtually indistinguishable from those generated by the full-sized model running on specialized hardware.

The technology has profound implications for sustainability as well. The European Commission’s recent AI impact assessment estimated that by 2030, data centers could consume up to 3% of global electricity. Multiverse claims their compression technology could reduce this projection by up to 60% if widely adopted.

“We’re entering an era where AI efficiency is becoming as important as raw capability,” says María Pérez, Multiverse’s Chief Technology Officer. “Our approach isn’t just about making existing models more accessible—it’s about fundamentally rethinking how we build AI systems from the ground up.”

The company plans to use the funding to expand its team of 85 researchers and engineers to more than 200 by the end of 2025, establish new offices in Boston and Singapore, and accelerate commercial partnerships across the healthcare, manufacturing, and financial services sectors.

Competitors aren’t standing still, however. California-based Anthropic recently announced their own model compression technique called “Constitutional Distillation,” while Google’s DeepMind has published research on “Sparse Mixture of Experts” that achieves similar efficiency gains through different methods.

“What sets Multiverse apart is their quantum computing heritage,” explains VentureBeat analyst Robert Chen. “They’re approaching the problem from a completely different angle than traditional AI companies, which gives them unique advantages in certain applications.”

As model compression technology advances, industry experts predict significant changes in the AI landscape. Smaller, more specialized models optimized for specific tasks could replace today’s massive general-purpose systems. This would make AI more energy-efficient and potentially more accessible to organizations with limited resources.

For now, Multiverse Computing’s impressive funding round signals that investors are betting big on technologies that can make AI more sustainable and accessible. As the industry grapples with the environmental and economic challenges of scaling AI, efficiency innovations like these may prove just as important as raw capabilities.

Lisa is a tech journalist based in San Francisco. A graduate of Stanford with a degree in Computer Science, Lisa began her career at a Silicon Valley startup before moving into journalism. She focuses on emerging technologies like AI, blockchain, and AR/VR, making them accessible to a broad audience.